WorldWideScience

Sample records for continuous depth-of-interaction encoding

  1. Pulse shape discrimination and classification methods for continuous depth of interaction encoding PET detectors

    International Nuclear Information System (INIS)

    Roncali, Emilie; Phipps, Jennifer E; Marcu, Laura; Cherry, Simon R

    2012-01-01

    In previous work we demonstrated the potential of positron emission tomography (PET) detectors with depth-of-interaction (DOI) encoding capability based on phosphor-coated crystals. A DOI resolution of 8 mm full-width at half-maximum was obtained for 20 mm long scintillator crystals using a delayed charge integration linear regression method (DCI-LR). Phosphor-coated crystals modify the pulse shape to allow continuous DOI determination, but the relationship between pulse shape and DOI is complex. We are therefore interested in developing a sensitive and robust method to estimate the DOI. Here, linear discriminant analysis (LDA) was implemented to classify events based on information extracted from the pulse shape. Pulses were acquired with 2×2×20 mm³ phosphor-coated crystals at five irradiation depths and characterized by their DCI values or Laguerre coefficients. These coefficients were obtained by expanding the pulses on a Laguerre basis set and constituted a unique signature for each pulse. The DOI of individual events was predicted using LDA with Laguerre coefficients (Laguerre-LDA) or DCI values (DCI-LDA) as discriminant features. Predicted DOIs were compared to true irradiation depths. Laguerre-LDA showed higher sensitivity and accuracy than DCI-LDA and DCI-LR and was also more robust in predicting the DOI of pulses with higher statistical noise due to low light levels (interaction depths further from the photodetector face). This indicates that Laguerre-LDA may be better suited to DOI estimation in smaller crystals, where lower collected light levels are expected. This novel approach is promising for calculating DOI using pulse shape discrimination in single-ended readout depth-encoding PET detectors. (paper)
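
    As a rough illustration of the classification step described above, the sketch below fits a linear discriminant to synthetic two-feature pulse descriptors drawn from five depth-dependent clusters. The feature model, cluster spacing, and noise level are invented for the example; they stand in for, but are not, the paper's Laguerre coefficients or DCI values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in features: two pulse-shape descriptors per event,
# drawn from depth-dependent Gaussian clusters. Real Laguerre coefficients
# or DCI values would replace these.
depths = np.array([2.0, 6.0, 10.0, 14.0, 18.0])   # mm, irradiation depths
n_per = 200
X = np.vstack([rng.normal([d, 0.5 * d], 1.0, size=(n_per, 2)) for d in depths])
y = np.repeat(np.arange(len(depths)), n_per)

# Fit LDA: per-class means, pooled covariance, equal priors
means = np.array([X[y == k].mean(axis=0) for k in range(len(depths))])
pooled = sum(np.cov(X[y == k].T, bias=True) for k in range(len(depths))) / len(depths)
prec = np.linalg.inv(pooled)

def lda_predict(x):
    # Linear discriminant g_k(x) = x' P m_k - 0.5 m_k' P m_k (equal priors)
    scores = x @ prec @ means.T - 0.5 * np.einsum('ij,jk,ik->i', means, prec, means)
    return np.argmax(scores, axis=-1)

pred = lda_predict(X)
accuracy = (pred == y).mean()
```

    The shared (pooled) covariance is what makes the decision boundaries linear; with strongly depth-dependent noise, a quadratic discriminant would be the natural variant.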

  2. Depth of interaction decoding of a continuous crystal detector module

    International Nuclear Information System (INIS)

    Ling, T; Lewellen, T K; Miyaoka, R S

    2007-01-01

    We present a clustering method to extract the depth of interaction (DOI) information from an 8 mm thick crystal version of our continuous miniature crystal element (cMiCE) small animal PET detector. This clustering method, based on the maximum-likelihood (ML) method, can effectively build look-up tables (LUTs) for different DOI regions. Combined with our statistics-based positioning (SBP) method, which performs an ML search over two-dimensional mean-variance LUTs of the light response of each photomultiplier channel with respect to gamma-ray interaction position, the position of interaction and the DOI can be estimated simultaneously. Data simulated using DETECT2000 were used to help validate our approach. An experiment using our cMiCE detector was designed to evaluate the performance. Two- and four-region DOI clustering were applied to the simulated data; two DOI regions were used for the experimental data. The misclassification rate for simulated data is about 3.5% for two DOI regions and 10.2% for four DOI regions. For the experimental data, the rate is estimated to be ∼25%. By using multi-DOI LUTs, we also observed improvement in detector spatial resolution, especially for the corner region of the crystal. These results show that our ML clustering method is a consistent and reliable way to characterize DOI in a continuous crystal detector without requiring any modifications to the crystal or detector front-end electronics. The ability to characterize the depth-dependent light response function from measured data is a major step forward in developing practical detectors with DOI positioning capability.
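
    The statistics-based positioning idea (pick the LUT entry that maximizes a Gaussian likelihood built from the per-channel mean and variance of the light response) can be sketched in one dimension. The position grid, channel locations, and inverse-square-style light model below are invented stand-ins, not the actual cMiCE LUTs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical LUT: mean and variance of each of 4 photomultiplier channel
# responses on a 1-D grid of interaction positions (real cMiCE LUTs are 2-D).
positions = np.linspace(0.0, 10.0, 51)            # mm grid
pmt_x = np.array([1.0, 4.0, 6.0, 9.0])            # channel locations, mm
# Light response falls off with distance from the channel (toy model)
mean_lut = 100.0 / (1.0 + (positions[:, None] - pmt_x[None, :])**2)
var_lut = mean_lut.copy()                          # Poisson-like: var ≈ mean

def sbp_estimate(signals):
    """Pick the grid position maximizing the Gaussian log-likelihood."""
    ll = -0.5 * np.sum((signals - mean_lut)**2 / var_lut + np.log(var_lut), axis=1)
    return positions[np.argmax(ll)]

# Simulate one event at 3.0 mm and estimate its position from noisy signals
true_pos = 3.0
mu = 100.0 / (1.0 + (true_pos - pmt_x)**2)
event = rng.normal(mu, np.sqrt(mu))
est = sbp_estimate(event)
```

    Adding DOI in this framework means keeping one such mean-variance LUT per depth region and letting the search also choose the region, which is what the ML clustering builds.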

  3. Silicon photomultipliers for positron emission tomography detectors with depth of interaction encoding capability

    International Nuclear Information System (INIS)

    Taghibakhsh, Farhad; Reznik, Alla; Rowlands, John A.

    2011-01-01

    Silicon photomultipliers (SiPMs) are receiving increasing attention in the field of positron emission tomography (PET) detectors. Compared to photomultiplier tubes, they offer novel detector configurations for the extraction of depth of interaction (DOI) information, and enable emerging medical imaging modalities such as simultaneous PET-magnetic resonance imaging (MRI). In this article, we used 2×2×20 mm³ LYSO scintillator crystals coupled to SiPMs on both ends (dual-ended readout configuration) to evaluate the detector performance for DOI-PET applications. We investigated the effect of scintillator crystal surface finishing on DOI sensitivity and resolution, as well as on energy and timing resolution. Measurements indicate a DOI sensitivity and resolution of 7.1% mm⁻¹ and 2.1±0.6 mm for saw-cut, and 1.3% mm⁻¹ and 9.0±1.5 mm for polished scintillator crystals, respectively. Energy resolution varies from 19% when the DOI is at the center to 15% when the DOI is at either end of the saw-cut crystal, while it remains constant at ∼14% for polished scintillators. Based on our results we conclude that 2×2×20 mm³ saw-cut (without any special side-wall polishing) LYSO crystals coupled to 2×2 mm² silicon photomultipliers are optimal for isotropic 2 mm resolution DOI-PET applications.
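
    A minimal model of why surface finish changes DOI sensitivity in a dual-ended readout: if the light reaching each end falls off exponentially with travel distance, a shorter effective attenuation length (rough, saw-cut surface) makes the end-to-end asymmetry change faster with depth. The attenuation lengths below are invented for illustration, not measured values from this record.

```python
import math

# Toy dual-ended readout model (assumed, not from the paper): scintillation
# light reaching each end falls off exponentially with distance travelled,
# with an effective attenuation length set by the surface finish.
L = 20.0          # crystal length, mm

def asymmetry(depth_mm, l_att):
    a = math.exp(-depth_mm / l_att)           # light to the near end
    b = math.exp(-(L - depth_mm) / l_att)     # light to the far end
    return (a - b) / (a + b)

# A rough (saw-cut) surface scrambles light and shortens the effective
# attenuation length, so the asymmetry changes faster with depth:
sens_rough = abs(asymmetry(11.0, 8.0) - asymmetry(9.0, 8.0)) / 2.0      # per mm
sens_polished = abs(asymmetry(11.0, 40.0) - asymmetry(9.0, 40.0)) / 2.0
```

    The asymmetry reduces to tanh((L - 2d)/(2·l_att)), which makes the trade-off explicit: shorter attenuation length means steeper depth dependence (better DOI) but more depth-dependent light collection (worse energy uniformity), matching the trend reported above.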

  4. Development of a novel depth of interaction PET detector using highly multiplexed G-APD cross-strip encoding

    Energy Technology Data Exchange (ETDEWEB)

    Kolb, A., E-mail: armin.kolb@med.uni-tuebingen.de; Parl, C.; Liu, C. C.; Pichler, B. J. [Werner Siemens Imaging Center, Department of Preclinical Imaging and Radiopharmacy, Eberhard Karls University, 72076 Tübingen (Germany); Mantlik, F. [Werner Siemens Imaging Center, Department of Preclinical Imaging and Radiopharmacy, Eberhard Karls University, 72076 Tübingen, Germany and Department of Empirical Inference, Max Planck Institute for Intelligent Systems, 72076 Tübingen (Germany); Lorenz, E. [Max Planck Institute for Physics, Föhringer Ring 6, 80805 München (Germany); Renker, D. [Department of Physics, Technische Universität München, 85748 Garching (Germany)

    2014-08-15

    Purpose: The aim of this study was to develop a prototype PET detector module for a combined small animal positron emission tomography and magnetic resonance imaging (PET/MRI) system. The most important factor for small animal imaging applications is the detection sensitivity of the PET camera, which can be optimized by utilizing longer scintillation crystals. At the same time, small animal PET systems must yield a high spatial resolution. The measured object is very close to the PET detector because the bore diameter of a high-field animal MR scanner is limited. When used in combination with long scintillation crystals, these small-bore PET systems generate parallax errors that ultimately lead to a decreased spatial resolution. Thus, we developed a depth of interaction (DoI) encoding PET detector module that has a uniform spatial resolution across the whole field of view (FOV), high detection sensitivity, compactness, and insensitivity to magnetic fields. Methods: The approach was based on Geiger-mode avalanche photodiode (G-APD) detectors with cross-strip encoding. The number of readout channels was reduced by a factor of 36 for the chosen block elements. Two 12 × 2 G-APD strip arrays (25 μm cells) were placed perpendicular to one another on opposite faces of a 12 × 12 lutetium oxyorthosilicate crystal block with a crystal size of 1.55 × 1.55 × 20 mm³. The strip arrays were multiplexed into two channels and used to calculate the x, y coordinates for each array and the deposited energy. The DoI was measured in step sizes of 1.8 mm using a collimated ¹⁸F source. The coincidence resolving time (CRT) was analyzed at all DoI positions by acquiring the waveform for each event and applying a digital leading-edge discriminator. Results: All 144 crystals were well resolved in the crystal flood map. The average full width at half maximum (FWHM) energy resolution of the detector was 12.8% ± 1.5%, with a FWHM CRT of 1.14 ± 0.02 ns. The average FWHM DoI resolution over 12 crystals was 2.90
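
    The cross-strip charge-division idea (many strips per axis multiplexed down to two readout channels, with the coordinate recovered Anger-style) can be sketched as follows. The strip count matches the record, but the resistor-chain weighting and the light spread over strips are assumptions for the example.

```python
import numpy as np

# Toy cross-strip readout (assumed geometry): each of 12 strips feeds a
# resistor chain whose two ends are the only channels read out, so the
# strip-axis coordinate is recovered by charge division, Anger-style.
n_strips = 12
weights = np.arange(n_strips) / (n_strips - 1)   # fractional position of each strip

def decode(strip_charges):
    right = np.sum(strip_charges * weights)       # charge reaching one end
    left = np.sum(strip_charges * (1 - weights))  # charge reaching the other end
    return right / (right + left)                 # normalized coordinate in [0, 1]

# Light centered on strip 7 with some spread over the neighbouring strips
charges = np.exp(-0.5 * ((np.arange(n_strips) - 7) / 1.2)**2)
x = decode(charges)   # close to 7/11, the fractional position of strip 7
```

    Two such chains (one per axis, on opposite crystal faces) give x and y from four channels instead of 144, which is the 36-fold channel reduction the record describes.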

  5. An investigation on continuous depth-of-interaction detection using a monolithic scintillator with single-ended readout

    International Nuclear Information System (INIS)

    Zhang, H; Zhou, R; Yang, C

    2014-01-01

    PET detectors with depth-of-interaction (DOI) capability have been widely studied to improve imaging resolution. Since discrete DOI detection and continuous DOI detection with dual-ended readout have their respective limitations, in this work we focus on continuous DOI detection with single-ended readout, using a monolithic LSO scintillator and a multi-pixel photodetector. Based on a non-linear least-squares data-fitting method and Geant4 simulation, we studied the relationship between the spatial resolution of gamma positioning and the pixel number of the photodetector. The results show that for a pixel number larger than 6×6, the positioning spatial resolution does not improve significantly with further increases in pixel number. Another aspect studied is the effect of crystal thickness on the spatial resolution: increasing the thickness of the crystal leads to higher detection efficiency but lower spatial resolution.
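
    A toy version of the nonlinear least-squares positioning studied here: simulate the light seen by a small pixel array from a scintillation point, then recover lateral position and depth by minimizing the squared residual over a parameter grid. The solid-angle-style light model, pixel layout, and noise level are invented stand-ins for the simulated detector.

```python
import numpy as np

# Toy single-ended monolithic-crystal model (assumed): the light seen by a
# pixel at lateral position p for an interaction at (x0, depth z) follows a
# solid-angle-like profile ~ z / (z^2 + (p - x0)^2)^(3/2).
pixels = np.linspace(-7.0, 7.0, 8)         # 8 pixel centres, mm

def light(x0, z):
    return z / (z**2 + (pixels - x0)**2) ** 1.5

# Simulated noisy event at x0 = 2.0 mm, depth z = 5.0 mm
rng = np.random.default_rng(2)
meas = light(2.0, 5.0) * (1 + 0.02 * rng.standard_normal(len(pixels)))

# Nonlinear least squares by exhaustive search over an (x0, z) grid;
# a Levenberg-Marquardt fit would replace this in practice.
xs = np.linspace(-6, 6, 121)
zs = np.linspace(1, 9, 81)
resid = [(np.sum((light(x, z) - meas)**2), x, z) for x in xs for z in zs]
best = min(resid)
x_hat, z_hat = best[1], best[2]
```

    The depth enters only through the width of the profile, which is why a finer pixelation helps up to the point where the profile is already well sampled, the saturation effect reported above.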

  6. A new method for depth of interaction determination in PET detectors

    CERN Document Server

    Pizzichemi, M; Niknejad, T; Liu, Z; Lecoq, P; Tavernier, S; Varela, J; Paganoni, M; Auffray, E

    2016-01-01

    A new method for obtaining depth of interaction (DOI) information in PET detectors is presented in this study, based on sharing and redirection of scintillation light among multiple detectors, together with attenuation of light over the length of the crystals. The aim is to obtain continuous DOI encoding with single-side readout, without the need for one-to-one coupling between scintillators and detectors, allowing the development of a PET scanner with good spatial, energy and timing resolutions while keeping the complexity of the system low. A prototype module has been produced and characterized to test the proposed method, coupling a LYSO scintillator matrix to a commercial SiPM array. Excellent crystal separation is obtained for all the scintillators in the array, light loss due to depolishing is found to be negligible, and energy resolution is shown to be 12.7% FWHM on average. The mean DOI resolution achieved is 4.1 mm FWHM on a 15 mm long crystal, and preliminary coincidence time resolution...
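
    A minimal sketch of how light attenuation plus light sharing can encode depth with single-side readout: the fraction of light collected directly by the SiPM under the hit crystal, versus light redirected at the top and shared with neighbours, varies monotonically with depth. The attenuation length and sharing factor below are assumptions for illustration, not the prototype's measured optics.

```python
import math

# Toy model of the light-sharing idea (assumed numbers): light travelling
# straight down to the coupled SiPM is attenuated over its path, while light
# reaching the top of the crystal is redirected and spread over neighbouring
# SiPMs, so the directly collected fraction encodes the depth of interaction.
L = 15.0        # crystal length, mm
l_att = 100.0   # effective attenuation length, mm (assumed)

def direct_fraction(doi_mm):
    direct = math.exp(-doi_mm / l_att)               # down to the coupled SiPM
    shared = 0.5 * math.exp(-(L - doi_mm) / l_att)   # redirected at the top
    return direct / (direct + shared)

# Monotonic in depth, so it can be inverted into a continuous DOI estimate
f_near = direct_fraction(2.0)    # interaction near the photodetector
f_far = direct_fraction(13.0)    # interaction far from the photodetector
```

    Because the encoding uses the ratio of shared to direct light rather than a second photodetector face, the readout stays single-sided and the coupling can remain coarser than one SiPM per crystal.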

  7. Universal Quantum Computing with Arbitrary Continuous-Variable Encoding

    OpenAIRE

    Lau, Hoi-Kwan; Plenio, Martin B.

    2016-01-01

    Implementing a qubit quantum computer in continuous-variable systems conventionally requires the engineering of specific interactions according to the encoding basis states. In this work, we present a unified formalism to conduct universal quantum computation with a fixed set of operations but arbitrary encoding. By storing a qubit in the parity of two or four qumodes, all computing processes can be implemented by basis state preparations, continuous-variable exponential-swap operations, and ...

  8. Universal Quantum Computing with Arbitrary Continuous-Variable Encoding.

    Science.gov (United States)

    Lau, Hoi-Kwan; Plenio, Martin B

    2016-09-02

    Implementing a qubit quantum computer in continuous-variable systems conventionally requires the engineering of specific interactions according to the encoding basis states. In this work, we present a unified formalism to conduct universal quantum computation with a fixed set of operations but arbitrary encoding. By storing a qubit in the parity of two or four qumodes, all computing processes can be implemented by basis state preparations, continuous-variable exponential-swap operations, and swap tests. Our formalism inherits the advantages that the quantum information is decoupled from collective noise, and logical qubits with different encodings can be brought to interact without decoding. We also propose a possible implementation of the required operations by using interactions that are available in a variety of continuous-variable systems. Our work separates the "hardware" problem of engineering quantum-computing-universal interactions, from the "software" problem of designing encodings for specific purposes. The development of quantum computer architecture could hence be simplified.

  9. Depth of interaction detection for γ-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Lerche, Ch.W. [Instituto de Aplicaciones de las Tecnologias de la Informacion y de las Comunicaciones Avanzadas, (UPV) Camino de Vera s/n, E46022 (Spain)], E-mail: lerche@ific.uv.es; Doering, M. [Institut fuer Kernphysik, Forschungszentrum Juelich GmbH, D52425 Juelich (Germany); Ros, A. [Institute de Fisica Corpuscular (CSIC-UV), 22085, Valencia E46071 (Spain); Herrero, V.; Gadea, R.; Aliaga, R.J.; Colom, R.; Mateo, F.; Monzo, J.M.; Ferrando, N.; Toledo, J.F.; Martinez, J.D.; Sebastia, A. [Instituto de Aplicaciones de las Tecnologias de la Informacion y de las Comunicaciones Avanzadas, (UPV) Camino de Vera s/n, E46022 (Spain); Sanchez, F.; Benlloch, J.M. [Institute de Fisica Corpuscular (CSIC-UV), 22085, Valencia E46071 (Spain)

    2009-03-11

    A novel design for an inexpensive depth of interaction capable detector for γ-ray imaging has been developed. The design takes advantage of the strong correlation between the width of the scintillation light distribution in monolithic crystals and the interaction depth of γ-rays. We present in this work an inexpensive modification of the commonly used charge-dividing circuits which enables the instantaneous and simultaneous computation of the second-order moment of the light distribution. This measure provides a good estimate of the depth of interaction and does not affect the determination of the position centroids and the energy release of the γ-ray impact. The method has been tested with a detector consisting of a monolithic LSO block sized 42×42×10 mm³ and a position-sensitive photomultiplier tube H8500 from Hamamatsu. The mean spatial resolution of the detector was found to be 3.4 mm for the position centroids and 4.9 mm for the DOI. The best spatial resolutions were observed at the center of the detector and yielded 1.4 mm for the position centroids and 1.9 mm for the DOI.
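
    The second-order-moment idea can be checked numerically (the detector computes it in analog electronics): with an inverse-square-law light spread, the lateral profile widens with interaction depth, so its second central moment tracks the DOI while the centroid is unaffected. The anode positions and depths below are invented for the example.

```python
import numpy as np

# Toy check of the idea (assumed light model): for a monolithic crystal the
# lateral light profile widens with interaction depth, so its second central
# moment, computed here digitally rather than in the analog resistor
# network, is a depth estimator that leaves the centroid untouched.
anodes = np.linspace(-20.0, 20.0, 16)   # anode positions, mm

def profile(x0, depth):
    # Inverse-square-law light spread from a point source at height `depth`
    return depth / (depth**2 + (anodes - x0)**2) ** 1.5

def centroid_and_width(q):
    c = np.sum(anodes * q) / np.sum(q)                    # first moment
    w = np.sqrt(np.sum((anodes - c)**2 * q) / np.sum(q))  # sqrt of 2nd central moment
    return c, w

c_shallow, w_shallow = centroid_and_width(profile(0.0, 3.0))  # near entrance window
c_deep, w_deep = centroid_and_width(profile(0.0, 8.0))        # deeper interaction
```

    The appeal of the analog implementation is that the same charge-division network that already delivers energy and centroid can deliver this width for a few extra passive components.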

  10. Depth of interaction detection for γ-ray imaging

    International Nuclear Information System (INIS)

    Lerche, Ch.W.; Doering, M.; Ros, A.; Herrero, V.; Gadea, R.; Aliaga, R.J.; Colom, R.; Mateo, F.; Monzo, J.M.; Ferrando, N.; Toledo, J.F.; Martinez, J.D.; Sebastia, A.; Sanchez, F.; Benlloch, J.M.

    2009-01-01

    A novel design for an inexpensive depth of interaction capable detector for γ-ray imaging has been developed. The design takes advantage of the strong correlation between the width of the scintillation light distribution in monolithic crystals and the interaction depth of γ-rays. We present in this work an inexpensive modification of the commonly used charge-dividing circuits which enables the instantaneous and simultaneous computation of the second-order moment of the light distribution. This measure provides a good estimate of the depth of interaction and does not affect the determination of the position centroids and the energy release of the γ-ray impact. The method has been tested with a detector consisting of a monolithic LSO block sized 42×42×10 mm³ and a position-sensitive photomultiplier tube H8500 from Hamamatsu. The mean spatial resolution of the detector was found to be 3.4 mm for the position centroids and 4.9 mm for the DOI. The best spatial resolutions were observed at the center of the detector and yielded 1.4 mm for the position centroids and 1.9 mm for the DOI.

  11. Development of high-resolution detector module with depth of interaction identification for positron emission tomography

    International Nuclear Information System (INIS)

    Niknejad, Tahereh; Pizzichemi, Marco; Stringhini, Gianluca; Auffray, Etiennette; Bugalho, Ricardo; Da Silva, Jose Carlos; Di Francesco, Agostino; Ferramacho, Luis; Lecoq, Paul; Leong, Carlos; Paganoni, Marco; Rolo, Manuel; Silva, Rui; Silveira, Miguel; Tavernier, Stefaan; Varela, Joao; Zorraquino, Carlos

    2017-01-01

    We have developed a time-of-flight, high-resolution, commercially viable detector module for application in small PET scanners. A new approach to depth of interaction (DOI) encoding with low complexity was investigated for a pixelated crystal array, using single-side readout and 4-to-1 coupling between scintillators and photodetectors. In this method the DOI information is estimated using the light-sharing technique. The detector module is an 8×8 matrix of 1.53×1.53×15 mm³ LYSO scintillators with optically depolished lateral surfaces, separated by reflective foils. The crystal array is optically coupled to a 4×4 silicon photomultiplier (SiPM) array and read out by a high-performance front-end ASIC with TDC capability (50 ps time binning). The results show excellent crystal identification for all the scintillators in the matrix, a timing resolution of 530 ps, an average DOI resolution of 5.17 mm FWHM and an average energy resolution of 18.29% FWHM. - Highlights: • A new method for DOI encoding for PET detectors based on light sharing is proposed. • A prototype module with a LYSO scintillator matrix coupled to a SiPM array is produced. • The module has single-side readout and 4-to-1 coupling between scintillators and SiPMs. • A compact TOF front-end ASIC is used. • Excellent performance is shown by the prototype module.

  12. Development of high-resolution detector module with depth of interaction identification for positron emission tomography

    Energy Technology Data Exchange (ETDEWEB)

    Niknejad, Tahereh, E-mail: tniknejad@lip.pt [Laboratory of Instrumentation and Experimental Particles Physics, Lisbon (Portugal); Pizzichemi, Marco [University of Milano-Bicocca (Italy); Stringhini, Gianluca [University of Milano-Bicocca (Italy); CERN, Geneve (Switzerland); Auffray, Etiennette [CERN, Geneve (Switzerland); Bugalho, Ricardo; Da Silva, Jose Carlos; Di Francesco, Agostino [Laboratory of Instrumentation and Experimental Particles Physics, Lisbon (Portugal); Ferramacho, Luis [PETsys Electronics, Oeiras (Portugal); Lecoq, Paul [CERN, Geneve (Switzerland); Leong, Carlos [PETsys Electronics, Oeiras (Portugal); Paganoni, Marco [University of Milano-Bicocca (Italy); Rolo, Manuel [Laboratory of Instrumentation and Experimental Particles Physics, Lisbon (Portugal); INFN, Turin (Italy); Silva, Rui [Laboratory of Instrumentation and Experimental Particles Physics, Lisbon (Portugal); Silveira, Miguel [PETsys Electronics, Oeiras (Portugal); Tavernier, Stefaan [PETsys Electronics, Oeiras (Portugal); Vrije Universiteit Brussel (Belgium); Varela, Joao [Laboratory of Instrumentation and Experimental Particles Physics, Lisbon (Portugal); CERN, Geneve (Switzerland); Zorraquino, Carlos [Biomedical Image Technologies Lab, Universidad Politécnica de Madrid (Spain); CIBER-BBN, Universidad Politécnica de Madrid (Spain)

    2017-02-11

    We have developed a time-of-flight, high-resolution, commercially viable detector module for application in small PET scanners. A new approach to depth of interaction (DOI) encoding with low complexity was investigated for a pixelated crystal array, using single-side readout and 4-to-1 coupling between scintillators and photodetectors. In this method the DOI information is estimated using the light-sharing technique. The detector module is an 8×8 matrix of 1.53×1.53×15 mm³ LYSO scintillators with optically depolished lateral surfaces, separated by reflective foils. The crystal array is optically coupled to a 4×4 silicon photomultiplier (SiPM) array and read out by a high-performance front-end ASIC with TDC capability (50 ps time binning). The results show excellent crystal identification for all the scintillators in the matrix, a timing resolution of 530 ps, an average DOI resolution of 5.17 mm FWHM and an average energy resolution of 18.29% FWHM. - Highlights: • A new method for DOI encoding for PET detectors based on light sharing is proposed. • A prototype module with a LYSO scintillator matrix coupled to a SiPM array is produced. • The module has single-side readout and 4-to-1 coupling between scintillators and SiPMs. • A compact TOF front-end ASIC is used. • Excellent performance is shown by the prototype module.

  13. Depth of interaction detection with enhanced position-sensitive proportional resistor network

    International Nuclear Information System (INIS)

    Lerche, Ch.W.; Benlloch, J.M.; Sanchez, F.; Pavon, N.; Gimenez, N.; Fernandez, M.; Gimenez, M.; Sebastia, A.; Martinez, J.; Mora, F.J.

    2005-01-01

    A new method of determining the depth of interaction of γ-rays in thick inorganic scintillation crystals was tested experimentally. The method uses the strong correlation between the width of the scintillation light distribution within large continuous crystals and the γ-ray's interaction depth. This behavior was successfully reproduced by a theoretical model distribution based on the inverse square law. To determine the distribution's width, its standard deviation σ is computed using an enhanced position-sensitive proportional resistor network of the kind often used in γ-ray-imaging devices. Minor changes to this known resistor network allow the analog, real-time determination of the light distribution's second moment without impairing the measurement of the energy and centroid. First experimental results are presented that confirm that the described method works correctly. Since only a few inexpensive electronic components are required, and no additional detectors or crystals, the main advantage of this method is its low cost.

  14. Front-end circuit for position sensitive silicon and vacuum tube photomultipliers with gain control and depth of interaction measurement

    International Nuclear Information System (INIS)

    Herrero, Vicente; Colom, Ricardo; Gadea, Rafael; Lerche, Christoph W.; Cerda, Joaquin; Sebastia, Angel; Benlloch, Jose M.

    2007-01-01

    Silicon photomultipliers, though still under development for mass production, may be an alternative to traditional vacuum photomultiplier tubes (VPMTs). As a consequence, electronic front-ends initially designed for VPMTs will need to be modified. In this work, an improved architecture is presented which is able to obtain the impact position and depth of interaction of a gamma ray within a continuous scintillation crystal using either kind of photomultiplier. A current-sensitive preamplifier stage with individual gain adjustment interfaces the multi-anode photomultiplier outputs with a current-division resistor network. The preamplifier stage improves front-end processing delay and temporal resolution and increases the resolution of the impact position calculation. Depth of interaction (DOI) is calculated from the width of the scintillation light distribution, which is related to the sum of the voltages at the resistor network input nodes. This operation is done by means of a high-speed current-mode scheme.

  15. A depth-of-interaction PET detector using mutual gain-equalized silicon photomultiplier

    International Nuclear Information System (INIS)

    Xi, W.; Weisenberger, A.G.; Dong, H.; Kross, Brian; Lee, S.; McKisson, J.; Zorn, Carl

    2012-01-01

    We developed a prototype high-resolution, high-efficiency depth-encoding detector for PET applications based on dual-ended readout of a LYSO array with two silicon photomultipliers (SiPMs). Flood images, energy resolution, and depth-of-interaction (DOI) resolution were measured for a LYSO array - 0.7 mm in crystal pitch and 10 mm in thickness - with four unpolished parallel sides. Flood images were obtained in which each individual crystal element in the array is resolved. The energy resolution of the entire array was measured to be 33%, while that of individual crystal pixel elements, utilizing the signal from both sides, ranged from 23.3% to 27%. By applying a mutual gain-equalization method, a DOI resolution of 2 mm for the crystal array was obtained in the experiments, while simulations indicate that ∼1 mm DOI resolution could possibly be achieved. The experimental DOI resolution can be further improved with revised detector supporting electronics with better energy resolution. This study provides a detailed detector calibration and DOI response characterization of dual-ended readout SiPM-based PET detectors, which will be important in the design and calibration of a PET scanner in the future.

  16. Depth of interaction calibration for PET detectors with dual-ended readout by PSAPDs

    International Nuclear Information System (INIS)

    Yang Yongfeng; Qi Jinyi; Wu Yibao; St James, Sara; Cherry, Simon R; Farrell, Richard; Dokhale, Purushottam A; Shah, Kanai S

    2009-01-01

    Many laboratories develop depth-encoding detectors to improve the trade-off between spatial resolution and sensitivity in positron emission tomography (PET) scanners. One challenge in implementing these detectors is the need to calibrate the depth of interaction (DOI) response for the large number of detector elements in a scanner. In this work, we evaluate two different methods, a linear detector calibration and a linear crystal calibration, for determining DOI calibration parameters. Both methods can use measurements from any source distribution and location, or even the intrinsic lutetium oxyorthosilicate (LSO) background activity, and are therefore well suited for use in a depth-encoding PET scanner. The methods were evaluated by measuring detector and crystal DOI responses for all eight detectors in a prototype depth-encoding PET scanner. The detectors utilize dual-ended readout of LSO scintillator arrays with position-sensitive avalanche photodiodes (PSAPDs). The LSO arrays have 7×7 elements, with a crystal size of 0.92×0.92×20 mm³ and a pitch of 1.0 mm. The arrays are read out by two 8×8 mm² area PSAPDs placed at opposite ends of the arrays. DOI is measured by the ratio of the amplitudes of the total energy signals measured by the two PSAPDs. Small variations were observed in the DOI responses of different crystals within an array, as well as in the DOI responses of different arrays. A slightly nonlinear dependence of the DOI ratio on depth was observed, and the nonlinearity was larger for the corner and edge crystals. The DOI calibration parameters were obtained from the DOI responses measured in singles mode. The average error between the calibrated DOI and the known DOI was 0.8 mm if a linear detector DOI calibration was used and 0.5 mm if a linear crystal DOI calibration was used. A line source phantom and a hot rod phantom were scanned on the prototype PET scanner. DOI measurement significantly improved the image spatial resolution no matter which DOI ...
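
    A linear crystal DOI calibration of the kind evaluated here reduces to fitting two parameters per crystal that map the two-PSAPD amplitude ratio to depth. The calibration points below are hypothetical numbers chosen to lie on a line; real data would show the small nonlinearity the record reports for edge crystals.

```python
import numpy as np

# Toy linear DOI calibration (assumed data): the PSAPD amplitude ratio
# r = A1/(A1 + A2) varies roughly linearly with depth, so two calibration
# parameters (slope, offset) map the measured ratio to millimetres.
known_depths = np.array([2.0, 6.0, 10.0, 14.0, 18.0])          # mm
measured_ratio = np.array([0.62, 0.56, 0.50, 0.44, 0.38])      # hypothetical

slope, offset = np.polyfit(measured_ratio, known_depths, 1)    # least-squares line

def doi_from_ratio(r):
    return slope * r + offset

err = abs(doi_from_ratio(0.53) - 8.0)   # ratio 0.53 should map near 8 mm
```

    The practical appeal noted in the abstract is that the fit needs no collimated beam: any source distribution, or the LSO intrinsic background, populates the ratio histogram whose extent fixes the slope and offset.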

  17. Depth of interaction resolution measurements for a high resolution PET detector using position sensitive avalanche photodiodes

    International Nuclear Information System (INIS)

    Yang Yongfeng; Dokhale, Purushottam A; Silverman, Robert W; Shah, Kanai S; McClish, Mickel A; Farrell, Richard; Entine, Gerald; Cherry, Simon R

    2006-01-01

    We explore dual-ended readout of LSO arrays with two position-sensitive avalanche photodiodes (PSAPDs) as a high-resolution, high-efficiency depth-encoding detector for PET applications. Flood histograms, energy resolution and depth of interaction (DOI) resolution were measured for unpolished LSO arrays with individual crystal sizes of 1.0, 1.3 and 1.5 mm, and for a polished LSO array with 1.3 mm pixels. The thickness of the crystal arrays was 20 mm. Good flood histograms were obtained for all four arrays, and the crystals in all four arrays can be clearly resolved. Although the amplitude of each PSAPD signal decreases as the interaction depth moves further from the PSAPD, the sum of the two PSAPD signals is essentially constant with irradiation depth for all four arrays. The energy resolutions were similar for all four arrays, ranging from 14.7% to 15.4%. A DOI resolution of 3-4 mm (including the width of the irradiation band, which is ∼2 mm) was obtained for all the unpolished arrays. The best DOI resolution was achieved with the unpolished 1 mm array (average 3.5 mm). The DOI resolution for the 1.3 mm and 1.5 mm unpolished arrays was 3.7 and 4.0 mm, respectively. For the polished array, the DOI resolution was only 16.5 mm. Summing the DOI profiles across all crystals for the 1 mm array only degraded the DOI resolution from 3.5 mm to 3.9 mm, indicating that it may not be necessary to calibrate the DOI response separately for each crystal within an array. The DOI response of individual crystals in the array confirms this finding. These results provide a detailed characterization of the DOI response of these PSAPD-based PET detectors, which will be important in the design and calibration of a PET scanner making use of this detector approach.

  18. Depth-of-Interaction Compensation Using a Focused-Cut Scintillator for a Pinhole Gamma Camera

    Science.gov (United States)

    Alhassen, Fares; Kudrolli, Haris; Singh, Bipin; Kim, Sangtaek; Seo, Youngho; Gould, Robert G.; Nagarkar, Vivek V.

    2011-06-01

    Preclinical SPECT offers a powerful means to understand the molecular pathways of drug interactions in animal models by discovering and testing new pharmaceuticals and therapies for potential clinical applications. A combination of high spatial resolution and sensitivity is required in order to map radiotracer uptake within small animals. Pinhole collimators have been investigated, as they offer high resolution by means of image magnification. One of the limitations of pinhole geometries is that increased magnification causes some rays to travel through the detection scintillator at steep angles, introducing parallax errors due to variable depth-of-interaction in the scintillator material, especially towards the edges of the detector field of view. These parallax errors ultimately limit the resolution of pinhole preclinical SPECT systems, especially for higher-energy isotopes that can easily penetrate through millimeters of scintillator material. A pixellated, focused-cut (FC) scintillator, with its pixels laser-cut so that they are collinear with incoming rays, can potentially compensate for these parallax errors and thus improve the system resolution. We performed the first experimental evaluation of a newly developed focused-cut scintillator. We scanned a Tc-99m source across the field of view of a pinhole gamma camera with a continuous scintillator, a conventional “straight-cut” (SC) pixellated scintillator, and a focused-cut scintillator, each coupled to an electron-multiplying charge coupled device (EMCCD) detector by a fiber-optic taper, and compared the measured full-width half-maximum (FWHM) values. We show that the FWHMs of the focused-cut scintillator projections are comparable to the FWHMs of the thinner SC scintillator, indicating the effectiveness of the focused-cut scintillator in compensating parallax errors.
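
    The parallax error that the focused-cut geometry compensates has a simple geometric form: a ray at angle θ from the detector normal that penetrates to depth d before interacting is displaced laterally by d·tan θ. A quick numeric check, with the depth and angles invented for illustration:

```python
import math

# Back-of-envelope parallax model (assumed numbers): lateral displacement of
# the interaction point relative to the ray's entrance point is depth * tan(theta).
# This blur is what cutting the pixels collinear with the incoming rays removes.
def parallax_blur_mm(depth_mm, theta_deg):
    return depth_mm * math.tan(math.radians(theta_deg))

center = parallax_blur_mm(3.0, 0.0)    # ray at the field-of-view centre: no blur
edge = parallax_blur_mm(3.0, 30.0)     # steep ray near the detector edge
```

    Even a modest 3 mm penetration at 30° gives a blur well above a millimetre, which is why the effect dominates at the field-of-view edges and for higher-energy isotopes that penetrate deeper.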

  19. Depth-of-Interaction Compensation Using a Focused-Cut Scintillator for a Pinhole Gamma Camera.

    Science.gov (United States)

    Alhassen, Fares; Kudrolli, Haris; Singh, Bipin; Kim, Sangtaek; Seo, Youngho; Gould, Robert G; Nagarkar, Vivek V

    2011-06-01

    Preclinical SPECT offers a powerful means to understand the molecular pathways of drug interactions in animal models by discovering and testing new pharmaceuticals and therapies for potential clinical applications. A combination of high spatial resolution and sensitivity is required in order to map radiotracer uptake within small animals. Pinhole collimators have been investigated, as they offer high resolution by means of image magnification. One of the limitations of pinhole geometries is that increased magnification causes some rays to travel through the detection scintillator at steep angles, introducing parallax errors due to variable depth-of-interaction in the scintillator material, especially towards the edges of the detector field of view. These parallax errors ultimately limit the resolution of pinhole preclinical SPECT systems, especially for higher-energy isotopes that can easily penetrate through millimeters of scintillator material. A pixellated, focused-cut (FC) scintillator, with its pixels laser-cut so that they are collinear with incoming rays, can potentially compensate for these parallax errors and thus improve the system resolution. We performed the first experimental evaluation of a newly developed focused-cut scintillator. We scanned a Tc-99m source across the field of view of a pinhole gamma camera with a continuous scintillator, a conventional "straight-cut" (SC) pixellated scintillator, and a focused-cut scintillator, each coupled to an electron-multiplying charge coupled device (EMCCD) detector by a fiber-optic taper, and compared the measured full-width half-maximum (FWHM) values. We show that the FWHMs of the focused-cut scintillator projections are comparable to the FWHMs of the thinner SC scintillator, indicating the effectiveness of the focused-cut scintillator in compensating parallax errors.

  20. A machine learning method for fast and accurate characterization of depth-of-interaction gamma cameras

    DEFF Research Database (Denmark)

    Pedemonte, Stefano; Pierce, Larry; Van Leemput, Koen

    2017-01-01

    to impose the depth-of-interaction in an experimental set-up. In this article we introduce a machine learning approach for extracting accurate forward models of gamma imaging devices from simple pencil-beam measurements, using a nonlinear dimensionality reduction technique in combination with a finite...

  1. Investigation of depth-of-interaction by pulse shape discrimination in multicrystal detectors read out by avalanche photodiodes

    International Nuclear Information System (INIS)

    Saoudi, A.; Pepin, C.M.; Dion, F.; Bentourkia, M.; Lecomte, R.; Dautet, H.

    1999-01-01

    The measurement of depth of interaction (DOI) within detectors is necessary to improve resolution uniformity across the FOV of small-diameter PET scanners. DOI encoding by pulse shape discrimination (PSD) has definite advantages, as it requires only one readout per pixel and it allows DOI measurement of photoelectric and Compton events. The PSD time characteristics of various scintillators were studied with avalanche photodiodes (APD) and the identification capability was tested in multi-crystal assemblies with up to four scintillators. In the PSD time spectrum of an APD-GSO/LSO/BGO/CsI(Tl) assembly, four distinct time peaks at 45, 26, 88 and 150 ns relative to a fast test pulse, with resolutions of 10.6, 5.2, 20 and 27 ns, can be easily separated. Whereas the number and position of scintillators in the multi-crystal assemblies affect detector performance, the ability to identify crystals is not compromised. Compton events have a significant effect on PSD accuracy, suggesting that photopeak energy gating should be used for better crystal identification. However, more sophisticated PSD techniques using parametric time-energy histograms can also improve crystal identification in cases where PSD time or energy discrimination alone is inadequate. These results confirm the feasibility of PSD DOI encoding with APD-based detectors for PET.
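Crystal identification from pulse shape, as described above, can be sketched with a tail-to-total charge ratio, a common PSD discriminant. The decay constants, integration window, and thresholds below are illustrative choices, not the paper's measured values:

```python
import numpy as np

def make_pulse(decay_ns, t=np.arange(0.0, 400.0, 1.0)):
    """Synthetic scintillation pulse: instant rise, exponential decay."""
    return np.exp(-t / decay_ns)

def tail_to_total(pulse, split=60):
    """Fraction of the charge arriving after `split` samples; slower
    scintillators put more of their light in the tail."""
    pulse = np.asarray(pulse, float)
    return pulse[split:].sum() / pulse.sum()

def identify_crystal(pulse, thresholds=(0.25, 0.55)):
    """Assign a pulse to one of three crystals by its tail fraction
    (thresholds would be calibrated per detector in practice)."""
    r = tail_to_total(pulse)
    if r < thresholds[0]:
        return "fast"    # e.g. an LSO-like ~40 ns decay
    if r < thresholds[1]:
        return "medium"  # e.g. a GSO-like ~60 ns decay
    return "slow"        # e.g. a BGO-like ~300 ns decay
```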

  2. Position-Sensitive Detector with Depth-of-Interaction Determination for Small Animal PET

    CERN Document Server

    Fedorov, A; Kholmetsky, A L; Korzhik, M V; Lecoq, P; Lobko, A S; Missevitch, O V; Tkatchev, A

    2002-01-01

    Crystal arrays of 2×2×10 mm LSO and LuAP pixels were manufactured to evaluate a detector with depth-of-interaction (DOI) determination capability, intended for a small-animal positron emission tomograph. A position-sensitive LSO/LuAP phoswich DOI detector based on 8×8 crystal arrays and a HAMAMATSU R5900-00-M64 position-sensitive multi-anode photomultiplier tube was developed and evaluated. Time resolution was found to be no worse than 1.0 ns FWHM for both layers, and the mean spatial resolution was 1.5 mm FWHM at the center of the field-of-view.

  3. Energy resolution of a four-layer depth of interaction detector block for small animal PET

    International Nuclear Information System (INIS)

    Tsuda, Tomoaki; Kawai, Hideyuki; Orita, Narimichi; Murayama, Hideo; Yoshida, Eiji; Inadama, Naoko; Yamaya, Taiga; Omura, Tomohide

    2004-01-01

    We are planning to develop a positron emission tomograph dedicated to small animals such as rats and mice that meets the demand for higher sensitivity. We proposed a new depth of interaction (DOI) detector arrangement that obtains DOI information using a four-layer detector in which all crystal elements are identical. In this DOI detector, we control the behavior of scintillation photons by inserting reflectors between crystal elements so that the DOI information of the four layers can be extracted from one two-dimensional (2D) position histogram made by an Anger-type calculation. In this work, we evaluate the energy resolution of this four-layer DOI detector. (author)
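The Anger-type calculation mentioned above is an energy-weighted centroid of the photosensor signals; a minimal sketch (the grid layout and names are illustrative):

```python
import numpy as np

def anger_position(signals):
    """Anger-type position estimate: the energy-weighted centroid of a
    2D grid of photosensor signals. Reflector patterns between crystal
    elements shift this centroid, which is what lets a single 2D
    position histogram separate the four DOI layers."""
    s = np.asarray(signals, dtype=float)
    total = s.sum()
    rows, cols = np.indices(s.shape)
    x = (cols * s).sum() / total
    y = (rows * s).sum() / total
    return x, y
```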

  4. A method for correcting the depth-of-interaction blurring in PET cameras

    International Nuclear Information System (INIS)

    Rogers, J.G.

    1993-11-01

    A method is presented for the purpose of correcting PET images for the blurring caused by variations in the depth-of-interaction in position-sensitive gamma ray detectors. In the case of a fine-cut 50x50x30 mm BGO block detector, the method is shown to improve the detector resolution by about 25%, measured in the geometry corresponding to detection at the edge of the field-of-view. Strengths and weaknesses of the method are discussed and its potential usefulness for improving the images of future PET cameras is assessed. (author). 8 refs., 3 figs

  5. Algebraic 2D PET image reconstruction using depth-of-interaction information

    International Nuclear Information System (INIS)

    Yamaya, Taiga; Obi, Takashi; Yamaguchi, Masahiro; Kita, Kouichi

    2001-01-01

    Recently a high-performance PET scanner, which measures depth-of-interaction (DOI) information, is being developed for molecular imaging. DOI measurement of multi-layered thin crystals can improve spatial resolution and scanner sensitivity simultaneously. In this paper, we apply an algebraic image reconstruction method to 2-dimensional (2D) DOI-PET scanners using accurate system modeling, in order to evaluate the effects of using DOI information on PET image quality. Algebraic image reconstruction methods have been successfully used to improve PET image quality, compared with the conventional filtered backprojection method. The proposed method is applied to simulated data for a small 2D DOI-PET scanner. The results show that accurate system modeling improves spatial resolution without noise emphasis, and that DOI information improves uniformity of spatial resolution. (author)
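As a sketch of what "algebraic image reconstruction" means here, a minimal Kaczmarz-style ART loop is shown below. The tiny system matrix is a toy stand-in; in a DOI-PET model the rows of A would encode the narrower, depth-resolved tubes of response:

```python
import numpy as np

def art_reconstruct(A, p, n_iter=100, relax=0.5):
    """Kaczmarz-style algebraic reconstruction technique (ART): for each
    measured projection p[i], project the current image estimate onto
    the hyperplane A[i] . x = p[i].
    A: (n_rays, n_pixels) system matrix; p: measured projection data."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        for i in range(len(p)):
            a = A[i]
            norm = a @ a
            if norm > 0:
                x = x + relax * (p[i] - a @ x) / norm * a
    return x

# Toy consistent system: true image [1, 2] observed through three rays.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
p = A @ np.array([1.0, 2.0])
```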

  6. A practical depth-of-interaction PET/MR detector with dichotomous-orthogonal-symmetry decoding

    International Nuclear Information System (INIS)

    Zhang, Yuxuan; Baghaei, Hossain; Yan, Han; Wong, Wai-Hoi

    2015-01-01

    Conventional dual-end depth-of-interaction (DOI) PET detector readout requires two 2D SiPM arrays; with the top and bottom SiPMs reading the same pixel, there is information redundancy. We proposed a dichotomous-orthogonal-symmetric (DOS) dual-end DOI readout to eliminate this redundancy and significantly reduce SiPM usage, electronic channels, and heat load. Reflecting films are used within the scintillator array to channel light exiting the top along the X-direction, while light exiting the bottom is channeled along the orthogonal Y-direction. Despite the unidirectional channeling on each end, the top readout can provide X-Y information using two 1D SiPM arrays; similarly, the bottom readout also provides X-Y information with two 1D SiPM arrays. Thus four 1D SiPM arrays (4×N) are used to decode XYZ in place of two 2D SiPM arrays (2N×N); SiPM usage is reduced from 2N² to 4N. Monte Carlo simulations (GATE) were carried out to study the XY decoding accuracy, energy resolution, and DOI resolution. Coupling the DOS-DOI design with a channel-decoding scheme, an array of 15×15 LSO (2.4×2.4×20 mm pixels) can be decoded by 18 SiPMs (2 rows of nine 3×3 mm SiPMs) on top and 18 SiPMs at the bottom, achieving a 10X reduction in SiPM usage, electronic channels and heat load. For BGO detectors, an 8×8 array (2.4×2.4×20 mm pixels) can be achieved with a 6.4X reduction. Simulations show 5–6 mm DOI resolution, 0.45–0.96 mm XY decoding blurring, and 20–24% energy resolution. This study shows the feasibility of the DOS-DOI design. Even compared to non-DOI detectors, there is a 5X/3X SiPM reduction for LSO/BGO. The proposed detector may yield practical ultrahigh-resolution PET/MR systems with depth-of-interaction at a production cost below current non-DOI systems.
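The claimed reduction follows directly from the array sizes: conventional dual-end readout needs 2N² SiPMs, while the DOS readout needs 4N. A small sketch of that arithmetic (note the abstract's 10X and 6.4X figures also fold in an additional channel-decoding scheme on top of this):

```python
def sipm_counts(n):
    """SiPM usage for an N x N crystal block: conventional dual-end DOI
    readout needs two 2D arrays (2*N^2 SiPMs), while the DOS readout
    needs four 1D arrays (4*N SiPMs), a reduction factor of N/2."""
    conventional = 2 * n * n
    dos = 4 * n
    return conventional, dos, conventional / dos
```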

  7. Depth-of-interaction measurement in a single-layer crystal array with a single-ended readout using digital silicon photomultiplier

    International Nuclear Information System (INIS)

    Lee, Min Sun; Lee, Jae Sung

    2015-01-01

    We present the first experimental evaluation of a depth-of-interaction (DOI) positron emission tomography (PET) detector using a digital silicon photomultiplier (dSiPM). To measure DOI information from a mono-layer array of scintillation crystals with a single-ended readout, our group previously proposed and developed a new method based on light spread using triangular reflectors. Since this method relies on measurement of the light distribution, the dSiPM, which has a fully digital interface, has several merits for our DOI measurement. The DOI PET detector comprises a dSiPM sensor (DPC-3200-22-44) coupled with a 14 × 14 array of 2 mm × 2 mm × 20 mm unpolished LGSO crystals. All crystals were covered with triangular reflectors. To obtain good performance from the DOI PET detector, several detector parameters were selected in a preliminary experiment. Detector performance was evaluated with the selected parameters and the optimal experimental setup, and a DOI measurement was conducted by irradiating the crystal block at five DOI positions spaced at intervals of 4 mm. Maximum-likelihood estimation was employed for DOI positioning, and the optimal DOI estimation scheme was also investigated in this study. As a result, the DOI PET detector showed clear crystal identification. The energy resolution (full-width at half-maximum (FWHM)) averaged over all depths was 10.21% ± 0.15% at 511 keV, and the time resolution averaged over all depths was 1198.61 ± 39.70 ps FWHM. The average DOI positioning accuracy for all depths was 74.22% ± 6.77%, which equates to a DOI resolution of 4.67 mm. Energy and DOI resolutions were uniform over all crystal positions except for the back parts of the array. Furthermore, additional simulation studies were conducted to verify the results of our DOI measurement method combined with dSiPM technology. In conclusion, our continuous DOI PET detector
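Maximum-likelihood DOI positioning of this kind compares the measured light distribution against per-depth calibration templates; a minimal sketch under an independent-Poisson model (the template values and depths are invented for illustration, not the paper's calibration data):

```python
import numpy as np

def ml_depth(signal, depth_templates):
    """Pick the depth whose calibration template makes the observed
    photosensor counts most likely under an independent-Poisson model.
    depth_templates maps depth (mm) -> expected mean counts per channel."""
    s = np.asarray(signal, float)
    best_depth, best_ll = None, -np.inf
    for depth, mean in depth_templates.items():
        m = np.asarray(mean, float)
        # Poisson log-likelihood up to a signal-only constant:
        ll = float(np.sum(s * np.log(m) - m))
        if ll > best_ll:
            best_depth, best_ll = depth, ll
    return best_depth
```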

  8. Development of a prototype PET scanner with depth-of-interaction measurement using solid-state photomultiplier arrays and parallel readout electronics.

    Science.gov (United States)

    Shao, Yiping; Sun, Xishan; Lan, Kejian A; Bircher, Chad; Lou, Kai; Deng, Zhi

    2014-03-07

    In this study, we developed a prototype animal PET by applying several novel technologies to use solid-state photomultiplier (SSPM) arrays to measure the depth of interaction (DOI) and improve imaging performance. Each PET detector has an 8 × 8 array of about 1.9 × 1.9 × 30.0 mm³ lutetium-yttrium-oxyorthosilicate scintillators, with each end optically connected to an SSPM array (16 channels in a 4 × 4 matrix) through a light guide to enable continuous DOI measurement. Each SSPM has an active area of about 3 × 3 mm², and its output is read by a custom-developed application-specific integrated circuit to directly convert analogue signals to digital timing pulses that encode the interaction information. These pulses are transferred to and decoded by a field-programmable gate array-based time-to-digital convertor for coincident event selection and data acquisition. The independent readout of each SSPM and the parallel signal processing can significantly improve the signal-to-noise ratio and enable the use of flexible algorithms for different data processes. The prototype PET consists of two rotating detector panels on a portable gantry with four detectors in each panel to provide 16 mm axial and variable transaxial field-of-view (FOV) sizes. List-mode ordered-subset expectation maximization image reconstruction was implemented. The measured mean energy, coincidence timing and DOI resolution for a crystal were about 17.6%, 2.8 ns and 5.6 mm, respectively. The measured transaxial resolutions at the center of the FOV were 2.0 mm and 2.3 mm for images reconstructed with and without DOI, respectively. In addition, the resolutions across the FOV with DOI were substantially better than those without DOI. The quality of PET images of both a hot-rod phantom and a mouse acquired with DOI was much higher than that of images obtained without DOI. This study demonstrates that SSPM arrays and advanced readout/processing electronics can be used to develop a practical DOI

  9. A LSO scintillator array for a PET detector module with depth of interaction measurement

    International Nuclear Information System (INIS)

    Huber, J.S.; Moses, W.W.; Andreaco, M.S.; Petterson, O.

    2000-01-01

    We present construction methods and performance results for a production scintillator array of 64 optically isolated, 3 mm x 3 mm x 30 mm sized LSO crystals. This scintillator array has been developed for a PET detector module consisting of the 8x8 LSO array coupled on one end to a single photomultiplier tube (PMT) and on the opposite end to a 64 pixel array of silicon photodiodes (PD). The PMT provides an accurate timing pulse and initial energy discrimination, the PD identifies the crystal of interaction, the sum provides a total energy signal, and the PD/(PD+PMT) ratio determines the depth of interaction (DOI). Unlike the previous LSO array prototypes, we now glue Lumirror reflector material directly onto 4 sides of each crystal to obtain an easily manufactured, mechanically rugged array with our desired depth dependence. With 511 keV excitation, we obtain a total energy signal of 3600 electrons, pulse-height resolution of 25% fwhm, and 6-15 mm fwhm DOI resolution

  10. A room temperature LSO/PIN photodiode PET detector module that measures depth of interaction

    International Nuclear Information System (INIS)

    Moses, W.W.; Derenzo, S.E.; Melcher, C.L.; Manente, R.A.

    1994-11-01

    We present measurements of a 4-element PET detector module that uses a 2×2 array of 3 mm square PIN photodiodes to both measure the depth of interaction (DOI) and identify the crystal of interaction. Each photodiode is coupled to one end of a 3×3×25 mm LSO crystal, with the opposite ends of all 4 crystals attached to a single PMT that provides a timing signal and initial energy discrimination. Each LSO crystal is coated with a "lossy" reflector, so the ratio of light detected in the photodiode and PMT depends on the position of interaction in the crystal, and is used to determine this position on an event-by-event basis. This module is operated at +25 °C with a photodiode amplifier peaking time of 2 μs. When excited by a collimated beam of 511 keV photons at the photodiode end of the module (i.e. closest to the patient), the DOI resolution is 4 mm FWHM and the crystal of interaction is identified correctly 95% of the time. When excited at the opposite end of the module, the DOI resolution is 13 mm FWHM and the crystal of interaction is identified correctly 73% of the time. The channel-to-channel variations in performance are minimal
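The light-sharing ratio described above maps to depth through a calibration curve; a minimal sketch assuming a linear calibration (the endpoint ratios below are invented for illustration, not the paper's measured values):

```python
def doi_from_ratio(pd_signal, pmt_signal, ratio_near=0.6, ratio_far=0.2,
                   crystal_length_mm=25.0):
    """Estimate depth of interaction from the PD/(PD+PMT) light-sharing
    ratio, assuming the ratio falls linearly from `ratio_near` at the
    photodiode end (depth 0) to `ratio_far` at the PMT end."""
    r = pd_signal / (pd_signal + pmt_signal)
    frac = (ratio_near - r) / (ratio_near - ratio_far)
    frac = min(max(frac, 0.0), 1.0)  # clamp to the physical crystal
    return frac * crystal_length_mm
```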

  11. Reconstruction in PET cameras with irregular sampling and depth of interaction capability

    International Nuclear Information System (INIS)

    Virador, P.R.G.; Moses, W.W.; Huesman, R.H.

    1998-01-01

    The authors present 2D reconstruction algorithms for a rectangular PET camera capable of measuring depth of interaction (DOI). The camera geometry leads to irregular radial and angular sampling of the tomographic data. DOI information increases sampling density, allowing the use of evenly spaced quarter-crystal width radial bins with minimal interpolation of irregularly spaced data. In the regions where DOI does not increase sampling density (chords normal to crystal faces), fine radial sinogram binning leads to zero efficiency bins if uniform angular binning is used. These zero efficiency sinogram bins lead to streak artifacts if not corrected. To minimize these unnormalizable sinogram bins the authors use two angular binning schemes: Fixed Width and Natural Width. Fixed Width uses a fixed angular width except in the problem regions where appropriately chosen widths are applied. Natural Width uses angle widths which are derived from intrinsic detector sampling. Using a modified filtered-backprojection algorithm to accommodate these angular binning schemes, the authors reconstruct artifact free images with nearly isotropic and position independent spatial resolution. Results from Monte Carlo data indicate that they have nearly eliminated image degradation due to crystal penetration

  12. Depth-of-interaction estimates in pixelated scintillator sensors using Monte Carlo techniques

    International Nuclear Information System (INIS)

    Sharma, Diksha; Sze, Christina; Bhandari, Harish; Nagarkar, Vivek; Badano, Aldo

    2017-01-01

    Image quality in thick scintillator detectors can be improved by minimizing parallax errors through depth-of-interaction (DOI) estimation. A novel sensor for low-energy single photon imaging having a thick, transparent, crystalline pixelated micro-columnar CsI:Tl scintillator structure has been described, with possible future application in small-animal single photon emission computed tomography (SPECT) imaging when using thicker structures under development. In order to understand the fundamental limits of this new structure, we introduce cartesianDETECT2, an open-source optical transport package that uses Monte Carlo methods to obtain estimates of DOI for improving spatial resolution of nuclear imaging applications. Optical photon paths are calculated as a function of varying simulation parameters such as columnar surface roughness, bulk, and top-surface absorption. We use scanning electron microscope images to estimate appropriate surface roughness coefficients. Simulation results are analyzed to model and establish patterns between DOI and photon scattering. The effect of varying starting locations of optical photons on the spatial response is studied. Bulk and top-surface absorption fractions were varied to investigate their effect on spatial response as a function of DOI. We investigated the accuracy of our DOI estimation model for a particular screen with various training and testing sets, and for all cases the percent error between the estimated and actual DOI over the majority of the detector thickness was ±5% with a maximum error of up to ±10% at deeper DOIs. In addition, we found that cartesianDETECT2 is computationally five times more efficient than MANTIS. Findings indicate that DOI estimates can be extracted from a double-Gaussian model of the detector response. We observed that our model predicts DOI in pixelated scintillator detectors reasonably well.

  13. Evaluation of algorithms for photon depth of interaction estimation for the TRIMAGE PET component

    Energy Technology Data Exchange (ETDEWEB)

    Camarlinghi, Niccolo; Belcari, Nicola [University of Pisa (Italy); Cerello, Piergiorgio [University of Torino (Italy); Sportelli, Giancarlo [University of Pisa (Italy); Pennazio, Francesco [University of Torino (Italy); Zaccario, Emanuele; Del Guerra, Alberto [University of Pisa (Italy)

    2015-05-18

    The TRIMAGE consortium aims to develop a multimodal PET/MR/EEG brain scanner dedicated to the early diagnosis of schizophrenia and other mental health disorders. The PET component features a full ring of 18 detectors, each consisting of twelve 8×8 silicon photomultiplier (SiPM) tiles coupled to two segmented LYSO crystal matrices with staggered layers. In each module, the crystals belonging to the bottom layer are coupled one-to-one to the SiPMs, while each crystal of the top layer is coupled to four crystals of the bottom layer. This configuration makes it possible to increase the crystal thickness while reducing the depth-of-interaction uncertainty, as photons interacting in different layers are expected to produce different light patterns on the SiPMs. The PET scanner will implement the pixel/layer identification on a front-end FPGA. This will increase the effective bandwidth, while at the same time placing restrictions on the complexity of the algorithms that can be implemented. In this work, two algorithms feasible to implement directly on an FPGA are presented and evaluated. The first implements a method based on adaptive thresholding, while the other uses a linear Support Vector Machine (SVM) trained to distinguish the light patterns coming from the two layers. The validation of the algorithm performance is carried out using simulated data generated with the GAMOS Monte Carlo. The obtained results show that the achieved accuracy in layer and pixel identification is above 90% for both proposed approaches.
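A linear SVM of the kind described can be sketched with sub-gradient descent on the regularized hinge loss. The one-dimensional "light pattern" feature and all training values below are invented toy data, not the TRIMAGE light patterns:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200, seed=0):
    """Minimal linear SVM trained by stochastic sub-gradient descent on
    the regularized hinge loss. X: (n, d) features; y: labels in {-1, +1}
    (e.g. bottom vs top crystal layer)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            if y[i] * (X[i] @ w + b) < 1:   # margin violated
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:
                w -= lr * lam * w
    return w, b

def predict_layer(w, b, x):
    """+1 for one layer, -1 for the other."""
    return 1 if x @ w + b >= 0 else -1
```

In a deployment like the one described, the learned weights would be fixed after training so that only the dot product and comparison need to run on the FPGA.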

  14. Three-layer GSO depth-of-interaction detector for high-energy gamma camera

    International Nuclear Information System (INIS)

    Yamamoto, S.; Watabe, H.; Kawachi, N.; Fujimaki, S.; Kato, K.; Hatazawa, J.

    2014-01-01

    Using Ce-doped Gd2SiO5 (GSO) of different Ce concentrations, three-layer DOI block detectors were developed to reduce the parallax error at the edges of a pinhole gamma camera for high-energy gamma photons. GSOs with Ce concentrations of 1.5 mol% (decay time ∼40 ns), 0.5 mol% (∼60 ns), and 0.4 mol% (∼80 ns) were selected for the depth-of-interaction (DOI) detectors. These three types of GSOs were optically coupled in the depth direction, arranged in a 22×22 matrix and coupled to a flat-panel photomultiplier tube (FP-PMT, Hamamatsu H8500). The sizes of the GSO cells were 1.9 mm×1.9 mm×4 mm, 1.9 mm×1.9 mm×5 mm, and 1.9 mm×1.9 mm×6 mm for 1.5 mol%, 0.5 mol%, and 0.4 mol%, respectively. With these combinations of GSOs, all spots corresponding to GSO cells were clearly resolved in the position histogram. Pulse shape spectra showed three peaks corresponding to the three GSO decay times. The block detector was contained in a 2-cm-thick tungsten shield, and a pinhole collimator with a 0.5-mm aperture was mounted. With pulse shape discrimination, we separated the Cs-137 point source images for each DOI layer. The point source image of the lower layer was detected at the most central part of the field-of-view, and its distribution was the smallest. The point source image of the upper layer was detected at the most peripheral part of the field-of-view, and its distribution was the widest. With this information, the spatial resolution of the pinhole gamma camera can be improved. We conclude that DOI detection is effective for pinhole gamma cameras for high-energy gamma photons

  15. Development of GAGG depth-of-interaction (DOI) block detectors based on pulse shape analysis

    International Nuclear Information System (INIS)

    Yamamoto, Seiichi; Kobayashi, Takahiro; Yeol Yeom, Jung; Morishita, Yuki; Sato, Hiroki; Endo, Takanori; Usuki, Yoshiyuki; Kamada, Kei; Yoshikawa, Akira

    2014-01-01

    A depth-of-interaction (DOI) detector is required for developing a high-resolution and high-sensitivity PET system. Ce-doped Gd3Al2Ga3O12 (GAGG fast: GAGG-F) is a promising scintillator for PET applications, with high light output, no natural radioisotope, and a light emission wavelength suitable for semiconductor-based photodetectors. However, no DOI detector based on pulse shape analysis with GAGG-F has been developed to date, due to the lack of an appropriate scintillator for pairing. Recently a new variation of this scintillator with a different Al/Ga ratio, Ce-doped Gd3Al2.6Ga2.4O12 (GAGG slow: GAGG-S), which has a slower decay time, was developed. The combination of GAGG-F and GAGG-S may allow us to realize high-resolution DOI detectors based on pulse shape analysis. We developed and tested two GAGG phoswich DOI block detectors comprised of pixelated GAGG-F and GAGG-S scintillation crystals. One phoswich block detector comprised 2×2×5 mm pixels assembled into a 5×5 matrix. The DOI block was optically coupled to a silicon photomultiplier (Si-PM) array (Hamamatsu MPPC S11064-050P) with a 2-mm-thick light guide. The other phoswich block detector comprised 0.5×0.5×5 mm (GAGG-F) and 0.5×0.5×6 mm (GAGG-S) pixels assembled into a 20×20 matrix. This DOI block was also optically coupled to the same Si-PM array with a 2-mm-thick light guide. In the block detector with 2-mm crystal pixels (5×5 matrix), the 2-dimensional histogram revealed excellent separation with an average energy resolution of 14.1% for 662-keV gamma photons. The pulse shape spectrum displayed good separation with a peak-to-valley ratio of 8.7. In the block detector with 0.5-mm crystal pixels (20×20 matrix), the 2-dimensional histogram also showed good separation, with an energy resolution of 27.5% for the 662-keV gamma photons. The pulse shape spectrum displayed good separation with a peak-to-valley ratio of 6.5. These results indicate that phoswich DOI

  16. High spatial resolution measurement of depth-of-interaction of a PET LSO crystal

    International Nuclear Information System (INIS)

    Simon, A.; Kalinka, G.; Novak, D.; Sipos, A.; Vegh, J.; Molnar, J.

    2004-01-01

    A new type of experimental technique to investigate the depth-of-interaction (DOI) dependence in small scintillator elements designed for high-resolution animal PET [1] has recently been introduced at our institute. A lutetium oxyorthosilicate (LSO) crystal (2×2×10 mm³) was irradiated with a highly focused 2 MeV He+ beam at the ATOMKI nuclear microprobe laboratory. Pulse height spectra from a photomultiplier (PMT) attached to one end of the LSO crystal were collected in list mode. Sequential scans of 1000×1000 μm² areas along the 10 mm long crystal were made to obtain high lateral resolution images of pulse height spectra at different distances from the window of the PMT. A mean pulse height algorithm was applied to each pixel to generate two-dimensional intensity images and the corresponding spectra of 100 μm × 1 mm areas. Representative pulse height spectra are shown in Fig. 1 for different distances between the position of irradiation and the PMT. The mean value of the pulse height spectrum, describing the position of the full-energy peak, is a way to measure DOI effects. The closer the DOI to the PMT end of the crystal, the higher the energy of the peak. The centre of the detected peak varies by about 30% along the lateral side of the crystal. This effect is due to the increasing number of reflections, with associated loss of light, as the distance between the DOI position and the light-collecting PMT grows. Beyond these results, no difference in light intensity was found depending on which position across (perpendicular to the length of) the crystal was irradiated with the microbeam. The obtained results of the overall DOI dependence confirm previous measurements on LSO crystals with similar geometry and wrapping but based on collimated gamma-ray irradiation. Since the present experimental setup allows obtaining data with several orders of magnitude better spatial resolution (from μm up to mm) than with
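The ~30% variation in peak position with depth can be captured by a simple exponential light-loss model; the attenuation coefficient below is chosen only to reproduce that order of magnitude and is not a measured value:

```python
import math

def light_collected(distance_to_pmt_mm, atten_per_mm=0.035):
    """Fraction of scintillation light reaching the PMT: each reflection
    on the way down the crystal loses some light, modeled here as a
    single effective exponential attenuation with depth."""
    return math.exp(-atten_per_mm * distance_to_pmt_mm)
```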

  17. A combined time-of-flight and depth-of-interaction detector for total-body positron emission tomography

    Energy Technology Data Exchange (ETDEWEB)

    Berg, Eric, E-mail: eberg@ucdavis.edu; Roncali, Emilie; Du, Junwei; Cherry, Simon R. [Department of Biomedical Engineering, University of California, Davis, One Shields Avenue, Davis, California 95616 (United States); Kapusta, Maciej [Molecular Imaging, Siemens Healthcare, Knoxville, Tennessee 37932 (United States)

    2016-02-15

    Purpose: In support of a project to build a total-body PET scanner with an axial field-of-view of 2 m, the authors are developing simple, cost-effective block detectors with combined time-of-flight (TOF) and depth-of-interaction (DOI) capabilities. Methods: This work focuses on investigating the potential of phosphor-coated crystals with conventional PMT-based block detector readout to provide DOI information while preserving timing resolution. The authors explored a variety of phosphor-coating configurations with single crystals and crystal arrays. Several pulse shape discrimination techniques were investigated, including decay time, delayed charge integration (DCI), and average signal shapes. Results: Pulse shape discrimination based on DCI provided the lowest DOI positioning error: 2 mm DOI positioning error was obtained with single phosphor-coated crystals while 3–3.5 mm DOI error was measured with the block detector module. Minimal timing resolution degradation was observed with single phosphor-coated crystals compared to uncoated crystals, and a timing resolution of 442 ps was obtained with phosphor-coated crystals in the block detector compared to 404 ps without phosphor coating. Flood maps showed a slight degradation in crystal resolvability with phosphor-coated crystals; however, all crystals could be resolved. Energy resolution was degraded by 3%–7% with phosphor-coated crystals compared to uncoated crystals. Conclusions: These results demonstrate the feasibility of obtaining TOF–DOI capabilities with simple block detector readout using phosphor-coated crystals.
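Delayed charge integration (DCI), as used above, amounts to comparing the charge in a delayed window against the total. The synthetic two-component pulse below is illustrative: the decay constants and phosphor fractions are invented, standing in for the depth-dependent slow component that phosphor coating adds:

```python
import numpy as np

def delayed_charge_integration(pulse, delay_samples=100):
    """DCI discriminant: fraction of the total charge collected after
    the delay. Phosphor re-emission adds a slow component whose
    strength varies with DOI, so deeper events shift this value."""
    p = np.asarray(pulse, float)
    return p[delay_samples:].sum() / p.sum()

def synth_pulse(phosphor_frac, tau_fast=40.0, tau_slow=600.0,
                t=np.arange(0.0, 2000.0, 1.0)):
    """Two-component pulse: prompt scintillation (fast decay) mixed with
    phosphor re-emission (slow decay), each normalized to unit charge."""
    fast = np.exp(-t / tau_fast) / tau_fast
    slow = np.exp(-t / tau_slow) / tau_slow
    return (1.0 - phosphor_frac) * fast + phosphor_frac * slow
```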

  18. A combined time-of-flight and depth-of-interaction detector for total-body positron emission tomography

    International Nuclear Information System (INIS)

    Berg, Eric; Roncali, Emilie; Du, Junwei; Cherry, Simon R.; Kapusta, Maciej

    2016-01-01

    Purpose: In support of a project to build a total-body PET scanner with an axial field-of-view of 2 m, the authors are developing simple, cost-effective block detectors with combined time-of-flight (TOF) and depth-of-interaction (DOI) capabilities. Methods: This work focuses on investigating the potential of phosphor-coated crystals with conventional PMT-based block detector readout to provide DOI information while preserving timing resolution. The authors explored a variety of phosphor-coating configurations with single crystals and crystal arrays. Several pulse shape discrimination techniques were investigated, including decay time, delayed charge integration (DCI), and average signal shapes. Results: Pulse shape discrimination based on DCI provided the lowest DOI positioning error: 2 mm DOI positioning error was obtained with single phosphor-coated crystals while 3–3.5 mm DOI error was measured with the block detector module. Minimal timing resolution degradation was observed with single phosphor-coated crystals compared to uncoated crystals, and a timing resolution of 442 ps was obtained with phosphor-coated crystals in the block detector compared to 404 ps without phosphor coating. Flood maps showed a slight degradation in crystal resolvability with phosphor-coated crystals; however, all crystals could be resolved. Energy resolution was degraded by 3%–7% with phosphor-coated crystals compared to uncoated crystals. Conclusions: These results demonstrate the feasibility of obtaining TOF–DOI capabilities with simple block detector readout using phosphor-coated crystals

  19. A novel depth-of-interaction block detector for positron emission tomography using a dichotomous orthogonal symmetry decoding concept

    International Nuclear Information System (INIS)

    Zhang, Yuxuan; Yan, Han; Baghaei, Hossain; Wong, Wai-Hoi

    2016-01-01

    Conventionally, a dual-end depth-of-interaction (DOI) block detector readout requires two two-dimensional silicon photomultiplier (SiPM) arrays, one on top and one on the bottom, to define the XYZ positions. However, because both the top and bottom SiPM arrays are reading the same pixels, this creates information redundancy. We propose a dichotomous orthogonal symmetric (DOS) dual-end readout block detector design, which removes this redundancy by reducing the number of SiPMs and still achieves XY and DOI (Z) decoding for a positron emission tomography (PET) block detector. Reflecting films are used within the block detector to channel photons going to the top of the block only in the X direction, while photons going to the bottom are channeled along the Y direction. Despite the unidirectional channeling on each end, the top readout provides both X and Y information using two one-dimensional SiPM arrays instead of a two-dimensional SiPM array; similarly, the bottom readout also provides both X and Y information with just two one-dimensional SiPM arrays. Thus, a total of four one-dimensional SiPM arrays (4×N SiPMs) are used to decode the XYZ positions of the firing pixels instead of two two-dimensional SiPM arrays (2×N×N SiPMs), reducing the number of SiPMs per block from 2N² to 4N for PET/MR or PET/CT systems. Moreover, the SiPM arrays on one end can be replaced by two regular photomultiplier tubes (PMTs), so that a block needs only 2N SiPMs + 2 half-PMTs; this hybrid-DOS DOI block detector can be used in PET/CT systems. Monte Carlo simulations were carried out to study the performance of our DOS DOI block detector design, including the XY-decoding quality, energy resolution, and DOI resolution. Both BGO and LSO scintillators were studied. We found that 4 mm pixels were well decoded for 5×5 BGO and 9×9 LSO arrays, with 4 to 5 mm DOI resolution and 16–20% energy resolution

  20. Development and evaluation of a practical method to measure the Depth of Interaction function for a single side readout PET detector

    CERN Document Server

    Stringhini, G.; Ghezzi, A.; Stojkovic, A.; Paganoni, M.; Auffray, E.

    2016-01-01

    In small animal and organ-dedicated PET scanners, knowledge of the depth of interaction (DOI) of the gamma ray along the main axis of the scintillator is fundamental information for avoiding parallax error and achieving high performance in terms of spatial resolution. Recently we developed a new method to obtain the DOI function for a single-side readout PET module, recirculating the scintillation light in the matrix by means of a mirror placed on top of the module. In a complete PET scanner, periodic DOI calibrations have to be performed to prevent time-dependent miscalibrations and performance degradation. The current DOI calibration relies on a coincidence system between the module and an external scintillator to provide the DOI information a priori, and it is clearly not feasible in a real system without impractical disassembly of the scanner. In this paper we instead develop a fast and precise calibration method based on uniform irradiation of the scintillators. Three irradiation modalities ...

  1. Towards PDT with Genetically Encoded Photosensitizer KillerRed: A Comparison of Continuous and Pulsed Laser Regimens in an Animal Tumor Model.

    Directory of Open Access Journals (Sweden)

    Marina Shirmanova

    The strong phototoxicity of the red fluorescent protein KillerRed allows it to be considered as a potential genetically encoded photosensitizer for the photodynamic therapy (PDT) of cancer. The advantages of KillerRed over chemical photosensitizers are its expression in tumor cells transduced with the appropriate gene and direct killing of cells through precise damage to any desired cell compartment. The ability of KillerRed to affect cell division and to induce cell death has already been demonstrated in cancer cell lines in vitro and HeLa tumor xenografts in vivo. However, the further development of this approach for PDT requires optimization of the method of treatment. In this study we tested continuous wave (593 nm) and pulsed laser (584 nm, 10 Hz, 18 ns) modes to achieve an antitumor effect. The study was carried out on CT26 subcutaneous mouse tumors expressing KillerRed in fusion with histone H2B. The results showed that the pulsed mode provided a higher rate of photobleaching of KillerRed without any temperature increase on the tumor surface. PDT with the continuous-wave laser was ineffective against CT26 tumors in mice, whereas the pulsed laser induced pronounced histopathological changes and inhibition of tumor growth. Therefore, we selected an effective regimen for PDT when using the genetically encoded photosensitizer KillerRed and pulsed laser irradiation.

  2. Four-layer depth-of-interaction PET detector for high resolution PET using a multi-pixel S8550 avalanche photodiode

    International Nuclear Information System (INIS)

    Nishikido, Fumihiko; Inadama, Naoko; Oda, Ichiro; Shibuya, Kengo; Yoshida, Eiji; Yamaya, Taiga; Kitamura, Keishi; Murayama, Hideo

    2010-01-01

    Avalanche photodiodes (APDs) are being used as photodetectors in positron emission tomography (PET) because they have many advantages over the photomultiplier tubes (PMTs) typically used in PET detectors. We have developed a PET detector that consists of a multi-pixel APD and a 6 × 6 × 4 array of 1.46 × 1.46 mm² × 4.5 mm LYSO crystals for a small animal PET scanner. The detector can identify four-layer depth of interaction (DOI) with a position-sensitive APD coupled to the backside of a crystal array, using only an optimized reflector arrangement. Since scintillation light is shared among many pixels by this method, the weaker signals in APD pixels far from the interacting crystal are affected by noise. To evaluate the performance of the four-layer DOI detector with the APD and the influence of electrical noise on our method, we constructed a prototype DOI detector and tested its performance. We found that, except for crystal elements on the edge of the crystal array, all crystal elements could be identified from the 2D position histogram. An energy resolution of 16.9% was obtained for the whole crystal array of the APD detector. The noise dependence of the detector performance indicated that the DOI detector using the APD could achieve sufficient performance even when using application-specific integrated circuits.

  3. Detector normalization and scatter correction for the jPET-D4: A 4-layer depth-of-interaction PET scanner

    Energy Technology Data Exchange (ETDEWEB)

    Kitamura, Keishi [Shimadzu Corporation, 1 Nishinokyo-Kuwabaracho, Nakagyo-ku, Kyoto-shi, Kyoto 604-8511 (Japan)]. E-mail: kitam@shimadzu.co.jp; Ishikawa, Akihiro [Shimadzu Corporation, 1 Nishinokyo-Kuwabaracho, Nakagyo-ku, Kyoto-shi, Kyoto 604-8511 (Japan); Mizuta, Tetsuro [Shimadzu Corporation, 1 Nishinokyo-Kuwabaracho, Nakagyo-ku, Kyoto-shi, Kyoto 604-8511 (Japan); Yamaya, Taiga [National Institute of Radiological Sciences, 9-1 Anagawa-4, Inage-ku, Chiba-shi, Chiba 263-8555 (Japan); Yoshida, Eiji [National Institute of Radiological Sciences, 9-1 Anagawa-4, Inage-ku, Chiba-shi, Chiba 263-8555 (Japan); Murayama, Hideo [National Institute of Radiological Sciences, 9-1 Anagawa-4, Inage-ku, Chiba-shi, Chiba 263-8555 (Japan)

    2007-02-01

    The jPET-D4 is a brain positron emission tomography (PET) scanner composed of 4-layer depth-of-interaction (DOI) detectors with a large number of GSO crystals, which achieves both high spatial resolution and high scanner sensitivity. Since the sensitivity of each crystal element is highly dependent on DOI layer depth and incidental {gamma} ray energy, it is difficult to estimate normalization factors and scatter components with high statistical accuracy. In this work, we implemented a hybrid scatter correction method combined with component-based normalization, which estimates scatter components with a dual energy acquisition using a convolution subtraction-method for an estimation of trues from an upper energy window. In order to reduce statistical noise in sinograms, the implemented scheme uses the DOI compression (DOIC) method, that combines deep pairs of DOI layers into the nearest shallow pairs of DOI layers with natural detector samplings. Since the compressed data preserve the block detector configuration, as if the data are acquired using 'virtual' detectors with high {gamma}-ray stopping power, these correction methods can be applied directly to DOIC sinograms. The proposed method provides high-quality corrected images with low statistical noise, even for a multi-layer DOI-PET.

  4. Four-layer depth-of-interaction PET detector for high resolution PET using a multi-pixel S8550 avalanche photodiode

    Energy Technology Data Exchange (ETDEWEB)

    Nishikido, Fumihiko, E-mail: funis@nirs.go.j [Molecular Imaging Center, National Institute of Radiological Sciences, 4-9-1 Anagawa, Inage-ku, Chiba 263-8555 (Japan); Inadama, Naoko [Molecular Imaging Center, National Institute of Radiological Sciences, 4-9-1 Anagawa, Inage-ku, Chiba 263-8555 (Japan); Oda, Ichiro [Shimadzu Corporation, Nishinokyo Kuwabaracho 1 Nakagyo-ku, Kyoto-shi, Kyoto 604-8511 (Japan); Shibuya, Kengo; Yoshida, Eiji; Yamaya, Taiga [Molecular Imaging Center, National Institute of Radiological Sciences, 4-9-1 Anagawa, Inage-ku, Chiba 263-8555 (Japan); Kitamura, Keishi [Shimadzu Corporation, Nishinokyo Kuwabaracho 1 Nakagyo-ku, Kyoto-shi, Kyoto 604-8511 (Japan); Murayama, Hideo [Molecular Imaging Center, National Institute of Radiological Sciences, 4-9-1 Anagawa, Inage-ku, Chiba 263-8555 (Japan)

    2010-09-21

    Avalanche photodiodes (APDs) are being used as photodetectors in positron emission tomography (PET) because they have many advantages over the photomultiplier tubes (PMTs) typically used in PET detectors. We have developed a PET detector that consists of a multi-pixel APD and a 6 × 6 × 4 array of 1.46 × 1.46 mm² × 4.5 mm LYSO crystals for a small animal PET scanner. The detector can identify four-layer depth of interaction (DOI) with a position-sensitive APD coupled to the backside of a crystal array, using only an optimized reflector arrangement. Since scintillation light is shared among many pixels by this method, the weaker signals in APD pixels far from the interacting crystal are affected by noise. To evaluate the performance of the four-layer DOI detector with the APD and the influence of electrical noise on our method, we constructed a prototype DOI detector and tested its performance. We found that, except for crystal elements on the edge of the crystal array, all crystal elements could be identified from the 2D position histogram. An energy resolution of 16.9% was obtained for the whole crystal array of the APD detector. The noise dependence of the detector performance indicated that the DOI detector using the APD could achieve sufficient performance even when using application-specific integrated circuits.

  5. Depth of interaction in scintillation crystal by light collimation for the design of "universal nuclear medicine imager"

    International Nuclear Information System (INIS)

    Bosnjakovic, V.B.

    1996-01-01

    The detector head for a dual-headed "Universal Nuclear Medicine Imager" (UNMI) is designed. It is based on position-sensitive (PS) area detectors fitted with a thick (0.75 inch) NaI(Tl) crystal. In order to make it "universal", a means for determining the depth of interaction (DOI) in the crystal is devised. The crystal thickness is divided into two layers, each 3/8 inch thick, by insertion of a thin (0.03 inch) quartz (glass) layer; these layers, added to the external lower and upper surfaces of the crystal as well, form two internal "light collimator" (LC) channels by their reflective (specular) polished couplings. These DOI channels, according to the law of light refraction/reflection, direct the light reflected from the quartz layers (as exceeding a critical angle) through the NaI(Tl) layers to the external parts of the LC system; these are connected via quartz optical cables to the DOI-determining PM tubes. The DOI electronics, including a coincidence circuit connecting signals from the conventional PS and DOI PM tube arrays, identifies a DOI layer and shifts the x, y address to the particular DOI memory stack. The UNMI design is likely to retain the spatial resolution of PS detectors (gamma cameras) for low-energy single-photon emitters and to improve the spatial resolution in PET studies done with PS detectors fitted with thick NaI crystals, enabling a proper correction for parallax error.

  6. Development and evaluation of a practical method to measure the Depth of Interaction function for a single side readout PET detector

    International Nuclear Information System (INIS)

    Stringhini, G.; Auffray, E.; Pizzichemi, M.; Ghezzi, A.; Paganoni, M.; Stojkovic, A.

    2016-01-01

    In small animal and organ-dedicated PET scanners, knowledge of the depth of interaction (DOI) of the gamma ray along the main axis of the scintillator is fundamental information for avoiding parallax error and achieving high performance in terms of spatial resolution. Recently we developed a new method to obtain the DOI function for a single-side readout PET module, recirculating the scintillation light in the matrix by means of a mirror placed on top of the module. In a complete PET scanner, periodic DOI calibrations have to be performed to prevent time-dependent miscalibrations and performance degradation. The current DOI calibration relies on a coincidence system between the module and an external scintillator to provide the DOI information a priori, and it is clearly not feasible in a real system without impractical disassembly of the scanner. In this paper we instead develop a fast and precise calibration method based on uniform irradiation of the scintillators. Three irradiation modalities are presented: one where the source is placed on top of the module, one with the source placed on one side of the module, and one that exploits the internal radioactivity of the scintillator. The three procedures are evaluated and the calibration method is validated by comparison with the information provided by the coincidence setup.

  7. Design and simulation of a novel method for determining depth-of-interaction in a PET scintillation crystal array using a single-ended readout by a multi-anode PMT

    International Nuclear Information System (INIS)

    Ito, Mikiko; Sim, Kwang-Souk; Lee, Jae Sung; Park, Min-Jae; Hong, Seong Jong

    2010-01-01

    PET detectors with depth-of-interaction (DOI) encoding capability allow high spatial resolution and high sensitivity to be achieved simultaneously. To obtain DOI information from a mono-layer array of scintillation crystals using a single-ended readout, the authors devised a method based on light spreading within a crystal array and performed Monte Carlo simulations with individual scintillation photon tracking to prove the concept. A scintillation crystal array model was constructed using a grid method. Conventional grids are constructed using comb-shaped reflector strips with rectangular teeth to isolate scintillation crystals optically. However, the authors propose the use of triangularly shaped teeth, such that scintillation photons spread only in the x-direction in the upper halves of crystals and in the y-direction in the lower halves. DOI positions can be estimated by considering the extent of two-dimensional light dispersion, which can be determined from the multiple anode outputs of a position-sensitive PMT placed under the crystal array. In the main simulation, a crystal block consisting of a 29 × 29 array of 1.5 mm × 1.5 mm × 20 mm crystals and a multi-anode PMT with 16 × 16 pixels were used. The effects of crystal size and non-uniform PMT output gain were also explored by simulation. The DOI resolution estimated for 1.5 × 1.5 × 20 mm³ crystals was 2.16 mm on average. Although the flood map was depth dependent, each crystal was well identified at all depths when a corner of the crystal array was irradiated with 511 keV gamma rays (peak-to-valley ratio ∼9:1). DOI resolution was better than 3 mm up to a crystal length of 28 mm with a 1.5 × 1.5 mm² or 2.0 × 2.0 mm² crystal surface area. The devised light-sharing method allowed excellent DOI resolutions to be obtained without the use of dual-ended readout or multiple crystal arrays.
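
    The depth estimate from directional light spread can be sketched as follows (a toy model assuming the triangular-teeth scheme above; the linear spread-to-depth mapping and all names are illustrative, not the authors' estimator):

```python
import numpy as np

def estimate_doi(anode, crystal_len=20.0):
    """Toy DOI estimate from directional light spread on a multi-anode PMT.

    In the triangular-teeth scheme, light from interactions in the upper half
    of the crystal spreads in x, light from the lower half spreads in y, so
    the balance of x-spread vs y-spread is taken here as a monotonic depth
    proxy (linear mapping assumed).  anode: 2-D charge array (rows=y, cols=x).
    """
    q = np.asarray(anode, float)
    xs, ys = q.sum(axis=0), q.sum(axis=1)   # marginal charge profiles

    def spread(p):
        i = np.arange(len(p))
        mu = (p * i).sum() / p.sum()
        return np.sqrt((p * (i - mu) ** 2).sum() / p.sum())

    sx, sy = spread(xs), spread(ys)
    # 0 -> pure y-spread (bottom of crystal), crystal_len -> pure x-spread (top)
    return sx / (sx + sy) * crystal_len
```

An event whose light is dispersed only along x is assigned the full crystal length; an event with equal x and y dispersion sits at mid-depth.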

  8. Displacement encoder

    International Nuclear Information System (INIS)

    Hesketh, T.G.

    1983-01-01

    In an optical encoder, light from an optical fibre input A is encoded by means of the encoding disc and is subsequently collected for transmission via optical fibre B. At some point in the optical path between the fibres A and B, the light is separated into component form by means of a filtering or dispersive system and each colour component is associated with a respective one of the coding channels of the disc. In this way, the significance of each bit of the coded information is represented by a respective colour thereby enabling the components to be re-combined for transmission by the fibre B without loss of information. (author)
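
    The colour-per-channel idea can be sketched in a few lines: each bit of the position word is carried by its own wavelength, so combining all transmitted wavelengths onto one output fibre loses nothing. (The Gray code is a common choice for encoder discs but is my assumption here, as are the wavelength values and function names.)

```python
WAVELENGTHS_NM = [450, 500, 550, 600, 650]   # one colour per coding channel (assumed)

def gray(n):
    """Binary-reflected Gray code, often used on encoder discs."""
    return n ^ (n >> 1)

def encode(position, nbits=5):
    """Set of wavelengths passed by the disc at this position (order-free)."""
    g = gray(position)
    return {WAVELENGTHS_NM[b] for b in range(nbits) if (g >> b) & 1}

def decode(wavelengths):
    """Recover the position after the colours are separated again at the far end."""
    g = sum(1 << WAVELENGTHS_NM.index(w) for w in wavelengths)
    n = 0
    while g:                                  # invert the Gray code
        n ^= g
        g >>= 1
    return n
```

Because each bit's significance is tied to a colour rather than to a spatial channel, `decode(encode(p))` recovers every 5-bit position from the unordered mix of wavelengths.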

  9. Depth of interaction and bias voltage dependence of the spectral response in a pixellated CdTe detector operating in time-over-threshold mode subjected to monochromatic X-rays

    Science.gov (United States)

    Fröjdh, E.; Fröjdh, C.; Gimenez, E. N.; Maneuski, D.; Marchal, J.; Norlin, B.; O'Shea, V.; Stewart, G.; Wilhelm, H.; Modh Zain, R.; Thungström, G.

    2012-03-01

    High stopping power is one of the most important figures of merit for X-ray detectors. CdTe is a promising material but suffers from material defects, non-ideal charge transport and long-range X-ray fluorescence. These factors reduce the image quality and deteriorate the spectral information. In this project we used a monochromatic pencil beam collimated through a 20 μm pinhole to measure the detector spectral response as a function of the depth of interaction. The sensor was a 1 mm thick CdTe detector with a pixel pitch of 110 μm, bump-bonded to a Timepix readout chip operating in Time-over-Threshold (ToT) mode. The measurements were carried out at the Extreme Conditions beamline I15 of the Diamond Light Source. The beam entered the sensor at an angle of ~20 degrees to the surface and then passed through ~25 pixels before leaving through the bottom of the sensor. The photon energy was tuned to 77 keV, giving a variation in the beam intensity of about three orders of magnitude along the beam path. Spectra in ToT mode were recorded showing each individual interaction. The bias voltage was varied between -30 V and -300 V to investigate how the electric field affected the spectral information. For this setup it is worth noting the large impact of fluorescence: at -300 V the photopeak and escape peak are of similar height. For high bias voltages the spectra remain clear throughout the whole depth, but for lower voltages such as -50 V, only the bottom part of the sensor carries spectral information. This is an effect of the low hole mobility and the longer distance the electrons have to travel in a low field.

  10. Design studies of a depth encoding large aperture PET camera

    International Nuclear Information System (INIS)

    Moisan, C.; Rogers, J.G.; Buckley, K.R.; Ruth, T.J.; Stazyk, M.W.; Tsang, G.

    1994-10-01

    The feasibility of a whole-body PET tomograph with the capacity to correct for the parallax error induced by the depth-of-interaction of γ-rays is assessed through simulation. The experimental energy, depth, and transverse position resolutions of BGO block detector candidates are the main inputs to a simulation that predicts the point-source resolution of the Depth Encoding Large Aperture Camera (DELAC). The results indicate that a measured depth resolution of 7 mm (FWHM) is sufficient to correct a substantial part of the parallax error for a point source at the edge of the Field-Of-View. A search for the block specifications and camera ring radius that would optimize the spatial resolution and its uniformity across the Field-Of-View is also presented. (author). 10 refs., 1 tab., 5 figs
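
    The parallax error such a design corrects can be approximated with elementary geometry (a hedged sketch: the L·sin(α) smearing model and the example dimensions are illustrative, with only the 7 mm depth resolution taken from the abstract):

```python
import math

def parallax_blur(source_radius, ring_radius, depth_uncertainty):
    """Approximate transverse smearing of a line of response.

    A gamma hitting the crystal face at incidence angle alpha may interact
    anywhere within the depth uncertainty L, displacing the LOR by up to
    L * sin(alpha).  alpha follows from the source's radial offset.
    Illustrative small-angle model, not the paper's simulation.
    """
    alpha = math.asin(source_radius / ring_radius)
    return depth_uncertainty * math.sin(alpha)

# Source 200 mm off-centre in a 400 mm radius ring (assumed numbers):
no_doi  = parallax_blur(200, 400, 30.0)   # whole 30 mm crystal is uncertain
with_doi = parallax_blur(200, 400, 7.0)   # 7 mm FWHM depth resolution
print(no_doi, with_doi)
```

With these assumed dimensions the residual smearing drops from about 15 mm to about 3.5 mm once the depth is known to 7 mm, which is the sense in which a modest DOI resolution recovers most of the parallax error.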

  11. Landscape encodings enhance optimization.

    Directory of Open Access Journals (Sweden)

    Konstantin Klemm

    Hard combinatorial optimization problems deal with the search for the minimum cost solutions (ground states) of discrete systems under strong constraints. A transformation of state variables may enhance computational tractability. It has been argued that these state encodings are to be chosen invertible to retain the original size of the state space. Here we show how redundant non-invertible encodings enhance optimization by enriching the density of low-energy states. In addition, smooth landscapes may be established on encoded state spaces to guide local search dynamics towards the ground state.

  12. Landscape Encodings Enhance Optimization

    Science.gov (United States)

    Klemm, Konstantin; Mehta, Anita; Stadler, Peter F.

    2012-01-01

    Hard combinatorial optimization problems deal with the search for the minimum cost solutions (ground states) of discrete systems under strong constraints. A transformation of state variables may enhance computational tractability. It has been argued that these state encodings are to be chosen invertible to retain the original size of the state space. Here we show how redundant non-invertible encodings enhance optimization by enriching the density of low-energy states. In addition, smooth landscapes may be established on encoded state spaces to guide local search dynamics towards the ground state. PMID:22496860
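
    A toy illustration of how a redundant, non-invertible encoding can enrich the density of low-energy states (the AND-pair decoder and the bit-counting energy below are my own minimal example, not the paper's construction):

```python
from itertools import product

def energy(bits):
    """Toy landscape: energy = number of 1-bits; ground state is all zeros."""
    return sum(bits)

def decode(enc):
    """Redundant, non-invertible encoding: each logical bit is the AND of a
    pair of encoded bits, so 3 of the 4 pair values decode to 0."""
    return tuple(enc[2 * i] & enc[2 * i + 1] for i in range(len(enc) // 2))

def ground_fraction(states, decoder=None):
    vals = [energy(decoder(s) if decoder else s) for s in states]
    return vals.count(0) / len(vals)

orig = list(product([0, 1], repeat=3))   # original 3-bit state space
enc = list(product([0, 1], repeat=6))    # redundant 6-bit encoded space
print(ground_fraction(orig), ground_fraction(enc, decode))
```

In the original space 1/8 of the states are ground states; in the encoded space the fraction rises to (3/4)³ = 27/64, so a random-restart local search stumbles on a ground state far more often.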

  13. Blind encoding into qudits

    International Nuclear Information System (INIS)

    Shaari, J.S.; Wahiddin, M.R.B.; Mancini, S.

    2008-01-01

    We consider the problem of encoding classical information into unknown qudit states belonging to any basis, of a maximal set of mutually unbiased bases, by one party and then decoding by another party who has perfect knowledge of the basis. Working with qudits of prime dimensions, we point out a no-go theorem that forbids 'shift' operations on arbitrary unknown states. We then provide the necessary conditions for reliable encoding/decoding
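
    The "shift" operations in question act on qudits as the generalized Pauli (Weyl) operators; a small numerical sketch (the prime dimension d = 5 is an arbitrary choice, and this illustrates only the operators themselves, not the no-go argument):

```python
import numpy as np

d = 5                                         # a prime qudit dimension (arbitrary)
omega = np.exp(2j * np.pi / d)                # primitive d-th root of unity

X = np.roll(np.eye(d), 1, axis=0)             # shift:  X|j> = |j+1 mod d>
Z = np.diag(omega ** np.arange(d))            # clock:  Z|j> = omega^j |j>

# Weyl commutation relation: Z X = omega X Z
print(np.allclose(Z @ X, omega * X @ Z))      # True

# shifting a computational-basis state
e0 = np.eye(d)[:, 0]
print(np.allclose(X @ e0, np.eye(d)[:, 1]))   # True
```

Powers X^a Z^b of these two operators generate the shifts among the mutually unbiased bases that the encoding scheme works with.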

  14. An encoding device and a method of encoding

    DEFF Research Database (Denmark)

    2012-01-01

    The present invention relates to an encoding device, such as an optical position encoder, for encoding input from an object, and a method for encoding input from an object, for determining a position of an object that interferes with light of the device. The encoding device comprises a light source...... in the area in the space and may interfere with the light, which interference may be encoded into a position or activation....

  15. A permutation encoding genetic algorithm solution of resource-constrained project scheduling

    African Journals Online (AJOL)

    eobe

    Keywords: genetic algorithm, resource constrained. Nigerian Journal of Technology, Vol. 34, No. 1, January 2015.

  16. A Monte Carlo study of the acceptance to scattered events in a depth encoding PET camera

    International Nuclear Information System (INIS)

    Moisan, C.; Tupper, P.; Rogers, J.G.; DeJong, J.K.

    1995-10-01

    We present a Monte Carlo study of acceptance to scattered events in a Depth Encoding Large Aperture Camera (DELAC), a hypothetical PET scanner with the capacity to encode the depth-of-interaction (DOI) of incident γ-rays. The simulation is initially validated against the measured energy resolution and scatter fraction of the ECAT-953B scanner. It is then used to assess the response to scattered events in a PET camera made of position encoding blocks of the EXACT HR PLUS type, modified to have DOI resolution through a variation in the photopeak pulse height. The detection efficiency for 511 keV γ-rays, as well as for those that scattered in the object or left only part of their energy in the block, is studied for several combinations of DOI sensitivities and block thicknesses. The scatter fraction predicted by the simulation for DELACs of various ring radii is compared to that of the ECAT-953B as a function of the energy threshold. The results indicate that the poorer discrimination of object scatters with depth sensitive blocks does not lead to a dramatic increase of the scatter fraction. (author). 10 refs., 1 tab., 5 figs
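
    A toy version of such a study can be sketched in a few lines: blur true and scattered energies with the detector resolution and count what survives an energy threshold. (All distributions and numbers below are crude illustrative stand-ins, not the ECAT-953B or EXACT HR PLUS models.)

```python
import random

def simulate_scatter_fraction(threshold_kev, n=100_000, scatter_prob=0.4,
                              resolution_fwhm=0.20, seed=1):
    """Toy Monte Carlo of the accepted scatter fraction above a threshold.

    True events deposit 511 keV; object-scattered events draw an energy
    uniformly in 200-450 keV (a crude stand-in for the Compton continuum).
    Both are blurred by a Gaussian detector energy resolution, and the
    scatter fraction is scatters/(scatters+trues) among accepted events.
    """
    rng = random.Random(seed)
    sigma = resolution_fwhm * 511 / 2.355     # FWHM -> Gaussian sigma
    acc_true = acc_scat = 0
    for _ in range(n):
        if rng.random() < scatter_prob:
            e = rng.uniform(200, 450)
            if rng.gauss(e, sigma) > threshold_kev:
                acc_scat += 1
        else:
            if rng.gauss(511, sigma) > threshold_kev:
                acc_true += 1
    return acc_scat / (acc_scat + acc_true)

print(simulate_scatter_fraction(250), simulate_scatter_fraction(450))
```

Raising the threshold cuts the accepted scatter fraction sharply, which is the trade-off such studies quantify against the loss of photopeak events.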

  17. Parallel encoders for pixel detectors

    International Nuclear Information System (INIS)

    Nikityuk, N.M.

    1991-01-01

    A new method of fast encoding and determining the multiplicity and coordinates of fired pixels is described. A specific example construction of parallel encoders and MCC for n=49 and t=2 is given. 16 refs.; 6 figs.; 2 tabs
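
    What such an encoder computes can be stated compactly in software (a sequential sketch of the logic; the point of the hardware design is to produce the same outputs combinationally, in parallel):

```python
def encode_fired(pixels):
    """Return the multiplicity and coordinates of fired pixels in a
    2-D binary frame (rows of 0/1 values)."""
    coords = [(r, c) for r, row in enumerate(pixels)
                     for c, v in enumerate(row) if v]
    return len(coords), coords

frame = [[0, 1, 0],
         [0, 0, 0],
         [1, 0, 0]]
print(encode_fired(frame))   # (2, [(0, 1), (2, 0)])
```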

  18. Selecting Operations for Assembler Encoding

    Directory of Open Access Journals (Sweden)

    Tomasz Praczyk

    2010-04-01

    Assembler Encoding is a neuro-evolutionary method in which a neural network is represented in the form of a simple program called an Assembler Encoding Program. The task of the program is to create the so-called Network Definition Matrix, which maintains all the information necessary to construct the network. To generate Assembler Encoding Programs and the subsequent neural networks, evolutionary techniques are used.
    The performance of Assembler Encoding strongly depends on operations used in Assembler Encoding Programs. To select the most effective operations, experiments in the optimization and the predator-prey problem were carried out. In the experiments, Assembler Encoding Programs equipped with different types of operations were tested. The results of the tests are presented at the end of the paper.

  19. Analysing and Comparing Encodability Criteria

    Directory of Open Access Journals (Sweden)

    Kirstin Peters

    2015-08-01

    Encodings, or the proof of their absence, are the main way to compare process calculi. To analyse the quality of encodings and to rule out trivial or meaningless encodings, they are augmented with quality criteria. There exist many different criteria, and different variants of criteria, for reasoning in different settings. This leads to incomparable results. Moreover, it is not always clear whether the criteria used to obtain a result in a particular setting do indeed fit that setting. We show how to formally reason about and compare encodability criteria by mapping them onto requirements on a relation between source and target terms that is induced by the encoding function. In particular we analyse the common criteria full abstraction, operational correspondence, divergence reflection, success sensitiveness, and respect of barbs; e.g. we analyse the exact nature of the simulation relation (coupled simulation versus bisimulation) that is induced by different variants of operational correspondence. This way we reduce the problem of analysing or comparing encodability criteria to the better understood problem of comparing relations on processes.

  20. Continuous Non-malleable Codes

    DEFF Research Database (Denmark)

    Faust, Sebastian; Mukherjee, Pratyay; Nielsen, Jesper Buus

    2014-01-01

    or modify it to the encoding of a completely unrelated value. This paper introduces an extension of the standard non-malleability security notion - so-called continuous non-malleability - where we allow the adversary to tamper continuously with an encoding. This is in contrast to the standard notion of non...... is necessary to achieve continuous non-malleability in the split-state model. Moreover, we illustrate that none of the existing constructions satisfies our uniqueness property and hence is not secure in the continuous setting. We construct a split-state code satisfying continuous non-malleability. Our scheme...... is based on the inner product function, collision-resistant hashing and non-interactive zero-knowledge proofs of knowledge and requires an untamperable common reference string. We apply continuous non-malleable codes to protect arbitrary cryptographic primitives against tampering attacks. Previous...

  1. Correcting false information in memory: manipulating the strength of misinformation encoding and its retraction.

    Science.gov (United States)

    Ecker, Ullrich K H; Lewandowsky, Stephan; Swire, Briony; Chang, Darren

    2011-06-01

    Information that is presumed to be true at encoding but later on turns out to be false (i.e., misinformation) often continues to influence memory and reasoning. In the present study, we investigated how the strength of encoding and the strength of a later retraction of the misinformation affect this continued influence effect. Participants read an event report containing misinformation and a subsequent correction. Encoding strength of the misinformation and correction were orthogonally manipulated either via repetition (Experiment 1) or by imposing a cognitive load during reading (Experiment 2). Results suggest that stronger retractions are effective in reducing the continued influence effects associated with strong misinformation encoding, but that even strong retractions fail to eliminate continued influence effects associated with relatively weak encoding. We present a simple computational model based on random sampling that captures this effect pattern, and conclude that the continued influence effect seems to defy most attempts to eliminate it.
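
    The abstract describes the computational model only as "based on random sampling"; a hypothetical toy model in the same spirit (the majority rule, trace counts and parameter names are all my assumptions, not the authors' model) reproduces the qualitative pattern:

```python
import random

def p_misinformation(enc_strength, retraction_strength,
                     n_samples=5, trials=20_000, seed=7):
    """Toy sampling model of the continued influence effect.

    Memory holds enc_strength misinformation traces, retraction_strength
    retraction traces, and one neutral trace.  A judgment samples n_samples
    traces with replacement and reports the misinformation whenever it
    outnumbers the retraction in the sample.  Illustrative only.
    """
    rng = random.Random(seed)
    pool = ['mis'] * enc_strength + ['ret'] * retraction_strength + ['neu']
    hits = 0
    for _ in range(trials):
        s = [rng.choice(pool) for _ in range(n_samples)]
        if s.count('mis') > s.count('ret'):
            hits += 1
    return hits / trials
```

In this toy, a strong retraction markedly lowers reliance after strong encoding, yet even a retraction stronger than a weakly encoded piece of misinformation leaves a non-zero residual, mirroring the effect pattern reported above.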

  2. High resolution detectors based on continuous crystals and SiPMs for small animal PET

    International Nuclear Information System (INIS)

    Cabello, J.; Barrillon, P.; Barrio, J.; Bisogni, M.G.; Del Guerra, A.; Lacasta, C.; Rafecas, M.; Saikouk, H.; Solaz, C.; Solevi, P.; La Taille, C. de; Llosá, G.

    2013-01-01

    Sensitivity and spatial resolution are the two main factors to maximize in emission imaging; with pixelated crystals, improving one factor degrades the other. In this work we combine SiPM matrices with monolithic crystals, using an accurate γ-ray interaction position determination algorithm that provides depth of interaction. Continuous crystals provide higher sensitivity than pixelated crystals, while accurate interaction position determination does not degrade the spatial resolution. Monte Carlo simulations and experimental data show good agreement, both demonstrating sub-millimetre intrinsic spatial resolution. A system consisting of two rotating detectors in coincidence is currently in operation and already producing tomographic images.
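
    The abstract does not spell out the position algorithm; a minimal centroid (Anger-type) estimate with an RMS-width depth proxy, a common approach for monolithic crystals, can be sketched (the function name, pitch value and the width-as-DOI assumption are mine, not the authors'):

```python
import numpy as np

def estimate_interaction(charges, pitch=3.0):
    """Centroid of the light distribution on an SiPM matrix coupled to a
    monolithic crystal, plus its RMS width.  The width grows with the
    distance of the interaction from the photodetector, so it is commonly
    used as a depth-of-interaction proxy.  pitch: SiPM pitch in mm (assumed).
    """
    q = np.asarray(charges, float)
    ys, xs = np.indices(q.shape)
    w = q / q.sum()
    cx = (w * xs).sum() * pitch
    cy = (w * ys).sum() * pitch
    width = np.sqrt((w * ((xs * pitch - cx) ** 2
                          + (ys * pitch - cy) ** 2)).sum())
    return cx, cy, width
```

A broader light distribution yields a larger RMS width and hence an interaction estimated farther from the sensor plane.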

  3. Multidimensionally encoded magnetic resonance imaging.

    Science.gov (United States)

    Lin, Fa-Hsuan

    2013-07-01

    Magnetic resonance imaging (MRI) typically achieves spatial encoding by measuring the projection of a q-dimensional object over q-dimensional spatial bases created by linear spatial encoding magnetic fields (SEMs). Recently, imaging strategies using nonlinear SEMs have demonstrated potential advantages for reconstructing images with higher spatiotemporal resolution and reducing peripheral nerve stimulation. In practice, nonlinear SEMs and linear SEMs can be used jointly to further improve the image reconstruction performance. Here, we propose the multidimensionally encoded (MDE) MRI to map a q-dimensional object onto a p-dimensional encoding space where p > q. MDE MRI is a theoretical framework linking imaging strategies using linear and nonlinear SEMs. Using a system of eight surface SEM coils with an eight-channel radiofrequency coil array, we demonstrate the five-dimensional MDE MRI for a two-dimensional object as a further generalization of PatLoc imaging and O-space imaging. We also present a method of optimizing spatial bases in MDE MRI. Results show that MDE MRI with a higher dimensional encoding space can reconstruct images more efficiently and with a smaller reconstruction error when the k-space sampling distribution and the number of samples are controlled. Copyright © 2012 Wiley Periodicals, Inc.
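
    The encode-then-invert idea can be sketched for a 1-D object: stack projections over several encoding "fields" (one linear, one nonlinear, loosely standing in for linear and nonlinear SEMs) into a single matrix and reconstruct by least squares. Field shapes, sizes and sampling below are illustrative, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 16
obj = rng.random(n)                          # 1-D "object"
x = np.linspace(0.0, 1.0, n, endpoint=False)

# Encoding fields: a linear SEM (x) plus a nonlinear SEM (x**2); each field is
# sampled at n spatial frequencies, giving more measurements than unknowns.
fields = [x, x ** 2]
E = np.array([np.exp(-2j * np.pi * k * f) for f in fields for k in range(n)])

sig = E @ obj                                # simulated projections
recon = np.linalg.lstsq(E, sig, rcond=None)[0].real

print(np.allclose(recon, obj, atol=1e-8))    # joint encoding is invertible here
```

The linear-field rows alone already form a DFT of the object; the nonlinear-field rows add redundant measurements, which is the regime in which the paper reports more efficient reconstruction with smaller error.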

  4. Performance of a DOI-encoding small animal PET system with monolithic scintillators

    International Nuclear Information System (INIS)

    Carles, M.; Lerche, Ch.W.; Sánchez, F.; Orero, A.; Moliner, L.; Soriano, A.; Benlloch, J.M.

    2012-01-01

    PET systems designed for specific applications require high-resolution and high-sensitivity instrumentation. In dedicated system designs, smaller ring diameters and deeper crystals are widely used in order to increase the system sensitivity. However, this design increases the parallax error, which degrades the spatial image resolution gradually from the center to the edge of the field-of-view (FOV). Our group has designed a depth-of-interaction (DOI) encoding small animal PET system based on monolithic crystals. In this work we investigate the restoration of radial resolution for transaxially off-center sources using the DOI information provided by our system. For this purpose we have designed a support for point-like sources, adapted to our system geometry, that allows a study of the spatial compression and resolution response. For different point-source radial positions along the vertical and horizontal axes of a FOV transaxial plane, we compare the results obtained by three methods: without DOI information, with the DOI provided by our system, and with the assumption that all γ-rays interact at half the crystal thickness. Results show an improvement of the mean resolution of 10% with the half-thickness assumption and of 16% using the DOI provided by the system. Furthermore, a 10% restoration of the resolution uniformity is obtained using the half-depth assumption and an 18% restoration using the measured DOI.

  5. Virally encoded 7TM receptors

    DEFF Research Database (Denmark)

    Rosenkilde, M M; Waldhoer, M; Lüttichau, H R

    2001-01-01

    A number of herpes- and poxviruses encode 7TM G-protein coupled receptors, most of which clearly are derived from their host chemokine system, as well as induce high expression of certain 7TM receptors in the infected cells. The receptors appear to be exploited by the virus for either immune evasion... Expression of this single gene in certain lymphocyte cell lineages leads to the development of lesions which are remarkably similar to Kaposi's sarcoma, a human herpesvirus 8 associated disease. Thus, this and other virally encoded 7TM receptors appear to be attractive future drug targets.

  6. Continuous auditing & continuous monitoring : Continuous value?

    NARCIS (Netherlands)

    van Hillo, Rutger; Weigand, Hans; Espana, S; Ralyte, J; Souveyet, C

    2016-01-01

    Advancements in information technology, new laws and regulations and rapidly changing business conditions have led to a need for more timely and ongoing assurance with effectively working controls. Continuous Auditing (CA) and Continuous Monitoring (CM) technologies have made this possible by

  7. Encoding information into precipitation structures

    International Nuclear Information System (INIS)

    Martens, Kirsten; Bena, Ioana; Droz, Michel; Rácz, Zoltan

    2008-01-01

    Material design at submicron scales would be profoundly affected if the formation of precipitation patterns could be easily controlled. It would allow the direct building of bulk structures, in contrast to traditional techniques which consist of removing material in order to create patterns. Here, we discuss an extension of our recent proposal of using electrical currents to control precipitation bands which emerge in the wake of reaction fronts in A⁺ + B⁻ → C reaction–diffusion processes. Our main result, based on simulating the reaction–diffusion–precipitation equations, is that the dynamics of the charged agents can be guided by an appropriately designed time-dependent electric current so that, in addition to the control of the band spacing, the width of the precipitation bands can also be tuned. This makes straightforward the encoding of information into precipitation patterns and, as an amusing example, we demonstrate the feasibility by showing how to encode a musical rhythm.
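
    The band-forming A⁺ + B⁻ → C dynamics can be illustrated with a minimal 1-D explicit-Euler sketch of the neutral reaction-diffusion core. The electric-field drift of the charged agents and the precipitation threshold for C, which are central to the paper's control scheme, are omitted here, and all rate and grid parameters are arbitrary.

```python
def rd_step(a, b, c, D=0.1, k=1.0, dt=0.1, dx=1.0):
    # One explicit Euler step of A + B -> C in 1-D with no-flux boundaries;
    # C is treated as an immobile reaction product.
    n = len(a)
    def lap(u, i):
        left = u[i - 1] if i > 0 else u[i]       # mirrored (no-flux) boundary
        right = u[i + 1] if i < n - 1 else u[i]
        return (left - 2.0 * u[i] + right) / dx ** 2
    rate = [k * a[i] * b[i] for i in range(n)]   # local reaction rate
    a_new = [a[i] + dt * (D * lap(a, i) - rate[i]) for i in range(n)]
    b_new = [b[i] + dt * (D * lap(b, i) - rate[i]) for i in range(n)]
    c_new = [c[i] + dt * rate[i] for i in range(n)]
    return a_new, b_new, c_new
```

    Initializing A on the left and B on the right produces a reaction front advancing through the gap, which is where the precipitation bands form in the full model.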

  8. Continuous Problem of Function Continuity

    Science.gov (United States)

    Jayakody, Gaya; Zazkis, Rina

    2015-01-01

    We examine different definitions presented in textbooks and other mathematical sources for "continuity of a function at a point" and "continuous function" in the context of introductory level Calculus. We then identify problematic issues related to definitions of continuity and discontinuity: inconsistency and absence of…

  9. Hall effect encoding of brushless dc motors

    Science.gov (United States)

    Berard, C. A.; Furia, T. J.; Goldberg, E. A.; Greene, R. C.

    1970-01-01

    Encoding mechanism integral to the motor and using the permanent magnets embedded in the rotor eliminates the need for external devices to encode information relating the position and velocity of the rotating member.

  10. Flipped-Adversarial AutoEncoders

    OpenAIRE

    Zhang, Jiyi; Dang, Hung; Lee, Hwee Kuan; Chang, Ee-Chien

    2018-01-01

    We propose a flipped-Adversarial AutoEncoder (FAAE) that simultaneously trains a generative model G that maps an arbitrary latent code distribution to a data distribution and an encoder E that embodies an "inverse mapping" that encodes a data sample into a latent code vector. Unlike previous hybrid approaches that leverage adversarial training criterion in constructing autoencoders, FAAE minimizes re-encoding errors in the latent space and exploits adversarial criterion in the data space. Exp...

  11. The new INRIM rotating encoder angle comparator (REAC)

    International Nuclear Information System (INIS)

    Pisani, Marco; Astrua, Milena

    2017-01-01

    A novel angle comparator has been built and tested at INRIM. The device is based on a double air bearing structure embedding a continuously rotating encoder, which is read by two heads: one fixed to the base of the comparator and a second fixed to the upper moving part of the comparator. The phase measurement between the two heads' signals is proportional to the relative angle subtended between them (and, therefore, the angle between the base and the upper, movable part of the comparator). The advantage of this solution is to reduce the encoder graduation errors and to cancel the cyclic errors due to the interpolation of the encoder lines. By using only two pairs of reading heads, we have achieved an intrinsic accuracy of ±0.04″ (rectangular distribution) that can be reduced through self-calibration. The residual cyclic errors have been shown to be less than 0.01″ peak-to-peak. The random fluctuations are less than 0.01″ rms over a 100 s time interval. A further advantage of the rotating encoder is the intrinsic knowledge of the absolute position without the need for a zeroing procedure. Construction details of the rotating encoder angle comparator (REAC), characterization tests, and examples of practical use are given. (paper)
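
    The phase-to-angle principle can be sketched as follows; the line count and the handling of whole-pitch rollover are illustrative assumptions (one full signal cycle corresponds to one encoder line pitch, and whole pitches are tracked separately by counting line crossings).

```python
import math

def head_phase_to_angle(phase_fixed_rad, phase_moving_rad, n_lines):
    # Convert the phase difference between the fixed and moving read heads
    # into a mechanical angle in degrees. One signal cycle corresponds to
    # one line pitch of 360/n_lines degrees, so the result is the angle
    # modulo one pitch.
    dphi = (phase_moving_rad - phase_fixed_rad) % (2.0 * math.pi)
    return dphi / (2.0 * math.pi) * (360.0 / n_lines)
```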

  12. A practical block detector for a depth encoding PET camera

    International Nuclear Information System (INIS)

    Rogers, J.G.; Moisan, C.; Hoskinson, E.M.

    1995-10-01

    The depth-of-interaction effect in block detectors degrades the image resolution in commercial PET cameras and impedes the natural evolution of smaller, less expensive cameras. A method for correcting the measured position of each detected gamma ray by measuring its depth-of-interaction was tested and found to recover 38% of the lost resolution in a table-top 50 cm diameter camera. To obtain the desired depth sensitivity, standard commercial detectors were modified by a simple and practical process, which is suitable for mass production of the detectors. The impact of the detector modifications on central image resolution and on the ability of the camera to correct for object scatter was also measured. (authors)

  13. Business continuity

    International Nuclear Information System (INIS)

    Breunhoelder, Gert

    2002-01-01

    This presentation deals with the following key points: Information Technology (IT) business continuity and recovery, essential for any business; lessons learned after the Sept. 11 event; and detailed planning, redundancy and testing as the key elements for estimating the probability of disasters.

  14. Continuous tokamaks

    International Nuclear Information System (INIS)

    Peng, Y.K.M.

    1978-04-01

    A tokamak configuration is proposed that permits the rapid replacement of a plasma discharge in a "burn" chamber by another one in a time scale much shorter than the elementary thermal time constant of the chamber first wall. With respect to the chamber, the effective duty cycle factor can thus be made arbitrarily close to unity, minimizing the cyclic thermal stress in the first wall. At least one plasma discharge always exists in the new tokamak configuration, hence, a continuous tokamak. By incorporating adiabatic toroidal compression, configurations of continuous tokamak compressors are introduced. To operate continuous tokamaks, it is necessary to introduce the concept of mixed poloidal field coils, which spatially groups all the poloidal field coils into three sets, all contributing simultaneously to inducing the plasma current and maintaining the proper plasma shape and position. Preliminary numerical calculations of axisymmetric MHD equilibria in continuous tokamaks indicate the feasibility of their continued plasma operation. Advanced concepts of continuous tokamaks to reduce the topological complexity and to allow the burn plasma aspect ratio to decrease for increased beta are then suggested.

  15. Tagging, Encoding, and Jones Optimality

    DEFF Research Database (Denmark)

    Danvy, Olivier; Lopez, Pablo E. Martinez

    2003-01-01

    A partial evaluator is said to be Jones-optimal if the result of specializing a self-interpreter with respect to a source program is textually identical to the source program, modulo renaming. Jones optimality has already been obtained if the self-interpreter is untyped. If the self-interpreter is typed, however, residual programs are cluttered with type tags. To obtain the original source program, these tags must be removed. A number of sophisticated solutions have already been proposed. We observe, however, that with a simple representation shift, ordinary partial evaluation is already Jones-optimal, modulo an encoding. The representation shift amounts to reading the type tags as constructors for higher-order abstract syntax. We substantiate our observation by considering a typed self-interpreter whose input syntax is higher-order. Specializing this interpreter with respect to a source program yields...

  16. Continuous Dropout.

    Science.gov (United States)

    Shen, Xu; Tian, Xinmei; Liu, Tongliang; Xu, Fang; Tao, Dacheng

    2017-10-03

    Dropout has been proven to be an effective algorithm for training robust deep networks because of its ability to prevent overfitting by avoiding the co-adaptation of feature detectors. Current explanations of dropout include bagging, naive Bayes, regularization, and sex in evolution. According to the activation patterns of neurons in the human brain, when faced with different situations, the firing rates of neurons are random and continuous, not binary as current dropout does. Inspired by this phenomenon, we extend the traditional binary dropout to continuous dropout. On the one hand, continuous dropout is considerably closer to the activation characteristics of neurons in the human brain than traditional binary dropout. On the other hand, we demonstrate that continuous dropout has the property of avoiding the co-adaptation of feature detectors, which suggests that we can extract more independent feature detectors for model averaging in the test stage. We introduce the proposed continuous dropout to a feedforward neural network and comprehensively compare it with binary dropout, adaptive dropout, and DropConnect on Modified National Institute of Standards and Technology, Canadian Institute for Advanced Research-10, Street View House Numbers, NORB, and ImageNet large scale visual recognition competition-12. Thorough experiments demonstrate that our method performs better in preventing the co-adaptation of feature detectors and improves test performance.
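
    The binary-versus-continuous distinction can be sketched with a multiplicative random mask; the U(0, 1) mask and the rescaling convention below are assumptions for illustration (the paper's continuous masks need not be uniform).

```python
import random

def binary_dropout(x, p=0.5, rng=random):
    # Standard (inverted) binary dropout: each unit is zeroed with
    # probability p; survivors are rescaled so the expectation is unchanged.
    keep = 1.0 - p
    return [xi * (1.0 / keep if rng.random() < keep else 0.0) for xi in x]

def continuous_dropout(x, rng=random):
    # Continuous dropout: each unit is scaled by a random U(0, 1) mask
    # instead of a Bernoulli one; doubling preserves the expectation
    # since E[U] = 0.5.
    return [xi * 2.0 * rng.uniform(0.0, 1.0) for xi in x]
```

    The continuous mask mirrors the abstract's point that neuronal firing rates are random and graded rather than binary, while keeping the expected activation identical to the no-dropout network.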

  17. Emotional arousal and memory after deep encoding.

    Science.gov (United States)

    Leventon, Jacqueline S; Camacho, Gabriela L; Ramos Rojas, Maria D; Ruedas, Angelica

    2018-05-22

    Emotion often enhances long-term memory. One mechanism for this enhancement is heightened arousal during encoding. However, reducing arousal, via emotion regulation (ER) instructions, has not been associated with reduced memory. In fact, the opposite pattern has been observed: stronger memory for emotional stimuli encoded with an ER instruction to reduce arousal. This pattern may be due to deeper encoding required by ER instructions. In the current research, we examine the effects of emotional arousal and deep encoding on memory across three studies. In Study 1, adult participants completed a writing task (deep encoding) for encoding negative, neutral, and positive picture stimuli, whereby half the emotion stimuli had the ER instruction to reduce the emotion. Memory was strong across conditions, and no memory enhancement was observed for any condition. In Study 2, adult participants completed the same writing task as Study 1, as well as a shallow-encoding task for one-third of negative, neutral, and positive trials. Memory was strongest for deep vs. shallow encoding trials, with no effects of emotion or ER instruction. In Study 3, adult participants completed a shallow-encoding task for negative, neutral, and positive stimuli, with findings indicating enhanced memory for negative emotional stimuli. Findings suggest that deep encoding must be acknowledged as a source of memory enhancement when examining manipulations of emotion-related arousal. Copyright © 2018. Published by Elsevier B.V.

  18. Continuity theory

    CERN Document Server

    Nel, Louis

    2016-01-01

    This book presents a detailed, self-contained theory of continuous mappings. It is mainly addressed to students who have already studied these mappings in the setting of metric spaces, as well as multidimensional differential calculus. The needed background facts about sets, metric spaces and linear algebra are developed in detail, so as to provide a seamless transition between students' previous studies and new material. In view of its many novel features, this book will be of interest also to mature readers who have studied continuous mappings from the subject's classical texts and wish to become acquainted with a new approach. The theory of continuous mappings serves as infrastructure for more specialized mathematical theories like differential equations, integral equations, operator theory, dynamical systems, global analysis, topological groups, topological rings and many more. In light of the centrality of the topic, a book of this kind fits a variety of applications, especially those that contribute to ...

  19. SnoVault and encodeD: A novel object-based storage system and applications to ENCODE metadata.

    Directory of Open Access Journals (Sweden)

    Benjamin C Hitz

    Full Text Available The Encyclopedia of DNA elements (ENCODE) project is an ongoing collaborative effort to create a comprehensive catalog of functional elements, initiated shortly after the completion of the Human Genome Project. The current database exceeds 6500 experiments across more than 450 cell lines and tissues using a wide array of experimental techniques to study the chromatin structure, regulatory and transcriptional landscape of the H. sapiens and M. musculus genomes. All ENCODE experimental data, metadata, and associated computational analyses are submitted to the ENCODE Data Coordination Center (DCC) for validation, tracking, storage, unified processing, and distribution to community resources and the scientific community. As the volume of data increases, the identification and organization of experimental details becomes increasingly intricate and demands careful curation. The ENCODE DCC has created a general-purpose software system, known as SnoVault, that supports metadata and file submission, a database used for metadata storage, web pages for displaying the metadata and a robust API for querying the metadata. The software is fully open-source; code and installation instructions can be found at http://github.com/ENCODE-DCC/snovault/ (for the generic database) and http://github.com/ENCODE-DCC/encoded/ (to store genomic data in the manner of ENCODE). The core database engine, SnoVault (which is completely independent of ENCODE, genomic data, or bioinformatic data), has been released as a separate Python package.

  20. Simulation study of a depth-encoding positron emission tomography detector inserting horizontal-striped glass between crystal layers

    Science.gov (United States)

    Kim, Kyu Bom; Choi, Yong; Kang, Jihoon

    2017-10-01

    This study introduces a depth-encoding positron emission tomography (PET) detector that inserts horizontal-striped glass between pixelated scintillation crystal layers. This design allows light spreading so that scintillation photons can travel only along the X direction, and alters the light distribution so that it generates a unique pattern in the two-dimensional (2-D) flood histogram that identifies the depth position as well as the X-Y position of the γ-ray interaction. A Monte Carlo simulation was conducted to assess the depth-of-interaction (DOI) PET detector. The traced light distribution for each event was converted into the 2-D flood histogram. Light loss caused by inserting the horizontal-striped glass between the crystal layers was estimated. Applicable weighting factors were examined for each DOI-PET detector. No considerable light loss was observed. A flood histogram without overlap between crystal positions can be generated for the DOI detector based on each crystal block by inserting horizontal-striped glass with a thickness of >1 mm and using modified resistive charge-division networks with applicable weighting factors. This study demonstrated that the proposed DOI-PET detector can extract the three-dimensional γ-ray interaction position from the 2-D flood histogram without considerable degradation of PET detector performance.
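
    The 2-D flood histogram rests on a center-of-gravity position estimate over the readout channels. A generic sketch follows, in which the optional per-channel weights stand in for the paper's modified resistive charge-division weighting factors; the channel layout and weight values are illustrative, not the paper's network.

```python
def flood_position(energies, xs, ys, weights=None):
    # Center-of-gravity (Anger-logic) event position from per-channel
    # signals; optional per-channel weighting factors mimic a modified
    # resistive charge-division network.
    if weights is None:
        weights = [1.0] * len(energies)
    w = [e * wi for e, wi in zip(energies, weights)]
    total = sum(w)
    return (sum(wi * xi for wi, xi in zip(w, xs)) / total,
            sum(wi * yi for wi, yi in zip(w, ys)) / total)
```

    Altering the light distribution with the striped glass shifts these centroids with depth, which is what separates the DOI layers in the flood histogram.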

  1. Performance of a high-resolution depth-encoding PET detector module using linearly-graded SiPM arrays

    Science.gov (United States)

    Du, Junwei; Bai, Xiaowei; Gola, Alberto; Acerbi, Fabio; Ferri, Alessandro; Piemonte, Claudio; Yang, Yongfeng; Cherry, Simon R.

    2018-02-01

    The goal of this study was to exploit the excellent spatial resolution characteristics of a position-sensitive silicon photomultiplier (SiPM) and develop a high-resolution depth-of-interaction (DOI) encoding positron emission tomography (PET) detector module. The detector consists of a 30 × 30 array of 0.445 × 0.445 × 20 mm3 polished LYSO crystals coupled to two 15.5 × 15.5 mm2 linearly-graded SiPM (LG-SiPM) arrays at both ends. The flood histograms show that all the crystals in the LYSO array can be resolved. The energy resolution, the coincidence timing resolution and the DOI resolution were 21.8 ± 5.8%, 1.23 ± 0.10 ns and 3.8 ± 1.2 mm, respectively, at a temperature of -10 °C and a bias voltage of 35.0 V. The performance did not degrade significantly for event rates of up to 130 000 counts s-1. This detector represents an attractive option for small-bore PET scanner designs that simultaneously emphasize high spatial resolution and high detection efficiency, important, for example, in preclinical imaging of the rodent brain with neuroreceptor ligands.

  2. Continuation calculus

    NARCIS (Netherlands)

    Geron, B.; Geuvers, J.H.; de'Liguoro, U.; Saurin, A.

    2013-01-01

    Programs with control are usually modeled using lambda calculus extended with control operators. Instead of modifying lambda calculus, we consider a different model of computation. We introduce continuation calculus, or CC, a deterministic model of computation that is evaluated using only head

  3. NMDA receptors and memory encoding.

    Science.gov (United States)

    Morris, Richard G M

    2013-11-01

    It is humbling to think that 30 years have passed since the paper by Collingridge, Kehl and McLennan showing that one of Jeff Watkins' most interesting compounds, R-2-amino-5-phosphonopentanoate (D-AP5), blocked the induction of long-term potentiation in vitro at synapses from area CA3 of the hippocampus to CA1 without apparent effect on baseline synaptic transmission (Collingridge et al., 1983). This dissociation was one of the key triggers for an explosion of interest in glutamate receptors, and much has been discovered since that collectively contributes to our contemporary understanding of glutamatergic synapses - their biophysics and subunit composition, of the agonists and antagonists acting on them, and their diverse functions in different networks of the brain and spinal cord. It can be fairly said that Collingridge et al.'s (1983) observation was the stimulus that has led, on the one hand, to structural biological work at the atomic scale describing the key features of NMDA receptors that enable their coincidence function to happen; and, on the other, to work with whole animals investigating the contributions that calcium signalling via this receptor can have on rhythmical activities controlled by spinal circuits, memory encoding in the hippocampus (the topic of this article), visual cortical plasticity, sensitization in pain, and other functions. In this article, I lay out how my then interest in long-term potentiation (LTP) as a model of memory enabled me to recognise the importance of Collingridge et al.'s discovery - and how I and my colleagues endeavoured to take things forward in the area of learning and memory. This is in some respects a personal story, and I tell it as such. The idea that NMDA receptor activation is essential for memory encoding, though not for storage, took time to develop and to be accepted. Along the way, there have been confusions, challenges, and surprises surrounding the idea that activation of NMDA receptors can

  4. Minimal-memory realization of pearl-necklace encoders of general quantum convolutional codes

    International Nuclear Information System (INIS)

    Houshmand, Monireh; Hosseini-Khayat, Saied

    2011-01-01

    Quantum convolutional codes, like their classical counterparts, promise to offer higher error correction performance than block codes of equivalent encoding complexity, and are expected to find important applications in reliable quantum communication where a continuous stream of qubits is transmitted. Grassl and Roetteler devised an algorithm to encode a quantum convolutional code with a "pearl-necklace" encoder. Despite their algorithm's theoretical significance as a neat way of representing quantum convolutional codes, it is not well suited to practical realization. In fact, there is no straightforward way to implement any given pearl-necklace structure. This paper closes the gap between theoretical representation and practical implementation. In our previous work, we presented an efficient algorithm to find a minimal-memory realization of a pearl-necklace encoder for Calderbank-Shor-Steane (CSS) convolutional codes. This work is an extension of our previous work and presents an algorithm for turning a pearl-necklace encoder for a general (non-CSS) quantum convolutional code into a realizable quantum convolutional encoder. We show that a minimal-memory realization depends on the commutativity relations between the gate strings in the pearl-necklace encoder. We find a realization by means of a weighted graph which details the noncommutative paths through the pearl necklace. The weight of the longest path in this graph is equal to the minimal amount of memory needed to implement the encoder. The algorithm has a polynomial-time complexity in the number of gate strings in the pearl-necklace encoder.
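
    The minimal memory is thus the weight of the longest path in a weighted directed acyclic graph of noncommuting gate strings. A generic longest-path computation by topological order can be sketched as follows; the integer node labels and edge-list encoding are chosen for illustration, not taken from the paper.

```python
from collections import defaultdict, deque

def longest_path_weight(n, edges):
    # edges: list of (u, v, w) arcs in a DAG with nodes 0..n-1.
    # Returns the maximum total weight over all paths, via Kahn's
    # topological order with dynamic programming.
    adj = defaultdict(list)
    indeg = [0] * n
    for u, v, w in edges:
        adj[u].append((v, w))
        indeg[v] += 1
    dist = [0] * n
    q = deque(i for i in range(n) if indeg[i] == 0)
    while q:
        u = q.popleft()
        for v, w in adj[u]:
            dist[v] = max(dist[v], dist[u] + w)
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    return max(dist)
```

    In the paper's setting, the returned weight would be read off as the minimal number of memory qubits needed to realize the pearl-necklace encoder.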

  5. Encoder designed to work in harsh environments

    Energy Technology Data Exchange (ETDEWEB)

    Toop, L.

    2007-05-15

    Dynapar has developed the Acuro AX71 absolute encoder for use on offshore or land-based oil rig operations. It provides feedback on the operation of automated systems such as drawworks, racking systems, rotary tables and top drives. By ensuring that automated systems function properly, this encoder responds to a need by the oil and gas industry to keep workers safe and improve efficiency, particularly for operations in rugged situations. The encoder provides feedback from motor systems to controllers, giving information about the position and speed of downhole drill bits. This newly developed encoder improves on commonly used incremental encoders, which lose precision in environments with strong electrical noise. The absolute encoder instead uses a different method of reporting to the controller: a digital signal is transmitted constantly as the device operates, making it less susceptible to noise issues. It is highly accurate, tolerant of noise and is not affected by power outages. However, the absolute encoder is generally more delicate in drilling applications with high ambient temperatures and shock levels. Dynapar addressed this issue by developing a compact stainless steel housing that is useful for corrosion resistance in marine applications. The AX71 absolute encoder can withstand up to 100 G of mechanical shock and ambient temperatures of up to 60 degrees C. The encoder is ATEX certified without barriers, and offers high-resolution feedback of 4,000 counts of multiturn rotation and 16,000 counts of position. 1 fig.

  6. Continuation calculus

    Directory of Open Access Journals (Sweden)

    Bram Geron

    2013-09-01

    Full Text Available Programs with control are usually modeled using lambda calculus extended with control operators. Instead of modifying lambda calculus, we consider a different model of computation. We introduce continuation calculus, or CC, a deterministic model of computation that is evaluated using only head reduction, and argue that it is suitable for modeling programs with control. It is demonstrated how to define programs, specify them, and prove them correct. This is shown in detail by presenting in CC a list multiplication program that prematurely returns when it encounters a zero. The correctness proof includes termination of the program. In continuation calculus we can model both call-by-name and call-by-value. In addition, call-by-name functions can be applied to call-by-value results, and conversely.

  7. A depth-encoding PET detector that uses light sharing and single-ended readout with silicon photomultipliers

    Science.gov (United States)

    Kuang, Zhonghua; Yang, Qian; Wang, Xiaohui; Fu, Xin; Ren, Ning; Sang, Ziru; Wu, San; Zheng, Yunfei; Zhang, Xianming; Hu, Zhanli; Du, Junwei; Liang, Dong; Liu, Xin; Zheng, Hairong; Yang, Yongfeng

    2018-02-01

    Detectors with depth-encoding capability and good timing resolution are required to develop high-performance whole-body or total-body PET scanners. In this work, depth-encoding PET detectors that use light sharing between two discrete crystals and single-ended readout with silicon photomultipliers (SiPMs) were manufactured and evaluated. The detectors consisted of two unpolished 3 × 3 × 20 mm3 LYSO crystals with different coupling materials between them and were read out by Hamamatsu 3 × 3 mm2 SiPMs with one-to-one coupling. The ratio of the energy of one SiPM to the total energy of two SiPMs was used to measure the depth of interaction (DOI). Detectors with different coupling materials in between the crystals were measured in singles mode in an effort to obtain detectors that can provide good DOI resolution. The DOI resolution and energy resolution of three types of detector were measured, and the timing resolution was measured for the detector with the best DOI and energy resolution. The optimum detector, with 5 mm of optical glue, a 9 mm triangular ESR and a 6 mm rectangular ESR in between the unpolished crystals, provides a DOI resolution of 2.65 mm, an energy resolution of 10.0% and a timing resolution of 427 ps for events of E > 400 keV. The detectors simultaneously provide good DOI and timing resolution, and show great promise for the development of high-performance whole-body and total-body PET scanners.
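
    The DOI estimate from the light-sharing ratio can be sketched as a linear map between calibrated end-point ratios. The calibration constants and the linearity assumption below are illustrative; in practice the ratio-to-depth curve is measured per crystal pair.

```python
def doi_from_ratio(e1, e2, crystal_length_mm=20.0, r_min=0.2, r_max=0.8):
    # Depth of interaction from the light-sharing ratio e1 / (e1 + e2).
    # r_min and r_max are hypothetical calibration end points measured
    # with the source at the two crystal ends; a linear ratio-to-depth
    # map is assumed.
    r = e1 / (e1 + e2)
    r = min(max(r, r_min), r_max)   # clamp to the calibrated range
    return (r - r_min) / (r_max - r_min) * crystal_length_mm
```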

  8. Continuous-variable quantum information processing

    DEFF Research Database (Denmark)

    Andersen, Ulrik Lund; Leuchs, G.; Silberhorn, C.

    2010-01-01

    the continuous degree of freedom of a quantum system for encoding, processing or detecting information, one enters the field of continuous-variable (CV) quantum information processing. In this paper we review the basic principles of CV quantum information processing with main focus on recent developments in the field. We will be addressing the three main stages of a quantum information system: the preparation stage, where quantum information is encoded into CVs of coherent states and single-photon states; the processing stage, where CV information is manipulated to carry out a specified protocol; and a detection stage, where CV information is measured using homodyne detection or photon counting.

  9. The Arabic Diatessaron Project: Digitalizing, Encoding, Lemmatization

    Directory of Open Access Journals (Sweden)

    Giuliano Lancioni

    2016-04-01

    Full Text Available The Arabic Diatessaron Project (henceforth ADP) is an international research project in Digital Humanities that aims to collect, digitalise and encode all known manuscripts of the Arabic Diatessaron (henceforth AD), a text that has been relatively neglected in scholarly research. ADP’s final goal is to provide a number of tools that can enable scholars to effectively query, compare and investigate all known variants of the text, which will be encoded as far as possible in compliance with the Text Encoding Initiative (TEI) guidelines. The paper addresses a number of issues involved in the process of digitalising manuscripts included in the two existing editions (Ciasca 1888 and Marmardji 1935), adding variants in unedited manuscripts, and encoding and lemmatising the text. Issues involved in the design of the ADP include presentation of variants, choice of the standard text, applicability of TEI guidelines, automatic translation between different encodings, cross-edition concordances and principles of lemmatisation.

  10. A model for visual memory encoding.

    Directory of Open Access Journals (Sweden)

    Rodolphe Nenert

    Full Text Available Memory encoding engages multiple concurrent and sequential processes. While the individual processes involved in successful encoding have been examined in many studies, a sequence of events and the importance of modules associated with memory encoding has not been established. For this reason, we sought to perform a comprehensive examination of the network for memory encoding using data-driven methods and to determine the directionality of the information flow in order to build a viable model of visual memory encoding. Forty healthy controls ages 19-59 performed a visual scene encoding task. FMRI data were preprocessed using SPM8 and then processed using independent component analysis (ICA), with the reliability of the identified components confirmed using ICASSO as implemented in GIFT. The directionality of the information flow was examined using Granger causality analyses (GCA). All participants performed the fMRI task well above the chance level (>90% correct on both active and control conditions) and the post-fMRI testing recall revealed correct memory encoding at 86.33 ± 5.83%. ICA identified involvement of components of five different networks in the process of memory encoding, and the GCA allowed for the directionality of the information flow to be assessed, from visual cortex via ventral stream to the attention network and then to the default mode network (DMN). Two additional networks involved in this process were the cerebellar and the auditory-insular network. This study provides evidence that successful visual memory encoding is dependent on multiple modules that are part of other networks that are only indirectly related to the main process. This model may help to identify the node(s) of the network that are affected by specific disease processes and explain the presence of memory encoding difficulties in patients in whom focal or global network dysfunction exists.

  11. A model for visual memory encoding.

    Science.gov (United States)

    Nenert, Rodolphe; Allendorfer, Jane B; Szaflarski, Jerzy P

    2014-01-01

    Memory encoding engages multiple concurrent and sequential processes. While the individual processes involved in successful encoding have been examined in many studies, a sequence of events and the importance of modules associated with memory encoding has not been established. For this reason, we sought to perform a comprehensive examination of the network for memory encoding using data-driven methods and to determine the directionality of the information flow in order to build a viable model of visual memory encoding. Forty healthy controls ages 19-59 performed a visual scene encoding task. FMRI data were preprocessed using SPM8 and then processed using independent component analysis (ICA) with the reliability of the identified components confirmed using ICASSO as implemented in GIFT. The directionality of the information flow was examined using Granger causality analyses (GCA). All participants performed the fMRI task well above the chance level (>90% correct on both active and control conditions) and the post-fMRI testing recall revealed correct memory encoding at 86.33 ± 5.83%. ICA identified involvement of components of five different networks in the process of memory encoding, and the GCA allowed for the directionality of the information flow to be assessed, from visual cortex via ventral stream to the attention network and then to the default mode network (DMN). Two additional networks involved in this process were the cerebellar and the auditory-insular network. This study provides evidence that successful visual memory encoding is dependent on multiple modules that are part of other networks that are only indirectly related to the main process. This model may help to identify the node(s) of the network that are affected by specific disease processes and explain the presence of memory encoding difficulties in patients in whom focal or global network dysfunction exists.

  12. Integrated, Continuous Emulsion Creamer.

    Science.gov (United States)

    Cochrane, Wesley G; Hackler, Amber L; Cavett, Valerie J; Price, Alexander K; Paegel, Brian M

    2017-12-19

    Automated and reproducible sample handling is a key requirement for high-throughput compound screening and currently demands heavy reliance on expensive robotics in screening centers. Integrated droplet microfluidic screening processors are poised to replace robotic automation by miniaturizing biochemical reactions to the droplet scale. These processors must generate, incubate, and sort droplets for continuous droplet screening, passively handling millions of droplets with complete uniformity, especially during the key step of sample incubation. Here, we disclose an integrated microfluidic emulsion creamer that packs ("creams") assay droplets by draining away excess oil through microfabricated drain channels. The drained oil coflows with the creamed emulsion and is then reintroduced to disperse the droplets at the circuit terminus for analysis. Creamed emulsion assay incubation time dispersion was 1.7%, 3-fold less than other reported incubators. The integrated, continuous emulsion creamer (ICEcreamer) was used to miniaturize and optimize measurements of various enzymatic activities (phosphodiesterase, kinase, bacterial translation) under multiple- and single-turnover conditions. Combining the ICEcreamer with current integrated microfluidic DNA-encoded library bead processors eliminates potentially cumbersome instrumentation engineering challenges and is compatible with assays of diverse target class activities commonly investigated in drug discovery.

  13. Distributed-phase OCDMA encoder-decoders based on fiber Bragg gratings

    OpenAIRE

    Zhang, Zhaowei; Tian, C.; Petropoulos, P.; Richardson, D.J.; Ibsen, M.

    2007-01-01

    We propose and demonstrate new optical code-division multiple-access (OCDMA) encoder-decoders having a continuous phase-distribution. With the same spatial refractive index distribution as the reconfigurable optical phase encoder-decoders, they are inherently suitable for the application in reconfigurable OCDMA systems. Furthermore, compared with conventional discrete-phase devices, they also have additional advantages of being more tolerant to input pulse width and, therefore, have the poten...

  14. Encoding of coordination complexes with XML.

    Science.gov (United States)

    Vinoth, P; Sankar, P

    2017-09-01

    An in-silico system to encode the structure, bonding, and properties of coordination complexes is developed. The encoding is achieved through a semantic XML markup frame. The composition of the coordination complexes is captured in terms of the central atom and ligands. Structural information about the central atom is detailed in terms of the electron status of its valence electron orbitals. The ligands are encoded with specific reference to the electron environment of the ligand centre atoms. The tendency of ligands to form low- or high-spin complexes is captured by assigning a Ligand Centre Value to every ligand, based on the electronic environment of the ligand centre atom. Chemical ontologies are used for categorization and to control different hybridization schemes. Complexes formed by central atoms of transition metals and of non-transition elements belonging to the s-block, p-block and f-block are encoded with a generic encoding platform. Complexes of homoleptic, heteroleptic and bridged types are also covered by this encoding system. The utility of the encoding system for predicting redox electron-transfer reactions in coordination complexes is demonstrated with a simple application. Copyright © 2017 Elsevier Inc. All rights reserved.
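    A hypothetical sketch of the kind of semantic markup the abstract describes, built with Python's standard XML library. The element and attribute names below are invented for illustration and are not the paper's actual schema; the chemistry (low-spin [Fe(CN)6]4-, strong-field CN- ligands) is standard.

```python
import xml.etree.ElementTree as ET

# Invented markup: central atom with valence-electron detail, plus one
# element per ligand carrying a "Ligand Centre Value"-style annotation.
complex_el = ET.Element("CoordinationComplex", name="[Fe(CN)6]4-")
central = ET.SubElement(complex_el, "CentralAtom", symbol="Fe", oxidationState="+2")
ET.SubElement(central, "ValenceElectrons", configuration="3d6")
for _ in range(6):
    ligand = ET.SubElement(complex_el, "Ligand", formula="CN-", denticity="1")
    # Strong-field donor atom -> low-spin complex in this toy classification.
    ET.SubElement(ligand, "LigandCentre", atom="C", ligandCentreValue="strong-field")

xml_text = ET.tostring(complex_el, encoding="unicode")
```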

  15. Encoding entanglement-assisted quantum stabilizer codes

    International Nuclear Information System (INIS)

    Wang Yun-Jiang; Bai Bao-Ming; Li Zhuo; Xiao He-Ling; Peng Jin-Ye

    2012-01-01

    We address the problem of encoding entanglement-assisted (EA) quantum error-correcting codes (QECCs) and the corresponding complexity. We present an iterative algorithm from which a quantum circuit composed of CNOT, H, and S gates can be derived directly with complexity O(n²) to encode the qubits being sent. Moreover, we derive the number of each gate consumed in our algorithm, according to which we can design EA QECCs with low encoding complexity. Another advantage brought by our algorithm is the ease and efficiency of programming on classical computers. (general)
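    The gate set {CNOT, H, S} from which such encoding circuits are built can be demonstrated with a minimal statevector simulation: applying H then CNOT to |00> prepares the maximally entangled pair that entanglement assistance relies on. This is a toy two-qubit demo of the gate set, not the paper's O(n²) algorithm.

```python
import math

H = 1 / math.sqrt(2)
# Gates as 4x4 matrices on two qubits (qubit 0 = most-significant bit).
H0 = [[H, 0, H, 0], [0, H, 0, H], [H, 0, -H, 0], [0, H, 0, -H]]    # H on qubit 0
CNOT01 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]  # control 0, target 1
S0 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1j, 0], [0, 0, 0, 1j]]    # phase gate on qubit 0

def apply(gate, state):
    return [sum(g * a for g, a in zip(row, state)) for row in gate]

state = [1, 0, 0, 0]                        # |00>
state = apply(CNOT01, apply(H0, state))     # -> (|00> + |11>)/sqrt(2), a Bell pair
```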

  16. Chemical Space of DNA-Encoded Libraries.

    Science.gov (United States)

    Franzini, Raphael M; Randolph, Cassie

    2016-07-28

    In recent years, DNA-encoded chemical libraries (DECLs) have attracted considerable attention as a potential discovery tool in drug development. Screening encoded libraries may offer advantages over conventional hit discovery approaches and has the potential to complement such methods in pharmaceutical research. As a result of the increased application of encoded libraries in drug discovery, a growing number of hit compounds are emerging in scientific literature. In this review we evaluate reported encoded library-derived structures and identify general trends of these compounds in relation to library design parameters. We in particular emphasize the combinatorial nature of these libraries. Generally, the reported molecules demonstrate the ability of this technology to afford hits suitable for further lead development, and on the basis of them, we derive guidelines for DECL design.

  17. Encoding information using Laguerre Gaussian modes

    CSIR Research Space (South Africa)

    Trichili, A

    2015-08-01

    Full Text Available The authors experimentally demonstrate an information encoding protocol using the two degrees of freedom of Laguerre Gaussian modes having different radial and azimuthal components. A novel method, based on digital holography, for information...
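    A back-of-envelope version of the encoding idea: with N_p radial and N_l azimuthal Laguerre-Gaussian indices available as a combined alphabet, each detected mode carries log2(N_p × N_l) bits. The mode counts below are hypothetical, not those of the experiment.

```python
import math

n_radial, n_azimuthal = 4, 8                           # hypothetical mode counts
bits_per_photon = math.log2(n_radial * n_azimuthal)    # alphabet of 32 symbols
```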

  18. Molecular mechanisms for protein-encoded inheritance

    Science.gov (United States)

    Wiltzius, Jed J. W.; Landau, Meytal; Nelson, Rebecca; Sawaya, Michael R.; Apostol, Marcin I.; Goldschmidt, Lukasz; Soriaga, Angela B.; Cascio, Duilio; Rajashankar, Kanagalaghatta; Eisenberg, David

    2013-01-01

    Strains are phenotypic variants, encoded by nucleic acid sequences in chromosomal inheritance and by protein “conformations” in prion inheritance and transmission. But how is a protein “conformation” stable enough to endure transmission between cells or organisms? Here new polymorphic crystal structures of segments of prion and other amyloid proteins offer structural mechanisms for prion strains. In packing polymorphism, prion strains are encoded by alternative packings (polymorphs) of β-sheets formed by the same segment of a protein; in a second mechanism, segmental polymorphism, prion strains are encoded by distinct β-sheets built from different segments of a protein. Both forms of polymorphism can produce enduring “conformations,” capable of encoding strains. These molecular mechanisms for transfer of information into prion strains share features with the familiar mechanism for transfer of information by nucleic acid inheritance, including sequence specificity and recognition by non-covalent bonds. PMID:19684598

  19. Quantum Logical Operations on Encoded Qubits

    International Nuclear Information System (INIS)

    Zurek, W.H.; Laflamme, R.

    1996-01-01

    We show how to carry out quantum logical operations (controlled-not and Toffoli gates) on encoded qubits for several encodings which protect against various 1-bit errors. This improves the reliability of these operations by allowing one to correct for 1-bit errors which either preexisted or occurred in the course of operation. The logical operations we consider allow one to carry out the vast majority of the steps in the quantum factoring algorithm. copyright 1996 The American Physical Society

  20. Using XML to encode TMA DES metadata

    Directory of Open Access Journals (Sweden)

    Oliver Lyttleton

    2011-01-01

    Full Text Available Background: The Tissue Microarray Data Exchange Specification (TMA DES is an XML specification for encoding TMA experiment data. While TMA DES data is encoded in XML, the files that describe its syntax, structure, and semantics are not. The DTD format is used to describe the syntax and structure of TMA DES, and the ISO 11179 format is used to define the semantics of TMA DES. However, XML Schema can be used in place of DTDs, and another XML encoded format, RDF, can be used in place of ISO 11179. Encoding all TMA DES data and metadata in XML would simplify the development and usage of programs which validate and parse TMA DES data. XML Schema has advantages over DTDs such as support for data types, and a more powerful means of specifying constraints on data values. An advantage of RDF encoded in XML over ISO 11179 is that XML defines rules for encoding data, whereas ISO 11179 does not. Materials and Methods: We created an XML Schema version of the TMA DES DTD. We wrote a program that converted ISO 11179 definitions to RDF encoded in XML, and used it to convert the TMA DES ISO 11179 definitions to RDF. Results: We validated a sample TMA DES XML file that was supplied with the publication that originally specified TMA DES using our XML Schema. We successfully validated the RDF produced by our ISO 11179 converter with the W3C RDF validation service. Conclusions: All TMA DES data could be encoded using XML, which simplifies its processing. XML Schema allows datatypes and valid value ranges to be specified for CDEs, which enables a wider range of error checking to be performed using XML Schemas than could be performed using DTDs.
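    The conversion step the authors describe, re-encoding an ISO 11179 data-element definition as RDF serialized in XML, can be sketched with the standard library. The property mapping below (Dublin Core terms, the example CDE and its URN) is illustrative only, not the authors' exact converter output.

```python
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("rdf", RDF)
ET.register_namespace("dc", DC)

# One rdf:Description per common data element (CDE) definition.
root = ET.Element(f"{{{RDF}}}RDF")
desc = ET.SubElement(root, f"{{{RDF}}}Description",
                     {f"{{{RDF}}}about": "urn:example:tma-des:cde:slide-id"})
ET.SubElement(desc, f"{{{DC}}}title").text = "Slide identifier"
ET.SubElement(desc, f"{{{DC}}}description").text = (
    "Unique identifier of a slide within a TMA experiment")

rdf_xml = ET.tostring(root, encoding="unicode")
```

Because the result is ordinary XML, it can be validated and parsed with the same tooling as the TMA DES data itself, which is the simplification the paper argues for.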

  3. Continuous-variable quantum erasure correcting code

    DEFF Research Database (Denmark)

    Lassen, Mikael Østergaard; Sabuncu, Metin; Huck, Alexander

    2010-01-01

    We experimentally demonstrate a continuous variable quantum erasure-correcting code, which protects coherent states of light against complete erasure. The scheme encodes two coherent states into a bi-party entangled state, and the resulting 4-mode code is conveyed through 4 independent channels...

  4. Power calculation of linear and angular incremental encoders

    Science.gov (United States)

    Prokofev, Aleksandr V.; Timofeev, Aleksandr N.; Mednikov, Sergey V.; Sycheva, Elena A.

    2016-04-01

    Automation technology is constantly expanding its role in improving the efficiency of manufacturing and testing processes in all branches of industry. More than ever before, the mechanical movements of linear slides, rotary tables, robot arms, actuators, etc. are numerically controlled. Linear and angular incremental photoelectric encoders measure mechanical motion and transmit the measured values back to the control unit. The capabilities of these systems are undergoing continual development in terms of their resolution, accuracy and reliability, their measuring ranges, and maximum speeds. This article discusses a method for the power calculation of linear and angular incremental photoelectric encoders, used to find the optimum parameters for components such as light emitters, photo-detectors, linear and angular scales, and optical components. It analyzes methods and devices that permit high resolutions on the order of 0.001 mm or 0.001°, as well as large measuring lengths of over 100 mm. In linear and angular incremental photoelectric encoders, the optical beam is typically formed by a condenser lens; as the beam passes through the measuring unit, its intensity changes depending on the movement of the scanning head or measuring raster. The transmitted light beam is converted into an electrical signal by the photo-detector block for processing in the electronics block. The starting point for the power calculation is therefore the required value of the optical signal at the input of the photo-detector block, which must be reliably recorded and processed in the electronic unit of the encoder.
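    The power-budget logic described above can be reduced to a short calculation: the required emitter power is the minimum usable optical signal at the photo-detector divided by the combined transmission of every element in the optical chain. All component names and values below are invented for illustration.

```python
# Hypothetical optical chain of an incremental encoder and its transmittances.
p_detector_min_mw = 0.02          # assumed minimum optical power at the photo-detector
chain = {"condenser lens": 0.90, "scale grating": 0.45,
         "scanning reticle": 0.45, "protective window": 0.95}

transmission = 1.0
for t in chain.values():
    transmission *= t             # losses multiply along the chain

p_emitter_mw = p_detector_min_mw / transmission   # required emitter output
```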

  5. ERP Correlates of Encoding Success and Encoding Selectivity in Attention Switching

    Science.gov (United States)

    Yeung, Nick

    2016-01-01

    Long-term memory encoding depends critically on effective processing of incoming information. The degree to which participants engage in effective encoding can be indexed in electroencephalographic (EEG) data by studying event-related potential (ERP) subsequent memory effects. The current study investigated ERP correlates of memory success operationalised with two different measures—memory selectivity and global memory—to assess whether previously observed ERP subsequent memory effects reflect focused encoding of task-relevant information (memory selectivity), general encoding success (global memory), or both. Building on previous work, the present study combined an attention switching paradigm—in which participants were presented with compound object-word stimuli and switched between attending to the object or the word across trials—with a later recognition memory test for those stimuli, while recording their EEG. Our results provided clear evidence that subsequent memory effects resulted from selective attentional focusing and effective top-down control (memory selectivity) in contrast to more general encoding success effects (global memory). Further analyses addressed the question of whether successful encoding depended on similar control mechanisms to those involved in attention switching. Interestingly, differences in the ERP correlates of attention switching and successful encoding, particularly during the poststimulus period, indicated that variability in encoding success occurred independently of prestimulus demands for top-down cognitive control. These results suggest that while effects of selective attention and selective encoding co-occur behaviourally their ERP correlates are at least partly dissociable. PMID:27907075

  6. Multichannel compressive sensing MRI using noiselet encoding.

    Directory of Open Access Journals (Sweden)

    Kamlesh Pawar

    Full Text Available The incoherence between measurement and sparsifying transform matrices and the restricted isometry property (RIP) of the measurement matrix are two of the key factors in determining the performance of compressive sensing (CS). In CS-MRI, the randomly under-sampled Fourier matrix is used as the measurement matrix and the wavelet transform is usually used as the sparsifying transform matrix. However, the incoherence between the randomly under-sampled Fourier matrix and the wavelet matrix is not optimal, which can deteriorate the performance of CS-MRI. Using the mathematical result that noiselets are maximally incoherent with wavelets, this paper introduces the noiselet unitary bases as the measurement matrix to improve the incoherence and RIP in CS-MRI. Based on an empirical RIP analysis that compares the multichannel noiselet and multichannel Fourier measurement matrices in CS-MRI, we propose a multichannel compressive sensing (MCS) framework to take advantage of the multichannel data acquisition used in MRI scanners. Simulations are presented in the MCS framework to compare the performance of noiselet encoding reconstructions and Fourier encoding reconstructions at different acceleration factors. The comparisons indicate that the multichannel noiselet measurement matrix has better RIP than its Fourier counterpart, and that noiselet encoded MCS-MRI outperforms Fourier encoded MCS-MRI in preserving image resolution and can achieve higher acceleration factors. To demonstrate the feasibility of the proposed noiselet encoding scheme, a pulse sequence with tailored spatially selective RF excitation pulses was designed and implemented on a 3T scanner to acquire the data in the noiselet domain from a phantom and a human brain. The results indicate that noiselet encoding preserves image resolution better than Fourier encoding.
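    The central quantity here, mutual coherence between a measurement basis Phi and a sparsifying basis Psi, is mu(Phi, Psi) = sqrt(n) · max |<phi_k, psi_j>|, ranging from 1 (maximally incoherent, best for CS) up to sqrt(n). A toy check with an n-point DFT measurement basis against the identity (spike) basis, which achieves the optimal value 1; noiselet/wavelet pairs are compared with the same formula.

```python
import cmath, math

n = 8
# Rows of the orthonormal n-point DFT matrix (measurement basis).
dft = [[cmath.exp(-2j * math.pi * k * j / n) / math.sqrt(n) for j in range(n)]
       for k in range(n)]
# Identity (spike) basis as the sparsifying basis.
spikes = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

mu = math.sqrt(n) * max(
    abs(sum(f * s for f, s in zip(frow, srow)))
    for frow in dft for srow in spikes)
# Every DFT entry has magnitude 1/sqrt(n), so mu = 1: maximal incoherence.
```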

  7. Evaluation of Algorithms for Photon Depth of Interaction Estimation for the TRIMAGE PET Component

    Science.gov (United States)

    Camarlinghi, Niccolò; Belcari, Nicola; Cerello, Piergiorgio; Pennazio, Francesco; Sportelli, Giancarlo; Zaccaro, Emanuele; Del Guerra, Alberto

    2016-02-01

    The TRIMAGE consortium aims to develop a multimodal PET/MR/EEG brain scanner dedicated to the early diagnosis of schizophrenia and other mental health disorders. The TRIMAGE PET component features a full ring made of 18 detectors, each consisting of twelve 8×8 silicon photomultiplier (SiPM) tiles coupled to two segmented LYSO crystal matrices with staggered layers. The identification of the pixel where a photon interacted is performed on-line at the front-end level, thus allowing the FPGA board to emit fully digital event packets. This increases the effective bandwidth, but imposes restrictions on the complexity of the algorithms to be implemented. In this work, two algorithms whose implementation is feasible directly on an FPGA are presented and evaluated. The first algorithm is driven by physical considerations, while the other consists of a two-class linear Support Vector Machine (SVM). The validation of the algorithm performance is carried out using simulated data generated with the GAMOS Monte Carlo. The obtained results show that the achieved accuracy in layer identification is above 90% for both proposed approaches. The feasibility of tagging and rejecting events that underwent multiple interactions within the detector is also discussed.
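    The layer-identification task reduces to a two-class linear decision w·x + b separating first-layer from second-layer events in a feature space derived from the detected light. This sketch uses made-up features (light-spread width, pulse amplitude) and a simple perceptron training rule standing in for the paper's SVM; only the linear-boundary idea is taken from the abstract.

```python
# (light-spread width, pulse amplitude) -> layer label (-1: near layer, +1: far layer).
# Feature values are invented, linearly separable toy data.
events = [((0.8, 1.0), -1), ((0.9, 1.1), -1), ((1.0, 0.9), -1),
          ((1.8, 0.5), +1), ((2.0, 0.6), +1), ((2.1, 0.4), +1)]

w, b = [0.0, 0.0], 0.0
for _ in range(20):                                    # perceptron epochs
    for (x1, x2), label in events:
        if label * (w[0] * x1 + w[1] * x2 + b) <= 0:   # misclassified: nudge boundary
            w[0] += label * x1
            w[1] += label * x2
            b += label

accuracy = sum((w[0] * x1 + w[1] * x2 + b) * label > 0
               for (x1, x2), label in events) / len(events)
```

A fixed linear rule like this is cheap enough to evaluate in an FPGA pipeline, which is the constraint the abstract emphasizes.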

  8. Maximum likelihood positioning for gamma-ray imaging detectors with depth of interaction measurement

    International Nuclear Information System (INIS)

    Lerche, Ch.W.; Ros, A.; Monzo, J.M.; Aliaga, R.J.; Ferrando, N.; Martinez, J.D.; Herrero, V.; Esteve, R.; Gadea, R.; Colom, R.J.; Toledo, J.; Mateo, F.; Sebastia, A.; Sanchez, F.; Benlloch, J.M.

    2009-01-01

    The center of gravity algorithm leads to strong artifacts for gamma-ray imaging detectors that are based on monolithic scintillation crystals and position sensitive photo-detectors. This is a consequence of using the centroids as position estimates. The fact that charge division circuits can also be used to compute the standard deviation of the scintillation light distribution opens a way out of this drawback. We studied the feasibility of maximum likelihood estimation for computing the true gamma-ray photo-conversion position from the centroids and the standard deviation of the light distribution. The method was evaluated on a test detector that consists of the position sensitive photomultiplier tube H8500 and a monolithic LSO crystal (42 mm × 42 mm × 10 mm). Spatial resolution was measured for the centroids and the maximum likelihood estimates. The results suggest that the maximum likelihood positioning is feasible and partially removes the strong artifacts of the center of gravity algorithm.
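    The maximum-likelihood idea can be shown in miniature: the standard deviation of the light distribution depends on the true photo-conversion depth, so the depth can be recovered by maximizing a likelihood over candidate depths (with Gaussian noise, equivalent to least squares on a grid). The linear sigma(d) model and all constants below are invented for illustration; the paper's detector model is more elaborate.

```python
def sigma_model(d_mm):
    """Assumed light-spread std (mm) vs. depth from the photo-detector face."""
    return 1.5 + 0.35 * d_mm

def ml_depth(measured_sigma, crystal_thickness=10.0, steps=1000):
    """Grid-search the depth minimizing the Gaussian negative log-likelihood."""
    best_d, best_nll = 0.0, float("inf")
    for i in range(steps + 1):
        d = crystal_thickness * i / steps
        nll = (measured_sigma - sigma_model(d)) ** 2   # Gaussian -log-likelihood term
        if nll < best_nll:
            best_d, best_nll = d, nll
    return best_d

true_d = 6.0
estimate = ml_depth(sigma_model(true_d))   # recovers the simulated depth
```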

  10. Depth-of-Interaction Compensation Using a Focused-Cut Scintillator for a Pinhole Gamma Camera

    OpenAIRE

    Alhassen, Fares; Kudrolli, Haris; Singh, Bipin; Kim, Sangtaek; Seo, Youngho; Gould, Robert G.; Nagarkar, Vivek V.

    2011-01-01

    Preclinical SPECT offers a powerful means to understand the molecular pathways of drug interactions in animal models by discovering and testing new pharmaceuticals and therapies for potential clinical applications. A combination of high spatial resolution and sensitivity are required in order to map radiotracer uptake within small animals. Pinhole collimators have been investigated, as they offer high resolution by means of image magnification. One of the limitations of pinhole geometries is ...

  11. Deconstructing continuous flash suppression.

    Science.gov (United States)

    Yang, Eunice; Blake, Randolph

    2012-03-08

    In this paper, we asked to what extent the depth of interocular suppression engendered by continuous flash suppression (CFS) varies depending on spatiotemporal properties of the suppressed stimulus and CFS suppressor. An answer to this question could have implications for interpreting the results in which CFS influences the processing of different categories of stimuli to different extents. In a series of experiments, we measured the selectivity and depth of suppression (i.e., elevation in contrast detection thresholds) as a function of the visual features of the stimulus being suppressed and the stimulus evoking suppression, namely, the popular "Mondrian" CFS stimulus (N. Tsuchiya & C. Koch, 2005). First, we found that CFS differentially suppresses the spatial components of the suppressed stimulus: Observers' sensitivity for stimuli of relatively low spatial frequency or cardinally oriented features was more strongly impaired in comparison to high spatial frequency or obliquely oriented stimuli. Second, we discovered that this feature-selective bias primarily arises from the spatiotemporal structure of the CFS stimulus, particularly within information residing in the low spatial frequency range and within the smooth rather than abrupt luminance changes over time. These results imply that this CFS stimulus operates by selectively attenuating certain classes of low-level signals while leaving others to be potentially encoded during suppression. These findings underscore the importance of considering the contribution of low-level features in stimulus-driven effects that are reported under CFS.

  12. High performance detector head for PET and PET/MR with continuous crystals and SiPMs

    International Nuclear Information System (INIS)

    Llosá, G.; Barrillon, P.; Barrio, J.; Bisogni, M.G.; Cabello, J.; Del Guerra, A.; Etxebeste, A.; Gillam, J.E.; Lacasta, C.; Oliver, J.F.; Rafecas, M.; Solaz, C.; Stankova, V.; La Taille, C. de

    2013-01-01

    A high resolution PET detector head for small animal PET applications has been developed. The detector is composed of a 12 mm × 12 mm continuous LYSO crystal coupled to a 64-channel monolithic SiPM matrix from FBK-irst. Crystal thicknesses of 5 mm and 10 mm have been tested, both yielding an intrinsic spatial resolution around 0.7 mm FWHM with a position determination algorithm that can also provide depth-of-interaction information. The detectors have been tested in a rotating system that makes it possible to acquire tomographic data and reconstruct images of ²²Na sources. An image reconstruction method specifically adapted for continuous crystals has been employed. The Full Width at Half Maximum measured from a point source reconstructed with ML–EM was 0.7 mm with the 5 mm crystal and 0.8 mm with the 10 mm crystal

  13. Recognition memory is improved by a structured temporal framework during encoding

    Directory of Open Access Journals (Sweden)

    Sathesan Thavabalasingam

    2016-01-01

    Full Text Available In order to function optimally within our environment, we continuously extract temporal patterns from our experiences and formulate expectations that facilitate adaptive behavior. Given that our memories are embedded within spatiotemporal contexts, an intriguing possibility is that mnemonic processes are sensitive to the temporal structure of events. To test this hypothesis, in a series of behavioral experiments we manipulated the regularity of interval durations at encoding to create temporally structured and unstructured frameworks. Our findings revealed enhanced recognition memory (d’ for stimuli that were explicitly encoded within a temporally structured versus unstructured framework. Encoding information within a temporally structured framework was also associated with a reduction in the negative effects of proactive interference and was linked to greater recollective recognition memory. Furthermore, rhythmic temporal structure was found to enhance recognition memory for incidentally encoded information. Collectively, these results support the possibility that we possess a greater capacity to learn and subsequently remember temporally structured information.
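    The memory measure reported here, d′ (d-prime), is computed from hit and false-alarm rates as d′ = z(H) − z(F), where z is the inverse of the standard normal CDF. The rates below are made-up example data, not the study's results; only the formula is standard.

```python
from statistics import NormalDist

z = NormalDist().inv_cdf   # inverse standard normal CDF

def d_prime(hit_rate, fa_rate):
    return z(hit_rate) - z(fa_rate)

structured = d_prime(0.85, 0.20)     # hypothetical structured-encoding condition
unstructured = d_prime(0.75, 0.25)   # hypothetical unstructured condition
```

In practice, hit or false-alarm rates of exactly 0 or 1 are corrected (e.g. replaced by 1/(2N) or 1 − 1/(2N)) before applying z, since the inverse CDF diverges at the extremes.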

  14. Cloud-based uniform ChIP-Seq processing tools for modENCODE and ENCODE.

    Science.gov (United States)

    Trinh, Quang M; Jen, Fei-Yang Arthur; Zhou, Ziru; Chu, Kar Ming; Perry, Marc D; Kephart, Ellen T; Contrino, Sergio; Ruzanov, Peter; Stein, Lincoln D

    2013-07-22

    Funded by the National Institutes of Health (NIH), the aim of the Model Organism ENCyclopedia of DNA Elements (modENCODE) project is to provide the biological research community with a comprehensive encyclopedia of functional genomic elements for both model organisms C. elegans (worm) and D. melanogaster (fly). With a total size of just under 10 terabytes of data collected and released to the public, one of the challenges faced by researchers is to extract biologically meaningful knowledge from this large data set. While the basic quality control, pre-processing, and analysis of the data has already been performed by members of the modENCODE consortium, many researchers will wish to reinterpret the data set using modifications and enhancements of the original protocols, or combine modENCODE data with other data sets. Unfortunately this can be a time consuming and logistically challenging proposition. In recognition of this challenge, the modENCODE DCC has released uniform computing resources for analyzing modENCODE data on Galaxy (https://github.com/modENCODE-DCC/Galaxy), on the public Amazon Cloud (http://aws.amazon.com), and on the private Bionimbus Cloud for genomic research (http://www.bionimbus.org). In particular, we have released Galaxy workflows for interpreting ChIP-seq data which use the same quality control (QC) and peak calling standards adopted by the modENCODE and ENCODE communities. For convenience of use, we have created Amazon and Bionimbus Cloud machine images containing Galaxy along with all the modENCODE data, software and other dependencies. Using these resources provides a framework for running consistent and reproducible analyses on modENCODE data, ultimately allowing researchers to use more of their time using modENCODE data, and less time moving it around.

  15. Noise level and MPEG-2 encoder statistics

    Science.gov (United States)

    Lee, Jungwoo

    1997-01-01

    Most program material in the movie and broadcasting industries is still in analog film or tape format, which typically contains random noise originating from film, CCD cameras, and tape recording. The performance of the MPEG-2 encoder may be significantly degraded by the noise. It is also affected by the scene type, which includes spatial and temporal activity. The statistical properties of noise originating from cameras and tape players are analyzed, and models for the two types of noise are developed. The relationship between the noise, the scene type, and encoder statistics of a number of MPEG-2 parameters such as motion vector magnitude, prediction error, and quantizer scale is discussed. This analysis is intended to be a tool for designing robust MPEG encoding algorithms such as preprocessing and rate control.
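
One standard way to characterize such random noise - not necessarily the models developed in the paper - is to difference two captures of a static scene, since the difference of two independent noisy observations has twice the noise variance. A sketch with synthetic Gaussian noise:

```python
import random
import statistics

def estimate_noise_sigma(frame_a, frame_b):
    """Rough noise estimate from two frames of a static scene: the
    difference of two independent noisy observations has variance
    2*sigma^2, so sigma = stdev(difference) / sqrt(2)."""
    diff = [a - b for a, b in zip(frame_a, frame_b)]
    return statistics.pstdev(diff) / 2 ** 0.5

random.seed(0)
clean = [128.0] * 10000                       # a flat, static scene
f1 = [p + random.gauss(0, 5) for p in clean]  # two noisy captures
f2 = [p + random.gauss(0, 5) for p in clean]
sigma = estimate_noise_sigma(f1, f2)          # close to the true sigma of 5
```

An estimate like this could feed a preprocessing or rate-control decision, which is the use case the abstract has in mind.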

  16. Indirect Encoding in Neuroevolutionary Ship Handling

    Directory of Open Access Journals (Sweden)

    Miroslaw Lacki

    2018-03-01

    Full Text Available In this paper the author compares the efficiency of two encoding schemes for artificial intelligence methods used in the neuroevolutionary ship maneuvering system. This may also be seen as a ship handling system that simulates the learning process of a group of artificial helmsmen - autonomous control units created with an artificial neural network. Each helmsman observes input signals derived from the environment and calculates the values of the parameters required for maneuvering the vessel in confined waters. In neuroevolution such units are treated as individuals in a population of artificial neural networks, which through environmental sensing and evolutionary algorithms learn to perform a given task efficiently. The main task of this project is to evolve a population of helmsmen with indirect encoding and compare the results of simulation with the direct encoding method.
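
The direct/indirect distinction can be illustrated with a toy genome decoder; the sinusoidal generation rule and all sizes here are invented for illustration and are not taken from the paper:

```python
import math
import random

def direct_decode(genome, n_in, n_out):
    """Direct encoding: the genome lists every connection weight explicitly."""
    assert len(genome) == n_in * n_out
    return [genome[i * n_out:(i + 1) * n_out] for i in range(n_in)]

def indirect_decode(genome, n_in, n_out):
    """Indirect encoding: a short genome of (amplitude, frequency) genes
    generates the full weight matrix through a fixed rule."""
    weights = [[0.0] * n_out for _ in range(n_in)]
    for amp, freq in genome:
        for i in range(n_in):
            for j in range(n_out):
                weights[i][j] += amp * math.sin(freq * (i + 1) * (j + 1))
    return weights

# A 4-input, 3-output layer: the direct genome needs 12 genes,
# while this indirect genome needs only 2.
direct_genome = [random.uniform(-1, 1) for _ in range(12)]
indirect_genome = [(0.5, 0.3), (0.2, 1.1)]
w_direct = direct_decode(direct_genome, 4, 3)
w_indirect = indirect_decode(indirect_genome, 4, 3)
```

The practical trade-off being compared in the paper is exactly this: an indirect genome is far more compact, at the cost of only reaching weight matrices expressible by the generation rule.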

  17. An Information Theoretic Characterisation of Auditory Encoding

    Science.gov (United States)

    Overath, Tobias; Cusack, Rhodri; Kumar, Sukhbinder; von Kriegstein, Katharina; Warren, Jason D; Grube, Manon; Carlyon, Robert P; Griffiths, Timothy D

    2007-01-01

    The entropy metric derived from information theory provides a means to quantify the amount of information transmitted in acoustic streams like speech or music. By systematically varying the entropy of pitch sequences, we sought brain areas where neural activity and energetic demands increase as a function of entropy. Such a relationship is predicted to occur in an efficient encoding mechanism that uses less computational resource when less information is present in the signal: we specifically tested the hypothesis that such a relationship is present in the planum temporale (PT). In two convergent functional MRI studies, we demonstrated this relationship in PT for encoding, while furthermore showing that a distributed fronto-parietal network for retrieval of acoustic information is independent of entropy. The results establish PT as an efficient neural engine that demands less computational resource to encode redundant signals than those with high information content. PMID:17958472
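
The entropy metric in question is ordinary Shannon entropy over the distribution of symbols in the sequence; a minimal sketch (the pitch labels are arbitrary):

```python
from collections import Counter
import math

def pitch_entropy(sequence):
    """Shannon entropy (in bits) of a symbol sequence: the quantity
    systematically varied across pitch sequences in the study."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

low_h = pitch_entropy(["C4"] * 8)                  # fully redundant: 0 bits
high_h = pitch_entropy(["C4", "D4", "E4", "F4",
                        "G4", "A4", "B4", "C5"])   # all distinct: 3 bits
```

A redundant sequence carries no information per symbol, while eight equiprobable pitches carry log2(8) = 3 bits, which is the sense in which an efficient encoder should need fewer resources for the former.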

  18. Incremental phonological encoding during unscripted sentence production

    Directory of Open Access Journals (Sweden)

    Florian T Jaeger

    2012-11-01

    Full Text Available We investigate phonological encoding during unscripted sentence production, focusing on the effect of phonological overlap on phonological encoding. Previous work on this question has almost exclusively employed isolated word production or highly scripted multiword production. These studies have led to conflicting results: some studies found that phonological overlap between two words facilitates phonological encoding, while others found inhibitory effects. One worry with many of these paradigms is that they involve processes that are not typical of everyday language use, which calls into question to what extent their findings speak to the architectures and mechanisms underlying language production. We present a paradigm to investigate the consequences of phonological overlap between words in a sentence while leaving to speakers many of the lexical and structural choices typical of everyday language use. Adult native speakers of English described events in short video clips. We annotated the presence of disfluencies and the speech rate at various points throughout the sentence, as well as the constituent order. We find that phonological overlap has an inhibitory effect on phonological encoding. Specifically, if adjacent content words share their phonological onset (e.g., hand the hammer), they are preceded by production difficulty, as reflected in fluency and speech rate. We also find that this production difficulty affects speakers’ constituent order preferences during grammatical encoding. We discuss our results and previous work to isolate the properties of other paradigms that resulted in facilitatory or inhibitory results. The data from our paradigm also speak to questions about the scope of phonological planning in unscripted speech and to whether phonological and grammatical encoding interact.

  19. Optical encoder based on a nondiffractive beam

    International Nuclear Information System (INIS)

    Lutenberg, Ariel; Perez-Quintian, Fernando; Rebollo, Maria A.

    2008-01-01

    Optical encoders are used in industrial and laboratory motion equipment to measure rotations and linear displacements. We introduce a design of an optical encoder based on a nondiffractive beam. We expect that the invariant profile and radial symmetry of the nondiffractive beam provide the design with remarkable tolerance to mechanical perturbations. We experimentally demonstrate that the proposed design generates a suitable output sinusoidal signal with low harmonic distortion. Moreover, we present a numerical model of the system based on the angular spectrum approximation whose predictions are in excellent agreement with the experimental results

  20. RNAi suppressors encoded by pathogenic human viruses

    NARCIS (Netherlands)

    de Vries, Walter; Berkhout, Ben

    2008-01-01

    RNA silencing or RNA interference (RNAi) serves as an innate antiviral mechanism in plants, fungi and animals. Human viruses, like plant viruses, encode suppressor proteins or RNAs that block or modulate the RNAi pathway. This review summarizes the mechanisms by which pathogenic human viruses

  1. Visual Memory : The Price of Encoding Details

    NARCIS (Netherlands)

    Nieuwenstein, Mark; Kromm, Maria

    2017-01-01

    Studies on visual long-term memory have shown that we have a tremendous capacity for remembering pictures of objects, even at a highly detailed level. What remains unclear, however, is whether encoding objects at such a detailed level comes at any cost. In the current study, we examined how the

  2. Encoders for block-circulant LDPC codes

    Science.gov (United States)

    Divsalar, Dariush (Inventor); Abbasfar, Aliazam (Inventor); Jones, Christopher R. (Inventor); Dolinar, Samuel J. (Inventor); Thorpe, Jeremy C. (Inventor); Andrews, Kenneth S. (Inventor); Yao, Kung (Inventor)

    2009-01-01

    Methods and apparatus to encode message input symbols in accordance with an accumulate-repeat-accumulate code with repetition three or four are disclosed. Block circulant matrices are used. A first method and apparatus make use of the block-circulant structure of the parity check matrix. A second method and apparatus use block-circulant generator matrices.
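
As a sketch of the underlying structure (not of the patented encoder itself), a block-circulant parity-check matrix can be specified compactly by one cyclic-shift value per block; the shift grid below is made up for illustration:

```python
def circulant(size, shift):
    """size x size circulant permutation block: row i has its single 1
    at column (i + shift) mod size."""
    return [[1 if j == (i + shift) % size else 0 for j in range(size)]
            for i in range(size)]

def block_circulant(shifts, size):
    """Assemble a matrix from a grid of shift values; -1 denotes an
    all-zero block (a common protograph convention)."""
    rows = []
    for shift_row in shifts:
        blocks = [circulant(size, s) if s >= 0 else
                  [[0] * size for _ in range(size)] for s in shift_row]
        for r in range(size):
            rows.append([b[r][c] for b in blocks for c in range(size)])
    return rows

# A 2x3 grid of 4x4 blocks yields an 8x12 parity-check matrix.
H = block_circulant([[0, 1, -1], [2, -1, 0]], size=4)
```

The economy the patent exploits is visible here: the whole matrix is determined by the small grid of shifts, so encoding hardware can work block-by-block instead of storing H explicitly.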

  3. 47 CFR 11.32 - EAS Encoder.

    Science.gov (United States)

    2010-10-01

    ... Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL EMERGENCY ALERT SYSTEM (EAS) Equipment Requirements § 11... operation. (vi) Indicator Display. The encoder shall be provided with a visual and/or aural indicator which... to +50 degrees C and a range of relative humidity of up to 95%. (c) Primary Supply Voltage Variation...

  4. Toward Chemical Implementation of Encoded Combinatorial Libraries

    DEFF Research Database (Denmark)

    Nielsen, John; Janda, Kim D.

    1994-01-01

    The recent application of "combinatorial libraries" to supplement existing drug screening processes might simplify and accelerate the search for new lead compounds or drugs. Recently, a scheme for encoded combinatorial chemistry was put forward to surmount a number of the limitations possessed...

  5. Quantum engineering of continuous variable quantum states

    International Nuclear Information System (INIS)

    Sabuncu, Metin

    2009-01-01

    Quantum information with continuous variables is a field attracting increasing attention recently. In continuous variable quantum information one makes use of the continuous information encoded into the quadrature of a quantized light field instead of binary quantities such as the polarization state of a single photon. This brand new research area is witnessing exciting theoretical and experimental achievements such as teleportation, quantum computation and quantum error correction. The rapid development of the field is mainly due to higher optical data rates and the availability of simple and efficient manipulation tools in continuous-variable quantum information processing. In this thesis we extend the work in continuous variable quantum information processing and report on novel experiments on amplification, cloning, minimal disturbance and noise erasure protocols. The promising results we obtain in these pioneering experiments indicate that the future of continuous variable quantum information is bright and many advances can be foreseen. (orig.)

  6. Quantum engineering of continuous variable quantum states

    Energy Technology Data Exchange (ETDEWEB)

    Sabuncu, Metin

    2009-10-29

    Quantum information with continuous variables is a field attracting increasing attention recently. In continuous variable quantum information one makes use of the continuous information encoded into the quadrature of a quantized light field instead of binary quantities such as the polarization state of a single photon. This brand new research area is witnessing exciting theoretical and experimental achievements such as teleportation, quantum computation and quantum error correction. The rapid development of the field is mainly due to higher optical data rates and the availability of simple and efficient manipulation tools in continuous-variable quantum information processing. In this thesis we extend the work in continuous variable quantum information processing and report on novel experiments on amplification, cloning, minimal disturbance and noise erasure protocols. The promising results we obtain in these pioneering experiments indicate that the future of continuous variable quantum information is bright and many advances can be foreseen. (orig.)

  7. Circuit variability interacts with excitatory-inhibitory diversity of interneurons to regulate network encoding capacity.

    Science.gov (United States)

    Tsai, Kuo-Ting; Hu, Chin-Kun; Li, Kuan-Wei; Hwang, Wen-Liang; Chou, Ya-Hui

    2018-05-23

    Local interneurons (LNs) in the Drosophila olfactory system exhibit neuronal diversity and variability, yet it is still unknown how these features impact information encoding capacity and reliability in a complex LN network. We employed two strategies to construct a diverse excitatory-inhibitory neural network beginning with a ring network structure and then introduced distinct types of inhibitory interneurons and circuit variability to the simulated network. The continuity of activity within the node ensemble (oscillation pattern) was used as a readout to describe the temporal dynamics of network activity. We found that inhibitory interneurons enhance the encoding capacity by protecting the network from extremely short activation periods when the network wiring complexity is very high. In addition, distinct types of interneurons have differential effects on encoding capacity and reliability. Circuit variability may enhance the encoding reliability, with or without compromising encoding capacity. Therefore, we have described how circuit variability of interneurons may interact with excitatory-inhibitory diversity to enhance the encoding capacity and distinguishability of neural networks. In this work, we evaluate the effects of different types and degrees of connection diversity on a ring model, which may simulate interneuron networks in the Drosophila olfactory system or other biological systems.

  8. Between strong continuity and almost continuity

    Directory of Open Access Journals (Sweden)

    J.K. Kohli

    2010-04-01

    Full Text Available As embodied in the title of the paper, strong and weak variants of continuity that lie strictly between strong continuity of Levine and almost continuity due to Singal and Singal are considered. Basic properties of almost completely continuous functions (≡ R-maps) and δ-continuous functions are studied. Direct and inverse transfer of topological properties under almost completely continuous functions and δ-continuous functions are investigated and their place in the hierarchy of variants of continuity that already exist in the literature is outlined. The class of almost completely continuous functions lies strictly between the class of completely continuous functions studied by Arya and Gupta (Kyungpook Math. J. 14 (1974), 131-143) and δ-continuous functions defined by Noiri (J. Korean Math. Soc. 16 (1980), 161-166). The class of almost completely continuous functions properly contains each of the classes of (1) completely continuous functions, and (2) almost perfectly continuous (≡ regular set connected) functions defined by Dontchev, Ganster and Reilly (Indian J. Math. 41 (1999), 139-146) and further studied by Singh (Quaestiones Mathematicae 33(2) (2010), 1-11), which in turn include all δ-perfectly continuous functions initiated by Kohli and Singh (Demonstratio Math. 42(1) (2009), 221-231) and so include all perfectly continuous functions introduced by Noiri (Indian J. Pure Appl. Math. 15(3) (1984), 241-250).

  9. Vessel encoded arterial spin labeling with cerebral perfusion: preliminary study

    International Nuclear Information System (INIS)

    Wu Bing; Xiao Jiangxi; Xie Cheng; Wang Xiaoying; Jiang Xuexiang; Wong, E.C.; Wang Jing; Guo Jia; Zhang Beiru; Zhang Jue; Fang Jing

    2008-01-01

    Objective: To evaluate a noninvasive vessel-encoded imaging method for selective mapping of the flow territories of the left and right internal carotid arteries and the vertebral-basilar arteries. Methods: Seven volunteers [(33.5 ± 4.1) years; 3 men, 4 women] and 6 patients [(55.2 ± 3.2) years; 2 men, 4 women] gave written informed consent approved by the institutional review board before participating in the study. A pseudo-continuous tagging pulse train was modified to encode all vessels of interest, and the selectivity of this method was demonstrated. Regional perfusion imaging was developed on the same arterial spin labeling sequence. Perfusion-weighted images of the selectively labeled cerebral arteries were obtained by subtraction of the labeled images from the control images. The CBF values of the hemisphere, white matter, and gray matter of the volunteers were calculated. The vessel territories in patients were compared with DSA, and the low-perfusion areas were compared with high-signal areas on T2-FLAIR. Results: High-SNR maps of the left carotid, right carotid, and basilar territories were generated in 8 minutes of scan time. Cerebral blood flow values measured with regional perfusion imaging in the complete hemisphere (32.6 ± 4.3) ml·min⁻¹·100 g⁻¹, white matter (10.8 ± 0.9) ml·min⁻¹·100 g⁻¹, and gray matter (55.6 ± 2.9) ml·min⁻¹·100 g⁻¹ were in agreement with data in the literature. Vessel-encoded imaging in patients showed good agreement with DSA. The low-perfusion areas were larger than the high-signal areas on T2-FLAIR. Conclusion: We present a new method capable of evaluating both quantitatively and qualitatively the individual brain-feeding arteries in vivo. (authors)
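
The territory separation rests on linear encoding and decoding: each acquisition applies a known label/control pattern across the vessels, and per-vessel contributions are recovered by inverting that pattern. A toy sketch with a hypothetical four-acquisition ±1 encoding matrix (not the exact scheme of the study):

```python
import numpy as np

# Rows = acquisitions; columns = [static tissue, left ICA, right ICA, basilar].
# +1 means the vessel is in the control state, -1 labeled, in that acquisition.
# This 4-cycle pattern is hypothetical, chosen only to be invertible.
E = np.array([[1.0,  1,  1,  1],   # all vessels control
              [1,   -1, -1, -1],   # all vessels labeled
              [1,   -1,  1,  1],   # label left carotid only
              [1,    1, -1,  1]])  # label right carotid only

true_signal = np.array([100.0, 3.0, 2.5, 1.5])  # tissue + per-vessel perfusion
acquired = E @ true_signal                       # voxel signal per acquisition
recovered = np.linalg.solve(E, acquired)         # per-territory contributions
```

Because every acquisition contributes to every territory estimate, this kind of encoding is more SNR-efficient than labeling one vessel at a time, which is consistent with the short scan time reported above.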

  10. Factors affecting reorganisation of memory encoding networks in temporal lobe epilepsy

    Science.gov (United States)

    Sidhu, M.K.; Stretton, J.; Winston, G.P.; Symms, M.; Thompson, P.J.; Koepp, M.J.; Duncan, J.S.

    2015-01-01

    Summary Aims In temporal lobe epilepsy (TLE) due to hippocampal sclerosis reorganisation in the memory encoding network has been consistently described. Distinct areas of reorganisation have been shown to be efficient when associated with successful subsequent memory formation or inefficient when not associated with successful subsequent memory. We investigated the effect of clinical parameters that modulate memory functions: age at onset of epilepsy, epilepsy duration and seizure frequency in a large cohort of patients. Methods We studied 53 patients with unilateral TLE and hippocampal sclerosis (29 left). All participants performed a functional magnetic resonance imaging memory encoding paradigm of faces and words. A continuous regression analysis was used to investigate the effects of age at onset of epilepsy, epilepsy duration and seizure frequency on the activation patterns in the memory encoding network. Results Earlier age at onset of epilepsy was associated with left posterior hippocampus activations that were involved in successful subsequent memory formation in left hippocampal sclerosis patients. No association of age at onset of epilepsy was seen with face encoding in right hippocampal sclerosis patients. In both left hippocampal sclerosis patients during word encoding and right hippocampal sclerosis patients during face encoding, shorter duration of epilepsy and lower seizure frequency were associated with medial temporal lobe activations that were involved in successful memory formation. Longer epilepsy duration and higher seizure frequency were associated with contralateral extra-temporal activations that were not associated with successful memory formation. Conclusion Age at onset of epilepsy influenced verbal memory encoding in patients with TLE due to hippocampal sclerosis in the speech-dominant hemisphere. 
Shorter duration of epilepsy and lower seizure frequency were associated with less disruption of the efficient memory encoding network whilst

  11. An Intensional Concurrent Faithful Encoding of Turing Machines

    Directory of Open Access Journals (Sweden)

    Thomas Given-Wilson

    2014-10-01

    Full Text Available The benchmark for computation is typically given as Turing computability; the ability for a computation to be performed by a Turing Machine. Many languages exploit (indirect) encodings of Turing Machines to demonstrate their ability to support arbitrary computation. However, these encodings are usually by simulating the entire Turing Machine within the language, or by encoding a language that does an encoding or simulation itself. This second category is typical for process calculi that show an encoding of lambda-calculus (often with restrictions) that in turn simulates a Turing Machine. Such approaches lead to indirect encodings of Turing Machines that are complex, unclear, and only weakly equivalent after computation. This paper presents an approach to encoding Turing Machines into intensional process calculi that is faithful, reduction preserving, and structurally equivalent. The encoding is demonstrated in a simple asymmetric concurrent pattern calculus before being generalised to simplify infinite terms, and to show encodings into Concurrent Pattern Calculus and Psi Calculi.

  12. Temporal information encoding in dynamic memristive devices

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Wen; Chen, Lin; Du, Chao; Lu, Wei D., E-mail: wluee@eecs.umich.edu [Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, Michigan 48109 (United States)

    2015-11-09

    We show temporal and frequency information can be effectively encoded in memristive devices with inherent short-term dynamics. Ag/Ag₂S/Pd based memristive devices with low programming voltage (∼100 mV) were fabricated and tested. At weak programming conditions, the devices exhibit inherent decay due to spontaneous diffusion of the Ag atoms. When the devices were subjected to pulse train inputs emulating different spiking patterns, the switching probability distribution function diverges from the standard Poisson distribution and evolves according to the input pattern. The experimentally observed switching probability distributions and the associated cumulative probability functions can be well-explained using a model accounting for the short-term decay effects. Such devices offer an intriguing opportunity to directly encode neural signals for neural information storage and analysis.
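
The qualitative mechanism - each pulse adds to a device state that decays between pulses, so the temporal pattern of the input shapes the switching probability - can be sketched with a Monte Carlo toy model. All parameters (gain, tau, threshold) here are invented, not fitted to the Ag/Ag2S/Pd devices:

```python
import math
import random

random.seed(1)

def switch_probability(intervals, gain=0.3, tau=5.0, threshold=1.0, trials=500):
    """Monte Carlo toy model: each pulse adds `gain` (with noise) to an
    internal state that decays with time constant `tau` between pulses;
    the device 'switches' once the state crosses `threshold`."""
    switched = 0
    for _ in range(trials):
        state = 0.0
        for dt in intervals:
            state *= math.exp(-dt / tau)            # spontaneous decay
            state += gain * random.uniform(0.5, 1.5)
            if state >= threshold:
                switched += 1
                break
    return switched / trials

p_burst = switch_probability([0.5] * 10)    # tightly spaced pulse train
p_sparse = switch_probability([20.0] * 10)  # widely spaced pulse train
```

The same number of pulses switches the device far more often when the inter-pulse intervals are short, which is the sense in which the short-term decay makes switching statistics pattern-dependent rather than Poissonian.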

  13. DNA-Encoded Dynamic Combinatorial Chemical Libraries.

    Science.gov (United States)

    Reddavide, Francesco V; Lin, Weilin; Lehnert, Sarah; Zhang, Yixin

    2015-06-26

    Dynamic combinatorial chemistry (DCC) explores the thermodynamic equilibrium of reversible reactions. Its application in the discovery of protein binders is largely limited by difficulties in the analysis of complex reaction mixtures. DNA-encoded chemical library (DECL) technology allows the selection of binders from a mixture of up to billions of different compounds; however, experimental results often show a low signal-to-noise ratio and poor correlation between enrichment factor and binding affinity. Herein we describe the design and application of DNA-encoded dynamic combinatorial chemical libraries (EDCCLs). Our experiments have shown that the EDCCL approach can be used not only to convert monovalent binders into high-affinity bivalent binders, but also to cause remarkably enhanced enrichment of potent bivalent binders by driving their in situ synthesis. We also demonstrate the application of EDCCLs in DNA-templated chemical reactions. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Storing data encoded DNA in living organisms

    Science.gov (United States)

    Wong, Pak C.; Wong, Kwong K.; Foote, Harlan P. [Richland, WA

    2006-06-06

    Current technologies allow the generation of artificial DNA molecules and/or the ability to alter the DNA sequences of existing DNA molecules. With a careful coding scheme and arrangement, it is possible to encode important information as an artificial DNA strand and store it in a living host safely and permanently. This inventive technology can be used to identify origins and protect R&D investments. It can also be used in environmental research to track generations of organisms and observe the ecological impact of pollutants. Today, there are microorganisms that can survive under extreme conditions, and it is also advantageous to consider multicellular organisms as hosts for stored information. These living organisms can serve as memory housing, providing protection for the stored data or information. The present invention provides for data storage in a living organism wherein at least one DNA sequence is encoded to represent data and incorporated into a living organism.
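
A minimal example of such a coding scheme - two bits per nucleotide, which is a common convention but not necessarily the one used in this invention:

```python
# Map every 2 bits to one nucleotide; the reverse map decodes.
BASE = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BITS = {v: k for k, v in BASE.items()}

def encode(data: bytes) -> str:
    """Encode arbitrary bytes as a DNA sequence, 4 nucleotides per byte."""
    seq = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            seq.append(BASE[(byte >> shift) & 0b11])
    return "".join(seq)

def decode(seq: str) -> bytes:
    """Recover the original bytes from a DNA sequence."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BITS[base]
        out.append(byte)
    return bytes(out)

strand = encode(b"HI")   # "CAGACAGC"
```

A practical scheme would add error correction and avoid sequences with biological side effects; this sketch shows only the core bits-to-bases arrangement.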

  15. Bacillus caldolyticus prs gene encoding phosphoribosyldiphosphate synthase

    DEFF Research Database (Denmark)

    Krath, Britta N.; Hove-Jensen, Bjarne

    1996-01-01

    The prs gene, encoding phosphoribosyl-diphosphate (PRPP) synthase, as well as the flanking DNA sequences were cloned and sequenced from the Gram-positive thermophile, Bacillus caldolyticus. Comparison with the homologous sequences from the mesophile, Bacillus subtilis, revealed a gene (gcaD) encoding N-acetylglucosamine-1-phosphate uridyltransferase upstream of prs, and a gene homologous to ctc downstream of prs. cDNA synthesis with a B. caldolyticus gcaD-prs-ctc-specified mRNA as template, followed by amplification utilising the polymerase chain reaction, indicated that the three genes are co-transcribed. Comparison of amino acid sequences revealed a high similarity among PRPP synthases across a wide phylogenetic range. An E. coli strain harbouring the B. caldolyticus prs gene in a multicopy plasmid produced PRPP synthase activity 33-fold over the activity of a haploid B. caldolyticus strain. B. caldolyticus...

  16. Nucleic acid compositions and the encoding proteins

    Science.gov (United States)

    Preston, III, James F.; Chow, Virginia; Nong, Guang; Rice, John D.; St. John, Franz J.

    2014-09-02

    The subject invention provides at least one nucleic acid sequence encoding an aldouronate-utilization regulon isolated from Paenibacillus sp. strain JDR-2, a bacterium which efficiently utilizes xylan and metabolizes aldouronates (methylglucuronoxylosaccharides). The subject invention also provides a means for providing a coordinately regulated process in which xylan depolymerization and product assimilation are coupled in Paenibacillus sp. strain JDR-2 to provide a favorable system for the conversion of lignocellulosic biomass to biobased products. Additionally, the nucleic acid sequences encoding the aldouronate-utilization regulon can be used to transform other bacteria to form organisms capable of producing a desired product (e.g., ethanol, 1-butanol, acetoin, 2,3-butanediol, 1,3-propanediol, succinate, lactate, acetate, malate or alanine) from lignocellulosic biomass.

  17. Asymmetric synthesis using chiral-encoded metal

    Science.gov (United States)

    Yutthalekha, Thittaya; Wattanakit, Chularat; Lapeyre, Veronique; Nokbin, Somkiat; Warakulwit, Chompunuch; Limtrakul, Jumras; Kuhn, Alexander

    2016-08-01

    The synthesis of chiral compounds is of crucial importance in many areas of society and science, including medicine, biology, chemistry, biotechnology and agriculture. Thus, there is a fundamental interest in developing new approaches for the selective production of enantiomers. Here we report the use of mesoporous metal structures with encoded geometric chiral information for inducing asymmetry in the electrochemical synthesis of mandelic acid as a model molecule. The chiral-encoded mesoporous metal, obtained by the electrochemical reduction of platinum salts in the presence of a liquid crystal phase and the chiral template molecule, perfectly retains the chiral information after removal of the template. Starting from a prochiral compound we demonstrate enantiomeric excess of the (R)-enantiomer when using (R)-imprinted electrodes and vice versa for the (S)-imprinted ones. Moreover, changing the amount of chiral cavities in the material allows tuning the enantioselectivity.

  18. Optimal Achievable Encoding for Brain Machine Interface

    Science.gov (United States)

    2017-12-22

    ...dictionary-based encoding approach to translate a visual image into sequential patterns of electrical stimulation in real time, in a manner that... ...networks, and by applying linear decoding to complete recorded populations of retinal ganglion cells for the first time. Third, we developed a greedy...

  19. Encoded libraries of chemically modified peptides.

    Science.gov (United States)

    Heinis, Christian; Winter, Greg

    2015-06-01

    The use of powerful technologies for generating and screening DNA-encoded protein libraries has helped drive the development of proteins as pharmaceutical ligands. However the development of peptides as pharmaceutical ligands has been more limited. Although encoded peptide libraries are typically several orders of magnitude larger than classical chemical libraries, can be more readily screened, and can give rise to higher affinity ligands, their use as pharmaceutical ligands is limited by their intrinsic properties. Two of the intrinsic limitations include the rotational flexibility of the peptide backbone and the limited number (20) of natural amino acids. However these limitations can be overcome by use of chemical modification. For example, the libraries can be modified to introduce topological constraints such as cyclization linkers, or to introduce new chemical entities such as small molecule ligands, fluorophores and photo-switchable compounds. This article reviews the chemistry involved, the properties of the peptide ligands, and the new opportunities offered by chemical modification of DNA-encoded peptide libraries. Copyright © 2015. Published by Elsevier Ltd.

  20. Encoding and decoding messages with chaotic lasers

    International Nuclear Information System (INIS)

    Alsing, P.M.; Gavrielides, A.; Kovanis, V.; Roy, R.; Thornburg, K.S. Jr.

    1997-01-01

    We investigate the structure of the strange attractor of a chaotic loss-modulated solid-state laser utilizing return maps based on a combination of intensity maxima and interspike intervals, as opposed to those utilizing Poincaré sections defined by the intensity maxima of the laser (İ = 0, Ï < 0) alone. We find both experimentally and numerically that a simple, intrinsic relationship exists between an intensity maximum and the pair of preceding and succeeding interspike intervals. In addition, we numerically investigate encoding messages on the output of a chaotic transmitter laser and its subsequent decoding by a similar receiver laser. By exploiting the relationship between the intensity maxima and the interspike intervals, we demonstrate that the method utilized to encode the message is vital to the system's ability to hide the signal from unwanted deciphering. In this work alternative methods are studied in order to encode messages by modulating the magnitude of pumping of the transmitter laser and also by driving its loss modulation with more than one frequency. copyright 1997 The American Physical Society
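
Extracting the quantities the return maps are built from - intensity maxima and the interspike intervals between them - can be sketched as follows; the test signal here is synthetic, not laser data:

```python
import math

def maxima_and_intervals(t, intensity):
    """Locate local intensity maxima (a discrete analogue of the
    conditions dI/dt = 0, d2I/dt2 < 0) and the interspike intervals
    between successive maxima."""
    peaks = [(t[i], intensity[i]) for i in range(1, len(intensity) - 1)
             if intensity[i - 1] < intensity[i] > intensity[i + 1]]
    peak_times = [pt for pt, _ in peaks]
    intervals = [b - a for a, b in zip(peak_times, peak_times[1:])]
    return peaks, intervals

# Synthetic spiking trace: fast pulsing under a slowly drifting envelope.
t = [0.01 * k for k in range(1000)]
intensity = [(1 + 0.5 * math.sin(0.5 * x)) * max(0.0, math.sin(8 * x))
             for x in t]
peaks, isi = maxima_and_intervals(t, intensity)
# Peaks recur roughly every 2*pi/8 ≈ 0.785 time units.
```

A return map in the spirit of the abstract would then plot each peak intensity against its preceding and succeeding interval from `peaks` and `isi`.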

  1. Evaluating standard terminologies for encoding allergy information.

    Science.gov (United States)

    Goss, Foster R; Zhou, Li; Plasek, Joseph M; Broverman, Carol; Robinson, George; Middleton, Blackford; Rocha, Roberto A

    2013-01-01

    Allergy documentation and exchange are vital to ensuring patient safety. This study aims to analyze and compare various existing standard terminologies for representing allergy information. Five terminologies were identified, including the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), National Drug File-Reference Terminology (NDF-RT), Medical Dictionary for Regulatory Activities (MedDRA), Unique Ingredient Identifier (UNII), and RxNorm. A qualitative analysis was conducted to compare desirable characteristics of each terminology, including content coverage, concept orientation, formal definitions, multiple granularities, vocabulary structure, subset capability, and maintainability. A quantitative analysis was also performed to compare the content coverage of each terminology for (1) common food, drug, and environmental allergens and (2) descriptive concepts for common drug allergies, adverse reactions (AR), and no known allergies. Our qualitative results show that SNOMED CT fulfilled the greatest number of desirable characteristics, followed by NDF-RT, RxNorm, UNII, and MedDRA. Our quantitative results demonstrate that RxNorm had the highest concept coverage for representing drug allergens, followed by UNII, SNOMED CT, NDF-RT, and MedDRA. For food and environmental allergens, UNII demonstrated the highest concept coverage, followed by SNOMED CT. For representing descriptive allergy concepts and adverse reactions, SNOMED CT and NDF-RT showed the highest coverage. Only SNOMED CT was capable of representing unique concepts for encoding no known allergies. The proper terminology for encoding a patient's allergy is complex, as multiple elements need to be captured to form a fully structured clinical finding. Our results suggest that while gaps still exist, a combination of SNOMED CT and RxNorm can satisfy most criteria for encoding common allergies and provide sufficient content coverage.

  2. 2D Barcode for DNA Encoding

    OpenAIRE

    Elena Purcaru; Cristian Toma

    2011-01-01

    The paper presents a solution for encoding/decoding DNA information in 2D barcodes. The first part focuses on the existing techniques and symbologies in the 2D barcode field. The 2D barcode PDF417 is presented as the starting point. The adaptations and optimizations on PDF417 and on DataMatrix lead to the solution - DNA2DBC - DeoxyriboNucleic Acid Two Dimensional Barcode. The second part shows the DNA2DBC encoding/decoding process step by step. The conclusions enumerate the most important features ...
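
    The DNA2DBC format itself is only summarized above. The generic idea, packing each nucleotide into two bits before handing the byte stream to a 2D barcode encoder such as PDF417 or DataMatrix, can be sketched as follows (the A/C/G/T bit mapping below is an assumption for illustration, not the paper's specification):

```python
# Pack a DNA sequence into bytes at 2 bits per nucleotide, and unpack it.
# The A/C/G/T -> 00/01/10/11 mapping is illustrative, not DNA2DBC's actual table.
ENC = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
DEC = {v: k for k, v in ENC.items()}

def dna_to_bytes(seq):
    out = bytearray()
    for i in range(0, len(seq), 4):          # 4 nucleotides per byte
        byte = 0
        chunk = seq[i:i + 4]
        for base in chunk:
            byte = (byte << 2) | ENC[base]
        out.append(byte << 2 * (4 - len(chunk)))  # left-align a short tail
    return bytes(out)

def bytes_to_dna(data, n):
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(DEC[(byte >> shift) & 0b11])
    return "".join(bases[:n])                # drop padding

seq = "GATTACA"
payload = dna_to_bytes(seq)   # this byte string would feed a 2D barcode encoder
assert bytes_to_dna(payload, len(seq)) == seq
```

The payload bytes would then be passed to any PDF417/DataMatrix library; the 2-bit packing quarters the symbol size compared to encoding the letters as ASCII.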

  3. Dual beam encoded extended fractional Fourier transform security ...

    Indian Academy of Sciences (India)

    This paper describes a simple method for making dual beam encoded extended fractional Fourier transform (EFRT) security holograms. The hologram possesses different stages of encoding so that security features are concealed and remain invisible to the counterfeiter. These concealed and encoded anticounterfeit ...

  4. Optimal higher-order encoder time-stamping

    NARCIS (Netherlands)

    Merry, R.J.E.; Molengraft, van de M.J.G.; Steinbuch, M.

    2013-01-01

    Optical incremental encoders are used to measure the position of motion control systems. The accuracy of the position measurement is determined and bounded by the number of slits on the encoder. The position measurement is affected by quantization errors and encoder imperfections. In this paper, an

  5. Encoding of electrophysiology and other signals in MR images

    DEFF Research Database (Denmark)

    Hanson, Lars G; Lund, Torben E; Hanson, Christian G

    2007-01-01

    to the "magstripe" technique used for encoding of soundtracks in motion pictures, the electrical signals are in this way encoded as artifacts appearing in the MR images or spectra outside the region of interest. The encoded signals are subsequently reconstructed from the signal recorded by the scanner. RESULTS...

  6. V123 Beam Synchronous Encoder Module

    International Nuclear Information System (INIS)

    Kerner, T.; Conkling, C. R.; Oerter, B.

    1999-01-01

    The V123 Synchronous Encoder Module transmits events to distributed trigger modules and embedded decoders around the RHIC rings, where they are used to provide beam instrumentation triggers [1,2,3]. The RHIC beam synchronous event link hardware mainly comprises three VMEbus board designs: the central input modules (V201), the encoder modules (V123), and the distributed trigger modules (V124). Two beam synchronous links, one for each ring, are distributed via fiberoptic cable and fanned out via twisted wire pair cables. The V123 synchronizes with the RF system clock derived from the beam bucket frequency and a revolution fiducial pulse. The RF system clock is used to create the beam synchronous event link carrier, and events are synchronized with the rotation fiducial. A low-jitter RF clock is later recovered from this carrier by phase-locked loops in the trigger modules. Prioritized hardware and software triggers fill up to 15 beam event code transmission slots per revolution while tracking the ramping RF acceleration frequency and storage frequency. The revolution fiducial event is always the first event transmitted; it is used to synchronize the firing of the abort kicker and to locate the first bucket for decoders distributed around the ring.

  7. Place field assembly distribution encodes preferred locations.

    Directory of Open Access Journals (Sweden)

    Omar Mamad

    2017-09-01

    The hippocampus is the main locus of episodic memory formation and the neurons there encode the spatial map of the environment. Hippocampal place cells represent location, but their role in the learning of preferential location remains unclear. The hippocampus may encode locations independently from the stimuli and events that are associated with these locations. We have discovered a unique population code for the experience-dependent value of the context. The degree of reward-driven navigation preference highly correlates with the spatial distribution of the place fields recorded in the CA1 region of the hippocampus. We show place field clustering towards rewarded locations. Optogenetic manipulation of the ventral tegmental area demonstrates that the experience-dependent place field assembly distribution is directed by tegmental dopaminergic activity. The ability of the place cells to remap parallels the acquisition of reward context. Our findings present key evidence that the hippocampal neurons are not merely mapping the static environment but also store the concurrent context reward value, enabling episodic memory for past experience to support future adaptive behavior.

  8. How can survival processing improve memory encoding?

    Science.gov (United States)

    Luo, Meng; Geng, Haiyan

    2013-11-01

    We investigated the psychological mechanism of survival processing advantage from the perspective of false memory in two experiments. Using a DRM paradigm in combination with analysis based on signal detection theory, we were able to separately examine participants' utilization of verbatim representation and gist representation. Specifically, in Experiment 1, participants rated semantically related words in a survival scenario for a survival condition but rated pleasantness of words in the same DRM lists for a non-survival control condition. The results showed that participants demonstrated more gist processing in the survival condition than in the pleasantness condition; however, the degree of item-specific processing in the two encoding conditions did not significantly differ. In Experiment 2, the control task was changed to a category rating task, in which participants were asked to make category ratings of words in the category lists. We found that the survival condition involved more item-specific processing than did the category condition, but we found no significant difference between the two encoding conditions at the level of gist processing. Overall, our study demonstrates that survival processing can simultaneously promote gist and item-specific representations. When the control tasks only promoted either item-specific representation or gist representation, memory advantages of survival processing occurred.

  9. A practical block detector for a depth-encoding PET camera

    International Nuclear Information System (INIS)

    Rogers, J.G.; Moisan, C.; Hoskinson, E.M.; Andreaco, M.S.; Williams, C.W.; Nutt, R.

    1996-01-01

    The depth-of-interaction effect in block detectors degrades the image resolution in commercial PET cameras and impedes the natural evolution of smaller, less expensive cameras. A method for correcting the measured position of each detected gamma ray by measuring its depth-of-interaction was tested and found to recover 38% of the lost resolution at 7.5 cm radius in a tabletop, 50-cm-diameter camera. To obtain the desired depth sensitivity, standard commercial detectors were modified by a simple and practical process that is suitable for mass production of the detectors. The impact of the detector modifications on central image resolution and on the ability of the camera to correct for object scatter was also measured.

  10. Negative base encoding in optical linear algebra processors

    Science.gov (United States)

    Perlee, C.; Casasent, D.

    1986-01-01

    In the digital multiplication by analog convolution algorithm, the bits of two encoded numbers are convolved to form the product of the two numbers in mixed binary representation; this output can be easily converted to binary. Attention is presently given to negative base encoding, treating base -2 initially, and then showing that the negative base system can be readily extended to any radix. In general, negative base encoding in optical linear algebra processors represents a more efficient technique than either sign magnitude or 2's complement encoding, when the additions of digitally encoded products are performed in parallel.
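
    Conversion to a negative base follows the same divide-with-remainder loop as positive radices, except the quotient must be adjusted so each remainder stays non-negative. A minimal sketch for base -2 (any negative radix works the same way):

```python
def to_negabase(n, base=-2):
    """Digits of integer n in a negative base, most significant first."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        n, r = divmod(n, base)
        if r < 0:          # force the remainder into 0..|base|-1
            r -= base
            n += 1
        digits.append(r)
    return digits[::-1]

# 6 in base -2: 1*16 + 1*(-8) + 0*4 + 1*(-2) + 0*1 = 6  ->  digits 11010
print(to_negabase(6))   # [1, 1, 0, 1, 0]
```

Note that negative integers need no sign digit in this system, which is one reason negabinary is attractive for the optical processors discussed above.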

  11. Completely continuous and weakly completely continuous abstract ...

    Indian Academy of Sciences (India)

    An algebra A is called right completely continuous (right weakly completely continuous) ... Moreover, some applications of these results in group algebras are .... A linear subspace S(G) of L1(G) is said to be a Segal algebra, if it satisfies the.

  12. Artificial theta stimulation impairs encoding of contextual fear memory.

    Directory of Open Access Journals (Sweden)

    Arto Lipponen

    Several experiments have demonstrated an intimate relationship between hippocampal theta rhythm (4-12 Hz and memory. Lesioning the medial septum or fimbria-fornix, a fiber track connecting the hippocampus and the medial septum, abolishes the theta rhythm and results in a severe impairment in declarative memory. To assess whether there is a causal relationship between hippocampal theta and memory formation we investigated whether restoration of hippocampal theta by electrical stimulation during the encoding phase also restores fimbria-fornix lesion induced memory deficit in rats in the fear conditioning paradigm. Male Wistar rats underwent sham or fimbria-fornix lesion operation. Stimulation electrodes were implanted in the ventral hippocampal commissure and recording electrodes in the septal hippocampus. Artificial theta stimulation of 8 Hz was delivered during 3-min free exploration of the test cage in half of the rats before aversive conditioning with three foot shocks during 2 min. Memory was assessed by total freezing time in the same environment 24 h and 28 h after fear conditioning, and in an intervening test session in a different context. As expected, fimbria-fornix lesion impaired fear memory and dramatically attenuated hippocampal theta power. Artificial theta stimulation produced continuous theta oscillations that were almost similar to endogenous theta rhythm in amplitude and frequency. However, contrary to our predictions, artificial theta stimulation impaired conditioned fear response in both sham and fimbria-fornix lesioned animals. These data suggest that restoration of theta oscillation per se is not sufficient to support memory encoding after fimbria-fornix lesion and that universal theta oscillation in the hippocampus with a fixed frequency may actually impair memory.

  13. Hexagonal pixel detector with time encoded binary readout

    International Nuclear Information System (INIS)

    Hoedlmoser, H.; Varner, G.; Cooney, M.

    2009-01-01

    The University of Hawaii is developing continuous acquisition pixel (CAP) detectors for vertexing applications in lepton colliding experiments such as SuperBelle or the ILC. In parallel to the investigation of different technology options such as MAPS or SOI, both analog and binary readout concepts have been tested. First results with a binary readout scheme, in which the hit information is time-encoded by means of a signal shifting mechanism, have recently been published. This paper explains the hit reconstruction for such a binary detector, with an emphasis on fake hit reconstruction probabilities, in order to evaluate the rate capability in a high-background environment such as the planned SuperB factory at KEK. The results show that the binary concept is at least comparable to, if not better than, analog readout strategies in terms of occupancy. Furthermore, we present a completely new binary readout strategy in which the pixel cells are arranged in a hexagonal grid, allowing the use of three independent output directions to reduce reconstruction ambiguities. The new concept uses the same signal shifting mechanism for time encoding, but in dedicated transfer lines on the periphery of the detector, which enables higher shifting frequencies. Detailed Monte Carlo simulations of full-size pixel matrices, including hit and background generation, signal generation, and data reconstruction, show that by means of multiple signal transfer lines on the periphery the pixels can be made smaller (higher resolution), the number of output channels and the data volume per triggered event can be reduced dramatically, fake hit reconstruction is lowered to a minimum, and the resulting effective occupancies are less than 10^-4. A prototype detector has been designed in the AMS 0.35 μm Opto process and is currently under fabrication.

  14. Encoding circuit for transform coding of a picture signal and decoding circuit for encoding said signal

    NARCIS (Netherlands)

    1991-01-01

    Encoding circuit for transforming a picture signal into blocks of, for example, 8*8 coefficients, in which each block of coefficients is read motion-adaptively. In the case of motion within a sub-picture, the block of coefficients is read in such an order that the obtained series of coefficients

  15. Video encoder/decoder for encoding/decoding motion compensated images

    NARCIS (Netherlands)

    1996-01-01

    Video encoder and decoder, provided with a motion compensator for motion-compensated video coding or decoding in which a picture is coded or decoded in blocks in alternately horizontal and vertical steps. The motion compensator is provided with addressing means (160) and controlled multiplexers

  16. Quantum key distribution using basis encoding of Gaussian-modulated coherent states

    Science.gov (United States)

    Huang, Peng; Huang, Jingzheng; Zhang, Zheshen; Zeng, Guihua

    2018-04-01

    The continuous-variable quantum key distribution (CVQKD) has been demonstrated to be viable for practical secure quantum cryptography. However, its performance is strongly restricted by the channel excess noise and the reconciliation efficiency. In this paper, we present a quantum key distribution (QKD) protocol that encodes the secret keys in the random choices of two measurement bases: the conjugate quadratures X and P. The employed encoding method can dramatically weaken the effects of channel excess noise and reconciliation efficiency on the performance of the QKD protocol. Consequently, the proposed scheme tolerates much higher excess noise and enables a much longer secure transmission distance even at lower reconciliation efficiency. The proposal can work alternatively to strengthen significantly the performance of the known Gaussian-modulated CVQKD protocol and serve as a multiplier for practical secure quantum cryptography with continuous variables.

  17. Brain Circuits Encoding Reward from Pain Relief.

    Science.gov (United States)

    Navratilova, Edita; Atcherley, Christopher W; Porreca, Frank

    2015-11-01

    Relief from pain in humans is rewarding and pleasurable. Primary rewards, or reward-predictive cues, are encoded in brain reward/motivational circuits. While considerable advances have been made in our understanding of reward circuits underlying positive reinforcement, less is known about the circuits underlying the hedonic and reinforcing actions of pain relief. We review findings from electrophysiological, neuroimaging, and behavioral studies supporting the concept that the rewarding effect of pain relief requires opioid signaling in the anterior cingulate cortex (ACC), activation of midbrain dopamine neurons, and the release of dopamine in the nucleus accumbens (NAc). Understanding of circuits that govern the reward of pain relief may allow the discovery of more effective and satisfying therapies for patients with acute or chronic pain.

  18. Premotor and Motor Cortices Encode Reward.

    Directory of Open Access Journals (Sweden)

    Pavan Ramkumar

    Rewards associated with actions are critical for motivation and learning about the consequences of one's actions on the world. The motor cortices are involved in planning and executing movements, but it is unclear whether they encode reward over and above limb kinematics and dynamics. Here, we report a categorical reward signal in dorsal premotor (PMd) and primary motor (M1) neurons that corresponds to an increase in firing rates when a trial was not rewarded, regardless of whether or not a reward was expected. We show that this signal is unrelated to error magnitude, reward prediction error, or other task confounds such as reward consumption, return reach plan, or kinematic differences across rewarded and unrewarded trials. The availability of reward information in motor cortex is crucial for theories of reward-based learning and motivational influences on actions.

  19. Radiofrequency encoded angular-resolved light scattering

    DEFF Research Database (Denmark)

    Buckley, Brandon W.; Akbari, Najva; Diebold, Eric D.

    2015-01-01

    The sensitive, specific, and label-free classification of microscopic cells and organisms is one of the outstanding problems in biology. Today, instruments such as the flow cytometer use a combination of light scatter measurements at two distinct angles to infer the size and internal complexity...... of cells at rates of more than 10,000 per second. However, by examining the entire angular light scattering spectrum it is possible to classify cells with higher resolution and specificity. Current approaches to performing these angular spectrum measurements all have significant throughput limitations...... Encoded Angular-resolved Light Scattering (REALS), this technique multiplexes angular light scattering in the radiofrequency domain, such that a single photodetector captures the entire scattering spectrum from a particle over approximately 100 discrete incident angles on a single shot basis. As a proof...

  20. Endogenous opioids encode relative taste preference.

    Science.gov (United States)

    Taha, Sharif A; Norsted, Ebba; Lee, Lillian S; Lang, Penelope D; Lee, Brian S; Woolley, Joshua D; Fields, Howard L

    2006-08-01

    Endogenous opioid signaling contributes to the neural control of food intake. Opioid signaling is thought to regulate palatability, the reward value of a food item as determined by orosensory cues such as taste and texture. The reward value of a food reflects not only these sensory properties but also the relative value of competing food choices. In the present experiment, we used a consummatory contrast paradigm to manipulate the relative value of a sucrose solution for two groups of rats. Systemic injection of the nonspecific opioid antagonist naltrexone suppressed sucrose intake; for both groups, however, this suppression was selective, occurring only for the relatively more valuable sucrose solution. Our results indicate that endogenous opioid signaling contributes to the encoding of relative reward value.

  1. Measurement strategy for spatially encoded photonic qubits

    International Nuclear Information System (INIS)

    Solis-Prosser, M. A.; Neves, L.

    2010-01-01

    We propose a measurement strategy which can, probabilistically, reproduce the statistics of any observable for spatially encoded photonic qubits. It comprises the implementation of a two-outcome positive operator-valued measure followed by a detection in a fixed transverse position, making the displacement of the detection system unnecessary, unlike previous methods. This strategy generalizes a scheme recently demonstrated by one of us and co-workers, restricted to measurement of observables with equatorial eigenvectors only. The method presented here can be implemented with the current technology of programmable multipixel liquid-crystal displays. In addition, it can be straightforwardly extended to high-dimensional qudits and may be a valuable tool in optical implementations of quantum information protocols with spatial qubits and qudits.

  2. MPEG-1 low-cost encoder solution

    Science.gov (United States)

    Grueger, Klaus; Schirrmeister, Frank; Filor, Lutz; von Reventlow, Christian; Schneider, Ulrich; Mueller, Gerriet; Sefzik, Nicolai; Fiedrich, Sven

    1995-02-01

    A solution for real-time compression of digital YCrCb video data to an MPEG-1 video data stream has been developed. As an additional option, motion JPEG and video telephone streams (H.261) can be generated. For MPEG-1, up to two bidirectionally predicted images are supported. The required computational power for motion estimation and DCT/IDCT, the memory size, and the memory bandwidth have been the main challenges. The design uses fast-page-mode memory accesses and requires only a single 80 ns EDO-DRAM with 256 X 16 organization for video encoding. This can be achieved only by using adequate access and coding strategies. The architecture consists of an input processing and filter unit, a memory interface, a motion estimation unit, a motion compensation unit, a DCT unit, a quantization control, a VLC unit, and a bus interface. To share the available memory bandwidth among the processing tasks, a fixed schedule for memory accesses is applied, which can be interrupted for asynchronous events. The motion estimation unit implements a highly sophisticated hierarchical search strategy based on block matching. The DCT unit uses a separated fast-DCT flowgraph realized by a switchable hardware unit for both DCT and IDCT operation. By appropriate multiplexing, only one multiplier is required for DCT, quantization, inverse quantization, and IDCT. The VLC unit generates the video stream up to the video sequence layer and is directly coupled with an intelligent bus interface. Thus, the assembly of video, audio, and system data can easily be performed by the host computer. Having relatively low complexity and only small requirements for DRAM circuits, the developed solution can be applied to low-cost encoding products for consumer electronics.
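
    The switchable DCT/IDCT unit described above exploits the fact that the 2-D transform is separable and that the inverse of the orthonormal DCT matrix is its transpose, so one routine can serve both directions. A numeric sketch of that reuse (plain matrix products, not the fast flowgraph the hardware implements):

```python
import math

N = 8

# Orthonormal DCT-II matrix; its transpose is the inverse (DCT-III).
C = [[(math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N))
      * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
      for n in range(N)] for k in range(N)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def transform(block, inverse=False):
    """One routine serves both DCT and IDCT, as in the switchable hardware unit."""
    M = transpose(C) if inverse else C
    return matmul(matmul(M, block), transpose(M))   # separable 2-D transform

block = [[(r * N + c) % 17 for c in range(N)] for r in range(N)]
restored = transform(transform(block), inverse=True)
err = max(abs(block[r][c] - restored[r][c]) for r in range(N) for c in range(N))
print(err < 1e-9)   # the round trip recovers the block
```

The hardware's single shared multiplier corresponds to reusing the same coefficient matrix for forward transform, inverse transform, and (with scaled rows) quantization.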

  3. Providing Continuous Assurance

    NARCIS (Netherlands)

    Kocken, Jonne; Hulstijn, Joris

    2017-01-01

    It has been claimed that continuous assurance can be attained by combining continuous monitoring by management, with continuous auditing of data streams and the effectiveness of internal controls by an external auditor. However, we find that in existing literature the final step to continuous

  4. Modular verification of chemical reaction network encodings via serializability analysis

    Science.gov (United States)

    Lakin, Matthew R.; Stefanovic, Darko; Phillips, Andrew

    2015-01-01

    Chemical reaction networks are a powerful means of specifying the intended behaviour of synthetic biochemical systems. A high-level formal specification, expressed as a chemical reaction network, may be compiled into a lower-level encoding, which can be directly implemented in wet chemistry and may itself be expressed as a chemical reaction network. Here we present conditions under which a lower-level encoding correctly emulates the sequential dynamics of a high-level chemical reaction network. We require that encodings are transactional, such that their execution is divided by a “commit reaction” that irreversibly separates the reactant-consuming phase of the encoding from the product-generating phase. We also impose restrictions on the sharing of species between reaction encodings, based on a notion of “extra tolerance”, which defines species that may be shared between encodings without enabling unwanted reactions. Our notion of correctness is serializability of interleaved reaction encodings, and if all reaction encodings satisfy our correctness properties then we can infer that the global dynamics of the system are correct. This allows us to infer correctness of any system constructed using verified encodings. As an example, we show how this approach may be used to verify two- and four-domain DNA strand displacement encodings of chemical reaction networks, and we generalize our result to the limit where the populations of helper species are unlimited. PMID:27325906
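
    The serializability condition above can be checked exhaustively for small systems: explore every interleaving of the transactional reaction encodings and confirm that each terminal state is also reachable by some serial order of the high-level reactions. A toy sketch with two encoded reactions (state-space exploration only, not the paper's formal proof method; the species names are invented):

```python
from collections import Counter
from itertools import permutations

# High-level CRN: R1: A+B -> C, R2: C+D -> E. Each reaction is encoded
# transactionally: a commit step consumes the reactants into an intermediate,
# then a second step releases the products.
STEPS = [
    ({"A": 1, "B": 1}, {"I1": 1}),   # R1 commit
    ({"I1": 1}, {"C": 1}),           # R1 release
    ({"C": 1, "D": 1}, {"I2": 1}),   # R2 commit
    ({"I2": 1}, {"E": 1}),           # R2 release
]
HIGH_LEVEL = [({"A": 1, "B": 1}, {"C": 1}), ({"C": 1, "D": 1}, {"E": 1})]

def apply(state, reactants, products):
    """Fire one step if enabled; return the new state or None."""
    if any(state[s] < n for s, n in reactants.items()):
        return None
    new = Counter(state)
    new.subtract(reactants)
    new.update(products)
    return +new   # unary + drops zero counts so states compare cleanly

def terminals(state, steps):
    """All terminal states reachable by interleaving the given steps."""
    frontier, seen, ends = [Counter(state)], set(), set()
    while frontier:
        s = frontier.pop()
        key = frozenset(s.items())
        if key in seen:
            continue
        seen.add(key)
        succs = [t for r, p in steps if (t := apply(s, r, p)) is not None]
        if not succs:
            ends.add(key)
        frontier.extend(succs)
    return ends

init = {"A": 1, "B": 1, "C": 1, "D": 1}
serial = set()
for order in permutations(HIGH_LEVEL):
    s = Counter(init)
    for r, p in order:
        nxt = apply(s, r, p)
        s = nxt if nxt is not None else s
    serial.add(frozenset(s.items()))

# Every interleaved terminal state matches a serial outcome: serializable.
print(terminals(init, STEPS) <= serial)
```

This brute-force check only scales to tiny networks; the paper's contribution is precisely the modular conditions (commit reactions, extra tolerance) that make such whole-system exploration unnecessary.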

  5. Encoding plaintext by Fourier transform hologram in double random phase encoding using fingerprint keys

    Science.gov (United States)

    Takeda, Masafumi; Nakano, Kazuya; Suzuki, Hiroyuki; Yamaguchi, Masahiro

    2012-09-01

    It has been shown that biometric information can be used as a cipher key for binary data encryption by applying double random phase encoding. In such methods, binary data are encoded in a bit pattern image, and the decrypted image becomes a plain image when the key is genuine; otherwise, decrypted images become random images. In some cases, images decrypted by imposters may not be fully random, such that the blurred bit pattern can be partially observed. In this paper, we propose a novel bit coding method based on a Fourier transform hologram, which makes images decrypted by imposters more random. Computer experiments confirm that the method increases the randomness of images decrypted by imposters while keeping the false rejection rate as low as in the conventional method.
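
    Double random phase encoding itself is compact to state: multiply the plaintext by one random phase mask, Fourier transform, multiply by a second mask (in the scheme above, derived from a fingerprint), and inverse transform. A toy numpy sketch with random masks standing in for the biometric key:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
img = (rng.random((n, n)) > 0.5).astype(float)   # bit-pattern plaintext

# Two unit-modulus random phase masks; in the scheme above the second
# would be derived from a fingerprint rather than drawn at random.
phi1 = np.exp(2j * np.pi * rng.random((n, n)))
phi2 = np.exp(2j * np.pi * rng.random((n, n)))

def encrypt(x):
    return np.fft.ifft2(np.fft.fft2(x * phi1) * phi2)

def decrypt(c, key):
    # Undo the Fourier-plane mask, then the input-plane mask.
    return np.fft.ifft2(np.fft.fft2(c) * np.conj(key)) * np.conj(phi1)

cipher = encrypt(img)
plain = np.abs(decrypt(cipher, phi2))                       # genuine key
imposter_key = np.exp(2j * np.pi * rng.random((n, n)))
noise = np.abs(decrypt(cipher, imposter_key))               # wrong key

print(np.allclose(plain, img))   # genuine key recovers the bit pattern
```

With the genuine key the round trip is exact (the masks have unit modulus, so conjugation inverts them); with an imposter key the output is noise-like, which is the property the Fourier-hologram bit coding in the paper further strengthens.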

  6. Encoding plaintext by Fourier transform hologram in double random phase encoding using fingerprint keys

    International Nuclear Information System (INIS)

    Takeda, Masafumi; Nakano, Kazuya; Suzuki, Hiroyuki; Yamaguchi, Masahiro

    2012-01-01

    It has been shown that biometric information can be used as a cipher key for binary data encryption by applying double random phase encoding. In such methods, binary data are encoded in a bit pattern image, and the decrypted image becomes a plain image when the key is genuine; otherwise, decrypted images become random images. In some cases, images decrypted by imposters may not be fully random, such that the blurred bit pattern can be partially observed. In this paper, we propose a novel bit coding method based on a Fourier transform hologram, which makes images decrypted by imposters more random. Computer experiments confirm that the method increases the randomness of images decrypted by imposters while keeping the false rejection rate as low as in the conventional method. (paper)

  7. Source-constrained retrieval influences the encoding of new information.

    Science.gov (United States)

    Danckert, Stacey L; MacLeod, Colin M; Fernandes, Myra A

    2011-11-01

    Jacoby, Shimizu, Daniels, and Rhodes (Psychonomic Bulletin & Review, 12, 852-857, 2005) showed that new words presented as foils among a list of old words that had been deeply encoded were themselves subsequently better recognized than new words presented as foils among a list of old words that had been shallowly encoded. In Experiment 1, by substituting a deep-versus-shallow imagery manipulation for the levels-of-processing manipulation, we demonstrated that the effect is robust and that it generalizes, also occurring with a different type of encoding. In Experiment 2, we provided more direct evidence for context-related encoding during tests of deeply encoded words, showing enhanced priming for foils presented among deeply encoded targets when participants made the same deep-encoding judgments on those items as had been made on the targets during study. In Experiment 3, we established that the findings from Experiment 2 are restricted to this specific deep judgment task and are not a general consequence of these foils being associated with deeply encoded items. These findings provide support for the source-constrained retrieval hypothesis of Jacoby, Shimizu, Daniels, and Rhodes: New information can be influenced by how surrounding items are encoded and retrieved, as long as the surrounding items recruit a coherent mode of processing.

  8. Exploring the influence of encoding format on subsequent memory.

    Science.gov (United States)

    Turney, Indira C; Dennis, Nancy A; Maillet, David; Rajah, M Natasha

    2017-05-01

    Distinctive encoding is greatly influenced by gist-based processes and has been shown to suffer when highly similar items are presented in close succession. Thus, elucidating the mechanisms underlying how presentation format affects gist processing is essential in determining the factors that influence these encoding processes. The current study utilised multivariate partial least squares (PLS) analysis to identify encoding networks directly associated with retrieval performance in a blocked and intermixed presentation condition. Subsequent memory analysis for successfully encoded items indicated no significant differences between reaction time and retrieval performance and presentation format. Despite no significant behavioural differences, behaviour PLS revealed differences in brain-behaviour correlations and mean condition activity in brain regions associated with gist-based vs. distinctive encoding. Specifically, the intermixed format encouraged more distinctive encoding, showing increased activation of regions associated with strategy use and visual processing (e.g., frontal and visual cortices, respectively). Alternatively, the blocked format exhibited increased gist-based processes, accompanied by increased activity in the right inferior frontal gyrus. Together, results suggest that the sequence that information is presented during encoding affects the degree to which distinctive encoding is engaged. These findings extend our understanding of the Fuzzy Trace Theory and the role of presentation format on encoding processes.

  9. Business Continuity Management Plan

    Science.gov (United States)

    2014-12-01

    NAVAL POSTGRADUATE SCHOOL, MONTEREY, CALIFORNIA. MBA Professional Report: Business Continuity Management Plan, December 2014. Navy Supply Systems Command (NAVSUP) lacks a business process framework for the development of Business Continuity Management

  10. Temporal encoding in a nervous system.

    Directory of Open Access Journals (Sweden)

    Zane N Aldworth

    2011-05-01

    We examined the extent to which temporal encoding may be implemented by single neurons in the cercal sensory system of the house cricket Acheta domesticus. We found that these neurons exhibit a greater-than-expected coding capacity, due in part to an increased precision in brief patterns of action potentials. We developed linear and non-linear models for decoding the activity of these neurons. We found that the stimuli associated with short-interval patterns of spikes (ISIs of 8 ms or less) could be predicted better by second-order models as compared to linear models. Finally, we characterized the difference between these linear and second-order models in a low-dimensional subspace, and showed that modification of the linear models along only a few dimensions improved their predictive power to parity with the second-order models. Together these results show that single neurons are capable of using temporal patterns of spikes as fundamental symbols in their neural code, and that they communicate specific stimulus distributions to subsequent neural structures.

  11. Chaotic digital communication by encoding initial conditions.

    Science.gov (United States)

    Xiaofeng, Gong; Xingang, Wang; Meng, Zhan; Lai, C H

    2004-06-01

    We investigate the possibility of improving the noise performance of a chaotic digital communication scheme by utilizing further dynamical information. We show that by encoding the initial condition of the chaotic carrier according to the transmitted bits, extra redundancy can be introduced into the segments of chaotic signal corresponding to the consecutive bits. Such redundant information can be exploited effectively at the receiver end to improve the noise performance of the system. Compared to other methods (e.g., differential chaos shift keying), straightforward application of the proposed modulation/demodulation scheme already provides significant performance gain in the low signal-to-noise ratio (SNR) region. Furthermore, a maximum-likelihood precleaning procedure based on the Viterbi algorithm can be applied before the demodulation step to overcome the performance degradation in the high-SNR region. The study indicates that it is possible to improve the noise performance of the chaotic digital communication scheme if further dynamical information is added to the system. (c) 2004 American Institute of Physics
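
    A minimal way to see "encoding initial conditions" is with the chaotic doubling map x -> 2x mod 1, whose iterates read out the binary expansion of the initial point, so the message bits can simply be planted in x0. This is a didactic stand-in for the paper's scheme, not the scheme itself (and a noiseless one):

```python
def encode(bits):
    """Plant the message bits in the initial condition of the doubling map."""
    return sum(b * 2.0 ** -(i + 1) for i, b in enumerate(bits))

def decode(x0, nbits):
    """Iterate x -> 2x mod 1; thresholding each iterate at 1/2 reads one bit."""
    bits, x = [], x0
    for _ in range(nbits):
        bits.append(1 if x >= 0.5 else 0)
        x = (2 * x) % 1.0
    return bits

msg = [1, 0, 1, 1, 0, 0, 1, 0]
x0 = encode(msg)          # the chaotic carrier's initial condition
assert decode(x0, len(msg)) == msg
```

Because short dyadic expansions and doubling are exact in binary floating point, the round trip is lossless here; the paper's contribution concerns exploiting this kind of redundancy when the transmitted chaotic segments are corrupted by channel noise.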

  12. Peafowl antipredator calls encode information about signalers.

    Science.gov (United States)

    Yorzinski, Jessica L

    2014-02-01

    Animals emit vocalizations that convey information about external events. Many of these vocalizations, including those emitted in response to predators, also encode information about the individual that produced the call. The relationship between acoustic features of antipredator calls and information relating to signalers (including sex, identity, body size, and social rank) was examined in peafowl (Pavo cristatus). The "bu-girk" antipredator calls of male and female peafowl were recorded and 20 acoustic parameters were automatically extracted from each call. Both the bu and girk elements of the antipredator call were individually distinctive and calls were classified to the correct signaler with over 90% and 70% accuracy in females and males, respectively. Females produced calls with a higher fundamental frequency (F0) than males. In both females and males, body size was negatively correlated with F0. In addition, peahen rank was related to the duration, end mean frequency, and start harmonicity of the bu element. Peafowl antipredator calls contain detailed information about the signaler and can potentially be used by receivers to respond to dangerous situations.

  13. Dynamical encoding of looming, receding, and focussing

    Science.gov (United States)

    Longtin, Andre; Clarke, Stephen Elisha; Maler, Leonard; Center for Neural Dynamics Collaboration

    This talk will discuss a non-conventional neural coding task that may apply more broadly to many senses in higher vertebrates. We ask whether and how a non-visual sensory system can focus on an object. We present recent experimental and modeling work that shows how the early sensory circuitry of electric sense can perform such neuronal focusing that is manifested behaviorally. This sense is the main one used by weakly electric fish to navigate, locate prey and communicate in the murky waters of their natural habitat. We show that there is a distance at which the Fisher information of a neuron's response to a looming and receding object is maximized, and that this distance corresponds to a behaviorally relevant one chosen by these animals. Strikingly, this maximum occurs at a bifurcation between tonic firing and bursting. We further discuss how the invariance of this distance to signal attributes can arise, a process that first involves power-law spike frequency adaptation. The talk will also highlight the importance of expanding the classic dual neural encoding of contrast using ON and OFF cells in the context of looming and receding stimuli. The authors acknowledge support from CIHR and NSERC.
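    The claim that Fisher information about object distance peaks at a particular, interior distance can be illustrated numerically for a Poisson-spiking model neuron. The sigmoidal tuning curve and all parameter values below are made up for the sketch; for a Poisson neuron the Fisher information about distance is I(d) = f'(d)^2 / f(d) per unit observation time.

```python
import numpy as np

# Illustrative tuning curve: firing rate falls off sigmoidally with
# object distance (all parameters are invented for this sketch).
d = np.linspace(0.0, 10.0, 2001)
r0, rmax, d0, s = 2.0, 40.0, 5.0, 0.8
rate = r0 + rmax / (1.0 + np.exp((d - d0) / s))

# Fisher information for Poisson spiking: I(d) = f'(d)^2 / f(d).
drate = np.gradient(rate, d)
fisher = drate**2 / rate

d_star = d[np.argmax(fisher)]  # distance of maximal sensitivity
```

    The peak sits strictly beyond the tuning-curve midpoint d0, because dividing by the (smaller) firing rate there boosts the information; the paper's point is that the behaviourally chosen distance coincides with such a maximum, at a bifurcation between tonic firing and bursting.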

  14. Interdependent processing and encoding of speech and concurrent background noise.

    Science.gov (United States)

    Cooper, Angela; Brouwer, Susanne; Bradlow, Ann R

    2015-05-01

    Speech processing can often take place in adverse listening conditions that involve the mixing of speech and background noise. In this study, we investigated processing dependencies between background noise and indexical speech features, using a speeded classification paradigm (Garner, 1974; Exp. 1), and whether background noise is encoded and represented in memory for spoken words in a continuous recognition memory paradigm (Exp. 2). Whether or not the noise spectrally overlapped with the speech signal was also manipulated. The results of Experiment 1 indicated that background noise and indexical features of speech (gender, talker identity) cannot be completely segregated during processing, even when the two auditory streams are spectrally nonoverlapping. Perceptual interference was asymmetric, whereby irrelevant indexical feature variation in the speech signal slowed noise classification to a greater extent than irrelevant noise variation slowed speech classification. This asymmetry may stem from the fact that speech features have greater functional relevance to listeners, and are thus more difficult to selectively ignore than background noise. Experiment 2 revealed that a recognition cost for words embedded in different types of background noise on the first and second occurrences only emerged when the noise and the speech signal were spectrally overlapping. Together, these data suggest integral processing of speech and background noise, modulated by the level of processing and the spectral separation of the speech and noise.

  15. Smarandache Continued Fractions

    OpenAIRE

    Ibstedt, H.

    2001-01-01

    The theory of general continued fractions is developed to the extent required in order to calculate Smarandache continued fractions to a given number of decimal places. Proof is given for the fact that Smarandache general continued fractions built with positive integer Smarandache sequences having only a finite number of terms equal to 1 are convergent. A few numerical results are given.
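    The convergents of a simple continued fraction can be computed with the standard three-term recurrence. Here it is checked against the classical value [1; 2, 2, 2, ...] = sqrt(2); a Smarandache sequence would simply supply a different list of partial quotients.

```python
from fractions import Fraction

def continued_fraction_value(terms):
    """Evaluate [a0; a1, a2, ...] via the convergent recurrences
    h_n = a_n*h_{n-1} + h_{n-2},  k_n = a_n*k_{n-1} + k_{n-2}."""
    h_prev, h = 1, terms[0]
    k_prev, k = 0, 1
    for a in terms[1:]:
        h, h_prev = a * h + h_prev, h
        k, k_prev = a * k + k_prev, k
    return Fraction(h, k)

# Sanity check against a classical value: [1; 2, 2, 2, ...] -> sqrt(2).
approx = continued_fraction_value([1] + [2] * 30)
```

    Convergence is rapid: the error after n partial quotients shrinks roughly like the inverse square of the denominator, which is why only a modest number of terms is needed for a given number of decimal places.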

  16. Plants under continuous light

    NARCIS (Netherlands)

    Velez Ramirez, A.I.; Ieperen, van W.; Vreugdenhill, D.; Millenaar, F.F.

    2011-01-01

    Continuous light is an essential tool for understanding the plant circadian clock. Additionally, continuous light might increase greenhouse food production. However, using continuous light in research and practice has its challenges. For instance, most of the circadian clock-oriented experiments

  17. Beyond initial encoding: Measures of the post-encoding status of memory traces predict long-term recall in infancy

    OpenAIRE

    Pathman, Thanujeni; Bauer, Patricia J.

    2012-01-01

    The first years of life are witness to rapid changes in long-term recall ability. In the present research, we contributed to explanation of the changes by testing the absolute and relative contributions to long-term recall of encoding and post-encoding processes. Using elicited imitation, we sampled the status of 16-, 20-, and 24-month-old infants’ memory representations at various time points after experience of events. In Experiment 1, infants were tested immediately, 1 week after encoding,...

  18. Spacetime replication of continuous variable quantum information

    International Nuclear Information System (INIS)

    Hayden, Patrick; Nezami, Sepehr; Salton, Grant; Sanders, Barry C

    2016-01-01

    The theory of relativity requires that no information travel faster than light, whereas the unitarity of quantum mechanics ensures that quantum information cannot be cloned. These conditions provide the basic constraints that appear in information replication tasks, which formalize aspects of the behavior of information in relativistic quantum mechanics. In this article, we provide continuous variable (CV) strategies for spacetime quantum information replication that are directly amenable to optical or mechanical implementation. We use a new class of homologically constructed CV quantum error correcting codes to provide efficient solutions for the general case of information replication. As compared to schemes encoding qubits, our CV solution requires half as many shares per encoded system. We also provide an optimized five-mode strategy for replicating quantum information in a particular configuration of four spacetime regions designed not to be reducible to previously performed experiments. For this optimized strategy, we provide detailed encoding and decoding procedures using standard optical apparatus and calculate the recovery fidelity when finite squeezing is used. As such we provide a scheme for experimentally realizing quantum information replication using quantum optics. (paper)

  19. Stress as a mnemonic filter: Interactions between medial temporal lobe encoding processes and post-encoding stress.

    Science.gov (United States)

    Ritchey, Maureen; McCullough, Andrew M; Ranganath, Charan; Yonelinas, Andrew P

    2017-01-01

    Acute stress has been shown to modulate memory for recently learned information, an effect attributed to the influence of stress hormones on medial temporal lobe (MTL) consolidation processes. However, little is known about which memories will be affected when stress follows encoding. One possibility is that stress interacts with encoding processes to selectively protect memories that had elicited responses in the hippocampus and amygdala, two MTL structures important for memory formation. There is limited evidence for interactions between encoding processes and consolidation effects in humans, but recent studies of consolidation in rodents have emphasized the importance of encoding "tags" for determining the impact of consolidation manipulations on memory. Here, we used functional magnetic resonance imaging in humans to test the hypothesis that the effects of post-encoding stress depend on MTL processes observed during encoding. We found that changes in stress hormone levels were associated with an increase in the contingency of memory outcomes on hippocampal and amygdala encoding responses. That is, for participants showing high cortisol reactivity, memories became more dependent on MTL activity observed during encoding, thereby shifting the distribution of recollected events toward those that had elicited relatively high activation. Surprisingly, this effect was generally larger for neutral, compared to emotionally negative, memories. The results suggest that stress does not uniformly enhance memory, but instead selectively preserves memories tagged during encoding, effectively acting as a mnemonic filter. © 2016 Wiley Periodicals, Inc.

  20. Olfactory bulb encoding during learning under anaesthesia

    Directory of Open Access Journals (Sweden)

    Alister U Nicol

    2014-06-01

    Full Text Available Neural plasticity changes within the olfactory bulb are important for olfactory learning, although how neural encoding changes support new associations with specific odours and whether they can be investigated under anaesthesia, remain unclear. Using the social transmission of food preference olfactory learning paradigm in mice in conjunction with in vivo microdialysis sampling we have shown firstly that a learned preference for a scented food odour smelled on the breath of a demonstrator animal occurs under isoflurane anaesthesia. Furthermore, subsequent exposure to this cued odour under anaesthesia promotes the same pattern of increased release of glutamate and GABA in the olfactory bulb as previously found in conscious animals following olfactory learning, and evoked GABA release was positively correlated with the amount of scented food eaten. In a second experiment, multiarray (24 electrode) electrophysiological recordings were made from olfactory bulb mitral cells under isoflurane anaesthesia before, during and after a novel scented food odour was paired with carbon disulfide. Results showed significant increases in overall firing frequency to the cued-odour during and after learning and decreases in response to an uncued odour. Analysis of patterns of changes in individual neurons revealed that a substantial proportion (>50% of them significantly changed their response profiles during and after learning with most of those previously inhibited becoming excited. A large number of cells exhibiting no response to the odours prior to learning were either excited or inhibited afterwards. With the uncued odour many previously responsive cells became unresponsive or inhibited. Learning associated changes only occurred in the posterior part of the olfactory bulb. Thus olfactory learning under anaesthesia promotes extensive, but spatially distinct, changes in mitral cell networks to both cued and uncued odours as well as in evoked glutamate and

  1. Encoding and Decoding Models in Cognitive Electrophysiology

    Directory of Open Access Journals (Sweden)

    Christopher R. Holdgraf

    2017-09-01

    Full Text Available Cognitive neuroscience has seen rapid growth in the size and complexity of data recorded from the human brain as well as in the computational tools available to analyze this data. This data explosion has resulted in an increased use of multivariate, model-based methods for asking neuroscience questions, allowing scientists to investigate multiple hypotheses with a single dataset, to use complex, time-varying stimuli, and to study the human brain under more naturalistic conditions. These tools come in the form of “Encoding” models, in which stimulus features are used to model brain activity, and “Decoding” models, in which neural features are used to generate a stimulus output. Here we review the current state of encoding and decoding models in cognitive electrophysiology and provide a practical guide toward conducting experiments and analyses in this emerging field. Our examples focus on using linear models in the study of human language and audition. We show how to calculate auditory receptive fields from natural sounds as well as how to decode neural recordings to predict speech. The paper aims to be a useful tutorial to these approaches, and a practical introduction to using machine learning and applied statistics to build models of neural activity. The data analytic approaches we discuss may also be applied to other sensory modalities, motor systems, and cognitive systems, and we cover some examples in these areas. In addition, a collection of Jupyter notebooks is publicly available as a complement to the material covered in this paper, providing code examples and tutorials for predictive modeling in Python. The aim is to provide a practical understanding of predictive modeling of human brain data and to propose best-practices in conducting these analyses.
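    A minimal example of the "encoding model" idea described above: fit a linear receptive field from stimulus features to a response with ridge regression. The synthetic features, receptive-field shape, and noise level are invented; a real analysis would substitute spectrogram features and recorded neural activity, as in the paper's tutorials.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "spectrogram" features (time x frequency bins) and a neural
# response generated from a known receptive field plus noise.
n_time, n_feat = 2000, 20
X = rng.normal(size=(n_time, n_feat))
w_true = np.sin(np.linspace(0, np.pi, n_feat))   # made-up RF shape
y = X @ w_true + 0.5 * rng.normal(size=n_time)

def ridge(X, y, alpha):
    """Closed-form ridge regression: w = (X'X + alpha*I)^{-1} X'y."""
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

w_hat = ridge(X, y, alpha=10.0)
corr = np.corrcoef(w_hat, w_true)[0, 1]
```

    The regularization term (alpha) trades off fit against smoothness of the estimated receptive field; in practice it is chosen by cross-validation rather than fixed as here.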

  2. Stereoscopic radiographic images with gamma source encoding

    International Nuclear Information System (INIS)

    Strocovsky, S.G.; Otero, D

    2012-01-01

    Conventional radiography with an X-ray tube has several drawbacks, such as the compromise between the size of the focal spot and the fluence. The finite dimensions of the focal spot impose a limit on the spatial resolution. Gamma radiography uses gamma-ray sources which surpass X-ray tubes in size, portability and simplicity. However, its low intrinsic fluence forces the use of extended sources that also degrade the spatial resolution. In this work, we show the principles of a new radiographic technique that overcomes the limitations associated with the finite dimensions of X-ray sources, and that offers additional benefits over conventional techniques. The new technique, called coding source imaging (CSI), is based on the use of extended sources, edge-encoding of radiation and differential detection. The mathematical principles and the method of image reconstruction with the newly proposed technique are explained in the present work. Analytical calculations were made to determine the maximum spatial resolution and the variables on which it depends. The CSI technique was tested by means of Monte Carlo simulations with sets of spherical objects. We show that CSI has stereoscopic capabilities and that it can resolve objects smaller than the source size. The CSI decoding algorithm reconstructs simultaneously four different projections from the same object, while conventional radiography produces only one projection per acquisition. Projections are located in separate image fields on the detector plane. Our results show it is possible to apply an extremely simple radiographic technique with extended sources, and get 3D information of the attenuation coefficient distribution for simple geometry objects in a single acquisition. The results are promising enough to evaluate the possibility of future research with more complex objects typical of medical diagnostic radiography and industrial gamma radiography (author)

  3. The role of depth of encoding in attentional capture

    NARCIS (Netherlands)

    Sasin, Edyta; Nieuwenstein, Mark; Johnson, Addie

    2015-01-01

    The aim of the current study was to examine whether depth of encoding influences attentional capture by recently attended objects. In Experiment 1, participants first had to judge whether a word referred to a living or a nonliving thing (deep encoding condition) or whether the word was written in

  4. Encoding Effects on First-Graders' Use of Manipulatives

    Science.gov (United States)

    Osana, Helena P.; Przednowek, Katarzyna; Cooperman, Allyson; Adrien, Emmanuelle

    2018-01-01

    The effects of prior encodings of manipulatives (red and blue plastic chips) on children's ability to use them as representations of quantity were tested. First graders (N = 73) were assigned to four conditions in which the encoding of plastic chips was experimentally manipulated. All children then participated in an addition activity that relied…

  5. The Contribution of Encoding and Retrieval Processes to Proactive Interference

    Science.gov (United States)

    Kliegl, Oliver; Pastötter, Bernhard; Bäuml, Karl-Heinz T.

    2015-01-01

    Proactive interference (PI) refers to the finding that memory for recently studied (target) material can be impaired by the prior study of other (nontarget) material. Previous accounts of PI differed in whether they attributed PI to impaired retrieval or impaired encoding. Here, we suggest an integrated encoding-retrieval account, which assigns a…

  6. Evaluation of color encodings for high dynamic range pixels

    Science.gov (United States)

    Boitard, Ronan; Mantiuk, Rafal K.; Pouli, Tania

    2015-03-01

    Traditional Low Dynamic Range (LDR) color spaces encode a small fraction of the visible color gamut, which does not encompass the range of colors produced on upcoming High Dynamic Range (HDR) displays. Future imaging systems will require encoding a much wider color gamut and luminance range. Such a wide color gamut can be represented using floating-point HDR pixel values, but those are inefficient to encode. They also lack perceptual uniformity of the luminance and color distribution, which is provided (in approximation) by most LDR color spaces. Therefore, there is a need to devise an efficient, perceptually uniform and integer-valued representation for high dynamic range pixel values. In this paper we evaluate several methods for encoding colour HDR pixel values, in particular for use in image and video compression. Unlike other studies we test both luminance and color difference encoding in rigorous 4AFC threshold experiments to determine the minimum bit-depth required. Results show that the Perceptual Quantizer (PQ) encoding provides the best perceptual uniformity in the considered luminance range; however, the gain in bit-depth is rather modest. More significant differences can be observed between color difference encoding schemes, from which YDuDv encoding seems to be the most efficient.
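    The Perceptual Quantizer mentioned in the results is the SMPTE ST 2084 transfer function. A sketch of its forward (inverse-EOTF, luminance to code value) and inverse mappings, using the published constants:

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(luminance_nits):
    """Absolute luminance (cd/m^2, 0..10000) -> nonlinear PQ signal (0..1)."""
    y = np.asarray(luminance_nits, dtype=float) / 10000.0
    yp = np.power(y, M1)
    return np.power((C1 + C2 * yp) / (1.0 + C3 * yp), M2)

def pq_decode(signal):
    """Nonlinear PQ signal (0..1) -> absolute luminance in cd/m^2."""
    vp = np.power(np.asarray(signal, dtype=float), 1.0 / M2)
    y = np.power(np.maximum(vp - C1, 0.0) / (C2 - C3 * vp), 1.0 / M1)
    return 10000.0 * y
```

    Quantizing the PQ signal to n bits gives the integer-valued, approximately perceptually uniform luminance encoding that the experiments evaluate; peak luminance of 10000 cd/m^2 maps exactly to code value 1.0.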

  7. Interaction Between Encoding and Retrieval Operations in Cued Recall

    Science.gov (United States)

    Fisher, Ronald P.; Craik, Fergus I. M.

    1977-01-01

    Three experiments are described in which the qualitative nature of memorial processing was manipulated at both input (encoding) and output (retrieval). As in earlier research, it was found that retention levels were highest when the same type of information was used as a retrieval cue. Concludes that the notions of encoding specificity and depth…

  8. On The Designed And Constructed Feedback Shift-Register Encoder

    African Journals Online (AJOL)

    An encoder capable of cyclical shifting of data, and which can therefore be used for Bose-Chaudhuri and Hocquenghem (BCH) coding, has been designed and constructed using discrete components. It comprises basically four bistable multivibrators and an exclusive-OR device. On completion, the encoder performed ...
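    The cyclical shifting a feedback shift register performs is, algebraically, polynomial division over GF(2). A software sketch of systematic cyclic encoding using the small generator g(x) = x^3 + x + 1, a classical (7,4)-code example chosen for illustration (not necessarily the generator used in the constructed encoder):

```python
# Systematic cyclic-code encoding by polynomial division over GF(2), the
# operation a feedback shift-register encoder implements in hardware.
G = 0b1011        # g(x) = x^3 + x + 1
R = 3             # deg g(x) = number of parity bits

def cyclic_encode(message, k=4):
    """Return the n = k + R bit systematic codeword: the message followed by
    the remainder of message(x) * x^R modulo g(x)."""
    rem = message << R
    for shift in range(k - 1, -1, -1):
        if rem & (1 << (shift + R)):
            rem ^= G << shift
    return (message << R) | rem

def syndrome(word, n=7):
    """Remainder of a received word modulo g(x); zero for valid codewords."""
    rem = word
    for shift in range(n - R - 1, -1, -1):
        if rem & (1 << (shift + R)):
            rem ^= G << shift
    return rem

codeword = cyclic_encode(0b1001)   # message 1001 -> codeword 1001110
```

    In the hardware encoder, the bistable multivibrators hold the running remainder and the exclusive-OR device implements the subtraction of g(x); the loop above mirrors one shift of that register per message bit.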

  9. Distinctiveness of Encoding and Memory for Learning Tasks.

    Science.gov (United States)

    Glover, John A.; And Others

    1982-01-01

    A distinctiveness of encoding hypothesis, as applied to the facilitative effects that higher order objectives have on readers' prose recall, was evaluated in three experiments. Results suggest that distinctiveness of encoding may offer a theoretical basis for the effects of adjunct aids as well as a guide to their construction. (Author/GK)

  10. Decoding and Encoding Facial Expressions in Preschool-Age Children.

    Science.gov (United States)

    Zuckerman, Miron; Przewuzman, Sylvia J.

    1979-01-01

    Preschool-age children drew, decoded, and encoded facial expressions depicting five different emotions. Accuracy of drawing, decoding and encoding each of the five emotions was consistent across the three tasks; decoding ability was correlated with drawing ability among female subjects, but neither of these abilities was correlated with encoding…

  11. On The Designed And Constructed Feedback Shift-Register Encoder

    African Journals Online (AJOL)

    Information transmission in noisy channels can be achieved with vanishingly small probability of error by proper coding of the information as long as the encoding rate is less than the channel capacity. An encoder capable of cyclical shifting of data, and which can therefore be used for Bose-Chaudhuri and Hocquenghem ...

  12. A SSVEP Stimuli Encoding Method Using Trinary Frequency-Shift Keying Encoded SSVEP (TFSK-SSVEP)

    Directory of Open Access Journals (Sweden)

    Xing Zhao

    2017-06-01

    Full Text Available SSVEP is a kind of BCI technology with the advantage of a high information transfer rate. However, due to its nature, the frequencies that can be used as stimuli are scarce. To solve this problem, a stimuli encoding method which encodes the SSVEP signal using the Frequency-Shift Keying (FSK) method is developed. In this method, each stimulus is controlled by a FSK signal which contains three different frequencies that represent “Bit 0,” “Bit 1” and “Bit 2” respectively. Unlike common BFSK in digital communication, “Bit 0” and “Bit 1” compose the unique identifier of a stimulus in binary bit-stream form, while “Bit 2” indicates the end of a stimulus encoding. The EEG signal is acquired on channels Oz, O1, O2, Pz, P3, and P4, using an ADS1299 at a sample rate of 250 SPS. Before the original EEG signal is quadrature demodulated, it is detrended and then band-pass filtered using FFT-based FIR filtering to remove interference. Valid peaks of the processed signal are acquired by calculating its derivative and converted into a bit stream using a window method. Theoretically, this coding method could implement at least 2^(n−1) stimuli (where n is the length of the bit command) while keeping the ITR the same. This method is suitable for implementing stimuli on a monitor, for settings where the frequencies and phases available to code stimuli are limited, and for implementing portable BCI devices that are not capable of performing complex calculations.
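    A toy software model of decoding such a frequency-keyed stimulus stream: each symbol segment is classified by which of the three candidate frequencies carries the most spectral power (an FFT stand-in for the quadrature demodulation described above). The three frequencies and the one-second segment length are assumptions made for the sketch; only the 250 SPS sample rate comes from the abstract.

```python
import numpy as np

FS = 250.0                   # sample rate (SPS), as in the abstract
FREQS = [8.0, 10.0, 12.0]    # hypothetical frequencies for Bit 0/1/2
SEG = 250                    # one-second segment per symbol (assumed)

def encode(symbols):
    t = np.arange(SEG) / FS
    return np.concatenate([np.sin(2 * np.pi * FREQS[s] * t) for s in symbols])

def decode(signal):
    """Classify each segment by the largest spectral power among the
    three candidate frequencies."""
    out = []
    freq_axis = np.fft.rfftfreq(SEG, d=1.0 / FS)
    bins = [int(np.argmin(np.abs(freq_axis - f))) for f in FREQS]
    for k in range(len(signal) // SEG):
        spec = np.abs(np.fft.rfft(signal[k * SEG:(k + 1) * SEG]))
        out.append(int(np.argmax([spec[b] for b in bins])))
    return out

symbols = [0, 1, 0, 2]        # "010" identifier terminated by Bit 2
decoded = decode(encode(symbols))
```

    On a clean synthetic signal the decoder recovers the symbol sequence exactly; real EEG would require the detrending, band-pass filtering, and peak-validation steps the paper describes.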

  13. Grammatical constraints on phonological encoding in speech production.

    Science.gov (United States)

    Heller, Jordana R; Goldrick, Matthew

    2014-12-01

    To better understand the influence of grammatical encoding on the retrieval and encoding of phonological word-form information during speech production, we examine how grammatical class constraints influence the activation of phonological neighbors (words phonologically related to the target--e.g., MOON, TWO for target TUNE). Specifically, we compare how neighbors that share a target's grammatical category (here, nouns) influence its planning and retrieval, assessed by picture naming latencies, and phonetic encoding, assessed by word productions in picture names, when grammatical constraints are strong (in sentence contexts) versus weak (bare naming). Within-category (noun) neighbors influenced planning time and phonetic encoding more strongly in sentence contexts. This suggests that grammatical encoding constrains phonological processing; the influence of phonological neighbors is grammatically dependent. Moreover, effects on planning times could not fully account for phonetic effects, suggesting that phonological interaction affects articulation after speech onset. These results support production theories integrating grammatical, phonological, and phonetic processes.

  14. Convolutional over Recurrent Encoder for Neural Machine Translation

    Directory of Open Access Journals (Sweden)

    Dakwale Praveen

    2017-06-01

    Full Text Available Neural machine translation is a recently proposed approach which has shown results competitive with traditional MT approaches. Standard neural MT is an end-to-end neural network where the source sentence is encoded by a recurrent neural network (RNN) called the encoder and the target words are predicted using another RNN known as the decoder. Recently, various models have been proposed which replace the RNN encoder with a convolutional neural network (CNN). In this paper, we propose to augment the standard RNN encoder in NMT with additional convolutional layers in order to capture wider context in the encoder output. Experiments on English-to-German translation demonstrate that our approach can achieve significant improvements over a standard RNN-based baseline.
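    The architecture can be sketched in a few lines of NumPy: a recurrent pass produces one hidden state per source token, and a convolution along the time axis then mixes a window of neighbouring states, widening the encoder's receptive field. Dimensions and random weights are placeholders, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(2)

def rnn_encode(x_seq, W_x, W_h):
    """Minimal tanh RNN: returns one hidden state per source token."""
    h = np.zeros(W_h.shape[0])
    states = []
    for x in x_seq:
        h = np.tanh(W_x @ x + W_h @ h)
        states.append(h)
    return np.stack(states)                      # (seq_len, hidden)

def conv1d_over_states(states, kernels, width=3):
    """Convolve along the time axis so each output position mixes a
    window of RNN states."""
    pad = width // 2
    padded = np.pad(states, ((pad, pad), (0, 0)))
    seq_len = states.shape[0]
    out = np.empty((seq_len, kernels.shape[0]))
    for t in range(seq_len):
        window = padded[t:t + width].ravel()     # (width * hidden,)
        out[t] = kernels @ window
    return out

emb, hidden, n_filters, seq_len = 8, 16, 16, 10
x_seq = rng.normal(size=(seq_len, emb))
W_x = rng.normal(size=(hidden, emb)) * 0.3
W_h = rng.normal(size=(hidden, hidden)) * 0.3
kernels = rng.normal(size=(n_filters, 3 * hidden)) * 0.1

states = rnn_encode(x_seq, W_x, W_h)
encoder_out = conv1d_over_states(states, kernels)
```

    The decoder would attend over `encoder_out` instead of the raw RNN states, so each attended position already summarizes a local window of source context.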

  15. Improved entropy encoding for high efficient video coding standard

    Directory of Open Access Journals (Sweden)

    B.S. Sunil Kumar

    2018-03-01

    Full Text Available The High Efficiency Video Coding (HEVC) standard has better coding efficiency, but its encoding performance has to be improved to meet the demands of growing multimedia applications. This paper improves standard entropy encoding by introducing optimized weighting parameters, so that a higher rate of compression can be accomplished than with standard entropy encoding. The optimization is performed using the recently introduced firefly algorithm. The experimentation is carried out using eight benchmark video sequences and the PSNR for varying rates of data transmission is investigated. Comparative analysis based on the performance statistics is made with standard entropy encoding. From the obtained results, it is clear that the proposed method preserves the fidelity of the decoded video sequence far better than standard entropy encoding, even though the compression rate is increased. Keywords: Entropy, Encoding, HEVC, PSNR, Compression
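    A bare-bones version of the firefly algorithm used for the optimization, applied to a stand-in 1-D objective. The paper's actual objective, a rate-distortion-style cost over entropy-coding weighting parameters, would replace the toy quadratic; all algorithm parameters here are illustrative defaults.

```python
import numpy as np

rng = np.random.default_rng(3)

def firefly_minimize(cost, n=15, iters=150, beta0=1.0, gamma=0.05, alpha=0.5):
    """Bare-bones firefly algorithm on the 1-D search space [-10, 10]."""
    x = rng.uniform(-10, 10, size=n)
    best_init = x[np.argmin(cost(x))]
    for _ in range(iters):
        fitness = cost(x)
        for i in range(n):
            for j in range(n):
                if fitness[j] < fitness[i]:      # j is brighter (lower cost)
                    beta = beta0 * np.exp(-gamma * (x[i] - x[j]) ** 2)
                    x[i] += beta * (x[j] - x[i]) + alpha * rng.normal()
        alpha *= 0.97                            # cool the random walk
        x = np.clip(x, -10.0, 10.0)
    best = x[np.argmin(cost(x))]
    return best, best_init

# Toy objective standing in for an encoder's cost as a function of one
# weighting parameter; the real cost would come from the HEVC pipeline.
best, best_init = firefly_minimize(lambda w: (w - 3.0) ** 2)
```

    Because the brightest firefly never moves within an iteration, the population's best cost is non-increasing, so the returned solution is never worse than the best initial sample.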

  16. Review of Random Phase Encoding in Volume Holographic Storage

    Directory of Open Access Journals (Sweden)

    Wei-Chia Su

    2012-09-01

    Full Text Available Random phase encoding is a unique technique for volume holograms that can be applied to various applications such as holographic multiplexing storage, image encryption, and optical sensing. In this review article, we first review and discuss the diffraction selectivity of random phase encoding in volume holograms, which is the most important parameter related to the multiplexing capacity of volume holographic storage. We then review an image encryption system based on random phase encoding. The alignment of the phase key for decryption of the encoded image stored in holographic memory is analyzed and discussed. In the latter part of the review, an all-optical sensing system implemented by random phase encoding and holographic interconnection is presented.
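    The image-encryption use of random phase encoding reviewed here is commonly modelled as double random phase encoding: one random phase mask in the input plane and a second in the Fourier plane of a 4f system. A numerical sketch of the encrypt/decrypt round trip (a generic DRPE model, not the article's specific holographic setup):

```python
import numpy as np

rng = np.random.default_rng(4)

# Double random phase encoding (DRPE): input-plane and Fourier-plane masks.
img = rng.uniform(size=(32, 32))            # stand-in for an input image
phase1 = np.exp(2j * np.pi * rng.uniform(size=img.shape))
phase2 = np.exp(2j * np.pi * rng.uniform(size=img.shape))

def encrypt(f):
    # Multiply by mask 1, transform, multiply by mask 2, transform back.
    return np.fft.ifft2(np.fft.fft2(f * phase1) * phase2)

def decrypt(e):
    # Undo the Fourier-plane mask, then the input-plane mask.
    g = np.fft.ifft2(np.fft.fft2(e) * np.conj(phase2))
    return np.abs(g * np.conj(phase1))

cipher = encrypt(img)
recovered = decrypt(cipher)
```

    The ciphertext is complex-valued and statistically noise-like; decryption requires both phase keys, which is why key alignment (discussed in the review) is critical in a physical implementation.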

  17. Hippocampal-medial prefrontal circuit supports memory updating during learning and post-encoding rest

    Science.gov (United States)

    Schlichting, Margaret L.; Preston, Alison R.

    2015-01-01

    Learning occurs in the context of existing memories. Encountering new information that relates to prior knowledge may trigger integration, whereby established memories are updated to incorporate new content. Here, we provide a critical test of recent theories suggesting hippocampal (HPC) and medial prefrontal (MPFC) involvement in integration, both during and immediately following encoding. Human participants with established memories for a set of initial (AB) associations underwent fMRI scanning during passive rest and encoding of new related (BC) and unrelated (XY) pairs. We show that HPC-MPFC functional coupling during learning was more predictive of trial-by-trial memory for associations related to prior knowledge relative to unrelated associations. Moreover, the degree to which HPC-MPFC functional coupling was enhanced following overlapping encoding was related to memory integration behavior across participants. We observed a dissociation between anterior and posterior MPFC, with integration signatures during post-encoding rest specifically in the posterior subregion. These results highlight the persistence of integration signatures into post-encoding periods, indicating continued processing of interrelated memories during rest. We also interrogated the coherence of white matter tracts to assess the hypothesis that integration behavior would be related to the integrity of the underlying anatomical pathways. Consistent with our predictions, more coherent HPC-MPFC white matter structure was associated with better performance across participants. This HPC-MPFC circuit also interacted with content-sensitive visual cortex during learning and rest, consistent with reinstatement of prior knowledge to enable updating. These results show that the HPC-MPFC circuit supports on- and offline integration of new content into memory. PMID:26608407

  18. Cutting Out Continuations

    DEFF Research Database (Denmark)

    Bahr, Patrick; Hutton, Graham

    2016-01-01

    In the field of program transformation, one often transforms programs into continuation-passing style to make their flow of control explicit, and then immediately removes the resulting continuations using defunctionalisation to make the programs first-order. In this article, we show how these two transformations can be fused together into a single transformation step that cuts out the need to first introduce and then eliminate continuations. Our approach is calculational, uses standard equational reasoning techniques, and is widely applicable.
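    The two transformations being fused can be seen on a tiny evaluator: first rewrite it in continuation-passing style, then defunctionalise the continuations into a first-order data type with an apply function. This sketch shows the two separate steps that the article's fused transformation cuts out; the expression representation is invented for illustration.

```python
# Direct-style evaluator for nested additions like ("add", e1, e2).
def eval_direct(e):
    return e if isinstance(e, int) else eval_direct(e[1]) + eval_direct(e[2])

# Step 1: CPS -- control flow is made explicit in the continuation `k`.
def eval_cps(e, k):
    if isinstance(e, int):
        return k(e)
    return eval_cps(e[1], lambda v1: eval_cps(e[2], lambda v2: k(v1 + v2)))

# Step 2: defunctionalisation -- each lambda above becomes a constructor.
# Continuations: ("halt",), ("right", e2, k), ("add", v1, k)
def eval_defun(e, k=("halt",)):
    if isinstance(e, int):
        return apply_k(k, e)
    return eval_defun(e[1], ("right", e[2], k))

def apply_k(k, v):
    tag = k[0]
    if tag == "halt":
        return v
    if tag == "right":
        _, e2, k2 = k
        return eval_defun(e2, ("add", v, k2))
    _, v1, k2 = k                     # tag == "add"
    return apply_k(k2, v1 + v)

expr = ("add", 1, ("add", 2, 3))
```

    All three evaluators agree; the fused transformation of the article derives the first-order version (`eval_defun`/`apply_k`, essentially an abstract machine) directly, without ever materialising the higher-order continuations of step 1.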

  19. Beyond Initial Encoding: Measures of the Post-Encoding Status of Memory Traces Predict Long-Term Recall during Infancy

    Science.gov (United States)

    Pathman, Thanujeni; Bauer, Patricia J.

    2013-01-01

    The first years of life are witness to rapid changes in long-term recall ability. In the current research we contributed to an explanation of the changes by testing the absolute and relative contributions to long-term recall of encoding and post-encoding processes. Using elicited imitation, we sampled the status of 16-, 20-, and 24-month-old…

  20. Can natural selection encode Bayesian priors?

    Science.gov (United States)

    Ramírez, Juan Camilo; Marshall, James A R

    2017-08-07

    The evolutionary success of many organisms depends on their ability to make decisions based on estimates of the state of their environment (e.g., predation risk) from uncertain information. These decision problems have optimal solutions and individuals in nature are expected to evolve the behavioural mechanisms to make decisions as if using the optimal solutions. Bayesian inference is the optimal method to produce estimates from uncertain data, thus natural selection is expected to favour individuals with the behavioural mechanisms to make decisions as if they were computing Bayesian estimates in typically-experienced environments, although this does not necessarily imply that favoured decision-makers do perform Bayesian computations exactly. Each individual should evolve to behave as if updating a prior estimate of the unknown environment variable to a posterior estimate as it collects evidence. The prior estimate represents the decision-maker's default belief regarding the environment variable, i.e., the individual's default 'worldview' of the environment. This default belief has been hypothesised to be shaped by natural selection and represent the environment experienced by the individual's ancestors. We present an evolutionary model to explore how accurately Bayesian prior estimates can be encoded genetically and shaped by natural selection when decision-makers learn from uncertain information. The model simulates the evolution of a population of individuals that are required to estimate the probability of an event. Every individual has a prior estimate of this probability and collects noisy cues from the environment in order to update its prior belief to a Bayesian posterior estimate with the evidence gained. The prior is inherited and passed on to offspring. Fitness increases with the accuracy of the posterior estimates produced. Simulations show that prior estimates become accurate over evolutionary time. 
In addition to these 'Bayesian' individuals, we also
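
    The Bayesian updating that decision-makers are expected to approximate can be sketched minimally as follows. This is an illustration, not the paper's model: it treats the environment variable as a binary state (e.g. "predator present") and updates a genetically encoded prior with binary cues that are flipped with a known probability; all names are assumptions.

```python
import numpy as np

def posterior_event_probability(prior_p, cues, flip_prob):
    """Bayesian update of the belief that a binary environmental state holds,
    from cues that are each flipped with probability flip_prob.
    Illustrative sketch; names and parameters are not from the paper."""
    # Work in log-odds so repeated updates stay numerically stable.
    log_odds = np.log(prior_p / (1.0 - prior_p))
    llr = np.log((1.0 - flip_prob) / flip_prob)  # evidence carried by one cue
    for cue in cues:
        log_odds += llr if cue == 1 else -llr
    return 1.0 / (1.0 + np.exp(-log_odds))
```

    With a prior of 0.5 and three positive cues at 20% cue noise, the posterior is 0.8³/(0.8³ + 0.2³) ≈ 0.985; over evolutionary time, selection would tune `prior_p` toward the frequency of the event in the ancestrally experienced environment.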

  1. Cloning of gene-encoded stem bromelain on system coming from Pichia pastoris as therapeutic protein candidate

    Science.gov (United States)

    Yusuf, Y.; Hidayati, W.

    2018-01-01

Recombinant bacteria were identified using PCR and restriction analysis, followed by sequencing. This research aimed to obtain a Pichia pastoris yeast cell carrying the gene encoding the stem bromelain enzyme. Producing recombinant stem bromelain in P. pastoris yeast cells can yield pure stem bromelain enzyme with the same conformation as the enzyme in pineapple plants. This recombinant stem bromelain enzyme can be used as a therapeutic protein in inflammatory, cancer and degenerative diseases. This study was the early stage of a series of steps to obtain stem bromelain protein derived from pineapple using genetic engineering techniques. The research began by isolating RNA from pineapple stem, constructing cDNA by reverse transcriptase PCR (RT-PCR), and amplifying the gene encoding the stem bromelain enzyme by PCR using a specifically designed primer pair. The gene was then cloned into Escherichia coli cells. A vector carrying the gene encoding the stem bromelain enzyme was inserted into P. pastoris yeast cells, followed by identification of the P. pastoris cells carrying the gene. The research has not yet detected the stem bromelain gene in P. pastoris yeast cells. The next step is to repeat the process with fresh reagents: a new RNase inhibitor and liquid nitrogen.

  2. High-Efficient Parallel CAVLC Encoders on Heterogeneous Multicore Architectures

    Directory of Open Access Journals (Sweden)

    H. Y. Su

    2012-04-01

Full Text Available This article presents two highly efficient parallel realizations of context-based adaptive variable length coding (CAVLC) on heterogeneous multicore processors. By optimizing the architecture of the CAVLC encoder, three kinds of dependences are eliminated or weakened: the context-based data dependence, the memory-access dependence and the control dependence. The CAVLC pipeline is divided into three stages (two scans, coding, and lag packing) and implemented on two typical heterogeneous multicore architectures. One is a block-based SIMD parallel CAVLC encoder on the multicore stream processor STORM; the other is a component-oriented SIMT parallel encoder on a massively parallel GPU architecture. Both exploit rich data-level parallelism. Experimental results show that, compared with the CPU version, a speedup of more than 70 times is obtained on STORM and over 50 times on the GPU. The STORM implementation achieves real-time processing for 1080p at 30 fps, and the GPU-based version satisfies the requirements for 720p real-time encoding. The throughput of the presented CAVLC encoders is more than 10 times that of published software encoders on DSP and multicore platforms.

  3. Archives: Continuing Medical Education

    African Journals Online (AJOL)

Items 51 - 88 of 88 ... Archives: Continuing Medical Education.

  4. Generalized analytic continuation

    CERN Document Server

    Ross, William T

    2002-01-01

    The theory of generalized analytic continuation studies continuations of meromorphic functions in situations where traditional theory says there is a natural boundary. This broader theory touches on a remarkable array of topics in classical analysis, as described in the book. This book addresses the following questions: (1) When can we say, in some reasonable way, that component functions of a meromorphic function on a disconnected domain, are "continuations" of each other? (2) What role do such "continuations" play in certain aspects of approximation theory and operator theory? The authors use the strong analogy with the summability of divergent series to motivate the subject. In this vein, for instance, theorems can be described as being "Abelian" or "Tauberian". The introductory overview carefully explains the history and context of the theory. The authors begin with a review of the works of Poincaré, Borel, Wolff, Walsh, and Gončar, on continuation properties of "Borel series" and other meromorphic func...

  5. Extreme expansion of NBS-encoding genes in Rosaceae.

    Science.gov (United States)

    Jia, YanXiao; Yuan, Yang; Zhang, Yanchun; Yang, Sihai; Zhang, Xiaohui

    2015-05-03

Nucleotide binding site leucine-rich repeats (NBS-LRR) genes encode a large class of disease resistance (R) proteins in plants. Extensive studies have been carried out to identify and investigate NBS-encoding gene families in many important plant species. However, no comprehensive research into NBS-encoding genes in the Rosaceae has been performed. In this study, five whole-genome sequenced Rosaceae species, including apple, pear, peach, mei, and strawberry, were analyzed to investigate the evolutionary pattern of NBS-encoding genes and to compare them to those of three Cucurbitaceae species, cucumber, melon, and watermelon. Considerable differences in the copy number of NBS-encoding genes were observed between Cucurbitaceae and Rosaceae species. In Rosaceae species, a large number and a high proportion of NBS-encoding genes were observed in peach (437, 1.52%), mei (475, 1.51%), strawberry (346, 1.05%) and pear (617, 1.44%), and apple contained a remarkable 1303 (2.05%) NBS-encoding genes, which may be the highest number of R-genes reported in any diploid plant. However, no more than 100 NBS-encoding genes were identified in the Cucurbitaceae. Many more species-specific gene families were classified and detected with the signature of positive selection in Rosaceae species, especially in the apple genome. Taken together, our findings indicate that NBS-encoding genes in Rosaceae, especially in apple, have undergone extreme expansion and rapid adaptive evolution. Useful information was provided for further research on the evolutionary mode of disease resistance genes in Rosaceae crops.

  6. Thought probes during prospective memory encoding: Evidence for perfunctory processes

    Science.gov (United States)

    McDaniel, Mark A.; Dasse, Michelle N.; Lee, Ji hae; Kurinec, Courtney A.; Tami, Claudina; Krueger, Madison L.

    2018-01-01

    For nearly 50 years, psychologists have studied prospective memory, or the ability to execute delayed intentions. Yet, there remains a gap in understanding as to whether initial encoding of the intention must be elaborative and strategic, or whether some components of successful encoding can occur in a perfunctory, transient manner. In eight studies (N = 680), we instructed participants to remember to press the Q key if they saw words representing fruits (cue) during an ongoing lexical decision task. They then typed what they were thinking and responded whether they encoded fruits as a general category, as specific exemplars, or hardly thought about it at all. Consistent with the perfunctory view, participants often reported mind wandering (42.9%) and hardly thinking about the prospective memory task (22.5%). Even though participants were given a general category cue, many participants generated specific category exemplars (34.5%). Bayesian analyses of encoding durations indicated that specific exemplars came to mind in a perfunctory manner rather than via strategic, elaborative mechanisms. Few participants correctly guessed the research hypotheses and changing from fruit category cues to initial-letter cues eliminated reports of specific exemplar generation, thereby arguing against demand characteristics in the thought probe procedure. In a final experiment, encoding duration was unrelated to prospective memory performance; however, specific-exemplar encoders outperformed general-category encoders with no ongoing task monitoring costs. Our findings reveal substantial variability in intention encoding, and demonstrate that some components of prospective memory encoding can be done “in passing.” PMID:29874277

  7. Latency Performance of Encoding with Random Linear Network Coding

    DEFF Research Database (Denmark)

    Nielsen, Lars; Hansen, René Rydhof; Lucani Rötter, Daniel Enrique

    2018-01-01

    the encoding process can be parallelized based on system requirements to reduce data access time within the system. Using a counting argument, we focus on predicting the effect of changes of generation (number of original packets) and symbol size (number of bytes per data packet) configurations on the encoding...... latency on full vector and on-the-fly algorithms. We show that the encoding latency doubles when either the generation size or the symbol size double and confirm this via extensive simulations. Although we show that the theoretical speed gain of on-the-fly over full vector is two, our measurements show...
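
    The counting argument can be illustrated with a toy full-vector encoder. This sketch works over GF(2) rather than the larger fields (e.g. GF(2^8)) practical codecs typically use, and all names are illustrative rather than the paper's implementation:

```python
import numpy as np

def rlnc_encode_gf2(generation, rng):
    """Full-vector RLNC over GF(2): a coded packet is the XOR of a random
    subset of the g original packets. The work is O(g * symbol_size), so
    doubling either the generation size or the symbol size roughly doubles
    encoding latency, matching the counting argument above."""
    g = len(generation)
    coeffs = rng.integers(0, 2, size=g, dtype=np.uint8)  # random coding vector
    coded = np.zeros_like(generation[0])
    for c, packet in zip(coeffs, generation):
        if c:
            coded ^= packet  # addition in GF(2) is XOR
    return coeffs, coded
```

    An on-the-fly encoder instead folds each original packet into the coded packets as it arrives, which is where the theoretical factor-of-two gain comes from.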

  8. Wavelength-encoded OCDMA system using opto-VLSI processors.

    Science.gov (United States)

    Aljada, Muhsen; Alameh, Kamal

    2007-07-01

We propose and experimentally demonstrate a 2.5 Gbits/s per user wavelength-encoded optical code-division multiple-access encoder-decoder structure based on opto-VLSI processing. Each encoder and decoder is constructed using a single 1D opto-very-large-scale-integrated (VLSI) processor in conjunction with a fiber Bragg grating (FBG) array of different Bragg wavelengths. The FBG array spectrally and temporally slices the broadband input pulse into several components and the opto-VLSI processor generates codewords using digital phase holograms. System performance is measured in terms of the autocorrelation and cross-correlation functions as well as the eye diagram.

  10. Datacube Interoperability, Encoding Independence, and Analytics

    Science.gov (United States)

    Baumann, Peter; Hirschorn, Eric; Maso, Joan

    2017-04-01

representations. Further, CIS 1.1 offers a unified model for any kind of regular and irregular grids, also allowing sensor models as per SensorML. Encodings include ASCII formats like GML, JSON, RDF as well as binary formats like GeoTIFF, NetCDF, JPEG2000, and GRIB2; further, a container concept allows mixed representations within one coverage file utilizing zip or other convenient package formats. Through the tight integration with the Sensor Web Enablement (SWE), a lossless "transport" from sensor into coverage world is ensured. The corresponding service model of WCS supports datacube operations ranging from simple data extraction to complex ad-hoc analytics with WPCS. Notably, W3C has set out on a coverage model as well; it has been designed relatively independently from the abovementioned standards, but there is informal agreement to link it into the CIS universe (which allows for different, yet interchangeable representations). Particularly interesting in the W3C proposal is the detailed semantic modeling of metadata; as CIS 1.1 supports RDF, a tight coupling seems feasible.

  11. Does a Single Eigenstate Encode the Full Hamiltonian?

    Science.gov (United States)

    Garrison, James R.; Grover, Tarun

    2018-04-01

    The eigenstate thermalization hypothesis (ETH) posits that the reduced density matrix for a subsystem corresponding to an excited eigenstate is "thermal." Here we expound on this hypothesis by asking: For which class of operators, local or nonlocal, is ETH satisfied? We show that this question is directly related to a seemingly unrelated question: Is the Hamiltonian of a system encoded within a single eigenstate? We formulate a strong form of ETH where, in the thermodynamic limit, the reduced density matrix of a subsystem corresponding to a pure, finite energy density eigenstate asymptotically becomes equal to the thermal reduced density matrix, as long as the subsystem size is much less than the total system size, irrespective of how large the subsystem is compared to any intrinsic length scale of the system. This allows one to access the properties of the underlying Hamiltonian at arbitrary energy densities (or temperatures) using just a single eigenstate. We provide support for our conjecture by performing an exact diagonalization study of a nonintegrable 1D quantum lattice model with only energy conservation. In addition, we examine the case in which the subsystem size is a finite fraction of the total system size, and we find that, even in this case, many operators continue to match their canonical expectation values, at least approximately. In particular, the von Neumann entanglement entropy equals the thermal entropy as long as the subsystem is less than half the total system. Our results are consistent with the possibility that a single eigenstate correctly predicts the expectation values of all operators with support on less than half the total system, as long as one uses a microcanonical ensemble with vanishing energy width for comparison. We also study, both analytically and numerically, a particle-number conserving model at infinite temperature that substantiates our conjectures.

  12. Emotion experienced during encoding enhances odor retrieval cue effectiveness.

    Science.gov (United States)

    Herz, R S

    1997-01-01

    Emotional potentiation may be a key variable in the formation of odor-associated memory. Two experiments were conducted in which a distinctive ambient odor was present or absent during encoding and retrieval sessions and subjects were in an anxious or neutral mood during encoding. Subjects' mood at retrieval was not manipulated. The laboratory mood induction used in Experiment 1 suggested that anxiety might increase the effectiveness of an odor retrieval cue. This trend was confirmed in Experiment 2 by capturing a naturally stressful situation. Subjects who had an ambient odor cue available and were in a preexam state during encoding recalled more words than subjects in any other group. These data are evidence that heightened emotion experienced during encoding with an ambient odor can enhance the effectiveness of an odor as a cue to memory.

  13. Color Image Authentication and Recovery via Adaptive Encoding

    Directory of Open Access Journals (Sweden)

    Chun-Hung Chen

    2014-01-01

    Full Text Available We describe an authentication and recovery scheme for color image protection based on adaptive encoding. The image blocks are categorized based on their contents and different encoding schemes are applied according to their types. Such adaptive encoding results in better image quality and more robust image authentication. The approximations of the luminance and chromatic channels are carefully calculated, and for the purpose of reducing the data size, differential coding is used to encode the channels with variable size according to the characteristic of the block. The recovery data which represents the approximation and the detail of the image is embedded for data protection. The necessary data is well protected by using error correcting coding and duplication. The experimental results demonstrate that our technique is able to identify and localize image tampering, while preserving high quality for both watermarked and recovered images.

  14. Suppressors of RNA silencing encoded by tomato leaf curl ...

    Indian Academy of Sciences (India)

    2013-01-06

    Jan 6, 2013 ... Virus encoded RNA-silencing suppressors (RSSs) are the key components evolved by the viruses to ... severe disease symptom in the host (Briddon et al. ..... Voinnet O 2001 RNA silencing as a plant immune system against.

  15. Two Genes Encoding Uracil Phosphoribosyltransferase Are Present in Bacillus subtilis

    DEFF Research Database (Denmark)

    Martinussen, Jan; Glaser, Philippe; Andersen, Paal S.

    1995-01-01

    Uracil phosphoribosyltransferase (UPRTase) catalyzes the key reaction in the salvage of uracil in many microorganisms. Surprisingly, two genes encoding UPRTase activity were cloned from Bacillus subtilis by complementation of an Escherichia coli mutant. The genes were sequenced, and the putative...

  16. What is a "good" encoding of guarded choice?

    DEFF Research Database (Denmark)

    Nestmann, Uwe

    2000-01-01

    into the latter that preserves divergence-freedom and symmetries. This paper argues that there are nevertheless "good" encodings between these calculi. In detail, we present a series of encodings for languages with (1) input-guarded choice, (2) both input and output-guarded choice, and (3) mixed-guarded choice......, and investigate them with respect to compositionality and divergence-freedom. The first and second encoding satisfy all of the above criteria, but various "good" candidates for the third encoding-inspired by an existing distributed implementation-invalidate one or the other criterion, While essentially confirming...... Palamidessi's result, our study suggests that the combination of strong compositionality and divergence-freedom is too strong for more practical purposes. (C) 2000 Academic Press....

  17. Cloning, sequencing and expression of cDNA encoding growth ...

    Indian Academy of Sciences (India)

    Unknown

    of medicine, animal husbandry, fish farming and animal ..... northern pike (Esox lucius) growth hormone; Mol. Mar. Biol. ... prolactin 1-luciferase fusion gene in African catfish and ... 1988 Cloning and sequencing of cDNA that encodes goat.

  18. Noise and neuronal populations conspire to encode simple waveforms reliably

    Science.gov (United States)

    Parnas, B. R.

    1996-01-01

    Sensory systems rely on populations of neurons to encode information transduced at the periphery into meaningful patterns of neuronal population activity. This transduction occurs in the presence of intrinsic neuronal noise. This is fortunate. The presence of noise allows more reliable encoding of the temporal structure present in the stimulus than would be possible in a noise-free environment. Simulations with a parallel model of signal processing at the auditory periphery have been used to explore the effects of noise and a neuronal population on the encoding of signal information. The results show that, for a given set of neuronal modeling parameters and stimulus amplitude, there is an optimal amount of noise for stimulus encoding with maximum fidelity.
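
    A toy simulation (not the authors' auditory-periphery model; all parameters are assumptions) of how noise lets a population of threshold neurons encode a sub-threshold waveform:

```python
import numpy as np

def population_spike_counts(signal, threshold, noise_sd, n_neurons, rng):
    """Each neuron fires where signal + private Gaussian noise crosses the
    threshold; returns the per-sample spike count across the population."""
    noise = (rng.normal(0.0, noise_sd, size=(n_neurons, signal.size))
             if noise_sd > 0 else np.zeros((n_neurons, signal.size)))
    return ((signal[None, :] + noise) > threshold).sum(axis=0)

def encoding_fidelity(signal, counts):
    """Pearson correlation between stimulus and population response
    (zero if the population is silent)."""
    if counts.std() == 0:
        return 0.0
    return float(np.corrcoef(signal, counts)[0, 1])

rng = np.random.default_rng(1)
t = np.linspace(0.0, 4.0 * np.pi, 2000)
stimulus = 0.8 * np.sin(t)  # sub-threshold: never crosses 1.0 on its own
silent = population_spike_counts(stimulus, 1.0, 0.0, 100, rng)
noisy = population_spike_counts(stimulus, 1.0, 0.3, 100, rng)
```

    Without noise the population never fires and the fidelity is zero; with moderate noise the spike counts track the waveform closely, the effect the abstract describes.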

  19. Toward a Better Compression for DNA Sequences Using Huffman Encoding.

    Science.gov (United States)

    Al-Okaily, Anas; Almarri, Badar; Al Yami, Sultan; Huang, Chun-Hsi

    2017-04-01

Due to the significant amount of DNA data being generated by next-generation sequencing machines for genomes ranging from megabases to gigabases in length, there is an increasing need to compress such data into less space for faster transmission. Different implementations of Huffman encoding incorporating the characteristics of DNA sequences prove to compress DNA data better. These implementations center on selecting frequent repeats so as to force a skewed Huffman tree, as well as constructing multiple Huffman trees when encoding. The implementations demonstrate improved compression ratios for five genomes with lengths ranging from 5 to 50 Mbp, compared with the standard Huffman tree algorithm. The research hence suggests an improvement on all DNA sequence compression algorithms that use the conventional Huffman encoding. Accompanying software is publicly available (Al-Okaily, 2016).
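
    For reference, the conventional Huffman baseline those implementations improve on can be sketched as follows (illustrative code, not the paper's software):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Conventional Huffman coding over observed symbol frequencies.
    (The paper's variants force skewed trees around frequent repeats;
    this shows only the standard baseline.)"""
    freq = Counter(text)
    if len(freq) == 1:
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, unique tiebreak, {symbol: code-so-far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in left.items()}
        merged.update({s: "1" + code for s, code in right.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]
```

    For "AAAACCGT" this yields a 1-bit code for A and 3-bit codes for G and T, compressing the 8 symbols into 14 bits versus 16 for a fixed 2-bit-per-base code.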

  20. Polypeptides having catalase activity and polynucleotides encoding same

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Ye; Duan, Junxin; Zhang, Yu; Tang, Lan

    2017-05-02

    Provided are isolated polypeptides having catalase activity and polynucleotides encoding the polypeptides. Also provided are nucleic acid constructs, vectors and host cells comprising the polynucleotides as well as methods of producing and using the polypeptides.

  1. Cloning, expression and characterisation of a novel gene encoding ...

    African Journals Online (AJOL)

Microsoft user

    2012-01-12

    Jan 12, 2012 ... ... characterisation of a novel gene encoding a chemosensory protein from Bemisia ... The genomic DNA sequence comparisons revealed a 1490 bp intron ... have several conserved sequence motifs, including the. N-terminal ...

  2. Multiple-stage pure phase encoding with biometric information

    Science.gov (United States)

    Chen, Wen

    2018-01-01

    In recent years, many optical systems have been developed for securing information, and optical encryption/encoding has attracted more and more attention due to the marked advantages, such as parallel processing and multiple-dimensional characteristics. In this paper, an optical security method is presented based on pure phase encoding with biometric information. Biometric information (such as fingerprint) is employed as security keys rather than plaintext used in conventional optical security systems, and multiple-stage phase-encoding-based optical systems are designed for generating several phase-only masks with biometric information. Subsequently, the extracted phase-only masks are further used in an optical setup for encoding an input image (i.e., plaintext). Numerical simulations are conducted to illustrate the validity, and the results demonstrate that high flexibility and high security can be achieved.
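
    The encode/decode symmetry of phase-mask encoding can be illustrated with classic double random phase encoding. Here the masks are random rather than derived from a fingerprint, and the whole setup is an illustrative numerical sketch, not the paper's multiple-stage optical system:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 64
plaintext = rng.random((N, N))                    # stand-in for the input image
# In the paper the phase-only masks carry biometric information; here they
# are random, which only demonstrates the encode/decode symmetry.
m1 = np.exp(2j * np.pi * rng.random((N, N)))      # input-plane phase mask
m2 = np.exp(2j * np.pi * rng.random((N, N)))      # Fourier-plane phase mask

ciphertext = np.fft.ifft2(np.fft.fft2(plaintext * m1) * m2)

# Decoding with the conjugate masks recovers the plaintext exactly,
# since both masks have unit modulus.
recovered = np.abs(np.fft.ifft2(np.fft.fft2(ciphertext) * np.conj(m2)) * np.conj(m1))
```

    Without the correct phase masks (the "keys"), the ciphertext is a noise-like complex field, which is what gives such systems their security.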

  3. Polypeptides having xylanase activity and polynucleotides encoding same

    Energy Technology Data Exchange (ETDEWEB)

    Spodsberg, Nikolaj

    2018-02-06

    The present invention relates to isolated polypeptides having xylanase activity and polynucleotides encoding the polypeptides. The invention also relates to nucleic acid constructs, vectors, and host cells comprising the polynucleotides as well as methods of producing and using the polypeptides.

  4. Analytical Study of the Effect of the System Geometry on Photon Sensitivity and Depth of Interaction of Positron Emission Mammography

    Directory of Open Access Journals (Sweden)

    Pablo Aguiar

    2012-01-01

Full Text Available Positron emission mammography (PEM) cameras are novel dedicated PET systems optimized to image the breast. For these cameras it is essential to achieve an optimum trade-off between sensitivity and spatial resolution, and therefore the main challenge for the novel cameras is to improve the sensitivity without degrading the spatial resolution. We carry out an analytical study of the effect of different detector geometries on the photon sensitivity and the angle of incidence of the detected photons, which is related to the DOI effect and therefore to the intrinsic spatial resolution. To this end, dual-head detectors were compared to box and different polygon-detector configurations. Our results showed higher sensitivity and uniformity for box and polygon-detector configurations compared to dual-head cameras. Thus, the optimal configuration in terms of sensitivity is a PEM scanner based on a polygon of twelve (dodecagon) or more detectors. We have shown that this configuration is clearly superior to dual-head detectors and slightly better than box, octagon, and hexagon detectors. Nevertheless, DOI effects are increased for this configuration compared to dual-head and box scanners, and therefore an accurate compensation for this effect is required.

  5. The influence of photon depth of interaction and non-collinear spread of annihilation photons on PET image spatial resolution

    International Nuclear Information System (INIS)

    Sanchez-Crespo, Alejandro; Larsson, Stig A.

    2006-01-01

The quality of PET imaging is impaired by parallax errors. These errors produce misalignment between the projected location of the true origin of the annihilation event and the line of response determined by the coincidence detection system. Parallax errors are due to the varying depths of photon interaction (DOI) within the scintillator and the non-collinear (NC) emission of the annihilation photons. The aim of this work was to address the problems associated with the DOI and the NC spread of annihilation photons and to develop a quantitative model to assess their impact on image spatial resolution losses for various commonly used scintillators and PET geometries. A theoretical model based on Monte Carlo simulations was developed to assess the relative influence of DOI and the NC spread of annihilation photons on PET spatial resolution for various scintillator materials (BGO, LSO, LuAP, GSO, NaI) and PET geometries. The results demonstrate good agreement between simulated, experimental and published overall spatial resolution for some commercial systems, with maximum differences around 1 mm in both 2D and 3D mode. The DOI introduces a non-stationary impairment of spatial resolution along the radial direction, which can be very severe at peripheral positions. As an example, the radial spatial resolution loss due to DOI increased from 1.3 mm at the centre to 6.7 mm at 20 cm from the centre of a BGO camera with a 412-mm radius in 2D mode. Including the NC, the corresponding losses were 3.0 mm at the centre and 7.3 mm at 20 cm from the centre. Without a DOI detection technique, it seems difficult to improve PET spatial resolution and increase sensitivity by reducing the detector ring radius or by extending the detector in the axial direction. Much effort is expended on the design and configuration of smaller detector elements, but more effort should be devoted to the DOI complexity. (orig.)

  6. Choice of crystal surface finishing for a dual-ended readout depth-of-interaction (DOI) detector

    International Nuclear Information System (INIS)

    Fan, Peng; Ma, Tianyu; Liu, Yaqiang; Wang, Shi; Wei, Qingyang; Yao, Rutao

    2016-01-01

    The objective of this study was to choose the crystal surface finishing for a dual-ended readout (DER) DOI detector. Through Monte Carlo simulations and experimental studies, we evaluated 4 crystal surface finishing options as combinations of crystal surface polishing (diffuse or specular) and reflector (diffuse or specular) options on a DER detector. We also tested one linear and one logarithm DOI calculation algorithm. The figures of merit used were DOI resolution, DOI positioning error, and energy resolution. Both the simulation and experimental results show that (1) choosing a diffuse type in either surface polishing or reflector would improve DOI resolution but degrade energy resolution; (2) crystal surface finishing with a diffuse polishing combined with a specular reflector appears a favorable candidate with a good balance of DOI and energy resolution; and (3) the linear and logarithm DOI calculation algorithms show overall comparable DOI error, and the linear algorithm was better for photon interactions near the ends of the crystal while the logarithm algorithm was better near the center. These results provide useful guidance in DER DOI detector design in choosing the crystal surface finishing and DOI calculation methods. (paper)
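
    Assuming a simple exponential light-attenuation model (an assumption for illustration, not the paper's calibration), the linear and logarithmic DOI estimates for a dual-ended readout crystal can be sketched as:

```python
import numpy as np

CRYSTAL_LENGTH = 20.0  # mm; illustrative, typical of DOI-encoding crystals

def doi_linear(s1, s2, length=CRYSTAL_LENGTH):
    """Linear estimate: the share of light reaching detector 2 grows with
    the interaction depth measured from detector 1."""
    return length * s2 / (s1 + s2)

def doi_log(s1, s2, slope, offset):
    """Logarithmic estimate; slope and offset come from a calibration scan."""
    return slope * np.log(s2 / s1) + offset
```

    Under the toy model s1 = exp(-mu*x) and s2 = exp(-mu*(L-x)), the log estimate with slope 1/(2*mu) and offset L/2 recovers the depth x exactly, while the linear one is approximate; how each behaves near the crystal ends versus the centre is exactly what the paper compares.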

  7. Data Encoding using Periodic Nano-Optical Features

    Science.gov (United States)

    Vosoogh-Grayli, Siamack

Successful trials have been made, through a designed algorithm, to quantize, compress and optically encode unsigned 8-bit integer values in the form of images using nano-optical features. The periodicity of the nanoscale features (nano-gratings) has been designed and investigated both theoretically and experimentally to create distinct states of variation (three 'on' states and one 'off' state). The use of easy-to-manufacture, machine-readable encoded data in secured authentication media has previously been employed in barcodes for bi-state (binary) models and in color barcodes for multiple-state models. This work has focused on implementing four states of variation for unit information through periodic nano-optical structures that separate an incident wavelength into distinct colors (variation states) in order to create an encoding system. Compared to barcodes and magnetic stripes in secured finite-length storage media, the proposed system encodes and stores more data. The benefits of multiple states of variation in an encoding unit are (1) an increased numerically representable range, (2) increased storage density, and (3) a decreased number of typical-set elements for any ergodic or semi-ergodic source that emits these encoding units. A thorough investigation has targeted the effects of using multi-state nano-optical features on data storage density and the consequent data transmission rates. The results show that encoding data with nano-optical features yields a data storage density of circa 800 kbits/in² via commercially available high-resolution flatbed scanner systems for readout. Such storage density is far greater than that of commercial finite-length secured storage media such as the barcode family, with a maximum practical density of 1 kbit/in², and the highest-density magnetic stripe cards, with a maximum density circa 3 kbits/in². The numerically representable range of the proposed encoding unit for 4 states of variation is [0 255]. The number of
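
    The four-state encoding unit amounts to representing each byte in base 4. The mapping of quaternary digits to grating states below is only an illustration of the arithmetic, not the thesis's physical encoding:

```python
def encode_byte(value):
    """Encode an unsigned 8-bit integer as four quaternary symbols, one per
    grating state (three 'on' colours plus one 'off' state = 4 states)."""
    if not 0 <= value <= 255:
        raise ValueError("expected an unsigned 8-bit integer")
    return [(value >> shift) & 0b11 for shift in (6, 4, 2, 0)]

def decode_symbols(symbols):
    """Reassemble the byte from its four base-4 symbols."""
    value = 0
    for s in symbols:
        value = (value << 2) | s
    return value
```

    Four quaternary symbols carry one byte where a bi-state barcode needs eight units, which is the density advantage the abstract describes.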

  8. Trieste will continue

    International Nuclear Information System (INIS)

    1968-01-01

    Trieste will continue to be the home of the International Centre for Theoretical Physics for the foreseeable future. An agreement signed in Vienna during December between the Italian Government and the Agency brought this assurance. (author)

  9. Nocturnal continuous glucose monitoring

    DEFF Research Database (Denmark)

    Bay, Christiane; Kristensen, Peter Lommer; Pedersen-Bjergaard, Ulrik

    2013-01-01

    Abstract Background: A reliable method to detect biochemical nocturnal hypoglycemia is highly needed, especially in patients with recurrent severe hypoglycemia. We evaluated reliability of nocturnal continuous glucose monitoring (CGM) in patients with type 1 diabetes at high risk of severe...

  10. Continual improvement plan

    Science.gov (United States)

    1994-01-01

    NASA's approach to continual improvement (CI) is a systems-oriented, agency-wide approach that builds on the past accomplishments of NASA Headquarters and its field installations and helps achieve NASA's vision, mission, and values. The NASA of the future will fully use the principles of continual improvement in every aspect of its operations. This NASA CI plan defines a systematic approach and a model for continual improvement throughout NASA, stressing systems integration and optimization. It demonstrates NASA's constancy of purpose for improvement - a consistent vision of NASA as a worldwide leader in top-quality science, technology, and management practices. The CI plan provides the rationale, structures, methods, and steps, and it defines NASA's short term (1-year) objectives for improvement. The CI plan presents the deployment strategies necessary for cascading the goals and objectives throughout the agency. It also provides guidance on implementing continual improvement with participation from top leadership and all levels of employees.

  11. Continuing Medical Education

    African Journals Online (AJOL)

    Continuing Medical Education. Journal Home · ABOUT THIS JOURNAL · Advanced Search · Current Issue · Archives · Journal Home > Vol 25, No 9 (2007) >. Log in or Register to get access to full text downloads.

  12. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding

    OpenAIRE

    Gao, Yuan; Liu, Pengyu; Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

    In high efficiency video coding (HEVC), coding tree contributes to excellent compression performance. However, coding tree brings extremely high computational complexity. Innovative works for improving coding tree to further reduce encoding time are stated in this paper. A novel low complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content ...

  13. Security enhanced BioEncoding for protecting iris codes

    Science.gov (United States)

    Ouda, Osama; Tsumura, Norimichi; Nakaguchi, Toshiya

    2011-06-01

    Improving the security of biometric template protection techniques is a key prerequisite for the widespread deployment of biometric technologies. BioEncoding is a recently proposed template protection scheme, based on the concept of cancelable biometrics, for protecting biometric templates represented as binary strings such as iris codes. The main advantage of BioEncoding over other template protection schemes is that it does not require user-specific keys and/or tokens during verification. Besides, it satisfies all the requirements of the cancelable biometrics construct without deteriorating the matching accuracy. However, although it has been shown that BioEncoding is secure enough against simple brute-force search attacks, the security of BioEncoded templates against more smart attacks, such as record multiplicity attacks, has not been sufficiently investigated. In this paper, a rigorous security analysis of BioEncoding is presented. Firstly, resistance of BioEncoded templates against brute-force attacks is revisited thoroughly. Secondly, we show that although the cancelable transformation employed in BioEncoding might be non-invertible for a single protected template, the original iris code could be inverted by correlating several templates used in different applications but created from the same iris. Accordingly, we propose an important modification to the BioEncoding transformation process in order to hinder attackers from exploiting this type of attacks. The effectiveness of adopting the suggested modification is validated and its impact on the matching accuracy is investigated empirically using CASIA-IrisV3-Interval dataset. Experimental results confirm the efficacy of the proposed approach and show that it preserves the matching accuracy of the unprotected iris recognition system.

  14. Theory of multisource crosstalk reduction by phase-encoded statics

    KAUST Repository

    Schuster, Gerard T.

    2011-03-01

    Formulas are derived that relate the strength of the crosstalk noise in supergather migration images to the variance of time, amplitude and polarity shifts in encoding functions. A supergather migration image is computed by migrating an encoded supergather, where the supergather is formed by stacking a large number of encoded shot gathers. Analysis reveals that for temporal source static shifts in each shot gather, the crosstalk noise is exponentially reduced with increasing variance of the static shift and the square of source frequency. This is not too surprising because larger time shifts lead to less correlation between traces in different shot gathers, and so should tend to reduce the crosstalk noise. Analysis also reveals that combining both polarity and time statics is a superior encoding strategy compared to using either polarity statics or time statics alone. Signal-to-noise (SNR) estimates show that for a standard migration image and for an image computed by migrating a phase-encoded supergather; here, G is the number of traces in a shot gather, I is the number of stacking iterations in the supergather and S is the number of encoded/blended shot gathers that comprise the supergather. If the supergather can be uniformly divided up into Q unique sub-supergathers, then the resulting SNR of the final image is, which means that we can enhance image quality but at the expense of Q times more cost. The importance of these formulas is that they provide a precise understanding between different phase encoding strategies and image quality. Finally, we show that iterative migration of phase-encoded supergathers is a special case of passive seismic interferometry. We suggest that the crosstalk noise formulas can be helpful in designing optimal strategies for passive seismic interferometry and efficient extraction of Green's functions from simulated supergathers. © 2011 The Authors Geophysical Journal International © 2011 RAS.
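The encoding strategy analyzed above, giving each shot gather a random polarity and a random time static before stacking, can be sketched numerically. This is an illustrative reconstruction, not the authors' code; the time static is applied as a linear phase shift in the frequency domain:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_supergather(shot_gathers, time_std, dt=0.004):
    """Stack traces from many shot gathers into one supergather after
    giving each gather a random polarity (+/-1) and a random time
    static, applied here as a linear phase in the frequency domain."""
    n_samples = shot_gathers.shape[1]
    freqs = np.fft.rfftfreq(n_samples, d=dt)
    supergather = np.zeros(n_samples)
    for gather in shot_gathers:
        polarity = rng.choice([-1.0, 1.0])
        shift = rng.normal(0.0, time_std)  # time static in seconds
        spectrum = np.fft.rfft(gather) * np.exp(-2j * np.pi * freqs * shift)
        supergather += polarity * np.fft.irfft(spectrum, n=n_samples)
    return supergather
```

Averaged over repeated independent encodings, the cross-terms between different shots tend to cancel, which is the crosstalk-reduction effect the derived formulas quantify.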

  15. Encoding and Retrieval Interference in Sentence Comprehension: Evidence from Agreement

    Directory of Open Access Journals (Sweden)

    Sandra Villata

    2018-01-01

    Full Text Available Long-distance verb-argument dependencies generally require the integration of a fronted argument when the verb is encountered for sentence interpretation. Under a parsing model that handles long-distance dependencies through a cue-based retrieval mechanism, retrieval is hampered when retrieval cues also resonate with non-target elements (retrieval interference). However, similarity-based interference may also stem from interference arising during the encoding of elements in memory (encoding interference), an effect that is not directly accountable for by a cue-based retrieval mechanism. Although encoding and retrieval interference are clearly distinct at the theoretical level, it is difficult to disentangle the two on empirical grounds, since encoding interference may also manifest at the retrieval region. We report two self-paced reading experiments aimed at teasing apart the role of each component in gender and number subject-verb agreement in Italian and English object relative clauses. In Italian, the verb does not agree in gender with the subject, thus providing no cue for retrieval. In English, although present tense verbs agree in number with the subject, past tense verbs do not, allowing us to test the role of number as a retrieval cue within the same language. Results from both experiments converge, showing similarity-based interference at encoding, and some evidence for an effect at retrieval. After having pointed out the non-negligible role of encoding in sentence comprehension, and noting that Lewis and Vasishth’s (2005) ACT-R model of sentence processing, the most fully developed cue-based retrieval approach to sentence processing, does not predict encoding effects, we propose an augmentation of this model that predicts these effects. We then also propose a self-organizing sentence processing model (SOSP), which has the advantage of accounting for retrieval and encoding interference with a single mechanism.

  16. Encoding and Retrieval Interference in Sentence Comprehension: Evidence from Agreement

    Science.gov (United States)

    Villata, Sandra; Tabor, Whitney; Franck, Julie

    2018-01-01

    Long-distance verb-argument dependencies generally require the integration of a fronted argument when the verb is encountered for sentence interpretation. Under a parsing model that handles long-distance dependencies through a cue-based retrieval mechanism, retrieval is hampered when retrieval cues also resonate with non-target elements (retrieval interference). However, similarity-based interference may also stem from interference arising during the encoding of elements in memory (encoding interference), an effect that is not directly accountable for by a cue-based retrieval mechanism. Although encoding and retrieval interference are clearly distinct at the theoretical level, it is difficult to disentangle the two on empirical grounds, since encoding interference may also manifest at the retrieval region. We report two self-paced reading experiments aimed at teasing apart the role of each component in gender and number subject-verb agreement in Italian and English object relative clauses. In Italian, the verb does not agree in gender with the subject, thus providing no cue for retrieval. In English, although present tense verbs agree in number with the subject, past tense verbs do not, allowing us to test the role of number as a retrieval cue within the same language. Results from both experiments converge, showing similarity-based interference at encoding, and some evidence for an effect at retrieval. After having pointed out the non-negligible role of encoding in sentence comprehension, and noting that Lewis and Vasishth’s (2005) ACT-R model of sentence processing, the most fully developed cue-based retrieval approach to sentence processing does not predict encoding effects, we propose an augmentation of this model that predicts these effects. We then also propose a self-organizing sentence processing model (SOSP), which has the advantage of accounting for retrieval and encoding interference with a single mechanism. PMID:29403414

  17. Molecular cloning and functional analysis of the gene encoding ...

    African Journals Online (AJOL)

    Here we report for the first time the cloning of a full-length cDNA encoding GGPPS (Jc-GGPPS) from Jatropha curcas L. The full-length cDNA was 1414 base pair (bp), with an 1110-bp open reading frame (ORF) encoding a 370- amino-acids polypeptide. Bioinformatic analysis revealed that Jc-GGPPS is a member of the ...

  18. Branching trajectory continual integral

    International Nuclear Information System (INIS)

    Maslov, V.P.; Chebotarev, A.M.

    1980-01-01

    Heuristic definition of the Feynman continual integral over branching trajectories is suggested which makes it possible to obtain in the closed form the solution of the Cauchy problem for the model Hartree equation. A number of properties of the solution is derived from an integral representation. In particular, the quasiclassical asymptotics, exact solution in the gaussian case and perturbation theory series are described. The existence theorem for the simpliest continual integral over branching trajectories is proved [ru

  19. Encoding specificity manipulations do affect retrieval from memory.

    Science.gov (United States)

    Zeelenberg, René

    2005-05-01

    In a recent article, P.A. Higham (2002) [Strong cues are not necessarily weak: Thomson and Tulving (1970) and the encoding specificity principle revisited. Memory & Cognition, 30, 67-80] proposed a new way to analyze cued recall performance in terms of three separable aspects of memory (retrieval, monitoring, and report bias) by comparing performance under both free-report and forced-report instructions. He used this method to derive estimates of these aspects of memory in an encoding specificity experiment similar to that reported by D.M. Thomson and E. Tulving (1970) [Associative encoding and retrieval: weak and strong cues. Journal of Experimental Psychology, 86, 255-262]. Under forced-report instructions, the encoding specificity manipulation did not affect performance. Higham concluded that the manipulation affected monitoring and report bias, but not retrieval. I argue that this interpretation of the results is problematic because the Thomson and Tulving paradigm is confounded, and show in three experiments using a more appropriate design that encoding specificity manipulations do affect performance in forced-report cued recall. Because, in Higham's framework, forced-report performance provides a measure of retrieval that is uncontaminated by monitoring and report bias, it is concluded that encoding specificity manipulations do affect retrieval from memory.

  20. Analysis of Program Obfuscation Schemes with Variable Encoding Technique

    Science.gov (United States)

    Fukushima, Kazuhide; Kiyomoto, Shinsaku; Tanaka, Toshiaki; Sakurai, Kouichi

    Program analysis techniques have improved steadily over the past several decades, and software obfuscation schemes have come to be used in many commercial programs. A software obfuscation scheme transforms an original program or a binary file into an obfuscated program that is more complicated and difficult to analyze, while preserving its functionality. However, the security of obfuscation schemes has not been properly evaluated. In this paper, we analyze obfuscation schemes in order to clarify the advantages of our scheme, the XOR-encoding scheme. First, we more clearly define five types of attack models that we defined previously, and define quantitative resistance to these attacks. Then, we compare the security, functionality and efficiency of three obfuscation schemes with encoding variables: (1) Sato et al.'s scheme with linear transformation, (2) our previous scheme with affine transformation, and (3) the XOR-encoding scheme. We show that the XOR-encoding scheme is superior with regard to the following two points: (1) the XOR-encoding scheme is more secure against a data-dependency attack and a brute force attack than our previous scheme, and is as secure against an information-collecting attack and an inverse transformation attack as our previous scheme, (2) the XOR-encoding scheme does not restrict the calculable ranges of programs and the loss of efficiency is less than in our previous scheme.
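As an illustration of the general idea of variable encoding (not the paper's exact XOR-encoding construction), a variable can be stored XOR-masked and only transiently decoded around each operation; the key and helper names below are hypothetical:

```python
KEY = 0xA5  # hypothetical mask; a real scheme would derive one per variable

def enc(x: int) -> int:
    return x ^ KEY

def dec(x: int) -> int:
    return x ^ KEY  # XOR masking is its own inverse

def obfuscated_sum(values):
    """The accumulator is kept XOR-encoded; each update decodes,
    operates, and re-encodes, so the plain value never rests in it."""
    acc = enc(0)
    for v in values:
        acc = enc(dec(acc) + v)
    return dec(acc)

assert obfuscated_sum([1, 2, 3]) == 6
```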

  1. Aerobic Exercise During Encoding Impairs Hippocampus-Dependent Memory.

    Science.gov (United States)

    Soga, Keishi; Kamijo, Keita; Masaki, Hiroaki

    2017-08-01

    We investigated how aerobic exercise during encoding affects hippocampus-dependent memory through a source memory task that assessed hippocampus-independent familiarity and hippocampus-dependent recollection processes. Using a within-participants design, young adult participants performed a memory-encoding task while performing a cycling exercise or being seated. The subsequent retrieval phase was conducted while sitting on a chair. We assessed behavioral and event-related brain potential measures of familiarity and recollection processes during the retrieval phase. Results indicated that source accuracy was lower for encoding with exercise than for encoding in the resting condition. Event-related brain potential measures indicated that the parietal old/new effect, which has been linked to recollection processing, was observed in the exercise condition, whereas it was absent in the rest condition, which is indicative of exercise-induced hippocampal activation. These findings suggest that aerobic exercise during encoding impairs hippocampus-dependent memory, which may be attributed to inefficient source encoding during aerobic exercise.

  2. Low Complexity HEVC Encoder for Visual Sensor Networks

    Directory of Open Access Journals (Sweden)

    Zhaoqing Pan

    2015-12-01

    Full Text Available Visual sensor networks (VSNs) can be widely applied in security surveillance, environmental monitoring, smart rooms, etc. However, with the increased number of camera nodes in VSNs, the volume of visual information data increases significantly, which becomes a challenge for storing, processing and transmitting the visual data. The state-of-the-art video compression standard, high efficiency video coding (HEVC), can effectively compress the raw visual data, but the higher compression rate comes at the cost of heavy computational complexity. Hence, reducing the encoding complexity becomes vital for the HEVC encoder to be used in VSNs. In this paper, we propose a fast coding unit (CU) depth decision method to reduce the encoding complexity of the HEVC encoder for VSNs. Firstly, the content property of the CU is analyzed. Then, an early CU depth decision method and a low-complexity distortion calculation method are proposed for CUs with homogeneous content. Experimental results show that the proposed method achieves average encoding-time savings of 71.91% for the HEVC encoder for VSNs.
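An early CU depth decision of the kind described can be caricatured as a homogeneity test: if a coding unit's samples have low variance, deeper quadtree depths are skipped. This is a simplified stand-in for the paper's method, with an illustrative threshold:

```python
import numpy as np

def early_depth_decision(cu_block, var_threshold=25.0):
    """If a coding unit's samples are homogeneous (low variance),
    terminate the quadtree search at the current depth instead of
    evaluating the four deeper sub-CUs. Threshold is illustrative."""
    if np.var(cu_block) < var_threshold:
        return "terminate"  # homogeneous: keep the current depth
    return "split"          # textured: evaluate the sub-CUs

assert early_depth_decision(np.full((64, 64), 128.0)) == "terminate"
assert early_depth_decision(np.arange(4096.0).reshape(64, 64)) == "split"
```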

  3. The role of encoding and attention in facial emotion memory: an EEG investigation.

    Science.gov (United States)

    Brenner, Colleen A; Rumak, Samuel P; Burns, Amy M N; Kieffaber, Paul D

    2014-09-01

    Facial expressions are encoded via sensory mechanisms, but meaning extraction and salience of these expressions involve cognitive functions. We investigated the time course of sensory encoding and subsequent maintenance in memory via EEG. Twenty-nine healthy participants completed a facial emotion delayed match-to-sample task. P100, N170 and N250 ERPs were measured in response to the first stimulus, and evoked theta power (4-7Hz) was measured during the delay interval. Negative facial expressions produced larger N170 amplitudes and greater theta power early in the delay. N170 amplitude correlated with theta power, however larger N170 amplitude coupled with greater theta power only predicted behavioural performance for one emotion condition (very happy) out of six tested (see Supplemental Data). These findings indicate that the N170 ERP may be sensitive to emotional facial expressions when task demands require encoding and retention of this information. Furthermore, sustained theta activity may represent continued attentional processing that supports short-term memory, especially of negative facial stimuli. Further study is needed to investigate the potential influence of these measures, and their interaction, on behavioural performance. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.

  4. Least-squares reverse time migration of marine data with frequency-selection encoding

    KAUST Repository

    Dai, Wei; Huang, Yunsong; Schuster, Gerard T.

    2013-01-01

    The phase-encoding technique can sometimes increase the efficiency of the least-squares reverse time migration (LSRTM) by more than one order of magnitude. However, traditional random encoding functions require all the encoded shots to share

  5. Mutagenesis in sequence encoding of human factor VII for gene therapy of hemophilia

    Directory of Open Access Journals (Sweden)

    B Kazemi

    2009-12-01

    Full Text Available Background: Current treatment of hemophilia, one of the most common bleeding disorders, involves replacement therapy using concentrates of FVIII and FIX. However, these concentrates have been associated with viral infections, thromboembolic complications and the development of antibodies. The use of recombinant human factor VII (rhFVII) is effective for the treatment of patients with hemophilia A or B who develop antibodies (referred to as inhibitors) against replacement therapy, because it induces coagulation independently of FVIII and FIX. However, its short half-life and high cost have limited its use. One potential solution to this problem may be FVIIa gene transfer, which would attain continuing therapeutic levels of expression from a single injection. The aim of this study was to engineer a novel hFVII (human FVII) gene containing a cleavage site for the intracellular protease furin, by PCR mutagenesis. Methods: The sequences encoding the light and heavy chains of hFVII were amplified separately using hFVII/pTZ57R and specific primers. The PCR products were cloned into the pTZ57R vector. Results and discussion: Cloning was confirmed by restriction analysis or PCR amplification using specific primers and plasmid universal primers. Mutagenesis of the sequences encoding the light and heavy chains was confirmed by restriction enzyme digestion. Conclusion: In the present study, recombinant plasmids based on mutant forms of the DNA encoding the light and heavy chains were provided. Joining the mutant form of the DNA encoding the light chain with the mutant heavy chain led to a new variant of hFVII. This variant can be activated by furin, increasing the proportion of the activated form of FVII. This mutant form of hFVII may be used for gene therapy of hemophilia.

  6. Continuous-variable quantum computing in optical time-frequency modes using quantum memories.

    Science.gov (United States)

    Humphreys, Peter C; Kolthammer, W Steven; Nunn, Joshua; Barbieri, Marco; Datta, Animesh; Walmsley, Ian A

    2014-09-26

    We develop a scheme for time-frequency encoded continuous-variable cluster-state quantum computing using quantum memories. In particular, we propose a method to produce, manipulate, and measure two-dimensional cluster states in a single spatial mode by exploiting the intrinsic time-frequency selectivity of Raman quantum memories. Time-frequency encoding enables the scheme to be extremely compact, requiring a number of memories that are a linear function of only the number of different frequencies in which the computational state is encoded, independent of its temporal duration. We therefore show that quantum memories can be a powerful component for scalable photonic quantum information processing architectures.

  7. Continuous Markovian Logics

    DEFF Research Database (Denmark)

    Mardare, Radu Iulian; Cardelli, Luca; Larsen, Kim Guldstrand

    2012-01-01

    Continuous Markovian Logic (CML) is a multimodal logic that expresses quantitative and qualitative properties of continuous-time labelled Markov processes with arbitrary (analytic) state-spaces, henceforth called continuous Markov processes (CMPs). The modalities of CML evaluate the rates of the exponentially distributed random variables that characterize the duration of the labeled transitions of a CMP. In this paper we present weak and strong complete axiomatizations for CML and prove a series of metaproperties, including the finite model property and the construction of canonical models. CML characterizes stochastic bisimilarity and it supports the definition of a quantified extension of the satisfiability relation that measures the "compatibility" between a model and a property. In this context, the metaproperties allow us to prove two robustness theorems for the logic stating that one can...

  8. Quantum control mechanism analysis through field based Hamiltonian encoding

    International Nuclear Information System (INIS)

    Mitra, Abhra; Rabitz, Herschel

    2006-01-01

    Optimal control of quantum dynamics in the laboratory is proving to be increasingly successful. The control fields can be complex, and the mechanisms by which they operate have often remained obscure. Hamiltonian encoding (HE) has been proposed as a method for understanding mechanisms in quantum dynamics. In this context mechanism is defined in terms of the dominant quantum pathways leading to the final state of the controlled system. HE operates by encoding a special modulation into the Hamiltonian and decoding its signature in the dynamics to determine the dominant pathway amplitudes. Earlier work encoded the modulation directly into the Hamiltonian operators. This present work introduces the alternative scheme of field based HE, where the modulation is encoded into the control field and not directly into the Hamiltonian operators. This distinct form of modulation yields a new perspective on mechanism and is computationally faster than the earlier approach. Field based encoding is also an important step towards a laboratory based algorithm for HE as it is the only form of encoding that may be experimentally executed. HE is also extended to cover systems with noise and uncertainty and finally, a hierarchical algorithm is introduced to reveal mechanism in a stepwise fashion of ever increasing detail as desired. This new hierarchical algorithm is an improvement over earlier approaches to HE where the entire mechanism was determined in one stroke. The improvement comes from the use of less complex modulation schemes, which leads to fewer evaluations of Schroedinger's equation. A number of simulations are presented on simple systems to illustrate the new field based encoding technique for mechanism assessment

  9. Dynamic Information Encoding With Dynamic Synapses in Neural Adaptation

    Science.gov (United States)

    Li, Luozheng; Mi, Yuanyuan; Zhang, Wenhao; Wang, Da-Hui; Wu, Si

    2018-01-01

    Adaptation refers to the general phenomenon that the neural system dynamically adjusts its response property according to the statistics of external inputs. In response to an invariant stimulation, neuronal firing rates first increase dramatically and then decrease gradually to a low level close to the background activity. This prompts a question: during adaptation, how does the neural system encode the repeated stimulation with attenuated firing rates? It has been suggested that the neural system may employ a dynamical encoding strategy during adaptation: the stimulus information is mainly encoded by the strong independent spiking of neurons at the early stage of adaptation, while the weak but synchronized activity of neurons encodes the stimulus information at the later stage. A previous study demonstrated that short-term facilitation (STF) of electrical synapses, which increases the synchronization between neurons, can provide a mechanism to realize dynamical encoding. In the present study, we further explore whether short-term plasticity (STP) of chemical synapses, an interaction form more common than electrical synapses in the cortex, can support dynamical encoding. We build a large-size network with chemical synapses between neurons. Notably, facilitation of chemical synapses only mildly enhances pair-wise correlations between neurons, but its effect on increasing synchronization of the network can be significant, and hence it can serve as a mechanism to convey the stimulus information. To read out the stimulus information, we consider that a downstream neuron receives balanced excitatory and inhibitory inputs from the network, so that it only responds to synchronized firings of the network. Therefore, the response of the downstream neuron indicates the presence of the repeated stimulation. Overall, our study demonstrates that STP of chemical synapses can serve as a mechanism to realize dynamical neural
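The short-term facilitation invoked above is commonly modeled with Tsodyks-Markram-style dynamics, in which each spike transiently raises the synaptic release probability. A hedged sketch of that mechanism (parameter values are illustrative, not taken from the study):

```python
import math

def simulate_facilitation(spike_times, U=0.1, tau_f=1.0):
    """Release probability u jumps at each spike and decays back toward
    its baseline U with time constant tau_f, so spikes arriving faster
    than tau_f see progressively larger synaptic efficacies."""
    u, last_t, efficacies = U, None, []
    for t in spike_times:
        if last_t is not None:
            u = U + (u - U) * math.exp(-(t - last_t) / tau_f)
        u = u + U * (1.0 - u)  # facilitation increment at the spike
        efficacies.append(u)
        last_t = t
    return efficacies

eff = simulate_facilitation([0.0, 0.05, 0.10, 0.15])
assert eff[0] < eff[1] < eff[2] < eff[3]  # efficacy grows within the train
```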

  10. Continuing bonds and place.

    Science.gov (United States)

    Jonsson, Annika; Walter, Tony

    2017-08-01

    Where do people feel closest to those they have lost? This article explores how continuing bonds with a deceased person can be rooted in a particular place or places. Some conceptual resources are sketched, namely continuing bonds, place attachment, ancestral places, home, reminder theory, and loss of place. The authors use these concepts to analyze interview material with seven Swedes and five Britons who often thought warmly of the deceased as residing in a particular place and often performing characteristic actions. The destruction of such a place, by contrast, could create a troubling, haunting absence, complicating the deceased's absent-presence.

  11. Introduction to Continuous Optimization

    DEFF Research Database (Denmark)

    Andreasson, Niclas; Evgrafov, Anton; Patriksson, Michael

    optimal solutions for continuous optimization models. The main part of the mathematical material therefore concerns the analysis and linear algebra that underlie the workings of convexity and duality, and necessary/sufficient local/global optimality conditions for continuous optimization problems. Natural algorithms are then developed from these optimality conditions, and their most important convergence characteristics are analyzed. The book answers many more questions of the form "Why?" and "Why not?" than "How?". We use only elementary mathematics in the development of the book, yet are rigorous throughout...

  12. Continuous Platform Development

    DEFF Research Database (Denmark)

    Nielsen, Ole Fiil

    low risks and investments but also with relatively fuzzy results. When looking for new platform projects, it is important to make sure that the company and market is ready for the introduction of platforms, and to make sure that people from marketing and sales, product development, and downstream..., but continuous product family evolution challenges this strategy. The concept of continuous platform development is based on the fact that platform development should not be a one-time experience but rather an ongoing process of developing new platforms and updating existing ones, so that product family...

  13. A type system for Continuation Calculus

    Directory of Open Access Journals (Sweden)

    Herman Geuvers

    2014-09-01

    Full Text Available Continuation Calculus (CC), introduced by Geron and Geuvers, is a simple foundational model for functional computation. It is closely related to lambda calculus and term rewriting, but it has no variable binding and no pattern matching. It is Turing complete and evaluation is deterministic. Notions like "call-by-value" and "call-by-name" computation are available by choosing appropriate function definitions: e.g. there is a call-by-value and a call-by-name addition function. In the present paper we extend CC with types, to be able to define data types in a canonical way, and functions over these data types, defined by iteration. Data type definitions follow the so-called "Scott encoding" of data, as opposed to the more familiar "Church encoding". The iteration scheme comes in two flavors: a call-by-value and a call-by-name iteration scheme. The call-by-value variant is a double-negation variant of call-by-name iteration. The double negation translation allows one to move between call-by-name and call-by-value.

  14. An analysis on equal width quantization and linearly separable subcode encoding-based discretization and its performance resemblances

    Directory of Open Access Journals (Sweden)

    Lim Meng-Hui

    2011-01-01

    Full Text Available Biometric discretization extracts a binary string from a set of real-valued features per user. This representative string can be used as a cryptographic key in many security applications upon error correction. Discretization performance should not degrade significantly from the classification performance of the underlying continuous features. However, numerous discretization approaches based on ineffective encoding schemes have been put forward. Therefore, the correlation between such discretization and classification has never been made clear. In this article, we aim to bridge the gap between the continuous and Hamming domains, and provide a revelation of how discretization based on equal-width quantization and linearly separable subcode encoding affects classification performance in the Hamming domain. We further illustrate how such discretization can be applied in order to obtain a highly resembling classification performance under the general Lp distance and the inner product metrics. Finally, empirical studies conducted on two benchmark face datasets vindicate our analysis results.
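The interplay between equal-width quantization and a linearly separable subcode can be illustrated with a unary-style code, in which the Hamming distance between two codewords equals the gap between their bin indices, so Hamming-domain matching mirrors L1 distance on the quantized features. A minimal sketch (bin counts and values are illustrative):

```python
def equal_width_bins(x, lo, hi, n_bins):
    """Equal-width quantization: map a real-valued feature to a bin index."""
    if not lo <= x <= hi:
        raise ValueError("feature outside quantization range")
    width = (hi - lo) / n_bins
    return min(int((x - lo) / width), n_bins - 1)

def unary_encode(k, n_bins):
    """Unary-style codeword: the Hamming distance between two codewords
    equals the gap between their bin indices."""
    return [1] * k + [0] * (n_bins - 1 - k)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

a = unary_encode(equal_width_bins(0.2, 0.0, 1.0, 8), 8)  # bin 1
b = unary_encode(equal_width_bins(0.7, 0.0, 1.0, 8), 8)  # bin 5
assert hamming(a, b) == 4  # matches the bin gap |5 - 1|
```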

  15. Studies on continuous fermentation

    Energy Technology Data Exchange (ETDEWEB)

    Ueda, K

    1958-01-01

    Continuous fermentation of molasses with a combined system of agitated vessel and flow pipe is studied. A new apparatus was designed. The rate of the fermentation was faster with this apparatus than with the former apparatus which was composed of two vessels.

  16. Continuous Reinforced Concrete Beams

    DEFF Research Database (Denmark)

    Hoang, Cao Linh; Nielsen, Mogens Peter

    1996-01-01

    This report deals with stress and stiffness estimates of continuous reinforced concrete beams with different stiffnesses for negative and positive moments e.g. corresponding to different reinforcement areas in top and bottom. Such conditions are often met in practice.The moment distribution...

  17. Continuous Adductor Canal Blocks

    DEFF Research Database (Denmark)

    Monahan, Amanda M; Sztain, Jacklynn F; Khatibi, Bahareh

    2016-01-01

    on cutaneous knee sensation in volunteers. METHODS: Bilateral adductor canal catheters were inserted in 24 volunteers followed by ropivacaine 0.2% administration for 8 hours. One limb of each subject was assigned randomly to a continuous infusion (8 mL/h) or automated hourly boluses (8 m...

  18. Continuous Personal Improvement.

    Science.gov (United States)

    Emiliani, M. L.

    1998-01-01

    Suggests that continuous improvement tools used in the workplace can be applied to self-improvement. Explains the use of such techniques as one-piece flow, kanban, visual controls, and total productive maintenance. Points out misapplications of these tools and describes the use of fishbone diagrams to diagnose problems. (SK)

  19. Continuity and Change.

    Science.gov (United States)

    Istance, David

    1985-01-01

    Examines issues related to continuity in education and educational change. Indicates that although schools must be responsive to changing social and economic conditions (and contribute to them), they must also be protected against fluctuating swings of educational fashion and safeguard their long-term mission, even when buffeted by short-term…

  20. Promoting Continuing Education Programs.

    Science.gov (United States)

    Hendrickson, Gayle A.

    This handbook is intended for use by institutions in marketing their continuing education programs. A section on "Devising Your Strategy" looks at identifying a target audience, determining the marketing approach, and developing a marketing plan and promotional techniques. A discussion of media options looks at the advantages and…

  1. Continuous quality improvement

    NARCIS (Netherlands)

    Rohlin, Madeleine; Schaub, Rob M.H.; Holbrook, Peter; Leibur, Edvitar; Lévy, Gérard; Roubalikova, Lenka; Nilner, Maria; Roger-Leroi, Valerie; Danner, Gunter; Iseri, Haluk; Feldman, Cecile

    2002-01-01

Published in: Eur J Dent Educ; 6 (Suppl. 3): 67–77. Continuous quality improvement (CQI) can be envisaged as a circular process of goal-setting, followed by external and internal evaluations resulting in improvements that can serve as goals for a next cycle. The need for CQI is apparent, because of

  2. Continuous digital health

    NARCIS (Netherlands)

Van Halteren, Aart; Gay, Valérie

    2015-01-01

    A transformation is underway regarding how we deal with our health, not only because mobile Internet technology has made it possible to have continuous access to personal health information, but also because breaking the trend of ever-growing healthcare costs is increasingly necessary. Connectivity,

  3. Continuous quality improvement

    International Nuclear Information System (INIS)

    Bourne, P.B.

    1985-01-01

    This paper describes the various statistical tools used at the Hanford Engineering Development Laboratory to achieve continuous quality improvement in the development of Breeder Reactor Technology and in reactor operations. The role of the quality assurance professionals in this process, including quantifiable measurements using actual examples, is provided. The commitment to quality improvement through top management involvement is dramatically illustrated

  4. Continuous feedback fluid queues

    NARCIS (Netherlands)

    Scheinhardt, Willem R.W.; van Foreest, N.D.; Mandjes, M.R.H.

    2003-01-01

We investigate a fluid buffer which is modulated by a stochastic background process, while the momentary behavior of the background process depends on the current buffer level in a continuous way. Loosely speaking, the feedback is such that the background process behaves 'as a Markov process' with

  5. Continuing Medical Education

    African Journals Online (AJOL)

A review article will introduce readers to the educational subject matter, along with one-page summaries (in print) of additional articles that may be accessed in full online. We will continue to offer topical and up-to-date CME material. Readers are encouraged to register with samj.org.za to receive future notifications of new ...

  6. Task-selective memory effects for successfully implemented encoding strategies.

    Science.gov (United States)

    Leshikar, Eric D; Duarte, Audrey; Hertzog, Christopher

    2012-01-01

    Previous behavioral evidence suggests that instructed strategy use benefits associative memory formation in paired associate tasks. Two such effective encoding strategies--visual imagery and sentence generation--facilitate memory through the production of different types of mediators (e.g., mental images and sentences). Neuroimaging evidence suggests that regions of the brain support memory reflecting the mental operations engaged at the time of study. That work, however, has not taken into account self-reported encoding task success (i.e., whether participants successfully generated a mediator). It is unknown, therefore, whether task-selective memory effects specific to each strategy might be found when encoding strategies are successfully implemented. In this experiment, participants studied pairs of abstract nouns under either visual imagery or sentence generation encoding instructions. At the time of study, participants reported their success at generating a mediator. Outside of the scanner, participants further reported the quality of the generated mediator (e.g., images, sentences) for each word pair. We observed task-selective memory effects for visual imagery in the left middle occipital gyrus, the left precuneus, and the lingual gyrus. No such task-selective effects were observed for sentence generation. Intriguingly, activity at the time of study in the left precuneus was modulated by the self-reported quality (vividness) of the generated mental images with greater activity for trials given higher ratings of quality. These data suggest that regions of the brain support memory in accord with the encoding operations engaged at the time of study.

  7. Two Pathways to Stimulus Encoding in Category Learning?

    Science.gov (United States)

    Davis, Tyler; Love, Bradley C.; Maddox, W. Todd

    2008-01-01

Category learning theorists tacitly assume that stimuli are encoded by a single pathway. Motivated by theories of object recognition, we evaluate a dual-pathway account of stimulus encoding. The part-based pathway establishes mappings between sensory input and symbols that encode discrete stimulus features, whereas the image-based pathway applies holistic templates to sensory input. Our experiments use rule-plus-exception structures in which one exception item in each category violates a salient regularity and must be distinguished from other items. In Experiment 1, we find that discrete representations are crucial for recognition of exceptions following brief training. Experiments 2 and 3 involve multi-session training regimens designed to encourage either part-based or image-based encoding. We find that both pathways are able to support exception encoding, but have unique characteristics. We speculate that one advantage of the part-based pathway is the ability to generalize across domains, whereas the image-based pathway provides faster and more effortless recognition. PMID:19460948

  8. Direct encoding of orientation variance in the visual system.

    Science.gov (United States)

    Norman, Liam J; Heywood, Charles A; Kentridge, Robert W

    2015-01-01

    Our perception of regional irregularity, an example of which is orientation variance, seems effortless when we view two patches of texture that differ in this attribute. Little is understood, however, of how the visual system encodes a regional statistic like orientation variance, but there is some evidence to suggest that it is directly encoded by populations of neurons tuned broadly to high or low levels. The present study shows that selective adaptation to low or high levels of variance results in a perceptual aftereffect that shifts the perceived level of variance of a subsequently viewed texture in the direction away from that of the adapting stimulus (Experiments 1 and 2). Importantly, the effect is durable across changes in mean orientation, suggesting that the encoding of orientation variance is independent of global first moment orientation statistics (i.e., mean orientation). In Experiment 3 it was shown that the variance-specific aftereffect did not show signs of being encoded in a spatiotopic reference frame, similar to the equivalent aftereffect of adaptation to the first moment orientation statistic (the tilt aftereffect), which is represented in the primary visual cortex and exists only in retinotopic coordinates. Experiment 4 shows that a neuropsychological patient with damage to ventral areas of the cortex but spared intact early areas retains sensitivity to orientation variance. Together these results suggest that orientation variance is encoded directly by the visual system and possibly at an early cortical stage.

  9. Graph Regularized Auto-Encoders for Image Representation.

    Science.gov (United States)

    Yiyi Liao; Yue Wang; Yong Liu

    2017-06-01

Image representation has been intensively explored in computer vision for its significant influence on related tasks such as image clustering and classification. It is valuable to learn a low-dimensional representation of an image which preserves the inherent information of the original image space. From the perspective of manifold learning, this is implemented with the local invariance idea to capture the intrinsic low-dimensional manifold embedded in the high-dimensional input space. Inspired by the recent successes of deep architectures, we propose a local invariant deep nonlinear mapping algorithm, called the graph regularized auto-encoder (GAE). With the graph regularization, the proposed method preserves the local connectivity from the original image space to the representation space, while the stacked auto-encoders provide an explicit encoding model for fast inference and powerful expressive capacity for complex modeling. Theoretical analysis shows that the graph regularizer penalizes the weighted Frobenius norm of the Jacobian matrix of the encoder mapping, where the weight matrix captures the local property of the input space. Furthermore, the underlying effects on the hidden representation space are revealed, providing an insightful explanation of the advantage of the proposed method. Finally, experimental results on both clustering and classification tasks demonstrate the effectiveness of our GAE as well as the correctness of the proposed theoretical analysis, and suggest that GAE is a superior solution compared with variant auto-encoders and existing local invariant methods.
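The graph regularization described above can be sketched numerically. With an unnormalized Laplacian L = D - W built from an affinity matrix W, the penalty tr(H^T L H) equals half the affinity-weighted sum of squared distances between hidden codes, which is what pulls the codes of connected inputs together. The loss form and the lambda weight below are illustrative assumptions, not the paper's exact objective.

```python
import numpy as np

def graph_laplacian(W):
    """Unnormalized graph Laplacian L = D - W of a symmetric affinity matrix W."""
    return np.diag(W.sum(axis=1)) - W

def graph_regularizer(H, W):
    """tr(H^T L H): penalizes hidden codes h_i, h_j that differ for inputs
    joined by a large affinity W_ij (the local-invariance term)."""
    return float(np.trace(H.T @ graph_laplacian(W) @ H))

def gae_loss(X, X_rec, H, W, lam=0.1):
    """Sketch of a graph-regularized auto-encoder objective:
    reconstruction error plus the weighted graph term."""
    return float(np.sum((X - X_rec) ** 2)) + lam * graph_regularizer(H, W)
```

The identity tr(H^T L H) = (1/2) * sum_ij W_ij ||h_i - h_j||^2 makes the local-invariance interpretation explicit.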

  10. Feedback-tuned, noise resilient gates for encoded spin qubits

    Science.gov (United States)

    Bluhm, Hendrik

Spin 1/2 particles form native two level systems and thus lend themselves as a natural qubit implementation. However, encoding a single qubit in several spins entails benefits, such as reducing the resources necessary for qubit control and protection from certain decoherence channels. While several varieties of such encoded spin qubits have been implemented, accurate control remains challenging, and leakage out of the subspace of valid qubit states is a potential issue. Optimal performance typically requires large pulse amplitudes for fast control, which is prone to systematic errors and prohibits standard control approaches based on Rabi flopping. Furthermore, the exchange interaction typically used to electrically manipulate encoded spin qubits is inherently sensitive to charge noise. I will discuss all-electrical, high-fidelity single qubit operations for a spin qubit encoded in two electrons in a GaAs double quantum dot. Starting from a set of numerically optimized control pulses, we employ an iterative tuning procedure based on measured error syndromes to remove systematic errors. Randomized benchmarking yields an average gate fidelity exceeding 98% and a leakage rate into invalid states of 0.2%. These gates exhibit a certain degree of resilience to both slow charge and nuclear spin fluctuations due to dynamical correction analogous to a spin echo. Furthermore, the numerical optimization minimizes the impact of fast charge noise. Both types of noise make relevant contributions to gate errors. The general approach is also adaptable to other qubit encodings and exchange-based two-qubit gates.

  11. Molecular cloning and expression of gene encoding aromatic amino acid decarboxylase in 'Vidal blanc' grape berries.

    Science.gov (United States)

    Pan, Qiu-Hong; Chen, Fang; Zhu, Bao-Qing; Ma, Li-Yan; Li, Li; Li, Jing-Ming

    2012-04-01

The pleasantly fruity and floral 2-phenylethanol is a dominant aroma compound in post-ripening 'Vidal blanc' grapes. However, to date little has been reported about its synthetic pathway in grapevine. In the present study, a full-length cDNA of VvAADC (encoding aromatic amino acid decarboxylase) was first cloned from the berries of 'Vidal blanc', an interspecific hybrid variety of Vitis vinifera × Vitis riparia. This sequence encodes a complete open reading frame of 482 amino acids with a calculated molecular mass of 54 kDa and an isoelectric point (pI) of 5.73. The deduced amino acid sequence shared about 79% identity with that of aromatic L-amino acid decarboxylases (AADCs) from tomato. Real-time PCR analysis indicated that VvAADC transcript abundance showed a small peak at 110 days after full bloom and then a continuous increase at the berry post-ripening stage, which was consistent with the accumulation of 2-phenylethanol but did not correspond to the trends of two potential intermediates, phenethylamine and 2-phenylacetaldehyde. Furthermore, phenylalanine still exhibited a continuous increase even in the post-ripening period. It is thus suggested that the 2-phenylethanol biosynthetic pathway mediated by AADC exists in grape berries, but possibly contributes little to the considerable accumulation of 2-phenylethanol in post-ripening 'Vidal blanc' grapes.

  12. Encoding, storage and judgment of experienced frequency and duration

    Directory of Open Access Journals (Sweden)

    Tilmann Betsch

    2010-08-01

This paper examines conditions that do or do not lead to accurate judgments of frequency (JOF) and judgments of duration (JOD). In three experiments, duration and frequency of visually presented stimuli are varied orthogonally in a within-subjects design. Experiment 1 reveals an asymmetric judgment pattern. JOFs reflected actual presentation frequency quite accurately and were unbiased by exposure duration. Conversely, JODs were almost insensitive to actual exposure duration and were systematically biased by presentation frequency. We show, however, that a tendency towards a symmetric judgment pattern can be obtained by manipulating encoding conditions. Sustaining attention during encoding (Experiment 2) or enhancing richness of the encoded stimuli (Experiment 3) increases judgment sensitivity in JOD and yields biases in both directions (JOF biased by exposure duration, JOD biased by presentation frequency). The implications of these findings for underlying memory mechanisms are discussed.

  13. Encoding, training and retrieval in ferroelectric tunnel junctions

    Science.gov (United States)

    Xu, Hanni; Xia, Yidong; Xu, Bo; Yin, Jiang; Yuan, Guoliang; Liu, Zhiguo

    2016-05-01

Ferroelectric tunnel junctions (FTJs) are quantum nanostructures with great potential as a hardware basis for future neuromorphic applications. Among recently proposed possibilities, artificial cognition holds high hopes, with encoding, training, memory solidification and retrieval constituting a single, inseparable chain. However, this has so far been envisioned but not experimentally confirmed. The poor retention, or short-term storage, of tunneling electroresistance, in particular of the intermediate states, is still a key challenge in FTJs. Here we report encoding, training and retrieval in BaTiO3 FTJs, emulating key features of information processing in terms of cognitive neuroscience. This is implemented and exemplified through the processing of characters. Using training inputs that are validated by the evolution of both the barrier profile and the domain configuration, accurate recall of encoded characters in the retrieval stage is demonstrated.

  14. DNA-encoded chemical libraries - achievements and remaining challenges.

    Science.gov (United States)

    Favalli, Nicholas; Bassi, Gabriele; Scheuermann, Jörg; Neri, Dario

    2018-04-23

    DNA-encoded chemical libraries (DECLs) are collections of compounds, individually coupled to DNA tags serving as amplifiable identification barcodes. Since individual compounds can be identified by the associated DNA tag, they can be stored as a mixture, allowing the synthesis and screening of combinatorial libraries of unprecedented size, facilitated by the implementation of split-and-pool synthetic procedures or other experimental methodologies. In this review, we briefly present relevant concepts and technologies, which are required for the implementation and interpretation of screening procedures with DNA-encoded chemical libraries. Moreover, we illustrate some success stories, detailing how novel ligands were discovered from encoded libraries. Finally, we critically review what can realistically be achieved with the technology at the present time, highlighting challenges and opportunities for the future. © 2018 Federation of European Biochemical Societies.

  15. Human Transcriptome and Chromatin Modifications: An ENCODE Perspective

    Directory of Open Access Journals (Sweden)

    Li Shen

    2013-06-01

A decade-long project, led by several international research groups, called the Encyclopedia of DNA Elements (ENCODE), recently released an unprecedented amount of data. The ambitious project covers transcriptome, cistrome, epigenome, and interactome data from more than 1,600 sets of experiments in humans. To make use of this valuable resource, it is important to understand the information it represents and the techniques that were used to generate these data. In this review, we introduce the data that ENCODE generated, summarize the observations from the data analysis, and revisit a computational approach that ENCODE used to predict gene expression, with a focus on the human transcriptome and its association with chromatin modifications.

  16. Audiovisual semantic congruency during encoding enhances memory performance.

    Science.gov (United States)

    Heikkilä, Jenni; Alho, Kimmo; Hyvönen, Heidi; Tiippana, Kaisa

    2015-01-01

    Studies of memory and learning have usually focused on a single sensory modality, although human perception is multisensory in nature. In the present study, we investigated the effects of audiovisual encoding on later unisensory recognition memory performance. The participants were to memorize auditory or visual stimuli (sounds, pictures, spoken words, or written words), each of which co-occurred with either a semantically congruent stimulus, incongruent stimulus, or a neutral (non-semantic noise) stimulus in the other modality during encoding. Subsequent memory performance was overall better when the stimulus to be memorized was initially accompanied by a semantically congruent stimulus in the other modality than when it was accompanied by a neutral stimulus. These results suggest that semantically congruent multisensory experiences enhance encoding of both nonverbal and verbal materials, resulting in an improvement in their later recognition memory.

  17. Distinctiveness and encoding effects in online sentence comprehension

    Directory of Open Access Journals (Sweden)

    Philip eHofmeister

    2014-12-01

In explicit memory recall and recognition tasks, elaboration and contextual isolation both facilitate memory performance. Here, we investigate these effects in the context of sentence processing: targets for retrieval during online processing of English object relative clause constructions differ in the amount of elaboration associated with the target noun phrase, or in the homogeneity of superficial features (text color). Experiment 1 shows that greater elaboration of targets during the encoding phase reduces reading times at retrieval sites, but elaboration of non-targets has considerably weaker effects. Experiment 2 illustrates that processing isolated superficial features of target noun phrases (here, a green word in a sentence with words colored white) does not lead to enhanced memory performance, despite triggering longer encoding times. These results are interpreted in light of the memory models of Nairne (1990, 2001, 2006), which state that encoding remnants contribute to the set of retrieval cues that provide the basis for similarity-based interference effects.

  18. Comparison between different encoding schemes for synthetic aperture imaging

    DEFF Research Database (Denmark)

    Nikolov, Svetoslav; Jensen, Jørgen Arendt

    2002-01-01

Synthetic transmit aperture ultrasound (STAU) imaging can create images with as few as 2 emissions, making it attractive for 3D real-time imaging. Two major problems must be solved: (1) the complexity of the hardware involved, and (2) poor image quality due to low signal-to-noise ratio (SNR). …and spatio-temporal encoding was investigated. Experiments on a wire phantom in water were carried out to quantify the gain from the different encodings. The gain in SNR using an FM modulated pulse is 12 dB. The penetration depth of the images was studied using a tissue-mimicking phantom with a frequency-dependent attenuation of 0.5 dB/(cm MHz). The combination of spatial and temporal encoding has the highest penetration depth. Images to a depth of 110 mm can successfully be made with contrast resolution comparable to that of a linear array image. The in-vivo scans show that the motion artifacts do not significantly…
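The 12 dB SNR gain quoted for the FM modulated pulse is consistent with ideal pulse compression, where the gain scales with the pulse's time-bandwidth product (10·log10(16) ≈ 12 dB). A sketch of temporal encoding via a linear FM chirp and matched filtering, under assumed, illustrative pulse parameters rather than the study's actual probe settings:

```python
import numpy as np

fs = 40e6                          # sampling rate (illustrative assumption)
T, B = 4e-6, 4e6                   # pulse duration and swept bandwidth: TB = 16
f0 = 3e6                           # start frequency of the sweep (assumed)

t = np.arange(0, T, 1 / fs)
# Linear FM (chirp) excitation: instantaneous frequency sweeps f0 -> f0 + B
chirp = np.sin(2 * np.pi * (f0 * t + 0.5 * (B / T) * t ** 2))

# Matched filtering (pulse compression): correlate the echo with the emitted pulse
compressed = np.correlate(chirp, chirp, mode="full")

# Ideal compression gain in SNR is set by the time-bandwidth product
gain_db = 10 * np.log10(T * B)     # ~12 dB for TB = 16
```

The compressed output peaks at zero lag, recovering the axial resolution of a short pulse while the long emission carries more energy.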

  19. Validation of a Real-time AVS Encoder on FPGA

    Directory of Open Access Journals (Sweden)

    Qun Fang Yuan

    2014-01-01

A whole-I-frame AVS real-time video encoder is designed and implemented on an FPGA platform in this paper. The system uses a streaming computation structure, coupled with dual-port RAM memories between the various functional modules. Reusable design and pipeline design are used to optimize the various encoding modules and to ensure efficient operation of the pipeline. Through simulation in the ISE software and verification on a Xilinx Virtex-4 Pro platform, it can be seen that the maximum working frequency reaches 110 MHz, meeting the requirements of whole-I-frame real-time AVS encoding at CIF resolution.

  20. Enzymes and Enzyme Activity Encoded by Nonenveloped Viruses.

    Science.gov (United States)

    Azad, Kimi; Banerjee, Manidipa; Johnson, John E

    2017-09-29

Viruses are obligate intracellular parasites that rely on host cell machineries for their replication and survival. Although viruses tend to make optimal use of the host cell protein repertoire, they need to encode essential enzymatic or effector functions that may not be available or accessible in the host cellular milieu. The enzymes encoded by nonenveloped viruses (a group of viruses that lack any lipid coating or envelope) play vital roles in all stages of the viral life cycle. This review summarizes the structural, biochemical, and mechanistic information available for several classes of enzymes and autocatalytic activities encoded by nonenveloped viruses. Advances in the research and development of antiviral inhibitors targeting specific viral enzymes are also highlighted.

  1. Automatic Encoding and Language Detection in the GSDL

    Directory of Open Access Journals (Sweden)

    Otakar Pinkas

    2014-10-01

Automatic detection of the encoding and language of a text is part of the Greenstone Digital Library Software (GSDL) for building and distributing digital collections, developed by the University of Waikato (New Zealand) in cooperation with UNESCO. Automatic encoding and language detection in Slavic languages is difficult and sometimes fails; the aim is to detect the cases of failure. The automatic detection in the GSDL is based on the n-grams method. The most frequent n-grams for Czech are presented, and the whole process of automatic detection in the GSDL is described. The input documents for the test collections are plain texts encoded in ISO-8859-1, ISO-8859-2 and Windows-1250. We manually evaluated the quality of the automatic detection. The causes of errors include the predominance of an inappropriate language model and an incorrect switch to Windows-1250. We carried out further tests on documents that were more complex.
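The n-grams method that the GSDL detector is based on can be sketched as a rank-order profile comparison in the style of Cavnar and Trenkle. The profile size, n-gram length, and sample texts below are illustrative assumptions, not GSDL's actual implementation.

```python
from collections import Counter

def ngram_profile(text, n=3, top=300):
    """Ranked list of the most frequent character n-grams of the text."""
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    return [g for g, _ in grams.most_common(top)]

def out_of_place_distance(doc_profile, lang_profile):
    """Sum of rank differences between profiles; n-grams absent from the
    language profile receive the maximum penalty."""
    rank = {g: r for r, g in enumerate(lang_profile)}
    penalty = len(lang_profile)
    return sum(abs(r - rank[g]) if g in rank else penalty
               for r, g in enumerate(doc_profile))

def detect(text, lang_profiles):
    """Pick the language whose profile is closest to the document's profile."""
    doc = ngram_profile(text)
    return min(lang_profiles,
               key=lambda lang: out_of_place_distance(doc, lang_profiles[lang]))
```

In practice the language profiles are trained on large reference corpora per language (and per encoding, when byte-level n-grams are used to detect the character set as well).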

  2. Performance study of large area encoding readout MRPC

    Science.gov (United States)

    Chen, X. L.; Wang, Y.; Chen, G.; Han, D.; Wang, X.; Zeng, M.; Zeng, Z.; Zhao, Z.; Guo, B.

    2018-02-01

A muon tomography system built from 2-D readout, high-spatial-resolution Multi-gap Resistive Plate Chamber (MRPC) detectors is a project of Tsinghua University. An encoding readout method based on the fine-fine configuration has been used to minimize the number of readout electronic channels, thereby reducing the complexity and the cost of the system. In this paper, we provide a systematic comparison of the MRPC detector performance with and without the fine-fine encoding readout. Our results suggest that applying the fine-fine encoding readout yields a detection system with slightly worse spatial resolution but a dramatically reduced number of electronic channels.

  3. Fluorescence-Based Multiplex Protein Detection Using Optically Encoded Microbeads

    Directory of Open Access Journals (Sweden)

    Dae Hong Jeong

    2012-03-01

The potential use of proteins for early detection and diagnosis of various diseases has drawn considerable interest in the development of protein-based multiplex detection techniques. Among the various techniques for high-throughput protein screening, optically encoded beads combined with fluorescence-based target monitoring have great advantages over planar array-based multiplexing assays. This review discusses recent developments in analytical methods for screening protein molecules on microbead-based platforms. These include various strategies such as barcoded microbeads, molecular beacon-based techniques, and surface-enhanced Raman scattering (SERS)-based techniques. Their applications for label-free protein detection are also addressed. In particular, optically encoded beads such as multilayer fluorescence beads and SERS-encoded beads are successful in generating a large number of distinct codes.

  4. Encoding qubits into oscillators with atomic ensembles and squeezed light

    Science.gov (United States)

    Motes, Keith R.; Baragiola, Ben Q.; Gilchrist, Alexei; Menicucci, Nicolas C.

    2017-05-01

    The Gottesman-Kitaev-Preskill (GKP) encoding of a qubit within an oscillator provides a number of advantages when used in a fault-tolerant architecture for quantum computing, most notably that Gaussian operations suffice to implement all single- and two-qubit Clifford gates. The main drawback of the encoding is that the logical states themselves are challenging to produce. Here we present a method for generating optical GKP-encoded qubits by coupling an atomic ensemble to a squeezed state of light. Particular outcomes of a subsequent spin measurement of the ensemble herald successful generation of the resource state in the optical mode. We analyze the method in terms of the resources required (total spin and amount of squeezing) and the probability of success. We propose a physical implementation using a Faraday-based quantum nondemolition interaction.
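For reference, the ideal square-lattice GKP codewords are combs of position eigenstates (a textbook form of the encoding, not specific to this proposal); the heralded optical states described above are finitely squeezed approximations of these:

```latex
% Ideal GKP logical states in the position (q) basis; physical approximations
% replace each position eigenstate by a squeezed peak under a Gaussian envelope.
\[
  |0_L\rangle \propto \sum_{s \in \mathbb{Z}} \bigl| q = 2s\sqrt{\pi} \,\bigr\rangle ,
  \qquad
  |1_L\rangle \propto \sum_{s \in \mathbb{Z}} \bigl| q = (2s+1)\sqrt{\pi} \,\bigr\rangle .
\]
```

The 2√π spacing is what lets small position and momentum displacements be detected and corrected with the Gaussian operations mentioned in the abstract.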

  5. Prefrontal activity and impaired memory encoding strategies in schizophrenia.

    Science.gov (United States)

    Guimond, Synthia; Hawco, Colin; Lepage, Martin

    2017-08-01

Schizophrenia patients have significant memory difficulties that have far-reaching implications in their daily life. These impairments are partly attributed to an inability to self-initiate effective memory encoding strategies, but the core neurobiological correlates remain unknown. The current study addresses this critical gap in our knowledge of episodic memory impairments in schizophrenia. Schizophrenia patients (n = 35) and healthy controls (n = 23) underwent a Semantic Encoding Memory Task (SEMT) during an fMRI scan. Brain activity was examined for conditions where participants were (a) prompted to use semantic encoding strategies, or (b) not prompted but required to self-initiate such strategies. When prompted to use semantic encoding strategies, schizophrenia patients exhibited similar recognition performance and brain activity as healthy controls. However, when required to self-initiate these strategies, patients showed significantly reduced recognition performance and brain activity in the left dorsolateral prefrontal cortex, as well as in the left temporal gyrus, left superior parietal lobule, and cerebellum. When patients were divided based on performance on the SEMT, the subgroup with more severe deficits in self-initiation also showed a greater reduction in left dorsolateral prefrontal activity. These results suggest that impaired self-initiation of elaborative encoding strategies is a driving feature of memory deficits in schizophrenia. We also identified the neural correlates of impaired self-initiation of semantic encoding strategies, in which a failure to activate the left dorsolateral prefrontal cortex plays a key role. These findings provide important new targets for the development of novel treatments aiming to improve memory and ultimately patients' outcomes. Copyright © 2017. Published by Elsevier Ltd.

  6. Continuous venovenous haemodialysis

    DEFF Research Database (Denmark)

    Jensen, Dorte Møller; Bistrup, C; Pedersen, R S

    1996-01-01

A simple three-pump-based system for the performance of continuous venovenous haemodialysis is described. The method employs access to the circulation via a double-lumen catheter, and by means of a standard extracorporeal peristaltic pump the blood is circulated through a haemofiltration filter. Standard solutions for peritoneal dialysis are administered in a single-pass manner countercurrent to the blood flow. To control the dialysate flow through the filter, two separate pumps designed for intravenous infusion are used. Anticoagulation is achieved by means of continuous heparin infusion. This three-pump system is effective in controlling the fluid balance and the level of azotemia. Furthermore, this system makes haemodialysis possible in spite of severe haemodynamic instability. The system is easy to use and inexpensive. 3 patients participated in the study.

  7. Negative affect promotes encoding of and memory for details at the expense of the gist: affect, encoding, and false memories.

    Science.gov (United States)

    Storbeck, Justin

    2013-01-01

    I investigated whether negative affective states enhance encoding of and memory for item-specific information reducing false memories. Positive, negative, and neutral moods were induced, and participants then completed a Deese-Roediger-McDermott (DRM) false-memory task. List items were presented in unique spatial locations or unique fonts to serve as measures for item-specific encoding. The negative mood conditions had more accurate memories for item-specific information, and they also had fewer false memories. The final experiment used a manipulation that drew attention to distinctive information, which aided learning for DRM words, but also promoted item-specific encoding. For the condition that promoted item-specific encoding, false memories were reduced for positive and neutral mood conditions to a rate similar to that of the negative mood condition. These experiments demonstrated that negative affective cues promote item-specific processing reducing false memories. People in positive and negative moods encode events differently creating different memories for the same event.

  8. Continuous Fiber Ceramic Composites

    Energy Technology Data Exchange (ETDEWEB)

    Fareed, Ali [Honeywell Advanced Composites Inc. (HACI), Newark, DE (United States); Craig, Phillip A. [Honeywell Advanced Composites Inc. (HACI), Newark, DE (United States)

    2002-09-01

    Fiber-reinforced ceramic composites demonstrate the high-temperature stability of ceramics, with an increased fracture toughness resulting from the fiber reinforcement of the composite. The material optimization performed under the Continuous Fiber Ceramic Composites (CFCC) program included a series of systematic optimizations. The overall goals were to define the processing window, to increase the robustness of the process, to increase process yield while reducing costs, and to define the complexity of parts that could be fabricated.

  9. Deconstructing continuous flash suppression

    OpenAIRE

    Yang, Eunice; Blake, Randolph

    2012-01-01

    In this paper, we asked to what extent the depth of interocular suppression engendered by continuous flash suppression (CFS) varies depending on the spatiotemporal properties of the suppressed stimulus and the CFS suppressor. An answer to this question could have implications for interpreting results showing that CFS influences the processing of different categories of stimuli to different extents. In a series of experiments, we measured the selectivity and depth of suppression (i.e., elevation in co...

  10. Safety Campaign Continues

    CERN Multimedia

    2002-01-01

    If you see this poster, stop and read it! This is the third poster produced by TIS Division as part of its information campaign on health and safety in the workplace. It provides statistics on occupational accidents at CERN. You will see that, as in the rest of Europe, falls, slips and trips continue to be the main cause of accident. So, eyes open and take care! For more information : http://safety.cern.ch/

  11. Robust continuous clustering.

    Science.gov (United States)

    Shah, Sohil Atul; Koltun, Vladlen

    2017-09-12

    Clustering is a fundamental procedure in the analysis of scientific data. It is used ubiquitously across the sciences. Despite decades of research, existing clustering algorithms have limited effectiveness in high dimensions and often require tuning parameters for different domains and datasets. We present a clustering algorithm that achieves high accuracy across multiple domains and scales efficiently to high dimensions and large datasets. The presented algorithm optimizes a smooth continuous objective, which is based on robust statistics and allows heavily mixed clusters to be untangled. The continuous nature of the objective also allows clustering to be integrated as a module in end-to-end feature learning pipelines. We demonstrate this by extending the algorithm to perform joint clustering and dimensionality reduction by efficiently optimizing a continuous global objective. The presented approach is evaluated on large datasets of faces, hand-written digits, objects, newswire articles, sensor readings from the Space Shuttle, and protein expression levels. Our method achieves high accuracy across all datasets, outperforming the best prior algorithm by a factor of 3 in average rank.
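The abstract does not state the objective it optimizes; as a rough, simplified sketch of a robust-continuous-clustering-style objective (a quadratic data term plus a saturating Geman-McClure penalty over a connectivity graph), the idea can be illustrated as follows. All names, parameter values, and the toy data here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def geman_mcclure(d2, mu):
    # Robust penalty: behaves like d2 for small distances but saturates at mu
    # for large ones, so far-apart representatives stop attracting each other.
    return mu * d2 / (mu + d2)

def rcc_objective(X, U, edges, lam, mu):
    """Simplified robust-clustering objective: each point x_i has a
    representative u_i; connected representatives are pulled together
    through a robust (saturating) pairwise term."""
    data = np.sum((X - U) ** 2)
    pair = sum(geman_mcclure(np.sum((U[i] - U[j]) ** 2), mu) for i, j in edges)
    return data + lam * pair

# Toy example: two 1-D clusters; representatives start at the data points.
X = np.array([[0.0], [0.1], [5.0], [5.1]])
U = X.copy()
edges = [(0, 1), (2, 3), (1, 2)]  # assumed mutual-kNN-style connectivity

before = rcc_objective(X, U, edges, lam=1.0, mu=1.0)
# Collapsing each within-cluster pair onto its mean lowers the objective,
# which is the behavior the continuous optimization exploits.
U2 = U.copy()
U2[0] = U2[1] = X[:2].mean(axis=0)
U2[2] = U2[3] = X[2:].mean(axis=0)
after = rcc_objective(X, U2, edges, lam=1.0, mu=1.0)
print(before > after)
```

Because the objective is smooth in U, it can be minimized by standard continuous solvers, which is what allows clustering to be embedded in end-to-end feature learning pipelines as the abstract describes.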

  12. Accelerated radial Fourier-velocity encoding using compressed sensing

    International Nuclear Information System (INIS)

    Hilbert, Fabian; Han, Dietbert

    2014-01-01

    Purpose: Phase Contrast Magnetic Resonance Imaging (MRI) is a tool for non-invasive determination of flow velocities inside blood vessels. Because Phase Contrast MRI only measures a single mean velocity per voxel, it is only applicable to vessels significantly larger than the voxel size. In contrast, Fourier Velocity Encoding measures the entire velocity distribution inside a voxel, but requires a much longer acquisition time. For accurate diagnosis of stenosis in vessels on the scale of spatial resolution, it is important to know the velocity distribution of a voxel. Our aim was to determine velocity distributions with accelerated Fourier Velocity Encoding in an acquisition time required for a conventional Phase Contrast image. Materials and Methods: We imaged the femoral artery of healthy volunteers with ECG-triggered, radial CINE acquisition. Data acquisition was accelerated by undersampling, while missing data were reconstructed by Compressed Sensing. Velocity spectra of the vessel were evaluated by high resolution Phase Contrast images and compared to spectra from fully sampled and undersampled Fourier Velocity Encoding. By means of undersampling, it was possible to reduce the scan time for Fourier Velocity Encoding to the duration required for a conventional Phase Contrast image. Results: Acquisition time for a fully sampled data set with 12 different Velocity Encodings was 40 min. By applying a 12.6-fold retrospective undersampling, a data set was generated equal to 3:10 min acquisition time, which is similar to a conventional Phase Contrast measurement. Velocity spectra from fully sampled and undersampled Fourier Velocity Encoded images are in good agreement and show the same maximum velocities as compared to velocity maps from Phase Contrast measurements. Conclusion: Compressed Sensing proved to reliably reconstruct Fourier Velocity Encoded data. Our results indicate that Fourier Velocity Encoding allows an accurate determination of the velocity
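The reported numbers are internally consistent: 40 min of fully sampled acquisition divided by the 12.6-fold undersampling factor comes to about 3.2 min, i.e. the quoted 3:10. A minimal sketch of the arithmetic (the function name is mine; the input values are taken from the abstract):

```python
def accelerated_scan_time(full_minutes: float, undersampling_factor: float) -> str:
    """Return the effective scan time after undersampling as a mm:ss string."""
    minutes = full_minutes / undersampling_factor
    whole = int(minutes)
    seconds = round((minutes - whole) * 60)
    if seconds == 60:  # carry over if rounding reaches a full minute
        whole, seconds = whole + 1, 0
    return f"{whole}:{seconds:02d}"

# 40 min fully sampled, 12.6-fold retrospective undersampling
print(accelerated_scan_time(40, 12.6))  # -> 3:10, matching the abstract
```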

  13. Ordering of diagnostic information in encoded medical images. Accuracy progression

    Science.gov (United States)

    Przelaskowski, A.; Jóźwiak, R.; Krzyżewski, T.; Wróblewska, A.

    2008-03-01

    A concept of diagnostic accuracy progression for embedded coding of medical images was presented. An implementation of a JPEG2000 encoder with a modified PCRD optimization algorithm was realized and initially verified as a tool for accurate medical image streaming. Mean square error as a distortion measure was replaced by other numerical measures to revise quality progression according to the diagnostic importance of successively encoded image information. A faster increase of image diagnostic importance during reconstruction of the initial packets of the code stream was achieved. The modified JasPer codec was initially tested on a set of mammograms containing clusters of microcalcifications and malignant masses, as well as other radiograms. Teleradiologic applications were considered the first area of interest.

  14. Development and Synthesis of DNA-Encoded Benzimidazole Library.

    Science.gov (United States)

    Ding, Yun; Chai, Jing; Centrella, Paolo A; Gondo, Chenaimwoyo; DeLorey, Jennifer L; Clark, Matthew A

    2018-04-25

    Encoded library technology (ELT) is an effective approach to the discovery of novel small-molecule ligands for biological targets. A key factor for the success of the technology is the chemical diversity of the libraries. Here we report the development of DNA-conjugated benzimidazoles. Using 4-fluoro-3-nitrobenzoic acid as a key synthon, we synthesized a 320 million-member DNA-encoded benzimidazole library using Fmoc-protected amino acids, amines and aldehydes as diversity elements. Affinity selection of the library led to the discovery of a novel, potent and specific antagonist of the NK3 receptor.

  15. Accelerated radial Fourier-velocity encoding using compressed sensing

    Energy Technology Data Exchange (ETDEWEB)

    Hilbert, Fabian; Han, Dietbert [Wuerzburg Univ. (Germany). Inst. of Radiology; Wech, Tobias; Koestler, Herbert [Wuerzburg Univ. (Germany). Inst. of Radiology; Wuerzburg Univ. (Germany). Comprehensive Heart Failure Center (CHFC)

    2014-10-01

    Purpose: Phase Contrast Magnetic Resonance Imaging (MRI) is a tool for non-invasive determination of flow velocities inside blood vessels. Because Phase Contrast MRI only measures a single mean velocity per voxel, it is only applicable to vessels significantly larger than the voxel size. In contrast, Fourier Velocity Encoding measures the entire velocity distribution inside a voxel, but requires a much longer acquisition time. For accurate diagnosis of stenosis in vessels on the scale of spatial resolution, it is important to know the velocity distribution of a voxel. Our aim was to determine velocity distributions with accelerated Fourier Velocity Encoding in an acquisition time required for a conventional Phase Contrast image. Materials and Methods: We imaged the femoral artery of healthy volunteers with ECG-triggered, radial CINE acquisition. Data acquisition was accelerated by undersampling, while missing data were reconstructed by Compressed Sensing. Velocity spectra of the vessel were evaluated by high resolution Phase Contrast images and compared to spectra from fully sampled and undersampled Fourier Velocity Encoding. By means of undersampling, it was possible to reduce the scan time for Fourier Velocity Encoding to the duration required for a conventional Phase Contrast image. Results: Acquisition time for a fully sampled data set with 12 different Velocity Encodings was 40 min. By applying a 12.6-fold retrospective undersampling, a data set was generated equal to 3:10 min acquisition time, which is similar to a conventional Phase Contrast measurement. Velocity spectra from fully sampled and undersampled Fourier Velocity Encoded images are in good agreement and show the same maximum velocities as compared to velocity maps from Phase Contrast measurements. Conclusion: Compressed Sensing proved to reliably reconstruct Fourier Velocity Encoded data. Our results indicate that Fourier Velocity Encoding allows an accurate determination of the velocity

  16. Authentication of gold nanoparticle encoded pharmaceutical tablets using polarimetric signatures.

    Science.gov (United States)

    Carnicer, Artur; Arteaga, Oriol; Suñé-Negre, Josep M; Javidi, Bahram

    2016-10-01

    The counterfeiting of pharmaceutical products represents concerns for both industry and the safety of the general public. Falsification produces losses to companies and poses health risks for patients. In order to detect fake pharmaceutical tablets, we propose producing film-coated tablets with gold nanoparticle encoding. These coated tablets contain unique polarimetric signatures. We present experiments to show that ellipsometric optical techniques, in combination with machine learning algorithms, can be used to distinguish genuine and fake samples. To the best of our knowledge, this is the first report using gold nanoparticles encoded with optical polarimetric classifiers to prevent the counterfeiting of pharmaceutical products.

  17. Accelerated radial Fourier-velocity encoding using compressed sensing.

    Science.gov (United States)

    Hilbert, Fabian; Wech, Tobias; Hahn, Dietbert; Köstler, Herbert

    2014-09-01

    Phase Contrast Magnetic Resonance Imaging (MRI) is a tool for non-invasive determination of flow velocities inside blood vessels. Because Phase Contrast MRI only measures a single mean velocity per voxel, it is only applicable to vessels significantly larger than the voxel size. In contrast, Fourier Velocity Encoding measures the entire velocity distribution inside a voxel, but requires a much longer acquisition time. For accurate diagnosis of stenosis in vessels on the scale of spatial resolution, it is important to know the velocity distribution of a voxel. Our aim was to determine velocity distributions with accelerated Fourier Velocity Encoding in an acquisition time required for a conventional Phase Contrast image. We imaged the femoral artery of healthy volunteers with ECG-triggered, radial CINE acquisition. Data acquisition was accelerated by undersampling, while missing data were reconstructed by Compressed Sensing. Velocity spectra of the vessel were evaluated by high resolution Phase Contrast images and compared to spectra from fully sampled and undersampled Fourier Velocity Encoding. By means of undersampling, it was possible to reduce the scan time for Fourier Velocity Encoding to the duration required for a conventional Phase Contrast image. Acquisition time for a fully sampled data set with 12 different Velocity Encodings was 40 min. By applying a 12.6-fold retrospective undersampling, a data set was generated equal to 3:10 min acquisition time, which is similar to a conventional Phase Contrast measurement. Velocity spectra from fully sampled and undersampled Fourier Velocity Encoded images are in good agreement and show the same maximum velocities as compared to velocity maps from Phase Contrast measurements. Compressed Sensing proved to reliably reconstruct Fourier Velocity Encoded data. Our results indicate that Fourier Velocity Encoding allows an accurate determination of the velocity distribution in vessels in the order of the voxel size. Thus

  18. Total sleep deprivation does not significantly degrade semantic encoding.

    Science.gov (United States)

    Honn, K A; Grant, D A; Hinson, J M; Whitney, P; Van Dongen, Hpa

    2018-01-17

    Sleep deprivation impairs performance on cognitive tasks, but it is unclear which cognitive processes it degrades. We administered a semantic matching task with variable stimulus onset asynchrony (SOA) and both speeded and self-paced trial blocks. The task was administered at baseline and again 24 hours later, after 30.8 hours of total sleep deprivation (TSD) or a matching well-rested control condition. After sleep deprivation, the 20% slowest response times (RTs) were significantly increased. However, the semantic encoding time component of the RTs remained at the baseline level. Thus, the performance impairment induced by sleep deprivation on this task occurred in cognitive processes downstream of semantic encoding.

  19. Non-deterministic quantum CNOT gate with double encoding

    Science.gov (United States)

    Gueddana, Amor; Attia, Moez; Chatta, Rihab

    2013-09-01

    We define an Asymmetric Partially Polarizing Beam Splitter (APPBS) to be a linear optical component having different reflectivity (transmittance) coefficients, on the upper and the lower arms, for horizontally and vertically polarized incident photons. Our CNOT model is composed of two APPBSs, one Half Wave Plate (HWP), two Polarizing Beam Splitters (PBSs), a Beam Splitter (BS) and a π-phase rotator for a specific wavelength. The control qubit operates with dual-rail encoding, while the target qubit is based on polarization encoding. To perform the CNOT operation, which succeeds in 4/27 of the cases, input and target incoming photons are injected with different wavelengths.

  20. Effects of reflector and crystal surface on the performance of a depth-encoding PET detector with dual-ended readout

    International Nuclear Information System (INIS)

    Ren, Silin; Yang, Yongfeng; Cherry, Simon R.

    2014-01-01

    Purpose: Depth encoding detectors are required to improve the spatial resolution and spatial resolution uniformity of small animal positron emission tomography (PET) scanners, as well as dedicated breast and brain scanners. Depth of interaction (DOI) can be measured by using dual-ended readout of lutetium oxyorthosilicate (LSO) scintillator arrays with position-sensitive avalanche photodiodes. Inter-crystal reflectors and crystal surface treatments play important roles in determining the performance of dual-ended detectors. In this paper, the authors evaluated five LSO arrays made with three different inter-crystal reflectors and with either polished or unpolished crystal surfaces. Methods: The crystal size in all arrays was 1.5 mm, which is typical of the detector size used in small animal and dedicated breast scanners. The LSO arrays were measured with dual-ended readout and were compared in terms of flood histogram, energy resolution, and DOI resolution performance. Results: The four arrays using enhanced specular reflector (ESR) and Toray reflector provided similar quality flood histograms and the array using Crystal Wrap reflector gave the worst flood histogram. The two arrays using ESR reflector provided the best energy resolution and the array using Crystal Wrap reflector yielded the worst energy resolution. All arrays except the polished ESR array provided good DOI resolution ranging from 1.9 mm to 2.9 mm. DOI resolution improved as the gradient in light collection efficiency with depth (GLCED) increased. The geometric mean energies were also calculated for these dual-ended readout detectors as an alternative to the conventional summed total energy. It was shown that the geometric mean energy is advantageous in that it provides more uniform photopeak amplitude at different depths for arrays with high GLCED, and is beneficial in event selection by allowing a fixed energy window independent of depth. A new method of DOI calculation that improved the linearity
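The abstract does not spell out the geometric-mean formula; in dual-ended readout it is commonly taken as sqrt(E1·E2) of the two photodetector signals, which becomes depth-independent under an idealized exponential light-collection model. A minimal sketch under that assumed model (the attenuation coefficient ALPHA and the signal model are illustrative, not values from the paper):

```python
import math

ALPHA = 0.1   # assumed effective light-loss coefficient (1/mm), hypothetical
L = 20.0      # crystal length in mm (matches the 20 mm crystals above)

def dual_ended_signals(e0: float, depth: float):
    """Model the two photodetector signals for an interaction at `depth`,
    assuming exponential light loss toward each crystal end (an idealization)."""
    e_top = e0 * math.exp(-ALPHA * depth)
    e_bottom = e0 * math.exp(-ALPHA * (L - depth))
    return e_top, e_bottom

def geometric_mean_energy(e_top: float, e_bottom: float) -> float:
    # sqrt(E1*E2) = E0*exp(-ALPHA*L/2) under this model: the depth terms
    # cancel, giving a photopeak amplitude independent of interaction depth.
    return math.sqrt(e_top * e_bottom)

# The summed energy varies with depth, the geometric mean does not,
# which is why a fixed energy window can be applied at all depths.
for z in (2.0, 10.0, 18.0):
    e1, e2 = dual_ended_signals(511.0, z)
    print(f"z={z:4.1f} mm  sum={e1 + e2:7.2f}  geo-mean={geometric_mean_energy(e1, e2):6.2f}")
```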

  1. Continuous multivariate exponential extension

    International Nuclear Information System (INIS)

    Block, H.W.

    1975-01-01

    The Freund-Weinman multivariate exponential extension is generalized to the case of nonidentically distributed marginal distributions. A fatal shock model is given for the resulting distribution. Results in the bivariate case and the concept of constant multivariate hazard rate lead to a continuous distribution related to the multivariate exponential distribution (MVE) of Marshall and Olkin. This distribution is shown to be a special case of the extended Freund-Weinman distribution. A generalization of the bivariate model of Proschan and Sullo leads to a distribution which contains both the extended Freund-Weinman distribution and the MVE
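For context, the MVE of Marshall and Olkin referenced above has a standard closed form; its bivariate survival function under the fatal-shock construction (stated here from the standard literature, not from this abstract) is:

```latex
% Bivariate Marshall-Olkin exponential (BVE) survival function:
% independent Poisson shocks hit component 1, component 2, or both,
% with rates \lambda_1, \lambda_2, \lambda_{12}; every shock is fatal.
\bar{F}(x, y) = \Pr(X > x,\ Y > y)
             = \exp\!\left( -\lambda_1 x - \lambda_2 y - \lambda_{12}\,\max(x, y) \right),
\qquad x, y \ge 0.
```

The \(\max(x, y)\) term, contributed by the common shock, is what places positive probability mass on the diagonal \(X = Y\) and distinguishes the MVE from a product of independent exponentials.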

  2. Continuity and consensus

    DEFF Research Database (Denmark)

    Abrahamson, Peter

    2010-01-01

    … maternal leave. These changes can be explained as adjustments to post-industrial conditions within a political culture relying on class compromises and a broad consensus informed by expert advice coming from civil servants and ad hoc policy commissions. The paper concludes that changes in Danish family … policy reflect changing conditions for employment and the minding of children and that there has been a high degree of continuity and consensus about the change, as indicated by the strong increase in female labour market involvement. …

  3. Continuous Integration in CFMGR

    CERN Document Server

    Frohlingsdorf, David

    2017-01-01

    Cfmgr is a managing tool for network devices. At the moment there is no way to automatically check the working behaviour of the tool, meaning that a lot of effort is spent on manually testing the tool after an update. During my stay at CERN I developed a black-box testing framework for Cfmgr according to Continuous Integration practices and successfully deployed the framework using Jenkins and Docker. This report discusses in detail how the framework works and how it can be configured; it also gives a broad problem description and outlines future work directions.
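The report does not describe cfmgr at code level; as a hedged sketch, a black-box CI check of this kind treats the tool as an opaque subprocess and asserts only on its observable behavior (exit status and output). The command and expected text below are placeholders, not the real cfmgr interface:

```python
import subprocess
import sys

def blackbox_check(cmd, expected_stdout_substring, timeout=30):
    """Run a command as an opaque box and verify only observable behavior:
    exit status and stdout. No knowledge of the tool's internals is assumed."""
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
    ok = result.returncode == 0 and expected_stdout_substring in result.stdout
    return ok, result.stdout

# Placeholder invocation: in a CI job (e.g. Jenkins) this would call the real
# tool inside its Docker container, e.g. ["cfmgr", "--version"].
ok, out = blackbox_check([sys.executable, "-c", "print('cfmgr 1.0')"], "cfmgr")
print(ok)
```

In a Jenkins pipeline, a stage would run a suite of such checks inside the Docker image and fail the build on any mismatch, which is the automation the report describes.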

  4. Continuous-infusion adriamycin

    International Nuclear Information System (INIS)

    Benjamin, R.S.; Chawla, S.P.; Ewer, M.S.; Hortobagyi, G.N.

    1986-01-01

    This chapter discusses the diminished cardiotoxicity, as well as the diminished nausea and vomiting, observed with continuous infusions of adriamycin in patients undergoing radiation therapy, particularly with infusions of 48 hours or longer, and most markedly with 96-hour infusions, the longest duration that has been studied systematically. In breast cancer, data show that more adriamycin is better, but only for a selected subgroup of patients: those with complete remission. The diminished cardiotoxicity makes the use of adriamycin more attractive in the adjuvant setting, where increased safety will decrease the chances of long-term complications and make retreatment easier for cured patients who develop second malignancies.

  5. Continuous Shearlet Tight Frames

    KAUST Repository

    Grohs, Philipp

    2010-10-22

    Based on the shearlet transform we present a general construction of continuous tight frames for L2(ℝ2) from any sufficiently smooth function with anisotropic moments. This includes for example compactly supported systems, piecewise polynomial systems, or both. From our earlier results in Grohs (Technical report, KAUST, 2009) it follows that these systems enjoy the same desirable approximation properties for directional data as the previous bandlimited and very specific constructions due to Kutyniok and Labate (Trans. Am. Math. Soc. 361:2719-2754, 2009). We also show that the representation formulas we derive are in a sense optimal for the shearlet transform. © 2010 Springer Science+Business Media, LLC.

  6. A detector insert based on continuous scintillators for hybrid MR–PET imaging of the human brain

    Energy Technology Data Exchange (ETDEWEB)

    Rato Mendes, P., E-mail: pedro.rato@ciemat.es [CIEMAT, Avenida Complutense 40, 28040 Madrid (Spain); Cuerdo, R.; Sarasola, I.; García de Acilu, P.; Navarrete, J.; Vela, O.; Oller, J.C.; Cela, J.M. [CIEMAT, Avenida Complutense 40, 28040 Madrid (Spain); Núñez, L.; Pastrana, M. [Hospital Universitario Puerta de Hierro Majadahonda, Manuel de Falla 1, 28222 Majadahonda (Spain); Romero, L.; Willmott, C. [CIEMAT, Avenida Complutense 40, 28040 Madrid (Spain)

    2013-02-21

    We are developing a positron emission tomography (PET) insert for existing magnetic resonance (MR) equipment, aiming at hybrid MR–PET imaging. Our detector block design is based on trapezoid-shaped LYSO:Ce monolithic scintillators coupled to magnetically compatible Hamamatsu S8550-02 silicon avalanche photodiode (APD) matrices with a dedicated ASIC front-end readout from GammaMedica-Ideas (Fornebu, Norway). The detectors are position sensitive, capable of determining the incidence point of 511 keV gammas with an intrinsic spatial resolution on the order of 2 mm by means of supervised learning neural-network (NN) algorithms. These algorithms, apart from providing continuous coordinates, are also intrinsically corrected for depth of interaction effects and thus parallax-free. Recently we have implemented an advanced prototype featuring two heads with four detector blocks each and final front-end and readout electronics, improving the spatial resolution of reconstructed point source images down to 1.7 mm full width at half maximum (FWHM). Presently we are carrying out operational tests of components and systems under magnetic fields using a 3 T MR scanner. In this paper we present a description of our project, a summary of the results obtained with laboratory prototypes, and the strategy to build and install the complete system at the nuclear medicine department of a collaborating hospital.

  7. A detector insert based on continuous scintillators for hybrid MR–PET imaging of the human brain

    International Nuclear Information System (INIS)

    Rato Mendes, P.; Cuerdo, R.; Sarasola, I.; García de Acilu, P.; Navarrete, J.; Vela, O.; Oller, J.C.; Cela, J.M.; Núñez, L.; Pastrana, M.; Romero, L.; Willmott, C.

    2013-01-01

    We are developing a positron emission tomography (PET) insert for existing magnetic resonance (MR) equipment, aiming at hybrid MR–PET imaging. Our detector block design is based on trapezoid-shaped LYSO:Ce monolithic scintillators coupled to magnetically compatible Hamamatsu S8550-02 silicon avalanche photodiode (APD) matrices with a dedicated ASIC front-end readout from GammaMedica-Ideas (Fornebu, Norway). The detectors are position sensitive, capable of determining the incidence point of 511 keV gammas with an intrinsic spatial resolution on the order of 2 mm by means of supervised learning neural-network (NN) algorithms. These algorithms, apart from providing continuous coordinates, are also intrinsically corrected for depth of interaction effects and thus parallax-free. Recently we have implemented an advanced prototype featuring two heads with four detector blocks each and final front-end and readout electronics, improving the spatial resolution of reconstructed point source images down to 1.7 mm full width at half maximum (FWHM). Presently we are carrying out operational tests of components and systems under magnetic fields using a 3 T MR scanner. In this paper we present a description of our project, a summary of the results obtained with laboratory prototypes, and the strategy to build and install the complete system at the nuclear medicine department of a collaborating hospital

  8. Transcriptional modulation of genes encoding nitrate reductase in ...

    African Journals Online (AJOL)

    The free aluminum (Al) content in soil can reach levels that are toxic to plants, and this has frequently limited increases in crop productivity. Four genes encoding nitrate reductase (NR) were identified, named ZmNR1–4. With the aim of evaluating NR activity and the transcriptional modulation of the ZmNR1, ZmNR2, ...

  9. Encoding, Memory, and Transcoding Deficits in Childhood Apraxia of Speech

    Science.gov (United States)

    Shriberg, Lawrence D.; Lohmeier, Heather L.; Strand, Edythe A.; Jakielski, Kathy J.

    2012-01-01

    A central question in Childhood Apraxia of Speech (CAS) is whether the core phenotype is limited to transcoding (planning/programming) deficits or if speakers with CAS also have deficits in auditory-perceptual "encoding" (representational) and/or "memory" (storage and retrieval of representations) processes. We addressed this and other questions…

  10. Encoded low swing for ultra low power interconnect

    NARCIS (Netherlands)

    Krishnan, R.; Pineda de Gyvez, J.

    2003-01-01

    We present a novel encoded-low swing technique for ultra low power interconnect. Using this technique and an efficient circuit implementation, we achieve an average of 45.7% improvement in the power-delay product over the schemes utilizing low swing techniques alone, for random bit streams. Also, we

  11. Learning from Number Board Games: You Learn What You Encode

    Science.gov (United States)

    Laski, Elida V.; Siegler, Robert S.

    2014-01-01

    We tested the hypothesis that encoding the numerical-spatial relations in a number board game is a key process in promoting learning from playing such games. Experiment 1 used a microgenetic design to examine the effects on learning of the type of counting procedure that children use. As predicted, having kindergartners count-on from their current…

  12. Conventions and nomenclature for double diffusion encoding NMR and MRI

    DEFF Research Database (Denmark)

    Shemesh, Noam; Jespersen, Sune N; Alexander, Daniel C

    2015-01-01

    …, such as double diffusion encoding (DDE) NMR and MRI, may provide novel quantifiable metrics that are less easily inferred from conventional diffusion acquisitions. Despite the growing interest in the topic, the terminology for the pulse sequences, their parameters, and the metrics that can be derived from them...

  13. Resource-aware complexity scalability for mobile MPEG encoding

    NARCIS (Netherlands)

    Mietens, S.O.; With, de P.H.N.; Hentschel, C.; Panchanatan, S.; Vasudev, B.

    2004-01-01

    Complexity scalability attempts to scale the required resources of an algorithm with the chosen quality settings, in order to broaden the application range. In this paper, we present complexity-scalable MPEG encoding in which the core processing modules are modified for scalability. Scalability is

  14. Chimeric polypeptides having cellulolytic enhancing activity and polynucleotides encoding same

    Science.gov (United States)

    Wogulis, Mark; Sweeney, Matthew; Heu, Tia

    2017-06-14

    The present invention relates to chimeric GH61 polypeptides having cellulolytic enhancing activity. The present invention also relates to polynucleotides encoding the chimeric GH61 polypeptides; nucleic acid constructs, vectors, and host cells comprising the polynucleotides; and methods of using the chimeric GH61 polypeptides.

  15. Neural Activity during Encoding Predicts False Memories Created by Misinformation

    Science.gov (United States)

    Okado, Yoko; Stark, Craig E. L.

    2005-01-01

    False memories are often demonstrated using the misinformation paradigm, in which a person's recollection of a witnessed event is altered after exposure to misinformation about the event. The neural basis of this phenomenon, however, remains unknown. The authors used fMRI to investigate encoding processes during the viewing of an event and…

  16. False memory and importance: can we prioritize encoding without consequence?

    Science.gov (United States)

    Bui, Dung C; Friedman, Michael C; McDonough, Ian M; Castel, Alan D

    2013-10-01

    Given the large amount of information that we encounter, we often must prioritize what information we attempt to remember. Although critical for everyday functioning, relatively little research has focused on how people prioritize the encoding of information. Recent research has shown that people can and do selectively remember information assigned with higher, relative to lower, importance. However, the mechanisms underlying this prioritization process and the consequences of these processes are still not well understood. In the present study, we sought to better understand these prioritization processes and whether implementing these processes comes at the cost of memory accuracy, by increasing false memories. We used a modified form of the Deese/Roediger-McDermott (DRM) paradigm, in which participants studied DRM lists, with each list paired with low, medium, or high point values. In Experiment 1, encoding higher values led to more false memories than did encoding lower values, possibly because prioritizing information enhanced relational processing among high-value words. In Experiment 2, disrupting relational processing selectively reduced false memories for high-value words. Finally, in Experiment 3, facilitating relational processing selectively increased false memories for low-value words. These findings suggest that while prioritizing information can enhance true memory, this process concomitantly increases false memories. Furthermore, the mechanism underlying these prioritization processes depends on the ability to successfully engage in relational processing. Thus, how we prioritize the encoding of incoming information can come at a cost in terms of accurate memory.

  17. Extraordinarily Adaptive Properties of the Genetically Encoded Amino Acids

    Science.gov (United States)

    Ilardo, Melissa; Meringer, Markus; Freeland, Stephen; Rasulev, Bakhtiyor; Cleaves II, H. James

    2015-01-01

    Using novel advances in computational chemistry, we demonstrate that the set of 20 genetically encoded amino acids, used nearly universally to construct all coded terrestrial proteins, has been highly influenced by natural selection. We defined an adaptive set of amino acids as one whose members thoroughly cover relevant physico-chemical properties, or “chemistry space.” Using this metric, we compared the encoded amino acid alphabet to random sets of amino acids. These random sets were drawn from a computationally generated compound library containing 1913 alternative amino acids that lie within the molecular weight range of the encoded amino acids. Sets that cover chemistry space better than the genetically encoded alphabet are extremely rare and energetically costly. Further analysis of more adaptive sets reveals common features and anomalies, and we explore their implications for synthetic biology. We present these computations as evidence that the set of 20 amino acids found within the standard genetic code is the result of considerable natural selection. The amino acids used for constructing coded proteins may represent a largely global optimum, such that any aqueous biochemistry would use a very similar set. PMID:25802223

  18. SAMPEG: a scene-adaptive parallel MPEG-2 software encoder

    NARCIS (Netherlands)

    Farin, D.S.; Mache, N.; With, de P.H.N.; Girod, B.; Bouman, C.A.; Steinbach, E.G.

    2001-01-01

    This paper presents a fully software-based MPEG-2 encoder architecture, which uses scene-change detection to optimize the Group-of-Pictures (GOP) structure for the actual video sequence. This feature enables easy, lossless edit cuts at scene-change positions and it also improves overall picture
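
The scene-change-driven GOP idea can be sketched as follows (a hypothetical toy detector on luminance histograms; the function names, the threshold, and the 16-bin histogram are illustrative assumptions, not SAMPEG's actual method):

```python
# Toy sketch: detect scene changes from luminance-histogram differences and
# start a new GOP (i.e., force an I-frame) at every detected cut, in addition
# to the regular GOP interval. All parameters here are illustrative.

def histogram(frame, bins=16):
    """Coarse luminance histogram of a frame given as a flat list of 0-255 values."""
    h = [0] * bins
    for v in frame:
        h[min(v * bins // 256, bins - 1)] += 1
    return h

def scene_cuts(frames, threshold=0.5):
    """Indices where the normalized histogram difference exceeds the threshold."""
    cuts = []
    for i in range(1, len(frames)):
        ha, hb = histogram(frames[i - 1]), histogram(frames[i])
        diff = sum(abs(a - b) for a, b in zip(ha, hb)) / (2 * len(frames[i]))
        if diff > threshold:
            cuts.append(i)
    return cuts

def gop_starts(num_frames, cuts, max_gop=12):
    """Force an I-frame at frame 0, at every cut, and at the regular GOP interval."""
    starts, last = [0], 0
    for i in range(1, num_frames):
        if i in cuts or i - last >= max_gop:
            starts.append(i)
            last = i
    return starts

dark = [10] * 64           # toy 64-pixel "frames"
bright = [240] * 64
frames = [dark] * 5 + [bright] * 5
print(gop_starts(len(frames), scene_cuts(frames)))  # I-frame forced at the cut (frame 5)
```

Placing an I-frame exactly at each cut is what makes lossless edit cuts at scene boundaries possible, since no frame after the cut references a frame before it.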

  19. Imagining Another Context during Encoding Offsets Context-Dependent Forgetting

    Science.gov (United States)

    Masicampo, E. J.; Sahakyan, Lili

    2014-01-01

    We tested whether imagining another context during encoding would offset context-dependent forgetting. All participants studied a list of words in Context A. Participants who remained in Context A during the test recalled more than participants who were tested in another context (Context B), demonstrating the standard context-dependent forgetting…

  20. A Neural Signature Encoding Decisions under Perceptual Ambiguity.

    Science.gov (United States)

    Sun, Sai; Yu, Rongjun; Wang, Shuo

    2017-01-01

    People often make perceptual decisions with ambiguous information, but it remains unclear whether the brain has a common neural substrate that encodes various forms of perceptual ambiguity. Here, we used three types of perceptually ambiguous stimuli as well as task instructions to examine the neural basis for both stimulus-driven and task-driven perceptual ambiguity. We identified a neural signature, the late positive potential (LPP), that encoded a general form of stimulus-driven perceptual ambiguity. In addition to stimulus-driven ambiguity, the LPP was also modulated by ambiguity in task instructions. To further specify the functional role of the LPP and elucidate the relationship between stimulus ambiguity, behavioral response, and the LPP, we employed regression models and found that the LPP was specifically associated with response latency and confidence rating, suggesting that the LPP encoded decisions under perceptual ambiguity. Finally, direct behavioral ratings of stimulus and task ambiguity confirmed our neurophysiological findings, which could not be attributed to differences in eye movements either. Together, our findings argue for a common neural signature that encodes decisions under perceptual ambiguity but is subject to the modulation of task ambiguity. Our results represent an essential first step toward a complete neural understanding of human perceptual decision making.

  1. Polypeptides having xylanase activity and polynucleotides encoding same

    Energy Technology Data Exchange (ETDEWEB)

    Spodsberg, Nikolaj; Shaghasi, Tarana

    2017-06-20

    The present invention relates to polypeptides having xylanase activity, catalytic domains, and carbohydrate binding domains, and polynucleotides encoding the polypeptides, catalytic domains, and carbohydrate binding domains. The present invention also relates to nucleic acid constructs, vectors, and host cells comprising the polynucleotides as well as methods of producing and using the polypeptides, catalytic domains, and carbohydrate binding domains.

  2. Amount of Postcue Encoding Predicts Amount of Directed Forgetting

    Science.gov (United States)

    Pastotter, Bernhard; Bauml, Karl-Heinz

    2010-01-01

    In list-method directed forgetting, participants are cued to intentionally forget a previously studied list (List 1) before encoding a subsequently presented list (List 2). Compared with remember-cued participants, forget-cued participants typically show impaired recall of List 1 and improved recall of List 2, referred to as List 1 forgetting and…

  3. Quantum-dots-encoded-microbeads based molecularly imprinted polymer.

    Science.gov (United States)

    Liu, Yixi; Liu, Le; He, Yonghong; He, Qinghua; Ma, Hui

    2016-03-15

    Quantum-dot-encoded microbeads have various advantages, such as a large surface area, superb optical properties and the ability of multiplexing. A molecularly imprinted polymer, which can mimic natural recognition entities, has high affinity and selectivity for a specific analyte. Here, the concept of utilizing quantum-dot-encoded microbeads as the supporting material and polydopamine as the functional monomer to form a core-shell molecularly imprinted polymer was proposed for the first time. The resulting imprinted polymer offers several merits: polymerization completes in an aqueous environment; the fabrication procedure is facile and universal; it is economical; the thickness of the imprinting layer is highly controllable; and the polydopamine coating improves the biocompatibility of the quantum-dot-encoded microbeads. Rabbit IgG binding and flow cytometry experiments showed the distinct advantages of this strategy: a cost-saving, facile and fast preparation procedure and, most importantly, the capability of multichannel detection, which makes the imprinted polydopamine-modified encoded beads very attractive in protein pre-concentration, recognition, separation and biosensing. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. How Does Intentionality of Encoding Affect Memory for Episodic Information?

    Science.gov (United States)

    Craig, Michael; Butterworth, Karla; Nilsson, Jonna; Hamilton, Colin J.; Gallagher, Peter; Smulders, Tom V.

    2016-01-01

    Episodic memory enables the detailed and vivid recall of past events, including target and wider contextual information. In this paper, we investigated whether/how encoding intentionality affects the retention of target and contextual episodic information from a novel experience. Healthy adults performed (1) a "What-Where-When"…

  5. Word form Encoding in Chinese Word Naming and Word Typing

    Science.gov (United States)

    Chen, Jenn-Yeu; Li, Cheng-Yi

    2011-01-01

    The process of word form encoding was investigated in primed word naming and word typing with Chinese monosyllabic words. The target words shared or did not share the onset consonants with the prime words. The stimulus onset asynchrony (SOA) was 100 ms or 300 ms. Typing required the participants to enter the phonetic letters of the target word,…

  6. Utilizing encoding in scalable linear optics quantum computing

    International Nuclear Information System (INIS)

    Hayes, A J F; Gilchrist, A; Myers, C R; Ralph, T C

    2004-01-01

    We present a scheme which offers a significant reduction in the resources required to implement linear optics quantum computing. The scheme is a variation of the proposal of Knill, Laflamme and Milburn, and makes use of an incremental approach to the error encoding to boost probability of success

  7. Encoding color information for visual tracking: Algorithms and benchmark.

    Science.gov (United States)

    Liang, Pengpeng; Blasch, Erik; Ling, Haibin

    2015-12-01

    While color information is known to provide rich discriminative clues for visual inference, most modern visual trackers limit themselves to the grayscale realm. Despite recent efforts to integrate color in tracking, there is a lack of comprehensive understanding of the role color information can play. In this paper, we attack this problem by conducting a systematic study from both the algorithm and benchmark perspectives. On the algorithm side, we comprehensively encode 10 chromatic models into 16 carefully selected state-of-the-art visual trackers. On the benchmark side, we compile a large set of 128 color sequences with ground truth and challenge factor annotations (e.g., occlusion). A thorough evaluation is conducted by running all the color-encoded trackers, together with two recently proposed color trackers. A further validation is conducted on an RGBD tracking benchmark. The results clearly show the benefit of encoding color information for tracking. We also perform detailed analysis on several issues, including the behavior of various combinations between color model and visual tracker, the degree of difficulty of each sequence for tracking, and how different challenge factors affect the tracking performance. We expect the study to provide the guidance, motivation, and benchmark for future work on encoding color in visual tracking.
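
As a hedged illustration of what "encoding a chromatic model" means in practice, the two standard models below (BT.601 grayscale and sum-normalized rg chromaticity) show the kind of per-pixel re-description a tracker applies before feature extraction; the paper itself evaluates 10 such models across 16 trackers:

```python
# Illustrative sketch, not the paper's code: re-describe each RGB pixel in a
# chosen color space before the tracker extracts features from it.

def to_gray(rgb):
    """Grayscale baseline using the ITU-R BT.601 luma weights."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def to_normalized_rg(rgb):
    """Intensity-invariant chromaticity: (r, g) normalized by the channel sum."""
    r, g, b = rgb
    s = r + g + b
    if s == 0:
        return (1 / 3, 1 / 3)       # neutral value for black pixels
    return (r / s, g / s)

pixel = (200, 100, 50)
print(to_gray(pixel))               # one luminance value per pixel
print(to_normalized_rg(pixel))      # chromaticity, unchanged under uniform scaling
```

The normalized-rg model discards intensity, which is exactly why chromatic models can add robustness to illumination changes that a grayscale tracker cannot distinguish.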

  8. Error-backpropagation in temporally encoded networks of spiking neurons

    NARCIS (Netherlands)

    S.M. Bohte (Sander); J.A. La Poutré (Han); J.N. Kok (Joost)

    2000-01-01

    For a network of spiking neurons that encodes information in the timing of individual spike times, we derive a supervised learning rule, SpikeProp, akin to traditional error-backpropagation, and show how to overcome the discontinuities introduced by thresholding. With this algorithm,
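
The truncated abstract names the SpikeProp rule. A minimal single-neuron sketch of the idea follows, with assumed details (alpha-shaped spike-response kernel, numerical time grid, toy constants); it is not Bohte et al.'s full multi-layer algorithm:

```python
# Single-neuron SpikeProp-style sketch: nudge each weight so the neuron's
# first threshold crossing moves toward a target spike time. The kernel,
# constants, and learning rate are illustrative assumptions.
import math

TAU, THETA, DT = 3.0, 1.0, 0.001

def eps(s):
    """Alpha-shaped spike-response kernel: PSP evoked s ms after an input spike."""
    return (s / TAU) * math.exp(1 - s / TAU) if s > 0 else 0.0

def potential(weights, inputs, t):
    return sum(w * eps(t - ti) for w, ti in zip(weights, inputs))

def spike_time(weights, inputs, t_max=10.0):
    """First time the membrane potential crosses threshold (None if never)."""
    t = 0.0
    while t < t_max:
        if potential(weights, inputs, t) >= THETA:
            return t
        t += DT
    return None

def spikeprop_step(weights, inputs, target, lr=0.02):
    """One gradient step on E = (t_a - target)^2 / 2 with respect to the weights."""
    t_a = spike_time(weights, inputs)
    if t_a is None:                          # no output spike: grow all weights
        return [w + lr for w in weights]
    du = (potential(weights, inputs, t_a)
          - potential(weights, inputs, t_a - DT)) / DT   # slope at the crossing
    # Linearizing the threshold crossing gives dt_a/dw_i = -eps(t_a - t_i)/u'(t_a),
    # which is how SpikeProp sidesteps the discontinuity of the spike itself.
    return [w + lr * (t_a - target) * eps(t_a - ti) / du
            for w, ti in zip(weights, inputs)]

weights, inputs, target = [0.5, 0.5, 0.5], [0.0, 1.0, 2.0], 2.8
before = spike_time(weights, inputs)
for _ in range(80):
    weights = spikeprop_step(weights, inputs, target)
after = spike_time(weights, inputs)
print(before, after)   # the output spike time moves toward the 2.8 ms target
```

The linearization around the threshold crossing is the key trick: the spike time itself is discontinuous in the weights, but its local sensitivity is well defined wherever the potential crosses threshold with positive slope.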

  9. cDNA encoding a polypeptide including a hevein sequence

    Energy Technology Data Exchange (ETDEWEB)

    Raikhel, N.V.; Broekaert, W.F.; Namhai Chua; Kush, A.

    1993-02-16

    A cDNA clone (HEV1) encoding hevein was isolated via polymerase chain reaction (PCR) using mixed oligonucleotides corresponding to two regions of hevein as primers and a Hevea brasiliensis latex cDNA library as a template. HEV1 is 1,018 nucleotides long and includes an open reading frame of 204 amino acids.
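
A claim like "an open reading frame of 204 amino acids" rests on a standard scan for ATG..stop runs in each reading frame. A generic sketch (toy sequence shown, not the HEV1 sequence):

```python
# Generic forward-strand ORF finder: scan the three reading frames for the
# longest ATG-to-stop run. Reverse-strand frames are omitted for brevity.

STOPS = {"TAA", "TAG", "TGA"}

def longest_orf(seq):
    """Return (start_index, aa_length) of the longest forward-strand ORF."""
    best = (0, 0)
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if codon == "ATG" and start is None:
                start = i
            elif codon in STOPS and start is not None:
                aa = (i - start) // 3          # length excluding the stop codon
                if aa > best[1]:
                    best = (start, aa)
                start = None
    return best

toy = "GGATGAAATTTCCCTAAGG"   # ATG AAA TTT CCC TAA -> 4 codons before the stop
print(longest_orf(toy))
```

On the real clone, a 204-codon ORF (612 nt plus stop) fits comfortably inside the reported 1,018 nt insert, with the remainder as untranslated regions.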

  10. Identification and characterization of a gene encoding a putative ...

    Indian Academy of Sciences (India)

    2012-10-30

    Oct 30, 2012 ... Its encoding gene is an essential candidate for oil crops to ... higher level in leaves than in other organs (Kim and Huang 2004) ...

  11. RNAi-based silencing of genes encoding the vacuolar- ATPase ...

    African Journals Online (AJOL)

    RNAi-based silencing of genes encoding the vacuolar- ATPase subunits a and c in pink bollworm (Pectinophora gossypiella). Ahmed M. A. Mohammed. Abstract. RNA interference is a post- transcriptional gene regulation mechanism that is predominantly found in eukaryotic organisms. RNAi demonstrated a successful ...

  12. EGVII endoglucanase and nucleic acids encoding the same

    Science.gov (United States)

    Dunn-Coleman, Nigel [Los Gatos, CA; Goedegebuur, Frits [Vlaardingen, NL; Ward, Michael [San Francisco, CA; Yao, Jian [Sunnyvale, CA

    2009-05-05

    The present invention provides an endoglucanase nucleic acid sequence, designated egl7, and the corresponding EGVII amino acid sequence. The invention also provides expression vectors and host cells comprising a nucleic acid sequence encoding EGVII, recombinant EGVII proteins and methods for producing the same.

  13. Extraordinarily adaptive properties of the genetically encoded amino acids.

    Science.gov (United States)

    Ilardo, Melissa; Meringer, Markus; Freeland, Stephen; Rasulev, Bakhtiyor; Cleaves, H James

    2015-03-24

    Using novel advances in computational chemistry, we demonstrate that the set of 20 genetically encoded amino acids, used nearly universally to construct all coded terrestrial proteins, has been highly influenced by natural selection. We defined an adaptive set of amino acids as one whose members thoroughly cover relevant physico-chemical properties, or "chemistry space." Using this metric, we compared the encoded amino acid alphabet to random sets of amino acids. These random sets were drawn from a computationally generated compound library containing 1913 alternative amino acids that lie within the molecular weight range of the encoded amino acids. Sets that cover chemistry space better than the genetically encoded alphabet are extremely rare and energetically costly. Further analysis of more adaptive sets reveals common features and anomalies, and we explore their implications for synthetic biology. We present these computations as evidence that the set of 20 amino acids found within the standard genetic code is the result of considerable natural selection. The amino acids used for constructing coded proteins may represent a largely global optimum, such that any aqueous biochemistry would use a very similar set.
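
The "coverage of chemistry space" comparison can be sketched in miniature. Assumed stand-ins: three random toy property axes instead of the paper's descriptors (size, charge, hydrophobicity), and a simple range-times-evenness score instead of the authors' actual metric:

```python
# Toy version of the adaptive-set comparison: score a candidate set by how
# broadly and evenly it spans each normalized property axis, then ask how
# often a random same-sized set from the library scores as well.
import random

def axis_coverage(values):
    """Spread along one [0, 1] axis: range spanned, penalized by the
    largest gap between neighboring members (even coverage scores best)."""
    v = sorted(values)
    largest_gap = max(b - a for a, b in zip(v, v[1:]))
    return (v[-1] - v[0]) * (1 - largest_gap)

def set_coverage(members):
    """Mean axis coverage over the three toy property axes."""
    return sum(axis_coverage([m[k] for m in members]) for k in range(3)) / 3

random.seed(0)
library = [tuple(random.random() for _ in range(3)) for _ in range(200)]

# Greedily assemble a well-spread 8-member set ...
spread = [library[0]]
while len(spread) < 8:
    spread.append(max((c for c in library if c not in spread),
                      key=lambda c: set_coverage(spread + [c])))
score = set_coverage(spread)

# ... then compare it against random 8-member draws from the same library.
random_scores = [set_coverage(random.sample(library, 8)) for _ in range(1000)]
beaten = sum(s > score for s in random_scores)
print(score, beaten)
```

The paper's finding is the analogue of `beaten` being tiny: sets that out-cover the genetically encoded alphabet are extremely rare among random draws from the 1913-compound library.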

  14. Beta-glucosidase variants and polynucleotides encoding same

    Science.gov (United States)

    Wogulis, Mark; Harris, Paul; Osborn, David

    2017-06-27

    The present invention relates to beta-glucosidase variants, e.g. beta-glucosidase variants of a parent Family GH3A beta-glucosidase from Aspergillus fumigatus. The present invention also relates to polynucleotides encoding the beta-glucosidase variants; nucleic acid constructs, vectors, and host cells comprising the polynucleotides; and methods of using the beta-glucosidase variants.

  15. Polypeptides having beta-glucosidase activity and polynucleotides encoding same

    Science.gov (United States)

    Harris, Paul; Golightly, Elizabeth

    2012-11-27

    The present invention relates to isolated polypeptides having beta-glucosidase activity and isolated polynucleotides encoding the polypeptides. The invention also relates to nucleic acid constructs, vectors, and host cells comprising the polynucleotides as well as methods for producing and using the polypeptides.

  16. Polypeptides having cellobiohydrolase activity and polynucleotides encoding same

    Science.gov (United States)

    Morant, Marc D.; Harris, Paul

    2015-10-13

    The present invention relates to isolated polypeptides having cellobiohydrolase activity and isolated polynucleotides encoding the polypeptides. The invention also relates to nucleic acid constructs, vectors, and host cells comprising the polynucleotides as well as methods of producing and using the polypeptides.

  17. Polypeptides having cellulolytic enhancing activity and polynucleotides encoding same

    Science.gov (United States)

    Maiyuran, Suchindra; Kramer, Randall; Harris, Paul

    2013-10-29

    The present invention relates to isolated polypeptides having cellulolytic enhancing activity and isolated polynucleotides encoding the polypeptides. The invention also relates to nucleic acid constructs, vectors, and host cells comprising the polynucleotides as well as methods of producing and using the polypeptides.

  18. Polynucleotides encoding polypeptides having beta-glucosidase activity

    Science.gov (United States)

    Harris, Paul; Golightly, Elizabeth

    2010-03-02

    The present invention relates to isolated polypeptides having beta-glucosidase activity and isolated polynucleotides encoding the polypeptides. The invention also relates to nucleic acid constructs, vectors, and host cells comprising the polynucleotides as well as methods for producing and using the polypeptides.

  19. A spoonful of sugar: encoding and publishing in the classroom

    NARCIS (Netherlands)

    Spadini, E.

    2017-01-01

    This paper pursues the use of text encoding and digital publication in teaching textual criticism. A number of concepts and rules of textual criticism can be put into practice during a course thanks to the use of digital resources and tools. In dealing with original materials (text sources), the

  20. The implications of alternative splicing in the ENCODE protein complement

    DEFF Research Database (Denmark)

    Tress, Michael L.; Martelli, Pier Luigi; Frankish, Adam

    2007-01-01

    suggested as one explanation for the discrepancy between the number of human genes and functional complexity. Here, we carry out a detailed study of the alternatively spliced gene products annotated in the ENCODE pilot project. We find that alternative splicing in human genes is more frequent than has...

  1. Plasmid-encoded diacetyl (acetoin) reductase in Leuconostoc pseudomesenteroides

    DEFF Research Database (Denmark)

    Rattray, Fergal P; Myling-Petersen, Dorte; Larsen, Dianna

    2003-01-01

    A plasmid-borne diacetyl (acetoin) reductase (butA) from Leuconostoc pseudomesenteroides CHCC2114 was sequenced and cloned. Nucleotide sequence analysis revealed an open reading frame encoding a protein of 257 amino acids, which had high identity at the amino acid level to diacetyl (acetoin...

  2. RNAi-based silencing of genes encoding the vacuolar- ATPase ...

    African Journals Online (AJOL)

    2016-11-09


  3. Method of implementing frequency-encoded NOT, OR and NOR

    Indian Academy of Sciences (India)

    Method of implementing frequency-encoded NOT, OR and NOR logic operations using lithium niobate waveguide and reflecting semiconductor optical amplifiers. Sisir Kumar Garai; Sourangshu Mukhopadhyay. Pramana – Journal of Physics, Volume 73, Issue 5, November 2009, pp 901- ...

  4. Landsat Data Continuity Mission

    Science.gov (United States)


    2012-01-01

    The Landsat Data Continuity Mission (LDCM) is a partnership formed between the National Aeronautics and Space Administration (NASA) and the U.S. Geological Survey (USGS) to place the next Landsat satellite in orbit in January 2013. The Landsat era that began in 1972 will become a nearly 41-year global land record with the successful launch and operation of the LDCM. The LDCM will continue the acquisition, archiving, and distribution of multispectral imagery affording global, synoptic, and repetitive coverage of the Earth's land surfaces at a scale where natural and human-induced changes can be detected, differentiated, characterized, and monitored over time. The mission objectives of the LDCM are to (1) collect and archive medium resolution (30-meter spatial resolution) multispectral image data affording seasonal coverage of the global landmasses for a period of no less than 5 years; (2) ensure that LDCM data are sufficiently consistent with data from the earlier Landsat missions in terms of acquisition geometry, calibration, coverage characteristics, spectral characteristics, output product quality, and data availability to permit studies of land-cover and land-use change over time; and (3) distribute LDCM data products to the general public on a nondiscriminatory basis at no cost to the user.

  5. Continuously adjustable Pulfrich spectacles

    Science.gov (United States)

    Jacobs, Ken; Karpf, Ron

    2011-03-01

    A number of Pulfrich 3-D movies and TV shows have been produced, but the standard implementation has inherent drawbacks. The movie and TV industries have correctly concluded that the standard Pulfrich 3-D implementation is not a useful 3-D technique. Continuously Adjustable Pulfrich Spectacles (CAPS) is a new implementation of the Pulfrich effect that allows any scene containing movement in a standard 2-D movie (which is most scenes) to be optionally viewed in 3-D using inexpensive viewing specs. Recent scientific results in the fields of human perception, optoelectronics, video compression and video format conversion are translated into a new implementation of Pulfrich 3-D. CAPS uses these results to continuously adjust to the movie so that the viewing spectacles always conform to the optical density that optimizes the Pulfrich stereoscopic illusion. CAPS instantly provides 3-D immersion to any moving scene in any 2-D movie. Without the glasses, the movie appears as a normal 2-D image. CAPS works on any viewing device and with any distribution medium, and is appropriate for viewing Internet-streamed movies in 3-D.
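
The optical-density adjustment rests on simple geometry: a darker filter over one eye delays that eye's percept by some dt, so horizontal motion at velocity v acquires an interocular disparity of roughly v·dt. A back-of-envelope sketch with illustrative numbers (not taken from the paper):

```python
# Pulfrich geometry sketch: disparity induced by a one-eye processing delay,
# and the delay needed to hit a target disparity. Units and values are
# illustrative assumptions.

def pulfrich_disparity(velocity_px_s, delay_ms):
    """Apparent disparity (pixels) induced by a one-eye delay of delay_ms."""
    return velocity_px_s * delay_ms / 1000.0

def delay_for_target(velocity_px_s, target_disparity_px):
    """Delay (ms) the spectacles would need to induce the desired disparity."""
    return 1000.0 * target_disparity_px / velocity_px_s

print(pulfrich_disparity(400, 15))   # 400 px/s motion with a 15 ms delay -> 6 px
print(delay_for_target(400, 6))      # -> 15 ms
```

This is why continuous adjustment matters: as on-screen velocity changes from scene to scene, a fixed filter density yields a fluctuating disparity, while adapting the density (and hence the delay) keeps the stereoscopic illusion in a comfortable range.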

  6. If you watch it move, you'll recognize it in 3D: Transfer of depth cues between encoding and retrieval.

    Science.gov (United States)

    Papenmeier, Frank; Schwan, Stephan

    2016-02-01

    Viewing objects with stereoscopic displays provides additional depth cues through binocular disparity, supporting object recognition. So far, it was unknown whether this results from the representation of specific stereoscopic information in memory or a more general representation of an object's depth structure. Therefore, we investigated whether continuous object rotation acting as a depth cue during encoding results in a memory representation that can subsequently be accessed by stereoscopic information during retrieval. In Experiment 1, we found such transfer effects from continuous object rotation during encoding to stereoscopic presentations during retrieval. In Experiments 2a and 2b, we found that the continuity of object rotation is important because only continuous rotation and/or stereoscopic depth but not multiple static snapshots presented without stereoscopic information caused the extraction of an object's depth structure into memory. We conclude that an object's depth structure and not specific depth cues are represented in memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Identification and validation of human papillomavirus encoded microRNAs.

    Directory of Open Access Journals (Sweden)

    Kui Qian

    We report here the identification and validation of the first papillomavirus-encoded microRNAs expressed in human cervical lesions and cell lines. We established small RNA libraries from ten human papillomavirus-associated cervical lesions, including cancer, and two human papillomavirus-harboring cell lines. These libraries were sequenced using SOLiD 4 technology. We used the sequencing data to predict putative viral microRNAs and discovered nine putative papillomavirus-encoded microRNAs. Validation was performed for five candidates, four of which were successfully validated by qPCR from cervical tissue samples and cell lines: two were encoded by HPV 16, one by HPV 38 and one by HPV 68. The expression of HPV 16 microRNAs was further confirmed by in situ hybridization, and colocalization with p16INK4A was established. Prediction of cellular target genes of HPV 16-encoded microRNAs suggests that they may play a role in cell cycle, immune functions, cell adhesion and migration, development, and cancer. Two putative viral target sites for the two validated HPV 16 miRNAs were mapped to the E5 gene, one in the E1 gene, two in the L1 gene and one in the LCR region. This is the first report to show that papillomaviruses encode their own microRNA species. Importantly, microRNAs were found in libraries established from human cervical disease and carcinoma cell lines, and their expression was confirmed in additional tissue samples. To our knowledge, this is also the first paper to use in situ hybridization to show the expression of a viral microRNA in human tissue.

  8. Human visual system automatically encodes sequential regularities of discrete events.

    Science.gov (United States)

    Kimura, Motohiro; Schröger, Erich; Czigler, István; Ohira, Hideki

    2010-06-01

    For our adaptive behavior in a dynamically changing environment, an essential task of the brain is to automatically encode sequential regularities inherent in the environment into a memory representation. Recent studies in neuroscience have suggested that sequential regularities embedded in discrete sensory events are automatically encoded into a memory representation at the level of the sensory system. This notion is largely supported by evidence from investigations using auditory mismatch negativity (auditory MMN), an event-related brain potential (ERP) correlate of an automatic memory-mismatch process in the auditory sensory system. However, it is still largely unclear whether or not this notion can be generalized to other sensory modalities. The purpose of the present study was to investigate the contribution of the visual sensory system to the automatic encoding of sequential regularities using visual mismatch negativity (visual MMN), an ERP correlate of an automatic memory-mismatch process in the visual sensory system. To this end, we conducted a sequential analysis of visual MMN in an oddball sequence consisting of infrequent deviant and frequent standard stimuli, and tested whether the underlying memory representation of visual MMN generation contains only a sensory memory trace of standard stimuli (trace-mismatch hypothesis) or whether it also contains sequential regularities extracted from the repetitive standard sequence (regularity-violation hypothesis). The results showed that visual MMN was elicited by first deviant (deviant stimuli following at least one standard stimulus), second deviant (deviant stimuli immediately following first deviant), and first standard (standard stimuli immediately following first deviant), but not by second standard (standard stimuli immediately following first standard). These results are consistent with the regularity-violation hypothesis, suggesting that the visual sensory system automatically encodes sequential

  9. Task-selective memory effects for successfully implemented encoding strategies.

    Directory of Open Access Journals (Sweden)

    Eric D Leshikar

    Previous behavioral evidence suggests that instructed strategy use benefits associative memory formation in paired-associate tasks. Two such effective encoding strategies, visual imagery and sentence generation, facilitate memory through the production of different types of mediators (e.g., mental images and sentences). Neuroimaging evidence suggests that regions of the brain support memory reflecting the mental operations engaged at the time of study. That work, however, has not taken into account self-reported encoding task success (i.e., whether participants successfully generated a mediator). It is unknown, therefore, whether task-selective memory effects specific to each strategy might be found when encoding strategies are successfully implemented. In this experiment, participants studied pairs of abstract nouns under either visual imagery or sentence generation encoding instructions. At the time of study, participants reported their success at generating a mediator. Outside of the scanner, participants further reported the quality of the generated mediator (e.g., images, sentences) for each word pair. We observed task-selective memory effects for visual imagery in the left middle occipital gyrus, the left precuneus, and the lingual gyrus. No such task-selective effects were observed for sentence generation. Intriguingly, activity at the time of study in the left precuneus was modulated by the self-reported quality (vividness) of the generated mental images, with greater activity for trials given higher ratings of quality. These data suggest that regions of the brain support memory in accord with the encoding operations engaged at the time of study.

  10. Electroencephalographic brain dynamics of memory encoding in emotionally arousing context

    Directory of Open Access Journals (Sweden)

    Carlos Enrique Uribe

    2011-06-01

    Emotional content/context enhances declarative memory through modulation of encoding and retrieval mechanisms. At encoding, neurophysiological data have consistently demonstrated the subsequent memory effect in theta and gamma oscillations. Yet, the existing studies focused on the emotional content effect and left the emotional context effect unexplored. We hypothesized that theta and gamma oscillations show higher evoked/induced activity during the encoding of visual stimuli delivered in an emotionally arousing context. Twenty-five healthy volunteers underwent evoked potential recordings using a 21-scalp-electrode montage. They took an audiovisual test of emotional declarative memory, being randomly assigned to either an emotionally arousing or a neutral context. Visual stimulus presentation was used as the time-locking event. Grand-averages of the evoked potentials and evoked spectral perturbations were calculated for each volunteer. Evoked potentials showed a higher negative deflection from 80 to 140 ms for the emotional condition. This effect was observed over central, frontal and prefrontal locations bilaterally. Evoked theta power was higher in left parietal, central, frontal and prefrontal electrodes from -50 to 300 ms in the emotional condition. Evoked gamma power was higher in the emotional condition, with a spatial distribution that overlapped at some points with the theta topography. The early theta power increase could be related to expectancy induced by auditory information processing that facilitates visual encoding in emotional contexts. Together, our results suggest that declarative memory enhancement for both emotional content and emotional context is supported by similar neural mechanisms at encoding, and offer new evidence about the brain's processing of relevant environmental stimuli.

  11. MicroRNA-encoding long non-coding RNAs

    Directory of Open Access Journals (Sweden)

    Zhu Xiaopeng

    2008-05-01

    Background: Recent analysis of the mouse transcriptional data has revealed the existence of ~34,000 messenger-like non-coding RNAs (ml-ncRNAs). Whereas the functional properties of these ml-ncRNAs are beginning to be unravelled, no functional information is available for the large majority of these transcripts. Results: A few ml-ncRNAs have been shown to have genomic loci that overlap with microRNA loci, leading us to suspect that a fraction of ml-ncRNAs may encode microRNAs. We therefore developed an algorithm (PriMir) for specifically detecting potential microRNA-encoding transcripts in the entire set of 34,030 mouse full-length ml-ncRNAs. In combination with mouse-rat sequence conservation, this algorithm detected 97 (80 of them novel) strong miRNA-encoding candidates, and for 52 of these we obtained experimental evidence for the existence of their corresponding mature microRNA by microarray and stem-loop RT-PCR. Sequence analysis of the microRNA-encoding RNAs revealed an internal motif whose presence correlates strongly (R² = 0.9, P-value = 2.2 × 10⁻¹⁶) with the occurrence of stem-loops with characteristics of known pre-miRNAs, indicating the presence of a larger number of microRNA-encoding RNAs (from 300 up to 800) in the ml-ncRNA population. Conclusion: Our work highlights a unique group of ml-ncRNAs and offers clues to their functions.

  12. Continuous alcoholic fermentation

    Energy Technology Data Exchange (ETDEWEB)

    Smidrkal, M; Nejedly, A

    1956-01-01

    Results are given of investigations on the continuous production of ethanol on a laboratory and on a semi-commercial scale. The suggested devices are described in detail. Under constant conditions the production cycle required 12 to 17 days, the acidity being 4.0 to 4.15 ml 0.1 N NaOH/100 ml and the concentration of fermented wort 10.5 to 11%. The maximum production from 1 hl of fermentation space during 24 h was 8.67 l of absolute alcohol when the efflux was divided into several basins; when the efflux of sweet wort was collected into one basin only, the maximum production was 7.20 l of absolute alcohol. The amount of alcohol produced was 62.20 l/100 kg sugar.
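
The reported 62.20 l of alcohol per 100 kg of sugar can be checked against the theoretical hexose-fermentation yield (assumed stoichiometry C6H12O6 → 2 C2H5OH + 2 CO2 and ethanol density 0.789 kg/l; the record does not state which sugar basis the authors used):

```python
# Sanity check on the reported yield against the Gay-Lussac theoretical
# maximum for hexose fermentation. Basis assumptions are noted above.
M_GLUCOSE, M_ETHANOL = 180.16, 46.07   # g/mol
ETHANOL_DENSITY = 0.789                # kg/l at ~20 C

kg_ethanol_per_100kg_sugar = 100 * 2 * M_ETHANOL / M_GLUCOSE   # ~51.1 kg
theoretical_l = kg_ethanol_per_100kg_sugar / ETHANOL_DENSITY   # ~64.8 l
efficiency = 62.20 / theoretical_l                             # reported yield
print(round(theoretical_l, 1), round(100 * efficiency, 1))     # ~64.8 l, ~96%
```

On these assumptions the reported yield corresponds to roughly 96% of the theoretical maximum, which is plausible for a well-run continuous fermentation.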

  13. Spaces of continuous functions

    CERN Document Server

    Groenewegen, G L M

    2016-01-01

    The space C(X) of all continuous functions on a compact space X carries the structure of a normed vector space, an algebra and a lattice. On the one hand we study the relations between these structures and the topology of X, on the other hand we discuss a number of classical results according to which an algebra or a vector lattice can be represented as a C(X). Various applications of these theorems are given. Some attention is devoted to related theorems, e.g. the Stone Theorem for Boolean algebras and the Riesz Representation Theorem. The book is functional analytic in character. It does not presuppose much knowledge of functional analysis; it contains introductions into subjects such as the weak topology, vector lattices and (some) integration theory.
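
The interplay of the three structures runs through the sup-norm; as a reminder, the standard textbook identities (not results specific to this volume) are:

```latex
\|f\|_\infty = \sup_{x \in X} |f(x)|, \qquad
\|fg\|_\infty \le \|f\|_\infty \, \|g\|_\infty, \qquad
|f| = f \vee (-f),
```

so that $(C(X), \|\cdot\|_\infty)$ is a commutative Banach algebra and, under the pointwise order, a Banach lattice; the representation theorems mentioned above recover a compact $X$ from exactly these structures.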

  14. CORTICAL ENCODING OF SIGNALS IN NOISE: EFFECTS OF STIMULUS TYPE AND RECORDING PARADIGM

    Science.gov (United States)

    Billings, Curtis J.; Bennett, Keri O.; Molis, Michelle R.; Leek, Marjorie R.

    2010-01-01

    Objectives Perception-in-noise deficits have been demonstrated across many populations and listening conditions. Many factors contribute to successful perception of auditory stimuli in noise, including neural encoding in the central auditory system. Physiological measures such as cortical auditory evoked potentials (CAEPs) can provide a view of neural encoding at the level of the cortex that may inform our understanding of listeners’ abilities to perceive signals in the presence of background noise. In order to understand signal-in-noise neural encoding better, we set out to determine the effect of signal type, noise type, and evoking paradigm on the P1-N1-P2 complex. Design Tones and speech stimuli were presented to nine individuals in quiet, and in three background noise types: continuous speech spectrum noise, interrupted speech spectrum noise, and four-talker babble at a signal-to-noise ratio of −3 dB. In separate sessions, cortical auditory evoked potentials were evoked by a passive homogeneous paradigm (single repeating stimulus) and an active oddball paradigm. Results The results for the N1 component indicated significant effects of signal type, noise type, and evoking paradigm. While components P1 and P2 also had significant main effects of these variables, only P2 demonstrated significant interactions among these variables. Conclusions Signal type, noise type, and evoking paradigm all must be carefully considered when interpreting signal-in-noise evoked potentials. Furthermore, these data confirm the possible usefulness of CAEPs as an aid to understanding perception-in-noise deficits. PMID:20890206

  15. Hippocampal subfield and medial temporal cortical persistent activity during working memory reflects ongoing encoding

    Directory of Open Access Journals (Sweden)

    Rachel K Nauer

    2015-03-01

    Full Text Available Previous neuroimaging studies support a role for the medial temporal lobes (MTL) in maintaining novel stimuli over brief working memory (WM) delays, and suggest delay period activity predicts subsequent memory. Additionally, slice recording studies have demonstrated neuronal persistent spiking in entorhinal cortex (EC), perirhinal cortex (PrC), and hippocampus (CA1, CA3, subiculum). These data have led to computational models that suggest persistent spiking in parahippocampal regions could sustain neuronal representations of sensory information over many seconds. This mechanism may support both WM maintenance and encoding of information into long-term episodic memory. The goal of the current study was to use high-resolution fMRI to elucidate the contributions of the MTL cortices and hippocampal subfields to WM maintenance as it relates to later episodic recognition memory. We scanned participants while they performed a delayed match to sample task with novel scene stimuli, and assessed their memory for these scenes post-scan. We hypothesized stimulus-driven activation that persists into the delay period—a putative correlate of persistent spiking—would predict later recognition memory. Our results suggest sample and delay period activation in the parahippocampal cortex (PHC), PrC, and subiculum (extending into DG/CA3) and CA1 was linearly related to increases in subsequent memory strength. These data extend previous neuroimaging studies that have constrained their analysis to either the sample or delay period by modeling these together as one continuous ongoing encoding process, and support computational frameworks that predict persistent activity underlies both WM and episodic encoding.

  16. Continuous spinal anesthesia.

    Science.gov (United States)

    Moore, James M

    2009-01-01

    Continuous spinal anesthesia (CSA) is an underutilized technique in modern anesthesia practice. Compared with other techniques of neuraxial anesthesia, CSA allows incremental dosing of an intrathecal local anesthetic for an indefinite duration. Traditional single-shot spinal anesthesia, by contrast, usually involves larger doses, a finite and unpredictable duration, and greater potential for detrimental hemodynamic effects including hypotension, while epidural anesthesia via a catheter may produce lesser motor block and suboptimal anesthesia in sacral nerve root distributions. This review compares CSA with other anesthetic techniques and also describes the history of CSA, its clinical applications, concerns regarding neurotoxicity, and other pharmacologic implications of its use. CSA has seen a waxing and waning of its popularity in clinical practice since its initial description in 1907. After case reports of cauda equina syndrome were reported with the use of spinal microcatheters for CSA, these microcatheters were withdrawn from clinical practice in the United States but continued to be used in Europe with no further neurologic sequelae. Because only large-bore catheters may be used in the United States, CSA is usually reserved for elderly patients out of concern for the risk of postdural puncture headache in younger patients. However, even in younger patients, sometimes the unique clinical benefits and hemodynamic stability involved in CSA outweigh concerns regarding postdural puncture headache. Clinical scenarios in which CSA may be of particular benefit include patients with severe aortic stenosis undergoing lower extremity surgery and obstetric patients with complex heart disease. Perhaps more accurately termed fractional spinal anesthesia, CSA involves intermittent dosing of local anesthetic solution via an intrathecal catheter. Where traditional spinal anesthesia involves a single injection with a

  17. Cellular dynamical mechanisms for encoding the time and place of events along spatiotemporal trajectories in episodic memory.

    Science.gov (United States)

    Hasselmo, Michael E; Giocomo, Lisa M; Brandon, Mark P; Yoshida, Motoharu

    2010-12-31

    Understanding the mechanisms of episodic memory requires linking behavioral data and lesion effects to data on the dynamics of cellular membrane potentials and population interactions within brain regions. Linking behavior to specific membrane channels and neurochemicals has implications for therapeutic applications. Lesions of the hippocampus, entorhinal cortex and subcortical nuclei impair episodic memory function in humans and animals, and unit recording data from these regions in behaving animals indicate episodic memory processes. Intracellular recording in these regions demonstrates specific cellular properties including resonance, membrane potential oscillations and bistable persistent spiking that could underlie the encoding and retrieval of episodic trajectories. A model presented here shows how intrinsic dynamical properties of neurons could mediate the encoding of episodic memories as complex spatiotemporal trajectories. The dynamics of neurons allow encoding and retrieval of unique episodic trajectories in multiple continuous dimensions including temporal intervals, personal location, the spatial coordinates and sensory features of perceived objects and generated actions, and associations between these elements. The model also addresses how cellular dynamics could underlie unit firing data suggesting mechanisms for coding continuous dimensions of space, time, sensation and action. Copyright © 2010 Elsevier B.V. All rights reserved.

  18. Continuous Variable Quantum Key Distribution Using Polarized Coherent States

    Science.gov (United States)

    Vidiella-Barranco, A.; Borelli, L. F. M.

    We discuss a continuous variables method of quantum key distribution employing strongly polarized coherent states of light. The key encoding is performed using the variables known as Stokes parameters, rather than the field quadratures. Their quantum counterparts, the Stokes operators Ŝi (i=1,2,3), constitute a set of non-commuting operators, with the precision of simultaneous measurements of any pair of them limited by an uncertainty-like relation. Alice transmits a conveniently modulated two-mode coherent state, and Bob randomly measures one of the Stokes parameters of the incoming beam. After performing reconciliation and privacy amplification procedures, it is possible to distill a secret common key. We also consider a non-ideal situation, in which coherent states with thermal noise, instead of pure coherent states, are used for encoding.
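
    The Stokes parameters used for key encoding have simple classical counterparts. As a hedged illustration (not the paper's actual protocol), the following sketch computes them from the complex amplitudes of the two polarization modes, with expectation values standing in for the operators Ŝi:

    ```python
    def stokes(ax, ay):
        """Classical Stokes parameters from the complex field amplitudes of the
        x- and y-polarized modes (expectation values of the quantum operators)."""
        s0 = abs(ax) ** 2 + abs(ay) ** 2          # total intensity
        s1 = abs(ax) ** 2 - abs(ay) ** 2          # horizontal/vertical balance
        s2 = 2 * (ax.conjugate() * ay).real       # +45/-45 linear balance
        s3 = 2 * (ax.conjugate() * ay).imag       # circular polarization balance
        return s0, s1, s2, s3
    ```

    For a pure polarization state, s0² = s1² + s2² + s3², which is why modulating a pair of these variables carries the key information.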

  19. Wavelet-based Encoding Scheme for Controlling Size of Compressed ECG Segments in Telecardiology Systems.

    Science.gov (United States)

    Al-Busaidi, Asiya M; Khriji, Lazhar; Touati, Farid; Rasid, Mohd Fadlee; Mnaouer, Adel Ben

    2017-09-12

    One of the major issues in time-critical medical applications using wireless technology is the size of the payload packet, which is generally designed to be very small to improve the transmission process. Using small packets to transmit continuous ECG data is still costly. Thus, data compression is commonly used to reduce the huge amount of ECG data transmitted through telecardiology devices. In this paper, a new ECG compression scheme is introduced to ensure that the compressed ECG segments fit into the available limited payload packets, while maintaining a fixed compression ratio (CR) to preserve the diagnostic information. The scheme automatically divides the ECG block into segments, while keeping other compression parameters fixed. This scheme adopts the discrete wavelet transform (DWT) to decompose the ECG data, a bit-field preserving (BFP) method to preserve the quality of the DWT coefficients, and a modified run-length encoding (RLE) scheme to encode the coefficients. The proposed dynamic compression scheme showed promising results, with a percentage packet reduction (PR) of about 85.39% at low percentage root-mean-square difference (PRD) values of less than 1%. ECG records from the MIT-BIH Arrhythmia Database were used to test the proposed method. The simulation results showed promising performance that satisfies the needs of portable telecardiology systems, like the limited payload size and low power consumption.
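
    A minimal sketch of the decompose-then-encode pipeline described above, assuming a one-level Haar transform in place of the paper's DWT and a simplified zero-run RLE (the bit-field preserving step is omitted and all names are illustrative, not the authors' implementation):

    ```python
    def haar_dwt(x):
        """One level of the Haar wavelet transform: pairwise averages
        (approximation) and pairwise half-differences (detail)."""
        a = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]
        d = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]
        return a, d

    def rle_zeros(coeffs, eps=1e-3):
        """Simplified run-length encoding: coefficients below eps are treated
        as zero and runs of zeros collapse to ('Z', run_length) tokens."""
        out, run = [], 0
        for c in coeffs:
            if abs(c) < eps:
                run += 1
            else:
                if run:
                    out.append(('Z', run))
                    run = 0
                out.append(c)
        if run:
            out.append(('Z', run))
        return out
    ```

    Detail coefficients of a smooth ECG baseline are mostly near zero, so the RLE stage is where most of the packet-size reduction would come from.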

  20. Encoding technique for high data compaction in data bases of fusion devices

    International Nuclear Information System (INIS)

    Vega, J.; Cremy, C.; Sanchez, E.; Portas, A.; Dormido, S.

    1996-01-01

    At present, data requirements of hundreds of Mbytes/discharge are typical in devices such as JET, TFTR, DIII-D, etc., and these requirements continue to increase. With these rates, the amount of storage required to maintain discharge information is enormous. Compaction techniques are now essential to reduce storage. However, general compression techniques may distort signals, which is undesirable for fusion diagnostics. We have developed a general technique for data compression which is described here. The technique, which is based on delta compression, does not require an examination of the data as in delayed methods. Delta values are compacted according to general encoding forms which satisfy a prefix code property and which are defined prior to data capture. Several prefix codes, which are bit oriented and which have variable code lengths, have been developed. These encoding methods are independent of the signal analog characteristics and enable one to store undistorted signals. The technique has been applied to databases of the TJ-I tokamak and the TJ-IU torsatron. Compaction rates of over 80% with negligible computational effort were achieved. Computer programs were written in ANSI C, thus ensuring portability and easy maintenance. We also present an interpretation, based on information theory, of the high compression rates achieved without signal distortion. copyright 1996 American Institute of Physics
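
    The combination of delta compression with a bit-oriented, variable-length prefix code can be sketched as follows. The paper's actual code tables are not reproduced here, so this illustrative example (hypothetical helper names) uses a zigzag map plus the classic Elias-gamma prefix code to show how small deltas get short codewords while the signal stays lossless:

    ```python
    def elias_gamma(n):
        """Elias-gamma prefix code for a positive integer: (len-1) zero bits
        followed by the binary representation of n."""
        b = bin(n)[2:]
        return '0' * (len(b) - 1) + b

    def zigzag(d):
        """Map signed deltas to positive integers: 0->1, -1->2, 1->3, -2->4, ..."""
        return (d << 1) + 1 if d >= 0 else (-d) << 1

    def delta_encode(samples):
        """Lossless delta compression: first sample stored verbatim, each
        subsequent delta emitted as a variable-length prefix codeword."""
        bits = []
        prev = samples[0]
        for s in samples[1:]:
            bits.append(elias_gamma(zigzag(s - prev)))
            prev = s
        return samples[0], ''.join(bits)
    ```

    Because the codewords satisfy the prefix property, the bitstream can be decoded unambiguously without separators, which is what allows storing undistorted signals at high compaction rates.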

  1. Feleucins: Novel Bombinin Precursor-Encoded Nonapeptide Amides from the Skin Secretion of Bombina variegata

    Directory of Open Access Journals (Sweden)

    Bing Bai

    2014-01-01

    Full Text Available The first amphibian skin antimicrobial peptide (AMP) to be identified was named bombinin, reflecting its origin from the skin of the European yellow-bellied toad (Bombina variegata). Bombinins and their related peptides, the bombinin Hs, were subsequently reported from other bombinid toads. Molecular cloning of bombinin-encoding cDNAs from skin found that bombinins and bombinin Hs were coencoded on the same precursor proteins. Here, we report the molecular cloning of two novel cDNAs from a skin secretion-derived cDNA library of B. variegata whose open-reading frames each encode a novel bombinin (GIGGALLNVGKVALKGLAKGLAEHFANamide) and a C-terminally located single copy of a novel nonapeptide (FLGLLGGLLamide or FLGLIGSLLamide). These novel nonapeptides were named feleucin-BV1 and feleucin-BV2, respectively. The novel bombinin exhibited 89% identity to homologues from the toads B. microdeladigitora and B. maxima. The feleucins exhibited no identity with any amphibian AMP archived in databases. Synthetic feleucins exhibited weak activity against Staphylococcus aureus (128–256 mg/L), but feleucin-BV1 exhibited a synergistic action with the novel bombinin. The present report clearly demonstrates that the skin secretions of bombinid toads continue to represent a source of peptides of novel structure that could provide templates for the design of therapeutics.

  2. Color sensitive silicon photomultipliers with micro-cell level encoding for DOI PET detectors

    Science.gov (United States)

    Shimazoe, Kenji; Koyama, Akihiro; Takahashi, Hiroyuki; Ganka, Thomas; Iskra, Peter; Marquez Seco, Alicia; Schneider, Florian; Wiest, Florian

    2017-11-01

    There have been many studies on depth of interaction (DOI) identification for high resolution positron emission tomography (PET) systems, including those on phoswich detectors, double-sided readout, light sharing methods, and wavelength discrimination. The wavelength discrimination method utilizes the difference in wavelength of stacked scintillators and requires a color sensitive photodetector. Here, a new silicon photomultiplier (SiPM) coupled to a color filter (colorSiPM) was designed and fabricated for DOI detection. The fabricated colorSiPM has two anode readouts that are sensitive to blue and green light. The colorSiPM's response and DOI identification capability for stacked GAGG and LYSO crystals are characterized. The fabricated colorSiPM is sensitive enough to detect the 662 keV photopeak from a 137Cs source.

  3. Changes in the modulation of brain activity during context encoding vs. context retrieval across the adult lifespan.

    Science.gov (United States)

    Ankudowich, E; Pasvanis, S; Rajah, M N

    2016-10-01

    Age-related deficits in context memory may arise from neural changes underlying both encoding and retrieval of context information. Although age-related functional changes in the brain regions supporting context memory begin at midlife, little is known about the functional changes with age that support context memory encoding and retrieval across the adult lifespan. We investigated how age-related functional changes support context memory across the adult lifespan by assessing linear changes with age during successful context encoding and retrieval. Using functional magnetic resonance imaging (fMRI), we compared young, middle-aged and older adults during both encoding and retrieval of spatial and temporal details of faces. Multivariate behavioral partial least squares (B-PLS) analysis of fMRI data identified a pattern of whole-brain activity that correlated with a linear age term and a pattern of whole-brain activity that was associated with an age-by-memory phase (encoding vs. retrieval) interaction. Further investigation of this latter effect identified three main findings: 1) reduced phase-related modulation in bilateral fusiform gyrus, left superior/anterior frontal gyrus and right inferior frontal gyrus that started at midlife and continued to older age, 2) reduced phase-related modulation in bilateral inferior parietal lobule that occurred only in older age, and 3) changes in phase-related modulation in older but not younger adults in left middle frontal gyrus and bilateral parahippocampal gyrus that was indicative of age-related over-recruitment. We conclude that age-related reductions in context memory arise in midlife and are related to changes in perceptual recollection and changes in fronto-parietal retrieval monitoring. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.

  4. Dissociative effects of true and false recall as a function of different encoding strategies.

    Science.gov (United States)

    Goodwin, Kerri A

    2007-01-01

    Goodwin, Meissner, and Ericsson (2001) proposed a path model in which elaborative encoding predicted the likelihood of verbalisation of critical, nonpresented words at encoding, which in turn predicted the likelihood of false recall. The present study tested this model of false recall experimentally with a manipulation of encoding strategy and the implementation of the process-tracing technique of protocol analysis. Findings indicated that elaborative encoding led to more verbalisations of critical items during encoding than rote rehearsal of list items, but false recall rates were reduced under elaboration conditions (Experiment 2). Interestingly, false recall was more likely to occur when items were verbalised during encoding than not verbalised (Experiment 1), and participants tended to reinstate their encoding strategies during recall, particularly after elaborative encoding (Experiment 1). Theoretical implications for the interplay of encoding and retrieval processes of false recall are discussed.

  5. Continuous composite riser

    Energy Technology Data Exchange (ETDEWEB)

    Slagsvold, L. [ABB Offshore Systems (Norway)

    2002-12-01

    The industry is now looking at developing reserves in water depths of up to 3000 m (10000 ft). When moving into deeper waters the un-bonded flexible riser becomes very heavy and introduces large hang-off forces on the vessel. We are therefore investigating riser concepts incorporating new materials and with a simpler cross section that can be used for floating production. Advanced composite materials have properties such as low weight, high strength, good durability and very good fatigue performance. Composite materials are slowly being exploited in the oil industry; they are being prototype tested for drilling risers and small diameter lines. Part of the process for the industry to accept larger diameter production risers made out of composite materials is to understand both the advantages and limitations. A new continuous composite riser system is being developed which capitalizes on the technical benefits of this material while addressing the known constraints. The fully bonded riser is being developed for ultra deep waters and its characteristics include high temperature (160 deg C), high pressure (500 barg min), light weight, chemical resistance, good insulation, excellent fatigue characteristics and installation by reeling. The concept is based on the use of a thermoplastic liner together with a thermoplastic carbon fibre composite. This paper summarises the ongoing development, which has a goal to manufacture and qualify an 8'' riser, and includes all the steps in a production process from material qualification to the winding process and analytical modelling. (author)

  6. Continuous cell recycle fermentation

    Energy Technology Data Exchange (ETDEWEB)

    Warren, R K; Hill, G A; MacDonald, D G

    1991-10-01

    A cell recycle fermentor using a cross-flow membrane filter has been operated for extended periods. Productivities as high as 70 g/l/h were obtained at a cell concentration of 120 g/l and a product concentration of 70 g/l. The experimental results were then fitted to previously derived biokinetic models (Warren et al., 1990) for a continuous stirred tank fermentor. A good fit for growth rate was found and the cell yield was shown to decrease with product concentration. The product yield, however, was found to remain nearly constant at all cell, substrate and product concentrations. These biokinetics, along with a previous model for the membrane filter (Warren et al., 1991), were then used in a simulation to estimate the costs of producing ethanol in a large scale system. This simulation was optimized using a variant of the steepest descent method, from which a fermentor inlet substrate concentration of 150 g/l and a net cost of $CAN 253.5/1000 L ethanol were projected. From a sensitivity analysis, the yield parameters were found to have the greatest effect on ethanol net cost among the fermentor parameters, while the operating costs and the profit were found to be most sensitive to the wheat raw material cost and to the dried grains by-product value. 55 refs., 11 tabs., 7 figs.

  7. Efficiency turns the table on neural encoding, decoding and noise.

    Science.gov (United States)

    Deneve, Sophie; Chalk, Matthew

    2016-04-01

    Sensory neurons are usually described with an encoding model, for example, a function that predicts their response from the sensory stimulus using a receptive field (RF) or a tuning curve. However, central to theories of sensory processing is the notion of 'efficient coding'. We argue here that efficient coding implies a completely different neural coding strategy. Instead of a fixed encoding model, neural populations would be described by a fixed decoding model (i.e. a model reconstructing the stimulus from the neural responses). Because the population solves a global optimization problem, individual neurons are variable, but not noisy, and have no truly invariant tuning curve or receptive field. We review recent experimental evidence and implications for neural noise correlations, robustness and adaptation. Copyright © 2016. Published by Elsevier Ltd.

  8. Temporal texture of associative encoding modulates recall processes.

    Science.gov (United States)

    Tibon, Roni; Levy, Daniel A

    2014-02-01

    Binding aspects of an experience that are distributed over time is an important element of episodic memory. In the current study, we examined how the temporal complexity of an experience may govern the processes required for its retrieval. We recorded event-related potentials during episodic cued recall following pair associate learning of concurrently and sequentially presented object-picture pairs. Cued recall success effects over anterior and posterior areas were apparent in several time windows. In anterior locations, these recall success effects were similar for concurrently and sequentially encoded pairs. However, in posterior sites clustered over parietal scalp the effect was larger for the retrieval of sequentially encoded pairs. We suggest that anterior aspects of the mid-latency recall success effects may reflect working-with-memory operations or direct access recall processes, while more posterior aspects reflect recollective processes which are required for retrieval of episodes of greater temporal complexity. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. Direct Pathogenic Effects of HERV-encoded Proteins

    DEFF Research Database (Denmark)

    Hansen, Dorte Tranberg; Møller-Larsen, Anné; Petersen, Thor

    Background: Multiple sclerosis (MS) is a demyelinating, inflammatory disease of the central nervous system (CNS). MS is mediated by the immune system but the etiology of the disease remains unknown. Retroviral envelope (Env) proteins, encoded by human endogenous retroviruses (HERVs), are expressed in increased amounts on B cells from MS patients. Furthermore, the amount of anti-HERV antibodies in serum and cerebrospinal fluid from patients with MS is increased when compared with healthy controls. Aim: The overall aim of this project is to investigate the potential role of HERVs in the development of MS and the possible direct pathogenic effects of HERV-encoded Env proteins on the CNS. Methods: Construction and characterization of a panel of recombinant Env proteins has been initiated and their pathogenic potential will be investigated: fusogenic potential analyzed by flow cytometry and confocal microscopy. Analysis

  10. Encoded diffractive optics for full-spectrum computational imaging

    KAUST Repository

    Heide, Felix

    2016-09-16

    Diffractive optical elements can be realized as ultra-thin plates that offer significantly reduced footprint and weight compared to refractive elements. However, such elements introduce severe chromatic aberrations and are not variable, unless used in combination with other elements in a larger, reconfigurable optical system. We introduce numerically optimized encoded phase masks in which different optical parameters such as focus or zoom can be accessed through changes in the mechanical alignment of an ultra-thin stack of two or more masks. Our encoded diffractive designs are combined with a new computational approach for self-calibrating imaging (blind deconvolution) that can restore high-quality images several orders of magnitude faster than the state of the art without pre-calibration of the optical system. This co-design of optics and computation enables tunable, full-spectrum imaging using thin diffractive optics.

  11. A Novel Audio Cryptosystem Using Chaotic Maps and DNA Encoding

    Directory of Open Access Journals (Sweden)

    S. J. Sheela

    2017-01-01

    Full Text Available Chaotic maps have good potential in security applications due to their inherent characteristics relevant to cryptography. This paper introduces a new audio cryptosystem based on chaotic maps, a hybrid chaotic shift transform (HCST), and deoxyribonucleic acid (DNA) encoding rules. The scheme uses chaotic maps such as the two-dimensional modified Henon map (2D-MHM) and the standard map. The 2D-MHM, which has sophisticated chaotic behavior for an extensive range of control parameters, is used to perform the HCST. DNA encoding technology is used as an auxiliary tool which enhances the security of the cryptosystem. The performance of the algorithm is evaluated for various speech signals using different encryption/decryption quality metrics. The simulation and comparison results show that the algorithm can achieve good encryption results and is able to resist several cryptographic attacks. The various types of analysis revealed that the algorithm is suitable for narrow band radio communication and real-time speech encryption applications.
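
    As an illustration of the ingredients named above, the following sketch pairs one iteration of the classic Hénon map (the paper's 2D-MHM is a modified variant whose exact form is not given here) with one of the standard two-bits-per-base DNA encoding rules; both function names are illustrative:

    ```python
    def henon(x, y, a=1.4, b=0.3):
        """One iteration of the classic Henon map, a stand-in for the 2D-MHM.
        Iterated from a secret initial state, it yields a chaotic keystream."""
        return 1 - a * x * x + y, b * x

    def dna_encode(byte, rule=("A", "C", "G", "T")):
        """Encode one byte as four DNA symbols, two bits per symbol, using one
        of the eight standard complementary encoding rules."""
        return ''.join(rule[(byte >> shift) & 0b11] for shift in (6, 4, 2, 0))
    ```

    In a cryptosystem of this kind, the chaotic trajectory typically selects which DNA rule encodes each sample, so the symbolic representation itself becomes key-dependent.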

  12. Compression of surface myoelectric signals using MP3 encoding.

    Science.gov (United States)

    Chan, Adrian D C

    2011-01-01

    The potential of MP3 compression of surface myoelectric signals is explored in this paper. MP3 compression is a perceptual-based encoder scheme, used traditionally to compress audio signals. The ubiquity of MP3 compression (e.g., portable consumer electronics and internet applications) makes it an attractive option for remote monitoring and telemedicine applications. The effects of muscle site and contraction type are examined at different MP3 encoding bitrates. Results demonstrate that MP3 compression is sensitive to the myoelectric signal bandwidth, with larger signal distortion associated with myoelectric signals that have higher bandwidths. Compared to other myoelectric signal compression techniques reported previously (embedded zero-tree wavelet compression and adaptive differential pulse code modulation), MP3 compression demonstrates superior performance (i.e., lower percent residual differences for the same compression ratios).
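
    The percent residual difference used here as a distortion metric has a standard form; a minimal sketch, assuming the common non-normalized PRD definition:

    ```python
    import math

    def prd(original, reconstructed):
        """Percent root-mean-square difference between an original signal and
        its reconstruction after lossy compression (common biosignal metric)."""
        num = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
        den = sum(o ** 2 for o in original)
        return 100 * math.sqrt(num / den)
    ```

    A perfect reconstruction gives 0%, and larger values indicate more distortion, which is why higher-bandwidth myoelectric signals score worse at a given MP3 bitrate.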

  13. Intonational speech prosody encoding in the human auditory cortex.

    Science.gov (United States)

    Tang, C; Hamilton, L S; Chang, E F

    2017-08-25

    Speakers of all human languages regularly use intonational pitch to convey linguistic meaning, such as to emphasize a particular word. Listeners extract pitch movements from speech and evaluate the shape of intonation contours independent of each speaker's pitch range. We used high-density electrocorticography to record neural population activity directly from the brain surface while participants listened to sentences that varied in intonational pitch contour, phonetic content, and speaker. Cortical activity at single electrodes over the human superior temporal gyrus selectively represented intonation contours. These electrodes were intermixed with, yet functionally distinct from, sites that encoded different information about phonetic features or speaker identity. Furthermore, the representation of intonation contours directly reflected the encoding of speaker-normalized relative pitch but not absolute pitch. Copyright © 2017 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  14. Encoded diffractive optics for full-spectrum computational imaging

    KAUST Repository

    Heide, Felix; Fu, Qiang; Peng, Yifan; Heidrich, Wolfgang

    2016-01-01

    Diffractive optical elements can be realized as ultra-thin plates that offer significantly reduced footprint and weight compared to refractive elements. However, such elements introduce severe chromatic aberrations and are not variable, unless used in combination with other elements in a larger, reconfigurable optical system. We introduce numerically optimized encoded phase masks in which different optical parameters such as focus or zoom can be accessed through changes in the mechanical alignment of a ultra-thin stack of two or more masks. Our encoded diffractive designs are combined with a new computational approach for self-calibrating imaging (blind deconvolution) that can restore high-quality images several orders of magnitude faster than the state of the art without pre-calibration of the optical system. This co-design of optics and computation enables tunable, full-spectrum imaging using thin diffractive optics.

  15. Method for making an improved magnetic encoding device

    Science.gov (United States)

    Fox, Richard J.

    1981-01-01

    A magnetic encoding device and method for making the same are provided for use as magnetic storage mediums in identification control applications which give output signals from a reader that are of shorter duration and substantially greater magnitude than those of the prior art. Magnetic encoding elements are produced by uniformly bending wire or strip stock of a magnetic material longitudinally about a common radius to exceed the elastic limit of the material and subsequently mounting the material so that it is restrained in an unbent position on a substrate of nonmagnetic material. The elements are spot-welded to a substrate to form a binary coded array of elements according to a desired binary code. The coded substrate may be enclosed in a plastic laminate structure. Such devices may be used for security badges, key cards, and the like and may have many other applications.

  16. Local Patch Vectors Encoded by Fisher Vectors for Image Classification

    Directory of Open Access Journals (Sweden)

    Shuangshuang Chen

    2018-02-01

    Full Text Available The objective of this work is image classification, whose purpose is to group images into corresponding semantic categories. Four contributions are made as follows: (i) for computational simplicity and efficiency, we directly adopt raw image patch vectors as local descriptors, subsequently encoded by Fisher vectors (FV); (ii) for obtaining representative local features within the FV encoding framework, we compare and analyze three typical sampling strategies: random sampling, saliency-based sampling and dense sampling; (iii) in order to embed both global and local spatial information into local features, we construct an improved spatial geometry structure which shows good performance; (iv) for reducing the storage and CPU costs of high dimensional vectors, we adopt a new feature selection method based on supervised mutual information (MI), which chooses features by an importance sorting algorithm. We report experimental results on the STL-10 dataset. It shows very promising performance with this simple and efficient framework compared to conventional methods.
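
    The MI-based importance sorting in contribution (iv) can be sketched as follows, assuming discretized feature values and a plug-in estimate of mutual information from empirical frequencies (the authors' exact estimator is not specified; names are illustrative):

    ```python
    import math
    from collections import Counter

    def mutual_information(feature, labels):
        """Mutual information I(F;Y) between a discretized feature column and
        class labels, estimated from empirical joint frequencies (in bits)."""
        n = len(feature)
        pf = Counter(feature)
        py = Counter(labels)
        pfy = Counter(zip(feature, labels))
        mi = 0.0
        for (f, y), c in pfy.items():
            # p(f,y) * log2( p(f,y) / (p(f) p(y)) ), simplified to counts
            mi += (c / n) * math.log2(c * n / (pf[f] * py[y]))
        return mi

    def rank_features(features, labels):
        """Importance sorting: feature indices ordered by decreasing MI."""
        scores = [(mutual_information(col, labels), i)
                  for i, col in enumerate(features)]
        return [i for _, i in sorted(scores, reverse=True)]
    ```

    Keeping only the top-ranked dimensions is what reduces the storage and CPU cost of the high dimensional FV representation.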

  17. Enhanced tactile encoding and memory recognition in congenital blindness.

    Science.gov (United States)

    D'Angiulli, Amedeo; Waraich, Paul

    2002-06-01

    Several behavioural studies have shown that early-blind persons possess superior tactile skills. Since neurophysiological data show that early-blind persons recruit visual as well as somatosensory cortex to carry out tactile processing (cross-modal plasticity), blind persons' sharper tactile skills may be related to cortical re-organisation resulting from loss of vision early in their life. To examine the nature of blind individuals' tactile superiority and its implications for cross-modal plasticity, we compared the tactile performance of congenitally totally blind, low-vision and sighted children on a raised-line picture identification test and re-test, assessing effects of task familiarity, exploratory strategy and memory recognition. What distinguished the blind from the other children was higher memory recognition and higher tactile encoding associated with efficient exploration. These results suggest that enhanced perceptual encoding and recognition memory may be two cognitive correlates of cross-modal plasticity in congenital blindness.

  18. The olfactory tubercle encodes odor valence in behaving mice.

    Science.gov (United States)

    Gadziola, Marie A; Tylicki, Kate A; Christian, Diana L; Wesson, Daniel W

    2015-03-18

    Sensory information acquires meaning to adaptively guide behaviors. Despite odors mediating a number of vital behaviors, the components of the olfactory system responsible for assigning meaning to odors remain unclear. The olfactory tubercle (OT), a ventral striatum structure that receives monosynaptic input from the olfactory bulb, is uniquely positioned to transform odor information into behaviorally relevant neural codes. No information is available, however, on the coding of odors among OT neurons in behaving animals. In recordings from mice engaged in an odor discrimination task, we report that the firing rate of OT neurons robustly and flexibly encodes the valence of conditioned odors over identity, with rewarded odors evoking greater firing rates. This coding of rewarded odors occurs before behavioral decisions and represents subsequent behavioral responses. We predict that the OT is an essential region whereby odor valence is encoded in the mammalian brain to guide goal-directed behaviors. Copyright © 2015 the authors.

  19. Tetrahydrocannabinol (THC) impairs encoding but not retrieval of verbal information.

    Science.gov (United States)

    Ranganathan, Mohini; Radhakrishnan, Rajiv; Addy, Peter H; Schnakenberg-Martin, Ashley M; Williams, Ashley H; Carbuto, Michelle; Elander, Jacqueline; Pittman, Brian; Andrew Sewell, R; Skosnik, Patrick D; D'Souza, Deepak Cyril

    2017-10-03

    Cannabis and agonists of the brain cannabinoid receptor (CB1R) produce acute memory impairments in humans. However, the extent to which cannabinoids impair the component processes of encoding and retrieval has not been established in humans. The objective of this analysis was to determine whether the administration of Δ9-tetrahydrocannabinol (THC), the principal psychoactive constituent of cannabis, impairs encoding and/or retrieval of verbal information. Healthy subjects were recruited from the community. Subjects were administered the Rey Auditory Verbal Learning Test (RAVLT) either before administration of THC (experiment #1) (n=38) or while under the influence of THC (experiment #2) (n=57). Immediate and delayed recall on the RAVLT was compared. Subjects received intravenous THC, in a placebo-controlled, double-blind, randomized manner at doses known to produce behavioral and subjective effects consistent with cannabis intoxication. Total immediate recall, short delayed recall, and long delayed recall were reduced in a statistically significant manner only when the RAVLT was administered to subjects while they were under the influence of THC (experiment #2) and not when it was administered prior to THC (experiment #1). THC acutely interferes with encoding of verbal memory without interfering with retrieval. These data suggest that learning information prior to the use of cannabis or cannabinoids is not likely to disrupt recall of that information. Future studies will be necessary to determine whether THC impairs encoding of non-verbal information, to what extent THC impairs memory consolidation, and the role of other cannabinoids in the memory-impairing effects of cannabis. Cannabinoids, Neural Synchrony, and Information Processing (THC-Gamma) http://clinicaltrials.gov/ct2/show/study/NCT00708994 NCT00708994 Pharmacogenetics of Cannabinoid Response http://clinicaltrials.gov/ct2/show/NCT00678730 NCT00678730. Copyright © 2017. Published by Elsevier Inc.

  20. Encoding model of temporal processing in human visual cortex.

    Science.gov (United States)

    Stigliani, Anthony; Jeska, Brianna; Grill-Spector, Kalanit

    2017-12-19

    How is temporal information processed in human visual cortex? Visual input is relayed to V1 through segregated transient and sustained channels in the retina and lateral geniculate nucleus (LGN). However, there is intense debate as to how sustained and transient temporal channels contribute to visual processing beyond V1. The prevailing view associates transient processing predominately with motion-sensitive regions and sustained processing with ventral stream regions, while the opposing view suggests that both temporal channels contribute to neural processing beyond V1. Using fMRI, we measured cortical responses to time-varying stimuli and then implemented a two-temporal-channel encoding model to evaluate the contributions of each channel. Different from the general linear model of fMRI that predicts responses directly from the stimulus, the encoding approach first models neural responses to the stimulus from which fMRI responses are derived. This encoding approach not only predicts cortical responses to time-varying stimuli from milliseconds to seconds but also reveals differential contributions of temporal channels across visual cortex. Consistent with the prevailing view, motion-sensitive regions and adjacent lateral occipitotemporal regions are dominated by transient responses. However, ventral occipitotemporal regions are driven by both sustained and transient channels, with transient responses exceeding the sustained. These findings propose a rethinking of temporal processing in the ventral stream and suggest that transient processing may contribute to rapid extraction of the content of the visual input. Importantly, our encoding approach has vast implications, because it can be applied with fMRI to decipher neural computations in millisecond resolution in any part of the brain. Copyright © 2017 the Author(s). Published by PNAS.
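    As a rough illustration of the two-temporal-channel idea (not the authors' actual model), a sustained channel can be sketched as a low-pass filtered copy of the stimulus time course and a transient channel as a rectified derivative that responds only at onsets and offsets; the kernel shape and time constant here are arbitrary assumptions.

```python
import numpy as np

def two_channel_responses(stimulus, dt=0.01, tau=0.05):
    """Toy two-temporal-channel model: sustained = low-pass of the stimulus,
    transient = rectified temporal derivative (responds only to changes)."""
    t = np.arange(0.0, 0.5, dt)
    lowpass = np.exp(-t / tau)
    lowpass /= lowpass.sum()  # unit-gain exponential kernel
    sustained = np.convolve(stimulus, lowpass)[:len(stimulus)]
    transient = np.abs(np.diff(stimulus, prepend=stimulus[0]))
    return sustained, transient
```

    For a step stimulus, the transient channel produces a single impulse at the onset while the sustained channel ramps up and holds, mirroring the qualitative distinction the abstract describes.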

  1. Fungicidal activity of peptides encoded by immunoglobulin genes

    OpenAIRE

    Polonelli, Luciano; Ciociola, Tecla; Sperindè, Martina; Giovati, Laura; D'Adda, Tiziana; Galati, Serena; Travassos, Luiz R.; Magliani, Walter; Conti, Stefania

    2017-01-01

    Evidence from previous works disclosed the antimicrobial, antiviral, anti-tumour and/or immunomodulatory activity exerted, through different mechanisms of action, by peptides expressed in the complementarity-determining regions or even in the constant region of antibodies, independently from their specificity and isotype. Presently, we report the selection, from available databases, of peptide sequences encoded by immunoglobulin genes for the evaluation of their potential biological activitie...

  2. A deep auto-encoder model for gene expression prediction.

    Science.gov (United States)

    Xie, Rui; Wen, Jia; Quitadamo, Andrew; Cheng, Jianlin; Shi, Xinghua

    2017-11-17

    Gene expression is a key intermediate level through which genotypes lead to a particular trait. Gene expression is affected by various factors including genotypes of genetic variants. With the aim of delineating the genetic impact on gene expression, we build a deep auto-encoder model to assess how well genetic variants contribute to gene expression changes. This new deep learning model is a regression-based predictive model based on the MultiLayer Perceptron and Stacked Denoising Auto-encoder (MLP-SAE). The model is trained using a stacked denoising auto-encoder for feature selection and a multilayer perceptron framework for backpropagation. We further improve the model by introducing dropout to prevent overfitting and improve performance. To demonstrate the usage of this model, we apply MLP-SAE to a real genomic dataset with genotypes and gene expression profiles measured in yeast. Our results show that the MLP-SAE model with dropout outperforms other models including Lasso, Random Forests and the MLP-SAE model without dropout. Using the MLP-SAE model with dropout, we show that gene expression quantifications predicted by the model solely based on genotypes align well with true gene expression patterns. We provide a deep auto-encoder model for predicting gene expression from SNP genotypes. This study demonstrates that deep learning is appropriate for tackling another genomic problem, i.e., building predictive models to understand genotypes' contribution to gene expression. With the emerging availability of richer genomic data, we anticipate that deep learning models will play a bigger role in modeling and interpreting genomics.
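    A minimal, hand-rolled sketch of one denoising auto-encoder layer of the kind stacked in MLP-SAE (single hidden layer, tied weights, sigmoid units) might look as follows; the architecture details and hyperparameters are illustrative assumptions, not the published model.

```python
import numpy as np

def train_denoising_autoencoder(X, n_hidden, noise=0.1, lr=0.1, epochs=300, seed=0):
    """Single-layer denoising auto-encoder with tied weights and sigmoid
    units, trained by hand-coded gradient descent on squared error."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(0.0, 0.1, (d, n_hidden))
    b, c = np.zeros(n_hidden), np.zeros(d)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    losses = []
    for _ in range(epochs):
        Xn = X + rng.normal(0.0, noise, X.shape)   # corrupt the input
        H = sig(Xn @ W + b)                        # encode
        Xr = sig(H @ W.T + c)                      # decode (tied weights)
        # track reconstruction error on the *clean* input
        losses.append(np.mean((sig(sig(X @ W + b) @ W.T + c) - X) ** 2))
        dZ2 = (Xr - X) * Xr * (1.0 - Xr)           # output-layer delta
        dZ1 = (dZ2 @ W) * H * (1.0 - H)            # hidden-layer delta
        W -= lr * (Xn.T @ dZ1 + dZ2.T @ H) / n     # tied-weight gradient
        b -= lr * dZ1.mean(axis=0)
        c -= lr * dZ2.mean(axis=0)
    return W, b, c, losses
```

    Stacking several such layers and topping them with a regression output layer gives the MLP-SAE shape described in the abstract.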

  3. Rapid de novo shape encoding: a challenge to connectionist modeling

    OpenAIRE

    Greene, Ernest

    2018-01-01

    Neural network (connectionist) models are designed to encode image features and provide the building blocks for object and shape recognition. These models generally call for: a) initial diffuse connections from one neuron population to another, and b) training to bring about a functional change in those connections so that one or more high-tier neurons will selectively respond to a specific shape stimulus. Advanced models provide for translation, size, and rotation invariance. The present dis...

  4. Perceptual priming versus explicit memory: dissociable neural correlates at encoding.

    Science.gov (United States)

    Schott, Björn; Richardson-Klavehn, Alan; Heinze, Hans-Jochen; Düzel, Emrah

    2002-05-15

    We addressed the hypothesis that perceptual priming and explicit memory have distinct neural correlates at encoding. Event-related potentials (ERPs) were recorded while participants studied visually presented words at deep versus shallow levels of processing (LOPs). The ERPs were sorted by whether or not participants later used studied words as completions to three-letter word stems in an intentional memory test, and by whether or not they indicated that these completions were remembered from the study list. Study trials from which words were later used and not remembered (primed trials) and study trials from which words were later used and remembered (remembered trials) were compared to study trials from which words were later not used (forgotten trials), in order to measure the ERP difference associated with later memory (DM effect). Primed trials involved an early (200-450 msec) centroparietal negative-going DM effect. Remembered trials involved a late (900-1200 msec) right frontal, positive-going DM effect regardless of LOP, as well as an earlier (600-800 msec) central, positive-going DM effect during shallow study processing only. All three DM effects differed topographically, and, in terms of their onset or duration, from the extended (600-1200 msec) fronto-central, positive-going shift for deep compared with shallow study processing. The results provide the first clear evidence that perceptual priming and explicit memory have distinct neural correlates at encoding, consistent with Tulving and Schacter's (1990) distinction between brain systems concerned with perceptual representation versus semantic and episodic memory. They also shed additional light on encoding processes associated with later explicit memory, by suggesting that brain processes influenced by LOP set the stage for other, at least partially separable, brain processes that are more directly related to encoding success.

  5. Practical Programming with Higher-Order Encodings and Dependent Types

    DEFF Research Database (Denmark)

    Poswolsky, Adam; Schürmann, Carsten

    2008-01-01

    , tedious, and error-prone. In this paper, we describe the underlying calculus of Delphin. Delphin is a fully implemented functional-programming language supporting reasoning over higher-order encodings and dependent types, while maintaining the benefits of HOAS. More specifically, just as representations...... for instantiation from those that will remain uninstantiated, utilizing a variation of Miller and Tiu’s ∇-quantifier [1]....

  6. Creativity within constraints: Encoding, production, and representation in Battlestar Galactica

    OpenAIRE

    Adams, Philippa Rush

    2015-01-01

    Using the lens of feminist production studies, I examine the television show Battlestar Galactica through interviews with show creators to explore the contexts of production. Writers, actors, and producers experience constraints on their creativity. Media producers encode meaning into the texts they create and form their own understandings of social issues and stories. I examine the day-to-day processes and constraints operating in the work lives of television creators as well as their politi...

  7. Wavelength encoding technique for particle analyses in hematology analyzer

    Science.gov (United States)

    Rongeat, Nelly; Brunel, Patrick; Gineys, Jean-Philippe; Cremien, Didier; Couderc, Vincent; Nérin, Philippe

    2011-07-01

    The aim of this study is to combine multiple excitation wavelengths in order to improve the accuracy of fluorescence characterization of labeled cells. The experimental demonstration is realized with a hematology analyzer based on flow cytometry and a CW laser source emitting two visible wavelengths. A given optical encoding associated with each wavelength allows identification of fluorescence coming from specific fluorochromes and avoids the use of a noisy compensation method.

  8. Exhaustive search of linear information encoding protein-peptide recognition.

    Science.gov (United States)

    Kelil, Abdellali; Dubreuil, Benjamin; Levy, Emmanuel D; Michnick, Stephen W

    2017-04-01

    High-throughput in vitro methods have been extensively applied to identify linear information that encodes peptide recognition. However, these methods are limited in number of peptides, sequence variation, and length of peptides that can be explored, and often produce solutions that are not found in the cell. Despite the large number of methods developed to attempt addressing these issues, the exhaustive search of linear information encoding protein-peptide recognition has been so far physically unfeasible. Here, we describe a strategy, called DALEL, for the exhaustive search of linear sequence information encoded in proteins that bind to a common partner. We applied DALEL to explore binding specificity of SH3 domains in the budding yeast Saccharomyces cerevisiae. Using only the polypeptide sequences of SH3 domain binding proteins, we succeeded in identifying the majority of known SH3 binding sites previously discovered either in vitro or in vivo. Moreover, we discovered a number of sites with both non-canonical sequences and distinct properties that may serve ancillary roles in peptide recognition. We compared DALEL to a variety of state-of-the-art algorithms in the blind identification of known binding sites of the human Grb2 SH3 domain. We also benchmarked DALEL on curated biological motifs derived from the ELM database to evaluate the effect of increasing/decreasing the enrichment of the motifs. Our strategy can be applied in conjunction with experimental data of proteins interacting with a common partner to identify binding sites among them. Yet, our strategy can also be applied to any group of proteins of interest to identify enriched linear motifs or to exhaustively explore the space of linear information encoded in a polypeptide sequence. Finally, we have developed a webserver located at http://michnick.bcm.umontreal.ca/dalel, offering a user-friendly interface and providing different scenarios utilizing DALEL.
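    The core of such an exhaustive linear-motif search can be caricatured as k-mer counting with enrichment over a background set; this sketch is not the DALEL algorithm itself, and the pseudo-count smoothing is our own simplification.

```python
from collections import Counter

def enriched_kmers(binders, background, k=3, min_ratio=2.0, pseudo=0.5):
    """Enumerate every k-mer in sequences that bind a common partner and
    report those enriched over a background set (pseudo-count smoothed)."""
    def kmer_counts(seqs):
        c = Counter()
        for s in seqs:
            for i in range(len(s) - k + 1):
                c[s[i:i + k]] += 1
        return c
    fb, bg = kmer_counts(binders), kmer_counts(background)
    nb = sum(fb.values()) or 1
    ng = sum(bg.values()) or 1
    # enrichment ratio = binder frequency / smoothed background frequency
    ratio = lambda m: (fb[m] / nb) / ((bg[m] + pseudo) / ng)
    return {m: ratio(m) for m in fb if ratio(m) >= min_ratio}
```

    Applied to SH3-binder sequences, such a scan would surface proline-rich motifs like PxxP-style cores as strongly enriched k-mers.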

  9. Polarization encoded all-optical multi-valued shift operators

    Science.gov (United States)

    Roy, Jitendra Nath; Bhowmik, Panchatapa

    2014-08-01

    Polarization encoded multi-valued (both ternary and quaternary logic) shift operators have been designed using linear optical devices only. There are six ternary and 24 quaternary shift operators in a multi-valued system. These are also known as reversible literals. This circuit will be useful in future all-optical multi-valued logic based information processing systems. Different states of polarization of light are taken as different logic states.
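    The operator counts quoted above follow directly from treating a shift operator (reversible literal) as a permutation of the logic states, which a few lines of code can confirm:

```python
from itertools import permutations

def shift_operators(n_states):
    """All shift operators (reversible literals) of an n-valued logic:
    exactly the n! permutations of its logic states."""
    return [dict(zip(range(n_states), p)) for p in permutations(range(n_states))]
```

    A ternary system (states 0, 1, 2) yields 3! = 6 operators and a quaternary system 4! = 24, matching the numbers in the abstract; each operator is a bijection, which is what makes it reversible.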

  10. Fast multiwire proportional chamber data encoding system for proton tomography

    International Nuclear Information System (INIS)

    Brown, D.

    1979-01-01

    A data encoding system that rapidly generates the binary address of an active wire in a 512-wire multiwire proportional chamber has been developed. It can accept a second event on a different wire after a deadtime of 130 ns. The system incorporates preprocessing of the wire data to reject events that would require more than one wire address. It also includes a first-in, first-out memory to buffer the data flow
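    The encoder's job, reduced to its logical essence, is to turn a 512-wire hit pattern into a 9-bit wire address while rejecting multi-wire events; a software caricature of that behaviour (ignoring the 130 ns timing and the hardware FIFO buffer) could be:

```python
def encode_wire_address(hits, n_wires=512):
    """Return the 9-bit binary address of the single active wire, or None
    if the event fires zero or multiple wires (rejected by preprocessing)."""
    active = [i for i, h in enumerate(hits[:n_wires]) if h]
    if len(active) != 1:
        return None  # multi-wire (or empty) events would need >1 address
    return format(active[0], '09b')  # 9 bits suffice for 512 wires
```

    In the real system this mapping is done by fast combinational logic rather than software, but the input/output contract is the same.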

  11. Chaotically encoded particle swarm optimization algorithm and its applications

    International Nuclear Information System (INIS)

    Alatas, Bilal; Akin, Erhan

    2009-01-01

    This paper proposes a novel particle swarm optimization (PSO) algorithm, the chaotically encoded particle swarm optimization algorithm (CENPSOA), based on the notion of chaos numbers, which have recently been proposed to give a novel meaning to numbers. In this paper, various chaos arithmetic and evaluation measures that can be used in CENPSOA have been described. Furthermore, CENPSOA has been designed to be effectively utilized in data mining applications.
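    One common flavour of chaos-driven PSO replaces the pseudo-random factors in the velocity update with iterates of a chaotic map; the sketch below uses the logistic map for this purpose and illustrates the general idea, not CENPSOA's specific chaos-number arithmetic.

```python
import numpy as np

def logistic_map(x):
    """Logistic map in the fully chaotic regime (r = 4)."""
    return 4.0 * x * (1.0 - x)

def chaotic_pso(f, dim, n_particles=20, iters=100, lo=-5.0, hi=5.0):
    """PSO sketch whose stochastic factors come from a chaotic map."""
    rng = np.random.default_rng(1)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pval)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5
    r1, r2 = 0.345, 0.678  # chaotic seeds; avoid fixed points 0, 0.75, 1
    for _ in range(iters):
        r1, r2 = logistic_map(r1), logistic_map(r2)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[np.argmin(pval)].copy()
    return g, pval.min()
```

    On a simple sphere function the chaotic sequence plays the same exploratory role as a PRNG while remaining fully deterministic and reproducible.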

  12. Soybean phytase and nucleic acid encoding the same

    OpenAIRE

    1999-01-01

    Isolated soybean phytase polypeptides and isolated nucleic acids encoding soybean phytases are provided. The invention is also directed to nucleic acid expression constructs, vectors, and host cells comprising the isolated soybean phytase nucleic acids, as well as methods for producing recombinant and non-recombinant purified soybean phytase. The invention also relates to transgenic plants expressing the soybean phytase, particularly expression under seed-specific expression control elements.

  13. Encoding of Spatial Attention by Primate Prefrontal Cortex Neuronal Ensembles

    Science.gov (United States)

    Treue, Stefan

    2018-01-01

    Single neurons in the primate lateral prefrontal cortex (LPFC) encode information about the allocation of visual attention and the features of visual stimuli. However, how this compares to the performance of neuronal ensembles at encoding the same information is poorly understood. Here, we recorded the responses of neuronal ensembles in the LPFC of two macaque monkeys while they performed a task that required attending to one of two moving random dot patterns positioned in different hemifields and ignoring the other pattern. We found single units selective for the location of the attended stimulus as well as for its motion direction. To determine the coding of both variables in the population of recorded units, we used a linear classifier and progressively built neuronal ensembles by iteratively adding units according to their individual performance (best single units), or by iteratively adding units based on their contribution to the ensemble performance (best ensemble). For both methods, ensembles of relatively small sizes reached higher decoding performance relative to individual single units. However, the decoder reached similar performance using fewer neurons with the best ensemble building method compared with the best single units method. Our results indicate that neuronal ensembles within the LPFC encode more information about the attended spatial and nonspatial features of visual stimuli than individual neurons. They further suggest that efficient coding of attention can be achieved by relatively small neuronal ensembles characterized by a certain relationship between signal and noise correlation structures. PMID:29568798
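    The "best ensemble" construction described above, i.e., greedily adding the unit that most improves a linear decoder, can be sketched as follows; the nearest-centroid decoder and the two-class assumption are simplifications of ours, not the study's classifier.

```python
import numpy as np

def centroid_decoder_accuracy(X_train, y_train, X_test, y_test, units):
    """Accuracy of a minimal linear (nearest-centroid) decoder restricted to
    a subset of units; assumes binary labels 0/1."""
    Xa, Xb = X_train[:, units], X_test[:, units]
    cents = np.array([Xa[y_train == c].mean(axis=0) for c in (0, 1)])
    dists = ((Xb[:, None, :] - cents[None]) ** 2).sum(axis=2)
    return (np.argmin(dists, axis=1) == y_test).mean()

def build_best_ensemble(X_train, y_train, X_test, y_test, max_size):
    """Greedy 'best ensemble': at each step add the unit whose inclusion
    most improves ensemble decoding performance."""
    chosen, remaining = [], list(range(X_train.shape[1]))
    while remaining and len(chosen) < max_size:
        scores = [centroid_decoder_accuracy(X_train, y_train, X_test, y_test,
                                            chosen + [u]) for u in remaining]
        best = remaining[int(np.argmax(scores))]
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

    The "best single units" baseline would instead rank units once by individual accuracy and add them in that fixed order.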

  14. Attention promotes episodic encoding by stabilizing hippocampal representations

    Science.gov (United States)

    Aly, Mariam; Turk-Browne, Nicholas B.

    2016-01-01

    Attention influences what is later remembered, but little is known about how this occurs in the brain. We hypothesized that behavioral goals modulate the attentional state of the hippocampus to prioritize goal-relevant aspects of experience for encoding. Participants viewed rooms with paintings, attending to room layouts or painting styles on different trials during high-resolution functional MRI. We identified template activity patterns in each hippocampal subfield that corresponded to the attentional state induced by each task. Participants then incidentally encoded new rooms with art while attending to the layout or painting style, and memory was subsequently tested. We found that when task-relevant information was better remembered, the hippocampus was more likely to have been in the correct attentional state during encoding. This effect was specific to the hippocampus, and not found in medial temporal lobe cortex, category-selective areas of the visual system, or elsewhere in the brain. These findings provide mechanistic insight into how attention transforms percepts into memories. PMID:26755611

  15. New Complexity Scalable MPEG Encoding Techniques for Mobile Applications

    Directory of Open Access Journals (Sweden)

    Stephan Mietens

    2004-03-01

    Full Text Available Complexity scalability offers the advantage of one-time design of video applications for a large product family, including mobile devices, without the need to redesign the applications on the algorithmic level to meet the requirements of the different products. In this paper, we present complexity-scalable MPEG encoding having core modules with modifications for scalability. The interdependencies of the scalable modules and the system performance are evaluated. Experimental results show scalability giving a smooth change in complexity and corresponding video quality. Scalability is basically achieved by varying the number of computed DCT coefficients and the number of evaluated motion vectors, but other modules are designed such that they scale with the previous parameters. In the experiments using the “Stefan” sequence, the elapsed execution time of the scalable encoder, reflecting the computational complexity, can be gradually reduced to roughly 50% of its original execution time. The video quality scales between 20 dB and 48 dB PSNR with unity quantizer setting, and between 21.5 dB and 38.5 dB PSNR for different sequences targeting 1500 kbps. The implemented encoder and the scalability techniques can be successfully applied in mobile systems based on MPEG video compression.
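    The main complexity knob, computing only a subset of DCT coefficients, can be illustrated by keeping the first few coefficients in zig-zag order of a plain 8x8 DCT; the zig-zag variant used here is a simple stand-in, not the paper's exact scan.

```python
import numpy as np

def dct2(block):
    """Orthonormal 8x8 type-II DCT via the standard basis matrix."""
    N = 8
    n = np.arange(N)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    C[0, :] = np.sqrt(1.0 / N)
    return C @ block @ C.T

def scalable_dct(block, n_coeffs):
    """Keep only the first n_coeffs coefficients in a zig-zag-style order
    and zero the rest -- the complexity/quality knob in scalable encoding."""
    coeffs = dct2(block)
    # anti-diagonal ordering (simple zig-zag variant): low frequencies first
    order = sorted(((r, c) for r in range(8) for c in range(8)),
                   key=lambda rc: (rc[0] + rc[1],
                                   rc[1] if (rc[0] + rc[1]) % 2 else rc[0]))
    out = np.zeros_like(coeffs)
    for r, c in order[:n_coeffs]:
        out[r, c] = coeffs[r, c]
    return out
```

    A real scalable encoder would skip computing the dropped coefficients rather than zeroing them afterwards, which is where the execution-time saving comes from.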

  16. Copyright Protection of Color Imaging Using Robust-Encoded Watermarking

    Directory of Open Access Journals (Sweden)

    M. Cedillo-Hernandez

    2015-04-01

    Full Text Available In this paper we present a robust-encoded watermarking method applied to color images for copyright protection, which presents robustness against several geometric and signal processing distortions. The trade-off between payload, robustness and imperceptibility is a very important aspect which has to be considered when a watermark algorithm is designed. In our proposed scheme, prior to being embedded into the image, the watermark signal is encoded using a convolutional encoder, which can perform forward error correction, achieving better robustness performance. Then, the embedding process is carried out in the discrete cosine transform (DCT) domain of an image using the image normalization technique to accomplish robustness against geometric and signal processing distortions. The embedded watermark coded bits are extracted and decoded using the Viterbi algorithm. In order to determine the presence or absence of the watermark in the image we compute the bit error rate (BER) between the recovered and the original watermark data sequence. The quality of the watermarked image is measured using the well-known indices: Peak Signal to Noise Ratio (PSNR), Visual Information Fidelity (VIF) and Structural Similarity Index (SSIM). The color difference between the watermarked and original images is obtained by using the Normalized Color Difference (NCD) measure. The experimental results show that the proposed method provides good performance in terms of imperceptibility and robustness. A comparison among the proposed and previously reported methods based on different techniques is also provided.
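    The error-correction layer of such a scheme can be sketched with a textbook rate-1/2 convolutional encoder and the BER statistic used for the presence/absence decision; the generator polynomials (7, 5) are a conventional choice assumed here, not taken from the paper.

```python
def conv_encode(bits, g1=0b111, g2=0b101, K=3):
    """Rate-1/2 convolutional encoder, constraint length 3, generators (7, 5):
    each input bit produces two parity bits from the shift-register state."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)  # shift in the new bit
        out.append(bin(state & g1).count('1') % 2)   # parity for generator 1
        out.append(bin(state & g2).count('1') % 2)   # parity for generator 2
    return out

def bit_error_rate(sent, received):
    """Fraction of differing bits -- the watermark presence test statistic."""
    assert len(sent) == len(received)
    return sum(a != b for a, b in zip(sent, received)) / len(sent)
```

    Decoding the parity stream back to message bits is the job of the Viterbi algorithm mentioned in the abstract, which searches the encoder's state trellis for the most likely input sequence.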

  17. Towards predicting the encoding capability of MR fingerprinting sequences.

    Science.gov (United States)

    Sommer, K; Amthor, T; Doneva, M; Koken, P; Meineke, J; Börnert, P

    2017-09-01

    Sequence optimization and appropriate sequence selection is still an unmet need in magnetic resonance fingerprinting (MRF). The main challenge in MRF sequence design is the lack of an appropriate measure of the sequence's encoding capability. To find such a measure, three different candidates for judging the encoding capability have been investigated: local and global dot-product-based measures judging dictionary entry similarity as well as a Monte Carlo method that evaluates the noise propagation properties of an MRF sequence. Consistency of these measures for different sequence lengths as well as the capability to predict actual sequence performance in both phantom and in vivo measurements was analyzed. While the dot-product-based measures yielded inconsistent results for different sequence lengths, the Monte Carlo method was in a good agreement with phantom experiments. In particular, the Monte Carlo method could accurately predict the performance of different flip angle patterns in actual measurements. The proposed Monte Carlo method provides an appropriate measure of MRF sequence encoding capability and may be used for sequence optimization. Copyright © 2017 Elsevier Inc. All rights reserved.
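    The Monte Carlo idea, i.e., injecting noise into a dictionary entry many times and counting how often matching picks the wrong entry, can be sketched as follows; the dot-product matcher and toy sinusoid dictionary are illustrative assumptions.

```python
import numpy as np

def mrf_match(signal, dictionary):
    """Dictionary matching: pick the entry with the highest normalized
    dot product against the measured signal."""
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    s = signal / np.linalg.norm(signal)
    return int(np.argmax(d @ s))

def monte_carlo_encoding_score(dictionary, true_idx, sigma=0.05,
                               n_trials=500, seed=0):
    """Estimate how noise propagates into matching errors for one entry;
    a lower error rate indicates better sequence encoding capability."""
    rng = np.random.default_rng(seed)
    truth = dictionary[true_idx]
    errors = 0
    for _ in range(n_trials):
        noisy = truth + rng.normal(0.0, sigma, truth.shape)
        errors += (mrf_match(noisy, dictionary) != true_idx)
    return errors / n_trials
```

    Repeating this over all entries and noise levels yields a noise-propagation profile for the whole sequence, which is the quantity the abstract reports as predictive of actual measurement performance.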

  18. Bioinformatics analysis and detection of gelatinase encoded gene in Lysinibacillussphaericus

    Science.gov (United States)

    Repin, Rul Aisyah Mat; Mutalib, Sahilah Abdul; Shahimi, Safiyyah; Khalid, Rozida Mohd.; Ayob, Mohd. Khan; Bakar, Mohd. Faizal Abu; Isa, Mohd Noor Mat

    2016-11-01

    In this study, we performed bioinformatics analysis of the genome sequence of Lysinibacillus sphaericus (L. sphaericus) to identify the gene encoding gelatinase. L. sphaericus was isolated from soil and produces gelatinase that is species-specific to porcine and bovine gelatin. This bacterium therefore offers the possibility of producing enzymes specific to each species of meat. The main focus of this research was to identify the gelatinase-encoding gene in L. sphaericus using bioinformatics analysis of a partially sequenced genome. Three candidate genes were identified: gelatinase candidate gene 1 (P1), NODE_71_length_93919_cov_158.931839_21, 1563 base pairs (bp) in size with a 520-amino-acid sequence; gelatinase candidate gene 2 (P2), NODE_23_length_52851_cov_190.061386_17, 1776 bp in size with a 591-amino-acid sequence; and gelatinase candidate gene 3 (P3), NODE_106_length_32943_cov_169.147919_8, 1701 bp in size with a 566-amino-acid sequence. Three pairs of oligonucleotide primers, named F1, R1, F2, R2, F3 and R3, were designed to target short cDNA sequences by PCR. The amplicons reliably yielded products of 1563 bp for candidate gene P1 and 1701 bp for candidate gene P3. Bioinformatics analysis of L. sphaericus thus identified candidate gelatinase-encoding genes.

  19. Face and object encoding under perceptual load: ERP evidence.

    Science.gov (United States)

    Neumann, Markus F; Mohamed, Tarik N; Schweinberger, Stefan R

    2011-02-14

    According to the perceptual load theory, processing of a task-irrelevant distractor is abolished when attentional resources are fully consumed by task-relevant material. As an exception, however, famous faces have been shown to elicit repetition modulations in event-related potentials - an N250r - despite high load at initial presentation, suggesting preserved face-encoding. Here, we recorded N250r repetition modulations by unfamiliar faces, hands, and houses, and tested face specificity of preserved encoding under high load. In an immediate (S1-S2) repetition priming paradigm, participants performed a letter identification task on S1 by indicating whether an "X" vs. "N" was among 6 different (high load condition) or 6 identical (low load condition) letters. Letter strings were superimposed on distractor faces, hands, or houses. Subsequent S2 probes were either identical repetitions of S1 distractors, non-repeated exemplars from the same category, or infrequent butterflies, to which participants responded. Independent of attentional load at S1, an occipito-temporal N250r was found for unfamiliar faces. In contrast, no repetition-related neural modulation emerged for houses or hands. This strongly suggests that a putative face-selective attention module supports encoding under high load, and that similar mechanisms are unavailable for other natural or artificial objects. Copyright © 2010 Elsevier Inc. All rights reserved.

  20. Adult ADHD and working memory: neural evidence of impaired encoding.

    Science.gov (United States)

    Kim, Soyeon; Liu, Zhongxu; Glizer, Daniel; Tannock, Rosemary; Woltering, Steven

    2014-08-01

    To investigate neural and behavioural correlates of visual encoding during a working memory (WM) task in young adults with and without Attention-Deficit/Hyperactivity Disorder (ADHD). A sample of 30 college students currently meeting a diagnosis of ADHD and 25 typically developing students, matched on age and gender, performed a delayed match-to-sample task with low and high memory load conditions. Dense-array electroencephalography was recorded. Specifically, the P3, an event related potential (ERP) associated with WM, was examined because of its relation with attentional allocation during WM. Task performance (accuracy, reaction time) as well as performance on other neuropsychological tasks of WM was analyzed. Neural differences were found between the groups. Specifically, the P3 amplitude was smaller in the ADHD group compared to the comparison group for both load conditions at parietal-occipital sites. Lower scores on behavioural working memory tasks were suggestive of impaired behavioural WM performance in the ADHD group. Findings from this study provide the first evidence of neural differences in the encoding stage of WM in young adults with ADHD, suggesting ineffective allocation of attentional resources involved in encoding of information in WM. These findings, reflecting alternate neural functioning of WM, may explain some of the difficulties related to WM functioning that college students with ADHD report in their everyday cognitive functioning. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  1. Distinct Reward Properties are Encoded via Corticostriatal Interactions.

    Science.gov (United States)

    Smith, David V; Rigney, Anastasia E; Delgado, Mauricio R

    2016-02-02

    The striatum serves as a critical brain region for reward processing. Yet, understanding the link between striatum and reward presents a challenge because rewards are composed of multiple properties. Notably, affective properties modulate emotion while informative properties help obtain future rewards. We approached this problem by emphasizing affective and informative reward properties within two independent guessing games. We found that both reward properties evoked activation within the nucleus accumbens, a subregion of the striatum. Striatal responses to informative, but not affective, reward properties predicted subsequent utilization of information for obtaining monetary reward. We hypothesized that activation of the striatum may be necessary but not sufficient to encode distinct reward properties. To investigate this possibility, we examined whether affective and informative reward properties were differentially encoded in corticostriatal interactions. Strikingly, we found that the striatum exhibited dissociable connectivity patterns with the ventrolateral prefrontal cortex, with increasing connectivity for affective reward properties and decreasing connectivity for informative reward properties. Our results demonstrate that affective and informative reward properties are encoded via corticostriatal interactions. These findings highlight how corticostriatal systems contribute to reward processing, potentially advancing models linking striatal activation to behavior.

  2. Learning Spatiotemporally Encoded Pattern Transformations in Structured Spiking Neural Networks.

    Science.gov (United States)

    Gardner, Brian; Sporea, Ioana; Grüning, André

    2015-12-01

    Information encoding in the nervous system is supported through the precise spike timings of neurons; however, an understanding of the underlying processes by which such representations are formed in the first place remains an open question. Here we examine how multilayered networks of spiking neurons can learn to encode for input patterns using a fully temporal coding scheme. To this end, we introduce a new supervised learning rule, MultilayerSpiker, that can train spiking networks containing hidden layer neurons to perform transformations between spatiotemporal input and output spike patterns. The performance of the proposed learning rule is demonstrated in terms of the number of pattern mappings it can learn, the complexity of network structures it can be used on, and its classification accuracy when using multispike-based encodings. In particular, the learning rule displays robustness against input noise and can generalize well on an example data set. Our approach contributes to both a systematic understanding of how computations might take place in the nervous system and a learning rule that displays strong technical capability.
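The fully temporal code the abstract refers to can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron, the standard spiking unit such networks are built from. This is a generic sketch, not the authors' MultilayerSpiker rule; the function name and parameter values are illustrative only.

```python
def lif_spike_times(input_current, dt=1.0, tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron and return its spike times.

    The information the neuron carries is the precise timing of the spikes
    in the returned list -- a fully temporal code.
    """
    v = 0.0
    spikes = []
    for step, i_in in enumerate(input_current):
        # Euler step of the membrane equation dv/dt = (-v + I(t)) / tau
        v += dt * (-v + i_in) / tau
        if v >= v_thresh:
            spikes.append(step * dt)  # record the spike time
            v = v_reset               # reset after firing
    return spikes

# A constant suprathreshold input yields a regular spike train; a learning
# rule like the one described above must shape such spike timings into
# target output patterns.
print(lif_spike_times([2.0] * 50))
```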

  3. ENCODE: A Sourcebook of Epigenomes and Chromatin Language

    Directory of Open Access Journals (Sweden)

    Maryam Yavartanoo

    2013-03-01

Until recently, since the Human Genome Project, the general view has been that the majority of the human genome is composed of junk DNA and has little or no selective advantage to the organism. Now we know that this conclusion is an oversimplification. In April 2003, the National Human Genome Research Institute (NHGRI) launched an international research consortium called the Encyclopedia of DNA Elements (ENCODE) to uncover non-coding functional elements in the human genome. The result of this project has identified a set of new DNA regulatory elements, based on novel relationships among chromatin accessibility, histone modifications, nucleosome positioning, DNA methylation, transcription, and the occupancy of sequence-specific factors. The project gives us new insights into the organization and regulation of the human genome and epigenome. Here, we sought to summarize particular aspects of the ENCODE project and highlight the features and data that have recently been released. At the end of this review, we have summarized a case study we conducted using the ENCODE epigenome data.

  4. Contribution of stress and sex hormones to memory encoding.

    Science.gov (United States)

    Merz, Christian J

    2017-08-01

Distinct stages of the menstrual cycle and the intake of oral contraceptives (OC) affect sex hormone levels, stress responses, and memory processes critically involved in the pathogenesis of mental disorders. To characterize the interaction of sex and stress hormones on memory encoding, 30 men, 30 women in the early follicular phase of the menstrual cycle (FO), 30 women in the luteal phase (LU), and 30 OC women were exposed to either a stress (socially evaluated cold-pressor test) or a control condition prior to memory encoding and immediate recall of neutral, positive, and negative words. On the next day, delayed free and cued recall was tested. Sex hormone levels verified distinct estradiol, progesterone, and testosterone levels between groups. Stress increased blood pressure, cortisol concentrations, and ratings of stress appraisal in all four groups as well as cued recall performance of negative words in men. Stress exposure in OC women led to a blunted cortisol response and rather enhanced cued recall of neutral words. Thus, pre-encoding stress facilitated emotional cued recall performance in men only, but not in women with different sex hormone statuses, pointing to the pivotal role of circulating sex hormones in the modulation of learning and memory processes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Three-dimensional imagery by encoding sources of X rays

    International Nuclear Information System (INIS)

    Magnin, Isabelle

    1987-01-01

This research thesis addresses the theoretical and practical study of coded X-ray sources, and notably aims at exploring whether it would be possible to transform a standard digital radiography apparatus (such as those operated in hospital radiology departments) into a low-cost three-dimensional imaging system. The author first recalls the principle of conventional tomography and attempts at its improvement, and describes imaging techniques based on the use of coded apertures and source coding. She reports the modelling of an imaging system based on coded X-ray sources, and addresses the original notion of a three-dimensional response for such a system. The author then addresses the reconstruction method by considering the reconstruction of a planar object, of a multi-plane object, and of a real three-dimensional object. The frequency properties and the tomographic capabilities of various types of source codes are analysed. She describes a prototype tomography apparatus, and presents and discusses reconstructions of real three-dimensional phantoms. She finally introduces a new principle of dynamic three-dimensional radiography which implements an acquisition technique by 'gating code'. This acquisition principle should allow the reconstruction of volumes animated by periodic deformations, such as the heart [fr]

  6. Muscle synergies evoked by microstimulation are preferentially encoded during behavior

    Directory of Open Access Journals (Sweden)

    Simon Alexander Overduin

    2014-03-01

Electrical microstimulation studies provide some of the most direct evidence for the neural representation of muscle synergies. These synergies, i.e. coordinated activations of groups of muscles, have been proposed as building blocks for the construction of motor behaviors by the nervous system. Intraspinal or intracortical microstimulation has been shown to evoke muscle patterns that can be resolved into a small set of synergies similar to those seen in natural behavior. However, questions remain about the validity of microstimulation as a probe of neural function, particularly given the relatively long trains of suprathreshold stimuli used in these studies. Here, we examined whether muscle synergies evoked during intracortical microstimulation in two rhesus macaques were similarly encoded by nearby motor cortical units during a purely voluntary behavior involving object reach, grasp, and carry movements. At each microstimulation site we identified the synergy most strongly evoked among those extracted from muscle patterns evoked over all microstimulation sites. For each cortical unit recorded at the same microstimulation site, we then identified the synergy most strongly encoded among those extracted from muscle patterns recorded during the voluntary behavior. We found that the synergy most strongly evoked at an intracortical microstimulation site matched the synergy most strongly encoded by proximal units more often than expected by chance. These results suggest a common neural substrate for microstimulation-evoked motor responses and for the generation of muscle patterns during natural behaviors.

  7. Imaging dynamic redox processes with genetically encoded probes.

    Science.gov (United States)

    Ezeriņa, Daria; Morgan, Bruce; Dick, Tobias P

    2014-08-01

    Redox signalling plays an important role in many aspects of physiology, including that of the cardiovascular system. Perturbed redox regulation has been associated with numerous pathological conditions; nevertheless, the causal relationships between redox changes and pathology often remain unclear. Redox signalling involves the production of specific redox species at specific times in specific locations. However, until recently, the study of these processes has been impeded by a lack of appropriate tools and methodologies that afford the necessary redox species specificity and spatiotemporal resolution. Recently developed genetically encoded fluorescent redox probes now allow dynamic real-time measurements, of defined redox species, with subcellular compartment resolution, in intact living cells. Here we discuss the available genetically encoded redox probes in terms of their sensitivity and specificity and highlight where uncertainties or controversies currently exist. Furthermore, we outline major goals for future probe development and describe how progress in imaging methodologies will improve our ability to employ genetically encoded redox probes in a wide range of situations. This article is part of a special issue entitled "Redox Signalling in the Cardiovascular System." Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. QR encoded smart oral dosage forms by inkjet printing.

    Science.gov (United States)

    Edinger, Magnus; Bar-Shalom, Daniel; Sandler, Niklas; Rantanen, Jukka; Genina, Natalja

    2018-01-30

    The use of inkjet printing (IJP) technology enables the flexible manufacturing of personalized medicine with the doses tailored for each patient. In this study we demonstrate, for the first time, the applicability of IJP in the production of edible dosage forms in the pattern of a quick response (QR) code. This printed pattern contains the drug itself and encoded information relevant to the patient and/or healthcare professionals. IJP of the active pharmaceutical ingredient (API)-containing ink in the pattern of QR code was performed onto a newly developed porous and flexible, but mechanically stable substrate with a good absorption capacity. The printing did not affect the mechanical properties of the substrate. The actual drug content of the printed dosage forms was in accordance with the encoded drug content. The QR encoded dosage forms had a good print definition without significant edge bleeding. They were readable by a smartphone even after storage in harsh conditions. This approach of efficient data incorporation and data storage combined with the use of smart devices can lead to safer and more patient-friendly drug products in the future. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Identification of a Novel UTY‐Encoded Minor Histocompatibility Antigen

    DEFF Research Database (Denmark)

    Mortensen, B. K.; Rasmussen, A. H.; Larsen, Malene Erup

    2012-01-01

    Minor histocompatibility antigens (mHags) encoded by the Y‐chromosome (H‐Y‐mHags) are known to play a pivotal role in allogeneic haematopoietic cell transplantation (HCT) involving female donors and male recipients. We present a new H‐Y‐mHag, YYNAFHWAI (UTY139–147), encoded by the UTY gene...... obtained post‐HCT from male recipients of female donor grafts. In one of these recipients, a CD8+ T cell response was observed against a peptide stretch encoded by the UTY gene. Another bioinformatics tool, HLArestrictor, was used to identify the optimal peptide and HLA‐restriction element. Using peptide....../HLA tetramers, the specificity of the CD8+ T cell response was successfully validated as being HLA‐A*24:02‐restricted and directed against the male UTY139–147 peptide. Functional analysis of these T cells demonstrated male UTY139–147 peptide‐specific cytokine secretion (IFNγ, TNFα and MIP‐1β) and cytotoxic...

  10. Redefining continuing education delivery.

    Science.gov (United States)

    Carlton, K H

    1997-01-01

    individual health-care worker consumer. A number of national and world-wide trends are propelling rapid changes in the delivery modalities and types of emerging providers for health-care CE. Examples of these advanced telecommunications applications of CE opportunities for health-care personnel are becoming more prevalent in the literature and the pattern of CE marketing, and delivery evolution can be seen readily on the Internet. Continued program success and viability will belong to the individuals and organizations who are able to conceptualize and envision the positive transformations and opportunities that can occur from the evolving paradigm of education for the lifelong learner of the 21st century.

  11. Yeast PAH1-encoded phosphatidate phosphatase controls the expression of CHO1-encoded phosphatidylserine synthase for membrane phospholipid synthesis.

    Science.gov (United States)

    Han, Gil-Soo; Carman, George M

    2017-08-11

The PAH1-encoded phosphatidate phosphatase (PAP), which catalyzes the committed step for the synthesis of triacylglycerol in Saccharomyces cerevisiae, exerts a negative regulatory effect on the level of phosphatidate used for the de novo synthesis of membrane phospholipids. This raises the question whether PAP thereby affects the expression and activity of enzymes involved in phospholipid synthesis. Here, we examined the PAP-mediated regulation of CHO1-encoded phosphatidylserine synthase (PSS), which catalyzes the committed step for the synthesis of major phospholipids via the CDP-diacylglycerol pathway. The lack of PAP in the pah1Δ mutant highly elevated PSS activity, exhibiting a growth-dependent up-regulation from the exponential to the stationary phase of growth. Immunoblot analysis showed that the elevation of PSS activity results from an increase in the level of the enzyme encoded by CHO1. Truncation analysis and site-directed mutagenesis of the CHO1 promoter indicated that Cho1 expression in the pah1Δ mutant is induced through the inositol-sensitive upstream activation sequence (UASINO), a cis-acting element for the phosphatidate-controlled Henry (Ino2-Ino4/Opi1) regulatory circuit. The abrogation of Cho1 induction and PSS activity by a CHO1 UASINO mutation suppressed pah1Δ effects on lipid synthesis, nuclear/endoplasmic reticulum membrane morphology, and lipid droplet formation, but not on growth at elevated temperature. Loss of the DGK1-encoded diacylglycerol kinase, which converts diacylglycerol to phosphatidate, partially suppressed the pah1Δ-mediated induction of Cho1 and PSS activity. Collectively, these data showed that PAP activity controls the expression of PSS for membrane phospholipid synthesis. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.

  12. Continuous Re-Exposure to Environmental Sound Cues During Sleep Does Not Improve Memory for Semantically Unrelated Word Pairs

    OpenAIRE

    Donohue, Kelly C.; Spencer, Rebecca M. C.

    2011-01-01

Two recent studies illustrated that cues present during encoding can enhance recall if re-presented during sleep. This suggests an academic strategy. However, such effects have only been demonstrated with spatial learning, and cue presentation was isolated to slow-wave sleep (SWS). The goal of this study was to examine whether sounds enhance sleep-dependent consolidation of a semantic task if the sounds are re-presented continuously during sleep. Participants encoded a list of word pairs in the evening...

  13. Deep and shallow encoding effects on face recognition: an ERP study.

    Science.gov (United States)

    Marzi, Tessa; Viggiano, Maria Pia

    2010-12-01

Event-related potentials (ERPs) were employed to investigate whether and when brain activity related to face recognition varies according to the processing level undertaken at encoding. Recognition was assessed when preceded by a "shallow" (orientation judgement) or by a "deep" study task (occupation judgement). Moreover, we included a further manipulation by presenting faces at encoding either in the upright or inverted orientation. As expected, deeply encoded faces were recognized more accurately and more quickly with respect to shallowly encoded faces. The ERPs showed three main findings: i) as witnessed by more positive-going potentials for deeply encoded faces at early and later processing stages, face recognition was influenced by the processing strategy adopted during encoding; ii) structural encoding, indexed by the N170, turned out to be "cognitively penetrable", showing repetition priming effects for deeply encoded faces; iii) face inversion, by disrupting configural processing during encoding, influenced memory-related processes for deeply encoded faces and impaired the recognition of faces shallowly processed. The present study adds weight to the concept that the depth of processing during memory encoding affects retrieval. We found that successful retrieval following deep encoding involved both familiarity- and recollection-related processes showing from 500 ms a fronto-parietal distribution, whereas shallow encoding affected only earlier processing stages reflecting perceptual priming. Copyright © 2010 Elsevier B.V. All rights reserved.

  14. Learning science as a potential new source of understanding and improvement for continuing education and continuing professional development.

    Science.gov (United States)

    Van Hoof, Thomas J; Doyle, Terrence J

    2018-01-15

    Learning science is an emerging interdisciplinary field that offers educators key insights about what happens in the brain when learning occurs. In addition to explanations about the learning process, which includes memory and involves different parts of the brain, learning science offers effective strategies to inform the planning and implementation of activities and programs in continuing education and continuing professional development. This article provides a brief description of learning, including the three key steps of encoding, consolidation and retrieval. The article also introduces four major learning-science strategies, known as distributed learning, retrieval practice, interleaving, and elaboration, which share the importance of considerable practice. Finally, the article describes how learning science aligns with the general findings from the most recent synthesis of systematic reviews about the effectiveness of continuing medical education.

  15. [Generation continuity and integration].

    Science.gov (United States)

    Zakhvatkin, Iu A

    2008-01-01

Transformation of the cyclic morphoprocesses in Protista toward the terminal-cyclic morphoprocesses in Metazoa led to integration of the former's life cycles into the latter's ontogenesis and began to supply the newly emerging ecosystems with a regular income of mortmasses. According to the palintomic hypothesis of A.A. Zakhvatkin, it was the egg that became the means of metazoan generation continuity, and not the half set of organelles acquired by descendants of a divided maternal cell in Protozoa. The origin of Metazoa and of their ontogenesis was accomplished by hypertrophic distomy and subsequent palintomic division of the protist parental cell, these processes being comparable to oogenesis and oocyte division in the Metazoa. The division process in the most primitive metazoans, Leptolida and Calcarea, retained certain features of its palintomic nature that are clear in the Ctenophora; the latter, though specific, are in this respect most similar to the sponges and not to the Coelenterata, with which they were formerly united in the same phylum. The perfection of oogenesis, controlled by the maternal organism and leading to an increment of the nuclear-plasmic tension due to enrichment of the egg with yolk, promoted the embryonization of development and the formation of the egg's morphogenetic environment, providing for the earlier formation processes without participation of the parental recombined genotypes. With all this goes the far earlier appearance of symmetry elements of definitive forms in embryogenesis along the ascending trend from the lower Metazoa to the most advanced insects. The unordered correspondence between the polarity axis of the egg and the oral-aboral axis of the blastula-like larva (1) is replaced by protaxony (2), in which these axes coincide, all formation processes reaching their perfection in the homoquadrant spiral division of annelids, which became a means of ooplasm segregation.
Afterward, a heteroquadrant division and plagiaxony are developed in the course

  16. Continuation-like semantics for modeling structural process anomalies

    Directory of Open Access Journals (Sweden)

    Grewe Niels

    2012-09-01

Background: Biomedical ontologies usually encode knowledge that applies always or at least most of the time, that is, in normal circumstances. But for some applications, like phenotype ontologies, it is becoming increasingly important to represent information about aberrations from a norm. These aberrations may be modifications of physiological structures, but also modifications of biological processes. Methods: To facilitate precise definitions of process-related phenotypes, such as delayed eruption of the primary teeth or disrupted ocular pursuit movements, I introduce a modeling approach that draws inspiration from the use of continuations in the analysis of programming languages and apply a similar idea to ontological modeling. This approach characterises processes by describing their outcome up to a certain point and the way they will continue in the canonical case. Definitions of process types are then given in terms of their continuations, and anomalous phenotypes are defined by their differences from the canonical definitions. Results: The resulting model is capable of accurately representing structural process anomalies. It allows distinguishing between different anomaly kinds (delays, interruptions), gives identity criteria for interrupted processes, and explains why normal and anomalous process instances can be subsumed under a common type, thus establishing the connection between canonical and anomalous process-related phenotypes. Conclusion: This paper shows how to give semantically rich definitions of process-related phenotypes. These allow the application areas of phenotype ontologies to be expanded beyond literature annotation and the establishment of genotype-phenotype associations to the detection of anomalies in suitably encoded datasets.
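The continuation idea can be made concrete with a small sketch (illustrative Python, not the paper's ontological formalism; all class and field names here are hypothetical): a process instance is described by the stages realized so far plus the canonical continuation it should still run, and anomaly kinds fall out as different relations to that continuation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Process:
    realized: List[str]         # stages completed so far
    continuation: List[str]     # stages the canonical process would still run
    resumed_late: bool = False  # the continuation ran, but later than canonical
    aborted: bool = False       # the continuation will never be realized

def classify(p: Process) -> str:
    """Distinguish anomaly kinds by how the instance relates to its continuation."""
    if p.aborted:
        return "interrupted"  # same process identity, continuation unrealized
    if not p.continuation:
        return "delayed" if p.resumed_late else "canonical"
    return "ongoing"

# Delayed eruption of a tooth: the canonical continuation was eventually
# realized, just later than the norm.
eruption = Process(realized=["onset", "emergence"], continuation=[], resumed_late=True)
print(classify(eruption))  # → delayed
```

Note how an interrupted process keeps its identity: it is still an instance of the same type, individuated by the continuation it failed to realize.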

  17. Continuous Positive Airway Pressure (CPAP)

    Science.gov (United States)

Continuous Positive Airway Pressure (CPAP): Patient Health Information ... What Is CPAP? The most common and effective nonsurgical treatment for ...

  18. A Dynamic Continuation-Passing Style for Dynamic Delimited Continuations

    DEFF Research Database (Denmark)

    Biernacki, Dariusz; Danvy, Olivier; Millikin, Kevin Scott

    2005-01-01

We present a new abstract machine that accounts for dynamic delimited continuations. We prove the correctness of this new abstract machine with respect to a pre-existing, definitional abstract machine. Unlike this definitional abstract machine, the new abstract machine is in defunctionalized form, which makes it possible to state the corresponding higher-order evaluator. This evaluator is in continuation+state passing style and threads a trail of delimited continuations and a meta-continuation. Since this style accounts for dynamic delimited continuations, we refer to it as `dynamic continuation-passing style.' We show that the new machine operates more efficiently than the definitional one and that the notion of computation induced by the corresponding evaluator takes the form of a monad. We also present new examples and a new simulation of dynamic delimited continuations in terms of static ones.
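Plain (static) continuation-passing style, the baseline the paper extends, can be shown in a few lines. This is a generic illustration in Python and does not attempt the trail of delimited continuations or the meta-continuation of the dynamic style described above.

```python
# In CPS, no function returns directly: each takes an extra argument k,
# the continuation, and hands its result to it.

def add_cps(a, b, k):
    return k(a + b)

def square_cps(x, k):
    return k(x * x)

def sum_of_squares_cps(a, b, k):
    # The nested lambdas make the evaluation order explicit:
    # first a*a, then b*b, then their sum, then the outer continuation k.
    return square_cps(a, lambda a2:
           square_cps(b, lambda b2:
           add_cps(a2, b2, k)))

# Supplying the identity continuation extracts the final value.
print(sum_of_squares_cps(3, 4, lambda v: v))  # → 25
```

Defunctionalizing the lambdas above into a data structure plus an apply function is exactly the move that turns such an evaluator into an abstract machine.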

  19. New recombinant bacterium comprises a heterologous gene encoding glycerol dehydrogenase and/or an up-regulated native gene encoding glycerol dehydrogenase, useful for producing ethanol

    DEFF Research Database (Denmark)

    2010-01-01

    dehydrogenase encoding region of the bacterium, or is inserted into a phosphotransacetylase encoding region of the bacterium, or is inserted into an acetate kinase encoding region of the bacterium. It is operably linked to an inducible, a regulated or a constitutive promoter. The up-regulated glycerol......TECHNOLOGY FOCUS - BIOTECHNOLOGY - Preparation (claimed): Producing recombinant bacterium having enhanced ethanol production characteristics when cultivated in growth medium comprising glycerol comprises: (a) transforming a parental bacterium by (i) the insertion of a heterologous gene encoding...... glycerol dehydrogenase; and/or (ii) up-regulating a native gene encoding glycerol dehydrogenase; and (b) obtaining the recombinant bacterium. Preferred Bacterium: In the recombinant bacterium above, the inserted heterologous gene and/or the up-regulated native gene is encoding a glycerol dehydrogenase...

  20. Effect of tobacco craving cues on memory encoding and retrieval in smokers.

    Science.gov (United States)

    Heishman, Stephen J; Boas, Zachary P; Hager, Marguerite C; Taylor, Richard C; Singleton, Edward G; Moolchan, Eric T

    2006-07-01

Previous studies have shown that cue-elicited tobacco craving disrupted performance on cognitive tasks; however, no study has examined directly the effect of cue-elicited craving on memory encoding and retrieval. A distinction between encoding and retrieval has been reported such that memory is more impaired when attention is divided at encoding than at retrieval. This study tested the hypothesis that active imagery of smoking situations would impair encoding processes, but have little effect on retrieval. Imagery scripts (cigarette craving and neutral content) were presented either before presentation of a word list (encoding trials) or before word recall (retrieval trials). A working memory task at encoding and free recall of words were assessed. Results indicated that active imagery disrupted working memory on encoding trials, but not on retrieval trials. There was a trend toward impaired working memory following craving scripts compared with neutral scripts. These data support the hypothesis that the cognitive underpinnings of encoding and retrieval processes are distinct.

  1. Exoplanets: The Hunt Continues!

    Science.gov (United States)

    2001-04-01

Swiss Telescope at La Silla Very Successful. Summary: The intensive and exciting hunt for planets around other stars ( "exoplanets" ) is continuing with great success in both hemispheres. Today, an international team of astronomers from the Geneva Observatory and other research institutes [1] is announcing the discovery of no less than eleven new planetary companions to solar-type stars, HD 8574, HD 28185, HD 50554, HD 74156, HD 80606, HD 82943, HD 106252, HD 141937, HD 178911B, among which two new multi-planet systems. The masses of these new objects range from slightly less than to about 10 times the mass of the planet Jupiter [2]. The new detections are based on measured velocity changes of the stars [3], performed with the CORALIE spectrometer on the Swiss 1.2-m Leonhard Euler telescope at the ESO La Silla Observatory, as well as with instruments on telescopes at the Haute-Provence Observatory and on the Keck telescopes on Mauna Kea (Hawaii, USA). Some of the new planets are unusual: * a two-planet system (around the star HD 82943) in which one orbital period is nearly exactly twice as long as the other - cases like this (referred to as "orbital resonance") are well known in our own solar system; * another two-planet system (HD 74156), with a Jupiter-like planet and a more massive planet further out; * a planet with the most elongated orbit detected so far (HD 80606), moving between 5 and 127 million kilometers from the central star; * a giant planet moving in an orbit around its Sun-like central star that is very similar to that of the Earth and whose potential satellites (in theory, at least) might be "habitable". At this moment, there are 63 known exoplanet candidates with minimum masses below 10 Jupiter masses, and 67 known objects with minimum masses below 17 Jupiter masses. The present team of astronomers has detected about half of these. PR Photo 13a/01: Radial-velocity measurements of HD 82943, a two-planet system.
PR Photo 13b/01: Radial

  2. The random continued fraction transformation

    Science.gov (United States)

    Kalle, Charlene; Kempton, Tom; Verbitskiy, Evgeny

    2017-03-01

    We introduce a random dynamical system related to continued fraction expansions. It uses random combinations of the Gauss map and the Rényi (or backwards) continued fraction map. We explore the continued fraction expansions that this system produces, as well as the dynamical properties of the system.
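The two maps the system alternates between are one-liners, so the random dynamics are easy to sketch (illustrative Python; the function names are ours, and the maps are guarded at the endpoints 0 and 1, where they are undefined):

```python
import random

def gauss_map(x):
    # Regular continued fractions: x -> 1/x mod 1
    return (1.0 / x) % 1.0

def renyi_map(x):
    # Rényi ("backwards") continued fractions: x -> 1/(1 - x) mod 1
    return (1.0 / (1.0 - x)) % 1.0

def random_orbit(x, steps, seed=0):
    """Iterate a random composition of the two maps, as in the random system."""
    rng = random.Random(seed)
    orbit = [x]
    for _ in range(steps):
        if x == 0.0 or x == 1.0:
            break  # both maps are undefined here; stop the sketch orbit
        x = rng.choice([gauss_map, renyi_map])(x)
        orbit.append(x)
    return orbit

# Every point of the orbit stays in the unit interval.
print(random_orbit(0.7, 10))
```

Each random choice of map corresponds to a different continued fraction digit being emitted, which is why the system produces random continued fraction expansions of its starting point.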

  3. Impact of a Computer System and the Encoding Staff Organization on the Encoding Stays and on Health Institution Financial Production in France.

    Science.gov (United States)

    Sarazin, Marianne; El Merini, Amine; Staccini, Pascal

    2016-01-01

In France, the medicalization of information systems program (PMSI) is an essential tool for the management, planning, and funding of health care. The performance of encoding the data inherent to hospital stays has become a major challenge for health institutions. Some studies have highlighted the impact of the organizations set up on encoding quality and financial production. The aim of this study is to evaluate the impact of a computerized information system and a new staff organization on the treatment of encoded information.

  4. Low-dose Propofol–induced Amnesia Is Not due to a Failure of Encoding

    OpenAIRE

    Veselis, Robert A.; Pryor, Kane O.; Reinsel, Ruth A.; Mehta, Meghana; Pan, Hong; Johnson, Ray

    2008-01-01

Background: Propofol may produce amnesia by affecting encoding. The hypothesis that propofol weakens encoding was tested by measuring regional cerebral blood flow during verbal encoding. Methods: 17 volunteer participants (12 M, 30.4±6.5 years old) had regional cerebral blood flow measured using H2(15)O positron emission tomography during complex and simple encoding tasks (deep vs. shallow level of processing), to identify a region of interest in the left inferior prefrontal cortex...

  5. Less Effort, Better Results: How Does Music Act on Prefrontal Cortex in Older Adults during Verbal Encoding? An fNIRS Study

    Science.gov (United States)

    Ferreri, Laura; Bigand, Emmanuel; Perrey, Stephane; Muthalib, Makii; Bard, Patrick; Bugaiska, Aurélia

    2014-01-01

Several neuroimaging studies of cognitive aging revealed deficits in episodic memory abilities as a result of prefrontal cortex (PFC) limitations. Improving episodic memory performance despite PFC deficits is thus a critical issue in aging research. Listening to music stimulates cognitive performance in several non-purely musical activities (e.g., language and memory). Thus, music could represent a rich and helpful source during verbal encoding and therefore help subsequent retrieval. Furthermore, such benefit could be reflected in less demand of PFC, which is known to be crucial for encoding processes. This study aimed to investigate whether music may improve episodic memory in older adults while decreasing the PFC activity. Sixteen healthy older adults (μ = 64.5 years) encoded lists of words presented with or without a musical background while their dorsolateral prefrontal cortex (DLPFC) activity was monitored using an eight-channel continuous-wave near-infrared spectroscopy (NIRS) system (Oxymon Mk III, Artinis, The Netherlands). Behavioral results indicated a better source-memory performance for words encoded with music compared to words encoded with silence (p < 0.05). Functional NIRS data revealed a bilateral decrease of oxyhemoglobin values in the music encoding condition compared to the silence condition (p < 0.05), suggesting that music modulates the activity of the DLPFC during encoding in a less-demanding direction. Taken together, our results indicate that music can help older adults in memory performances by decreasing their PFC activity. These findings open new perspectives about music as a tool for episodic memory rehabilitation in special populations with memory deficits due to frontal lobe damage, such as Alzheimer’s patients. PMID:24860481

  6. Less Effort, Better Results: How Does Music Act on Prefrontal Cortex in Older Adults during Verbal Encoding? An fNIRS Study.

    Science.gov (United States)

    Ferreri, Laura; Bigand, Emmanuel; Perrey, Stephane; Muthalib, Makii; Bard, Patrick; Bugaiska, Aurélia

    2014-01-01

Several neuroimaging studies of cognitive aging revealed deficits in episodic memory abilities as a result of prefrontal cortex (PFC) limitations. Improving episodic memory performance despite PFC deficits is thus a critical issue in aging research. Listening to music stimulates cognitive performance in several non-purely musical activities (e.g., language and memory). Thus, music could represent a rich and helpful source during verbal encoding and therefore help subsequent retrieval. Furthermore, such benefit could be reflected in less demand of PFC, which is known to be crucial for encoding processes. This study aimed to investigate whether music may improve episodic memory in older adults while decreasing the PFC activity. Sixteen healthy older adults (μ = 64.5 years) encoded lists of words presented with or without a musical background while their dorsolateral prefrontal cortex (DLPFC) activity was monitored using an eight-channel continuous-wave near-infrared spectroscopy (NIRS) system (Oxymon Mk III, Artinis, The Netherlands). Behavioral results indicated a better source-memory performance for words encoded with music compared to words encoded with silence (p < 0.05). Functional NIRS data revealed bilateral decrease of oxyhemoglobin values in the music encoding condition compared to the silence condition (p < 0.05), suggesting that music modulates the activity of the DLPFC during encoding in a less-demanding direction. Taken together, our results indicate that music can help older adults in memory performances by decreasing their PFC activity. These findings open new perspectives about music as a tool for episodic memory rehabilitation in special populations with memory deficits due to frontal lobe damage, such as Alzheimer's patients.

  7. An Unusual Phage Repressor Encoded by Mycobacteriophage BPs.

    Directory of Open Access Journals (Sweden)

    Valerie M Villanueva

Full Text Available Temperate bacteriophages express transcription repressors that maintain lysogeny by down-regulating lytic promoters and confer superinfection immunity. Repressor regulation is critical to the outcome of infection (lysogenic or lytic growth) as well as prophage induction into lytic replication. Mycobacteriophage BPs and its relatives use an unusual integration-dependent immunity system in which the phage attachment site (attP) is located within the repressor gene (33), such that site-specific integration leads to synthesis of a prophage-encoded product (gp33103) that is 33 residues shorter at its C-terminus than the virally-encoded protein (gp33136). However, the shorter form of the repressor (gp33103) is stable and active in repression of the early lytic promoter PR, whereas the longer virally-encoded form (gp33136) is inactive due to targeted degradation via a C-terminal ssrA-like tag. We show here that both forms of the repressor bind similarly to the 33-34 intergenic regulatory region, and that BPs gp33103 is a tetramer in solution. The BPs gp33103 repressor binds to five regulatory regions spanning the BPs genome, and regulates four promoters including the early lytic promoter, PR. BPs gp33103 has a complex pattern of DNA recognition in which a full operator binding site contains two half sites separated by a variable spacer, and BPs gp33103 induces a DNA bend at the full operator site but not a half site. The operator site structure is unusual in that one half site corresponds to a 12 bp palindrome identified previously, but the other half site is a highly variable variant of the palindrome.

  8. Joint-layer encoder optimization for HEVC scalable extensions

    Science.gov (United States)

    Tsai, Chia-Ming; He, Yuwen; Dong, Jie; Ye, Yan; Xiu, Xiaoyu; He, Yong

    2014-09-01

Scalable video coding provides an efficient solution to support video playback on heterogeneous devices with various channel conditions in heterogeneous networks. SHVC is the latest scalable video coding standard based on the HEVC standard. To improve enhancement layer coding efficiency, inter-layer prediction including texture and motion information generated from the base layer is used for enhancement layer coding. However, the overall performance of the SHVC reference encoder is not fully optimized because rate-distortion optimization (RDO) processes in the base and enhancement layers are independently considered. It is difficult to directly extend the existing joint-layer optimization methods to SHVC due to the complicated coding tree block splitting decisions and in-loop filtering process (e.g., deblocking and sample adaptive offset (SAO) filtering) in HEVC. To solve those problems, a joint-layer optimization method is proposed by adjusting the quantization parameter (QP) to optimally allocate the bit resource between layers. Furthermore, to achieve a more appropriate resource allocation, the proposed method also considers the viewing probability of base and enhancement layers according to packet loss rate. Based on the viewing probability, a novel joint-layer RD cost function is proposed for joint-layer RDO encoding. The QP values of those coding tree units (CTUs) belonging to lower layers referenced by higher layers are decreased accordingly, and the QP values of those remaining CTUs are increased to keep total bits unchanged. Finally, the QP values with minimal joint-layer RD cost are selected to match the viewing probability. The proposed method was applied to the third temporal level (TL-3) pictures in the Random Access configuration. Simulation results demonstrate that the proposed joint-layer optimization method can improve coding performance by 1.3% for these TL-3 pictures compared to the SHVC reference encoder without joint-layer optimization.
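The weighting idea behind such a viewing-probability-based joint-layer RD cost can be sketched numerically. The cost function, candidate bit allocations, and all numbers below are invented for illustration and are not the paper's actual formulation; the sketch only shows why shifting bits toward the base layer becomes attractive as packet loss makes base-layer-only viewing more likely, while total bits stay unchanged.

```python
# Hypothetical sketch of a viewing-probability-weighted joint-layer RD cost.
# Each layer's Lagrangian cost (D + lambda*R) is weighted by the probability
# that the layer is the one actually viewed; a viewer of the enhancement
# layer (EL) also pays for the base-layer (BL) bits it references.

def joint_rd_cost(d_bl, r_bl, d_el, r_el, p_view_bl, lam):
    """Joint-layer RD cost: BL is viewed when the EL packet is lost."""
    p_view_el = 1.0 - p_view_bl
    return (p_view_bl * (d_bl + lam * r_bl)
            + p_view_el * (d_el + lam * (r_bl + r_el)))

# Toy bit allocations with the same total bit budget (r_bl + r_el = 200):
# "favor_bl" models lowering BL QPs and raising EL QPs to compensate.
candidates = {
    "neutral":  dict(d_bl=40.0, r_bl=100.0, d_el=20.0, r_el=100.0),
    "favor_bl": dict(d_bl=30.0, r_bl=130.0, d_el=24.0, r_el=70.0),
}
for p_loss in (0.0, 0.5):
    best = min(candidates,
               key=lambda k: joint_rd_cost(p_view_bl=p_loss, lam=0.1,
                                           **candidates[k]))
    print(f"packet loss {p_loss:.1f}: choose {best}")
    # at zero loss the EL-optimal split wins; at 50% loss, favoring BL wins
```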

  9. Dynamic encoding of speech sequence probability in human temporal cortex.

    Science.gov (United States)

    Leonard, Matthew K; Bouchard, Kristofer E; Tang, Claire; Chang, Edward F

    2015-05-06

    Sensory processing involves identification of stimulus features, but also integration with the surrounding sensory and cognitive context. Previous work in animals and humans has shown fine-scale sensitivity to context in the form of learned knowledge about the statistics of the sensory environment, including relative probabilities of discrete units in a stream of sequential auditory input. These statistics are a defining characteristic of one of the most important sequential signals humans encounter: speech. For speech, extensive exposure to a language tunes listeners to the statistics of sound sequences. To address how speech sequence statistics are neurally encoded, we used high-resolution direct cortical recordings from human lateral superior temporal cortex as subjects listened to words and nonwords with varying transition probabilities between sound segments. In addition to their sensitivity to acoustic features (including contextual features, such as coarticulation), we found that neural responses dynamically encoded the language-level probability of both preceding and upcoming speech sounds. Transition probability first negatively modulated neural responses, followed by positive modulation of neural responses, consistent with coordinated predictive and retrospective recognition processes, respectively. Furthermore, transition probability encoding was different for real English words compared with nonwords, providing evidence for online interactions with high-order linguistic knowledge. These results demonstrate that sensory processing of deeply learned stimuli involves integrating physical stimulus features with their contextual sequential structure. Despite not being consciously aware of phoneme sequence statistics, listeners use this information to process spoken input and to link low-level acoustic representations with linguistic information about word identity and meaning. Copyright © 2015 the authors 0270-6474/15/357203-12$15.00/0.

  10. Automatic encoding of polyphonic melodies in musicians and nonmusicians.

    Science.gov (United States)

    Fujioka, Takako; Trainor, Laurel J; Ross, Bernhard; Kakigi, Ryusuke; Pantev, Christo

    2005-10-01

    In music, multiple musical objects often overlap in time. Western polyphonic music contains multiple simultaneous melodic lines (referred to as "voices") of equal importance. Previous electrophysiological studies have shown that pitch changes in a single melody are automatically encoded in memory traces, as indexed by mismatch negativity (MMN) and its magnetic counterpart (MMNm), and that this encoding process is enhanced by musical experience. In the present study, we examined whether two simultaneous melodies in polyphonic music are represented as separate entities in the auditory memory trace. Musicians and untrained controls were tested in both magnetoencephalogram and behavioral sessions. Polyphonic stimuli were created by combining two melodies (A and B), each consisting of the same five notes but in a different order. Melody A was in the high voice and Melody B in the low voice in one condition, and this was reversed in the other condition. On 50% of trials, a deviant final (5th) note was played either in the high or in the low voice, and it either went outside the key of the melody or remained within the key. These four deviations occurred with equal probability of 12.5% each. Clear MMNm was obtained for most changes in both groups, despite the 50% deviance level, with a larger amplitude in musicians than in controls. The response pattern was consistent across groups, with larger MMNm for deviants in the high voice than in the low voice, and larger MMNm for in-key than out-of-key changes, despite better behavioral performance for out-of-key changes. The results suggest that melodic information in each voice in polyphonic music is encoded in the sensory memory trace, that the higher voice is more salient than the lower, and that tonality may be processed primarily at cognitive stages subsequent to MMN generation.

  11. Encoding of temporal intervals in the rat hindlimb sensorimotor cortex

    Directory of Open Access Journals (Sweden)

    Eric Bean Knudsen

    2012-09-01

Full Text Available The gradual buildup of neural activity over experimentally imposed delay periods, termed climbing activity, is well documented and is a potential mechanism by which interval time is encoded by distributed cortico-thalamico-striatal networks in the brain. Additionally, when multiple delay periods are incorporated, this activity has been shown to scale its rate of climbing proportional to the delay period. However, it remains unclear whether these patterns of activity occur within areas of motor cortex dedicated to hindlimb movement. Moreover, the effects of behavioral training (e.g., motor tasks under different reward conditions but with similar behavioral output) are not well addressed. To address this, we recorded activity from the hindlimb sensorimotor cortex (HLSMC) of two groups of rats performing a skilled hindlimb press task. In one group, rats were trained only to make a valid press within a finite window after cue presentation for reward (non-interval trained, nIT; n = 5), while rats in the second group were given duration-specific cues in which they had to make presses of either short or long duration to receive reward (interval trained, IT; n = 6). Using PETH analyses, we show that cells recorded from both groups showed climbing activity during the task in similar proportions (35% IT and 47% nIT); however, only climbing activity from IT rats was temporally scaled to press duration. Furthermore, using single trial decoding techniques (Wiener filter), we show that press duration can be inferred using climbing activity from IT animals (R = 0.61) significantly better than from nIT animals (R = 0.507; p < 0.01), suggesting IT animals encode press duration through temporally scaled climbing activity. Thus, if temporal intervals are behaviorally relevant, then the activity of climbing neurons is temporally scaled to encode the passage of time.

  12. Reducing constraints on quantum computer design by encoded selective recoupling

    International Nuclear Information System (INIS)

    Lidar, D.A.; Wu, L.-A.

    2002-01-01

The requirement of performing both single-qubit and two-qubit operations in the implementation of universal quantum logic often leads to very demanding constraints on quantum computer design. We show here how to eliminate the need for single-qubit operations in a large subset of quantum computer proposals: those governed by isotropic and XXZ, XY-type anisotropic exchange interactions. Our method employs an encoding of one logical qubit into two physical qubits, while logic operations are performed using an analogue of the NMR selective recoupling method.

  13. Spatio-Temporal Encoding in Medical Ultrasound Imaging

    DEFF Research Database (Denmark)

    Gran, Fredrik

    2005-01-01

In this dissertation two methods for spatio-temporal encoding in medical ultrasound imaging are investigated. The first technique is based on a frequency division approach. Here, the available spectrum of the transducer is divided into a set of narrow bands. A waveform is designed for each band...... the signal to noise ratio and simultaneously the penetration depth so that the medical doctor can image deeper lying structures. The method is tested both experimentally and in simulation and has also been evaluated for the purpose of blood flow estimation. The work presented is based on four papers which......

  14. Trinary signed-digit arithmetic using an efficient encoding scheme

    Science.gov (United States)

    Salim, W. Y.; Alam, M. S.; Fyath, R. S.; Ali, S. A.

    2000-09-01

    The trinary signed-digit (TSD) number system is of interest for ultrafast optoelectronic computing systems since it permits parallel carry-free addition and borrow-free subtraction of two arbitrary length numbers in constant time. In this paper, a simple coding scheme is proposed to encode the decimal number directly into the TSD form. The coding scheme enables one to perform parallel one-step TSD arithmetic operation. The proposed coding scheme uses only a 5-combination coding table instead of the 625-combination table reported recently for recoded TSD arithmetic technique.
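The TSD representation described above can be sketched as balanced ternary: radix-3 digits drawn from {-1, 0, 1}, which is what permits carry-free arithmetic. The conversion routine below is an illustrative assumption about the direct decimal-to-TSD encoding, not the paper's 5-combination coding table.

```python
# Sketch of a trinary signed-digit (TSD) encoding, assuming the balanced
# ternary convention: radix 3 with digits in {-1, 0, 1}.

def to_tsd(n):
    """Convert a decimal integer to a TSD digit list, least-significant
    digit first. A remainder of 2 is rewritten as 3 - 1 (emit -1, carry 1)."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:
            digits.append(-1)
            n = n // 3 + 1
        else:
            digits.append(r)
            n //= 3
    return digits

def from_tsd(digits):
    """Evaluate a TSD digit list back to a decimal integer."""
    return sum(d * 3**i for i, d in enumerate(digits))

# Round-trip check over positive and negative values.
for n in range(-40, 41):
    assert from_tsd(to_tsd(n)) == n
print(to_tsd(19))  # e.g. 19 = 1 + 0*3 - 9 + 27
```

Note that Python's floored `%` and `//` make the same loop work for negative inputs, which is one reason signed-digit systems handle subtraction as borrow-free addition of a negated operand.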

  15. Encoding of natural and artificial stimuli in the auditory midbrain

    Science.gov (United States)

    Lyzwa, Dominika

    How complex acoustic stimuli are encoded in the main center of convergence in the auditory midbrain is not clear. Here, the representation of neural spiking responses to natural and artificial sounds across this subcortical structure is investigated based on neurophysiological recordings from the mammalian midbrain. Neural and stimulus correlations of neuronal pairs are analyzed with respect to the neurons' distance, and responses to different natural communication sounds are discriminated. A model which includes linear and nonlinear neural response properties of this nucleus is presented and employed to predict temporal spiking responses to new sounds. Supported by BMBF Grant 01GQ0811.

  16. Dynamical Encoding by Networks of Competing Neuron Groups: Winnerless Competition

    International Nuclear Information System (INIS)

    Rabinovich, M.; Volkovskii, A.; Lecanda, P.; Huerta, R.; Abarbanel, H. D. I.; Laurent, G.

    2001-01-01

Following studies of olfactory processing in insects and fish, we investigate neural networks whose dynamics in phase space is represented by orbits near the heteroclinic connections between saddle regions (fixed points or limit cycles). These networks encode input information as trajectories along the heteroclinic connections. If there are N neurons in the network, the capacity is approximately e(N-1)!, i.e., much larger than that of most traditional network structures. We show that a small winnerless competition network composed of FitzHugh-Nagumo spiking neurons efficiently transforms input information into a spatiotemporal output.
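The factorial scaling of the e(N-1)! capacity estimate can be checked numerically. The comparison baseline of 2^N below is an assumption chosen purely for illustration (an upper bound on distinct binary activity patterns of N units), not a figure from the abstract.

```python
# Numerical look at the winnerless-competition capacity estimate e*(N-1)!
# versus 2^N, an illustrative stand-in for pattern counts in a
# traditional attractor network of the same size.

import math

for n in (5, 8, 10, 15):
    wlc_capacity = math.e * math.factorial(n - 1)
    baseline = 2 ** n
    print(f"N={n:2d}  e*(N-1)! = {wlc_capacity:.3g}   2^N = {baseline}")
```

Already at N = 10 the heteroclinic capacity (about 9.9e5) exceeds 2^10 = 1024 by nearly three orders of magnitude, and the gap widens factorially with N.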

  17. cDNA encoding a polypeptide including a hevein sequence

    Energy Technology Data Exchange (ETDEWEB)

    Raikhel, Natasha V. (Okemos, MI); Broekaert, Willem F. (Dilbeek, BE); Chua, Nam-Hai (Scarsdale, NY); Kush, Anil (New York, NY)

    1993-02-16

A cDNA clone (HEV1) encoding hevein was isolated via polymerase chain reaction (PCR) using mixed oligonucleotides corresponding to two regions of hevein as primers and a Hevea brasiliensis latex cDNA library as a template. HEV1 is 1018 nucleotides long and includes an open reading frame of 204 amino acids. The deduced amino acid sequence contains a pu... GOVERNMENT RIGHTS: This application was funded under Department of Energy Contract DE-AC02-76ER01338. The U.S. Government has certain rights under this application and any patent issuing thereon.

  18. Space-time encoding for high frame rate ultrasound imaging

    DEFF Research Database (Denmark)

    Misaridis, Thanssis; Jensen, Jørgen Arendt

    2002-01-01

    dynamically focused in both transmit and receive with only two firings. This reduces the problem of motion artifacts. The method has been tested with extensive simulations using Field II. Resolution and SNR are compared with uncoded STA imaging and conventional phased-array imaging. The range resolution...... remains the same for coded STA imaging with four emissions and is slightly degraded for STA imaging with two emissions due to the −55 dB cross-talk between the signals. The additional proposed temporal encoding adds more than 15 dB on the SNR gain, yielding a SNR at the same order as in phased-array...

  19. Advanced Encoding for Multilingual Access in a Terminological Data Base

    DEFF Research Database (Denmark)

    Leroyer, Patrick; L'Homme, Marie-Claude; Robichaud, Benoît

    2010-01-01

    This paper describes new functionalities implemented in a terminological database (TDB) in order to allow efficient editing of and access to multilingual data. The functionalities are original in the sense that they allow users of the database to retrieve the equivalents not only of headwords...... between equivalents can be established automatically. Examples are taken from the fields of computing and the Internet and focus on English and French. However, the model can easily be extended to other fields and languages provided that the data is available and encoded properly....

  20. Generation of Path-Encoded Greenberger-Horne-Zeilinger States

    Science.gov (United States)

    Bergamasco, N.; Menotti, M.; Sipe, J. E.; Liscidini, M.

    2017-11-01

    We study the generation of Greenberger-Horne-Zeilinger (GHZ) states of three path-encoded photons. Inspired by the seminal work of Bouwmeester et al. [Phys. Rev. Lett. 82, 1345 (1999), 10.1103/PhysRevLett.82.1345] on polarization-entangled GHZ states, we find a corresponding path representation for the photon states of an optical circuit, identify the elements required for the state generation, and propose a possible implementation of our strategy. Besides the practical advantage of employing an integrated system that can be fabricated with proven lithographic techniques, our example suggests that it is possible to enhance the generation efficiency by using microring resonators.

  1. On the number of encoder states for a type of RLL codes

    NARCIS (Netherlands)

    Cai, K.; Schouhamer Immink, K.A.

    2006-01-01

    The relationship between the number of encoder states and the probable size of certain runlength-limited (RLL) codes is derived analytically. By associating the number of encoder states with (generalized) Fibonacci numbers, the minimum number of encoder states is obtained, which maximizes the rate

  2. Design and implementation of parallel video encoding strategies using divisible load analysis

    NARCIS (Netherlands)

    Li, Ping; Veeravalli, Bharadwaj; Kassim, A.A.

    2005-01-01

    The processing time needed for motion estimation usually accounts for a significant part of the overall processing time of the video encoder. To improve the video encoding speed, reducing the execution time for motion estimation process is essential. Parallel implementation of video encoding systems

  3. Equal Learning Does Not Result in Equal Remembering: The Importance of Post-Encoding Processes

    Science.gov (United States)

    Bauer, Patricia J.; Guler, O. Evren; Starr, Rebecca M.; Pathman, Thanujeni

    2011-01-01

    Explanations of variability in long-term recall typically appeal to encoding and/or retrieval processes. However, for well over a century, it has been apparent that for memory traces to be stored successfully, they must undergo a post-encoding process of stabilization and integration. Variability in post-encoding processes is thus a potential…

  4. Less effort, better results: how does music act on prefrontal cortex in older adults during verbal encoding? An fNIRS study.

    Directory of Open Access Journals (Sweden)

    Laura eFerreri

    2014-05-01

Full Text Available Several neuroimaging studies of cognitive ageing revealed deficits in episodic memory abilities as a result of prefrontal cortex (PFC) limitations. Improving episodic memory performance despite PFC deficits is thus a critical issue in ageing research. Listening to music stimulates cognitive performance in several non-purely musical activities (e.g., language and memory). Thus, music could represent a rich and helpful source during verbal encoding and therefore help subsequent retrieval. Furthermore, such benefit could be reflected in less demand of PFC, which is known to be crucial for encoding processes. This study aimed to investigate whether music may improve episodic memory in older adults while decreasing the PFC activity. Sixteen healthy older adults (µ = 64.5 years) encoded lists of words presented with or without a musical background while their dorsolateral PFC (DLPFC) activity was monitored using an 8-channel continuous-wave near-infrared spectroscopy (NIRS) system (Oxymon Mk III, Artinis, The Netherlands). Behavioral results indicated a better source memory performance for words encoded with music compared to words encoded with silence (p < 0.05).

  5. Continuous downstream processing of biopharmaceuticals.

    Science.gov (United States)

    Jungbauer, Alois

    2013-08-01

    Continuous manufacturing has been applied in many different industries but has been pursued reluctantly in biotechnology where the batchwise process is still the standard. A shift to continuous operation can improve productivity of a process and substantially reduce the footprint. Continuous operation also allows robust purification of labile biomolecules. A full set of unit operations is available to design continuous downstream processing of biopharmaceuticals. Chromatography, the central unit operation, is most advanced in respect to continuous operation. Here, the problem of 'batch' definition has been solved. This has also paved the way for implementation of continuous downstream processing from a regulatory viewpoint. Economic pressure, flexibility, and parametric release considerations will be the driving force to implement continuous manufacturing strategies in future. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Cardiovascular change during encoding predicts the nonconscious mere exposure effect.

    Science.gov (United States)

    Ladd, Sandra L; Toscano, William B; Cowings, Patricia S; Gabrieli, John D E

    2014-01-01

    These studies examined memory encoding to determine whether the mere exposure effect could be categorized as a form of conceptual or perceptual implicit priming and, if it was not conceptual or perceptual, whether cardiovascular psychophysiology could reveal its nature. Experiment 1 examined the effects of study phase level of processing on recognition, the mere exposure effect, and word identification implicit priming. Deep relative to shallow processing improved recognition but did not influence the mere exposure effect for nonwords or word identification implicit priming for words. Experiments 2 and 3 examined the effect of study-test changes in font and orientation, respectively, on the mere exposure effect and word identification implicit priming. Different study-test font and orientation reduced word identification implicit priming but had no influence on the mere exposure effect. Experiments 4 and 5 developed and used, respectively, a cardiovascular psychophysiological implicit priming paradigm to examine whether stimulus-specific cardiovascular reactivity at study predicted the mere exposure effect at test. Blood volume pulse change at study was significantly greater for nonwords that were later preferred than for nonwords that were not preferred at test. There was no difference in blood volume pulse change for words at study that were later either identified or not identified at test. Fluency effects, at encoding or retrieval, are an unlikely explanation for these behavioral and cardiovascular findings. The relation of blood volume pulse to affect suggests that an affective process that is not conceptual or perceptual contributes to the mere exposure effect.

  7. Object recognition memory: neurobiological mechanisms of encoding, consolidation and retrieval.

    Science.gov (United States)

    Winters, Boyer D; Saksida, Lisa M; Bussey, Timothy J

    2008-07-01

    Tests of object recognition memory, or the judgment of the prior occurrence of an object, have made substantial contributions to our understanding of the nature and neurobiological underpinnings of mammalian memory. Only in recent years, however, have researchers begun to elucidate the specific brain areas and neural processes involved in object recognition memory. The present review considers some of this recent research, with an emphasis on studies addressing the neural bases of perirhinal cortex-dependent object recognition memory processes. We first briefly discuss operational definitions of object recognition and the common behavioural tests used to measure it in non-human primates and rodents. We then consider research from the non-human primate and rat literature examining the anatomical basis of object recognition memory in the delayed nonmatching-to-sample (DNMS) and spontaneous object recognition (SOR) tasks, respectively. The results of these studies overwhelmingly favor the view that perirhinal cortex (PRh) is a critical region for object recognition memory. We then discuss the involvement of PRh in the different stages--encoding, consolidation, and retrieval--of object recognition memory. Specifically, recent work in rats has indicated that neural activity in PRh contributes to object memory encoding, consolidation, and retrieval processes. Finally, we consider the pharmacological, cellular, and molecular factors that might play a part in PRh-mediated object recognition memory. Recent studies in rodents have begun to indicate the remarkable complexity of the neural substrates underlying this seemingly simple aspect of declarative memory.

  8. Encoder-decoder optimization for brain-computer interfaces.

    Science.gov (United States)

    Merel, Josh; Pianto, Donald M; Cunningham, John P; Paninski, Liam

    2015-06-01

Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme ("encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup which provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages.

  9. Controlled encoding strategies in memory tests in lithium patients.

    Science.gov (United States)

    Opgenoorth, E; Karlick-Bolten, E

    1986-03-01

    The "levels of processing" theory (Craik and Lockhart) and "dual coding" theory (Paivio) provide new aspects for clinical memory research work. Therefore, an incidental learning paradigm on the basis of these two theoretical approaches was chosen to test aspects of memory performances with lithium therapy. Results of two experiments, with controlled non-semantic processing (rating experiment "comparison of size") and additive semantic processing (rating "living--non-living") indicate a slight reduction in recall (Fig. 1) and recognition performance (Fig. 2) in lithium patients. Effects on encoding strategies are of equal quality in patients and healthy subjects (Tab. 1, 2) but performance differs between both groups: poorer systematic benefit from within code repetitions ("word-word" items, "picture-picture" items) and dual coding (repeated variable item presentation "picture-word") is obtained. The less efficient encoding strategies in the speeded task are discussed with respect to cognitive rigidity and slowing of performance by emotional states. This investigation of so-called "memory deficits" with lithium is an attempt to explore impairments at an early stage of processing; the characterization of the perceptual cognitive analysis seems useful for further clinical research work on this topic.

  10. Retention interval affects visual short-term memory encoding.

    Science.gov (United States)

    Bankó, Eva M; Vidnyánszky, Zoltán

    2010-03-01

    Humans can efficiently store fine-detailed facial emotional information in visual short-term memory for several seconds. However, an unresolved question is whether the same neural mechanisms underlie high-fidelity short-term memory for emotional expressions at different retention intervals. Here we show that retention interval affects the neural processes of short-term memory encoding using a delayed facial emotion discrimination task. The early sensory P100 component of the event-related potentials (ERP) was larger in the 1-s interstimulus interval (ISI) condition than in the 6-s ISI condition, whereas the face-specific N170 component was larger in the longer ISI condition. Furthermore, the memory-related late P3b component of the ERP responses was also modulated by retention interval: it was reduced in the 1-s ISI as compared with the 6-s condition. The present findings cannot be explained based on differences in sensory processing demands or overall task difficulty because there was no difference in the stimulus information and subjects' performance between the two different ISI conditions. These results reveal that encoding processes underlying high-precision short-term memory for facial emotional expressions are modulated depending on whether information has to be stored for one or for several seconds.

  11. Encoder-decoder optimization for brain-computer interfaces.

    Directory of Open Access Journals (Sweden)

    Josh Merel

    2015-06-01

    Full Text Available Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme ("encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup which provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages.
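
    The alternating structure of co-adaptation described above can be illustrated with a toy linear model (a hypothetical sketch, not the authors' framework): the user maps an intended cursor velocity x to neural activity y = E @ x via an encoding model E, the decoder estimates x_hat = D @ y, and each side in turn updates its map by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_dim = 20, 2

# Toy linear co-adaptation loop (illustrative only; E, D, X are
# hypothetical names, not from the paper).
X = rng.standard_normal((n_dim, 500))          # intended cursor velocities
E = rng.standard_normal((n_neurons, n_dim))    # user's encoding model
D = rng.standard_normal((n_dim, n_neurons))    # decoder

def readout_error(E, D, X):
    # Mean squared error between intended and decoded velocities.
    return np.mean((X - D @ E @ X) ** 2)

for _ in range(5):
    D = X @ np.linalg.pinv(E @ X)   # decoder step: fit D to the user's code
    E = np.linalg.pinv(D)           # user step: adapt so that D @ E ~ identity
```

With both maps free to adapt, the loop drives D @ E toward the identity, consistent with the abstract's point that an optimal fixed decoder plus user learning of a matched encoder already attains this optimum.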

  12. Novel Intermode Prediction Algorithm for High Efficiency Video Coding Encoder

    Directory of Open Access Journals (Sweden)

    Chan-seob Park

    2014-01-01

    Full Text Available The Joint Collaborative Team on Video Coding (JCT-VC) is developing the next-generation video coding standard, called High Efficiency Video Coding (HEVC). In HEVC, the block structure comprises three units: coding unit (CU), prediction unit (PU), and transform unit (TU). The CU is the basic unit of region splitting, like the macroblock (MB). Each CU is recursively split into four equal-sized blocks, starting from the tree block. In this paper, we propose a fast CU depth decision algorithm for HEVC to reduce its computational complexity. For the 2N×2N PU, the proposed method compares rate-distortion (RD) costs and uses this information to determine the depth. Moreover, to reduce encoding time, an efficient merge-SKIP detection method based on the contextual mode information of neighboring CUs is additionally developed. Experimental results show that the proposed algorithm achieves an average time saving of 44.84% in the random access (RA) Main profile configuration with the HEVC test model (HM) 10.0 reference software. Compared to the HM 10.0 encoder, a small BD-bitrate loss of 0.17% is also observed without significant loss of image quality.
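
    The RD-cost comparison that drives a CU split decision can be sketched with a toy quadtree (a hypothetical cost model for illustration, not the HM 10.0 implementation or the authors' algorithm): a block stays whole unless its four sub-blocks together have a lower rate-distortion cost.

```python
import numpy as np

def rd_cost(block, lam=0.1):
    # Toy RD cost: distortion of approximating the block by its mean,
    # plus a fixed hypothetical rate for signalling one CU.
    distortion = ((block - block.mean()) ** 2).sum()
    rate = 8.0
    return distortion + lam * rate

def best_depth(block, depth=0, max_depth=3):
    """Recursive CU-style split decision: return (cost, deepest depth used)."""
    whole = rd_cost(block)
    if depth == max_depth or block.shape[0] <= 4:
        return whole, depth
    h = block.shape[0] // 2
    subs = [block[:h, :h], block[:h, h:], block[h:, :h], block[h:, h:]]
    split_cost, split_depth = 0.0, depth
    for s in subs:
        c, d = best_depth(s, depth + 1, max_depth)
        split_cost += c
        split_depth = max(split_depth, d)
    if split_cost < whole:          # split only if the four children win
        return split_cost, split_depth
    return whole, depth
```

A flat block is cheapest coded whole (depth 0), while a block whose quadrants differ is cheaper to split; fast algorithms like the one in the abstract aim to reach the same decision while evaluating fewer candidates.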

  13. Alpha oscillations and early stages of visual encoding

    Directory of Open Access Journals (Sweden)

    Wolfgang eKlimesch

    2011-05-01

    Full Text Available For a long time alpha oscillations have been functionally linked to the processing of visual information. Here we propose a new theory about the functional meaning of alpha. The central idea is that synchronized alpha reflects a basic processing mode that controls access to information stored in a complex long-term memory system, which we term the knowledge system (KS) in order to emphasize that it comprises not only declarative memories but any kind of knowledge, including procedural information. Based on this theoretical background, we assume that during early stages of perception, alpha ‘directs the flow of information’ to those neural structures which represent information that is relevant for encoding. The physiological function of alpha is interpreted in terms of inhibition. We assume that alpha enables access to stored information by inhibiting task-irrelevant neuronal structures and by timing cortical activity in task-relevant neuronal structures. We discuss a variety of findings showing that evoked alpha and phase locking reflect successful encoding of global stimulus features in an early poststimulus interval of about 0-150 ms.

  14. How the visual brain encodes and keeps track of time.

    Science.gov (United States)

    Salvioni, Paolo; Murray, Micah M; Kalmbach, Lysiann; Bueti, Domenica

    2013-07-24

    Time is embedded in any sensory experience: the movements of a dance, the rhythm of a piece of music, the words of a speaker are all examples of temporally structured sensory events. In humans, if and how visual cortices perform temporal processing remains unclear. Here we show that both primary visual cortex (V1) and extrastriate area V5/MT are causally involved in encoding and keeping time in memory and that this involvement is independent from low-level visual processing. Most importantly we demonstrate that V1 and V5/MT come into play simultaneously and seem to be functionally linked during interval encoding, whereas they operate serially (V1 followed by V5/MT) and seem to be independent while maintaining temporal information in working memory. These data help to refine our knowledge of the functional properties of human visual cortex, highlighting the contribution and the temporal dynamics of V1 and V5/MT in the processing of the temporal aspects of visual information.

  15. Shot-Noise Limited Time-Encoded Raman Spectroscopy

    Directory of Open Access Journals (Sweden)

    Sebastian Karpf

    2017-01-01

    Full Text Available Raman scattering, an inelastic scattering mechanism, provides information about molecular excitation energies and can be used to identify chemical compounds. Although it is a powerful analysis tool, especially for label-free biomedical imaging with molecular contrast, it suffers from inherently low signal levels. This practical limitation can be overcome by nonlinear enhancement techniques like stimulated Raman scattering (SRS). In SRS, an additional light source stimulates the Raman scattering process. This can increase signal levels by orders of magnitude and hence allow faster acquisition in biomedical imaging. However, achieving broad spectral coverage in SRS is technically challenging and the signal is no longer background-free, as either stimulated Raman gain (SRG) or loss (SRL) is measured, turning a sensitivity limit into a dynamic range limit. Thus, the signal has to be isolated from the laser background light, requiring elaborate methods for minimizing detection noise. Here, we analyze the detection sensitivity of a shot-noise limited broadband stimulated time-encoded Raman (TICO-Raman) system in detail. In time-encoded Raman, a wavelength-swept Fourier domain mode locking (FDML) laser covers a broad range of Raman transition energies while allowing dual-balanced detection that lowers the detection noise to the fundamental shot-noise limit.
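
    The shot-noise limit discussed above can be made concrete with a back-of-the-envelope calculation (an idealized sketch assuming unit quantum efficiency, not the paper's full noise analysis): detected photon counts fluctuate as the square root of N, so a small stimulated Raman gain of relative size ΔP/P is detectable only when ΔP/P · N exceeds √N.

```python
import math

def shot_noise_limited_snr(power_w, wavelength_m, bandwidth_hz, modulation_depth):
    """Shot-noise-limited SNR of a small SRG signal riding on a strong
    laser background (idealized; all parameter values are hypothetical)."""
    h = 6.62607015e-34                      # Planck constant, J*s
    c = 2.99792458e8                        # speed of light, m/s
    photon_energy = h * c / wavelength_m
    photon_rate = power_w / photon_energy   # detected photons per second
    n = photon_rate / bandwidth_hz          # photons per resolution time
    signal = modulation_depth * n           # SRG photons
    noise = math.sqrt(n)                    # shot noise scales as sqrt(N)
    return signal / noise
```

Because signal scales with N and shot noise with √N, the SNR grows as the square root of the detected power, which is why dual-balanced detection that tolerates more power directly buys sensitivity.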

  16. Encoding audio motion: spatial impairment in early blind individuals

    Directory of Open Access Journals (Sweden)

    Sara eFinocchietti

    2015-09-01

    Full Text Available The consequence of blindness on auditory spatial localization has been an interesting issue of research in the last decade, providing mixed results. Enhanced auditory spatial skills in individuals with visual impairment have been reported by multiple studies, while some aspects of spatial hearing seem to be impaired in the absence of vision. In this study, the ability to encode the trajectory of a two-dimensional sound motion, reproducing the complete movement and reaching the correct end-point sound position, is evaluated in 12 early blind individuals, 8 late blind individuals, and 20 age-matched sighted blindfolded controls. Early blind individuals correctly determine the direction of the sound motion on the horizontal axis, but show a clear deficit in encoding the sound motion in the lower side of the plane. On the contrary, late blind individuals and blindfolded controls perform much better, with no deficit in the lower side of the plane. In fact, the mean localization error was 271 ± 10 mm for early blind individuals, 65 ± 4 mm for late blind individuals, and 68 ± 2 mm for sighted blindfolded controls. These results support the hypothesis that (i) there is a trade-off between the development of enhanced perceptual abilities and the role of vision in the sound localization abilities of early blind individuals, and (ii) visual information is fundamental in calibrating some aspects of the representation of auditory space in the brain.

  17. How does cognitive load influence speech perception? An encoding hypothesis.

    Science.gov (United States)

    Mitterer, Holger; Mattys, Sven L

    2017-01-01

    Two experiments investigated the conditions under which cognitive load exerts an effect on the acuity of speech perception. These experiments extend earlier research by using a different speech perception task (four-interval oddity task) and by implementing cognitive load through a task often thought to be modular, namely, face processing. In the cognitive-load conditions, participants were required to remember two faces presented before the speech stimuli. In Experiment 1, performance in the speech-perception task under cognitive load was not impaired in comparison to a no-load baseline condition. In Experiment 2, we modified the load condition minimally such that it required encoding of the two faces simultaneously with the speech stimuli. As a reference condition, we also used a visual search task that in earlier experiments had led to poorer speech perception. Both concurrent tasks led to decrements in the speech task. The results suggest that speech perception is affected even by loads thought to be processed modularly, and that, critically, encoding in working memory might be the locus of interference.

  18. Multi-Temporal Land Cover Classification with Sequential Recurrent Encoders

    Science.gov (United States)

    Rußwurm, Marc; Körner, Marco

    2018-03-01

    Earth observation (EO) sensors deliver data with daily or weekly temporal resolution. Most land use and land cover (LULC) approaches, however, expect cloud-free and mono-temporal observations. The increasing temporal capabilities of today's sensors enable the use of temporal features along with spectral and spatial ones. Domains such as speech recognition or neural machine translation work with inherently temporal data and, today, achieve impressive results using sequential encoder-decoder structures. Inspired by these sequence-to-sequence models, we adapt an encoder structure with convolutional recurrent layers in order to approximate a phenological model for vegetation classes based on a temporal sequence of Sentinel 2 (S2) images. In our experiments, we visualize internal activations over a sequence of cloudy and non-cloudy images and find several recurrent cells which reduce the input activity for cloudy observations. Hence, we assume that our network has learned cloud-filtering schemes solely from input data, which could alleviate the need for tedious cloud-filtering as a preprocessing step for many EO approaches. Moreover, using unfiltered temporal series of top-of-atmosphere (TOA) reflectance data, we achieved state-of-the-art classification accuracies in our experiments on a large number of crop classes with minimal preprocessing compared to other classification approaches.
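
    The recurrent encoder idea can be caricatured with a minimal, non-convolutional GRU over a per-pixel time series (a hypothetical sketch, far smaller than the paper's convolutional recurrent network): the update gate z is the mechanism by which a cell can leave its state unchanged for an uninformative (e.g. cloudy) observation.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUEncoder:
    """Minimal GRU encoding a temporal sequence of spectral vectors into a
    fixed-length state (toy stand-in for a convolutional recurrent encoder)."""

    def __init__(self, n_in, n_hidden):
        s = 0.1  # small random init; untrained weights, illustration only
        self.Wz = rng.normal(0, s, (n_hidden, n_in + n_hidden))  # update gate
        self.Wr = rng.normal(0, s, (n_hidden, n_in + n_hidden))  # reset gate
        self.Wh = rng.normal(0, s, (n_hidden, n_in + n_hidden))  # candidate

    def encode(self, sequence):
        h = np.zeros(self.Wz.shape[0])
        for x in sequence:                      # one spectral vector per date
            xh = np.concatenate([x, h])
            z = sigmoid(self.Wz @ xh)           # z near 0 keeps the old state,
            r = sigmoid(self.Wr @ xh)           # i.e. the cell can "skip" a date
            h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
            h = (1 - z) * h + z * h_tilde
        return h                                # fixed-length encoding
```

Training such gates end-to-end on labeled sequences is what could let cells learn to damp their input activity for cloudy dates, as the authors observe in their visualizations.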

  19. Environmental cycle of antibiotic resistance encoded genes: A systematic review

    Directory of Open Access Journals (Sweden)

    R. Ghanbari

    2017-12-01

    Full Text Available Antibiotic-resistant bacteria and genes enter the environment in different ways. The release of these factors into the environment has increased concerns related to public health. The aim of the study was to evaluate antibiotic resistance genes (ARGs) in environmental sources. In this systematic review, the data were extracted from valid sources of information including ScienceDirect, PubMed, Google Scholar and SID. Evaluation and selection of articles were conducted on the basis of the PRISMA checklist. A total of 39 articles were included in the study, chosen from a total of 1249 papers. The inclusion criterion was the identification, using the PCR technique, of genes encoding resistance against eight important groups of antibiotics in environmental sources including municipal and hospital wastewater treatment plants, animal and agricultural wastes, effluents from treatment plants, natural waters, sediments, and drinking waters. In this study, 113 genes encoding resistance to eight groups of antibiotics (beta-lactams, aminoglycosides, tetracyclines, macrolides, sulfonamides, chloramphenicol, glycopeptides and quinolones) were identified in various environments. Antibiotic resistance genes were found in all the investigated environments. The investigation of microorganisms carrying these genes shows that most bacteria, especially gram-negative bacteria, are effective in acquiring and disseminating these pollutants in the environment. Discharge of raw wastewater and of effluents from wastewater treatment plants is a major route for the dissemination of ARGs into the environment and can pose hazards to public health.

  20. QualityML: a dictionary for quality metadata encoding

    Science.gov (United States)

    Ninyerola, Miquel; Sevillano, Eva; Serral, Ivette; Pons, Xavier; Zabala, Alaitz; Bastin, Lucy; Masó, Joan

    2014-05-01

    The scenario of rapidly growing geodata catalogues requires tools that help users choose products. Having quality fields populated in metadata allows users to rank and then select the best fit-for-purpose products. In this direction, we have developed the QualityML (http://qualityml.geoviqua.org), a dictionary that contains hierarchically structured concepts to precisely define and relate quality levels: from quality classes to quality measurements. Generically, a quality element is the path that goes from the highest level (quality class) to the lowest levels (statistics or quality metrics). This path is used to encode the quality of datasets in the corresponding metadata schemas. The benefits of having encoded quality, in the case of data producers, are improved product discovery and better communication of product characteristics. In the case of data users, particularly decision-makers, quality and uncertainty measures help them make the best decisions as well as perform dataset intercomparison. It also allows other components (such as visualization, discovery, or comparison tools) to be quality-aware and interoperable. On one hand, the QualityML is a profile of the ISO geospatial metadata standards providing a set of rules, structured in six levels, for precisely documenting quality indicator parameters. On the other hand, QualityML includes semantics and vocabularies for the quality concepts. Whenever possible, it uses statistical expressions from the UncertML dictionary (http://www.uncertml.org). However, it also extends UncertML to provide a list of alternative metrics that are commonly used to quantify quality. A specific example, based on a temperature dataset, is shown below. The annual mean temperature map has been validated with independent in-situ measurements to obtain a global error of 0.5 °C. Level 0: Quality class (e.g., Thematic accuracy) Level 1: Quality indicator (e.g., Quantitative

  1. Integration of quantum key distribution and private classical communication through continuous variable

    Science.gov (United States)

    Wang, Tianyi; Gong, Feng; Lu, Anjiang; Zhang, Damin; Zhang, Zhengping

    2017-12-01

    In this paper, we propose a scheme that integrates quantum key distribution and private classical communication via continuous variables. The integrated scheme employs both quadratures of a weak coherent state, with encrypted bits encoded on the signs and Gaussian random numbers encoded on the values of the quadratures. The integration enables quantum and classical data to share the same physical and logical channel. Simulation results based on practical system parameters demonstrate that both classical communication and quantum communication can be implemented over distances of tens of kilometers, thus providing a potential solution for simultaneous transmission of quantum communication and classical communication.
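
    The sign/value split described above can be illustrated with a toy classical simulation (an idealized, noiseless sketch: real continuous-variable QKD requires Gaussian-modulated quadratures prepared and measured by homodyne detection, all of which this toy omits): each quadrature carries one encrypted bit on its sign and a Gaussian random value on its magnitude.

```python
import numpy as np

rng = np.random.default_rng(7)

def encode(bits, sigma=1.0):
    """Put one classical bit on the sign and a Gaussian key value on the
    magnitude of each quadrature amplitude (hypothetical toy model)."""
    gauss = np.abs(rng.normal(0.0, sigma, size=len(bits)))   # key material
    signs = np.where(np.asarray(bits) == 1, 1.0, -1.0)       # classical data
    return signs * gauss           # transmitted quadrature amplitudes

def decode(quadratures):
    bits = (quadratures > 0).astype(int)   # classical bits from the signs
    key_values = np.abs(quadratures)       # Gaussian values for the QKD layer
    return bits, key_values

bits = [1, 0, 1, 1, 0]
q = encode(bits)
recovered_bits, key = decode(q)
```

In the noiseless toy both streams are recovered exactly; the point of the scheme is that the two data types share one physical and logical channel.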

  2. Sibling bereavement and continuing bonds.

    Science.gov (United States)

    Packman, Wendy; Horsley, Heidi; Davies, Betty; Kramer, Robin

    2006-11-01

    Historically, from a Freudian and medical model perspective, emotional disengagement from the deceased was seen as essential to the successful adaptation of bereavement. A major shift in the bereavement literature has occurred, and it is now generally accepted that despite the permanence of physical separation, the bereaved remain involved and connected to the deceased and can be emotionally sustained through continuing bonds. The majority of the literature has focused on adults and on the nature of continuing bonds following the death of a spouse. In this article, the authors demonstrate how the continuing bonds concept applies to the sibling relationship. We describe the unique continued relationship formed by bereaved children and adolescents following a sibling loss, highlight the factors that influence siblings' expressions of continuing bonds, and offer clinical interventions. In our view, mental health professionals can play an important role in helping parents encourage activities that may facilitate the creation and maintenance of continuing bonds in their children.

  3. The Continued Assessment of Self-Continuity and Identity

    Science.gov (United States)

    Dunkel, Curtis S.; Minor, Leslie; Babineau, Maureen

    2010-01-01

    Studies have found that self-continuity is predictive of a substantial number of important outcome variables. However, a recent series of studies brings into question the traditional method of measuring self-continuity in favor of an alternative (B. M. Baird, K. Le, & R. E. Lucas, 2006). The present study represents a further comparison of…

  4. Beyond Continuous Delivery: An Empirical Investigation of Continuous Deployment Challenges

    DEFF Research Database (Denmark)

    Shahin, Mojtaba; Ali Babar, Muhammad; Zahedi, Mansooreh

    2017-01-01

    Context: A growing number of software organizations have been adopting Continuous DElivery (CDE) and Continuous Deployment (CD) practices. Researchers have started investing significant efforts in studying different aspects of CDE and CD. Many studies refer to CDE (i.e., where an application is p...

  5. Continuous exponential martingales and BMO

    CERN Document Server

    Kazamaki, Norihiko

    1994-01-01

    In three chapters on Exponential Martingales, BMO-martingales, and Exponential of BMO, this book explains in detail the beautiful properties of continuous exponential martingales that play an essential role in various questions concerning the absolute continuity of probability laws of stochastic processes. The second and principal aim is to provide a full report on the exciting results on BMO in the theory of exponential martingales. The reader is assumed to be familiar with the general theory of continuous martingales.

  6. An electrophysiological investigation of memory encoding, depth of processing, and word frequency in humans.

    Science.gov (United States)

    Guo, Chunyan; Zhu, Ying; Ding, Jinhong; Fan, Silu; Paller, Ken A

    2004-02-12

    Memory encoding can be studied by monitoring brain activity correlated with subsequent remembering. To understand brain potentials associated with encoding, we compared multiple factors known to affect encoding. Depth of processing was manipulated by requiring subjects to detect animal names (deep encoding) or boldface (shallow encoding) in a series of Chinese words. Recognition was more accurate with deep than shallow encoding, and for low- compared to high-frequency words. Potentials were generally more positive for subsequently recognized versus forgotten words; for deep compared to shallow processing; and, for remembered words only, for low- than for high-frequency words. Latency and topographic differences between these potentials suggested that several factors influence the effectiveness of encoding and can be distinguished using these methods, even with Chinese logographic symbols.

  7. Low Complexity Encoder of High Rate Irregular QC-LDPC Codes for Partial Response Channels

    Directory of Open Access Journals (Sweden)

    IMTAWIL, V.

    2011-11-01

    Full Text Available High rate irregular QC-LDPC codes based on circulant permutation matrices, designed for efficient encoder implementation, are proposed in this article. The structure of the code is an approximate lower triangular matrix. In addition, we present two novel efficient encoding techniques for generating the redundant bits. The complexity of the encoder implementation depends on the number of parity bits of the code for the one-stage encoding and on the length of the code for the two-stage encoding. The advantage of both encoding techniques is that few XOR gates are used in the encoder implementation. Simulation results on partial response channels also show that the BER performance of the proposed code is better than that of other QC-LDPC codes.
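
    Why circulant permutation matrices make the encoder cheap can be shown directly (a generic illustration of the QC-LDPC building block, not the authors' specific code construction): multiplying a length-z block by a z×z circulant permutation matrix over GF(2) is just a cyclic shift, so accumulating a parity block reduces to shifts and XORs.

```python
import numpy as np

def circulant_permutation(z, shift):
    """z-by-z circulant permutation matrix: the identity with its columns
    cyclically shifted, the building block of QC-LDPC parity-check matrices."""
    return np.roll(np.eye(z, dtype=int), shift, axis=1)

z = 5
shifts = [2, 4]                       # hypothetical base-matrix exponents
msg_blocks = [np.array([1, 0, 1, 0, 0]), np.array([0, 1, 1, 0, 1])]

# Parity block computed by explicit matrix arithmetic over GF(2) ...
parity_matrix = sum(m @ circulant_permutation(z, s)
                    for m, s in zip(msg_blocks, shifts)) % 2

# ... and the equivalent cheap shift-and-XOR form a hardware encoder uses.
parity_fast = np.zeros(z, dtype=int)
for m, s in zip(msg_blocks, shifts):
    parity_fast ^= np.roll(m, s)      # cyclic shift + XOR accumulate
```

Because every submatrix acts as a shift register, encoder complexity scales with the number of blocks touched rather than with dense matrix multiplications, which is the property the abstract's one- and two-stage techniques exploit.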

  8. Neutral details associated with emotional events are encoded: evidence from a cued recall paradigm.

    Science.gov (United States)

    Mickley Steinmetz, Katherine R; Knight, Aubrey G; Kensinger, Elizabeth A

    2016-11-01

    Enhanced emotional memory often comes at the cost of memory for surrounding background information. Narrowed-encoding theories suggest that this is due to narrowed attention to emotional information at encoding, leading to impaired encoding of background information. Recent work has suggested that an encoding-based theory may be insufficient. Here, we examined whether cued recall, instead of the previously used recognition memory tasks, would reveal evidence that non-emotional information associated with emotional information was effectively encoded. Participants encoded positive, negative, or neutral objects on neutral backgrounds. At retrieval, they were given either the item or the background as a memory cue and were asked to recall the associated scene element. Counter to narrowed-encoding theories, emotional items were more likely than neutral items to trigger recall of the associated background. This finding suggests that there is a memory trace of this contextual information and that emotional cues may facilitate retrieval of this information.

  9. Information transfer via implicit encoding with delay time modulation in a time-delay system

    Energy Technology Data Exchange (ETDEWEB)

    Kye, Won-Ho, E-mail: whkye@kipo.go.kr [Korean Intellectual Property Office, Government Complex Daejeon Building 4, 189, Cheongsa-ro, Seo-gu, Daejeon 302-701 (Korea, Republic of)

    2012-08-20

    A new encoding scheme for information transfer with modulated delay time in a time-delay system is proposed. In the scheme, the message is implicitly encoded into the modulated delay time. The information transfer rate as a function of encoding redundancy at various noise scales is presented, and the analysis shows that the implicit encoding scheme (IES) has stronger resistance against channel noise than the explicit encoding scheme (EES). In addition, its advantages in terms of secure communication and feasible applications are discussed. Highlights: We propose a new encoding scheme with delay time modulation. The message is implicitly encoded with the modulated delay time. The proposed scheme shows stronger resistance against channel noise.
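
    The flavor of implicit encoding via delay modulation can be reproduced in a toy simulation (a hypothetical, noiseless sketch; the paper's scheme and its redundancy/noise analysis are more involved): each bit selects the feedback delay of a time-delay map for a window of steps, and the receiver decodes by testing which candidate delay best predicts each window.

```python
import numpy as np

rng = np.random.default_rng(3)

def f(x):
    return 3.9 * x * (1.0 - x)       # chaotic logistic feedback (toy choice)

DELAYS = (5, 9)                       # hypothetical delays for bit 0 / bit 1
WINDOW = 40                           # steps transmitted per bit

def transmit(bits):
    """Implicitly encode each bit by switching the feedback delay of
    x[n] = f(x[n - tau]) for WINDOW steps."""
    x = list(rng.uniform(0.1, 0.9, max(DELAYS) + 1))   # random initial history
    for b in bits:
        tau = DELAYS[b]
        for _ in range(WINDOW):
            x.append(f(x[-tau]))
    return np.array(x)

def receive(x, n_bits):
    """Decode by checking which candidate delay predicts each window."""
    bits = []
    n0 = max(DELAYS) + 1
    for k in range(n_bits):
        start = n0 + k * WINDOW
        errs = [np.mean((x[start:start + WINDOW]
                         - f(x[start - tau:start + WINDOW - tau])) ** 2)
                for tau in DELAYS]
        bits.append(int(np.argmin(errs)))  # delay with zero prediction error
    return bits
```

The message never appears in the signal's amplitude, only in the delay that generated it, which is the sense in which the encoding is implicit.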

  10. The herpesvirus 8-encoded chemokine vMIP-II, but not the poxvirus-encoded chemokine MC148, inhibits the CCR10 receptor

    DEFF Research Database (Denmark)

    Lüttichau, H R; Lewis, I C; Gerstoft, J

    2001-01-01

    The viral chemokine antagonist vMIP-II, encoded by human herpesvirus 8 (HHV8), and MC148, encoded by the poxvirus Molluscum contagiosum, were tested against the newly identified chemokine receptor CCR10. As the CCR10 ligand ESkine/CCL27 had the highest identity to MC148 and because both

  11. Cloning of cDNA encoding steroid 11β-hydroxylase (P450c11)

    International Nuclear Information System (INIS)

    Chua, S.C.; Szabo, P.; Vitek, A.; Grzeschik, K.H.; John, M.; White, P.C.

    1987-01-01

    The authors have isolated bovine and human adrenal cDNA clones encoding the adrenal cytochrome P-450 specific for 11β-hydroxylation (P450c11). A bovine adrenal cDNA library constructed in the bacteriophage λ vector gt10 was probed with a previously isolated cDNA clone corresponding to part of the 3' untranslated region of the 4.2-kilobase (kb) mRNA encoding P450c11. Several clones with 3.2-kb cDNA inserts were isolated. Sequence analysis showed that they overlapped the original probe by 300 base pairs (bp). Combined cDNA and RNA sequence data demonstrated a continuous open reading frame of 1509 bases. P450c11 is predicted to contain 479 amino acid residues in the mature protein in addition to a 24-residue amino-terminal mitochondrial signal sequence. A bovine clone was used to isolate a homologous clone with a 3.5-kb insert from a human adrenal cDNA library. A region of 1100 bp was 81% homologous to 769 bp of the coding sequence of the bovine cDNA except for a 400-bp segment presumed to be an unprocessed intron. Hybridization of the human cDNA to DNA from a panel of human-rodent somatic cell hybrid lines and in situ hybridization to metaphase spreads of human chromosomes localized the gene to the middle of the long arm of chromosome 8. These data should be useful in developing reagents for heterozygote detection and prenatal diagnosis of 11β-hydroxylase deficiency, the second most frequent cause of congenital adrenal hyperplasia

  12. A long HBV transcript encoding pX is inefficiently exported from the nucleus

    International Nuclear Information System (INIS)

    Doitsh, Gilad; Shaul, Yosef

    2003-01-01

    The longest hepatitis B virus transcript is a 3.9-kb mRNA whose function has remained unclear. In this study, we wished to identify the translation products and physiological role of this viral transcript. This transcript initiates from the X promoter region, ignoring the inefficient and noncanonical viral polyadenylation signal during the first round of transcription. However, an HBV mutant with a canonical polyadenylation signal continues, though with lower efficiency, to program the synthesis of this long transcript, indicating that the noncanonical HBV polyadenylation signal is important but not essential for transcription of the 3.9-kb species. The 3.9-kb RNA contains two copies of the X open reading frame (ORF). The X ORF at the 5'-end is positioned upstream of the CORE gene. By generating an HBV DNA mutant in which the X and Core ORFs are fused, we demonstrated the production of a 40-kDa X-Core fusion protein that must be encoded by the 3.9-kb transcript. Mutagenesis studies revealed that the production of this protein depends on the 5' X ORF ATG, suggesting that the 3.9-kb RNA is active in translation of the X ORF. Based on these features, the 3.9-kb transcript was designated lxRNA, for long X RNA. Unlike other HBV transcripts, lxRNA harbors two copies of PRE, the posttranscriptional regulatory element that controls the nuclear export of HBV mRNAs. Unexpectedly, despite the presence of PRE sequences, RNA fractionation analysis revealed that lxRNA barely accumulates in the cytoplasm, suggesting that nuclear export of lxRNA is poor. Collectively, our data suggest that two distinct HBV mRNA species encode pX and that HBV transcripts are differentially regulated at the level of nuclear export.

  13. Continuous acoustic emission from aluminium

    International Nuclear Information System (INIS)

    Fenici, P.; Kiesewetter, N.; Schiller, P.

    1976-01-01

    Continuous acoustic emission of aluminium single crystals and polycrystals during tensile tests at constant cross-head speed and at room temperature is measured with a root-mean-square level recorder. By means of the Kaiser effect it is shown that the continuous emission is related to the plastic deformation. The plot of continuous emission against strain takes different shapes for pure single crystals, pure polycrystals and impure polycrystals. The measured voltages have about the same value for pure single crystals and polycrystals and are considerably greater than those for impure polycrystals. A method is developed to distinguish between continuous emission and burst emission.

  14. The effect of encoding strategy on the neural correlates of memory for faces.

    Science.gov (United States)

    Bernstein, Lori J; Beig, Sania; Siegenthaler, Amy L; Grady, Cheryl L

    2002-01-01

    Encoding and recognition of unfamiliar faces in young adults were examined using positron emission tomography to determine whether different encoding strategies would lead to encoding/retrieval differences in brain activity. Three types of encoding were compared: a 'deep' task (judging pleasantness/unpleasantness), a 'shallow' task (judging right/left orientation), and an intentional learning task in which subjects were instructed to learn the faces for a subsequent memory test but were not provided with a specific strategy. Memory for all faces was tested with an old/new recognition test. A modest behavioral effect was obtained, with deeply-encoded faces being recognized more accurately than shallowly-encoded or intentionally-learned faces. Regardless of encoding strategy, encoding activated a primarily ventral system including bilateral temporal and fusiform regions and left prefrontal cortices, whereas recognition activated a primarily dorsal set of regions including right prefrontal and parietal areas. Within encoding, the type of strategy produced different brain activity patterns, with deep encoding being characterized by left amygdala and left anterior cingulate activation. There was no effect of encoding strategy on brain activity during the recognition conditions. Posterior fusiform gyrus activation was related to better recognition accuracy in those conditions encouraging perceptual strategies, whereas activity in left frontal and temporal areas correlated with better performance during the 'deep' condition. Results highlight three important aspects of face memory: (1) the effect of encoding strategy was seen only at encoding and not at recognition; (2) left inferior prefrontal cortex was engaged during encoding of faces regardless of strategy; and (3) differential activity in fusiform gyrus was found, suggesting that activity in this area is not only a result of automatic face processing but is modulated by controlled processes.

  15. Properties of virion transactivator proteins encoded by primate cytomegaloviruses

    Directory of Open Access Journals (Sweden)

    Barry Peter A

    2009-05-01

Background: Human cytomegalovirus (HCMV) is a betaherpesvirus that causes severe disease in situations where the immune system is immature or compromised. HCMV immediate early (IE) gene expression is stimulated by the virion phosphoprotein pp71, encoded by open reading frame (ORF) UL82, and this transactivation activity is important for the efficient initiation of viral replication. It is currently recognized that pp71 acts to overcome cellular intrinsic defences that otherwise block viral IE gene expression, and that interactions of pp71 with the cell proteins Daxx and ATRX are important for this function. A further property of pp71 is the ability to enable prolonged gene expression from quiescent herpes simplex virus type 1 (HSV-1) genomes. Non-human primate cytomegaloviruses encode homologs of pp71, but there is currently no published information that addresses their effects on gene expression and modes of action. Results: The UL82 homolog encoded by simian cytomegalovirus (SCMV), strain Colburn, was identified and cloned. This ORF, named S82, was cloned into an HSV-1 vector, as were those from baboon, rhesus monkey and chimpanzee cytomegaloviruses. The use of an HSV-1 vector enabled expression of the UL82 homologs in a range of cell types, and permitted investigation of their abilities to direct prolonged gene expression from quiescent genomes. The results show that all UL82 homologs activate gene expression, and that neither host cell type nor promoter target sequence has major effects on these activities. Surprisingly, the UL82 proteins specified by non-human primate cytomegaloviruses, unlike pp71, did not direct long term expression from quiescent HSV-1 genomes. In addition, significant differences were observed in the intranuclear localization of the UL82 homologs, and in their effects on Daxx. Strikingly, S82 mediated the release of Daxx from nuclear domain 10 substructures much more rapidly than pp71 or the other proteins tested.

  16. Are animacy effects in episodic memory independent of encoding instructions?

    Science.gov (United States)

    Gelin, Margaux; Bugaiska, Aurélia; Méot, Alain; Bonin, Patrick

    2017-01-01

The adaptive view of human memory [Nairne, J. S. 2010. Adaptive memory: Evolutionary constraints on remembering. In B. H. Ross (Ed.), The psychology of learning and motivation (Vol. 53, pp. 1-32). Burlington: Academic Press; Nairne, J. S., & Pandeirada, J. N. S. 2010a. Adaptive memory: Ancestral priorities and the mnemonic value of survival processing. Cognitive Psychology, 61, 1-22, 2010b; Memory functions. In The Corsini encyclopedia of psychology and behavioral science, (Vol. 3, 4th ed., pp. 977-979). Hoboken, NJ: John Wiley & Sons] assumes that animates (e.g., baby, rabbit presented as words or pictures) are better remembered than inanimates (e.g., bottle, mountain) because animates are more important for fitness than inanimates. In four studies, we investigated whether the animacy effect in episodic memory (i.e., the better remembering of animates over inanimates) is independent of encoding instructions. Using both a factorial (Studies 1 and 3) and a multiple regression approach (Study 2), three studies tested whether certain contexts drive people to attend to inanimate more than to animate things (or the reverse), and therefore lead to differential animacy effects. The findings showed that animacy effects on recall performance were observed in the grassland-survival scenario used by Nairne, Thompson, and Pandeirada (2007. Adaptive memory: Survival processing enhances retention. Journal of Experimental Psychology: Learning, Memory, & Cognition, 33, 263-273) (Studies 1-3), when words were rated for their pleasantness (Study 2), and in explicit learning (Study 3). In the non-survival scenario of moving to a foreign land (Studies 1-2), animacy effects on recall rates were not reliable in Study 1, but were significant in Study 2, whereas these effects were reliable in the non-survival scenario of planning a trip as a tour guide (Study 3). A final (control) study (Study 4) was conducted to test specifically whether animacy effects are related to the more organised

  17. Constructing LDPC Codes from Loop-Free Encoding Modules

    Science.gov (United States)

    Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher; Thorpe, Jeremy; Andrews, Kenneth

    2009-01-01

A method of constructing certain low-density parity-check (LDPC) codes by use of relatively simple loop-free coding modules has been developed. The subclasses of LDPC codes to which the method applies include accumulate-repeat-accumulate (ARA) codes, accumulate-repeat-check-accumulate codes, and the codes described in "Accumulate-Repeat-Accumulate-Accumulate Codes" (NPO-41305), NASA Tech Briefs, Vol. 31, No. 9 (September 2007), page 90. All of the affected codes can be characterized as serial/parallel (hybrid) concatenations of such relatively simple modules as accumulators, repetition codes, differentiators, and punctured single-parity check codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. These codes can also be characterized as hybrid turbolike codes that have projected graph or protograph representations (for example see figure); these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms. The present method comprises two related submethods for constructing LDPC codes from simple loop-free modules with circulant permutations. The first submethod is an iterative encoding method based on the erasure-decoding algorithm. The computations required by this method are well organized because they involve a parity-check matrix having a block-circulant structure. The second submethod involves the use of block-circulant generator matrices. The encoders of this method are very similar to those of recursive convolutional codes. Some encoders according to this second submethod have been implemented in a small field-programmable gate array that operates at a speed of 100 megasymbols per second. By use of density evolution (a computational-simulation technique for analyzing performances of LDPC codes), it has been shown through some examples that as the block size goes to infinity, low iterative decoding thresholds close to
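The accumulator named above is the simplest of these loop-free modules: its output is just a running XOR of its input, which is what keeps encoding cheap. As an illustration only (this is a generic systematic repeat-accumulate sketch, not NASA's submethod; the repetition factor and identity interleaver are arbitrary choices), the module chain "repeat, permute, accumulate" looks like:

```python
import numpy as np

def repeat(bits, q):
    """Repetition module: repeat each information bit q times."""
    return np.repeat(bits, q)

def accumulate(bits):
    """Accumulator module: running XOR, i.e. a rate-1 1/(1+D) encoder."""
    return np.bitwise_xor.accumulate(bits)

def ra_encode(info_bits, q=3, interleaver=None):
    """Systematic repeat-accumulate encoding: repeat, permute, accumulate.

    The parity bits (accumulator output) are appended to the
    information bits to form the codeword.
    """
    info_bits = np.asarray(info_bits, dtype=np.uint8)
    rep = repeat(info_bits, q)
    if interleaver is None:              # identity permutation for the sketch
        interleaver = np.arange(rep.size)
    parity = accumulate(rep[interleaver])
    return np.concatenate([info_bits, parity])
```

Because every module is loop-free, encoding is a single forward pass; the block-circulant structure in the article is what additionally organizes these operations for hardware.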

  18. Vibrio Phage KVP40 Encodes a Functional NAD+ Salvage Pathway.

    Science.gov (United States)

    Lee, Jae Yun; Li, Zhiqun; Miller, Eric S

    2017-05-01

The genome of T4-type Vibrio bacteriophage KVP40 has five genes predicted to encode proteins of pyridine nucleotide metabolism, of which two, nadV and natV, would suffice for an NAD+ salvage pathway. NadV is an apparent nicotinamide phosphoribosyltransferase (NAmPRTase), and NatV is an apparent bifunctional nicotinamide mononucleotide adenylyltransferase (NMNATase) and nicotinamide-adenine dinucleotide pyrophosphatase (Nudix hydrolase). Genes encoding the predicted salvage pathway were cloned and expressed in Escherichia coli, the proteins were purified, and their enzymatic properties were examined. KVP40 NadV NAmPRTase is active in vitro, and a clone complements a Salmonella mutant defective in both the bacterial de novo and salvage pathways. Similar to other NAmPRTases, the KVP40 enzyme displayed ATPase activity indicative of energy coupling in the reaction mechanism. The NatV NMNATase activity was measured in a coupled reaction system demonstrating NAD+ biosynthesis from nicotinamide, phosphoribosyl pyrophosphate, and ATP. The NatV Nudix hydrolase domain was also shown to be active, with preferred substrates of ADP-ribose, NAD+, and NADH. Expression analysis using reverse transcription-quantitative PCR (qRT-PCR) and enzyme assays of infected Vibrio parahaemolyticus cells demonstrated nadV and natV transcription during the early and delayed-early periods of infection when other KVP40 genes of nucleotide precursor metabolism are expressed. The distribution and phylogeny of NadV and NatV proteins among several large double-stranded DNA (dsDNA) myophages, and also those from some very large siphophages, suggest broad relevance of pyridine nucleotide scavenging in virus-infected cells. NAD+ biosynthesis presents another important metabolic resource control point for large, rapidly replicating dsDNA bacteriophages. IMPORTANCE T4-type bacteriophages enhance DNA precursor synthesis through reductive reactions that use NADH/NADPH as the electron donor and NAD

  19. Universal Quantum Computing with Measurement-Induced Continuous-Variable Gate Sequence in a Loop-Based Architecture.

    Science.gov (United States)

    Takeda, Shuntaro; Furusawa, Akira

    2017-09-22

    We propose a scalable scheme for optical quantum computing using measurement-induced continuous-variable quantum gates in a loop-based architecture. Here, time-bin-encoded quantum information in a single spatial mode is deterministically processed in a nested loop by an electrically programmable gate sequence. This architecture can process any input state and an arbitrary number of modes with almost minimum resources, and offers a universal gate set for both qubits and continuous variables. Furthermore, quantum computing can be performed fault tolerantly by a known scheme for encoding a qubit in an infinite-dimensional Hilbert space of a single light mode.

  20. Continuous-Variable Quantum Computation of Oracle Decision Problems

    Science.gov (United States)

    Adcock, Mark R. A.

Quantum information processing is appealing due to its ability to solve certain problems quantitatively faster than classical information processing. Most quantum algorithms have been studied in discretely parameterized systems, but many quantum systems are continuously parameterized. The field of quantum optics in particular has sophisticated techniques for manipulating continuously parameterized quantum states of light, but the lack of a code-state formalism has hindered the study of quantum algorithms in these systems. To address this situation, a code-state formalism for the solution of oracle decision problems in continuously parameterized quantum systems is developed. In the infinite-dimensional case, we study continuous-variable quantum algorithms for the solution of the Deutsch-Jozsa oracle decision problem implemented within a single harmonic oscillator. Orthogonal states are used as the computational bases, and we show that, contrary to a previous claim in the literature, this implementation of quantum information processing has limitations due to a position-momentum trade-off of the Fourier transform. We further demonstrate that orthogonal encoding bases are not unique, and using the coherent states of the harmonic oscillator as the computational bases, our formalism enables quantifying

  1. The neural encoding of guesses in the human brain.

    Science.gov (United States)

    Bode, Stefan; Bogler, Carsten; Soon, Chun Siong; Haynes, John-Dylan

    2012-01-16

Human perception depends heavily on the quality of sensory information. When objects are hard to see we often believe ourselves to be purely guessing. Here we investigated whether such guesses use brain networks involved in perceptual decision making or independent networks. We used a combination of fMRI and pattern classification to test how visibility affects the signals that determine choices. We found that decisions regarding clearly visible objects are predicted by signals in sensory brain regions, whereas different regions in parietal cortex became predictive when subjects were shown invisible objects and believed themselves to be purely guessing. This parietal network overlapped strongly with regions that have previously been shown to encode free decisions. Thus, the brain might use a dedicated network for determining choices when insufficient sensory information is available. Copyright © 2011 Elsevier Inc. All rights reserved.

  2. Expression analysis of a Cucurbita cDNA encoding endonuclease

    International Nuclear Information System (INIS)

    Szopa, J.

    1995-01-01

    The nuclear matrices of plant cell nuclei display intrinsic nuclease activity which consists in nicking supercoiled DNA. A cDNA encoding a 32 kDa endonuclease has been cloned and sequenced. The nucleotide and deduced amino-acid sequences show high homology to known 14-3-3-protein sequences from other sources. The amino-acid sequence shows agreement with consensus sequences for potential phosphorylation by protein kinase A and C and for calcium, lipid and membrane-binding sites. The nucleotide-binding site is also present within the conserved part of the sequence. By Northern blot analysis, the differential expression of the corresponding mRNA was detected; it was the strongest in sink tissues. The endonuclease activity found on DNA-polyacrylamide gel electrophoresis coincided with mRNA content and was the highest in tuber. (author). 22 refs, 6 figs

  3. Bacteriophages encode factors required for protection in a symbiotic mutualism.

    Science.gov (United States)

    Oliver, Kerry M; Degnan, Patrick H; Hunter, Martha S; Moran, Nancy A

    2009-08-21

    Bacteriophages are known to carry key virulence factors for pathogenic bacteria, but their roles in symbiotic bacteria are less well understood. The heritable symbiont Hamiltonella defensa protects the aphid Acyrthosiphon pisum from attack by the parasitoid Aphidius ervi by killing developing wasp larvae. In a controlled genetic background, we show that a toxin-encoding bacteriophage is required to produce the protective phenotype. Phage loss occurs repeatedly in laboratory-held H. defensa-infected aphid clonal lines, resulting in increased susceptibility to parasitism in each instance. Our results show that these mobile genetic elements can endow a bacterial symbiont with benefits that extend to the animal host. Thus, phages vector ecologically important traits, such as defense against parasitoids, within and among symbiont and animal host lineages.

  4. Cloning of Salmonella typhimurium DNA encoding mutagenic DNA repair

    International Nuclear Information System (INIS)

    Thomas, S.M.; Sedgwick, S.G.

    1989-01-01

    Mutagenic DNA repair in Escherichia coli is encoded by the umuDC operon. Salmonella typhimurium DNA which has homology with E. coli umuC and is able to complement E. coli umuC122::Tn5 and umuC36 mutations has been cloned. Complementation of umuD44 mutants and hybridization with E. coli umuD also occurred, but these activities were much weaker than with umuC. Restriction enzyme mapping indicated that the composition of the cloned fragment is different from the E. coli umuDC operon. Therefore, a umu-like function of S. typhimurium has been found; the phenotype of this function is weaker than that of its E. coli counterpart, which is consistent with the weak mutagenic response of S. typhimurium to UV compared with the response in E. coli

  5. Rapid Automatic Motor Encoding of Competing Reach Options

    Directory of Open Access Journals (Sweden)

    Jason P. Gallivan

    2017-02-01

Mounting neural evidence suggests that, in situations in which there are multiple potential targets for action, the brain prepares, in parallel, competing movements associated with these targets, prior to implementing one of them. Central to this interpretation is the idea that competing viewed targets, prior to selection, are rapidly and automatically transformed into corresponding motor representations. Here, by applying target-specific, gradual visuomotor rotations and dissociating, unbeknownst to participants, the visual direction of potential targets from the direction of the movements required to reach the same targets, we provide direct evidence for this provocative idea. Our results offer strong empirical support for theories suggesting that competing action options are automatically represented in terms of the movements required to attain them. The rapid motor encoding of potential targets may support the fast optimization of motor costs under conditions of target uncertainty and allow the motor system to inform decisions about target selection.

  6. Designing waveforms for temporal encoding using a frequency sampling method

    DEFF Research Database (Denmark)

    Gran, Fredrik; Jensen, Jørgen Arendt

    2007-01-01

In this paper a method for designing waveforms for temporal encoding in medical ultrasound imaging is described. The method is based on least squares optimization and is used to design nonlinear frequency modulated signals for synthetic transmit aperture imaging. By using the proposed design method... the resulting waveform was compared to a linear frequency modulated signal with amplitude tapering, previously used in clinical studies for synthetic transmit aperture imaging. The latter had a relatively flat spectrum which implied that the waveform tried to excite all frequencies including ones with low amplification. The proposed waveform, on the other hand, was designed so that only frequencies where the transducer had a large amplification were excited. Hereby, unnecessary heating of the transducer could be avoided and the signal-to-noise ratio could be increased. The experimental ultrasound scanner RASMUS was used to evaluate...
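The frequency-sampling idea behind this record can be shown in a few lines: specify a desired amplitude at a set of DFT bins (for example, the transducer's passband, so bins with low amplification are simply not excited) and inverse-transform to obtain the time-domain excitation. This is only a sketch of the general frequency-sampling technique with made-up parameters, not the paper's least-squares design:

```python
import numpy as np

def design_waveform(target_mag, n_samples):
    """Frequency-sampling waveform design sketch.

    target_mag: desired (real, non-negative) amplitude at the positive
    DFT bins, e.g. the transducer passband. Bins left at zero are not
    excited, avoiding frequencies with low amplification.
    """
    spectrum = np.asarray(target_mag, dtype=complex)
    # Inverse real FFT gives a real excitation whose spectrum matches
    # target_mag exactly at the sampled frequencies.
    waveform = np.fft.irfft(spectrum, n=n_samples)
    return np.fft.fftshift(waveform)  # center the pulse in the window

# Example: excite only a band around bins 3-5 of 9 positive bins.
passband = np.array([0, 0, 0, 0.5, 1.0, 0.5, 0, 0, 0])
pulse = design_waveform(passband, n_samples=16)
```

A least-squares variant, as in the abstract, would instead fit the waveform samples to the target spectrum subject to additional constraints (e.g. amplitude limits), but the sampled-spectrum starting point is the same.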

  7. Synthesis and nanoscale thermal encoding of phase-change nanowires

    International Nuclear Information System (INIS)

    Sun Xuhui; Yu Bin; Meyyappan, M.

    2007-01-01

Low-dimensional phase-change nanostructures provide a valuable research platform for understanding the phase-transition behavior and thermal properties at nanoscale and their potential in achieving superdense data storage. Ge2Sb2Te5 nanowires have been grown using a vapor-liquid-solid technique and shown to exhibit distinctive properties that may overcome the present data storage scaling barrier. Local heating of an individual nanowire with a focused electron beam was used to shape a nano-bar-code on a Ge2Sb2Te5 nanowire. The data encoding on Ge2Sb2Te5 nanowires may promote novel device concepts to implement ultrahigh density, low energy, high speed data storage using phase-change nanomaterials with diverse thermal-programming strategies

  8. Feature-specific encoding flexibility in visual working memory.

    Directory of Open Access Journals (Sweden)

    Aki Kondo

The current study examined selective encoding in visual working memory by systematically investigating interference from task-irrelevant features. The stimuli were objects defined by three features (color, shape, and location), and during a delay period, any of the features could switch between two objects. Additionally, single- and whole-probe trials were randomized within experimental blocks to investigate effects of memory retrieval. A series of relevant-feature switch detection tasks, where one feature was task-irrelevant, showed that interference from the task-irrelevant feature was only observed in the color-shape task, suggesting that color and shape information could be successfully filtered out, but location information could not, even when location was a task-irrelevant feature. Therefore, although location information is added to object representations independent of task demands in a relatively automatic manner, other features (e.g., color, shape) can be flexibly added to object representations.

  9. Acquiring, encoding, and re-using clinical knowledge in PRODIGY

    Directory of Open Access Journals (Sweden)

    Richard Hall

    2002-11-01

The development, implementation and maintenance of computer-executable clinical guidelines encompass a series of complex processes. As they are often performed by more than one organisation, this introduces further complexity. Within the PRODIGY project we attempt to control as many aspects of the process as possible, in order to increase the likelihood of achieving success. To illustrate the complexity of the process and many of the inherent problems and solutions, this paper describes the evolution of the PRODIGY knowledge base, describing the steps from acquiring knowledge, through encoding, to the execution of guidelines, and 'closing the loop' by discussing an approach to knowledge re-use. We will also consider some of the wider implications of our work and propose directions for future research and development activities.

  10. Multiple genes encode the major surface glycoprotein of Pneumocystis carinii

    DEFF Research Database (Denmark)

    Kovacs, J A; Powell, F; Edman, J C

    1993-01-01

The major surface antigen of Pneumocystis carinii, a life-threatening opportunistic pathogen in human immunodeficiency virus-infected patients, is an abundant glycoprotein that functions in host-organism interactions. A monoclonal antibody to this antigen is protective in animals, and thus... blot studies using chromosomal or restricted DNA, the major surface glycoproteins are the products of a multicopy family of genes. The predicted protein has an M(r) of approximately 123,000, is relatively rich in cysteine residues (5.5%) that are very strongly conserved, and contains a well conserved hydrophobic region at the carboxyl terminus. The presence of multiple related msg genes encoding the major surface glycoprotein of P. carinii suggests that antigenic variation is a possible mechanism for evading host defenses. Further characterization of this family of genes should allow the development...

  11. Genetically encoded probes for NAD+/NADH monitoring.

    Science.gov (United States)

    Bilan, Dmitry S; Belousov, Vsevolod V

    2016-11-01

NAD+ and NADH participate in many metabolic reactions. The NAD+/NADH ratio is an important parameter reflecting the general metabolic and redox state of different types of cells. For a long time, in situ and in vivo NAD+/NADH monitoring has been hampered by the lack of suitable tools. The recent development of genetically encoded indicators based on fluorescent proteins linked to specific nucleotide-binding domains has already helped to address this monitoring problem. In this review, we will focus on four available indicators: Peredox, Frex family probes, RexYFP and SoNar. Each indicator has advantages and limitations. We will also discuss the most important points that should be considered when selecting a suitable indicator for certain experimental conditions. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Optical demodulation system for digitally encoded suspension array in fluoroimmunoassay

    Science.gov (United States)

    He, Qinghua; Li, Dongmei; He, Yonghong; Guan, Tian; Zhang, Yilong; Shen, Zhiyuan; Chen, Xuejing; Liu, Siyu; Lu, Bangrong; Ji, Yanhong

    2017-09-01

A laser-induced breakdown spectroscopy and fluorescence spectroscopy-coupled optical system is reported for demodulating a digitally encoded suspension array in fluoroimmunoassay. It takes advantage of the plasma emissions of assembled elemental materials to digitally decode the suspension array, providing more stable and accurate recognition of target biomolecules. By separating the decoding of the suspension array and the calculation of biomolecule adsorption quantity into two independent channels, the cross talk between decoding and label signals in traditional methods is avoided, which improves the accuracy of both processes and enables more sensitive quantitative detection of target biomolecules. We carried out a multiplexed detection of several types of anti-IgG to verify the quantitative analysis performance of the system. A limit of detection of 1.48×10⁻¹⁰ M was achieved, demonstrating the sensitivity of the optical demodulation system.

  13. Feature-specific encoding flexibility in visual working memory.

    Science.gov (United States)

    Kondo, Aki; Saiki, Jun

    2012-01-01

    The current study examined selective encoding in visual working memory by systematically investigating interference from task-irrelevant features. The stimuli were objects defined by three features (color, shape, and location), and during a delay period, any of the features could switch between two objects. Additionally, single- and whole-probe trials were randomized within experimental blocks to investigate effects of memory retrieval. A series of relevant-feature switch detection tasks, where one feature was task-irrelevant, showed that interference from the task-irrelevant feature was only observed in the color-shape task, suggesting that color and shape information could be successfully filtered out, but location information could not, even when location was a task-irrelevant feature. Therefore, although location information is added to object representations independent of task demands in a relatively automatic manner, other features (e.g., color, shape) can be flexibly added to object representations.

  14. Coupled generative adversarial stacked Auto-encoder: CoGASA.

    Science.gov (United States)

    Kiasari, Mohammad Ahangar; Moirangthem, Dennis Singh; Lee, Minho

    2018-04-01

Coupled Generative Adversarial Network (CoGAN) was recently introduced in order to model a joint distribution of a multimodal dataset. The CoGAN model lacks the capability to handle noisy data, and it is computationally expensive and inefficient for practical applications such as cross-domain image transformation. In this paper, we propose a new method, named the Coupled Generative Adversarial Stacked Auto-encoder (CoGASA), to directly transfer data from one domain to another with robustness to noise in the input data as well as to reduce the computation time. We evaluate the proposed model using MNIST and the Large-scale CelebFaces Attributes (CelebA) datasets, and the results demonstrate highly competitive performance. Our proposed models can easily transfer images into the target domain with minimal effort. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. Method of generating polynucleotides encoding enhanced folding variants

    Energy Technology Data Exchange (ETDEWEB)

    Bradbury, Andrew M.; Kiss, Csaba; Waldo, Geoffrey S.

    2017-05-02

The invention provides directed evolution methods for improving the folding, solubility and stability (including thermostability) characteristics of polypeptides. In one aspect, the invention provides a method for generating folding and stability-enhanced variants of proteins, including but not limited to fluorescent proteins, chromophoric proteins and enzymes. In another aspect, the invention provides methods for generating thermostable variants of a target protein or polypeptide via an internal destabilization baiting strategy. Internal destabilization of a protein of interest is achieved by inserting a heterologous, folding-destabilizing sequence (folding interference domain) within DNA encoding the protein of interest, then evolving the protein sequences adjacent to the heterologous insertion to overcome the destabilization (using any number of mutagenesis methods), thereby creating a library of variants. The variants in the library are expressed, and those with enhanced folding characteristics are selected.

  16. Information encoding of a qubit into a multilevel environment

    International Nuclear Information System (INIS)

    Perez, A.

    2010-01-01

    I consider the interaction of a small quantum system (a qubit) with a structured environment consisting of many levels. The qubit will experience a decoherence process, which implies that part of its initial information will be encoded into correlations between system and environment. I investigate how this information is distributed on a given subset of levels as a function of its size, using the mutual information between both entities, in the spirit of the partial-information plots studied by Zurek and co-workers. In this case we can observe some differences, which arise from the fact that I am partitioning just one quantum system and not a collection of them. However, some similar features, like redundancy (in the sense that a given amount of information is shared by many subsets), which increases with the size of the environment, are also found here.

  17. Polymeric peptide pigments with sequence-encoded properties

    Energy Technology Data Exchange (ETDEWEB)

    Lampel, Ayala; McPhee, Scott A.; Park, Hang-Ah; Scott, Gary G.; Humagain, Sunita; Hekstra, Doeke R.; Yoo, Barney; Frederix, Pim W. J. M.; Li, Tai-De; Abzalimov, Rinat R.; Greenbaum, Steven G.; Tuttle, Tell; Hu, Chunhua; Bettinger, Christopher J.; Ulijn, Rein V.

    2017-06-08

    Melanins are a family of heterogeneous polymeric pigments that provide ultraviolet (UV) light protection, structural support, coloration, and free radical scavenging. Formed by oxidative oligomerization of catecholic small molecules, the physical properties of melanins are influenced by covalent and noncovalent disorder. We report the use of tyrosine-containing tripeptides as tunable precursors for polymeric pigments. In these structures, phenols are presented in a (supra-)molecular context dictated by the positions of the amino acids in the peptide sequence. Oxidative polymerization can be tuned in a sequence-dependent manner, resulting in peptide sequence–encoded properties such as UV absorbance, morphology, coloration, and electrochemical properties over a considerable range. Short peptides have low barriers to application and can be easily scaled, suggesting near-term applications in cosmetics and biomedicine.

  18. 3-D reconstruction using an efficient Octree encoding scheme

    International Nuclear Information System (INIS)

    Yeh, H.J.; Jagadeesh, J.M.

    1986-01-01

Reconstruction of a three dimensional (3-D) model of biological objects from their thin section 2-D slices is a valuable tool for biomedical research. The goal of a 3-D reconstruction routine is to find the 3-D structure from a set of sliced images and display the 3-D view on a 2-D screen. The octree is widely used as a powerful data structure to represent 3-D objects in a computer. The encoding technique is especially useful for representing objects with irregular shapes, such as biomedical objects. A method called level-wise pointerless representation, which requires much less storage, has been developed. In addition, a complete software package has been designed using this efficient data structure to reconstruct 3-D objects from 2-D sliced images and to display the 3-D objects on a 2-D screen
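To make the "level-wise pointerless" idea concrete, here is a toy sketch (my own illustration; the abstract does not specify the paper's exact code layout): each octree level is stored as a flat string of node codes, and a 'P' (partial) node implicitly refers to the next eight entries of the following level, so no explicit child pointers need to be stored.

```python
import numpy as np

def encode_octree(volume):
    """Level-wise pointerless octree sketch for a binary 3-D volume.

    For each level, emit one code per node: 'F' full, 'E' empty,
    'P' partial (its eight children follow, in order, on the next
    level). Volume side length must be a power of two.
    """
    n = volume.shape[0]
    levels = []
    nodes = [(0, 0, 0, n)]  # (x, y, z, size) of nodes at this level
    while nodes:
        codes, children = [], []
        for x, y, z, s in nodes:
            block = volume[x:x + s, y:y + s, z:z + s]
            if block.all():
                codes.append('F')
            elif not block.any():
                codes.append('E')
            else:
                codes.append('P')
                h = s // 2
                children += [(x + dx, y + dy, z + dz, h)
                             for dx in (0, h)
                             for dy in (0, h)
                             for dz in (0, h)]
        levels.append(''.join(codes))
        nodes = children
    return levels
```

Because partial nodes at level k map, in order, onto blocks of eight codes at level k+1, the child of any node can be located by counting 'P' codes, which is the sense in which the representation needs no pointers.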

  19. Natural biased coin encoded in the genome determines cell strategy.

    Directory of Open Access Journals (Sweden)

    Faezeh Dorri

Decision making at a cellular level determines different fates for isogenic cells. However, it is not yet clear how rational decisions are encoded in the genome, how they are transmitted to their offspring, and whether they evolve and become optimized throughout generations. In this paper, we use a game theoretic approach to explain how rational decisions are made in the presence of cooperators and competitors. Our results suggest the existence of an internal switch that operates as a biased coin. The biased coin is, in fact, a biochemical bistable network of interacting genes that can flip to one of its stable states in response to different environmental stimuli. We present a framework to describe how the positions of attractors in such a gene regulatory network correspond to the behavior of a rational player in a competing environment. We evaluate our model by considering lysis/lysogeny decision making of bacteriophage lambda in E. coli.

  20. Monitoring thioredoxin redox with a genetically encoded red fluorescent biosensor.

    Science.gov (United States)

    Fan, Yichong; Makar, Merna; Wang, Michael X; Ai, Hui-Wang

    2017-09-01

Thioredoxin (Trx) is one of the two major thiol antioxidants, playing essential roles in redox homeostasis and signaling. Despite its importance, there is a lack of methods for monitoring Trx redox dynamics in live cells, hindering a better understanding of physiological and pathological roles of the Trx redox system. In this work, we developed the first genetically encoded fluorescent biosensor for Trx redox by engineering a redox relay between the active-site cysteines of human Trx1 and rxRFP1, a redox-sensitive red fluorescent protein. We used the resultant biosensor, TrxRFP1, to selectively monitor perturbations of Trx redox in various mammalian cell lines. We subcellularly localized TrxRFP1 to image compartmentalized Trx redox changes. We further combined TrxRFP1 with a green fluorescent Grx1-roGFP2 biosensor to simultaneously monitor Trx and glutathione redox dynamics in live cells in response to chemical and physiologically relevant stimuli.