WorldWideScience

Sample records for adaptive array processing

  1. Application of optical processing to adaptive phased array radar

    Science.gov (United States)

    Carroll, C. W.; Vijaya Kumar, B. V. K.

    1988-01-01

    The results of the investigation of the applicability of optical processing to Adaptive Phased Array Radar (APAR) data processing will be summarized. Subjects that are covered include: (1) new iterative Fourier transform based technique to determine the array antenna weight vector such that the resulting antenna pattern has nulls at desired locations; (2) obtaining the solution of the optimal Wiener weight vector by both iterative and direct methods on two laboratory Optical Linear Algebra Processing (OLAP) systems; and (3) an investigation of the effects of errors present in OLAP systems on the solution vectors.
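
    As a concrete illustration of the second item, the optimal Wiener weight vector is the solution of the normal equations R w = r, with R the array covariance matrix and r the cross-correlation between the element signals and the reference signal. The following NumPy sketch contrasts a direct solve with a simple iterative (steepest-descent/Richardson) scheme analogous in spirit to the iterative methods run on the OLAP systems; all sizes, signals and variable names are illustrative assumptions, not the paper's data.

```python
# Minimal sketch (not the paper's OLAP implementation): the optimal Wiener
# weight vector w solves R w = r, where R is the array covariance matrix
# and r is the cross-correlation with the desired (reference) signal.
import numpy as np

rng = np.random.default_rng(0)
N = 8                                    # number of array elements (assumed)
X = rng.standard_normal((N, 1000)) + 1j * rng.standard_normal((N, 1000))
d = rng.standard_normal(1000) + 1j * rng.standard_normal(1000)   # reference signal

R = X @ X.conj().T / X.shape[1]          # sample covariance matrix
r = X @ d.conj() / X.shape[1]            # cross-correlation vector

# Direct solution of the normal equations
w_direct = np.linalg.solve(R, r)

# Simple iterative (Richardson / steepest-descent) solution of R w = r
mu = 0.5 / np.linalg.eigvalsh(R).max()   # step size small enough to converge
w = np.zeros(N, dtype=complex)
for _ in range(500):
    w = w + mu * (r - R @ w)

print(np.allclose(w, w_direct, atol=1e-3))   # both methods agree
```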

  2. Principles of Adaptive Array Processing

    Science.gov (United States)

    2006-09-01

    ACE with and without tapering (homogeneous case). These analytical results are less suited to predict the detection performance of a real system ... Nickel: Adaptive Beamforming for Phased Array Radars. Proc. Int. Radar Symposium IRS'98 (Munich, Sept. 1998), DGON and VDE/ITG, pp. 897-906. (Reprint also ... strategies for airborne radar. Asilomar Conf. on Signals, Systems and Computers, Pacific Grove, CA, 1998, IEEE Cat. Nr. 0-7803-5148-7/98, pp. 1327-1331. [17

  3. Using adaptive antenna array in LTE with MIMO for space-time processing

    Directory of Open Access Journals (Sweden)

    Abdourahamane Ahmed Ali

    2015-04-01

    Full Text Available Methods for improving existing wireless transmission systems are proposed. The mathematical apparatus for using an adaptive antenna array in LTE with MIMO for space-time processing is presented and validated by models, whose graphs are shown. The results show that the improvements associated with space-time processing have a positive effect on LTE cell size or throughput

  4. Introduction to adaptive arrays

    CERN Document Server

    Monzingo, Bob; Haupt, Randy

    2011-01-01

    This second edition is an extensive modernization of the bestselling introduction to the subject of adaptive array sensor systems. With the number of applications of adaptive array sensor systems growing each year, this look at the principles and fundamental techniques that are critical to these systems is more important than ever before. Introduction to Adaptive Arrays, 2nd Edition is organized as a tutorial, taking the reader by the hand and leading them through the maze of jargon that often surrounds this highly technical subject. It is easy to read and easy to follow as fundamental concept

  5. Adaptive motion compensation in sonar array processing

    NARCIS (Netherlands)

    Groen, J.

    2006-01-01

    In recent years, sonar performance has mainly improved via a significant increase in array aperture, signal bandwidth and computational power. This thesis aims at improving sonar array processing techniques based on these three steps forward. In applications such as anti-submarine warfare and mine

  6. Proceedings of the Adaptive Sensor Array Processing Workshop (12th) Held in Lexington, MA on 16-18 March 2004 (CD-ROM)

    National Research Council Canada - National Science Library

    James, F

    2004-01-01

    ...: The twelfth annual workshop on Adaptive Sensor Array Processing presented a diverse agenda featuring new work on adaptive methods for communications, radar and sonar, algorithmic challenges posed...

  7. Subband Adaptive Array for DS-CDMA Mobile Radio

    Directory of Open Access Journals (Sweden)

    Tran Xuan Nam

    2004-01-01

    Full Text Available We propose a novel scheme of subband adaptive array (SBAA) for direct-sequence code division multiple access (DS-CDMA). The scheme exploits the spreading code and pilot signal as the reference signal to estimate the propagation channel. Moreover, instead of combining the array outputs at each output tap using a synthesis filter and then despreading them, we despread the array outputs at each tap directly with the desired user's code, which eliminates the need for the synthesis filter. Although its configuration differs considerably from that of a 2D RAKE receiver, the proposed scheme achieves roughly equivalent performance with a lower computational load thanks to adaptive signal processing in subbands. Simulations are carried out to explore the performance of the scheme and to compare it with that of the standard 2D RAKE.

  8. Adaptive Beamforming Based on Complex Quaternion Processes

    Directory of Open Access Journals (Sweden)

    Jian-wu Tao

    2014-01-01

    Full Text Available Motivated by the benefits of array signal processing in the quaternion domain, we investigate the problem of adaptive beamforming based on complex quaternion processes in this paper. First, a complex quaternion least-mean squares (CQLMS) algorithm is proposed and its performance is analyzed. The CQLMS algorithm is suitable for adaptive beamforming of vector-sensor arrays. The weight vector update of the CQLMS algorithm is derived based on the complex gradient, leading to lower computational complexity. Because the complex quaternion can exhibit the orthogonal structure of an electromagnetic vector-sensor in a natural way, a complex quaternion model in the time domain is provided for a 3-component vector-sensor array, and the normalized adaptive beamformer using CQLMS is presented. Finally, simulation results are given to validate the performance of the proposed adaptive beamformer.
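
    For orientation, the CQLMS update has the same structure as the ordinary complex LMS beamformer that it generalizes to the quaternion domain. The sketch below shows that baseline complex LMS with a pilot (reference) signal; the array geometry, step size and signal model are illustrative assumptions, and the quaternion-valued arithmetic of the paper is not reproduced.

```python
# Minimal sketch of an ordinary complex LMS adaptive beamformer, i.e. the
# algorithm that CQLMS generalizes to the (complex) quaternion domain.
# Geometry, step size and signals are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
N, K = 6, 2000                           # elements, snapshots (assumed)
theta = np.deg2rad(20.0)                 # desired source direction
a = np.exp(1j * np.pi * np.arange(N) * np.sin(theta))   # ULA steering vector

s = np.sign(rng.standard_normal(K))      # known pilot symbols (BPSK)
noise = 0.3 * (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K)))
X = np.outer(a, s) + noise               # received snapshots

mu = 0.005                               # LMS step size
w = np.zeros(N, dtype=complex)
for k in range(K):
    x = X[:, k]
    y = w.conj() @ x                     # beamformer output
    e = s[k] - y                         # error against the pilot
    w = w + mu * x * np.conj(e)          # complex LMS weight update

print(abs(w.conj() @ a))                 # near-unity response toward the source
```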

  9. Design of Robust Adaptive Array Processors for Non-Stationary Ocean Environments

    National Research Council Canada - National Science Library

    Wage, Kathleen E

    2009-01-01

    The overall goal of this project is to design adaptive array processing algorithms that have good transient performance, are robust to mismatch, work with low sample support, and incorporate waveguide...

  10. An SDR-Based Real-Time Testbed for GNSS Adaptive Array Anti-Jamming Algorithms Accelerated by GPU

    Directory of Open Access Journals (Sweden)

    Hailong Xu

    2016-03-01

    Full Text Available Nowadays, software-defined radio (SDR) has become a common approach to evaluating new algorithms. However, in the field of Global Navigation Satellite System (GNSS) adaptive array anti-jamming, previous work has been limited by the high computational power demanded by adaptive algorithms and often lacks flexibility and configurability. In this paper, the design and implementation of an SDR-based real-time testbed for GNSS adaptive array anti-jamming accelerated by a Graphics Processing Unit (GPU) are documented. This testbed distinguishes itself as a feature-rich and extendible platform with great flexibility and configurability, as well as high computational performance. Both Space-Time Adaptive Processing (STAP) and Space-Frequency Adaptive Processing (SFAP) are implemented with a wide range of parameters. Raw data from as many as eight antenna elements can be processed in real-time in either an adaptive nulling or beamforming mode. To fully take advantage of the parallelism provided by the GPU, a batched programming method is proposed. Tests and experiments are conducted to evaluate both the computational and anti-jamming performance. This platform can be used for research and prototyping, as well as serving as a real product in certain applications.

  11. Radar techniques using array antennas

    CERN Document Server

    Wirth, Wulf-Dieter

    2013-01-01

    Radar Techniques Using Array Antennas is a thorough introduction to the possibilities of radar technology based on electronic steerable and active array antennas. Topics covered include array signal processing, array calibration, adaptive digital beamforming, adaptive monopulse, superresolution, pulse compression, sequential detection, target detection with long pulse series, space-time adaptive processing (STAP), moving target detection using synthetic aperture radar (SAR), target imaging, energy management and system parameter relations. The discussed methods are confirmed by simulation stud

  12. Adaptive ground implemented phase array

    Science.gov (United States)

    Spearing, R. E.

    1973-01-01

    The simulation of an adaptive ground implemented phased array of five antenna elements is reported for a very high frequency system design that is tolerant of the radio frequency interference environment encountered by a tracking data relay satellite. Signals originating from satellites are received by the VHF ring array, and both horizontal and vertical polarizations from each of the five elements are multiplexed and transmitted down to the ground station. A panel on the transmitting end of the simulation chamber contains up to 10 S-band RFI sources along with the desired signal to simulate the dynamic relationship between user and TDRS. The 10 input channels are summed, and the desired and interference signals are separated and corrected until the resultant sum signal-to-interference ratio is maximized. Testing performed with this simulation equipment demonstrates good correlation between predicted and actual results.

  13. Sensor array signal processing

    CERN Document Server

    Naidu, Prabhakar S

    2009-01-01

    Chapter One: An Overview of Wavefields. 1.1 Types of Wavefields and the Governing Equations; 1.2 Wavefield in open space; 1.3 Wavefield in bounded space; 1.4 Stochastic wavefield; 1.5 Multipath propagation; 1.6 Propagation through random medium; 1.7 Exercises.
    Chapter Two: Sensor Array Systems. 2.1 Uniform linear array (ULA); 2.2 Planar array; 2.3 Distributed sensor array; 2.4 Broadband sensor array; 2.5 Source and sensor arrays; 2.6 Multi-component sensor array; 2.7 Exercises.
    Chapter Three: Frequency Wavenumber Processing. 3.1 Digital filters in the w-k domain; 3.2 Mapping of 1D into 2D filters; 3.3 Multichannel Wiener filters; 3.4 Wiener filters for ULA and UCA; 3.5 Predictive noise cancellation; 3.6 Exercises.
    Chapter Four: Source Localization: Frequency Wavenumber Spectrum. 4.1 Frequency wavenumber spectrum; 4.2 Beamformation; 4.3 Capon's w-k spectrum; 4.4 Maximum entropy w-k spectrum; 4.5 Doppler-Azimuth Processing; 4.6 Exercises.
    Chapter Five: Source Localization: Subspace Methods. 5.1 Subspace methods (Narrowband); 5.2 Subspace methods (B...

  14. Micromirror Arrays for Adaptive Optics; TOPICAL

    International Nuclear Information System (INIS)

    Carr, E.J.

    2000-01-01

    The long-range goal of this project is to develop the optical and mechanical design of a micromirror array for adaptive optics that will meet the following criteria: flat mirror surface (λ/20), high fill factor (> 95%), large stroke (5-10 µm), and pixel size ≈ 200 µm. This will be accomplished by optimizing the mirror surface and actuators independently and then combining them using bonding technologies that are currently being developed

  15. ArrayBridge: Interweaving declarative array processing with high-performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Xing, Haoyuan [The Ohio State Univ., Columbus, OH (United States); Floratos, Sofoklis [The Ohio State Univ., Columbus, OH (United States); Blanas, Spyros [The Ohio State Univ., Columbus, OH (United States); Byna, Suren [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Prabhat, Prabhat [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Wu, Kesheng [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Brown, Paul [Paradigm4, Inc., Waltham, MA (United States)

    2017-05-04

    Scientists are increasingly turning to datacenter-scale computers to produce and analyze massive arrays. Despite decades of database research that extols the virtues of declarative query processing, scientists still write, debug and parallelize imperative HPC kernels even for the most mundane queries. This impedance mismatch has been partly attributed to the cumbersome data loading process; in response, the database community has proposed in situ mechanisms to access data in scientific file formats. Scientists, however, desire more than a passive access method that reads arrays from files. This paper describes ArrayBridge, a bi-directional array view mechanism for scientific file formats, that aims to make declarative array manipulations interoperable with imperative file-centric analyses. Our prototype implementation of ArrayBridge uses HDF5 as the underlying array storage library and seamlessly integrates into the SciDB open-source array database system. In addition to fast querying over external array objects, ArrayBridge produces arrays in the HDF5 file format just as easily as it can read from it. ArrayBridge also supports time travel queries from imperative kernels through the unmodified HDF5 API, and automatically deduplicates between array versions for space efficiency. Our extensive performance evaluation in NERSC, a large-scale scientific computing facility, shows that ArrayBridge exhibits statistically indistinguishable performance and I/O scalability to the native SciDB storage engine.
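
    For readers unfamiliar with the file side of this pipeline, the snippet below shows the kind of external, HDF5-resident array object that ArrayBridge exposes to declarative SciDB queries, read here directly with h5py. The file and dataset names are hypothetical placeholders.

```python
# Minimal sketch of slicing an HDF5 array with h5py; ArrayBridge makes arrays
# stored this way queryable from SciDB without a separate loading step.
# File and dataset names are hypothetical placeholders.
import h5py

with h5py.File("simulation_output.h5", "r") as f:
    dset = f["/temperature"]            # an n-dimensional array dataset
    tile = dset[0:128, 0:128]           # read a sub-array without loading it all
    print(dset.shape, tile.mean())
```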

  16. Adaptive Injection-locking Oscillator Array for RF Spectrum Analysis

    International Nuclear Information System (INIS)

    Leung, Daniel

    2011-01-01

    A highly parallel radio frequency receiver using an array of injection-locking oscillators for on-chip, rapid estimation of signal amplitudes and frequencies is considered. The oscillators are tuned to different natural frequencies, and variable gain amplifiers are used to provide negative feedback to adapt the locking bandwidth to the input signal, yielding a combined measure of input signal amplitude and frequency detuning. To further this effort, an array of 16 two-stage differential ring oscillators and 16 Gilbert-cell mixers is designed for 40-400 MHz operation. The injection-locking oscillator array is assembled on a custom printed-circuit board. Control and calibration are achieved by an on-board microcontroller.

  17. A Background Noise Reduction Technique Using Adaptive Noise Cancellation for Microphone Arrays

    Science.gov (United States)

    Spalt, Taylor B.; Fuller, Christopher R.; Brooks, Thomas F.; Humphreys, William M., Jr.

    2011-01-01

    Background noise in wind tunnel environments poses a challenge to acoustic measurements due to possible low or negative Signal to Noise Ratios (SNRs) present in the testing environment. This paper overviews the application of time domain Adaptive Noise Cancellation (ANC) to microphone array signals with an intended application of background noise reduction in wind tunnels. An experiment was conducted to simulate background noise from a wind tunnel circuit measured by an out-of-flow microphone array in the tunnel test section. A reference microphone was used to acquire a background noise signal which interfered with the desired primary noise source signal at the array. The technique's efficacy was investigated using frequency spectra from the array microphones, array beamforming of the point source region, and subsequent deconvolution using the Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS) algorithm. Comparisons were made with the conventional techniques for improving SNR, spectral subtraction and Cross-Spectral Matrix subtraction. The method was seen to recover the primary signal level at SNRs as low as -29 dB and to outperform the conventional methods. A second processing approach using the center array microphone as the noise reference was investigated for more general applicability of the ANC technique. It outperformed the conventional methods at the -29 dB SNR but yielded less accurate results when coherence over the array dropped. This approach could possibly improve conventional testing methodology but must be investigated further under more realistic testing conditions.
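
    The core of time-domain ANC is an adaptive filter that predicts the background noise at the primary channel from the reference microphone and subtracts the prediction. A minimal LMS-based sketch is given below with synthetic signals; the filter length, step size and noise path are illustrative assumptions, not the wind-tunnel configuration of the paper.

```python
# Minimal sketch of time-domain Adaptive Noise Cancellation (ANC): an LMS
# filter estimates the background noise at a primary microphone from a
# reference microphone and subtracts it, leaving the desired signal.
import numpy as np

rng = np.random.default_rng(2)
fs = 10_000
t = np.arange(2 * fs) / fs
tone = 0.1 * np.sin(2 * np.pi * 500 * t)        # weak desired "source" signal
noise = rng.standard_normal(t.size)             # background noise
ref = noise                                     # reference mic: noise only
path = np.array([0.6, 0.3, 0.1])                # assumed noise propagation path
primary = tone + np.convolve(noise, path)[:t.size]

L, mu = 16, 0.002                               # filter taps, LMS step size
w = np.zeros(L)
out = np.zeros(t.size)
for n in range(L - 1, t.size):
    x = ref[n - L + 1:n + 1][::-1]              # newest reference sample first
    e = primary[n] - w @ x                      # cleaned output sample
    w = w + mu * e * x                          # LMS update
    out[n] = e

# After convergence, `out` is dominated by the 500 Hz tone, not the noise.
```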

  18. Superresolution with Seismic Arrays using Empirical Matched Field Processing

    Energy Technology Data Exchange (ETDEWEB)

    Harris, D B; Kvaerna, T

    2010-03-24

    Scattering and refraction of seismic waves can be exploited with empirical matched field processing of array observations to distinguish sources separated by much less than the classical resolution limit. To describe this effect, we use the term 'superresolution', a term widely used in the optics and signal processing literature to denote systems that break the diffraction limit. We illustrate superresolution with Pn signals recorded by the ARCES array in northern Norway, using them to identify the origins with 98.2% accuracy of 549 explosions conducted by closely-spaced mines in northwest Russia. The mines are observed at 340-410 kilometers range and are separated by as little as 3 kilometers. When viewed from ARCES many are separated by just tenths of a degree in azimuth. This classification performance results from an adaptation to transient seismic signals of techniques developed in underwater acoustics for localization of continuous sound sources. Matched field processing is a potential competitor to frequency-wavenumber and waveform correlation methods currently used for event detection, classification and location. It operates by capturing the spatial structure of wavefields incident from a particular source in a series of narrow frequency bands. In the rich seismic scattering environment, closely-spaced sources far from the observing array nonetheless produce distinct wavefield amplitude and phase patterns across the small array aperture. With observations of repeating events, these patterns can be calibrated over a wide band of frequencies (e.g. 2.5-12.5 Hertz) for use in a power estimation technique similar to frequency-wavenumber analysis. The calibrations enable coherent processing at high frequencies at which wavefields normally are considered incoherent under a plane wave model.
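
    The power estimation step described above is, in generic form, a Bartlett-style matched field estimate in which calibrated (empirical) steering vectors replace plane-wave replicas. A hedged sketch is shown below; the shapes and names are assumptions for illustration, not the ARCES processing chain.

```python
# Minimal sketch of a Bartlett-style empirical matched field power estimate:
# calibrated narrowband steering vectors (learned from repeating events) are
# matched against observed cross-spectral matrices in each frequency band.
import numpy as np

def emfp_power(csm, calib):
    """csm:   (B, N, N) cross-spectral matrices, one per frequency band
       calib: (B, N) calibrated steering vectors for one candidate source"""
    power = 0.0
    for R, d in zip(csm, calib):
        d = d / np.linalg.norm(d)               # unit-norm replica vector
        power += np.real(d.conj() @ R @ d)      # Bartlett power in this band
    return power / len(csm)

# An event is attributed to the candidate source (e.g. a particular mine)
# whose calibration yields the largest matched field power.
```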

  19. Dynamic Adaptive Neural Network Arrays: A Neuromorphic Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Disney, Adam [University of Tennessee (UT); Reynolds, John [University of Tennessee (UT)

    2015-01-01

    Dynamic Adaptive Neural Network Array (DANNA) is a neuromorphic hardware implementation. It differs from most other neuromorphic projects in that it allows for programmability of structure, and it is trained or designed using evolutionary optimization. This paper describes the DANNA structure, how DANNA is trained using evolutionary optimization, and an application of DANNA to a very simple classification task.

  20. Multiple wall-reflection effect in adaptive-array differential-phase reflectometry on QUEST

    International Nuclear Information System (INIS)

    Idei, H.; Fujisawa, A.; Nagashima, Y.; Onchi, T.; Hanada, K.; Zushi, H.; Mishra, K.; Hamasaki, M.; Hayashi, Y.; Yamamoto, M.K.

    2016-01-01

    A phased array antenna and Software-Defined Radio (SDR) heterodyne-detection systems have been developed for adaptive array approaches in reflectometry on QUEST. In the QUEST device, considered as a large oversized cavity, a standing-wave (multiple wall-reflection) effect was clearly observed, with distorted amplitude and phase evolution even when the adaptive array analyses were applied. The distorted fields were analyzed by Fast Fourier Transform (FFT) in the wavenumber domain to treat separately the components with and without wall reflections. The differential phase evolution was properly obtained from the distorted field evolution by the FFT procedures. A frequency derivative method has been proposed to overcome the multiple wall-reflection effect, and SDR super-heterodyned components with the small frequency difference required for the derivative method were correctly obtained using the FFT analysis
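
    The wavenumber-domain separation mentioned above can be illustrated generically: the complex field sampled across the receiving array is Fourier transformed, components attributed to wall reflections are masked out, and the field is transformed back before the differential phase is evaluated. The sketch below is such a generic illustration; the field, element pitch and mask boundaries are assumptions, not the QUEST data.

```python
# Minimal sketch of separating direct-path and wall-reflected components by
# filtering in the spatial-frequency (wavenumber) domain.
import numpy as np

def keep_direct_path(field, spacing, k_lo, k_hi):
    """field: complex field samples across the array; spacing: element pitch [m];
       keep only wavenumber components in [k_lo, k_hi] (the direct-path window)."""
    k = 2 * np.pi * np.fft.fftfreq(field.size, d=spacing)   # wavenumber axis
    spectrum = np.fft.fft(field)
    mask = (k >= k_lo) & (k <= k_hi)
    return np.fft.ifft(spectrum * mask)

# The differential phase is then taken from the filtered (direct-path) field
# rather than from the raw field distorted by multiple wall reflections.
```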

  1. Directional hearing aid using hybrid adaptive beamformer (HAB) and binaural ITE array

    Science.gov (United States)

    Shaw, Scott T.; Larow, Andy J.; Gibian, Gary L.; Sherlock, Laguinn P.; Schulein, Robert

    2002-05-01

    A directional hearing aid algorithm called the Hybrid Adaptive Beamformer (HAB), developed for NIH/NIA, can be applied to many different microphone array configurations. In this project the HAB algorithm was applied to a new array employing in-the-ear microphones at each ear (HAB-ITE), to see if previous HAB performance could be achieved with a more cosmetically acceptable package. With diotic output, the average benefit in threshold SNR was 10.9 dB for three hard-of-hearing (HoH) subjects and 11.7 dB for five normal-hearing subjects. These results are slightly better than previous results of equivalent tests with a 3-in. array. With an innovative binaural fitting, a small benefit beyond that provided by diotic adaptive beamforming was observed: 12.5 dB for HoH and 13.3 dB for normal-hearing subjects, a 1.6 dB improvement over the diotic presentation. Subjectively, the binaural fitting preserved binaural hearing abilities, giving the user a sense of space and providing left-right localization. Thus the goal of creating an adaptive beamformer that simultaneously provides excellent noise reduction and binaural hearing was achieved. Further work remains before the HAB-ITE can be incorporated into a real product: optimizing binaural adaptive beamforming, and integrating the concept with other technologies to produce a viable product prototype. [Work supported by NIH/NIDCD.]

  2. A recurrent neural network for adaptive beamforming and array correction.

    Science.gov (United States)

    Che, Hangjun; Li, Chuandong; He, Xing; Huang, Tingwen

    2016-08-01

    In this paper, a recurrent neural network (RNN) is proposed for solving the adaptive beamforming problem. In order to minimize sidelobe interference, the problem is formulated as a convex optimization problem based on a linear array model. The RNN is designed to optimize the system's weight values within the feasible region, which is derived from the array's state and the plane wave's information. The new algorithm is proven to be stable and to converge to the optimal solution in the sense of Lyapunov. To verify the new algorithm's performance, we apply it to beamforming under an array mismatch situation. Compared with other optimization algorithms, simulations suggest that the RNN has a strong ability to find exact solutions under large-scale constraints. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. The fabrication techniques of Z-pinch targets. Techniques of fabricating self-adapted Z-pinch wire-arrays

    International Nuclear Information System (INIS)

    Qiu Longhui; Wei Yun; Liu Debin; Sun Zuoke; Yuan Yuping

    2002-01-01

    In order to fabricate wire arrays for use in Z-pinch physics experiments, the following fabrication techniques were investigated: gold about 1-1.5 μm thick is electroplated on the surface of ultra-fine tungsten wires; fibers of deuterated polystyrene (DPS) with diameters from 30 to 100 microns are made from molten DPS; and two kinds of planar wire-arrays and four types of annular wire-arrays are designed, which are able to adapt to the variation of the distance between the cathode and anode inside the target chamber. Furthermore, wire-arrays were fabricated from tungsten wires with diameters from 5-24 μm. The on-site test shows that the wire-arrays can self-adapt to the distance changes perfectly

  4. Adaptive antenna array algorithms and their impact on code division ...

    African Journals Online (AJOL)

    In this paper four blind adaptive array algorithms are developed, and their performance under different test situations (e.g. an AWGN (Additive White Gaussian Noise) channel and a multipath environment) is studied. A MATLAB test bed is created to show their performance in these two test situations and an optimum one ...

  5. High Dynamic Range adaptive ΔΣ-based Focal Plane Array architecture

    KAUST Repository

    Yao, Shun; Kavusi, Sam; Salama, Khaled N.

    2012-01-01

    In this paper, an Adaptive Delta-Sigma based architecture for High Dynamic Range (HDR) Focal Plane Arrays is presented. The noise shaping effect of the Delta-Sigma modulation in the low end, and the distortion noise induced in the high end of the photodiode current, were analyzed in detail.

  6. ALMA Array Operations Group process overview

    Science.gov (United States)

    Barrios, Emilio; Alarcon, Hector

    2016-07-01

    ALMA Science operations activities in Chile are the responsibility of the Department of Science Operations, which consists of three groups: the Array Operations Group (AOG), the Program Management Group (PMG) and the Data Management Group (DMG). The AOG includes the Array Operators and has the mission to provide support for science observations, operating the array safely and efficiently. The poster describes the AOG process, management and operational tools.

  7. Computationally Efficient Blind Code Synchronization for Asynchronous DS-CDMA Systems with Adaptive Antenna Arrays

    Directory of Open Access Journals (Sweden)

    Chia-Chang Hu

    2005-04-01

    Full Text Available A novel space-time adaptive near-far robust code-synchronization array detector for asynchronous DS-CDMA systems is developed in this paper. It has the same basic requirements as the conventional matched filter of an asynchronous DS-CDMA system. For real-time applicability, a computationally efficient architecture of the proposed detector is developed, based on the concept of the multistage Wiener filter (MWF) of Goldstein and Reed. This multistage technique results in a self-synchronizing detection criterion that requires no inversion or eigendecomposition of a covariance matrix. As a consequence, this detector achieves a complexity that is only a linear function of the size of the antenna array (J), the rank of the MWF (M), the system processing gain (N), and the number of samples in a chip interval (S), that is, 𝒪(JMNS). The complexity of the equivalent detector based on the minimum mean-squared error (MMSE) criterion or on subspace-based eigenstructure analysis is of order 𝒪((JNS)³). Moreover, this multistage scheme provides rapid adaptive convergence under limited observation-data support. Simulations are conducted to evaluate the performance and convergence behavior of the proposed detector with respect to the size of the J-element antenna array, the amount of L-sample support, and the rank of the M-stage MWF. The performance advantage of the proposed detector over other DS-CDMA detectors is investigated as well.

  8. The optimal configuration of photovoltaic module arrays based on adaptive switching controls

    International Nuclear Information System (INIS)

    Chao, Kuei-Hsiang; Lai, Pei-Lun; Liao, Bo-Jyun

    2015-01-01

    Highlights:
    • We propose a strategy for determining the optimal configuration of a PV array.
    • The proposed strategy is based on the particle swarm optimization (PSO) method.
    • It can identify the optimal module array connection scheme in the event of shading.
    • It can also find the optimal connection of a PV array even under module malfunctions.
    Abstract: This study proposes a strategy for determining the optimal configuration of photovoltaic (PV) module arrays in shading or malfunction conditions. This strategy was based on particle swarm optimization (PSO). If shading or malfunctions of the photovoltaic module array occur, the module array immediately undergoes adaptive reconfiguration to increase the power output of the PV power generation system. First, the maximal power generated at various irradiation levels and temperatures was recorded during normal array operation. Subsequently, the irradiation level and module temperature, regardless of operating conditions, were used to recall the maximal power previously recorded. This previous maximum was compared with the maximal power value obtained using the maximum power point tracker to assess whether the PV module array was experiencing shading or malfunctions. After determining that the array was experiencing shading or malfunctions, PSO was used to identify the optimal module array connection scheme under these abnormal conditions, and connection switches were used to implement the optimal array reconfiguration. Finally, experiments were conducted to assess the strategy for identifying the optimal reconfiguration of a PV module array in the event of shading or malfunctions
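
    For reference, the PSO search over candidate connection schemes can be sketched generically as below. The `measure_power` objective is a hypothetical placeholder for the measured or modelled array output power of a given switch configuration, and the real-vector encoding of a configuration is also an assumption.

```python
# Minimal sketch of the particle swarm optimization (PSO) loop used to search
# for the best module-array connection scheme.  The objective and the
# configuration encoding are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(3)

def measure_power(config):
    # Placeholder: in the real system this is the PV array output power
    # obtained for the switch configuration encoded by `config`.
    return -np.sum((config - 0.7) ** 2)

dim, particles, iters = 8, 20, 100
w_inertia, c1, c2 = 0.7, 1.5, 1.5

x = rng.uniform(0, 1, (particles, dim))          # particle positions
v = np.zeros_like(x)                             # particle velocities
pbest, pbest_val = x.copy(), np.array([measure_power(p) for p in x])
gbest = pbest[np.argmax(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.uniform(size=(2, particles, dim))
    v = w_inertia * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 0, 1)
    vals = np.array([measure_power(p) for p in x])
    better = vals > pbest_val
    pbest[better], pbest_val[better] = x[better], vals[better]
    gbest = pbest[np.argmax(pbest_val)].copy()

print(gbest)                                     # best configuration found
```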

  9. Optimal and adaptive methods of processing hydroacoustic signals (review)

    Science.gov (United States)

    Malyshkin, G. S.; Sidel'nikov, G. B.

    2014-09-01

    Different methods of optimal and adaptive processing of hydroacoustic signals for multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector is analyzed, which is based on classical or fast projection algorithms, which estimates the background proceeding from median filtering or the method of bilateral spatial contrast.

  10. Theory and applications of spherical microphone array processing

    CERN Document Server

    Jarrett, Daniel P; Naylor, Patrick A

    2017-01-01

    This book presents the signal processing algorithms that have been developed to process the signals acquired by a spherical microphone array. Spherical microphone arrays can be used to capture the sound field in three dimensions and have received significant interest from researchers and audio engineers. Algorithms for spherical array processing are different to corresponding algorithms already known in the literature of linear and planar arrays because the spherical geometry can be exploited to great beneficial effect. The authors aim to advance the field of spherical array processing by helping those new to the field to study it efficiently and from a single source, as well as by offering a way for more experienced researchers and engineers to consolidate their understanding, adding either or both of breadth and depth. The level of the presentation corresponds to graduate studies at MSc and PhD level. This book begins with a presentation of some of the essential mathematical and physical theory relevant to ...

  11. The Applicability of Incoherent Array Processing to IMS Seismic Array Stations

    Science.gov (United States)

    Gibbons, S. J.

    2012-04-01

    The seismic arrays of the International Monitoring System for the CTBT differ greatly in size and geometry, with apertures ranging from below 1 km to over 60 km. Large and medium aperture arrays with large inter-site spacings complicate the detection and estimation of high frequency phases since signals are often incoherent between sensors. Many such phases, typically from events at regional distances, remain undetected since pipeline algorithms often consider only frequencies low enough to allow coherent array processing. High frequency phases that are detected are frequently attributed qualitatively incorrect backazimuth and slowness estimates and are consequently not associated with the correct event hypotheses. This can lead to missed events both due to a lack of contributing phase detections and by corruption of event hypotheses by spurious detections. Continuous spectral estimation can be used for phase detection and parameter estimation on the largest aperture arrays, with phase arrivals identified as local maxima on beams of transformed spectrograms. The estimation procedure in effect measures group velocity rather than phase velocity and the ability to estimate backazimuth and slowness requires that the spatial extent of the array is large enough to resolve time-delays between envelopes with a period of approximately 4 or 5 seconds. The NOA, AKASG, YKA, WRA, and KURK arrays have apertures in excess of 20 km and spectrogram beamforming on these stations provides high quality slowness estimates for regional phases without additional post-processing. Seven arrays with aperture between 10 and 20 km (MJAR, ESDC, ILAR, KSRS, CMAR, ASAR, and EKA) can provide robust parameter estimates subject to a smoothing of the resulting slowness grids, most effectively achieved by convolving the measured slowness grids with the array response function for a 4 or 5 second period signal. The MJAR array in Japan recorded high SNR Pn signals for both the 2006 and 2009 North Korea

  12. Proceedings of the Adaptive Sensor Array Processing (ASAP) Workshop 12-14 March 1997. Volume 1

    National Research Council Canada - National Science Library

    O'Donovan, G

    1997-01-01

    ... was included in the first and third ASAP workshops; ASAP has traditionally concentrated on radar. Core topics include airborne radar testbed systems, space-time adaptive processing, multipath jamming...

  13. General purpose graphic processing unit implementation of adaptive pulse compression algorithms

    Science.gov (United States)

    Cai, Jingxiao; Zhang, Yan

    2017-07-01

    This study introduces a practical approach to implement real-time signal processing algorithms for general surveillance radar based on NVIDIA graphical processing units (GPUs). The pulse compression algorithms are implemented using compute unified device architecture (CUDA) libraries such as CUDA basic linear algebra subroutines and CUDA fast Fourier transform library, which are adopted from open source libraries and optimized for the NVIDIA GPUs. For more advanced, adaptive processing algorithms such as adaptive pulse compression, customized kernel optimization is needed and investigated. A statistical optimization approach is developed for this purpose without needing much knowledge of the physical configurations of the kernels. It was found that the kernel optimization approach can significantly improve the performance. Benchmark performance is compared with the CPU performance in terms of processing accelerations. The proposed implementation framework can be used in various radar systems including ground-based phased array radar, airborne sense and avoid radar, and aerospace surveillance radar.
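
    The basic operation being accelerated is FFT-based pulse compression (matched filtering). The NumPy sketch below shows the same computation on the CPU; the paper maps it to cuFFT/cuBLAS kernels on the GPU. Chirp parameters and the target delay are illustrative assumptions.

```python
# Minimal sketch of FFT-based pulse compression: correlate the received echo
# with the transmitted LFM chirp via the frequency domain.
import numpy as np

fs, T, B = 10e6, 20e-6, 5e6                     # sample rate, pulse width, bandwidth
t = np.arange(int(fs * T)) / fs
chirp = np.exp(1j * np.pi * (B / T) * t**2)     # LFM reference pulse

echo = np.zeros(4096, dtype=complex)
delay = 1000
echo[delay:delay + chirp.size] = 0.5 * chirp    # one point target at `delay`
echo += 0.05 * (np.random.randn(echo.size) + 1j * np.random.randn(echo.size))

n = echo.size + chirp.size - 1
H = np.conj(np.fft.fft(chirp, n))               # matched filter in frequency domain
compressed = np.fft.ifft(np.fft.fft(echo, n) * H)

print(np.argmax(np.abs(compressed)))            # peak at the target delay (1000)
```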

  14. Robust adaptive multichannel SAR processing based on covariance matrix reconstruction

    Science.gov (United States)

    Tan, Zhen-ya; He, Feng

    2018-04-01

    With the combination of digital beamforming (DBF) processing, multichannel synthetic aperture radar (SAR) systems with multiple channels in azimuth show great promise for high-resolution and wide-swath imaging, whereas conventional processing methods do not take the nonuniformity of the scattering coefficient into consideration. This paper proposes a robust adaptive multichannel SAR processing method which first utilizes the Capon spatial spectrum estimator to obtain the spatial spectrum distribution over all ambiguous directions, and then reconstructs the interference-plus-noise covariance matrix from its definition to obtain the multichannel SAR processing filter. This novel method improves the performance of processing under a nonuniform scattering coefficient, and it is robust against array errors. Experiments with real measured data demonstrate the effectiveness and robustness of the proposed method.
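
    In generic terms, the two steps described above are (1) a Capon spatial spectrum estimate and (2) reconstruction of the interference-plus-noise covariance matrix by integrating that spectrum over the directions outside the desired look sector. The sketch below illustrates them for a plain narrowband uniform linear array; the geometry, sector boundaries and signal parameters are assumptions, not the multichannel SAR configuration of the paper.

```python
# Minimal sketch of Capon spectrum estimation followed by covariance matrix
# reconstruction over the out-of-sector (interference) directions.
import numpy as np

def steering(theta, N):
    return np.exp(1j * np.pi * np.arange(N) * np.sin(theta))

def capon_spectrum(R_inv, angles, N):
    return np.array([1.0 / np.real(steering(th, N).conj() @ R_inv @ steering(th, N))
                     for th in angles])

def reconstruct_covariance(R, look_sector, N, n_grid=361):
    angles = np.linspace(-np.pi / 2, np.pi / 2, n_grid)
    p = capon_spectrum(np.linalg.inv(R), angles, N)
    Rin = np.zeros((N, N), dtype=complex)
    for th, pk in zip(angles, p):
        if not (look_sector[0] <= th <= look_sector[1]):   # interference only
            a = steering(th, N)
            Rin += pk * np.outer(a, a.conj())
    return Rin * (angles[1] - angles[0])

# Illustrative data: desired source at 0 deg, strong interferer at 30 deg
N = 8
a0, a1 = steering(0.0, N), steering(np.deg2rad(30.0), N)
R = np.outer(a0, a0.conj()) + 10 * np.outer(a1, a1.conj()) + 0.1 * np.eye(N)
Rin = reconstruct_covariance(R, (-np.deg2rad(10.0), np.deg2rad(10.0)), N)
w = np.linalg.solve(Rin, a0)
w = w / (a0.conj() @ w)                          # MVDR-style adaptive weights
```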

  15. Model-based processing for underwater acoustic arrays

    CERN Document Server

    Sullivan, Edmund J

    2015-01-01

    This monograph presents a unified approach to model-based processing for underwater acoustic arrays. The use of physical models in passive array processing is not a new idea, but it has been used on a case-by-case basis, and as such, lacks any unifying structure. This work views all such processing methods as estimation procedures, which then can be unified by treating them all as a form of joint estimation based on a Kalman-type recursive processor, which can be recursive either in space or time, depending on the application. This is done for three reasons. First, the Kalman filter provides a natural framework for the inclusion of physical models in a processing scheme. Second, it allows poorly known model parameters to be jointly estimated along with the quantities of interest. This is important, since in certain areas of array processing already in use, such as those based on matched-field processing, the so-called mismatch problem either degrades performance or, indeed, prevents any solution at all. Third...

  16. Efficient processing of two-dimensional arrays with C or C++

    Science.gov (United States)

    Donato, David I.

    2017-07-20

    Because fast and efficient serial processing of raster-graphic images and other two-dimensional arrays is a requirement in land-change modeling and other applications, the effects of 10 factors on the runtimes for processing two-dimensional arrays with C and C++ are evaluated in a comparative factorial study. This study’s factors include the choice among three C or C++ source-code techniques for array processing; the choice of Microsoft Windows 7 or a Linux operating system; the choice of 4-byte or 8-byte array elements and indexes; and the choice of 32-bit or 64-bit memory addressing. This study demonstrates how programmer choices can reduce runtimes by 75 percent or more, even after compiler optimizations. Ten points of practical advice for faster processing of two-dimensional arrays are offered to C and C++ programmers. Further study and the development of a C and C++ software test suite are recommended. Key words: array processing, C, C++, compiler, computational speed, land-change modeling, raster-graphic image, two-dimensional array, software efficiency

  17. Experimental investigation of the ribbon-array ablation process

    International Nuclear Information System (INIS)

    Li Zhenghong; Xu Rongkun; Chu Yanyun; Yang Jianlun; Xu Zeping; Ye Fan; Chen Faxin; Xue Feibiao; Ning Jiamin; Qin Yi; Meng Shijian; Hu Qingyuan; Si Fenni; Feng Jinghua; Zhang Faqiang; Chen Jinchuan; Li Linbo; Chen Dingyang; Ding Ning; Zhou Xiuwen

    2013-01-01

    Ablation processes of ribbon-array loads, as well as wire-array loads for comparison, were investigated on Qiangguang-1 accelerator. The ultraviolet framing images indicate that the ribbon-array loads have stable passages of currents, which produce axially uniform ablated plasma. The end-on x-ray framing camera observed the azimuthally modulated distribution of the early ablated ribbon-array plasma and the shrink process of the x-ray radiation region. Magnetic probes measured the total and precursor currents of ribbon-array and wire-array loads, and there exists no evident difference between the precursor currents of the two types of loads. The proportion of the precursor current to the total current is 15% to 20%, and the start time of the precursor current is about 25 ns later than that of the total current. The melting time of the load material is about 16 ns, when the inward drift velocity of the ablated plasma is taken to be 1.5 × 10⁷ cm/s.

  18. Integrating Scientific Array Processing into Standard SQL

    Science.gov (United States)

    Misev, Dimitar; Bachhuber, Johannes; Baumann, Peter

    2014-05-01

    We live in a time that is dominated by data. Data storage is cheap and more applications than ever accrue vast amounts of data. Storing the emerging multidimensional data sets efficiently, however, and allowing them to be queried by their inherent structure, is a challenge many databases have to face today. Despite the fact that multidimensional array data is almost always linked to additional, non-array information, array databases have mostly developed separately from relational systems, resulting in a disparity between the two database categories. The current SQL standard and SQL DBMSs support arrays - and in an extension also multidimensional arrays - but do so in a very rudimentary and inefficient way. This poster demonstrates the practicality of an SQL extension for array processing, implemented in a proof-of-concept multi-faceted system that manages a federation of array and relational database systems, providing transparent, efficient and scalable access to the heterogeneous data in them.

  19. Adaptive Port-Starboard Beamforming of Triplet Sonar Arrays

    NARCIS (Netherlands)

    Groen, J.; Beerens, S.P.; Been, R.; Doisy, Y.

    2005-01-01

    Abstract—For a low-frequency active sonar (LFAS) with a triplet receiver array, it is not clear in advance which signal processing techniques optimize its performance. Here, several advanced beamformers are analyzed theoretically, and the results are compared to experimental data obtained in sea

  20. Adaptive lesion formation using dual mode ultrasound array system

    Science.gov (United States)

    Liu, Dalong; Casper, Andrew; Haritonova, Alyona; Ebbini, Emad S.

    2017-03-01

    We present the results from an ultrasound-guided focused ultrasound platform designed to perform real-time monitoring and control of lesion formation. Real-time signal processing of echogenicity changes during lesion formation allows for identification of signature events indicative of tissue damage. The detection of these events triggers the cessation or the reduction of the exposure (intensity and/or time) to prevent overexposure. A dual mode ultrasound array (DMUA) is used for forming single- and multiple-focus patterns in a variety of tissues. The DMUA approach allows for inherent registration between the therapeutic and imaging coordinate systems providing instantaneous, spatially-accurate feedback on lesion formation dynamics. The beamformed RF data has been shown to have high sensitivity and specificity to tissue changes during lesion formation, including in vivo. In particular, the beamformed echo data from the DMUA is very sensitive to cavitation activity in response to HIFU in a variety of modes, e.g. boiling cavitation. This form of feedback is characterized by a sudden increase in echogenicity that could occur within milliseconds of the application of HIFU (see http://youtu.be/No2wh-ceTLs for an example). The real-time beamforming and signal processing allowing the adaptive control of lesion formation are enabled by a high performance GPU platform (response time within 10 msec). We present results from a series of experiments in bovine cardiac tissue demonstrating the robustness and increased speed of volumetric lesion formation for a range of clinically-relevant exposures. Gross histology demonstrates clearly that adaptive lesion formation results in tissue damage consistent with the size of the focal spot and the raster scan in 3 dimensions. In contrast, uncontrolled volumetric lesions exhibit significant pre-focal buildup due to excessive exposure from multiple full-exposure HIFU shots. Stopping or reducing the HIFU exposure upon the detection of such an

  1. High Dynamic Range adaptive ΔΣ-based Focal Plane Array architecture

    KAUST Repository

    Yao, Shun

    2012-10-16

    In this paper, an Adaptive Delta-Sigma based architecture for High Dynamic Range (HDR) Focal Plane Arrays is presented. The noise shaping effect of the Delta-Sigma modulation in the low end, and the distortion noise induced in the high end of the photodiode current, were analyzed in detail. The proposed architecture can extend the DR by about 20N·log 2 dB at the high end of the photodiode current with an N-bit Up-Down counter. At the low end, it can compensate for the larger readout noise by employing Extended Counting. The Adaptive Delta-Sigma architecture employing a 4-bit Up-Down counter achieved a DR of about 160 dB, with a Peak SNR (PSNR) of 80 dB at the high end. Compared to the other HDR architectures, the Adaptive Delta-Sigma based architecture provides the widest DR with the best SNR performance in the extended range.

  2. Adaptive port-starboard beamforming of triplet arrays

    NARCIS (Netherlands)

    Beerens, S.P.; Been, R.; Groen, J.; Noutary, E.; Doisy, Y.

    2000-01-01

    Triplet arrays are single line arrays with three hydrophones on a circular section of the array. The triplet structure provides immediate port-starboard (PS) discrimination. This paper discusses the theoretical and experimental performance of triplet arrays. Results are obtained on detection gain

  3. Oxide nano-rod array structure via a simple metallurgical process

    International Nuclear Information System (INIS)

    Nanko, M; Do, D T M

    2011-01-01

    A simple method for fabricating oxide nano-rod array structures via a metallurgical process is reported. Some dilute alloys, such as Ni(Al) solid solution, show internal oxidation with rod-like oxide precipitates during high-temperature oxidation at low oxygen partial pressure. By removing the metal part of the internal oxidation zone, an oxide nano-rod array structure can be developed on the surface of metallic components. In this report, Al₂O₃ or NiAl₂O₄ nano-rod array structures were prepared by using Ni(Al) solid solution. Effects of Cr addition to the Ni(Al) solid solution on internal oxidation are also reported. A pack cementation process for aluminizing the Ni surface was applied to prepare nano-rod array components with the desired shape. Near-net-shape Ni components with an oxide nano-rod array structure on their surface can be prepared by using the pack cementation process and internal oxidation.

  4. Resource-adaptive cognitive processes

    CERN Document Server

    Crocker, Matthew W

    2010-01-01

    This book investigates the adaptation of cognitive processes to limited resources. The central topics of this book are heuristics considered as results of the adaptation to resource limitations, through natural evolution in the case of humans, or through artificial construction in the case of computational systems; the construction and analysis of resource control in cognitive processes; and an analysis of resource-adaptivity within the paradigm of concurrent computation. The editors integrated the results of a collaborative 5-year research project that involved over 50 scientists. After a mot

  5. Removing Background Noise with Phased Array Signal Processing

    Science.gov (United States)

    Podboy, Gary; Stephens, David

    2015-01-01

    Preliminary results are presented from a test conducted to determine how well microphone phased array processing software could pull an acoustic signal out of background noise. The array consisted of 24 microphones in an aerodynamic fairing designed to be mounted in-flow. The processing was conducted using Functional Beamforming software developed by Optinav combined with cross-spectral matrix subtraction. The test was conducted in the free-jet of the Nozzle Acoustic Test Rig at NASA GRC. The background noise was produced by the interaction of the free-jet flow with the solid surfaces in the flow. The acoustic signals were produced by acoustic drivers. The results show that the phased array processing was able to pull the acoustic signal out of the background noise provided the signal was no more than 20 dB below the background noise level measured using a conventional single microphone equipped with an aerodynamic forebody.
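
    Cross-spectral matrix subtraction, used here alongside Functional Beamforming, can be summarized in a few lines: a background-only cross-spectral matrix is subtracted from the matrix measured with the source active before the map is formed. The sketch below uses a plain conventional beamformer for the mapping step and assumed array shapes; it is not the Optinav processing chain.

```python
# Minimal sketch of cross-spectral matrix (CSM) subtraction followed by
# conventional beamforming.  Shapes and steering vectors are assumptions.
import numpy as np

def csm(snapshots):
    """snapshots: (N_mics, K) narrowband FFT bins from K data blocks."""
    return snapshots @ snapshots.conj().T / snapshots.shape[1]

def beamform_power(C, steering_vectors):
    """Conventional beamforming map over a grid of steering vectors (G, N)."""
    return np.real(np.einsum('gn,nm,gm->g',
                             steering_vectors.conj(), C, steering_vectors))

# C_total      = csm(blocks_with_source)      # measured with the source on
# C_background = csm(blocks_background_only)  # measured with the source off
# power_map    = beamform_power(C_total - C_background, grid_steering_vectors)
```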

  6. Digital image processing software system using an array processor

    International Nuclear Information System (INIS)

    Sherwood, R.J.; Portnoff, M.R.; Journeay, C.H.; Twogood, R.E.

    1981-01-01

    A versatile array processor-based system for general-purpose image processing was developed. At the heart of this system is an extensive, flexible software package that incorporates the array processor for effective interactive image processing. The software system is described in detail, and its application to a diverse set of applications at LLNL is briefly discussed. 4 figures, 1 table

  7. Reconfigurable signal processor designs for advanced digital array radar systems

    Science.gov (United States)

    Suarez, Hernan; Zhang, Yan (Rockee); Yu, Xining

    2017-05-01

    The new challenges originating from Digital Array Radar (DAR) demand a new generation of reconfigurable backend processors in the system. The new FPGA devices can support much higher speed, more bandwidth and greater processing capability for the needs of a digital Line Replaceable Unit (LRU). This study focuses on using the latest Altera and Xilinx devices in an adaptive beamforming processor. Field-reprogrammable RF devices from Analog Devices are used as analog front end transceivers. Different from other existing Software-Defined Radio transceivers on the market, this processor is designed for distributed adaptive beamforming in a networked environment. The following aspects of the novel radar processor will be presented: (1) A new system-on-chip architecture based on Altera's devices and an adaptive processing module, especially for adaptive beamforming and pulse compression, will be introduced; (2) Successful implementation of generation 2 serial RapidIO data links on FPGA, which supports the VITA-49 radio packet format for large distributed DAR processing. (3) Demonstration of the feasibility and capabilities of the processor in a Micro-TCA based, SRIO switching backplane to support multichannel beamforming in real-time. (4) Application of this processor in ongoing radar system development projects, including OU's dual-polarized digital array radar, the planned new cylindrical array radars, and future airborne radars.

  8. Studies of implosion processes of nested tungsten wire-array Z-pinch

    International Nuclear Information System (INIS)

    Ning Cheng; Ding Ning; Liu Quan; Yang Zhenhua

    2006-01-01

    The nested wire-array is a promising kind of structured load because it can improve the quality of the Z-pinch plasma and enhance the radiation power of the X-ray source. Based on the zero-dimensional model, the assumption of wire-array collision, and the criterion of an optimized load (maximal load kinetic energy), optimization of the typical nested wire-array used as a load of the Z machine at Sandia Laboratory was carried out. It was shown that this load has been basically optimized. The Z-pinch process of the typical load was numerically studied by means of a one-dimensional three-temperature radiation magneto-hydrodynamics (RMHD) code. The obtained results reproduce the dynamic process of the Z-pinch and show the implosion trajectory of the nested wire-array and the transfer process of the drive current between the inner and outer arrays. The experimental and computational X-ray pulses were compared, and it is suggested that the assumption of wire-array collision is reasonable in nested wire-array Z-pinches, at least for the current level of the Z machine. (authors)

  9. Array signal processing in the NASA Deep Space Network

    Science.gov (United States)

    Pham, Timothy T.; Jongeling, Andre P.

    2004-01-01

    In this paper, we describe the benefits of arraying and its past as well as expected future use. The signal processing aspects of the array system are described. Field measurements from tracking actual spacecraft are also presented.

  10. APD arrays and large-area APDs via a new planar process

    CERN Document Server

    Farrell, R; Vanderpuye, K; Grazioso, R; Myers, R; Entine, G

    2000-01-01

    A fabrication process has been developed which allows the beveled-edge-type of avalanche photodiode (APD) to be made without the need for the artful bevel formation steps. This new process, applicable to both APD arrays and to discrete detectors, greatly simplifies manufacture and should lead to significant cost reduction for such photodetectors. This is achieved through a simple innovation that allows isolation around the device or array pixel to be brought into the plane of the surface of the silicon wafer, hence a planar process. A description of the new process is presented along with performance data for a variety of APD device and array configurations. APD array pixel gains in excess of 10 000 have been measured. Array pixel coincidence timing resolution of less than 5 ns has been demonstrated. An energy resolution of 6% for 662 keV gamma-rays using a CsI(Tl) scintillator on a planar processed large-area APD has been recorded. Discrete APDs with active areas up to 13 cm² have been operated.

  11. Wavefront sensing and adaptive control in phased array of fiber collimators

    Science.gov (United States)

    Lachinova, Svetlana L.; Vorontsov, Mikhail A.

    2011-03-01

    A new wavefront control approach for mitigation of atmospheric turbulence-induced wavefront phase aberrations in coherent fiber-array-based laser beam projection systems is introduced and analyzed. This approach is based on integration of wavefront sensing capabilities directly into the fiber-array transmitter aperture. In the coherent fiber array considered, we assume that each fiber collimator (subaperture) of the array is capable of precompensation of local (on-subaperture) wavefront phase tip and tilt aberrations using controllable rapid displacement of the tip of the delivery fiber at the collimating lens focal plane. In the technique proposed, this tip and tilt phase aberration control is based on maximization of the optical power received through the same fiber collimator using the stochastic parallel gradient descent (SPGD) technique. The coordinates of the fiber tip after the local tip and tilt aberrations are mitigated correspond to the coordinates of the focal-spot centroid of the optical wave backscattered off the target. Similar to a conventional Shack-Hartmann wavefront sensor, the phase function over the entire fiber-array aperture can then be retrieved using the coordinates obtained. The piston phases that are required for coherent combining (phase locking) of the outgoing beams at the target plane can be further calculated from the reconstructed wavefront phase. Results of analysis and numerical simulations are presented. Performance of adaptive precompensation of phase aberrations in this type of laser beam projection system is compared for various system configurations characterized by the number of fiber collimators and atmospheric turbulence conditions. The wavefront control concept presented can be effectively applied to long-range laser beam projection scenarios for which the time delay associated with the double-pass laser beam propagation to the target and back is comparable to or even exceeds the characteristic time of the atmospheric turbulence change
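
    The SPGD iteration underlying the tip/tilt control can be written in a few lines: apply a random parallel perturbation to all controls, measure the change in the metric, and step the controls in proportion to that change. The sketch below uses a hypothetical `received_power` placeholder for the fiber-coupled power metric, and the number of subapertures, gain and perturbation amplitude are assumed values.

```python
# Minimal sketch of the stochastic parallel gradient descent (SPGD) iteration
# that maximizes the optical power received back through a fiber collimator.
import numpy as np

rng = np.random.default_rng(4)

def received_power(u):
    # Placeholder metric: in the real system this is the measured optical
    # power coupled back into the delivery fiber for tip/tilt settings u.
    return -np.sum((u - 0.2) ** 2)

n_controls = 2 * 7                     # x and y tilt for 7 subapertures (assumed)
u = np.zeros(n_controls)
gain, delta = 2.0, 0.05                # SPGD gain and perturbation amplitude

for _ in range(500):
    du = delta * rng.choice([-1.0, 1.0], size=n_controls)   # parallel perturbation
    dJ = received_power(u + du) - received_power(u - du)    # two-sided metric change
    u = u + gain * dJ * du                                   # SPGD update

print(u)                               # each component approaches the optimum (0.2)
```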

  12. Adaptive Space-Time, Processing for High Performance, Robust Military Wireless Communications

    National Research Council Canada - National Science Library

    Haimovich, Alexander

    2000-01-01

    ...: (I) performance of adaptive arrays for wireless communications over fading channels in the presence of cochannel interference particularly the case when the number of interference sources exceeds...

  13. Adaptive Processes in Hearing

    DEFF Research Database (Denmark)

    Santurette, Sébastien; Christensen-Dalsgaard, Jakob; Tranebjærg, Lisbeth

    2018-01-01

    , and is essential to achieve successful speech communication, correct orientation in our full environment, and eventually survival. These adaptive processes may differ in individuals with hearing loss, whose auditory system may cope via ‘‘readapting’’ itself over a longer time scale to the changes in sensory input...... induced by hearing impairment and the compensation provided by hearing devices. These devices themselves are now able to adapt to the listener’s individual environment, attentional state, and behavior. These topics related to auditory adaptation, in the broad sense of the term, were central to the 6th...... International Symposium on Auditory and Audiological Research held in Nyborg, Denmark, in August 2017. The symposium addressed adaptive processes in hearing from different angles, together with a wide variety of other auditory and audiological topics. The papers in this special issue result from some...

  14. Improvement of resolution in full-view linear-array photoacoustic computed tomography using a novel adaptive weighting method

    Science.gov (United States)

    Omidi, Parsa; Diop, Mamadou; Carson, Jeffrey; Nasiriavanaki, Mohammadreza

    2017-03-01

    Linear-array-based photoacoustic computed tomography is a popular methodology for deep, high-resolution imaging. However, issues such as phase aberration, side-lobe effects, and propagation limitations deteriorate the resolution. The effect of phase aberration due to acoustic attenuation and the assumption of a constant speed of sound (SoS) can be reduced by applying an adaptive weighting method such as the coherence factor (CF). Utilizing an adaptive beamforming algorithm such as minimum variance (MV) can improve the resolution at the focal point by suppressing the side-lobes. Moreover, the invisibility of directional objects emitting parallel to the detection plane, such as vessels and other absorbing structures stretched in the direction perpendicular to the detection plane, can degrade resolution. In this study, we propose a full-view array-level weighting algorithm in which different weights are assigned to different positions of the linear array based on an orientation algorithm that uses the histogram of oriented gradients (HOG). Simulation results obtained from a synthetic phantom show the superior performance of the proposed method over existing reconstruction methods.
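
    The coherence factor weighting mentioned above can be written compactly for a single reconstruction pixel: the delayed channel samples are summed, and the sum is scaled by the ratio of coherent to incoherent energy. The sketch below is that generic CF-weighted delay-and-sum step; the channel data and integer delays are assumed inputs, and the paper's HOG-based array-level weighting is not reproduced.

```python
# Minimal sketch of coherence factor (CF) weighting applied to delay-and-sum
# (DAS) reconstruction of a single pixel.
import numpy as np

def das_cf_pixel(channel_data, delays_samples):
    """channel_data: (N_elements, N_samples) RF data;
       delays_samples: integer sample delay from the pixel to each element."""
    aligned = np.array([ch[d] for ch, d in zip(channel_data, delays_samples)])
    das = aligned.sum()
    cf = np.abs(das) ** 2 / (len(aligned) * np.sum(np.abs(aligned) ** 2) + 1e-12)
    return cf * das                     # CF-weighted DAS value for this pixel

# CF approaches 1 when the delayed channel signals add coherently (a true
# focus) and approaches 0 for incoherent side-lobe or aberration energy.
```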

  15. The process of organisational adaptation through innovations, and organisational adaptability

    OpenAIRE

    Tikka, Tommi

    2010-01-01

    This study is about the process of organisational adaptation and organisational adaptability. The study generates a theoretical framework about organisational adaptation behaviour and conditions that have influence on success of organisational adaptation. The research questions of the study are: How does an organisation adapt through innovations, and which conditions enhance or impede organisational adaptation through innovations? The data were gathered from five case organisations withi...

  16. A Versatile Multichannel Digital Signal Processing Module for Microcalorimeter Arrays

    Science.gov (United States)

    Tan, H.; Collins, J. W.; Walby, M.; Hennig, W.; Warburton, W. K.; Grudberg, P.

    2012-06-01

    Different techniques have been developed for reading out microcalorimeter sensor arrays: individual outputs for small arrays, and time-division, frequency-division or code-division multiplexing for large arrays. Typically, raw waveform data are first read out from the arrays using one of these techniques and then stored on computer hard drives for offline optimum filtering, leading not only to requirements for large storage space but also to limitations on the achievable count rate. Thus, a read-out module that is capable of processing microcalorimeter signals in real time is highly desirable. We have developed multichannel digital signal processing electronics that are capable of on-board, real-time processing of microcalorimeter sensor signals from multiplexed or individual pixel arrays. It is a 3U PXI module consisting of a standardized core processor board and a set of daughter boards. Each daughter board is designed to interface a specific type of microcalorimeter array to the core processor. The combination of the standardized core plus this set of easily designed and modified daughter boards results in a versatile data acquisition module that not only can easily expand to future detector systems, but is also low cost. In this paper, we first present the core processor/daughter board architecture, and then report the performance of an 8-channel daughter board, which digitizes individual pixel outputs at 1 MSPS with 16-bit precision. We also introduce a time-division multiplexing type daughter board, which takes in time-division multiplexing signals through fiber-optic cables and then processes the digital signals to generate energy spectra in real time.

  17. Generic nano-imprint process for fabrication of nanowire arrays

    Energy Technology Data Exchange (ETDEWEB)

    Pierret, Aurelie; Hocevar, Moira; Algra, Rienk E; Timmering, Eugene C; Verschuuren, Marc A; Immink, George W G; Verheijen, Marcel A; Bakkers, Erik P A M [Philips Research Laboratories Eindhoven, High Tech Campus 11, 5656 AE Eindhoven (Netherlands); Diedenhofen, Silke L [FOM Institute for Atomic and Molecular Physics c/o Philips Research Laboratories, High Tech Campus 4, 5656 AE Eindhoven (Netherlands); Vlieg, E, E-mail: e.p.a.m.bakkers@tue.nl [IMM, Solid State Chemistry, Radboud University Nijmegen, Heyendaalseweg 135, 6525 AJ Nijmegen (Netherlands)

    2010-02-10

    A generic process has been developed to grow nearly defect-free arrays of (heterostructured) InP and GaP nanowires. Soft nano-imprint lithography has been used to pattern gold particle arrays on full 2 inch substrates. After lift-off, organic residues remain on the surface, which induce the growth of additional undesired nanowires. We show that cleaning of the samples before growth with piranha solution, in combination with a thermal anneal at 550 °C for InP and 700 °C for GaP, results in uniform nanowire arrays with 1% variation in nanowire length and without undesired extra nanowires. Our chemical cleaning procedure is applicable to other lithographic techniques such as e-beam lithography, and therefore represents a generic process.

  18. Generic nano-imprint process for fabrication of nanowire arrays

    NARCIS (Netherlands)

    Pierret, A.; Hocevar, M.; Diedenhofen, S.L.; Algra, R.E.; Vlieg, E.; Timmering, E.C.; Verschuuren, M.A.; Immink, W.G.G.; Verheijen, M.A.; Bakkers, E.P.A.M.

    2010-01-01

    A generic process has been developed to grow nearly defect-free arrays of (heterostructured) InP and GaP nanowires. Soft nano-imprint lithography has been used to pattern gold particle arrays on full 2 inch substrates. After lift-off, organic residues remain on the surface, which induce the growth of

  19. Adapting Controlled-source Coherence Analysis to Dense Array Data in Earthquake Seismology

    Science.gov (United States)

    Schwarz, B.; Sigloch, K.; Nissen-Meyer, T.

    2017-12-01

    Exploration seismology deals with highly coherent wave fields generated by repeatable controlled sources and recorded by dense receiver arrays, whose geometry is tailored to back-scattered energy normally neglected in earthquake seismology. Owing to these favorable conditions, stacking and coherence analysis are routinely employed to suppress incoherent noise and regularize the data, thereby strongly contributing to the success of subsequent processing steps, including migration for the imaging of back-scattering interfaces or waveform tomography for the inversion of velocity structure. Attempts have been made to utilize wave field coherence on the length scales of passive-source seismology, e.g. for the imaging of transition-zone discontinuities or the core-mantle boundary using reflected precursors. Results are however often deteriorated by the sparse station coverage and the interference of faint back-scattered with transmitted phases. USArray sampled wave fields generated by earthquake sources at an unprecedented density, and similar array deployments are ongoing or planned in Alaska, the Alps and Canada. This makes the local coherence of earthquake data an increasingly valuable resource to exploit. Building on the experience in controlled-source surveys, we aim to extend the well-established concept of beam-forming to the richer toolbox that is nowadays used in seismic exploration. We suggest adapted strategies for local data coherence analysis, where summation is performed with operators that extract the local slope and curvature of wave fronts emerging at the receiver array. Besides estimating wave front properties, we demonstrate that the inherent data summation can also be used to generate virtual station responses at intermediate locations where no actual deployment was performed. Owing to the fact that stacking acts as a directional filter, interfering coherent wave fields can be efficiently separated from each other by means of coherent subtraction. We
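
    A minimal illustration of the kind of local coherence analysis described above is a slant-stack over a small sub-array: trial slopes are scanned, semblance measures the coherence of each stack, and the best stack can serve as a virtual trace. This is a simplified stand-in for the slope-and-curvature operators the authors describe; geometry and parameter names are assumptions.

        import numpy as np

        def local_slope_stack(traces, offsets, dt, slopes):
            # traces: (n_rec, n_samp) records from a small sub-array
            # offsets: (n_rec,) receiver offsets from the sub-array centre [km]
            # dt: sample interval [s]; slopes: trial local slopes (slowness, s/km)
            n_rec, n_samp = traces.shape
            t = np.arange(n_samp) * dt
            best = (-np.inf, None, None)
            for p in slopes:
                # apply a linear moveout p * offset to each trace, then stack
                shifted = np.array([np.interp(t, t - p * x, tr, left=0.0, right=0.0)
                                    for x, tr in zip(offsets, traces)])
                stack = shifted.sum(axis=0)
                # semblance: energy of the stack relative to the summed trace energies
                sem = (stack ** 2).sum() / (n_rec * (shifted ** 2).sum() + 1e-12)
                if sem > best[0]:
                    best = (sem, p, stack / n_rec)
            return best   # (semblance, best local slope, stacked "virtual" trace)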

  20. Cas4-Dependent Prespacer Processing Ensures High-Fidelity Programming of CRISPR Arrays.

    Science.gov (United States)

    Lee, Hayun; Zhou, Yi; Taylor, David W; Sashital, Dipali G

    2018-04-05

    CRISPR-Cas immune systems integrate short segments of foreign DNA as spacers into the host CRISPR locus to provide molecular memory of infection. Cas4 proteins are widespread in CRISPR-Cas systems and are thought to participate in spacer acquisition, although their exact function remains unknown. Here we show that Bacillus halodurans type I-C Cas4 is required for efficient prespacer processing prior to Cas1-Cas2-mediated integration. Cas4 interacts tightly with the Cas1 integrase, forming a heterohexameric complex containing two Cas1 dimers and two Cas4 subunits. In the presence of Cas1 and Cas2, Cas4 processes double-stranded substrates with long 3' overhangs through site-specific endonucleolytic cleavage. Cas4 recognizes PAM sequences within the prespacer and prevents integration of unprocessed prespacers, ensuring that only functional spacers will be integrated into the CRISPR array. Our results reveal the critical role of Cas4 in maintaining fidelity during CRISPR adaptation, providing a structural and mechanistic model for prespacer processing and integration. Copyright © 2018 Elsevier Inc. All rights reserved.

  1. Adaptation in CRISPR-Cas Systems.

    Science.gov (United States)

    Sternberg, Samuel H; Richter, Hagen; Charpentier, Emmanuelle; Qimron, Udi

    2016-03-17

    Clustered regularly interspaced short palindromic repeats (CRISPR) and CRISPR-associated (Cas) proteins constitute an adaptive immune system in prokaryotes. The system preserves memories of prior infections by integrating short segments of foreign DNA, termed spacers, into the CRISPR array in a process termed adaptation. During the past 3 years, significant progress has been made on the genetic requirements and molecular mechanisms of adaptation. Here we review these recent advances, with a focus on the experimental approaches that have been developed, the insights they generated, and a proposed mechanism for self- versus non-self-discrimination during the process of spacer selection. We further describe the regulation of adaptation and the protein players involved in this fascinating process that allows bacteria and archaea to harbor adaptive immunity. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Sampling phased array - a new technique for signal processing and ultrasonic imaging

    OpenAIRE

    Bulavinov, A.; Joneit, D.; Kröning, M.; Bernus, L.; Dalichow, M.H.; Reddy, K.M.

    2006-01-01

    Different signal processing and image reconstruction techniques are applied in ultrasonic non-destructive material evaluation. In recent years, rapid development in the fields of microelectronics and computer engineering has led to wide application of phased array systems. A new phased array technique, called "Sampling Phased Array", has been developed at the Fraunhofer Institute for Non-Destructive Testing. It realizes a unique approach to the measurement and processing of ultrasonic signals. The sampling...

  3. A Simple Approach in Estimating the Effectiveness of Adapting Mirror Concentrator and Tracking Mechanism for PV Arrays in the Tropics

    Directory of Open Access Journals (Sweden)

    M. E. Ya’acob

    2014-01-01

    Full Text Available Mirror concentrating elements and tracking mechanisms have been extensively investigated and widely adopted in solar PV technology. In this study, a practical in-field method is conducted in Serdang, Selangor, Malaysia, for the two technologies in comparison to the common fixed flat PV arrays. The data sampling process is measured under stochastic weather characteristics with the main target of calculating the effectiveness of PV power output. The data are monitored, recorded, and analysed in real time via a GPRS online monitoring system for 10 consecutive months. The analysis is based on a simple comparison of the actual daily power generation from each PV generator with statistical analysis using multiple linear regression (MLR) and an analysis of variance test (ANOVA). From the analysis, it is shown that the tracking mechanism generates approximately 88 Watts (9.4%) compared to the mirror concentrator, which generates 144 Watts (23.4%) of the cumulative dc power for different array configurations at standard testing condition (STC) references. The significant increase in power generation shows the feasibility of employing both mechanisms for PV generators and thus contributes an additional reference for PV array design.

  4. Array processing for seismic surface waves

    Energy Technology Data Exchange (ETDEWEB)

    Marano, S.

    2013-07-01

    This dissertation submitted to the Swiss Federal Institute of Technology ETH in Zurich takes a look at the analysis of surface wave properties which allows geophysicists to gain insight into the structure of the subsoil, thus avoiding more expensive invasive techniques such as borehole drilling. This thesis aims at improving signal processing techniques for the analysis of surface waves in various directions. One main contribution of this work is the development of a method for the analysis of seismic surface waves. The method also deals with the simultaneous presence of multiple waves. Several computational approaches to minimize costs are presented and compared. Finally, numerical experiments that verify the effectiveness of the proposed cost function and resulting array geometry designs are presented. These lead to greatly improved estimation performance in comparison to arbitrary array geometries.

  5. Array processing for seismic surface waves

    International Nuclear Information System (INIS)

    Marano, S.

    2013-01-01

    This dissertation submitted to the Swiss Federal Institute of Technology ETH in Zurich takes a look at the analysis of surface wave properties which allows geophysicists to gain insight into the structure of the subsoil, thus avoiding more expensive invasive techniques such as borehole drilling. This thesis aims at improving signal processing techniques for the analysis of surface waves in various directions. One main contribution of this work is the development of a method for the analysis of seismic surface waves. The method also deals with the simultaneous presence of multiple waves. Several computational approaches to minimize costs are presented and compared. Finally, numerical experiments that verify the effectiveness of the proposed cost function and resulting array geometry designs are presented. These lead to greatly improved estimation performance in comparison to arbitrary array geometries.

  6. A FPGA-based signal processing unit for a GEM array detector

    International Nuclear Information System (INIS)

    Yen, W.W.; Chou, H.P.

    2013-06-01

    In the present study, a signal processing unit for a GEM one-dimensional array detector is presented to measure the trajectory of photoelectrons produced by cosmic X-rays. The present GEM array detector system has 16 signal channels. The front-end unit provides timing signals from trigger units and energy signals from charge-sensitive amplifiers. The prototype of the processing unit is implemented using commercial field programmable gate array circuit boards. The FPGA-based system is linked to a personal computer for testing and data analysis. Tests using simulated signals indicated that the FPGA-based signal processing unit has good linearity and is flexible for parameter adjustment under various experimental conditions (authors)

  7. NeuroSeek dual-color image processing infrared focal plane array

    Science.gov (United States)

    McCarley, Paul L.; Massie, Mark A.; Baxter, Christopher R.; Huynh, Buu L.

    1998-09-01

    Several technologies have been developed in recent years to advance the state of the art of IR sensor systems, including dual color affordable focal planes, on-focal-plane-array biologically inspired image and signal processing techniques, and spectral sensing techniques. Pacific Advanced Technology (PAT) and the Air Force Research Lab Munitions Directorate have developed a system which incorporates the best of these capabilities into a single device. The 'NeuroSeek' device integrates these technologies into an IR focal plane array (FPA) which combines multicolor Midwave IR/Longwave IR radiometric response with on-focal-plane 'smart' neuromorphic analog image processing. The readout and processing very large scale integration (VLSI) chip which was developed under this effort will be hybridized to a dual color detector array to produce the NeuroSeek FPA, which will have the capability to fuse multiple pixel-based sensor inputs directly on the focal plane. Great advantages are afforded by the application of massively parallel processing algorithms to image data in the analog domain; the high speed and low power consumption of this device mimic operations performed in the human retina.

  8. Highly scalable parallel processing of extracellular recordings of Multielectrode Arrays.

    Science.gov (United States)

    Gehring, Tiago V; Vasilaki, Eleni; Giugliano, Michele

    2015-01-01

    Technological advances of Multielectrode Arrays (MEAs) used for multisite, parallel electrophysiological recordings lead to an ever-increasing amount of raw data being generated. Arrays with hundreds up to a few thousand electrodes are slowly seeing widespread use, and the expectation is that more sophisticated arrays will become available in the near future. In order to process the large data volumes resulting from MEA recordings, there is a pressing need for new software tools able to process many data channels in parallel. Here we present a new tool for processing MEA data recordings that makes use of new programming paradigms and recent technology developments to unleash the power of modern highly parallel hardware, such as multi-core CPUs with vector instruction sets or GPGPUs. Our tool builds on and complements existing MEA data analysis packages. It shows high scalability and can be used to speed up some performance-critical pre-processing steps such as data filtering and spike detection, helping to make the analysis of larger data sets tractable.
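
    The performance-critical steps mentioned above (band-pass filtering and threshold spike detection) are embarrassingly parallel across channels. The sketch below is a plain channel-parallel version using SciPy and Python's multiprocessing, not the authors' tool; the sampling rate, band edges, and threshold rule are common conventions assumed here.

        import numpy as np
        from multiprocessing import Pool
        from scipy.signal import butter, sosfiltfilt

        def detect_spikes(channel, fs=20000.0, band=(300.0, 3000.0), k=5.0):
            # Band-pass filter one channel and return negative-threshold crossing indices.
            sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
            x = sosfiltfilt(sos, channel)
            noise = np.median(np.abs(x)) / 0.6745          # robust noise estimate
            thr = -k * noise
            return np.flatnonzero((x[1:] < thr) & (x[:-1] >= thr))

        def process_array(recording, n_workers=8):
            # recording: (n_channels, n_samples) MEA data; channels processed in parallel.
            with Pool(n_workers) as pool:
                return pool.map(detect_spikes, list(recording))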

  9. CCD and IR array controllers

    Science.gov (United States)

    Leach, Robert W.; Low, Frank J.

    2000-08-01

    A family of controllers has been developed that is powerful and flexible enough to operate a wide range of CCD and IR focal plane arrays in a variety of ground-based applications. These include fast readout of small CCD and IR arrays for adaptive optics applications, slow readout of large CCD and IR mosaics, and single CCD and IR array operation in low background/low noise regimes as well as high background/high speed regimes. The CCD and IR controllers have a common digital core based on user-programmable digital signal processors that are used to generate the array clocking and signal processing signals customized for each application. A fiber optic link passes image data and commands between the controller and VME or PCI interface boards resident in a host computer. CCD signal processing is done with a dual slope integrator operating at speeds of up to one Megapixel per second per channel. Signal processing of IR arrays is done either with a dual channel video processor or with a four channel video processor that has built-in image memory and a coadder to 32-bit precision for operating high background arrays. Recent developments underway include the implementation of a fast fiber optic data link operating at a speed of 12.5 Megapixels per second for fast image transfer from the controller to the host computer, and supporting image acquisition software and device drivers for the PCI interface board for the Sun Solaris, Linux and Windows 2000 operating systems.

  10. Assessment of Measurement Distortions in GNSS Antenna Array Space-Time Processing

    Directory of Open Access Journals (Sweden)

    Thyagaraja Marathe

    2016-01-01

    Full Text Available Antenna array processing techniques are studied in GNSS as effective tools to mitigate interference in spatial and spatiotemporal domains. However, without specific considerations, the array processing results in biases and distortions in the cross-ambiguity function (CAF of the ranging codes. In space-time processing (STP the CAF misshaping can happen due to the combined effect of space-time processing and the unintentional signal attenuation by filtering. This paper focuses on characterizing these degradations for different controlled signal scenarios and for live data from an antenna array. The antenna array simulation method introduced in this paper enables one to perform accurate analyses in the field of STP. The effects of relative placement of the interference source with respect to the desired signal direction are shown using overall measurement errors and profile of the signal strength. Analyses of contributions from each source of distortion are conducted individually and collectively. Effects of distortions on GNSS pseudorange errors and position errors are compared for blind, semi-distortionless, and distortionless beamforming methods. The results from characterization can be useful for designing low distortion filters that are especially important for high accuracy GNSS applications in challenging environments.

  11. Accelerating adaptive inverse distance weighting interpolation algorithm on a graphics processing unit.

    Science.gov (United States)

    Mei, Gang; Xu, Liangliang; Xu, Nengxiong

    2017-09-01

    This paper focuses on designing and implementing parallel adaptive inverse distance weighting (AIDW) interpolation algorithms by using the graphics processing unit (GPU). The AIDW is an improved version of the standard IDW, which can adaptively determine the power parameter according to the data points' spatial distribution pattern and achieve more accurate predictions than those predicted by IDW. In this paper, we first present two versions of the GPU-accelerated AIDW, i.e. the naive version, which does not exploit shared memory, and the tiled version, which takes advantage of shared memory. We also implement the naive version and the tiled version using two data layouts, structure of arrays and array of aligned structures, in both single and double precision. We then evaluate the performance of parallel AIDW by comparing it with its corresponding serial algorithm on three different machines equipped with the GPUs GT730M, M5000 and K40c. The experimental results indicate that: (i) there is no significant difference in computational efficiency when different data layouts are employed; (ii) the tiled version is always slightly faster than the naive version; and (iii) in single precision the achieved speed-up can be up to 763 (on the GPU M5000), while in double precision the highest speed-up obtained is 197 (on the GPU K40c). To benefit the community, all source code and testing data related to the presented parallel AIDW algorithm are publicly available.
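
    A serial sketch of the adaptive-power idea is shown below: the distance-decay power for each prediction point is chosen from the local sampling density, then an ordinary IDW weighted average is formed. The density-to-power mapping here is an illustrative assumption (the published AIDW uses a fuzzy mapping of a normalized nearest-neighbour statistic), and the GPU versions parallelize the same per-point loop.

        import numpy as np

        def aidw_predict(xy_known, z_known, xy_query, k=12, p_min=1.0, p_max=5.0):
            # xy_known: (n, 2) data point coordinates; z_known: (n,) values
            # xy_query: (m, 2) prediction point coordinates
            preds = np.empty(len(xy_query))
            for i, q in enumerate(xy_query):
                d = np.linalg.norm(xy_known - q, axis=1)
                nearest = np.sort(d)[:k]
                density = 1.0 / (nearest.mean() + 1e-12)    # local sampling density proxy
                alpha = density / (density + 1.0)           # squash to (0, 1)
                power = p_max - alpha * (p_max - p_min)     # denser neighbourhood -> smaller power
                w = 1.0 / np.maximum(d, 1e-12) ** power
                preds[i] = np.dot(w, z_known) / w.sum()
            return preds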

  12. Adaptation of the Biolog Phenotype MicroArray™ Technology to Profile the Obligate Anaerobe Geobacter metallireducens

    Energy Technology Data Exchange (ETDEWEB)

    Joyner, Dominique; Fortney, Julian; Chakraborty, Romy; Hazen, Terry

    2010-05-17

    The Biolog OmniLog® Phenotype MicroArray (PM) plate technology was successfully adapted to generate a select phenotypic profile of the strict anaerobe Geobacter metallireducens (G.m.). The profile generated for G.m. provides insight into the chemical sensitivity of the organism as well as some of its metabolic capabilities when grown with a basal medium containing acetate and Fe(III). The PM technology was developed for aerobic organisms. The reduction of a tetrazolium dye by the test organism represents metabolic activity on the array, which is detected and measured by the OmniLog® system. We have previously adapted the technology for the anaerobic sulfate reducing bacterium Desulfovibrio vulgaris. In this work, we have taken the technology a step further by adapting it for the iron reducing obligate anaerobe Geobacter metallireducens. In an osmotic stress microarray it was determined that the organism has higher sensitivity to the impermeable solutes 3-6% KCl and 2-5% NaNO3, which result in osmotic stress by osmosis to the cell, than to permeable non-ionic solutes represented by 5-20% ethylene glycol and 2-3% urea. The osmotic stress microarray also includes an array of osmoprotectants and precursor molecules that were screened to identify substrates that would provide osmotic protection against NaCl stress. None of the substrates tested conferred resistance to elevated concentrations of salt. Verification studies in which G.m. was grown in defined medium amended with 100 mM NaCl (MIC) and the common osmoprotectants betaine, glycine and proline supported the PM findings. Further verification was done by analysis of transcriptomic profiles of G.m. grown under 100 mM NaCl stress that revealed up-regulation of genes related to degradation rather than accumulation of the above-mentioned osmoprotectants. The phenotypic profile, supported by additional analysis, indicates that the accumulation of these osmoprotectants as a response to salt stress does not

  13. An adaptive signal-processing approach to online adaptive tutoring.

    Science.gov (United States)

    Bergeron, Bryan; Cline, Andrew

    2011-01-01

    Conventional intelligent or adaptive tutoring online systems rely on domain-specific models of learner behavior based on rules, deep domain knowledge, and other resource-intensive methods. We have developed and studied a domain-independent methodology of adaptive tutoring based on domain-independent signal-processing approaches that obviate the need for the construction of explicit expert and student models. A key advantage of our method over conventional approaches is a lower barrier to entry for educators who want to develop adaptive online learning materials.

  14. The Urban Adaptation and Adaptation Process of Urban Migrant Children: A Qualitative Study

    Science.gov (United States)

    Liu, Yang; Fang, Xiaoyi; Cai, Rong; Wu, Yang; Zhang, Yaofang

    2009-01-01

    This article employs qualitative research methods to explore the urban adaptation and adaptation processes of Chinese migrant children. Through twenty-one in-depth interviews with migrant children, the researchers discovered: The participant migrant children showed a fairly high level of adaptation to the city; their process of urban adaptation…

  15. Performance Analysis of Blind Beamforming Algorithms in Adaptive Antenna Array in Rayleigh Fading Channel Model

    International Nuclear Information System (INIS)

    Yasin, M; Akhtar, Pervez; Pathan, Amir Hassan

    2013-01-01

    In this paper, we analyze the performance of adaptive blind algorithms – i.e. the Kaiser Constant Modulus Algorithm (KCMA) and the Hamming CMA (HAMCMA) – together with CMA in a wireless cellular communication system using a digital modulation technique. These blind algorithms are used in the digital signal processor of an adaptive antenna to make it smart and to change the weights of the antenna array system dynamically. The simulation results revealed that KCMA and HAMCMA provide minimum mean square error (MSE) with 1.247 dB and 1.077 dB antenna gain enhancement and a 75% reduction in bit error rate (BER), respectively, over CMA. Therefore, the KCMA and HAMCMA algorithms give a cost-effective solution for a communication system.
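
    The core CMA(2,2) weight update behind these blind beamformers is only a few lines; the sketch below shows the standard stochastic-gradient form on complex array snapshots. The Kaiser- and Hamming-windowed variants studied in the paper modify this baseline and are not reproduced here; the step size and initialization are assumptions.

        import numpy as np

        def cma_beamformer(snapshots, mu=1e-3, r2=1.0):
            # snapshots: (n_snapshots, n_elements) complex baseband array samples
            # mu: step size; r2: target squared modulus of the beamformer output
            n_snap, n_elem = snapshots.shape
            w = np.zeros(n_elem, dtype=complex)
            w[0] = 1.0                                   # start from a single-element beam
            y = np.empty(n_snap, dtype=complex)
            for n, x in enumerate(snapshots):
                y[n] = np.vdot(w, x)                     # y = w^H x
                e = y[n] * (np.abs(y[n]) ** 2 - r2)      # CMA(2,2) error term
                w -= mu * np.conj(e) * x                 # stochastic-gradient update
            return w, y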

  16. Total focusing method with correlation processing of antenna array signals

    Science.gov (United States)

    Kozhemyak, O. A.; Bortalevich, S. I.; Loginov, E. L.; Shinyakov, Y. A.; Sukhorukov, M. P.

    2018-03-01

    The article proposes a method of preliminary correlation processing of the complete set of antenna array signals used in the image reconstruction algorithm. The results of experimental studies of 3D reconstruction of various reflectors, with and without correlation processing, are presented in the article. The software 'IDealSystem3D' by IDeal-Technologies was used for the experiments. Copper wires of different diameters located in a water bath were used as reflectors. The use of correlation processing makes it possible to obtain a more accurate reconstruction of the image of the reflectors and to increase the signal-to-noise ratio. The experimental results were processed using an original program. This program allows varying the parameters of the antenna array and the sampling frequency.
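
    The sketch below shows a basic Total Focusing Method reconstruction from full matrix capture data, with an optional cross-correlation of each A-scan against a reference pulse standing in for the correlation pre-processing step. Geometry, sampling parameters, and the reference pulse are assumptions, and this is not the IDealSystem3D implementation.

        import numpy as np

        def tfm_image(fmc, fs, c, elem_x, grid_x, grid_z, ref_pulse=None):
            # fmc: (n_tx, n_rx, n_samples) A-scans for every transmit/receive pair
            # fs: sampling frequency [Hz]; c: sound velocity [m/s]; elem_x: element positions [m]
            if ref_pulse is not None:
                # simple correlation pre-processing: replace each A-scan by its
                # cross-correlation with the reference pulse
                fmc = np.apply_along_axis(lambda a: np.correlate(a, ref_pulse, mode="same"), -1, fmc)
            n_tx, n_rx, n_samp = fmc.shape
            img = np.zeros((grid_z.size, grid_x.size))
            for iz, z in enumerate(grid_z):
                for ix, x in enumerate(grid_x):
                    tof = np.sqrt((elem_x - x) ** 2 + z ** 2) / c       # per-element travel times
                    for tx in range(n_tx):
                        idx = np.clip(np.round((tof[tx] + tof) * fs).astype(int), 0, n_samp - 1)
                        img[iz, ix] += fmc[tx, np.arange(n_rx), idx].sum()
            return np.abs(img)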

  17. Interactive Teaching of Adaptive Signal Processing

    OpenAIRE

    Stewart, R W; Harteneck, M; Weiss, S

    2000-01-01

    Over the last 30 years adaptive digital signal processing has progressed from being a strictly graduate level advanced class in signal processing theory to a topic that is part of the core curriculum for many undergraduate signal processing classes. The key reason is the continued advance of communications technology, with its need for echo control and equalisation, and the widespread use of adaptive filters in audio, biomedical, and control applications. In this paper we will review the basi...

  18. Process Development And Simulation For Cold Fabrication Of Doubly Curved Metal Plate By Using Line Array Roll Set

    International Nuclear Information System (INIS)

    Shim, D. S.; Jung, C. G.; Seong, D. Y.; Yang, D. Y.; Han, J. M.; Han, M. S.

    2007-01-01

    For effective manufacturing of a doubly curved sheet metal, a novel sheet metal forming process is proposed. The suggested process uses a Line Array Roll Set (LARS) composed of a pair of upper and lower roll assemblies in a symmetric manner. The process offers flexibility as compared with the conventional manufacturing processes, because it does not require any complex-shaped die and loss of material by blank-holding is minimized. LARS allows flexibility of the incremental forming process and adopts the principle of bending deformation, resulting in a slight deformation in thickness. Rolls composed of line array roll sets are divided into a driving roll row and two idle roll rows. The arrayed rolls in the central lines of the upper and lower roll assemblies are motor-driven so that they deform and transfer the sheet metal using friction between the rolls and the sheet metal. The remaining rolls are idle rolls, generating bending deformation with driving rolls. Furthermore, all the rolls are movable in any direction so that they are adaptable to any size or shape of the desired three-dimensional configuration. In the process, the sheet is deformed incrementally as deformation proceeds simultaneously in rolling and transverse directions step by step. Consequently, it can be applied to the fabrication of doubly curved ship hull plates by undergoing several passes. In this work, FEM simulations are carried out for verification of the proposed incremental forming system using the chosen design parameters. Based on the results of the simulation, the relationship between the roll set configuration and the curvature of a sheet metal is determined. The process information such as the forming loads and torques acting on every roll is analyzed as important data for the design and development of the manufacturing system

  19. Biomimetic micromechanical adaptive flow-sensor arrays

    Science.gov (United States)

    Krijnen, Gijs; Floris, Arjan; Dijkstra, Marcel; Lammerink, Theo; Wiegerink, Remco

    2007-05-01

    We report current developments in biomimetic flow-sensors based on the flow-sensitive mechano-sensors of crickets. Crickets have one form of acoustic sensing evolved in the form of mechanoreceptive sensory hairs. These filiform hairs are highly perceptive to low-frequency sound, with energy sensitivities close to the thermal threshold. In this work we describe hair-sensors fabricated by a combination of sacrificial poly-silicon technology, to form silicon-nitride suspended membranes, and SU8 polymer processing for fabrication of hairs with diameters of about 50 μm and up to 1 mm length. The membranes have thin chromium electrodes on top, forming variable capacitors with the substrate that allow for capacitive read-out. Previously these sensors have been shown to exhibit acoustic sensitivity. As in crickets, the MEMS hair-sensors are positioned on elongated structures, resembling the cercus of crickets. In this work we present optical measurements on acoustically and electrostatically excited hair-sensors. We present adaptive control of flow-sensitivity and resonance frequency by electrostatic spring-stiffness softening. Experimental data and simple analytical models derived from transduction theory are shown to exhibit good correspondence, both confirming the theory and the applicability of the presented approach towards adaptation.

  20. Implementation of an Antenna Array Signal Processing Breadboard for the Deep Space Network

    Science.gov (United States)

    Navarro, Robert

    2006-01-01

    The Deep Space Network Large Array will replace/augment 34 and 70 meter antenna assets. The array will mainly be used to support NASA's deep space telemetry, radio science, and navigation requirements. The array project will deploy three complexes, in the western U.S., Australia, and at a European longitude, each with 400 12 m downlink antennas, and a DSN central facility at JPL. This facility will remotely conduct all real-time monitoring and control for the network. Signal processing objectives include: providing a means to evaluate the performance of the Breadboard Array's antenna subsystem; designing and building prototype hardware; demonstrating and evaluating proposed signal processing techniques; and gaining experience with various technologies that may be used in the Large Array. Results are summarized.

  1. Radiation-induced adaptive response in fetal mice: a micro-array study

    International Nuclear Information System (INIS)

    Vares, G.; Bing, Wang; Mitsuru, Nenoi; Tetsuo, Nakajima; Kaoru, Tanaka; Isamu, Hayata

    2006-01-01

    Exposure to sublethal doses of ionizing radiation can induce protective mechanisms against a subsequent higher-dose irradiation. This phenomenon, called radio-adaptation (or adaptive response, AR), has been described in a wide range of biological models. In a series of studies, we demonstrated the existence of a radiation-induced AR in mice during late organogenesis. For a better understanding of the molecular mechanisms underlying AR in our model, we performed a global analysis of transcriptome regulation in cells collected from whole mouse fetuses. Using cDNA micro-arrays, we studied gene expression in these cells after in utero priming exposure to irradiation. Several combinations of radiation dose and dose-rate were applied to induce or not induce an AR in our system. Gene regulation was observed after exposure to priming radiation in each condition. Student's t-test was performed in order to identify genes whose expression modulation was specifically different in AR-inducing and non-AR-inducing conditions. Genes were ranked according to their ability to discriminate AR-specific modulations. Since AR genes were implicated in a variety of functions and cellular processes, we applied a functional classification algorithm, which clustered genes into a limited number of functionally related groups. We established that AR genes are significantly enriched for specific keywords. Our results show a significant modulation of genes implicated in signal transduction pathways. No AR-specific alteration of DNA repair could be observed. Nevertheless, it is likely that modulation of DNA repair activity results, at least partly, from post-transcriptional regulation. One major hypothesis is that de-regulation of signal transduction pathways and apoptosis may be responsible for the AR phenotype. In previous work, we demonstrated that radiation-induced AR in mice during organogenesis is related to Trp53 gene status and to the occurrence of radiation-induced apoptosis. Other work proposed that p53

  2. Millimeter-Wave Microstrip Antenna Array Design and an Adaptive Algorithm for Future 5G Wireless Communication Systems

    Directory of Open Access Journals (Sweden)

    Cheng-Nan Hu

    2016-01-01

    Full Text Available This paper presents a high gain millimeter-wave (mmW) low-temperature cofired ceramic (LTCC) microstrip antenna array with a compact, simple, and low-profile structure. Incorporating minimum mean square error (MMSE) adaptive algorithms with the proposed 64-element microstrip antenna array, the numerical investigation reveals substantial improvements in interference reduction. A prototype is presented with a simple design for mass production. As an experiment, HFSS was used to simulate an antenna with a width of 1 mm and a length of 1.23 mm, resonating at 38 GHz. Two identical mmW LTCC microstrip antenna arrays were built for measurement, and the center element was excited. The results demonstrated a return loss better than 15 dB and a peak gain higher than 6.5 dBi at frequencies of interest, which verified the feasibility of the design concept.
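
    As a baseline for the MMSE processing mentioned above, the sketch below computes sample-matrix-inversion MMSE combining weights from array snapshots and a known training sequence. The diagonal loading level and the linear-combiner convention (output y = X w) are assumptions, and the antenna-specific details of the paper are not modelled.

        import numpy as np

        def mmse_weights(X, d, diag_load=1e-3):
            # X: (n_snapshots, n_elements) complex array snapshots (signal + interference + noise)
            # d: (n_snapshots,) known training/reference symbols of the desired user
            N = X.shape[0]
            R = X.conj().T @ X / N                        # sample covariance matrix
            p = X.conj().T @ d / N                        # cross-correlation with the reference
            R += diag_load * np.trace(R).real / R.shape[0] * np.eye(R.shape[0])
            return np.linalg.solve(R, p)                  # w minimizing ||X w - d||^2

        # combining for new snapshots X_new: y = X_new @ w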

  3. Solution-processed single-wall carbon nanotube transistor arrays for wearable display backplanes

    Directory of Open Access Journals (Sweden)

    Byeong-Cheol Kang

    2018-01-01

    Full Text Available In this paper, we demonstrate solution-processed single-wall carbon nanotube thin-film transistor (SWCNT-TFT) arrays with polymeric gate dielectrics on polymeric substrates for wearable display backplanes, which can be directly attached to the human body. The optimized SWCNT-TFTs without any buffer layer on flexible substrates exhibit a linear field-effect mobility of 1.5 cm2/V-s and a threshold voltage of around 0 V. The statistical plot of the key device metrics extracted from 35 SWCNT-TFTs, which were fabricated in different batches at different times, conclusively supports that we have successfully demonstrated high-performance solution-processed SWCNT-TFT arrays, which demand excellent uniformity in device performance. We also investigate the operational stability of wearable SWCNT-TFT arrays against an applied strain of up to 40%, which is essential for the harsh degrees of strain encountered on the human body. We believe that the demonstration of flexible SWCNT-TFT arrays fabricated by an all-solution process, except for the deposition of the metal electrodes, at process temperatures below 130 °C can open up new routes for wearable display backplanes.

  4. A novel scalable manufacturing process for the production of hydrogel-forming microneedle arrays.

    Science.gov (United States)

    Lutton, Rebecca E M; Larrañeta, Eneko; Kearney, Mary-Carmel; Boyd, Peter; Woolfson, A David; Donnelly, Ryan F

    2015-10-15

    A novel manufacturing process for fabricating microneedle (MN) arrays has been designed and evaluated. The prototype is able to successfully produce 14×14 MN arrays and is easily capable of scale-up, enabling the transition from laboratory to industry and subsequent commercialisation. The method requires the custom design of metal MN master templates to produce silicone MN moulds using an injection moulding process. The MN arrays produced using this novel method were compared with those made by centrifugation, the traditional method of producing aqueous hydrogel-forming MN arrays. The results proved that there was negligible difference between the two methods, with each producing MN arrays of comparable quality. Both types of MN arrays can be successfully inserted in a skin simulant. In both cases the insertion depth was approximately 60% of the needle length, and the height reduction after insertion was approximately 3%. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. ADAPTATION PROCESS TO CLIMATE CHANGE IN AGRICULTURE- AN EMPIRICAL STUDY

    Directory of Open Access Journals (Sweden)

    Ghulam Mustafa

    2017-10-01

    Full Text Available Climatic variations affect agriculture in a process with no known end. Adaptations help to reduce the adverse impacts of climate change. Unfortunately, adaptation has never been considered as a process. The current study empirically identified the adaptation process and its different stages. Moreover, little is known about farm-level adaptation strategies and their determinants. The study in hand found farm-level adaptation strategies and the determinants of these strategies. The study identified three stages of adaptation, i.e. perception, intention and adaptation. It was found that 71.4% of farmers perceived climate change, 58.5% intended to adapt, while 40.2% actually adapted. The study further explored that farmers adapt through changing crop variety (56.3%), changing planting dates (44.6%), tree plantation (37.5%), increasing/conserving irrigation (39.3%) and crop diversification (49.2%). The adaptation strategies used by farmers were autonomous and mostly determined by perception of climate change. It was also noted that the adaptation strategies move in a circular process and, once adopted, they remain in place for a longer period of time. Some constraints slow the adaptation process, so we recommend that farmers be given price incentives to speed up this process.

  6. Narrowband direction of arrival estimation for antenna arrays

    CERN Document Server

    Foutz, Jeffrey

    2008-01-01

    This book provides an introduction to narrowband array signal processing, classical and subspace-based direction of arrival (DOA) estimation with an extensive discussion on adaptive direction of arrival algorithms. The book begins with a presentation of the basic theory, equations, and data models of narrowband arrays. It then discusses basic beamforming methods and describes how they relate to DOA estimation. Several of the most common classical and subspace-based direction of arrival methods are discussed. The book concludes with an introduction to subspace tracking and shows how subspace tr

  7. CONSTRUCTIVE MODEL OF ADAPTATION OF DATA STRUCTURES IN RAM. PART II. CONSTRUCTORS OF SCENARIOS AND ADAPTATION PROCESSES

    Directory of Open Access Journals (Sweden)

    V. I. Shynkarenko

    2016-04-01

    Full Text Available Purpose. The second part of the paper completes the presentation of the constructive-productive structures (CPS) modeling the adaptation of data structures in memory (RAM). The purpose of this second part of the research is to develop a model of the process of adapting data in RAM under different hardware and software environments and data processing scenarios. Methodology. The methodology of mathematical and algorithmic constructionism was applied. In this part of the paper, the constructors of scenarios and of adaptation processes were developed on the basis of a generalized CPS through its transformational conversions. Constructors are interpreted, specialized CPS. The terminal alphabets were identified: for the constructor of scenarios, in the form of data processing algorithms, and for the constructor of adaptation, in the form of algorithmic components of the adaptation process. The methodology involves the development of substitution rules that determine the output process of the relevant structures. Findings. In the second part of the paper, the system is represented by CPS modeling the adaptation of data placement in RAM, namely the constructors of scenarios and of adaptation processes. The result of executing the constructor of scenarios is a set of data processing operations in the form of text in the C# programming language; the result of the constructor of adaptation processes is an adaptation process; and the result of the adaptation process is the adapted binary code for processing the data structures. Originality. For the first time, a constructive model of data processing is proposed: a scenario that takes into account the order and number of accesses to the various elements of data structures, together with the adaptation of data structures to different hardware and software environments. At the same time, the placement of data in RAM and the processing algorithms are adapted. The application of constructionism in modeling makes it possible to link data models and algorithms for

  8. A multi-step electrochemical etching process for a three-dimensional micro probe array

    International Nuclear Information System (INIS)

    Kim, Yoonji; Youn, Sechan; Cho, Young-Ho; Park, HoJoon; Chang, Byeung Gyu; Oh, Yong Soo

    2011-01-01

    We present a simple, fast, and cost-effective process for three-dimensional (3D) micro probe array fabrication using multi-step electrochemical metal foil etching. Compared to the previous electroplating (add-on) process, the present electrochemical (subtractive) process results in well-controlled material properties of the metallic microstructures. In the experimental study, we describe the single-step and multi-step electrochemical aluminum foil etching processes. In the single-step process, the depth etch rate and the bias etch rate of an aluminum foil have been measured as 1.50 ± 0.10 and 0.77 ± 0.03 µm min⁻¹, respectively. On the basis of the single-step process results, we have designed and performed the two-step electrochemical etching process for the 3D micro probe array fabrication. The fabricated 3D micro probe array shows the vertical and lateral fabrication errors of 15.5 ± 5.8% and 3.3 ± 0.9%, respectively, with the surface roughness of 37.4 ± 9.6 nm. The contact force and the contact resistance of the 3D micro probe array have been measured to be 24.30 ± 0.98 mN and 2.27 ± 0.11 Ω, respectively, for an overdrive of 49.12 ± 1.25 µm.

  9. Fundamentals of spherical array processing

    CERN Document Server

    Rafaely, Boaz

    2015-01-01

    This book provides a comprehensive introduction to the theory and practice of spherical microphone arrays. It is written for graduate students, researchers and engineers who work with spherical microphone arrays in a wide range of applications.   The first two chapters provide the reader with the necessary mathematical and physical background, including an introduction to the spherical Fourier transform and the formulation of plane-wave sound fields in the spherical harmonic domain. The third chapter covers the theory of spatial sampling, employed when selecting the positions of microphones to sample sound pressure functions in space. Subsequent chapters present various spherical array configurations, including the popular rigid-sphere-based configuration. Beamforming (spatial filtering) in the spherical harmonics domain, including axis-symmetric beamforming, and the performance measures of directivity index and white noise gain are introduced, and a range of optimal beamformers for spherical arrays, includi...

  10. Signal Processing for a Lunar Array: Minimizing Power Consumption

    Science.gov (United States)

    D'Addario, Larry; Simmons, Samuel

    2011-01-01

    Motivation for the study is: (1) Lunar Radio Array for low frequency, high redshift Dark Ages/Epoch of Reionization observations (z = 6-50, f = 30-200 MHz) (2) High precision cosmological measurements of 21 cm H I line fluctuations (3) Probe universe before first star formation and provide information about the Intergalactic Medium and evolution of large scale structures (5) Does the current cosmological model accurately describe the Universe before reionization? Lunar Radio Array is for (1) Radio interferometer based on the far side of the moon (1a) Necessary for precision measurements, (1b) Shielding from earth-based and solar RFI (1c) No permanent ionosphere, (2) Minimum collecting area of approximately 1 square km and brightness sensitivity 10 mK (3) Several technologies must be developed before deployment. The power needed to process signals from a large array of nonsteerable elements is not prohibitive, even for the Moon, and even in current technology. Two different concepts have been proposed: (1) Dark Ages Radio Interferometer (DALI) (2) Lunar Array for Radio Cosmology (LARC)

  11. Sampling phased array - a new technique for ultrasonic signal processing and imaging

    OpenAIRE

    Verkooijen, J.; Boulavinov, A.

    2008-01-01

    Over the past 10 years, the improvement in the field of microelectronics and computer engineering has led to significant advances in ultrasonic signal processing and image construction techniques that are currently being applied to non-destructive material evaluation. A new phased array technique, called 'Sampling Phased Array', has been developed in the Fraunhofer Institute for Non-Destructive Testing([1]). It realises a unique approach of measurement and processing of ultrasonic signals. Th...

  12. An adaptive orienting theory of error processing.

    Science.gov (United States)

    Wessel, Jan R

    2018-03-01

    The ability to detect and correct action errors is paramount to safe and efficient goal-directed behaviors. Existing work on the neural underpinnings of error processing and post-error behavioral adaptations has led to the development of several mechanistic theories of error processing. These theories can be roughly grouped into adaptive and maladaptive theories. While adaptive theories propose that errors trigger a cascade of processes that will result in improved behavior after error commission, maladaptive theories hold that error commission momentarily impairs behavior. Neither group of theories can account for all available data, as different empirical studies find both impaired and improved post-error behavior. This article attempts a synthesis between the predictions made by prominent adaptive and maladaptive theories. Specifically, it is proposed that errors invoke a nonspecific cascade of processing that will rapidly interrupt and inhibit ongoing behavior and cognition, as well as orient attention toward the source of the error. It is proposed that this cascade follows all unexpected action outcomes, not just errors. In the case of errors, this cascade is followed by error-specific, controlled processing, which is specifically aimed at (re)tuning the existing task set. This theory combines existing predictions from maladaptive orienting and bottleneck theories with specific neural mechanisms from the wider field of cognitive control, including from error-specific theories of adaptive post-error processing. The article aims to describe the proposed framework and its implications for post-error slowing and post-error accuracy, propose mechanistic neural circuitry for post-error processing, and derive specific hypotheses for future empirical investigations. © 2017 Society for Psychophysiological Research.

  13. Copper-encapsulated vertically aligned carbon nanotube arrays.

    Science.gov (United States)

    Stano, Kelly L; Chapla, Rachel; Carroll, Murphy; Nowak, Joshua; McCord, Marian; Bradford, Philip D

    2013-11-13

    A new procedure is described for the fabrication of vertically aligned carbon nanotubes (VACNTs) that are decorated, and even completely encapsulated, by a dense network of copper nanoparticles. The process involves the conformal deposition of pyrolytic carbon (Py-C) to stabilize the aligned carbon-nanotube structure during processing. The stabilized arrays are mildly functionalized using oxygen plasma treatment to improve wettability, and they are then infiltrated with an aqueous, supersaturated Cu salt solution. Once dried, the salt forms a stabilizing crystal network throughout the array. After calcination and H2 reduction, Cu nanoparticles are left decorating the CNT surfaces. Studies were carried out to determine the optimal processing parameters to maximize Cu content in the composite. These included the duration of Py-C deposition and system process pressure as well as the implementation of subsequent and multiple Cu salt solution infiltrations. The optimized procedure yielded a nanoscale hybrid material where the anisotropic alignment from the VACNT array was preserved, and the mass of the stabilized arrays was increased by over 24-fold because of the addition of Cu. The procedure has been adapted for other Cu salts and can also be used for other metal salts altogether, including Ni, Co, Fe, and Ag. The resulting composite is ideally suited for application in thermal management devices because of its low density, mechanical integrity, and potentially high thermal conductivity. Additionally, further processing of the material via pressing and sintering can yield consolidated, dense bulk composites.

  14. Subspace Dimensionality: A Tool for Automated QC in Seismic Array Processing

    Science.gov (United States)

    Rowe, C. A.; Stead, R. J.; Begnaud, M. L.

    2013-12-01

    Because of the great resolving power of seismic arrays, the application of automated processing to array data is critically important in treaty verification work. A significant problem in array analysis is the inclusion of bad sensor channels in the beamforming process. We are testing an approach to automated, on-the-fly quality control (QC) to aid in the identification of poorly performing sensor channels prior to beam-forming in routine event detection or location processing. The idea stems from methods used for large computer servers, where monitoring traffic at enormous numbers of nodes is impractical on a node-by-node basis, so the dimensionality of the node traffic is instead monitored for anomalies that could represent malware, cyber-attacks or other problems. The technique relies upon the use of subspace dimensionality or principal components of the overall system traffic. The subspace technique is not new to seismology, but its most common application has been limited to comparing waveforms to an a priori collection of templates for detecting highly similar events in a swarm or seismic cluster. In the established template application, a detector functions in a manner analogous to waveform cross-correlation, applying a statistical test to assess the similarity of the incoming data stream to known templates for events of interest. In our approach, we seek not to detect matching signals; instead, we examine the signal subspace dimensionality in much the same way that the method addresses node traffic anomalies in large computer systems. Signal anomalies recorded on seismic arrays affect the dimensional structure of the array-wide time series. We have shown previously that this observation is useful in identifying real seismic events, either by looking at the raw signal or derivatives thereof (entropy, kurtosis), but here we explore the effects of malfunctioning channels on the dimension of the data and its derivatives, and how to leverage this effect for
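
    One simple proxy for the subspace-dimensionality idea is to project a window of array data onto its leading principal components and flag channels whose residual energy is anomalously large. The sketch below does exactly that; the number of retained components and the robust z-score threshold are assumptions, not the authors' statistic.

        import numpy as np

        def flag_bad_channels(window, n_keep=3, z_thresh=3.0):
            # window: (n_channels, n_samples) segment of array data
            # returns a boolean mask of channels poorly explained by the coherent subspace
            X = window - window.mean(axis=1, keepdims=True)
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            recon = (U[:, :n_keep] * s[:n_keep]) @ Vt[:n_keep]        # rank-n_keep approximation
            resid = np.linalg.norm(X - recon, axis=1) / (np.linalg.norm(X, axis=1) + 1e-12)
            med = np.median(resid)
            mad = 1.4826 * np.median(np.abs(resid - med)) + 1e-12     # robust spread estimate
            return (resid - med) / mad > z_thresh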

  15. CR-Calculus and adaptive array theory applied to MIMO random vibration control tests

    Science.gov (United States)

    Musella, U.; Manzato, S.; Peeters, B.; Guillaume, P.

    2016-09-01

    Performing Multiple-Input Multiple-Output (MIMO) tests to reproduce the vibration environment at a user-defined number of control points of a unit under test is necessary in applications where a realistic environment replication has to be achieved. MIMO tests require vibration control strategies to calculate the required drive signal vector that gives an acceptable replication of the target. This target is a (complex) vector with magnitude and phase information at the control points for MIMO Sine Control tests, while in MIMO Random Control tests, in the most general case, the target is a complete spectral density matrix. The idea behind this work is to tailor a MIMO random vibration control approach that can be generalized to other MIMO tests, e.g. MIMO Sine and MIMO Time Waveform Replication. In this work the approach is to use gradient-based procedures over the complex space, applying the so-called CR-Calculus and adaptive array theory. With this approach it is possible to better control the process performance, allowing a step-by-step update of the Jacobian matrix. The theoretical basis of the work is followed by an application of the developed method to a two-exciter two-axis system and by performance comparisons with standard methods.
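
    For orientation, the sketch below shows one iteration of a common frequency-domain correction scheme for MIMO random control: the spectral-density-matrix error at the control points is mapped back to the drives through the FRF pseudo-inverse and used to correct the drive SDM. This is generic matrix error feedback, not the CR-calculus/adaptive-array Jacobian update developed in the paper, and the relaxation factor is an assumption.

        import numpy as np

        def update_drive_sdm(H, S_target, S_measured, S_drive, mu=0.5):
            # H: (n_ctrl, n_drive) FRF matrix at one frequency line
            # S_target, S_measured: (n_ctrl, n_ctrl) reference and measured response SDMs
            # S_drive: (n_drive, n_drive) current drive SDM; mu: relaxation factor
            Hp = np.linalg.pinv(H)                           # pseudo-inverse of the FRF
            correction = Hp @ (S_target - S_measured) @ Hp.conj().T
            S_new = S_drive + mu * correction
            S_new = 0.5 * (S_new + S_new.conj().T)           # enforce Hermitian symmetry
            w, V = np.linalg.eigh(S_new)                     # clip to positive semi-definite
            return V @ np.diag(np.clip(w, 0.0, None)) @ V.conj().T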

  16. 3D-SoftChip: A Novel Architecture for Next-Generation Adaptive Computing Systems

    Directory of Open Access Journals (Sweden)

    Lee Mike Myung-Ok

    2006-01-01

    Full Text Available This paper introduces a novel architecture for next-generation adaptive computing systems, which we term 3D-SoftChip. The 3D-SoftChip is a 3-dimensional (3D) vertically integrated adaptive computing system combining state-of-the-art processing and 3D interconnection technology. It comprises the vertical integration of two chips (a configurable array processor and an intelligent configurable switch) through an indium bump interconnection array (IBIA). The configurable array processor (CAP) is an array of heterogeneous processing elements (PEs), while the intelligent configurable switch (ICS) comprises a switch block, 32-bit dedicated RISC processor for control, on-chip program/data memory, data frame buffer, along with a direct memory access (DMA) controller. This paper introduces the novel 3D-SoftChip architecture for real-time communication and multimedia signal processing as a next-generation computing system. The paper further describes the advanced HW/SW codesign and verification methodology, including high-level system modeling of the 3D-SoftChip using SystemC, being used to determine the optimum hardware specification in the early design stage.

  17. A Real-Time Capable Software-Defined Receiver Using GPU for Adaptive Anti-Jam GPS Sensors

    Science.gov (United States)

    Seo, Jiwon; Chen, Yu-Hsuan; De Lorenzo, David S.; Lo, Sherman; Enge, Per; Akos, Dennis; Lee, Jiyun

    2011-01-01

    Due to their weak received signal power, Global Positioning System (GPS) signals are vulnerable to radio frequency interference. Adaptive beam and null steering of the gain pattern of a GPS antenna array can significantly increase the resistance of GPS sensors to signal interference and jamming. Since adaptive array processing requires intensive computational power, beamsteering GPS receivers were usually implemented using hardware such as field-programmable gate arrays (FPGAs). However, a software implementation using general-purpose processors is much more desirable because of its flexibility and cost effectiveness. This paper presents a GPS software-defined radio (SDR) with adaptive beamsteering capability for anti-jam applications. The GPS SDR design is based on an optimized desktop parallel processing architecture using a quad-core Central Processing Unit (CPU) coupled with a new generation Graphics Processing Unit (GPU) having massively parallel processors. This GPS SDR demonstrates sufficient computational capability to support a four-element antenna array and future GPS L5 signal processing in real time. After providing the details of our design and optimization schemes for future GPU-based GPS SDR developments, the jamming resistance of our GPS SDR under synthetic wideband jamming is presented. Since the GPS SDR uses commercial-off-the-shelf hardware and processors, it can be easily adopted in civil GPS applications requiring anti-jam capabilities. PMID:22164116

  18. A Real-Time Capable Software-Defined Receiver Using GPU for Adaptive Anti-Jam GPS Sensors

    Directory of Open Access Journals (Sweden)

    Dennis Akos

    2011-09-01

    Full Text Available Due to their weak received signal power, Global Positioning System (GPS) signals are vulnerable to radio frequency interference. Adaptive beam and null steering of the gain pattern of a GPS antenna array can significantly increase the resistance of GPS sensors to signal interference and jamming. Since adaptive array processing requires intensive computational power, beamsteering GPS receivers were usually implemented using hardware such as field-programmable gate arrays (FPGAs). However, a software implementation using general-purpose processors is much more desirable because of its flexibility and cost effectiveness. This paper presents a GPS software-defined radio (SDR) with adaptive beamsteering capability for anti-jam applications. The GPS SDR design is based on an optimized desktop parallel processing architecture using a quad-core Central Processing Unit (CPU) coupled with a new generation Graphics Processing Unit (GPU) having massively parallel processors. This GPS SDR demonstrates sufficient computational capability to support a four-element antenna array and future GPS L5 signal processing in real time. After providing the details of our design and optimization schemes for future GPU-based GPS SDR developments, the jamming resistance of our GPS SDR under synthetic wideband jamming is presented. Since the GPS SDR uses commercial-off-the-shelf hardware and processors, it can be easily adopted in civil GPS applications requiring anti-jam capabilities.

  19. Robust Nearfield Wideband Beamforming Design Based on Adaptive-Weighted Convex Optimization

    Directory of Open Access Journals (Sweden)

    Guo Ye-Cai

    2017-01-01

    Full Text Available Nearfield wideband beamformers for microphone arrays have wide applications in multichannel speech enhancement. The nearfield wideband beamformer design based on convex optimization is one of the typical representatives of robust approaches. However, in this approach the weighting coefficient of the convex optimization is a constant, which does not use all the freedom provided by the weighting coefficient efficiently. Therefore, it is still necessary to further improve the performance. To solve this problem, we developed a robust nearfield wideband beamformer design approach based on adaptive-weighted convex optimization. The proposed approach defines an adaptive-weighted function by the adaptive array signal processing theory and adjusts its value flexibly, which improves the beamforming performance. During each adaptive update of the weighting function, the convex optimization problem can be formulated as a SOCP (Second-Order Cone Program) problem, which can be solved efficiently using the well-established interior-point methods. This method is suitable for the case where the sound source is in the nearfield range, works well in the presence of microphone mismatches, and is applicable to arbitrary array geometries. Several design examples are presented to verify the effectiveness of the proposed approach and the correctness of the theoretical analysis.
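
    A minimal sketch of one adaptively weighted convex design step is given below, using the cvxpy modelling package; the weighted least-squares objective, the norm bound beta and the function name are illustrative assumptions rather than the exact SOCP formulation of the paper.

```python
import numpy as np
import cvxpy as cp

def design_weights(G, d, q, beta):
    """One adaptive-weighted convex design step (illustrative sketch).

    G    : (M, N) complex matrix; column n is the array response at design point n
    d    : (N,) desired nearfield responses at the design points
    q    : (N,) positive adaptive weights emphasising poorly matched points
    beta : bound on the weight-vector norm (robustness control)
    """
    M = G.shape[0]
    w = cp.Variable(M, complex=True)
    residual = G.conj().T @ w - d                   # response error at each design point
    objective = cp.Minimize(cp.norm(cp.multiply(q, residual), 2))
    constraints = [cp.norm(w, 2) <= beta]           # second-order cone constraint
    cp.Problem(objective, constraints).solve()      # interior-point style solver
    return w.value
```

    In an adaptive-weighted scheme, the weights q would be updated between successive solves to emphasise the design points with the largest remaining error.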

  20. Systolic array processing of the sequential decoding algorithm

    Science.gov (United States)

    Chang, C. Y.; Yao, K.

    1989-01-01

    A systolic array processing technique is applied to implementing the stack algorithm form of the sequential decoding algorithm. It is shown that sorting, a key function in the stack algorithm, can be efficiently realized by a special type of systolic arrays known as systolic priority queues. Compared to the stack-bucket algorithm, this approach is shown to have the advantages that the decoding always moves along the optimal path, that it has a fast and constant decoding speed and that its simple and regular hardware architecture is suitable for VLSI implementation. Three types of systolic priority queues are discussed: random access scheme, shift register scheme and ripple register scheme. The property of the entries stored in the systolic priority queue is also investigated. The results are applicable to many other basic sorting type problems.
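
    The central operation that the systolic priority queues accelerate is "always extend the best path so far." The software sketch below mimics that behaviour with an ordinary binary heap and a hypothetical path-metric callback; it is a conceptual stand-in for the hardware queue, not an implementation of it.

```python
import heapq

def stack_decode(received, metric, branch_bits=(0, 1), max_steps=10000):
    """Stack (sequential) decoding sketch using a software priority queue.

    `metric(path, received)` is assumed to return a Fano-style path metric;
    a systolic priority queue would replace the binary heap used here.
    """
    # heapq is a min-heap, so the negated metric is stored to pop the best path first
    heap = [(-0.0, ())]
    for _ in range(max_steps):
        neg_m, path = heapq.heappop(heap)          # always extend the currently best path
        if len(path) == len(received):
            return path                            # reached the end of the code tree
        for bit in branch_bits:                    # extend by every possible branch
            new_path = path + (bit,)
            heapq.heappush(heap, (-metric(new_path, received), new_path))
    return None                                    # search budget exhausted
```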

  1. Adaptive security systems -- Combining expert systems with adaptive technologies

    International Nuclear Information System (INIS)

    Argo, P.; Loveland, R.; Anderson, K.

    1997-01-01

    The Adaptive Multisensor Integrated Security System (AMISS) uses a variety of computational intelligence techniques to reason from raw sensor data through an array of processing layers to arrive at an assessment for alarm/alert conditions based on human behavior within a secure facility. In this paper, the authors give an overview of the system and briefly describe some of the major components of the system. This system is currently under development and testing in a realistic facility setting

  2. The Very Large Array Data Processing Pipeline

    Science.gov (United States)

    Kent, Brian R.; Masters, Joseph S.; Chandler, Claire J.; Davis, Lindsey E.; Kern, Jeffrey S.; Ott, Juergen; Schinzel, Frank K.; Medlin, Drew; Muders, Dirk; Williams, Stewart; Geers, Vincent C.; Momjian, Emmanuel; Butler, Bryan J.; Nakazato, Takeshi; Sugimoto, Kanako

    2018-01-01

    We present the VLA Pipeline, software that is part of the larger pipeline processing framework used for the Karl G. Jansky Very Large Array (VLA) and the Atacama Large Millimeter/sub-millimeter Array (ALMA) for both interferometric and single dish observations. Through a collection of base code jointly used by the VLA and ALMA, the pipeline builds a hierarchy of classes to execute individual atomic pipeline tasks within the Common Astronomy Software Applications (CASA) package. Each pipeline task contains heuristics designed by the team to actively decide the best processing path and execution parameters for calibration and imaging. The pipeline code is developed and written in Python and uses a "context" structure for tracking the heuristic decisions and processing results. The pipeline "weblog" acts as the user interface in verifying the quality assurance of each calibration and imaging stage. The majority of VLA scheduling blocks above 1 GHz are now processed with the standard continuum recipe of the pipeline and offer a calibrated measurement set as a basic data product to observatory users. In addition, the pipeline is used for processing data from the VLA Sky Survey (VLASS), a seven year community-driven endeavor started in September 2017 to survey the entire sky down to a declination of -40 degrees at S-band (2-4 GHz). This 5500 hour next-generation large radio survey will explore the time and spectral domains, relying on pipeline processing to generate calibrated measurement sets, polarimetry, and imaging data products that are available to the astronomical community with no proprietary period. Here we present an overview of the pipeline design philosophy, heuristics, and calibration and imaging results produced by the pipeline. Future development will include the testing of spectral line recipes, low signal-to-noise heuristics, and serving as a testing platform for science ready data products. The pipeline is developed as part of the CASA software package by an

  3. An adaptive management process for forest soil conservation.

    Science.gov (United States)

    Michael P. Curran; Douglas G. Maynard; Ronald L. Heninger; Thomas A. Terry; Steven W. Howes; Douglas M. Stone; Thomas Niemann; Richard E. Miller; Robert F. Powers

    2005-01-01

    Soil disturbance guidelines should be based on comparable disturbance categories adapted to specific local soil conditions, validated by monitoring and research. Guidelines, standards, and practices should be continually improved based on an adaptive management process, which is presented in this paper. Core components of this process include: reliable monitoring...

  4. Adaptive Algorithms for Automated Processing of Document Images

    Science.gov (United States)

    2011-01-01

    Title of dissertation: Adaptive Algorithms for Automated Processing of Document Images, by Mudit Agrawal, Doctor of Philosophy, 2011. Dissertation submitted to the Faculty of the Graduate School of the University

  5. High speed vision processor with reconfigurable processing element array based on full-custom distributed memory

    Science.gov (United States)

    Chen, Zhe; Yang, Jie; Shi, Cong; Qin, Qi; Liu, Liyuan; Wu, Nanjian

    2016-04-01

    In this paper, a hybrid vision processor based on a compact full-custom distributed memory for near-sensor high-speed image processing is proposed. The proposed processor consists of a reconfigurable processing element (PE) array, a row processor (RP) array, and a dual-core microprocessor. The PE array includes two-dimensional processing elements with a compact full-custom distributed memory. It supports real-time reconfiguration between the PE array and the self-organized map (SOM) neural network. The vision processor is fabricated using a 0.18 µm CMOS technology. The circuit area of the distributed memory is reduced markedly, to 1/3 of that of a conventional memory, so that the circuit area of the vision processor is reduced by 44.2%. Experimental results demonstrate that the proposed design achieves correct functions.

  6. Adaptive algorithm based on antenna arrays for radio communication systems

    Directory of Open Access Journals (Sweden)

    Fedosov Valentin

    2017-01-01

    Full Text Available Trends in the modern world increasingly lead to the growing popularity of wireless technologies. This is possible due to the rapid development of mobile communications, the high popularity of the Internet, and the use of wireless networks in enterprises, offices, buildings, etc. It requires advanced network technologies with high throughput capacity to meet the needs of users. To date, a popular direction is the development of spatial signal processing techniques that allow increasing the spatial bandwidth of communication channels. The most popular method is MIMO spatial coding, which increases the data transmission speed by means of several spatial streams emitted by several antennas. Another advantage of this technology is that the bandwidth increase is achieved without expanding the specified frequency range. Spatial coding methods are even more attractive because of the limited frequency resource. Currently, there is an increasing use of wireless communications (for example, WiFi and WiMAX) in information transmission networks. One of the main problems of evolving wireless systems is the need to increase bandwidth and improve the quality of service (reducing the error probability). Bandwidth can be increased by expanding the frequency band or increasing the radiated power. Nevertheless, the application of these methods has some drawbacks: due to the requirements of biological protection and electromagnetic compatibility, the increase of power and the expansion of the frequency band are limited. This problem is especially relevant in mobile (cellular) communication systems and wireless networks operating in difficult signal propagation conditions. One of the most effective ways to solve this problem is to use adaptive antenna arrays with weakly correlated antenna elements. Communication systems using such antennas are called MIMO (Multiple Input Multiple Output) systems. At the moment, existing MIMO-idea implementations do not

  7. A Readout Integrated Circuit (ROIC) employing self-adaptive background current compensation technique for Infrared Focal Plane Array (IRFPA)

    Science.gov (United States)

    Zhou, Tong; Zhao, Jian; He, Yong; Jiang, Bo; Su, Yan

    2018-05-01

    A novel self-adaptive background current compensation circuit applied to infrared focal plane arrays is proposed in this paper, which can compensate the background current generated under different conditions. A double-threshold detection strategy is designed to estimate and eliminate the background currents, which significantly reduces the hardware overhead and improves the uniformity among different pixels. In addition, the circuit is compatible with various categories of infrared thermo-sensitive materials. The testing results of a 4 × 4 experimental chip show that the proposed circuit achieves high precision, wide applicability and a high degree of intelligence. Tape-out of the 320 × 240 readout circuit, as well as the bonding, encapsulation and imaging verification of the uncooled infrared focal plane array, have also been completed.

  8. Algorithm-structured computer arrays and networks architectures and processes for images, percepts, models, information

    CERN Document Server

    Uhr, Leonard

    1984-01-01

    Computer Science and Applied Mathematics: Algorithm-Structured Computer Arrays and Networks: Architectures and Processes for Images, Percepts, Models, Information examines the parallel-array, pipeline, and other network multi-computers. This book describes and explores arrays and networks, those built, being designed, or proposed. The problems of developing higher-level languages for systems and designing algorithm, program, data flow, and computer structure are also discussed. This text likewise describes several sequences of successively more general attempts to combine the power of arrays wi

  9. Blind Time-Frequency Analysis for Source Discrimination in Multisensor Array Processing

    National Research Council Canada - National Science Library

    Amin, Moeness

    1999-01-01

    .... We have clearly demonstrated, through analysis and simulations, the offerings of time-frequency distributions in solving key problems in sensor array processing, including direction finding, source...

  10. Monitoring and Evaluation of Alcoholic Fermentation Processes Using a Chemocapacitor Sensor Array

    Science.gov (United States)

    Oikonomou, Petros; Raptis, Ioannis; Sanopoulou, Merope

    2014-01-01

    The alcoholic fermentation of the Savatiano must variety was initiated under laboratory conditions and monitored daily with a gas sensor array without any pre-treatment steps. The sensor array consisted of eight interdigitated chemocapacitors (IDCs) coated with specific polymers. Two batches of fermented must were tested and also subjected daily to standard chemical analysis. The chemical composition of the two fermenting musts differed from day one of laboratory monitoring (due to different storage conditions of the musts) and due to a deliberate increase of the acetic acid content of one of the musts during the course of the process, in an effort to spoil the fermenting medium. Sensor array responses to the headspace of the fermenting medium were compared with those obtained either for pure samples or for samples contaminated with controlled concentrations of standard ethanol solutions of impurities. Results of data processing with Principal Component Analysis (PCA) demonstrate that this sensing system can discriminate between a normal and a potentially spoiled grape must fermentation process, so this gas sensing system could be potentially applied during wine production as an auxiliary qualitative control instrument. PMID:25184490
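
    A minimal sketch of the kind of PCA score computation used for such discrimination is shown below; the random placeholder matrix and the autoscaling step are assumptions standing in for the real daily chemocapacitor responses.

```python
import numpy as np
from sklearn.decomposition import PCA

# X: daily sensor-array responses, one row per measurement and one column per
# chemocapacitor channel (8 channels here); random placeholder data only.
X = np.random.rand(20, 8)
X = (X - X.mean(axis=0)) / X.std(axis=0)     # autoscale each channel

pca = PCA(n_components=2)
scores = pca.fit_transform(X)                # project onto the first two components
print(pca.explained_variance_ratio_)
# Normal and spoiled fermentations would be expected to separate in the score plot.
```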

  11. Adaptive enhancement of learning protocol in hippocampal cultured networks grown on multielectrode arrays

    Science.gov (United States)

    Pimashkin, Alexey; Gladkov, Arseniy; Mukhina, Irina; Kazantsev, Victor

    2013-01-01

    Learning in neuronal networks can be investigated using dissociated cultures on multielectrode arrays supplied with appropriate closed-loop stimulation. It was shown in previous studies that weakly respondent neurons on the electrodes can be trained to increase their evoked spiking rate within a predefined time window after the stimulus. Such neurons can be associated with weak synaptic connections in the nearby culture network. The stimulation leads to an increase in the connectivity and in the response. However, it was not possible to perform the learning protocol for neurons on electrodes with relatively strong synaptic inputs that respond at higher rates. We proposed an adaptive closed-loop stimulation protocol capable of achieving learning even for highly respondent electrodes. This means that the culture network can appropriately reorganize its synaptic connectivity to generate a desired response. We introduced an adaptive reinforcement condition accounting for the response variability in control stimulation. It significantly extended the learning protocol to a large number of responding electrodes independently of their base response level. We also found that the learning effect was preserved 4–6 h after training. PMID:23745105

  12. Neural Adaptation Effects in Conceptual Processing

    Directory of Open Access Journals (Sweden)

    Barbara F. M. Marino

    2015-07-01

    Full Text Available We investigated the conceptual processing of nouns referring to objects characterized by a highly typical color and orientation. We used a go/no-go task in which we asked participants to categorize each noun as referring or not to natural entities (e.g., animals) after a selective adaptation of color-edge neurons in the posterior LV4 region of the visual cortex was induced by means of a McCollough effect procedure. This manipulation affected categorization: the green-vertical adaptation led to slower responses than the green-horizontal adaptation, regardless of the specific color and orientation of the to-be-categorized noun. This result suggests that the conceptual processing of natural entities may entail the activation of modality-specific neural channels with weights proportional to the reliability of the signals produced by these channels during actual perception. This finding is discussed with reference to the debate about the grounded cognition view.

  13. Adaptive constructive processes and the future of memory

    OpenAIRE

    Schacter, Daniel L.

    2012-01-01

    Memory serves critical functions in everyday life, but is also prone to error. This article examines adaptive constructive processes, which play a functional role in memory and cognition but can also produce distortions, errors, or illusions. The article describes several types of memory errors that are produced by adaptive constructive processes, and focuses in particular on the process of imagining or simulating events that might occur in one’s personal future. Simulating future events reli...

  14. Active cancellation of probing in linear dipole phased array

    CERN Document Server

    Singh, Hema; Jha, Rakesh Mohan

    2015-01-01

    In this book, a modified improved LMS algorithm is employed for weight adaptation of a dipole array for the generation of beam patterns in multiple signal environments. In phased arrays, the generation of an adapted pattern according to the signal scenario requires an efficient adaptive algorithm. The antenna array is expected to maintain sufficient gain towards each desired source while at the same time suppressing the probing sources. This cancels the signal transmission towards each of the hostile probing sources, leading to active cancellation. In the book, the performance of the dipole phased array is demonstrated in terms of fast convergence, output noise power and output signal-to-interference-and-noise ratio. The mutual coupling effect and the role of edge elements are taken into account. It is established that a dipole array along with an efficient algorithm is able to maintain multilobe beamforming with accurate and deep nulls towards each probing source. This work has application to the active radar cross secti...
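
    For orientation, the sketch below shows a textbook complex LMS weight update for an N-element array; it is the standard algorithm, not the modified improved LMS variant developed in the book, and the reference-signal formulation is an assumption made here for illustration.

```python
import numpy as np

def lms_beamformer(X, d, mu=1e-3):
    """Complex LMS weight adaptation sketch for an N-element array.

    X : (N, K) snapshots (desired signal + probing interferers + noise)
    d : (K,) reference / desired signal samples
    mu: step size (assumed small enough for convergence)
    """
    N, K = X.shape
    w = np.zeros(N, dtype=complex)
    for k in range(K):
        x = X[:, k]
        e = d[k] - np.vdot(w, x)        # output error, with y = w^H x
        w = w + mu * np.conj(e) * x     # complex LMS update
    return w
```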

  15. Adaptive frequency-difference matched field processing for high frequency source localization in a noisy shallow ocean.

    Science.gov (United States)

    Worthmann, Brian M; Song, H C; Dowling, David R

    2017-01-01

    Remote source localization in the shallow ocean at frequencies significantly above 1 kHz is virtually impossible for conventional array signal processing techniques due to environmental mismatch. A recently proposed technique called frequency-difference matched field processing (Δf-MFP) [Worthmann, Song, and Dowling (2015). J. Acoust. Soc. Am. 138(6), 3549-3562] overcomes imperfect environmental knowledge by shifting the signal processing to frequencies below the signal's band through the use of a quadratic product of frequency-domain signal amplitudes called the autoproduct. This paper extends these prior Δf-MFP results to various adaptive MFP processors found in the literature, with particular emphasis on minimum variance distortionless response, multiple constraint method, multiple signal classification, and matched mode processing at signal-to-noise ratios (SNRs) from -20 to +20 dB. Using measurements from the 2011 Kauai Acoustic Communications Multiple University Research Initiative experiment, the localization performance of these techniques is analyzed and compared to Bartlett Δf-MFP. The results show that a source broadcasting a frequency sweep from 11.2 to 26.2 kHz through a 106-m-deep sound channel over a distance of 3 km and recorded on a 16-element sparse vertical array can be localized using Δf-MFP techniques within average range and depth errors of 200 and 10 m, respectively, at SNRs down to 0 dB.
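
    The autoproduct at the heart of Δf-MFP can be sketched in a few lines: for each receiver, frequency-domain samples separated by the chosen difference frequency are multiplied conjugately, and the result is then fed to a Bartlett or adaptive processor at that low difference frequency. The function below assumes a uniform frequency grid and is illustrative only.

```python
import numpy as np

def autoproduct(P, freqs, df):
    """Frequency-difference autoproduct sketch for one receiver.

    P     : complex pressure spectrum samples P(f) on the array element
    freqs : corresponding frequencies (Hz), assumed uniformly spaced
    df    : difference frequency (Hz), chosen well below the signal band
    """
    step = int(round(df / (freqs[1] - freqs[0])))   # number of bins spanned by df
    # AP(f) = P(f + df) * conj(P(f)); it behaves like a field at frequency df
    return P[step:] * np.conj(P[:-step])
```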

  16. Probe suppression in conformal phased array

    CERN Document Server

    Singh, Hema; Neethu, P S

    2017-01-01

    This book considers a cylindrical phased array with microstrip patch antenna elements and half-wavelength dipole antenna elements. The effects of the platform and of mutual coupling are included in the analysis. The non-planar geometry is tackled by using Euler's transformation for the calculation of the array manifold. Results are presented for both conducting and dielectric cylinders. The optimal weights obtained are used to generate an adapted pattern according to a given signal scenario. It is shown that the array along with an adaptive algorithm is able to cater to an arbitrary signal environment even when the platform effect and mutual coupling are taken into account. This book provides a step-by-step approach for analyzing probe suppression in non-planar geometry. With its detailed illustrations and analysis, it will be a useful text for graduate and research students, scientists and engineers working in the area of phased arrays, low-observables and stealth technology.

  17. High density processing electronics for superconducting tunnel junction x-ray detector arrays

    Energy Technology Data Exchange (ETDEWEB)

    Warburton, W.K., E-mail: bill@xia.com [XIA LLC, 31057 Genstar Road, Hayward, CA 94544 (United States); Harris, J.T. [XIA LLC, 31057 Genstar Road, Hayward, CA 94544 (United States); Friedrich, S. [Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States)

    2015-06-01

    Superconducting tunnel junctions (STJs) are excellent soft x-ray (100–2000 eV) detectors, particularly for synchrotron applications, because of their ability to obtain energy resolutions below 10 eV at count rates approaching 10 kcps. In order to achieve useful solid detection angles with these very small detectors, they are typically deployed in large arrays – currently with 100+ elements, but with 1000 elements being contemplated. In this paper we review a 5-year effort to develop compact, computer controlled low-noise processing electronics for STJ detector arrays, focusing on the major issues encountered and our solutions to them. Of particular interest are our preamplifier design, which can set the STJ operating points under computer control and achieve 2.7 eV energy resolution; our low noise power supply, which produces only 2 nV/√Hz noise at the preamplifier's critical cascode node; our digital processing card that digitizes and digitally processes 32 channels; and an STJ I–V curve scanning algorithm that computes noise as a function of offset voltage, allowing an optimum operating point to be easily selected. With 32 preamplifiers laid out on a custom 3U EuroCard, and the 32 channel digital card in a 3U PXI card format, electronics for a 128 channel array occupy only two small chassis, each the size of a National Instruments 5-slot PXI crate, and allow full array control with simple extensions of existing beam line data collection packages.

  18. Assessment of low-cost manufacturing process sequences. [photovoltaic solar arrays

    Science.gov (United States)

    Chamberlain, R. G.

    1979-01-01

    An extensive research and development activity to reduce the cost of manufacturing photovoltaic solar arrays by a factor of approximately one hundred is discussed. Proposed and actual manufacturing process descriptions were compared to manufacturing costs. An overview of this methodology is presented.

  19. Multi-mode sensor processing on a dynamically reconfigurable massively parallel processor array

    Science.gov (United States)

    Chen, Paul; Butts, Mike; Budlong, Brad; Wasson, Paul

    2008-04-01

    This paper introduces a novel computing architecture that can be reconfigured in real time to adapt on demand to multi-mode sensor platforms' dynamic computational and functional requirements. This 1 teraOPS reconfigurable Massively Parallel Processor Array (MPPA) has 336 32-bit processors. The programmable 32-bit communication fabric provides streamlined inter-processor connections with deterministically high performance. Software programmability, scalability, ease of use, and fast reconfiguration time (ranging from microseconds to milliseconds) are the most significant advantages over FPGAs and DSPs. This paper introduces the MPPA architecture, its programming model, and methods of reconfigurability. An MPPA platform for reconfigurable computing is based on a structural object programming model. Objects are software programs running concurrently on hundreds of 32-bit RISC processors and memories. They exchange data and control through a network of self-synchronizing channels. A common application design pattern on this platform, called a work farm, is a parallel set of worker objects, with one input and one output stream. Statically configured work farms with homogeneous and heterogeneous sets of workers have been used in video compression and decompression, network processing, and graphics applications.
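
    A software analogue of the work-farm pattern, with a parallel set of identical workers fed from one input stream and producing one output stream, can be sketched with an ordinary process pool; the frame data and the worker body below are placeholders, not MPPA code.

```python
from multiprocessing import Pool

def worker(frame):
    """One worker object: consumes an input item, produces an output item."""
    return sum(frame) % 251          # stand-in for compression / packet processing

if __name__ == "__main__":
    frames = [list(range(i, i + 64)) for i in range(1000)]   # one input stream
    with Pool(processes=8) as farm:                          # parallel set of workers
        results = farm.map(worker, frames)                   # one output stream
    print(len(results))
```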

  20. Adaption of the Magnetometer Towed Array geophysical system to meet Department of Energy needs for hazardous waste site characterization

    International Nuclear Information System (INIS)

    Cochran, J.R.; McDonald, J.R.; Russell, R.J.; Robertson, R.; Hensel, E.

    1995-10-01

    This report documents US Department of Energy (DOE)-funded activities that have adapted the US Navy's Surface Towed Ordnance Locator System (STOLS) to meet DOE needs for a "... better, faster, safer and cheaper ..." system for characterizing inactive hazardous waste sites. These activities were undertaken by Sandia National Laboratories (Sandia), the Naval Research Laboratory, Geo-Centers Inc., New Mexico State University and others under the title of the Magnetometer Towed Array (MTA)

  1. Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking.

    Science.gov (United States)

    Monno, Yusuke; Kiku, Daisuke; Tanaka, Masayuki; Okutomi, Masatoshi

    2017-12-01

    Color image demosaicking for the Bayer color filter array is an essential image processing operation for acquiring high-quality color images. Recently, residual interpolation (RI)-based algorithms have demonstrated superior demosaicking performance over conventional color difference interpolation-based algorithms. In this paper, we propose adaptive residual interpolation (ARI) that improves existing RI-based algorithms by adaptively combining two RI-based algorithms and selecting a suitable iteration number at each pixel. These are performed based on a unified criterion that evaluates the validity of an RI-based algorithm. Experimental comparisons using standard color image datasets demonstrate that ARI can improve existing RI-based algorithms by more than 0.6 dB in the color peak signal-to-noise ratio and can outperform state-of-the-art algorithms based on training images. We further extend ARI for a multispectral filter array, in which more than three spectral bands are arrayed, and demonstrate that ARI can achieve state-of-the-art performance also for the task of multispectral image demosaicking.

  2. The CHARA array adaptive optics I: common-path optical and mechanical design, and preliminary on-sky results

    Science.gov (United States)

    Che, Xiao; Sturmann, Laszlo; Monnier, John D.; ten Brummelaar, Theo A.; Sturmann, Judit; Ridgway, Stephen T.; Ireland, Michael J.; Turner, Nils H.; McAlister, Harold A.

    2014-07-01

    The CHARA array is an optical interferometer with six 1-meter diameter telescopes, providing baselines from 33 to 331 meters. With sub-milliarcsecond angular resolution, its versatile visible and near infrared combiners offer a unique angle of studying nearby stellar systems by spatially resolving their detailed structures. To improve the sensitivity and scientific throughput, the CHARA array was funded by NSF-ATI in 2011 to install adaptive optics (AO) systems on all six telescopes. The initial grant covers Phase I of the AO systems, which includes on-telescope Wavefront Sensors (WFS) and non-common-path (NCP) error correction. Meanwhile we are seeking funding for Phase II which will add large Deformable Mirrors on telescopes to close the full AO loop. The corrections of NCP error and static aberrations in the optical system beyond the WFS are described in the second paper of this series. This paper describes the design of the common-path optical system and the on-telescope WFS, and shows the on-sky commissioning results.

  3. Functional Dual Adaptive Control with Recursive Gaussian Process Model

    International Nuclear Information System (INIS)

    Prüher, Jakub; Král, Ladislav

    2015-01-01

    The paper deals with the dual adaptive control problem, where the functional uncertainties in the system description are modelled by a non-parametric Gaussian process regression model. Current approaches to adaptive control based on Gaussian process models are severely limited in their practical applicability, because the model is re-adjusted using all the currently available data, which keeps growing with every time step. We propose the use of a recursive Gaussian process regression algorithm for a significant reduction in computational requirements, thus bringing the Gaussian process-based adaptive controllers closer to their practical applicability. In this work, we design a bi-criterial dual controller based on a recursive Gaussian process model for discrete-time stochastic dynamic systems given in an affine-in-control form. Using Monte Carlo simulations, we show that the proposed controller achieves comparable performance with the full Gaussian process-based controller in terms of control quality while keeping the computational demands bounded. (paper)

  4. Environmentally adaptive processing for shallow ocean applications: A sequential Bayesian approach.

    Science.gov (United States)

    Candy, J V

    2015-09-01

    The shallow ocean is a changing environment primarily due to temperature variations in its upper layers directly affecting sound propagation throughout. The need to develop processors capable of tracking these changes implies a stochastic as well as an environmentally adaptive design. Bayesian techniques have evolved to enable a class of processors capable of performing in such an uncertain, nonstationary (varying statistics), non-Gaussian, variable shallow ocean environment. A solution to this problem is addressed by developing a sequential Bayesian processor capable of providing a joint solution to the modal function tracking and environmental adaptivity problem. Here, the focus is on the development of both a particle filter and an unscented Kalman filter capable of providing reasonable performance for this problem. These processors are applied to hydrophone measurements obtained from a vertical array. The adaptivity problem is attacked by allowing the modal coefficients and/or wavenumbers to be jointly estimated from the noisy measurement data along with tracking of the modal functions while simultaneously enhancing the noisy pressure-field measurements.
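
    As a generic illustration of the particle-filter half of such a sequential Bayesian processor, the sketch below tracks a scalar state from noisy measurements; the state-transition and measurement callbacks, the noise variances and the scalar formulation are simplifying assumptions, not the joint modal/environmental model of the paper.

```python
import numpy as np

def bootstrap_pf(y, f, h, Q, R, x0_particles):
    """Bootstrap particle filter sketch for tracking a scalar state.

    y    : (T,) measurements (e.g., hydrophone data projected onto one mode)
    f, h : vectorized state-transition and measurement functions
    Q, R : process and measurement noise variances
    """
    particles = np.array(x0_particles, dtype=float)
    N = particles.size
    estimates = []
    for yk in y:
        particles = f(particles) + np.sqrt(Q) * np.random.randn(N)   # predict
        w = np.exp(-0.5 * (yk - h(particles)) ** 2 / R)               # weight by likelihood
        w /= w.sum()
        estimates.append(np.sum(w * particles))                       # MMSE estimate
        idx = np.random.choice(N, size=N, p=w)                        # resample
        particles = particles[idx]
    return np.array(estimates)
```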

  5. Sound Is Sound: Film Sound Techniques and Infrasound Data Array Processing

    Science.gov (United States)

    Perttu, A. B.; Williams, R.; Taisne, B.; Tailpied, D.

    2017-12-01

    A multidisciplinary collaboration between earth scientists and a sound designer/composer was established to explore the possibilities of audification analysis of infrasound array data. Through the process of audification of the infrasound we began to experiment with techniques and processes borrowed from cinema to manipulate the noise content of the signal. The results of this posed the question: "Would the accuracy of infrasound data array processing be enhanced by employing these techniques?" So a new area of research was born from this collaboration, highlighting the value of these interactions and the unintended paths that can arise from them. Using a reference event database, infrasound data were processed using these new techniques and the results were compared with existing techniques to assess whether there was any improvement to the detection capability of the array. With just under one thousand volcanoes, and a high probability of eruption, Southeast Asia offers a unique opportunity to develop and test techniques for regional monitoring of volcanoes with different technologies. While these volcanoes are monitored locally (e.g. seismometer, infrasound, geodetic and geochemistry networks) and remotely (e.g. satellite and infrasound), there are challenges and limitations to the current monitoring capability. Not only is there a high fraction of cloud cover in the region, making plume observation more difficult via satellite, there have been examples of local monitoring networks and telemetry being destroyed early in the eruptive sequence. The success of local infrasound studies in identifying explosions at volcanoes, and calculating plume heights from these signals, has led to an interest in retrieving source parameters for the purpose of ash modeling with a regional network independent of cloud cover.

  6. Impact of Antenna Placement on Frequency Domain Adaptive Antenna Array in Hybrid FRF Cellular System

    Directory of Open Access Journals (Sweden)

    Sri Maldia Hari Asti

    2012-01-01

    Full Text Available Frequency domain adaptive antenna array (FDAAA) processing is an effective method to suppress interference caused by frequency selective fading and multiple-access interference (MAI) in single-carrier (SC) transmission. However, the performance of the FDAAA receiver is affected by antenna placement parameters such as the antenna separation and the spread of the angle of arrival (AOA). On the other hand, hybrid frequency reuse can be adopted in cellular systems to improve the cellular capacity. However, the optimal frequency reuse factor (FRF) depends on the channel propagation and the transceiver scheme as well. In this paper, we analyze the impact of antenna separation and AOA spread on the FDAAA receiver and optimize the cellular capacity by using hybrid FRF.

  7. Adaptive Constructive Processes and the Future of Memory

    Science.gov (United States)

    Schacter, Daniel L.

    2012-01-01

    Memory serves critical functions in everyday life but is also prone to error. This article examines adaptive constructive processes, which play a functional role in memory and cognition but can also produce distortions, errors, and illusions. The article describes several types of memory errors that are produced by adaptive constructive processes…

  8. Estimation, filtering and adaptive control of a wastewater treatment process; Estimation, filtrage et commande adaptive d'un procede de traitement des eaux usees

    Energy Technology Data Exchange (ETDEWEB)

    Ben Youssef, C; Dahhou, B; Roux, G [Centre National de la Recherche Scientifique (CNRS), 31 - Toulouse (France); Rols, J L [Institut National des Sciences Appliquees (INSA), 31 - Toulouse (France)

    1996-12-31

    Controlling the process of a fixed bed bioreactor implies solving filtering and adaptive control problems. Estimation schemes have been developed for unmeasurable parameters. An adaptive nonlinear control has been built, instead of the conventional approach of linearizing the system and applying a linear control scheme. (D.L.) 10 refs.

  9. Adaptive radar resource management

    CERN Document Server

    Moo, Peter

    2015-01-01

    Radar Resource Management (RRM) is vital for optimizing the performance of modern phased array radars, which are the primary sensor for aircraft, ships, and land platforms. Adaptive Radar Resource Management gives an introduction to radar resource management (RRM), presenting a clear overview of different approaches and techniques, making it very suitable for radar practitioners and researchers in industry and universities. Coverage includes: RRM's role in optimizing the performance of modern phased array radars; the advantages of adaptivity in implementing RRM; the role that modelling and

  10. Bayesian nonparametric adaptive control using Gaussian processes.

    Science.gov (United States)

    Chowdhary, Girish; Kingravi, Hassan A; How, Jonathan P; Vela, Patricio A

    2015-03-01

    Most current model reference adaptive control (MRAC) methods rely on parametric adaptive elements, in which the number of parameters of the adaptive element is fixed a priori, often through expert judgment. An example of such adaptive elements is radial basis function networks (RBFNs), with RBF centers preallocated based on the expected operating domain. If the system operates outside of the expected operating domain, this adaptive element can become noneffective in capturing and canceling the uncertainty, thus rendering the adaptive controller only semiglobal in nature. This paper investigates a Gaussian process-based Bayesian MRAC architecture (GP-MRAC), which leverages the power and flexibility of GP Bayesian nonparametric models of uncertainty. The GP-MRAC does not require the centers to be preallocated, can inherently handle measurement noise, and enables MRAC to handle a broader set of uncertainties, including those that are defined as distributions over functions. We use stochastic stability arguments to show that GP-MRAC guarantees good closed-loop performance with no prior domain knowledge of the uncertainty. Online implementable GP inference methods are compared in numerical simulations against RBFN-MRAC with preallocated centers and are shown to provide better tracking and improved long-term learning.

  11. Environmental photobioreactor array (EPBRA) systems and apparatus related thereto

    Science.gov (United States)

    Kramer, David; Zegarac, Robert; Lucker, Ben F.; Hall, Christopher; Abernathy, Casey; Carpenter, Joel; Cruz, Jeffrey

    2017-11-14

    A system is described herein that comprises one or more modular environmental photobioreactor arrays, each array containing two or more photobioreactors, wherein the system is adapted to monitor each of the photobioreactors and/or modulate the conditions with each of the photobioreactors. The photobioreactors are also adapted for measurement of multiple physiological parameters of a biomass contained therein. Various methods for selecting and characterizing biomass are also provided. In one embodiment, the biomass is algae.

  12. Adaptive multiparameter control: application to a Rapid Thermal Processing process; Commande Adaptative Multivariable: Application a un Procede de Traitement Thermique Rapide

    Energy Technology Data Exchange (ETDEWEB)

    Morales Mago, S J

    1995-12-20

    In this work the problem of temperature uniformity control in rapid thermal processing is addressed by means of multivariable adaptive control. Rapid Thermal Processing (RTP) is a set of techniques proposed for semiconductor fabrication processes such as annealing, oxidation, chemical vapour deposition and others. The product quality depends on two main issues: precise trajectory following and spatial temperature uniformity. RTP is a fabrication technique that requires a sophisticated real-time multivariable control system to achieve acceptable results. Modelling of the thermal behaviour of the process leads to very complex mathematical models. These are the reasons why adaptive control techniques are chosen. A multivariable linear discrete-time model of the highly non-linear process is identified on-line, using an identification scheme which includes supervisory actions. This identified model, combined with a multivariable predictive control law, makes it possible to protect the controller against system variations. The control laws are obtained by minimization of a quadratic cost function or by pole placement. In some of these control laws, a partial state reference model is included. This reference model makes it possible to incorporate an appropriate tracking capability into the control law. Experimental results of the application of the multivariable adaptive control laws to an RTP system are presented. (author) refs

  13. Adaptive smart simulator for characterization and MPPT construction of PV array

    International Nuclear Information System (INIS)

    Ouada, Mehdi; Meridjet, Mohamed Salah; Dib, Djalel

    2016-01-01

    Partial shading conditions are among the most important problems in large photovoltaic arrays. Many works in the literature address the modeling, control and optimization of the photovoltaic conversion of solar energy under partial shading conditions. The aim of this study is to build a software simulator, similar to a hardware simulator, and to produce a shading pattern of the proposed photovoltaic array in order to use the delivered information to obtain an optimal configuration of the PV array and to construct an MPPT algorithm. A graphical user interface (Matlab GUI) is built using a developed script; this tool is easy to use, simple and highly responsive. The simulator supports large array simulations that can be interfaced with MPPT and power electronic converters.
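
    As an example of the kind of MPPT logic such a simulator can be interfaced with, the sketch below implements a basic perturb-and-observe step; the state dictionary, the perturbation size dv and the function name are illustrative assumptions and are not taken from the study.

```python
def perturb_and_observe(v, i, state, dv=0.5):
    """One perturb-and-observe MPPT step (sketch, not the authors' algorithm).

    v, i  : latest PV array voltage and current measurements
    state : dict carrying the previous power, voltage reference and direction
    """
    p = v * i
    if p > state["p_prev"]:
        # power increased: keep perturbing in the same direction
        state["v_ref"] += state["direction"] * dv
    else:
        # power decreased: reverse the perturbation direction
        state["direction"] *= -1
        state["v_ref"] += state["direction"] * dv
    state["p_prev"] = p
    return state["v_ref"]

# Example usage with an assumed starting point:
state = {"p_prev": 0.0, "v_ref": 30.0, "direction": +1}
v_ref = perturb_and_observe(31.0, 4.2, state)
```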

  14. Adaptive smart simulator for characterization and MPPT construction of PV array

    Science.gov (United States)

    Ouada, Mehdi; Meridjet, Mohamed Salah; Dib, Djalel

    2016-07-01

    Partial shading conditions are among the most important problems in large photovoltaic arrays. Many works in the literature address the modeling, control and optimization of the photovoltaic conversion of solar energy under partial shading conditions. The aim of this study is to build a software simulator, similar to a hardware simulator, and to produce a shading pattern of the proposed photovoltaic array in order to use the delivered information to obtain an optimal configuration of the PV array and to construct an MPPT algorithm. A graphical user interface (Matlab GUI) is built using a developed script; this tool is easy to use, simple and highly responsive. The simulator supports large array simulations that can be interfaced with MPPT and power electronic converters.

  15. Adaptive smart simulator for characterization and MPPT construction of PV array

    Energy Technology Data Exchange (ETDEWEB)

    Ouada, Mehdi, E-mail: mehdi.ouada@univ-annaba.org; Meridjet, Mohamed Salah [Electromechanical engineering department, Electromechanical engineering laboratory, Badji Mokhtar University, B.P. 12, Annaba (Algeria); Dib, Djalel [Department of Electrical Engineering, University of Tebessa, Tebessa (Algeria)

    2016-07-25

    Partial shading conditions are among the most important problems in large photovoltaic arrays. Many works in the literature address the modeling, control and optimization of the photovoltaic conversion of solar energy under partial shading conditions. The aim of this study is to build a software simulator, similar to a hardware simulator, and to produce a shading pattern of the proposed photovoltaic array in order to use the delivered information to obtain an optimal configuration of the PV array and to construct an MPPT algorithm. A graphical user interface (Matlab GUI) is built using a developed script; this tool is easy to use, simple and highly responsive. The simulator supports large array simulations that can be interfaced with MPPT and power electronic converters.

  16. Adaptation as process: the future of Darwinism and the legacy of Theodosius Dobzhansky.

    Science.gov (United States)

    Depew, David J

    2011-03-01

    Conceptions of adaptation have varied in the history of genetic Darwinism depending on whether what is taken to be focal is the process of adaptation, adapted states of populations, or discrete adaptations in individual organisms. I argue that Theodosius Dobzhansky's view of adaptation as a dynamical process contrasts with so-called "adaptationist" views of natural selection figured as "design-without-a-designer" of relatively discrete, enumerable adaptations. Correlated with these respectively process- and product-oriented approaches to adaptive natural selection are divergent pictures of organisms themselves as developmental wholes or as "bundles" of adaptations. While even process versions of genetical Darwinism are insufficiently sensitive to the fact that much of the variation on which adaptive selection works consists of changes in the timing, rate, or location of ontogenetic events, I argue that articulations of the Modern Synthesis influenced by Dobzhansky are more easily reconciled with the recent shift to evolutionary developmentalism than are versions that make discrete adaptations central. Copyright © 2010 Elsevier Ltd. All rights reserved.

  17. Focal plane array with modular pixel array components for scalability

    Science.gov (United States)

    Kay, Randolph R; Campbell, David V; Shinde, Subhash L; Rienstra, Jeffrey L; Serkland, Darwin K; Holmes, Michael L

    2014-12-09

    A modular, scalable focal plane array is provided as an array of integrated circuit dice, wherein each die includes a given amount of modular pixel array circuitry. The array of dice effectively multiplies the amount of modular pixel array circuitry to produce a larger pixel array without increasing die size. Desired pixel pitch across the enlarged pixel array is preserved by forming die stacks with each pixel array circuitry die stacked on a separate die that contains the corresponding signal processing circuitry. Techniques for die stack interconnections and die stack placement are implemented to ensure that the desired pixel pitch is preserved across the enlarged pixel array.

  18. Anti-hebbian spike-timing-dependent plasticity and adaptive sensory processing.

    Science.gov (United States)

    Roberts, Patrick D; Leen, Todd K

    2010-01-01

    Adaptive sensory processing influences the central nervous system's interpretation of incoming sensory information. One of the functions of this adaptive sensory processing is to allow the nervous system to ignore predictable sensory information so that it may focus on important novel information needed to improve performance of specific tasks. The mechanism of spike-timing-dependent plasticity (STDP) has proven to be intriguing in this context because of its dual role in long-term memory and ongoing adaptation to maintain optimal tuning of neural responses. Some of the clearest links between STDP and adaptive sensory processing have come from in vitro, in vivo, and modeling studies of the electrosensory systems of weakly electric fish. Plasticity in these systems is anti-Hebbian, so that presynaptic inputs that repeatedly precede, and possibly could contribute to, a postsynaptic neuron's firing are weakened. The learning dynamics of anti-Hebbian STDP learning rules are stable if the timing relations obey strict constraints. The stability of these learning rules leads to clear predictions of how functional consequences can arise from the detailed structure of the plasticity. Here we review the connection between theoretical predictions and functional consequences of anti-Hebbian STDP, focusing on adaptive processing in the electrosensory system of weakly electric fish. After introducing electrosensory adaptive processing and the dynamics of anti-Hebbian STDP learning rules, we address issues of predictive sensory cancelation and novelty detection, descending control of plasticity, synaptic scaling, and optimal sensory tuning. We conclude with examples in other systems where these principles may apply.
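
    A minimal sketch of an anti-Hebbian STDP weight update is shown below: the sign of the weight change for pre-before-post pairings is the reverse of the classical Hebbian window. The amplitude A, time constant tau and hard weight bounds are illustrative assumptions, not parameters from the electrosensory models reviewed above.

```python
import numpy as np

def anti_hebbian_stdp(w, dt, A=0.005, tau=20.0, w_min=0.0, w_max=1.0):
    """Anti-Hebbian STDP weight update sketch.

    dt = t_post - t_pre in ms. Pre-before-post pairings (dt > 0) *depress* the
    synapse, the opposite of the classical Hebbian STDP window.
    """
    if dt > 0:
        dw = -A * np.exp(-dt / tau)    # presynaptic spike leads: weaken
    else:
        dw = +A * np.exp(dt / tau)     # postsynaptic spike leads: strengthen
    return float(np.clip(w + dw, w_min, w_max))
```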

  19. Anti-Hebbian Spike Timing Dependent Plasticity and Adaptive Sensory Processing

    Directory of Open Access Journals (Sweden)

    Patrick D Roberts

    2010-12-01

    Full Text Available Adaptive processing influences the central nervous system's interpretation of incoming sensory information. One of the functions of this adaptive sensory processing is to allow the nervous system to ignore predictable sensory information so that it may focus on important new information needed to improve performance of specific tasks. The mechanism of spike timing-dependent plasticity (STDP) has proven to be intriguing in this context because of its dual role in long-term memory and ongoing adaptation to maintain optimal tuning of neural responses. Some of the clearest links between STDP and adaptive sensory processing have come from in vitro, in vivo, and modeling studies of the electrosensory systems of fish. Plasticity in such systems is anti-Hebbian, i.e. presynaptic inputs that repeatedly precede and hence could contribute to a postsynaptic neuron's firing are weakened. The learning dynamics of anti-Hebbian STDP learning rules are stable if the timing relations obey strict constraints. The stability of these learning rules leads to clear predictions of how functional consequences can arise from the detailed structure of the plasticity. Here we review the connection between theoretical predictions and functional consequences of anti-Hebbian STDP, focusing on adaptive processing in the electrosensory system of weakly electric fish. After introducing electrosensory adaptive processing and the dynamics of anti-Hebbian STDP learning rules, we address issues of predictive sensory cancellation and novelty detection, descending control of plasticity, synaptic scaling, and optimal sensory tuning. We conclude with examples in other systems where these principles may apply.

  20. Frequency Adaptability and Waveform Design for OFDM Radar Space-Time Adaptive Processing

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Satyabrata [ORNL; Glover, Charles Wayne [ORNL

    2012-01-01

    We propose an adaptive waveform design technique for an orthogonal frequency division multiplexing (OFDM) radar signal employing a space-time adaptive processing (STAP) technique. We observe that there are inherent variabilities of the target and interference responses in the frequency domain. Therefore, the use of an OFDM signal can not only increase the frequency diversity of our system, but also improve the target detectability by adaptively modifying the OFDM coefficients in order to exploit the frequency-variabilities of the scenario. First, we formulate a realistic OFDM-STAP measurement model considering the sparse nature of the target and interference spectra in the spatio-temporal domain. Then, we show that the optimal STAP-filter weight-vector is equal to the generalized eigenvector corresponding to the minimum generalized eigenvalue of the interference and target covariance matrices. With numerical examples we demonstrate that the resultant OFDM-STAP filter-weights are adaptable to the frequency-variabilities of the target and interference responses, in addition to the spatio-temporal variabilities. Hence, by better utilizing the frequency variabilities, we propose an adaptive OFDM-waveform design technique, and consequently gain a significant amount of STAP-performance improvement.
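
    The weight computation described above reduces to a generalized eigenvalue problem, sketched below; taking the maximum-SINR eigenvector of the (target, interference) matrix pencil is mathematically equivalent to the minimum generalized eigenvalue formulation quoted in the abstract, and the covariance matrices are assumed to be given and well conditioned.

```python
import numpy as np
from scipy.linalg import eigh

def stap_weights(R_target, R_interf):
    """STAP filter weights as a principal generalized eigenvector (sketch).

    Solves R_target w = lambda R_interf w and returns the eigenvector with the
    largest generalized eigenvalue, i.e. the weight vector maximizing the
    target-to-interference-plus-noise ratio.
    """
    eigvals, eigvecs = eigh(R_target, R_interf)   # eigenvalues in ascending order
    w = eigvecs[:, -1]                            # eigenvector of the largest eigenvalue
    return w / np.linalg.norm(w)
```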

  1. Modeling of processes of an adaptive business management

    Directory of Open Access Journals (Sweden)

    Karev Dmitry Vladimirovich

    2011-04-01

    Full Text Available On the basis of an analysis of adaptive business management systems, an original version of a real adaptive management system is proposed, based on a dynamic recursive model of the cash flow forecast and on real data. Definitions and a simulation of the scales and intervals of model time in the control system are proposed, as well as observation thresholds and the conditions for changing (correcting) administrative decisions. The process of adaptive management is illustrated on the basis of a business development scenario proposed by the author.

  2. Physics-based signal processing algorithms for micromachined cantilever arrays

    Science.gov (United States)

    Candy, James V; Clague, David S; Lee, Christopher L; Rudd, Robert E; Burnham, Alan K; Tringe, Joseph W

    2013-11-19

    A method of using physics-based signal processing algorithms for micromachined cantilever arrays. The methods utilize deflection of a micromachined cantilever that represents the chemical, biological, or physical element being detected. One embodiment of the method comprises the steps of modeling the deflection of the micromachined cantilever producing a deflection model, sensing the deflection of the micromachined cantilever and producing a signal representing the deflection, and comparing the signal representing the deflection with the deflection model.

  3. A diversified portfolio model of adaptability.

    Science.gov (United States)

    Chandra, Siddharth; Leong, Frederick T L

    2016-12-01

    A new model of adaptability, the diversified portfolio model (DPM) of adaptability, is introduced. In the 1950s, Markowitz developed the financial portfolio model by demonstrating that investors could optimize the ratio of risk and return on their portfolios through risk diversification. The DPM integrates attractive features of a variety of models of adaptability, including Linville's self-complexity model, the risk and resilience model, and Bandura's social cognitive theory. The DPM draws on the concept of portfolio diversification, positing that diversified investment in multiple life experiences, life roles, and relationships promotes positive adaptation to life's challenges. The DPM provides a new integrative model of adaptability across the biopsychosocial levels of functioning. More importantly, the DPM addresses a gap in the literature by illuminating the antecedents of adaptive processes studied in a broad array of psychological models. The DPM is described in relation to the biopsychosocial model and propositions are offered regarding its utility in increasing adaptiveness. Recommendations for future research are also offered. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  4. Processing and display of medical three dimensional arrays of numerical data using octree encoding

    International Nuclear Information System (INIS)

    Amans, J.L.; Darier, P.

    1985-01-01

    Imaging modalities such as X-ray computerized Tomography (CT), Nuclear Medicine and Nuclear Magnetic Resonance can produce three-dimensional (3-D) arrays of numerical data describing the internal structures of medical objects. The analysis of 3-D data by synthetic generation of realistic images is an important area of computer graphics and imaging. We are currently developing experimental software that allows the analysis, processing and display of 3-D arrays of numerical data that are organized in a related hierarchical data structure using the OCTREE (octal-tree) encoding technique, based on a recursive subdivision of the data volume. The OCTREE encoding structure is an extension of the two-dimensional tree structure, the quadtree, developed for image processing applications. Before any operations, the 3-D array of data is OCTREE encoded; thereafter all processing is applied to the encoded object. The elementary process for the elaboration of a synthetic image includes: conditioning the volume: volume partition (numerical and spatial segmentation), choice of the view-point..., two-dimensional display, either by spatial integration (radiography) or by shaded surface representation. This paper introduces these different concepts and specifies the advantages of OCTREE encoding techniques in realizing these operations. Furthermore, the application of the OCTREE encoding scheme to the display of 3-D medical volumes generated from multiple CT scans is presented
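
    The recursive subdivision underlying OCTREE encoding can be sketched as below for a cubic array whose side is a power of two; the homogeneity threshold and the list-of-eight-children representation are simplifying assumptions, not the data structure of the experimental software described above.

```python
import numpy as np

def build_octree(vol, threshold=0.0):
    """Recursive OCTREE encoding sketch for a cubic 3-D array (side = power of 2).

    A node becomes a leaf when its voxels are (near-)homogeneous; otherwise it is
    split into its eight octants, which are encoded recursively.
    """
    if vol.max() - vol.min() <= threshold or vol.shape[0] == 1:
        return float(vol.mean())                       # leaf: store a single value
    h = vol.shape[0] // 2
    return [build_octree(vol[x:x + h, y:y + h, z:z + h], threshold)
            for x in (0, h) for y in (0, h) for z in (0, h)]

# Example: an 8x8x8 block with one "dense" octant collapses to eight leaves.
vol = np.zeros((8, 8, 8))
vol[:4, :4, :4] = 100.0
tree = build_octree(vol)
```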

  5. A novel, substrate independent three-step process for the growth of uniform ZnO nanorod arrays

    International Nuclear Information System (INIS)

    Byrne, D.; McGlynn, E.; Henry, M.O.; Kumar, K.; Hughes, G.

    2010-01-01

    We report a three-step deposition process for uniform arrays of ZnO nanorods, involving chemical bath deposition of aligned seed layers followed by nanorod nucleation sites and subsequent vapour phase transport growth of nanorods. This combines chemical bath deposition techniques, which enable substrate independent seeding and nucleation site generation with vapour phase transport growth of high crystalline and optical quality ZnO nanorod arrays. Our data indicate that the three-step process produces uniform nanorod arrays with narrow and rather monodisperse rod diameters (∼ 70 nm) across substrates of centimetre dimensions. X-ray photoelectron spectroscopy, scanning electron microscopy and X-ray diffraction were used to study the growth mechanism and characterise the nanostructures.

  6. Sampling phased array, a new technique for ultrasonic signal processing and imaging now available to industry

    OpenAIRE

    Verkooijen, J.; Bulavinov, A.

    2008-01-01

    Over the past 10 years, improvements in microelectronics and computer engineering have led to significant advances in ultrasonic signal processing and image construction techniques that are currently being applied to non-destructive material evaluation. A new phased array technique, called "Sampling Phased Array", has been developed at the Fraunhofer Institute for Non-Destructive Testing [1]. It realizes a unique approach to the measurement and processing of ultrasonic signals. The s...

  7. Arrays of surface-normal electroabsorption modulators for the generation and signal processing of microwave photonics signals

    NARCIS (Netherlands)

    Noharet, Bertrand; Wang, Qin; Platt, Duncan; Junique, Stéphane; Marpaung, D.A.I.; Roeloffzen, C.G.H.

    2011-01-01

    The development of an array of 16 surface-normal electroabsorption modulators operating at 1550 nm is presented. The modulator array is dedicated to the generation and processing of microwave photonics signals, targeting a modulation bandwidth in excess of 5 GHz. The hybrid integration of the

  8. Cold plasma decontamination using flexible jet arrays

    Science.gov (United States)

    Konesky, Gregory

    2010-04-01

    Arrays of atmospheric-discharge cold plasma jets have been used to rapidly decontaminate surfaces of a wide range of microorganisms without damaging those surfaces. The technique's effectiveness in decomposing simulated chemical warfare agents has also been demonstrated, and it may additionally assist in the cleanup of radiological weapons. Large-area jet arrays with short dwell times are necessary for practical applications. Realistic situations will also require jet arrays that are flexible enough to adapt to contoured or irregular surfaces. Various large-area jet array prototypes, both planar and flexible, are described, as is the application to atmospheric decontamination.

  9. SAR processing with stepped chirps and phased array antennas.

    Energy Technology Data Exchange (ETDEWEB)

    Doerry, Armin Walter

    2006-09-01

    Wideband radar signals are problematic for phased array antennas. However, a wideband signal can be generated from a series or group of narrow-band signals centered at different frequencies. An equivalent wideband LFM chirp can be assembled from lesser-bandwidth chirp segments in the data processing. The chirp segments can be transmitted as separate narrow-band pulses, each with their own steering phase operation. This overcomes the problematic dilemma of steering wideband chirps with phase shifters alone, that is, without true time-delay elements.
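    The idea of assembling an equivalent wideband LFM chirp from lesser-bandwidth segments can be illustrated numerically. In the sketch below, each narrow-band segment is generated separately and given a per-segment phase term so that the stitched waveform is phase-continuous and matches a single wideband chirp; the sample rate, bandwidth and segment count are arbitrary illustrative values, not parameters from the report.

```python
import numpy as np

def lfm(f_start, f_stop, duration, fs):
    """Baseband linear-FM chirp from f_start to f_stop over 'duration' seconds."""
    n = int(round(duration * fs))
    t = np.arange(n) / fs
    k = (f_stop - f_start) / duration                 # chirp rate (Hz/s)
    return np.exp(1j * 2 * np.pi * (f_start * t + 0.5 * k * t ** 2))

fs, B, T, n_seg = 10e6, 4e6, 100e-6, 4                # sample rate, bandwidth, duration, segments
k = B / T
segments = []
for i in range(n_seg):
    f0 = -B / 2 + i * B / n_seg                       # start frequency of segment i
    seg = lfm(f0, f0 + B / n_seg, T / n_seg, fs)
    t_start = i * T / n_seg
    # Phase of the full wideband chirp at the segment start keeps the pieces continuous.
    seg = seg * np.exp(1j * 2 * np.pi * (-B / 2 * t_start + 0.5 * k * t_start ** 2))
    segments.append(seg)

stitched = np.concatenate(segments)
reference = lfm(-B / 2, B / 2, T, fs)                 # the equivalent wideband chirp
print(np.max(np.abs(stitched - reference)))           # ~0: segments assemble the wideband LFM
```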

  10. Multilevel processes and cultural adaptation: Examples from past and present small-scale societies

    OpenAIRE

    Reyes-García, V.; Balbo, A. L.; Gomez-Baggethun, E.; Gueze, M.; Mesoudi, A.; Richerson, P.; Rubio-Campillo, X.; Ruiz-Mallén, I.; Shennan, S.

    2016-01-01

    Cultural adaptation has become central in the context of accelerated global change with authors increasingly acknowledging the importance of understanding multilevel processes that operate as adaptation takes place. We explore the importance of multilevel processes in explaining cultural adaptation by describing how processes leading to cultural (mis)adaptation are linked through a complex nested hierarchy, where the lower levels combine into new units with new organizations, functions, and e...

  11. Adaptive transmit selection with interference suppression

    KAUST Repository

    Radaydeh, Redha Mahmoud Mesleh

    2010-01-01

    This paper studies the performance of adaptive transmit channel selection in multipath fading channels. The adaptive selection algorithms are configured for single-antenna bandwidth-efficient or power-efficient transmission with as few transmit channel estimations as possible. Because the number of active co-channel interfering signals and their corresponding powers behave randomly, the adaptation to channel conditions, assuming uniform buffer and traffic loading, is proposed to be jointly based on the transmit channels' instantaneous signal-to-noise ratios (SNRs) and signal-to-interference-plus-noise ratios (SINRs). Two interference cancelation algorithms, dominant cancelation and the less complex arbitrary cancelation, are considered, for which the receive antenna array is assumed to have small angular spread. Analytical formulations for some performance measures, along with processing-complexity and numerical comparisons between the various adaptation schemes, are presented. ©2010 IEEE.
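    Only the switched selection logic is sketched below (the paper's interference-cancelation analysis is not reproduced): channels are scanned in order and the first one whose estimated SNR and SINR meet their targets is used, which keeps the number of channel estimations low. The thresholds and the fallback rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def select_transmit_channel(snrs_db, sinrs_db, snr_target_db=10.0, sinr_target_db=5.0):
    """Scan channels in order and return the first one whose estimated SNR and SINR
    both meet their targets (switched selection limits channel estimations); if none
    qualifies, fall back to the channel with the best SINR."""
    for idx, (snr, sinr) in enumerate(zip(snrs_db, sinrs_db)):
        if snr >= snr_target_db and sinr >= sinr_target_db:
            return idx
    return int(np.argmax(sinrs_db))

# Toy usage: 4 candidate transmit channels with random fading and interference.
snrs = 10 * np.log10(rng.exponential(scale=10.0, size=4))
sinrs = snrs - 10 * np.log10(1.0 + rng.exponential(scale=2.0, size=4))
print(select_transmit_channel(snrs, sinrs))
```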

  12. Behavioral training promotes multiple adaptive processes following acute hearing loss.

    Science.gov (United States)

    Keating, Peter; Rosenior-Patten, Onayomi; Dahmen, Johannes C; Bell, Olivia; King, Andrew J

    2016-03-23

    The brain possesses a remarkable capacity to compensate for changes in inputs resulting from a range of sensory impairments. Developmental studies of sound localization have shown that adaptation to asymmetric hearing loss can be achieved either by reinterpreting altered spatial cues or by relying more on those cues that remain intact. Adaptation to monaural deprivation in adulthood is also possible, but appears to lack such flexibility. Here we show, however, that appropriate behavioral training enables monaurally-deprived adult humans to exploit both of these adaptive processes. Moreover, cortical recordings in ferrets reared with asymmetric hearing loss suggest that these forms of plasticity have distinct neural substrates. An ability to adapt to asymmetric hearing loss using multiple adaptive processes is therefore shared by different species and may persist throughout the lifespan. This highlights the fundamental flexibility of neural systems, and may also point toward novel therapeutic strategies for treating sensory disorders.

  13. Solution processed bismuth sulfide nanowire array core/silver sulfide shell solar cells

    NARCIS (Netherlands)

    Cao, Y.; Bernechea, M.; Maclachlan, A.; Zardetto, V.; Creatore, M.; Haque, S.A.; Konstantatos, G.

    2015-01-01

    Low bandgap inorganic semiconductor nanowires have served as building blocks in solution processed solar cells to improve their power conversion capacity and reduce fabrication cost. In this work, we report for the first time bismuth sulfide nanowire arrays grown from colloidal seeds on a transparent

  14. Structural control of ultra-fine CoPt nanodot arrays via electrodeposition process

    Energy Technology Data Exchange (ETDEWEB)

    Wodarz, Siggi [Department of Applied Chemistry, Waseda University, Shinjuku, Tokyo 169-8555 (Japan); Hasegawa, Takashi; Ishio, Shunji [Department of Materials Science, Akita University, Akita City 010-8502 (Japan); Homma, Takayuki, E-mail: t.homma@waseda.jp [Department of Applied Chemistry, Waseda University, Shinjuku, Tokyo 169-8555 (Japan)

    2017-05-15

    CoPt nanodot arrays were fabricated by combining electrodeposition and electron beam lithography (EBL) for use in bit-patterned media (BPM). To achieve precise control of the deposition uniformity and coercivity of the CoPt nanodot arrays, their crystal structure and magnetic properties were tuned by controlling the diffusion state of metal ions from the initial deposition stage through the application of bath agitation. With bath agitation, the composition gradient of the CoPt alloy with thickness was mitigated to give a near-ideal alloy composition of Co:Pt = 80:20, which induces epitaxial-like growth from the Ru substrate, thus improving the crystal orientation of the hcp (002) structure from the initial deposition stages. Furthermore, cross-sectional transmission electron microscope (TEM) analysis of the nanodots deposited with bath agitation showed CoPt growth with its c-axis oriented in the perpendicular direction and uniform lattice fringes on the hcp (002) plane from the Ru underlayer interface, which is a significant factor in inducing perpendicular magnetic anisotropy. Magnetic characterization of the CoPt nanodot arrays showed an increase in the perpendicular coercivity and squareness of the hysteresis loops from 2.0 kOe and 0.64 (without agitation) to 4.0 kOe and 0.87 with bath agitation. Based on the detailed characterization of the nanodot arrays, precise crystal-structure control of nanodot arrays with ultra-high recording density by an electrochemical process was successfully demonstrated. - Highlights: • Ultra-fine CoPt nanodot arrays were fabricated by electrodeposition. • Crystallinity of hcp (002) was improved with uniform composition formation. • Uniform formation of hcp lattices leads to an increase in the coercivity.

  15. ATMAD: robust image analysis for Automatic Tissue MicroArray De-arraying.

    Science.gov (United States)

    Nguyen, Hoai Nam; Paveau, Vincent; Cauchois, Cyril; Kervrann, Charles

    2018-04-19

    Over the last two decades, an innovative technology called Tissue Microarray (TMA), which combines multi-tissue and DNA microarray concepts, has been widely used in the field of histology. It consists of a collection of several (up to 1000 or more) tissue samples that are assembled onto a single support - typically a glass slide - according to a design grid (array) layout, in order to allow multiplex analysis by treating numerous samples under identical and standardized conditions. However, during the TMA manufacturing process, the sample positions can be highly distorted from the design grid due to the imprecision when assembling tissue samples and the deformation of the embedding waxes. Consequently, these distortions may lead to severe errors of (histological) assay results when the sample identities are mismatched between the design and its manufactured output. The development of a robust method for de-arraying TMA, which localizes and matches TMA samples with their design grid, is therefore crucial to overcome the bottleneck of this prominent technology. In this paper, we propose an Automatic, fast and robust TMA De-arraying (ATMAD) approach dedicated to images acquired with brightfield and fluorescence microscopes (or scanners). First, tissue samples are localized in the large image by applying a locally adaptive thresholding on the isotropic wavelet transform of the input TMA image. To reduce false detections, a parametric shape model is considered for segmenting ellipse-shaped objects at each detected position. Segmented objects that do not meet the size and the roundness criteria are discarded from the list of tissue samples before being matched with the design grid. Sample matching is performed by estimating the TMA grid deformation under the thin-plate model. Finally, thanks to the estimated deformation, the true tissue samples that were preliminary rejected in the early image processing step are recognized by running a second segmentation step. We
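    A heavily simplified version of the de-arraying pipeline is sketched below: a locally adaptive threshold detects candidate tissue spots, connected components give their centroids, and a global shift estimate matches them to the design grid. The published method uses isotropic wavelets, an ellipse shape model and thin-plate-spline grid deformation; the mean-filter threshold and translation-only matching here are stand-ins for those steps.

```python
import numpy as np
from scipy import ndimage

def detect_samples(image, block=31, offset=0.005):
    """Locally adaptive threshold (local mean filter) followed by connected-component
    labelling; returns the centroid of every detected tissue spot."""
    local_mean = ndimage.uniform_filter(image.astype(float), size=block)
    mask = image > local_mean + offset
    labels, n = ndimage.label(mask)
    return np.array(ndimage.center_of_mass(mask, labels, range(1, n + 1)))

def match_to_grid(centroids, design_grid):
    """Estimate a global shift (median offset from design positions to their nearest
    detected spot) and then assign every design position to its closest spot."""
    design = np.asarray(design_grid, float)
    nearest = lambda p: centroids[np.argmin(np.linalg.norm(centroids - p, axis=1))]
    shift = np.median([nearest(p) - p for p in design], axis=0)
    return [int(np.argmin(np.linalg.norm(centroids - (p + shift), axis=1))) for p in design]

# Toy TMA image: a 3x3 grid of spots, displaced from the design by a few pixels.
design = [(40 + 50 * r, 40 + 50 * c) for r in range(3) for c in range(3)]
img = np.zeros((200, 200))
for (y, x) in design:
    img[y + 4, x + 6] = 1.0                           # manufacturing distortion
img = ndimage.gaussian_filter(img, 3)
cents = detect_samples(img)
print(match_to_grid(cents, design))                   # design position -> detected spot index
```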

  16. New customizable phased array UT instrument opens door for furthering research and better industrial implementation

    International Nuclear Information System (INIS)

    Dao, Gavin; Ginzel, Robert

    2014-01-01

    Phased array UT as an inspection technique in itself continues to gain wide acceptance. However, there is much room for improvement in terms of implementation of Phased Array (PA) technology for every unique NDT application across several industries (e.g. oil and petroleum, nuclear and power generation, steel manufacturing, etc.). Having full control of the phased array instrument and customizing a software solution is necessary for more seamless and efficient inspections, from setting the PA parameters, collecting data and reporting, to the final analysis. NDT researchers and academics also need a flexible and open platform to be able to control various aspects of the phased array process. A high performance instrument with advanced PA features, faster data rates, a smaller form factor, and capability to adapt to specific applications, will be discussed

  17. Psychosocial intervention effects on adaptation, disease course and biobehavioral processes in cancer.

    Science.gov (United States)

    Antoni, Michael H

    2013-03-01

    A diagnosis of cancer and subsequent treatments place demands on psychological adaptation. Behavioral research suggests the importance of cognitive, behavioral, and social factors in facilitating adaptation during active treatment and throughout cancer survivorship, which forms the rationale for the use of many psychosocial interventions in cancer patients. This cancer experience may also affect physiological adaptation systems (e.g., neuroendocrine) in parallel with psychological adaptation changes (negative affect). Changes in adaptation may alter tumor growth-promoting processes (increased angiogenesis, migration and invasion, and inflammation) and tumor defense processes (decreased cellular immunity) relevant for cancer progression and the quality of life of cancer patients. Some evidence suggests that psychosocial intervention can improve psychological and physiological adaptation indicators in cancer patients. However, less is known about whether these interventions can influence tumor activity and tumor growth-promoting processes and whether changes in these processes could explain the psychosocial intervention effects on recurrence and survival documented to date. Documenting that psychosocial interventions can modulate molecular activities (e.g., transcriptional indicators of cell signaling) that govern tumor promoting and tumor defense processes on the one hand, and clinical disease course on the other is a key challenge for biobehavioral oncology research. This mini-review will summarize current knowledge on psychological and physiological adaptation processes affected throughout the stress of the cancer experience, and the effects of psychosocial interventions on psychological adaptation, cancer disease progression, and changes in stress-related biobehavioral processes that may mediate intervention effects on clinical cancer outcomes. Very recent intervention work in breast cancer will be used to illuminate emerging trends in molecular probes of

  18. Seismic array processing and computational infrastructure for improved monitoring of Alaskan and Aleutian seismicity and volcanoes

    Science.gov (United States)

    Lindquist, Kent Gordon

    We constructed a near-real-time system, called Iceworm, to automate seismic data collection, processing, storage, and distribution at the Alaska Earthquake Information Center (AEIC). Phase-picking, phase association, and interprocess communication components come from Earthworm (U.S. Geological Survey). A new generic, internal format for digital data supports unified handling of data from diverse sources. A new infrastructure for applying processing algorithms to near-real-time data streams supports automated information extraction from seismic wavefields. Integration of Datascope (U. of Colorado) provides relational database management of all automated measurements, parametric information for located hypocenters, and waveform data from Iceworm. Data from 1997 yield 329 earthquakes located by both Iceworm and the AEIC. Of these, 203 have location residuals under 22 km, sufficient for hazard response. Regionalized inversions for local magnitude in Alaska yield M_L calibration curves (log A_0) that differ from the Californian Richter magnitude. The new curve is 0.2 M_L units more attenuative than the Californian curve at 400 km for earthquakes north of the Denali fault. South of the fault, and for a region north of Cook Inlet, the difference is 0.4 M_L. A curve for deep events differs by 0.6 M_L at 650 km. We expand geographic coverage of Alaskan regional seismic monitoring to the Aleutians, the Bering Sea, and the entire Arctic by initiating the processing of four short-period, Alaskan seismic arrays. To show the array stations' sensitivity, we detect and locate two microearthquakes that were missed by the AEIC. An empirical study of the location sensitivity of the arrays predicts improvements over the Alaskan regional network that are shown as map-view contour plots. We verify these predictions by detecting an M_L 3.2 event near Unimak Island with one array. The detection and location of four representative earthquakes illustrates the expansion
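    The regionalized magnitude calibration can be illustrated with the standard Richter-style relation M_L = log10(A) - log10(A0(r)). The -log10(A0) coefficients below are familiar California-style values used only as a stand-in, with an extra term representing a regional curve that is, for example, 0.2 M_L units more attenuative at 400 km; they are not the coefficients derived in this thesis.

```python
import math

def local_magnitude(amplitude_mm, distance_km, extra_attenuation=0.0):
    """Illustrative Richter-style local magnitude: M_L = log10(A) - log10(A0(r)).
    The -log10(A0) term is a California-style stand-in; 'extra_attenuation' adds a
    regional correction such as the +0.2 M_L at 400 km reported for Alaska."""
    log_a0 = -(1.110 * math.log10(distance_km / 100.0)
               + 0.00189 * (distance_km - 100.0) + 3.0)
    return math.log10(amplitude_mm) - log_a0 + extra_attenuation

# The same 0.1 mm amplitude recorded at 400 km yields a larger magnitude when the
# regional curve is 0.2 M_L units more attenuative than the California curve.
print(local_magnitude(0.1, 400.0))        # California-style estimate
print(local_magnitude(0.1, 400.0, 0.2))   # Alaska-style (north of the Denali fault)
```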

  19. Adaptive Processing for Sequence Alignment

    KAUST Repository

    Zidan, Mohammed A.; Bonny, Talal; Salama, Khaled N.

    2012-01-01

    Disclosed are various embodiments for adaptive processing for sequence alignment. In one embodiment, among others, a method includes obtaining a query sequence and a plurality of database sequences. A first portion of the plurality of database sequences is distributed to a central processing unit (CPU) and a second portion of the plurality of database sequences is distributed to a graphical processing unit (GPU) based upon a predetermined splitting ratio associated with the plurality of database sequences, where the database sequences of the first portion are shorter than the database sequences of the second portion. A first alignment score for the query sequence is determined with the CPU based upon the first portion of the plurality of database sequences and a second alignment score for the query sequence is determined with the GPU based upon the second portion of the plurality of database sequences.
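    The length-based CPU/GPU split can be sketched as follows; the scoring function is a trivial placeholder (a real system would run Smith-Waterman on the CPU and a GPU alignment kernel for the long sequences), and the splitting ratio is an arbitrary example value.

```python
def split_database(db_sequences, split_ratio=0.5):
    """Sort database sequences by length and split them at 'split_ratio': the
    shorter portion is assigned to the CPU, the longer portion to the GPU."""
    ordered = sorted(db_sequences, key=len)
    cut = int(len(ordered) * split_ratio)
    return ordered[:cut], ordered[cut:]               # (cpu_portion, gpu_portion)

def align_score(query, target):
    """Placeholder scorer (simple positional match count)."""
    return sum(a == b for a, b in zip(query, target))

def best_alignment(query, db_sequences, split_ratio=0.5):
    cpu_part, gpu_part = split_database(db_sequences, split_ratio)
    cpu_best = max(align_score(query, s) for s in cpu_part)   # short sequences on CPU
    gpu_best = max(align_score(query, s) for s in gpu_part)   # stand-in for GPU work
    return max(cpu_best, gpu_best)

db = ["ACGT", "ACGTACGT", "TTGACCGTA", "AC", "ACGTTTTTACGTAAAA"]
print(best_alignment("ACGTACGA", db, split_ratio=0.4))
```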

  20. Adaptive Processing for Sequence Alignment

    KAUST Repository

    Zidan, Mohammed A.

    2012-01-26

    Disclosed are various embodiments for adaptive processing for sequence alignment. In one embodiment, among others, a method includes obtaining a query sequence and a plurality of database sequences. A first portion of the plurality of database sequences is distributed to a central processing unit (CPU) and a second portion of the plurality of database sequences is distributed to a graphical processing unit (GPU) based upon a predetermined splitting ratio associated with the plurality of database sequences, where the database sequences of the first portion are shorter than the database sequences of the second portion. A first alignment score for the query sequence is determined with the CPU based upon the first portion of the plurality of database sequences and a second alignment score for the query sequence is determined with the GPU based upon the second portion of the plurality of database sequences.

  1. Processing and display of three-dimensional arrays of numerical data using octree encoding

    International Nuclear Information System (INIS)

    Amans, J.L.; Antoine, M.; Darier, P.

    1986-04-01

    The analysis of three-dimensional (3-D) arrays of numerical data from medical, industrial or scientific imaging, by synthetic generation of realistic images, has been widely developed. Octree encoding, which organizes the volume data in a hierarchical tree structure, has some interesting features for the processing of 3-D arrays of data. The Octree encoding method, based on the recursive subdivision of a 3-D array, is an extension of Quadtree encoding in the two-dimensional plane. We have developed a software package to validate the basic Octree encoding methodology for some manipulation and display operations on volume data. The contribution introduces the technique we have used (called the "overlay technique") to perform the projection of an Octree onto a Quadtree-encoded image plane. The application of this technique to hidden-surface display is presented [fr]

  2. Processing and Linguistics Properties of Adaptable Systems

    Directory of Open Access Journals (Sweden)

    Dumitru TODOROI

    2006-01-01

    Full Text Available This paper continues and develops the research on Adaptable Programming Initialization [Tod-05.1,2,3]. As a continuation of [Tod-05.2,3], metalinguistic tools used in the process of introducing new constructions (data, operations, instructions and controls) are developed. Generalization schemes for the evaluation of adaptable languages and systems are discussed. As in [Tod-05.2,3], these results were obtained by the team composed of the researchers D. Todoroi [Tod-05.4], Z. Todoroi [ZTod-05], and D. Micusa [Mic-03]. The presented results will be included in the book [Tod-06].

  3. Simpler Adaptive Optics using a Single Device for Processing and Control

    Science.gov (United States)

    Zovaro, A.; Bennet, F.; Rye, D.; D'Orgeville, C.; Rigaut, F.; Price, I.; Ritchie, I.; Smith, C.

    Managing low Earth orbit is becoming more urgent as satellite and debris densities climb, if a Kessler syndrome is to be avoided. A key part of this management is to precisely measure the orbits of both active satellites and debris. The Research School of Astronomy and Astrophysics at the Australian National University has been developing an adaptive optics (AO) system to image and range orbiting objects. The AO system provides atmospheric correction for imaging and laser ranging, allowing for the detection of smaller angular targets and drastically increasing the number of detectable objects. AO systems are by nature complex and high-cost, often costing millions of dollars and taking years to design. It is not unusual for AO systems to comprise multiple servers, digital signal processors (DSPs) and field programmable gate arrays (FPGAs), with dedicated tasks such as wavefront sensor data processing or wavefront reconstruction. While this multi-platform approach has been necessary in AO systems to date due to computation and latency requirements, this may no longer be the case for those with less demanding processing needs. In recent years, large strides have been made in FPGA and microcontroller technology, with today's devices having clock speeds in excess of 200 MHz; such devices can close an AO control loop at high rates (above 1 kHz) with low latency, which is sufficient for general AO applications, such as in 1-3 m telescopes for space surveillance, or even for amateur astronomy.
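    The kind of processing such a single device has to perform is sketched below as a toy closed-loop AO integrator: wavefront-sensor slopes are reconstructed with a precomputed least-squares matrix and accumulated into deformable-mirror commands. The random interaction matrix, gain and noise level are illustrative assumptions, not parameters of the ANU system.

```python
import numpy as np

rng = np.random.default_rng(0)
n_slopes, n_actuators = 16, 8

# The interaction matrix (slopes produced per unit actuator poke) would normally be
# calibrated on the bench; here it is random for illustration.
interaction = rng.normal(size=(n_slopes, n_actuators))
reconstructor = np.linalg.pinv(interaction)          # least-squares reconstructor

gain = 0.4                                           # integrator gain
commands = np.zeros(n_actuators)
true_aberration = rng.normal(size=n_actuators)

for step in range(50):
    residual = true_aberration - commands            # what the DM has not yet corrected
    slopes = interaction @ residual + 0.01 * rng.normal(size=n_slopes)  # WFS measurement
    commands += gain * (reconstructor @ slopes)      # integrator update on one device
    if step % 10 == 0:
        print(step, np.linalg.norm(residual))        # residual shrinks as the loop closes
```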

  4. Root locus analysis and design of the adaptation process in active noise control.

    Science.gov (United States)

    Tabatabaei Ardekani, Iman; Abdulla, Waleed H

    2012-10-01

    This paper applies root locus theory to develop a graphical tool for the analysis and design of adaptive active noise control systems. It is shown that the poles of the adaptation process performed in these systems move on typical trajectories in the z-plane as the adaptation step-size varies. Based on this finding, the dominant root of the adaptation process and its trajectory can be determined. The first contribution of this paper is formulating parameters of the adaptation process root locus. The next contribution is introducing a mechanism for modifying the trajectory of the dominant root in the root locus. This mechanism creates a single open loop zero in the original root locus. It is shown that appropriate localization of this zero can cause the dominant root of the locus to be pushed toward the origin, and thereby the adaptation process becomes faster. The validity of the theoretical findings is confirmed in an experimental setup which is implemented using real-time multi-threading and multi-core processing techniques.
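    The role of the step size can already be seen in the scalar analogue of the adaptation process: for a single-weight LMS update with input power lam, the weight-error recursion has one pole at z = 1 - mu*lam, which moves along the real axis of the z-plane as mu grows. The sketch below sweeps the step size and reports the pole and convergence behaviour; it is only this scalar analogue, not the paper's full active-noise-control analysis with a secondary path.

```python
# Scalar LMS analogue: weight error obeys e[n+1] = (1 - mu*lam) * e[n], so the
# single pole leaves z = 1 (slow) as mu grows, passes the origin (fastest) and
# exits the unit circle at z = -1 (unstable).
lam = 2.0                                  # assumed input power
for mu in (0.05, 0.25, 0.5, 0.75, 1.05):
    pole = 1.0 - mu * lam
    if abs(pole) >= 1.0:
        print(f"mu={mu:4.2f}  pole={pole:+.2f}  unstable (outside unit circle)")
        continue
    e, steps = 1.0, 0
    while abs(e) > 1e-3:                   # iterate the recursion until negligible
        e = pole * e
        steps += 1
    print(f"mu={mu:4.2f}  pole={pole:+.2f}  converged in {steps} steps")
```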

  5. Psychological and socio-cultural adaptation of international journalism students in Russia: The role of communication skills in the adaptation process

    Directory of Open Access Journals (Sweden)

    Gladkova A.A.

    2017-12-01

    Full Text Available Background. The study of both Russian and international publications issued in the last twenty years revealed a significant gap in the number of studies examining adaptation (general living, psychological, socio-cultural, etc.) in general, i.e., without regard to specific characteristics of the audience, and those describing adaptation of a particular group of people (specific age, ethnic, professional groups, etc.). Objective. The current paper aims to overcome this gap by offering a closer look at the adaptation processes of international journalism students at Russian universities, in particular, their psychological and socio-cultural types of adaptation. The question that interests us the most is how psychological and socio-cultural adaptation of international journalists to-be can be made easier and whether communication-oriented techniques can somehow facilitate this process. Design. In this paper, we provide an overview of current research analyzing adaptation from different angles, which is essential for creating a context for further narrower studies. Results. We discuss adaptation of journalism students in Russia, suggesting ways to make their adaptation in a host country easier and arguing that the development of communication skills can be important for successful adaptation to new living and learning conditions. Conclusion. We argue that there is a need for more detailed, narrow-focused research discussing the specifics of adaptation of different groups of people to a new environment (since we believe different people tend to adapt to new conditions in different ways) as well as research outlining the role of communication competences in their adaptation processes.

  6. Projection neuron circuits resolved using correlative array tomography

    Directory of Open Access Journals (Sweden)

    Daniele Oberti

    2011-04-01

    Full Text Available Assessment of three-dimensional morphological structure and synaptic connectivity is essential for a comprehensive understanding of neural processes controlling behavior. Different microscopy approaches have been proposed based on light microscopy (LM), electron microscopy (EM), or a combination of both. Correlative array tomography (CAT) is a technique in which arrays of ultrathin serial sections are repeatedly stained with fluorescent antibodies against synaptic molecules and neurotransmitters and imaged with LM and EM (Micheva and Smith, 2007). The utility of this correlative approach is limited by the ability to preserve fluorescence and antigenicity on the one hand, and EM tissue ultrastructure on the other. We demonstrate tissue staining and fixation protocols and a workflow that yield an excellent compromise between these multimodal imaging constraints. We adapt CAT for the study of projection neurons between different vocal brain regions in the songbird. We inject fluorescent tracers of different colors into afferent and efferent areas of HVC in zebra finches. Fluorescence of some tracers is lost during tissue preparation but recovered using anti-dye antibodies. Synapses are identified in EM imagery based on their morphology and ultrastructure and classified into projection neuron type based on fluorescence signal. Our adaptation of array tomography, involving the use of fluorescent tracers and heavy-metal rich staining and embedding protocols for high membrane contrast in EM, will be useful for research aimed at statistically describing connectivity between different projection neuron types and for elucidating how sensory signals are routed in the brain and transformed into a meaningful motor output.

  7. Millimetre Level Accuracy GNSS Positioning with the Blind Adaptive Beamforming Method in Interference Environments

    Directory of Open Access Journals (Sweden)

    Saeed Daneshmand

    2016-10-01

    Full Text Available The use of antenna arrays in Global Navigation Satellite System (GNSS) applications is gaining significant attention due to its superior capability to suppress both narrowband and wideband interference. However, the phase distortions resulting from array processing may limit the applicability of these methods for high precision applications using carrier phase based positioning techniques. This paper studies the phase distortions occurring with the adaptive blind beamforming method in which satellite angle of arrival (AoA) information is not employed in the optimization problem. To cater to non-stationary interference scenarios, the array weights of the adaptive beamformer are continuously updated. The effects of these continuous updates on the tracking parameters of a GNSS receiver are analyzed. The second part of this paper focuses on reducing the phase distortions during the blind beamforming process in order to allow the receiver to perform carrier phase based positioning by applying a constraint on the structure of the array configuration and by compensating the array uncertainties. Limitations of the previous methods are studied and a new method is proposed that keeps the simplicity of the blind beamformer structure and, at the same time, reduces tracking degradations while achieving millimetre level positioning accuracy in interference environments. To verify the applicability of the proposed method and analyze the degradations, array signals corresponding to the GPS L1 band are generated using a combination of hardware and software simulators. Furthermore, the amount of degradation and performance of the proposed method under different conditions are evaluated based on Monte Carlo simulations.
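    A common blind (AoA-free) baseline for this setting is the power-minimization beamformer with a distortionless constraint on a reference element, sketched below on a simulated jammer scenario. Since GNSS signals lie well below the noise floor, minimizing output power mainly nulls interference. The array geometry, jammer power and constraint choice are illustrative assumptions and do not reproduce the paper's distortion-compensation method.

```python
import numpy as np

rng = np.random.default_rng(3)

def blind_power_min_weights(snapshots):
    """Blind power-minimization beamformer: minimize output power subject to
    w^H e1 = 1, i.e. w = R^-1 e1 / (e1^H R^-1 e1); no satellite AoA is needed."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
    e1 = np.zeros(snapshots.shape[0], complex)
    e1[0] = 1.0
    w = np.linalg.solve(R, e1)
    return w / (e1.conj() @ w)

# Toy scenario: 4-element uniform linear array, one strong jammer at 40 degrees.
n_elem, n_snap = 4, 2000
d = 0.5                                     # element spacing in wavelengths
steer = lambda deg: np.exp(2j * np.pi * d * np.arange(n_elem) * np.sin(np.radians(deg)))
jammer = 10 * (rng.normal(size=n_snap) + 1j * rng.normal(size=n_snap))
noise = (rng.normal(size=(n_elem, n_snap)) + 1j * rng.normal(size=(n_elem, n_snap))) / np.sqrt(2)
x = np.outer(steer(40.0), jammer) + noise

w = blind_power_min_weights(x)
print(abs(w.conj() @ steer(40.0)))          # deep null toward the jammer (<< 1)
print(abs(w.conj() @ steer(0.0)))           # useful gain retained near boresight
```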

  8. Sensory Processing Subtypes in Autism: Association with Adaptive Behavior

    Science.gov (United States)

    Lane, Alison E.; Young, Robyn L.; Baker, Amy E. Z.; Angley, Manya T.

    2010-01-01

    Children with autism are frequently observed to experience difficulties in sensory processing. This study examined specific patterns of sensory processing in 54 children with autistic disorder and their association with adaptive behavior. Model-based cluster analysis revealed three distinct sensory processing subtypes in autism. These subtypes…

  9. [Role adaptation process of elementary school health teachers: establishing their own positions].

    Science.gov (United States)

    Lee, Jeong Hee; Lee, Byoung Sook

    2014-06-01

    The purpose of this study was to explore and identify patterns from the phenomenon of the role adaptation process in elementary school health teachers and finally, suggest a model to describe the process. Grounded theory methodology and focus group interviews were used. Data were collected from 24 participants of four focus groups. The questions used were about their experience of role adaptation including situational contexts and interactional coping strategies. Transcribed data and field notes were analyzed with continuous comparative analysis. The core category was 'establishing their own positions', an interactional coping strategy. The phenomenon identified by participants was confusion and wandering in their role performance. Influencing contexts were unclear beliefs for their role as health teachers and non-supportive job environments. The result of the adaptation process was consolidation of their positions. Pride as health teachers and social recognition and supports intervened to produce that result. The process had three stages; entry, growth, and maturity. The role adaptation process of elementary school health teachers can be explained as establishing, strengthening and consolidating their own positions. Results of this study can be used as fundamental information for developing programs to support the role adaptation of health teachers.

  10. A quantitative formulation of the dynamic behaviour of adaptation processes to ionizing radiation

    International Nuclear Information System (INIS)

    Pfandler, S.

    1999-12-01

    The discovery of adaptation processes in cells (i.e., increased resistance to the effects of a challenge dose administered after a lower adapting dose) has fuelled the debate on possible cellular processes relevant for low dose exposures. However, numerous experiments on radioadaptive response do not provide a clear picture of the nature of adaptive response and the conditions under which it occurs. This work proposes a model that succeeds in modelling data obtained from various experiments on radioadaptation. The model assumes impaired DNA integrity as the triggering signal for the induction of adaptation. Induction of adaptive response is seen as a two-phase process. First, ionizing radiation induces radicals by water radiolysis, which give rise to specific DNA lesions. Second, these lesions must be perceived and, in a way, processed by the cell, thereby creating the final signal necessary for the comprehensive adaptive response. This processing occurs through some event in S-phase and can be halted by local conformational changes of chromatin induced by ionizing radiation. Thus, the model assumes two counteracting processes that have to be balanced for the triggering signal of adaptation to occur, each of them related to different target volumes. This work comprises a mathematical treatment of radical formation, DNA lesion induction and inhibition of local initiation of replication, which finally provides functions that quantify the reduction of double strand breaks introduced by challenge doses in adapted cells as compared to non-adapted cells. Non-linear regression analyses based upon data from experiments on radioadaptation yield regression curves which describe existing data satisfactorily. Thus, the model corroborates the existence of adaptive response as, in principle, a universal feature of cells and specifies conditions which favor the development of radioadaptation. (author)

  11. The adaptive nature of the human neurocognitive architecture: an alternative model.

    Science.gov (United States)

    La Cerra, P; Bingham, R

    1998-09-15

    The model of the human neurocognitive architecture proposed by evolutionary psychologists is based on the presumption that the demands of hunter-gatherer life generated a vast array of cognitive adaptations. Here we present an alternative model. We argue that the problems inherent in the biological markets of ancestral hominids and their mammalian predecessors would have required an adaptively flexible, on-line information-processing system, and would have driven the evolution of a functionally plastic neural substrate, the neocortex, rather than a confederation of evolutionarily prespecified social cognitive adaptations. In alignment with recent neuroscientific evidence, we suggest that human cognitive processes result from the activation of constructed cortical representational networks, which reflect probabilistic relationships between sensory inputs, behavioral responses, and adaptive outcomes. The developmental construction and experiential modification of these networks are mediated by subcortical circuitries that are responsive to the life history regulatory system. As a consequence, these networks are intrinsically adaptively constrained. The theoretical and research implications of this alternative evolutionary model are discussed.

  12. Adaptively optimizing stochastic resonance in visual system

    Science.gov (United States)

    Yang, Tao

    1998-08-01

    A recent psychophysics experiment showed that noise strength can affect perceived image quality. This work presents an adaptive process for achieving the optimal perceived image quality in a simple image perception array, a basic model of an image sensor. A reference image from memory is used to construct a cost function and to define the optimal noise strength as the point where the cost function reaches its minimum. The reference image is binary and defines the background and the object. Finally, an adaptive algorithm is proposed to search for the optimal noise strength. Computer experiments show that if the reference image is a thresholded version of the sub-threshold input image, the sensor array produces an optimal output in which the background and the object have the greatest contrast. If the reference image differs from a thresholded version of the sub-threshold input image, the output usually gives a sub-optimal contrast between the object and the background.
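    The adaptive search described above can be sketched directly: add noise of a candidate strength to the sub-threshold image, threshold each sensor, measure disagreement with the reference (a thresholded version of the clean input), and keep the noise strength that minimizes this cost. Image sizes, levels and the threshold below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def perceive(image, noise_std, threshold=1.0):
    """Each 'sensor' fires when the sub-threshold input plus noise crosses the threshold."""
    return (image + rng.normal(0.0, noise_std, image.shape) > threshold).astype(float)

def cost(output, reference):
    """Disagreement with the reference (a thresholded version of the clean input)."""
    return np.mean(np.abs(output - reference))

# Sub-threshold input: object at 0.8, background at 0.3, both below the threshold 1.0.
image = np.full((64, 64), 0.3)
image[16:48, 16:48] = 0.8
reference = (image > 0.5).astype(float)               # binary object/background mask

noise_levels = np.linspace(0.05, 2.0, 40)
costs = [cost(perceive(image, s), reference) for s in noise_levels]
best = noise_levels[int(np.argmin(costs))]
print(f"optimal noise std ~ {best:.2f}")              # an intermediate, non-zero noise level wins
```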

  13. Adaptive process triage system cannot identify patients with gastrointestinal perforation

    DEFF Research Database (Denmark)

    Bohm, Aske Mathias; Tolstrup, Mai-Britt; Gögenur, Ismail

    2017-01-01

    INTRODUCTION: Adaptive process triage (ADAPT) is a triage tool developed to assess the severity and address the priority of emergency patients. In 2009-2011, ADAPT was the most frequently used triage system in Denmark. Until now, no Danish triage system has been evaluated based on a selective group...... triaged as green or yellow had a GIP that was not identified by the triage system. CONCLUSION: ADAPT is incapable of identifying one of the most critically ill patient groups in need of emergency abdominal surgery. FUNDING: none. TRIAL REGISTRATION: HEH-2013-034 I-Suite: 02336....

  14. Array-based techniques for fingerprinting medicinal herbs

    Directory of Open Access Journals (Sweden)

    Xue Charlie

    2011-05-01

    Full Text Available Abstract Poor quality control of medicinal herbs has led to instances of toxicity, poisoning and even deaths. The fundamental step in quality control of herbal medicine is accurate identification of herbs. Array-based techniques have recently been adapted to authenticate or identify herbal plants. This article reviews the current array-based techniques, e.g., oligonucleotide microarrays, gene-based probe microarrays, Suppression Subtractive Hybridization (SSH)-based arrays, Diversity Array Technology (DArT) and Subtracted Diversity Array (SDA). We further compare these techniques according to important parameters such as markers, polymorphism rates, restriction enzymes and sample type. The applicability of array-based methods for fingerprinting depends on the availability of genomic and genetic information for the species to be fingerprinted. For species with little genome sequence information but high polymorphism rates, SDA techniques are particularly recommended because they require less labour and lower material cost.

  15. Fabrication of Aligned Polyaniline Nanofiber Array via a Facile Wet Chemical Process.

    Science.gov (United States)

    Sun, Qunhui; Bi, Wu; Fuller, Thomas F; Ding, Yong; Deng, Yulin

    2009-06-17

    In this work, we demonstrate for the first time a template-free approach to synthesize an aligned polyaniline nanofiber (PN) array on a passivated gold (Au) substrate via a facile wet chemical process. The Au surface was first modified using 4-aminothiophenol (4-ATP) to afford the surface functionality, followed by oxidation polymerization of aniline (AN) monomer in an aqueous medium using ammonium persulfate as the oxidant and tartaric acid as the doping agent. The results show that a vertically aligned PANI nanofiber array with individual fiber diameters of ca. 100 nm, heights of ca. 600 nm and a packing density of ca. 40 pieces·µm⁻² was synthesized. Copyright © 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. A single-rate context-dependent learning process underlies rapid adaptation to familiar object dynamics.

    Science.gov (United States)

    Ingram, James N; Howard, Ian S; Flanagan, J Randall; Wolpert, Daniel M

    2011-09-01

    Motor learning has been extensively studied using dynamic (force-field) perturbations. These induce movement errors that result in adaptive changes to the motor commands. Several state-space models have been developed to explain how trial-by-trial errors drive the progressive adaptation observed in such studies. These models have been applied to adaptation involving novel dynamics, which typically occurs over tens to hundreds of trials, and which appears to be mediated by a dual-rate adaptation process. In contrast, when manipulating objects with familiar dynamics, subjects adapt rapidly within a few trials. Here, we apply state-space models to familiar dynamics, asking whether adaptation is mediated by a single-rate or dual-rate process. Previously, we reported a task in which subjects rotate an object with known dynamics. By presenting the object at different visual orientations, adaptation was shown to be context-specific, with limited generalization to novel orientations. Here we show that a multiple-context state-space model, with a generalization function tuned to visual object orientation, can reproduce the time-course of adaptation and de-adaptation as well as the observed context-dependent behavior. In contrast to the dual-rate process associated with novel dynamics, we show that a single-rate process mediates adaptation to familiar object dynamics. The model predicts that during exposure to the object across multiple orientations, there will be a degree of independence for adaptation and de-adaptation within each context, and that the states associated with all contexts will slowly de-adapt during exposure in one particular context. We confirm these predictions in two new experiments. Results of the current study thus highlight similarities and differences in the processes engaged during exposure to novel versus familiar dynamics. In both cases, adaptation is mediated by multiple context-specific representations. In the case of familiar object dynamics
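    The state-space models referred to here have a simple trial-by-trial form, x(n+1) = A·x(n) + B·e(n) with error e(n) = perturbation − output; a single-rate model uses one state, a dual-rate model a fast and a slow state. The sketch below simulates both under a constant perturbation followed by washout; the retention (A) and learning (B) values are illustrative, not the parameters fitted in the study.

```python
import numpy as np

def simulate(perturbation, model="single"):
    """Trial-by-trial state-space adaptation: the motor output x is updated from the
    error e = perturbation - output. 'single' uses one state; 'dual' combines a
    fast-learning/fast-forgetting state with a slow state (illustrative values)."""
    if model == "single":
        A, B = np.array([0.99]), np.array([0.35])
    else:
        A, B = np.array([0.59, 0.992]), np.array([0.21, 0.02])
    x = np.zeros_like(A)
    outputs = []
    for p in perturbation:
        e = p - x.sum()                    # movement error on this trial
        x = A * x + B * e                  # state update
        outputs.append(x.sum())
    return np.array(outputs)

# A constant perturbation for 100 trials, then washout (zero) for 50 trials.
pert = np.concatenate([np.ones(100), np.zeros(50)])
single = simulate(pert, "single")
dual = simulate(pert, "dual")
print(single[:3].round(3), single[99].round(3))   # rapid single-rate adaptation
print(dual[:3].round(3), dual[99].round(3))       # slower early learning; slow state persists
```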

  17. A single-rate context-dependent learning process underlies rapid adaptation to familiar object dynamics.

    Directory of Open Access Journals (Sweden)

    James N Ingram

    2011-09-01

    Full Text Available Motor learning has been extensively studied using dynamic (force-field) perturbations. These induce movement errors that result in adaptive changes to the motor commands. Several state-space models have been developed to explain how trial-by-trial errors drive the progressive adaptation observed in such studies. These models have been applied to adaptation involving novel dynamics, which typically occurs over tens to hundreds of trials, and which appears to be mediated by a dual-rate adaptation process. In contrast, when manipulating objects with familiar dynamics, subjects adapt rapidly within a few trials. Here, we apply state-space models to familiar dynamics, asking whether adaptation is mediated by a single-rate or dual-rate process. Previously, we reported a task in which subjects rotate an object with known dynamics. By presenting the object at different visual orientations, adaptation was shown to be context-specific, with limited generalization to novel orientations. Here we show that a multiple-context state-space model, with a generalization function tuned to visual object orientation, can reproduce the time-course of adaptation and de-adaptation as well as the observed context-dependent behavior. In contrast to the dual-rate process associated with novel dynamics, we show that a single-rate process mediates adaptation to familiar object dynamics. The model predicts that during exposure to the object across multiple orientations, there will be a degree of independence for adaptation and de-adaptation within each context, and that the states associated with all contexts will slowly de-adapt during exposure in one particular context. We confirm these predictions in two new experiments. Results of the current study thus highlight similarities and differences in the processes engaged during exposure to novel versus familiar dynamics. In both cases, adaptation is mediated by multiple context-specific representations. In the case of familiar

  18. Influencing adaptation processes on the Australian rangelands for social and ecological resilience

    Directory of Open Access Journals (Sweden)

    Nadine A. Marshall

    2014-06-01

    Full Text Available Resource users require the capacity to cope and adapt to climate changes affecting resource condition if they, and their industries, are to remain viable. Understanding individual-scale responses to a changing climate will be an important component of designing well-targeted, broad-scale strategies and policies. Because of the interdependencies between people and ecosystems, understanding and supporting resilience of resource-dependent people may be as important an aspect of effective resource management as managing the resilience of ecological components. We refer to the northern Australian rangelands as an example of a system that is particularly vulnerable to the impacts of climate change and look for ways to enhance the resilience of the system. Vulnerability of the social system comprises elements of adaptive capacity and sensitivity to change (resource dependency) as well as exposure, which is not examined here. We assessed the adaptive capacity of 240 cattle producers, using four established dimensions, and investigated the association between adaptive capacity and climate sensitivity (or resource dependency) as measured through 14 established dimensions. We found that occupational identity, employability, networks, strategic approach, environmental awareness, dynamic resource use, and use of technology were all positively correlated with at least one dimension of adaptive capacity and that place attachment was negatively correlated with adaptive capacity. These results suggest that adaptation processes could be influenced by focusing on adaptive capacity and these aspects of climate sensitivity. Managing the resilience of individuals is critical to processes of adaptation at higher levels and needs greater attention if adaptation processes are to be shaped and influenced.

  19. Frequency Diverse Array Radar Signal Processing via Space-Range-Doppler Focus (SRDF) Method

    Directory of Open Access Journals (Sweden)

    Chen Xiaolong

    2018-04-01

    Full Text Available To meet the urgent demand for low-observable moving target detection in complex environments, a novel Frequency Diverse Array (FDA) radar signal processing method based on Space-Range-Doppler Focusing (SRDF) is proposed in this paper. The current development status of the FDA radar, the design of the array structure, beamforming, and joint estimation of distance and angle are systematically reviewed. The extra degrees of freedom provided by FDA radar are fully utilized, including the Degrees Of Freedom (DOFs) of the transmitted waveform, the location of array elements, the correlation of beam azimuth and distance, and the long dwell time, which are also the DOFs in the joint spatial (angle, distance) and frequency (Doppler) dimensions. Simulation results show that the proposed method has the potential of improving target detection and parameter estimation for weak moving targets in complex environments and has broad application prospects in clutter and interference suppression, moving target refinement, etc.
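    The range-angle coupling that FDA processing exploits can be illustrated with the transmit array factor of a uniform FDA, where element n radiates at f0 + n·delta_f; the sketch below shows the beam peak moving in angle as the range changes. The carrier, frequency increment and element count are arbitrary illustrative values, not those of the SRDF method.

```python
import numpy as np

c = 3e8
f0 = 10e9                 # carrier (Hz)
delta_f = 3e3             # small frequency increment between elements (Hz)
n_elem = 16
d = c / f0 / 2            # half-wavelength spacing

def fda_array_factor(theta_deg, range_m, t=0.0):
    """Transmit array factor of a uniform FDA: element n radiates at f0 + n*delta_f,
    so the beam pattern depends jointly on angle AND range (and slowly on time)."""
    n = np.arange(n_elem)
    theta = np.radians(theta_deg)
    phase = 2 * np.pi * (n * delta_f * (t - range_m / c)
                         + f0 * n * d * np.sin(theta) / c)
    return abs(np.sum(np.exp(1j * phase))) / n_elem

angles = np.linspace(-60, 60, 241)
for range_m in (20e3, 40e3, 60e3):
    peak = angles[int(np.argmax([fda_array_factor(a, range_m) for a in angles]))]
    print(f"range {range_m/1e3:.0f} km -> beam peak near {peak:+.1f} deg")   # peak shifts with range
```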

  20. Programmable cellular arrays. Faults testing and correcting in cellular arrays

    International Nuclear Information System (INIS)

    Cercel, L.

    1978-03-01

    A review of some recent research on programmable cellular arrays in computing and digital information processing systems is presented. It covers both combinational and sequential arrays, either with fully arbitrary behaviour or realizing better implementations of specialized blocks such as arithmetic units, counters, comparators, control systems, memory blocks, etc. The paper also presents applications of cellular arrays in microprogramming, in the implementation of a specialized computer for matrix operations, and in the modeling of universal computing systems. The last section deals with problems of fault testing and correction in cellular arrays. (author)

  1. Assembly and Integration Process of the First High Density Detector Array for the Atacama Cosmology Telescope

    Science.gov (United States)

    Li, Yaqiong; Choi, Steve; Ho, Shuay-Pwu; Crowley, Kevin T.; Salatino, Maria; Simon, Sara M.; Staggs, Suzanne T.; Nati, Federico; Wollack, Edward J.

    2016-01-01

    The Advanced ACTPol (AdvACT) upgrade on the Atacama Cosmology Telescope (ACT) consists of multichroic Transition Edge Sensor (TES) detector arrays to measure the Cosmic Microwave Background (CMB) polarization anisotropies in multiple frequency bands. The first AdvACT detector array, sensitive to both 150 and 230 GHz, is fabricated on a 150 mm diameter wafer and read out with a completely different scheme compared to ACTPol. Approximately 2000 TES bolometers are packed into the wafer, leading to much denser detector packing and readout circuitry. The demonstration of the assembly and integration of the AdvACT arrays is important for the next generation CMB experiments, which will continue to increase the pixel number and density. We present the detailed assembly process of the first AdvACT detector array.

  2. Micro-magnet arrays for specific single bacterial cell positioning

    Energy Technology Data Exchange (ETDEWEB)

    Pivetal, Jérémy, E-mail: jeremy.piv@netcmail.com [Ecole Centrale de Lyon, CNRS UMR 5005, Laboratoire Ampère, F-69134 Écully (France); Royet, David [Ecole Centrale de Lyon, CNRS UMR 5005, Laboratoire Ampère, F-69134 Écully (France); Ciuta, Georgeta [Univ. Grenoble Alpes, Inst NEEL, F-38042 Grenoble (France); CNRS, Inst NEEL, F-38042 Grenoble (France); Frenea-Robin, Marie [Université de Lyon, Université Lyon 1, CNRS UMR 5005, Laboratoire Ampère, F-69622 Villeurbanne (France); Haddour, Naoufel [Ecole Centrale de Lyon, CNRS UMR 5005, Laboratoire Ampère, F-69134 Écully (France); Dempsey, Nora M. [Univ. Grenoble Alpes, Inst NEEL, F-38042 Grenoble (France); CNRS, Inst NEEL, F-38042 Grenoble (France); Dumas-Bouchiat, Frédéric [Univ Limoges, CNRS, SPCTS UMR 7513, 12 Rue Atlantis, F-87068 Limoges (France); Simonet, Pascal [Ecole Centrale de Lyon, CNRS UMR 5005, Laboratoire Ampère, F-69134 Écully (France)

    2015-04-15

    In various contexts such as pathogen detection or analysis of microbial diversity where cellular heterogeneity must be taken into account, there is a growing need for tools and methods that enable microbiologists to analyze bacterial cells individually. One of the main challenges in the development of new platforms for single cell studies is to perform precise cell positioning, but the ability to specifically target cells is also important in many applications. In this work, we report the development of new strategies to selectively trap single bacterial cells upon large arrays, based on the use of micro-magnets. Escherichia coli bacteria were used to demonstrate magnetically driven bacterial cell organization. In order to provide a flexible approach adaptable to several applications in the field of microbiology, cells were magnetically and specifically labeled using two different strategies, namely immunomagnetic labeling and magnetic in situ hybridization. Results show that centimeter-sized arrays of targeted, isolated bacteria can be successfully created upon the surface of a flat magnetically patterned hard magnetic film. Efforts are now being directed towards the integration of a detection tool to provide a complete micro-system device for a variety of microbiological applications. - Highlights: 1. We report a new approach to selectively micropattern bacterial cells individually upon micro-magnet arrays. 2. Permanent micro-magnets of a size approaching that of bacteria could be fabricated using a Thermo-Magnetic Patterning process. 3. Bacterial cells were labeled using two different magnetic labeling strategies, providing a flexible approach adaptable to several applications in the field of microbiology.

  3. High-throughput fabrication of micrometer-sized compound parabolic mirror arrays by using parallel laser direct-write processing

    International Nuclear Information System (INIS)

    Yan, Wensheng; Gu, Min; Cumming, Benjamin P

    2015-01-01

    Micrometer-sized parabolic mirror arrays have significant applications in both light-emitting diodes and solar cells. However, low fabrication throughput has been identified as a major obstacle to large-scale application of the mirror arrays, owing to the serial nature of the conventional method. Here, the mirror arrays are fabricated by using parallel laser direct-write processing, which addresses this barrier. In addition, it is demonstrated that the parallel writing is able to fabricate complex arrays as well as simple arrays and thus offers wider applications. Optical measurements show that each single mirror confines the full-width at half-maximum value to as small as 17.8 μm at a height of 150 μm whilst providing a transmittance of up to 68.3% at a wavelength of 633 nm, in good agreement with the calculated values. (paper)

  4. Genome architecture enables local adaptation of Atlantic cod despite high connectivity

    DEFF Research Database (Denmark)

    Barth, Julia M I; Berg, Paul R; Jonsson, Per R.

    2017-01-01

    Adaptation to local conditions is a fundamental process in evolution; however, mechanisms maintaining local adaptation despite high gene flow are still poorly understood. Marine ecosystems provide a wide array of diverse habitats that frequently promote ecological adaptation even in species characterized by strong levels of gene flow. As one example, populations of the marine fish Atlantic cod (Gadus morhua) are highly connected due to immense dispersal capabilities but nevertheless show local adaptation in several key traits. By combining population genomic analyses based on 12K single-nucleotide polymorphisms with larval dispersal patterns inferred using a biophysical ocean model, we show that Atlantic cod individuals residing in sheltered estuarine habitats of Scandinavian fjords mainly belong to offshore oceanic populations with considerable connectivity between these diverse ecosystems. Nevertheless...

  5. Collective Mindfulness in Post-implementation IS Adaptation Processes

    DEFF Research Database (Denmark)

    Aanestad, Margun; Jensen, Tina Blegind

    2016-01-01

    identify the way in which the organizational capability we call "collective mindfulness" was achieved. Being aware of how to practically achieve collective mindfulness, managers may be able to better facilitate mindful handling of post-implementation IS adaptation processes....

  6. ADAPTIVE CONTEXT PROCESSING IN ON-LINE HANDWRITTEN CHARACTER RECOGNITION

    NARCIS (Netherlands)

    Iwayama, N.; Ishigaki, K.

    2004-01-01

    We propose a new approach to context processing in on-line handwritten character recognition (OLCR). Based on the observation that writers often repeat the strings that they input, we take the approach of adaptive context processing (ACP). In ACP, the strings input by a writer are automatically

  7. Fabricating process of hollow out-of-plane Ni microneedle arrays and properties of the integrated microfluidic device

    Science.gov (United States)

    Zhu, Jun; Cao, Ying; Wang, Hong; Li, Yigui; Chen, Xiang; Chen, Di

    2013-07-01

    Although microfluidic devices that integrate microfluidic chips with hollow out-of-plane microneedle arrays have many advantages in transdermal drug delivery applications, difficulties exist in their fabrication due to the special three-dimensional structures of hollow out-of-plane microneedles. A new, cost-effective process for the fabrication of a hollow out-of-plane Ni microneedle array is presented. The integration of PDMS microchips with the Ni hollow microneedle array and the properties of microfluidic devices are also presented. The integrated microfluidic devices provide a new approach for transdermal drug delivery.

  8. Post-processing Free Quantum Random Number Generator Based on Avalanche Photodiode Array

    International Nuclear Information System (INIS)

    Li Yang; Liao Sheng-Kai; Liang Fu-Tian; Shen Qi; Liang Hao; Peng Cheng-Zhi

    2016-01-01

    Quantum random number generators adopting single photon detection have been limited by the non-negligible dead time of avalanche photodiodes (APDs). We propose a new approach based on an APD array to improve the generation rate of random numbers significantly. This method compares the detectors' responses to consecutive optical pulses and generates the random sequence. We implement a demonstration experiment to show its simplicity, compactness and scalability. The generated numbers are proved to be unbiased, post-processing free, ready to use, and their randomness is verified by using the National Institute of Standards and Technology (NIST) statistical test suite. The random bit generation efficiency is as high as 32.8% and the potential generation rate adopting the 32 × 32 APD array is up to tens of Gbits/s. (paper)

  9. Dissociating Face Identity and Facial Expression Processing Via Visual Adaptation

    Directory of Open Access Journals (Sweden)

    Hong Xu

    2012-10-01

    Full Text Available Face identity and facial expression are processed in two distinct neural pathways. However, most of the existing face adaptation literature studies them separately, despite the fact that they are two aspects of the same face. The current study conducted a systematic comparison between these two aspects by face adaptation, investigating how top- and bottom-half face parts contribute to the processing of face identity and facial expression. A real face (sad, “Adam”) and its two size-equivalent face parts (top- and bottom-half) were used as the adaptor in separate conditions. For face identity adaptation, the test stimuli were generated by morphing Adam's sad face with another person's sad face (“Sam”). For facial expression adaptation, the test stimuli were created by morphing Adam's sad face with his neutral face and morphing the neutral face with his happy face. In each trial, after exposure to the adaptor, observers indicated the perceived face identity or facial expression of the following test face via a key press. They were also tested in a baseline condition without adaptation. Results show that the top- and bottom-half face each generated a significant face identity aftereffect. However, the aftereffect from top-half face adaptation is much larger than that from the bottom-half face. In contrast, only the bottom-half face generated a significant facial expression aftereffect. This dissociation of top- and bottom-half face adaptation suggests that face parts play different roles in face identity and facial expression processing. It thus provides further evidence for the distributed systems of face perception.

  10. Controllable 3D architectures of aligned carbon nanotube arrays by multi-step processes

    Science.gov (United States)

    Huang, Shaoming

    2003-06-01

    An effective way to fabricate large-area three-dimensional (3D) aligned CNT patterns based on pyrolysis of iron(II) phthalocyanine (FePc) by a two-step process is reported. The controllable generation of different lengths and the selective growth of aligned CNT arrays on metal-patterned (e.g., Ag and Au) substrates are the basis for generating such 3D aligned CNT architectures. By controlling experimental conditions, 3D aligned CNT arrays with different lengths/densities and morphologies/structures, as well as multi-layered architectures, can be fabricated on a large scale by multi-step pyrolysis of FePc. These 3D architectures could have interesting properties and be applied to the development of novel nanotube-based devices.

  11. Astronomical Data Processing Using SciQL, an SQL Based Query Language for Array Data

    Science.gov (United States)

    Zhang, Y.; Scheers, B.; Kersten, M.; Ivanova, M.; Nes, N.

    2012-09-01

    SciQL (pronounced as ‘cycle’) is a novel SQL-based array query language for scientific applications with both tables and arrays as first-class citizens. SciQL lowers the barrier to adopting a relational DBMS (RDBMS) in scientific domains, because it includes functionality often found only in mathematics software packages. In this paper, we demonstrate the usefulness of SciQL for astronomical data processing using examples from the Transient Key Project of the LOFAR radio telescope. In particular, we show how the LOFAR light-curve database of all detected sources can be constructed by correlating sources across the spatial, frequency, time and polarisation domains.

  12. Numerical Simulation of the Diffusion Processes in Nanoelectrode Arrays Using an Axial Neighbor Symmetry Approximation.

    Science.gov (United States)

    Peinetti, Ana Sol; Gilardoni, Rodrigo S; Mizrahi, Martín; Requejo, Felix G; González, Graciela A; Battaglini, Fernando

    2016-06-07

    Nanoelectrode arrays have introduced a completely new battery of devices with fascinating electrocatalytic, sensitivity, and selectivity properties. To understand and predict the electrochemical response of these arrays, a theoretical framework is needed. Cyclic voltammetry is a well-suited experimental technique for understanding the underlying diffusion and kinetic processes. Previous works describing microelectrode arrays have exploited the interelectrode distance to simulate their behavior as the summation of individual electrodes. This approach becomes limited when the size of the electrodes decreases to the nanometer scale, due to their strong radial effect and the consequent overlapping of the diffusional fields. In this work, we present a computational model able to simulate the electrochemical behavior of arrays working either as the summation of individual electrodes or as affected by the overlapping of the diffusional fields, without prior assumptions. Our computational model relies on dividing a regular electrode array into cells. In each of them, there is a central electrode surrounded by neighbor electrodes; these neighbor electrodes are transformed into a ring maintaining the same active electrode area as the summation of the closest neighbor electrodes. Using this axial neighbor symmetry approximation, the problem acquires cylindrical symmetry and becomes applicable to any diffusion pattern. The model is validated against micro- and nanoelectrode arrays, showing its ability to predict their behavior and therefore to be used as a design tool.
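
    The geometric reduction at the heart of the axial neighbor symmetry approximation can be illustrated with a short sketch: the nearest-neighbor electrodes of one cell are replaced by a concentric ring whose area equals their combined active area. The snippet below is a minimal illustration of that bookkeeping only (it does not solve the diffusion problem), and the electrode radius, pitch and neighbor count are hypothetical values, not taken from the paper.

```python
import math

def equivalent_ring(radius, pitch, n_neighbors=6):
    """Collapse the nearest-neighbor electrodes of one array cell into a
    concentric ring that preserves their combined active area, as in the
    axial neighbor symmetry approximation.

    radius      : radius of a single disc electrode
    pitch       : centre-to-centre distance to the nearest neighbors
    n_neighbors : number of nearest neighbors collapsed into the ring
    """
    neighbor_area = n_neighbors * math.pi * radius ** 2
    # A ring centred on the neighbor distance with area 2*pi*pitch*width:
    width = neighbor_area / (2.0 * math.pi * pitch)
    return pitch - width / 2.0, pitch + width / 2.0   # inner, outer radius

# Hypothetical geometry: 50 nm electrodes on a 500 nm hexagonal pitch
print(equivalent_ring(50.0, 500.0))
```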

  13. Effect of Source, Surfactant, and Deposition Process on Electronic Properties of Nanotube Arrays

    Directory of Open Access Journals (Sweden)

    Dheeraj Jain

    2011-01-01

    Full Text Available We study the electronic properties of arrays of carbon nanotubes from several different sources, which differ in the manufacturing process used and in average properties such as length, diameter, and chirality. We used several common surfactants to disperse each of these nanotubes and then deposited them on Si wafers from their aqueous solutions using dielectrophoresis. Transport measurements were performed to compare and determine the effect of different surfactants, deposition processes, and synthesis processes on nanotubes synthesized using CVD, CoMoCAT, laser ablation, and HiPCO.

  14. Modeling for deformable mirrors and the adaptive optics optimization program

    International Nuclear Information System (INIS)

    Henesian, M.A.; Haney, S.W.; Trenholme, J.B.; Thomas, M.

    1997-01-01

    We discuss aspects of adaptive optics optimization for large fusion laser systems such as the 192-arm National Ignition Facility (NIF) at LLNL. By way of example, we considered the discrete actuator deformable mirror and Hartmann sensor system used on the Beamlet laser. Beamlet is a single-aperture prototype of the 11-0-5 slab amplifier design for NIF, and so we expect similar optical distortion levels and deformable mirror correction requirements. We are now in the process of developing a numerically efficient, object-oriented C++ implementation of our adaptive optics and wavefront sensor code, but this code is not yet operational. Results are based instead on the prototype algorithms, coded up in an interpreted array-processing computer language.

  15. Adaptive Dynamic Process Scheduling on Distributed Memory Parallel Computers

    Directory of Open Access Journals (Sweden)

    Wei Shu

    1994-01-01

    Full Text Available One of the challenges in programming distributed memory parallel machines is deciding how to allocate work to processors. This problem is particularly important for computations with unpredictable dynamic behaviors or irregular structures. We present a scheme for dynamic scheduling of medium-grained processes that is useful in this context. The adaptive contracting within neighborhood (ACWN is a dynamic, distributed, load-dependent, and scalable scheme. It deals with dynamic and unpredictable creation of processes and adapts to different systems. The scheme is described and contrasted with two other schemes that have been proposed in this context, namely the randomized allocation and the gradient model. The performance of the three schemes on an Intel iPSC/2 hypercube is presented and analyzed. The experimental results show that even though the ACWN algorithm incurs somewhat larger overhead than the randomized allocation, it achieves better performance in most cases due to its adaptiveness. Its feature of quickly spreading the work helps it outperform the gradient model in performance and scalability.
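
    The abstract characterizes ACWN as a dynamic, distributed, load-dependent scheme but does not spell out its rules here, so the following is only a simplified illustration of the general idea: a node keeps newly created processes while its load is below a threshold and otherwise contracts work to its least-loaded neighbor. The class, threshold rule and three-node topology are assumptions made for illustration, not the published algorithm.

```python
# Simplified, load-dependent neighborhood scheduler in the spirit of ACWN.
# The Node class, threshold rule and topology are illustrative assumptions,
# not the published algorithm.

class Node:
    def __init__(self, name, threshold=4):
        self.name = name
        self.queue = []             # medium-grained processes held locally
        self.neighbors = []         # directly connected nodes
        self.threshold = threshold  # load above which work is pushed away

    def load(self):
        return len(self.queue)

    def submit(self, task):
        """Keep a newly created task locally while lightly loaded; otherwise
        contract it to the least-loaded neighbor (only if strictly lighter)."""
        if self.load() < self.threshold or not self.neighbors:
            self.queue.append(task)
            return self.name
        target = min(self.neighbors, key=Node.load)
        if target.load() < self.load():
            return target.submit(task)
        self.queue.append(task)
        return self.name

# Tiny three-node neighborhood: tasks spill over once node A fills up
a, b, c = Node("A"), Node("B"), Node("C")
a.neighbors, b.neighbors, c.neighbors = [b, c], [a, c], [a, b]
print([a.submit(f"task{i}") for i in range(10)])
```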

  16. Adoption: biological and social processes linked to adaptation.

    Science.gov (United States)

    Grotevant, Harold D; McDermott, Jennifer M

    2014-01-01

    Children join adoptive families through domestic adoption from the public child welfare system, infant adoption through private agencies, and international adoption. Each pathway presents distinctive developmental opportunities and challenges. Adopted children are at higher risk than the general population for problems with adaptation, especially externalizing, internalizing, and attention problems. This review moves beyond the field's emphasis on adoptee-nonadoptee differences to highlight biological and social processes that affect adaptation of adoptees across time. The experience of stress, whether prenatal, postnatal/preadoption, or during the adoption transition, can have significant impacts on the developing neuroendocrine system. These effects can contribute to problems with physical growth, brain development, and sleep, activating cascading effects on social, emotional, and cognitive development. Family processes involving contact between adoptive and birth family members, co-parenting in gay and lesbian adoptive families, and racial socialization in transracially adoptive families affect social development of adopted children into adulthood.

  17. Lightweight solar array blanket tooling, laser welding and cover process technology

    Science.gov (United States)

    Dillard, P. A.

    1983-01-01

    A two phase technology investigation was performed to demonstrate effective methods for integrating 50 micrometer thin solar cells into ultralightweight module designs. During the first phase, innovative tooling was developed which allows lightweight blankets to be fabricated in a manufacturing environment with acceptable yields. During the second phase, the tooling was improved and the feasibility of laser processing of lightweight arrays was confirmed. The development of the cell/interconnect registration tool and interconnect bonding by laser welding is described.

  18. Adaptive algorithms of position and energy reconstruction in Anger-camera type detectors: experimental data processing in ANTS

    Energy Technology Data Exchange (ETDEWEB)

    Morozov, A; Fraga, F A F; Fraga, M M F R; Margato, L M S; Pereira, L [LIP-Coimbra and Departamento de Física, Universidade de Coimbra, Rua Larga, Coimbra (Portugal); Defendi, I; Jurkovic, M [Forschungs-Neutronenquelle Heinz Maier-Leibnitz (FRM II), TUM, Lichtenbergstr. 1, Garching (Germany); Engels, R; Kemmerling, G [Zentralinstitut für Elektronik, Forschungszentrum Jülich GmbH, Wilhelm-Johnen-Straße, Jülich (Germany); Gongadze, A; Guerard, B; Manzin, G; Niko, H; Peyaud, A; Piscitelli, F [Institut Laue Langevin, 6 Rue Jules Horowitz, Grenoble (France); Petrillo, C; Sacchetti, F [Istituto Nazionale per la Fisica della Materia, Unità di Perugia, Via A. Pascoli, Perugia (Italy); Raspino, D; Rhodes, N J; Schooneveld, E M, E-mail: andrei@coimbra.lip.pt [Science and Technology Facilities Council, Rutherford Appleton Laboratory, Harwell Oxford, Didcot (United Kingdom); and others

    2013-05-01

    The software package ANTS (Anger-camera type Neutron detector: Toolkit for Simulations), developed for the simulation of Anger-type gaseous detectors for thermal neutron imaging, was extended to include a module for experimental data processing. Data recorded with a sensor array containing up to 100 photomultiplier tubes (PMTs) or silicon photomultipliers (SiPMs) in a custom configuration can be loaded, and the positions and energies of the events can be reconstructed using the Center-of-Gravity, Maximum Likelihood or Least Squares algorithm. A particular strength of the new module is its ability to reconstruct the light response functions and relative gains of the photomultipliers from flood-field illumination data using adaptive algorithms. The performance of the module is demonstrated with simulated data generated in ANTS and experimental data recorded with a 19-PMT neutron detector. The package executables are publicly available at http://coimbra.lip.pt/~andrei/.
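
    Of the three reconstruction algorithms named, the Center-of-Gravity method is the simplest to sketch: the event position is estimated as the amplitude-weighted mean of the sensor positions. The sketch below assumes a made-up 3 x 3 sensor grid and a synthetic event; it is not code from the ANTS package.

```python
import numpy as np

def center_of_gravity(positions, amplitudes):
    """Estimate the event position as the amplitude-weighted mean of the
    sensor positions (the Center-of-Gravity algorithm named above).

    positions  : (N, 2) array of PMT/SiPM centre coordinates, e.g. in mm
    amplitudes : (N,)   array of signal amplitudes for one event
    """
    amplitudes = np.clip(amplitudes, 0.0, None)   # ignore negative noise
    total = amplitudes.sum()
    if total <= 0:
        raise ValueError("no signal in this event")
    return amplitudes @ positions / total

# Hypothetical 3 x 3 sensor grid on a 25 mm pitch and one synthetic event
xy = np.array([(x, y) for y in (-25, 0, 25) for x in (-25, 0, 25)], float)
signal = np.array([1, 4, 1, 4, 20, 4, 1, 4, 1], float)
print(center_of_gravity(xy, signal))   # ~ (0, 0) for this symmetric event
```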

  19. Adaptive multiple importance sampling for Gaussian processes

    Czech Academy of Sciences Publication Activity Database

    Xiong, X.; Šmídl, Václav; Filippone, M.

    2017-01-01

    Roč. 87, č. 8 (2017), s. 1644-1665 ISSN 0094-9655 R&D Projects: GA MŠk(CZ) 7F14287 Institutional support: RVO:67985556 Keywords : Gaussian Process * Bayesian estimation * Adaptive importance sampling Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability Impact factor: 0.757, year: 2016 http://library.utia.cas.cz/separaty/2017/AS/smidl-0469804.pdf

  20. Processes and Materials for Flexible PV Arrays

    National Research Council Canada - National Science Library

    Gierow, Paul

    2002-01-01

    .... A parallel incentive for the development of flexible PV arrays is the possibility of synergistic advantages for certain types of spacecraft, in particular the Solar Thermal Propulsion (STP) Vehicle...

  1. Multivariable adaptive control of bio process

    Energy Technology Data Exchange (ETDEWEB)

    Maher, M.; Bahhou, B.; Roux, G. [Centre National de la Recherche Scientifique (CNRS), 31 - Toulouse (France); Maher, M. [Faculte des Sciences, Rabat (Morocco). Lab. de Physique

    1995-12-31

    This paper presents a multivariable adaptive control of a continuous-flow fermentation process for alcohol production. The linear quadratic control strategy is used for the regulation of substrate and ethanol concentrations in the bioreactor. The control inputs are the dilution rate and the influent substrate concentration. A robust identification algorithm is used for the on-line estimation of the linear MIMO model's parameters. Experimental results of a pilot-plant fermenter application are reported and show the control performance. (authors) 8 refs.

  2. Adaptation as a political process: adjusting to drought and conflict in Kenya's drylands.

    Science.gov (United States)

    Eriksen, Siri; Lind, Jeremy

    2009-05-01

    In this article, we argue that people's adjustments to multiple shocks and changes, such as conflict and drought, are intrinsically political processes that have uneven outcomes. Strengthening local adaptive capacity is a critical component of adapting to climate change. Based on fieldwork in two areas in Kenya, we investigate how people seek to access livelihood adjustment options and promote particular adaptation interests through forming social relations and political alliances to influence collective decision-making. First, we find that, in the face of drought and conflict, relations are formed among individuals, politicians, customary institutions, and government administration aimed at retaining or strengthening power bases in addition to securing material means of survival. Second, national economic and political structures and processes affect local adaptive capacity in fundamental ways, such as through the unequal allocation of resources across regions, development policy biased against pastoralism, and competition for elected political positions. Third, conflict is part and parcel of the adaptation process, not just an external factor inhibiting local adaptation strategies. Fourth, there are relative winners and losers of adaptation, but whether or not local adjustments to drought and conflict compound existing inequalities depends on power relations at multiple geographic scales that shape how conflicting interests are negotiated locally. Climate change adaptation policies are unlikely to be successful or minimize inequity unless the political dimensions of local adaptation are considered; however, existing power structures and conflicts of interests represent political obstacles to developing such policies.

  3. Advanced ACTPol Multichroic Polarimeter Array Fabrication Process for 150 mm Wafers

    Science.gov (United States)

    Duff, S. M.; Austermann, J.; Beall, J. A.; Becker, D.; Datta, R.; Gallardo, P. A.; Henderson, S. W.; Hilton, G. C.; Ho, S. P.; Hubmayr, J.; Koopman, B. J.; Li, D.; McMahon, J.; Nati, F.; Niemack, M. D.; Pappas, C. G.; Salatino, M.; Schmitt, B. L.; Simon, S. M.; Staggs, S. T.; Stevens, J. R.; Van Lanen, J.; Vavagiakis, E. M.; Ward, J. T.; Wollack, E. J.

    2016-08-01

    Advanced ACTPol (AdvACT) is a third-generation cosmic microwave background receiver to be deployed in 2016 on the Atacama Cosmology Telescope (ACT). Spanning five frequency bands from 25 to 280 GHz and having just over 5600 transition-edge sensor (TES) bolometers, this receiver will exhibit increased sensitivity and mapping speed compared to previously fielded ACT instruments. This paper presents the fabrication processes developed by NIST to scale to large arrays of feedhorn-coupled multichroic AlMn-based TES polarimeters on 150-mm diameter wafers. In addition to describing the streamlined fabrication process which enables high yields of densely packed detectors across larger wafers, we report the details of process improvements for sensor (AlMn) and insulator (SiN_x) materials and microwave structures, and the resulting performance improvements.

  4. An adaptive algorithm for simulation of stochastic reaction-diffusion processes

    International Nuclear Information System (INIS)

    Ferm, Lars; Hellander, Andreas; Loetstedt, Per

    2010-01-01

    We propose an adaptive hybrid method suitable for stochastic simulation of diffusion dominated reaction-diffusion processes. For such systems, simulation of the diffusion requires the predominant part of the computing time. In order to reduce the computational work, the diffusion in parts of the domain is treated macroscopically, in other parts with the tau-leap method and in the remaining parts with Gillespie's stochastic simulation algorithm (SSA) as implemented in the next subvolume method (NSM). The chemical reactions are handled by SSA everywhere in the computational domain. A trajectory of the process is advanced in time by an operator splitting technique and the timesteps are chosen adaptively. The spatial adaptation is based on estimates of the errors in the tau-leap method and the macroscopic diffusion. The accuracy and efficiency of the method are demonstrated in examples from molecular biology where the domain is discretized by unstructured meshes.
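
    For reference, the building block used for the reaction part of such a hybrid scheme is Gillespie's SSA. A minimal direct-method SSA for a single reversible reaction in one well-mixed subvolume is sketched below; the species, rate constants and time horizon are arbitrary, and the adaptive coupling to tau-leaping and the macroscopic diffusion solver is not reproduced.

```python
import math
import random

def ssa_direct(x_a, x_b, k1, k2, t_end, seed=1):
    """Gillespie's direct-method SSA for the reversible reaction A <-> B in a
    single well-mixed (sub)volume. Propensities: a1 = k1*A, a2 = k2*B."""
    rng = random.Random(seed)
    t, trace = 0.0, [(0.0, x_a, x_b)]
    while t < t_end:
        a1, a2 = k1 * x_a, k2 * x_b
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += -math.log(1.0 - rng.random()) / a0   # exponential waiting time
        if rng.random() * a0 < a1:                # pick which reaction fires
            x_a, x_b = x_a - 1, x_b + 1
        else:
            x_a, x_b = x_a + 1, x_b - 1
        trace.append((t, x_a, x_b))
    return trace

# Hypothetical rates: 100 A molecules relaxing toward the k2/k1 equilibrium
print(ssa_direct(100, 0, k1=1.0, k2=0.5, t_end=5.0)[-1])
```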

  5. Fast But Fleeting: Adaptive Motor Learning Processes Associated with Aging and Cognitive Decline

    Science.gov (United States)

    Trewartha, Kevin M.; Garcia, Angeles; Wolpert, Daniel M.

    2014-01-01

    Motor learning has been shown to depend on multiple interacting learning processes. For example, learning to adapt when moving grasped objects with novel dynamics involves a fast process that adapts and decays quickly (and that has been linked to explicit memory) and a slower process that adapts and decays more gradually. Each process is characterized by a learning rate that controls how strongly motor memory is updated based on experienced errors and a retention factor determining the movement-to-movement decay in motor memory. Here we examined whether fast and slow motor learning processes involved in learning novel dynamics differ between younger and older adults. In addition, we investigated how age-related decline in explicit memory performance influences learning and retention parameters. Although the groups adapted equally well, they did so with markedly different underlying processes. Whereas the groups had similar fast processes, they had different slow processes. Specifically, the older adults exhibited decreased retention in their slow process compared with younger adults. Within the older group, who exhibited considerable variation in explicit memory performance, we found that poor explicit memory was associated with reduced retention in the fast process, as well as the slow process. These findings suggest that explicit memory resources are a determining factor in impairments in both the fast and slow processes for motor learning but that aging effects on the slow process are independent of explicit memory declines. PMID:25274819

  6. Fast but fleeting: adaptive motor learning processes associated with aging and cognitive decline.

    Science.gov (United States)

    Trewartha, Kevin M; Garcia, Angeles; Wolpert, Daniel M; Flanagan, J Randall

    2014-10-01

    Motor learning has been shown to depend on multiple interacting learning processes. For example, learning to adapt when moving grasped objects with novel dynamics involves a fast process that adapts and decays quickly (and that has been linked to explicit memory) and a slower process that adapts and decays more gradually. Each process is characterized by a learning rate that controls how strongly motor memory is updated based on experienced errors and a retention factor determining the movement-to-movement decay in motor memory. Here we examined whether fast and slow motor learning processes involved in learning novel dynamics differ between younger and older adults. In addition, we investigated how age-related decline in explicit memory performance influences learning and retention parameters. Although the groups adapted equally well, they did so with markedly different underlying processes. Whereas the groups had similar fast processes, they had different slow processes. Specifically, the older adults exhibited decreased retention in their slow process compared with younger adults. Within the older group, who exhibited considerable variation in explicit memory performance, we found that poor explicit memory was associated with reduced retention in the fast process, as well as the slow process. These findings suggest that explicit memory resources are a determining factor in impairments in both the fast and slow processes for motor learning but that aging effects on the slow process are independent of explicit memory declines. Copyright © 2014 the authors 0270-6474/14/3413411-11$15.00/0.
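
    The fast/slow decomposition described in these two records corresponds to the standard two-state state-space model of motor adaptation, in which each state is updated by its own learning rate and decays according to its own retention factor. A minimal sketch of that model follows; the parameter values and perturbation schedule are illustrative and are not the values fitted in the study.

```python
import numpy as np

def two_state_adaptation(perturbation, A_f=0.59, B_f=0.21, A_s=0.992, B_s=0.02):
    """Two-state model of motor adaptation: a fast state (large learning rate
    B_f, low retention A_f) and a slow state (small B_s, high A_s) are each
    updated from the trial error and decay by their retention factors.
    Parameter values are illustrative, not those fitted in the study."""
    x_f = x_s = 0.0
    net_adaptation = []
    for p in perturbation:
        error = p - (x_f + x_s)          # movement error on this trial
        x_f = A_f * x_f + B_f * error    # retention * state + learning * error
        x_s = A_s * x_s + B_s * error
        net_adaptation.append(x_f + x_s)
    return np.array(net_adaptation)

# 80 trials of a constant unit perturbation followed by 40 washout trials
schedule = np.concatenate([np.ones(80), np.zeros(40)])
net = two_state_adaptation(schedule)
print(net[[0, 79, 80, 119]].round(3))   # build-up, then rapid partial decay
```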

  7. Normalized value coding explains dynamic adaptation in the human valuation process.

    Science.gov (United States)

    Khaw, Mel W; Glimcher, Paul W; Louie, Kenway

    2017-11-28

    The notion of subjective value is central to choice theories in ecology, economics, and psychology, serving as an integrated decision variable by which options are compared. Subjective value is often assumed to be an absolute quantity, determined in a static manner by the properties of an individual option. Recent neurobiological studies, however, have shown that neural value coding dynamically adapts to the statistics of the recent reward environment, introducing an intrinsic temporal context dependence into the neural representation of value. Whether valuation exhibits this kind of dynamic adaptation at the behavioral level is unknown. Here, we show that the valuation process in human subjects adapts to the history of previous values, with current valuations varying inversely with the average value of recently observed items. The dynamics of this adaptive valuation are captured by divisive normalization, linking these temporal context effects to spatial context effects in decision making as well as spatial and temporal context effects in perception. These findings suggest that adaptation is a universal feature of neural information processing and offer a unifying explanation for contextual phenomena in fields ranging from visual psychophysics to economic choice.
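
    A minimal sketch of the divisive-normalization account is given below: the subjective value assigned to the current item is divided by a term that grows with the average of recently observed values, so the same item is valued less after a run of high-value observations. The functional form, window length and constants are assumptions for illustration only.

```python
from collections import deque

def normalized_value(value, history, sigma=1.0, weight=1.0):
    """Divisive normalization of a current value by the recent value context:
    v_norm = value / (sigma + weight * mean(recent values)).
    sigma and weight are free parameters chosen here purely for illustration."""
    context = sum(history) / len(history) if history else 0.0
    return value / (sigma + weight * context)

recent = deque(maxlen=10)            # sliding window of recently seen values
for observed in [2.0, 2.0, 2.0, 10.0, 10.0, 10.0]:
    # The same item is valued less as the recent average grows
    print(round(normalized_value(5.0, recent), 3))
    recent.append(observed)
```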

  8. Applications of the phased array technique

    International Nuclear Information System (INIS)

    Erhard, A.; Schenk, G.; Hauser, Th.; Voelz, U.

    1999-01-01

    The application of the phased array technique was long limited to heavy, thick-walled components such as those found in the nuclear industry. With the improvement of equipment and probes, other application areas are now open to the phased array technique, e.g. the inspection of turbine blade roots, weld inspection in the wall thickness range between 12 and 40 mm, inspection of aircraft components, inspection of spot welds or inspection of concrete. The aim of using phased array techniques has not changed since the first applications, i.e. the adaptation of the sound beam to the geometry by steering the angle of incidence or the skewing angle, as well as the focussing of sound fields. Since some of the new applications of the phased array technique have not yet left the laboratories, the examples in this contribution focus on applications with a practical background. (orig.)

  9. Conversion of electromagnetic energy in Z-pinch process of single planar wire arrays at 1.5 MA

    International Nuclear Information System (INIS)

    Liangping, Wang; Mo, Li; Juanjuan, Han; Ning, Guo; Jian, Wu; Aici, Qiu

    2014-01-01

    The electromagnetic energy conversion in the Z-pinch process of single planar wire arrays was studied on the Qiangguang generator (1.5 MA, 100 ns). Electrical diagnostics were established to monitor the voltage across the cathode-anode gap and the load current in order to calculate the electromagnetic energy. A lumped-element circuit model of the wire arrays was employed to analyze the electromagnetic energy conversion. The inductance as well as the resistance of a wire array during the Z-pinch process were also investigated. Experimental data indicate that, before the final stagnation, the electromagnetic energy is mainly converted to magnetic and kinetic energy, and that ohmic heating can be neglected. The kinetic energy can account for the x-ray radiation before the peak power. After stagnation, the electromagnetic energy coupled to the load continues to increase, and the resistance of the load reaches its maximum of 0.6–1.0 Ω in about 10–20 ns.
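
    The energy bookkeeping implied by the abstract can be sketched as follows: the coupled electromagnetic energy is the time integral of gap voltage times load current, the magnetic energy is one half of the load inductance times the current squared, and, with ohmic heating neglected, the remainder is attributed to kinetic energy. The waveforms and inductance below are synthetic placeholders, not Qiangguang data.

```python
import numpy as np

def energy_balance(t, v_gap, i_load, inductance):
    """Energy bookkeeping for a wire-array implosion, neglecting ohmic heating:
    E_em  = integral of V*I dt  (electromagnetic energy coupled to the load),
    E_mag = 0.5 * L * I**2      (magnetic energy at the end of the record),
    E_kin ~ E_em - E_mag        (kinetic-energy estimate)."""
    power = v_gap * i_load
    e_em = float(np.sum(0.5 * (power[1:] + power[:-1]) * np.diff(t)))
    e_mag = 0.5 * inductance[-1] * i_load[-1] ** 2
    return e_em, e_mag, e_em - e_mag

# Synthetic placeholder waveforms (not Qiangguang data): a 100 ns current rise
t = np.linspace(0.0, 100e-9, 1001)
i = 1.5e6 * np.sin(0.5 * np.pi * t / t[-1]) ** 2   # current rising to 1.5 MA
L = 10e-9 + 5e-9 * (t / t[-1])                     # inductance rising 10 to 15 nH
v = np.gradient(L * i, t)                          # gap voltage, V = d(L*I)/dt
e_em, e_mag, e_kin = energy_balance(t, v, i, L)
print(f"E_em {e_em/1e3:.1f} kJ, E_mag {e_mag/1e3:.1f} kJ, E_kin ~ {e_kin/1e3:.1f} kJ")
```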

  10. Piezo-Phototronic Enhanced UV Sensing Based on a Nanowire Photodetector Array.

    Science.gov (United States)

    Han, Xun; Du, Weiming; Yu, Ruomeng; Pan, Caofeng; Wang, Zhong Lin

    2015-12-22

    A large array of Schottky UV photodetectors (PDs) based on vertically aligned ZnO nanowires is achieved. By introducing the piezo-phototronic effect, the performance of the PD array is enhanced by up to seven times in photoresponsivity, six times in sensitivity, and 2.8 times in detection limit. The UV PD array may have applications in optoelectronic systems, adaptive optical computing, and communication. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. A Framework for Speech Enhancement with Ad Hoc Microphone Arrays

    DEFF Research Database (Denmark)

    Tavakoli, Vincent Mohammad; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2016-01-01

    Speech enhancement is vital for improved listening practices. Ad hoc microphone arrays are promising assets for this purpose. Most well-established enhancement techniques with conventional arrays can be adapted into ad hoc scenarios. Despite recent efforts to introduce various ad hoc speech enhancement apparatus, a common framework for integration of conventional methods into this new scheme is still missing. This paper establishes such an abstraction based on inter and intra sub-array speech coherencies. Along with measures for signal quality at the input of sub-arrays, a measure of coherency is proposed both for sub-array selection in local enhancement approaches, and also for selecting a proper global reference when more than one sub-array is used. Proposed methods within this framework are evaluated with regard to quantitative and qualitative measures, including array gains, the speech...
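
    As a rough illustration of the kind of coherency measure such a framework relies on, the sketch below averages the magnitude-squared coherence between two microphone channels over a speech band and returns it as a scalar score that could drive sub-array or reference selection. The synthetic signals, band limits and use of scipy.signal.coherence are illustrative assumptions, not the paper's estimator.

```python
import numpy as np
from scipy.signal import coherence

def mean_coherence(x, y, fs, fmin=300.0, fmax=3400.0, nperseg=512):
    """Average magnitude-squared coherence between two microphone channels
    over a speech band; a scalar of this kind could serve as the coherency
    score used for sub-array or reference selection (illustrative only)."""
    f, cxy = coherence(x, y, fs=fs, nperseg=nperseg)
    band = (f >= fmin) & (f <= fmax)
    return float(cxy[band].mean())

# Two synthetic channels: a shared "speech" component plus independent noise
rng = np.random.default_rng(0)
fs, n = 16000, 16000
source = rng.standard_normal(n)
mic1 = source + 0.5 * rng.standard_normal(n)
mic2 = np.roll(source, 8) + 0.5 * rng.standard_normal(n)   # small delay
print(round(mean_coherence(mic1, mic2, fs), 3))
```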

  12. Interconnection of socio-cultural adaptation and identity in the socialization process

    Directory of Open Access Journals (Sweden)

    L Y Rakhmanova

    2015-12-01

    Full Text Available The article considers the influence of the socio-cultural adaptation of an individual on his personality and identity structure; it analyzes the processes of primary and secondary socialization in comparison with subsequent adaptation processes, as well as the possibility of a compromise between an unchanging, rigid identity and the ability to adapt flexibly to a changing context. The author identifies positive and negative aspects of adaptation in contemporary society while testing the hypothesis that if adaptation is successful and proceeds within the normal range, it helps to preserve the stability of social structures but does not contribute to their development, since it is the maladaptive behavior of individuals and groups that stimulates social transformations. In the second part of the article, the author shows the relationship between socio-cultural identity and individual status in various social communities and tries to answer the question of whether the existence and functioning of a social community as a pure ‘form’ without individuals (its members) is possible. The author describes the identity phenomenon in the context of the opposition of the universal and the unique, similarities and differences. The article also introduces the concept of involvement in the socio-cultural context as one of the indicators of the completeness and depth of individual socio-cultural adaptation to a certain environment, which is quite important for the internal hierarchy of individual identity.

  13. The Fuge Tube Diode Array Spectrophotometer

    Science.gov (United States)

    Arneson, B. T.; Long, S. R.; Stewart, K. K.; Lagowski, J. J.

    2008-01-01

    We present the details for adapting a diode array UV-vis spectrophotometer to incorporate the use of polypropylene microcentrifuge tubes--fuge tubes--as cuvettes. Optical data are presented validating that the polyethylene fuge tubes are equivalent to the standard square cross section polystyrene or glass cuvettes generally used in…

  14. Array processor architecture

    Science.gov (United States)

    Barnes, George H. (Inventor); Lundstrom, Stephen F. (Inventor); Shafer, Philip E. (Inventor)

    1983-01-01

    A high-speed parallel array data processing architecture fashioned under a computational envelope approach includes a data base memory for secondary storage of programs and data, and a plurality of memory modules interconnected to a plurality of processing modules by a connection network of the Omega gender. Programs and data are fed from the data base memory to the plurality of memory modules, and from there the programs are fed through the connection network to the array of processors (one copy of each program for each processor). Execution of the programs occurs with the processors operating normally quite independently of each other in a multiprocessing fashion. For data-dependent operations and other suitable operations, all processors are instructed to finish one given task or program branch before all are instructed to proceed in parallel processing fashion on the next instruction. Even when functioning in the parallel processing mode, however, the processors are not lock-stepped but execute their own copy of the program individually unless or until another overall processor array synchronization instruction is issued.

  15. Increasing the specificity and function of DNA microarrays by processing arrays at different stringencies

    DEFF Research Database (Denmark)

    Dufva, Martin; Petersen, Jesper; Poulsen, Lena

    2009-01-01

    DNA microarrays have for a decade been the only platform for genome-wide analysis and have provided a wealth of information about living organisms. DNA microarrays are processed today under one condition only, which puts large demands on assay development because all probes on the array need to f...

  16. Beamspace Adaptive Beamforming for Hydrodynamic Towed Array Self-Noise Cancellation

    National Research Council Canada - National Science Library

    Premus, Vincent

    2001-01-01

    ... against signal self-nulling associated with steering vector mismatch. Particular attention is paid to the definition of white noise gain as the metric that reflects the level of mainlobe adaptive nulling for an adaptive beamformer...

  17. Beamspace Adaptive Beamforming for Hydrodynamic Towed Array Self-Noise Cancellation

    National Research Council Canada - National Science Library

    Premus, Vincent

    2000-01-01

    ... against signal self-nulling associated with steering vector mismatch. Particular attention is paid to the definition of white noise gain as the metric that reflects the level of mainlobe adaptive nulling for an adaptive beamformer...
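
    Both of these records single out white noise gain as the metric of interest. For a weight vector w and steering vector a it is WNG = |w^H a|^2 / (w^H w), i.e. the array gain against spatially white noise; a minimal sketch for an illustrative 16-element line array steered to broadside follows (the geometry and weight perturbation are made-up examples).

```python
import numpy as np

def white_noise_gain(w, a):
    """White noise gain of a beamformer: WNG = |w^H a|^2 / (w^H w),
    i.e. the array gain against spatially white noise for steering vector a."""
    return np.abs(np.vdot(w, a)) ** 2 / np.real(np.vdot(w, w))

# Illustrative 16-element line array steered to broadside
n = 16
a = np.ones(n, dtype=complex)          # broadside steering vector
w_conv = a / n                         # conventional (delay-and-sum) weights
print(white_noise_gain(w_conv, a))     # equals N = 16, the maximum possible
w_pert = w_conv + 0.2 * np.random.default_rng(1).standard_normal(n)
print(white_noise_gain(w_pert, a))     # perturbed (e.g. adapted) weights lose WNG
```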

  18. Adapting the unified software development process for user interface development

    NARCIS (Netherlands)

    Obrenovic, Z.; Starcevic, D.

    2006-01-01

    In this paper we describe how existing software development processes, such as the Rational Unified Process, can be adapted in order to allow disciplined and more efficient development of user interfaces. The main objective of this paper is to demonstrate that standard modeling environments, based on the

  19. Adaptive Moving Object Tracking Integrating Neural Networks And Intelligent Processing

    Science.gov (United States)

    Lee, James S. J.; Nguyen, Dziem D.; Lin, C.

    1989-03-01

    A real-time adaptive scheme is introduced to detect and track moving objects under noisy, dynamic conditions including moving sensors. This approach integrates the adaptiveness and incremental learning characteristics of neural networks with intelligent reasoning and process control. Spatiotemporal filtering is used to detect and analyze motion, exploiting the speed and accuracy of multiresolution processing. A neural network algorithm constitutes the basic computational structure for classification. A recognition and learning controller guides the on-line training of the network, and invokes pattern recognition to determine processing parameters dynamically and to verify detection results. A tracking controller acts as the central control unit, so that tracking goals direct the over-all system. Performance is benchmarked against the Widrow-Hoff algorithm, for target detection scenarios presented in diverse FLIR image sequences. Efficient algorithm design ensures that this recognition and control scheme, implemented in software and commercially available image processing hardware, meets the real-time requirements of tracking applications.
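
    Since performance is benchmarked against the Widrow-Hoff algorithm, a minimal sketch of that update (better known as LMS: w <- w + mu * e * x) is included below, applied to a toy system-identification problem. The step size, filter length and data are illustrative and unrelated to the FLIR sequences used in the paper.

```python
import numpy as np

def lms(x_rows, d, mu=0.05):
    """Widrow-Hoff (LMS) adaptation: for each sample, y = w.x, e = d - y,
    w <- w + mu * e * x. Returns the final weights and the error sequence."""
    w = np.zeros(x_rows.shape[1])
    errors = []
    for x, target in zip(x_rows, d):
        e = target - w @ x
        w = w + mu * e * x
        errors.append(e)
    return w, np.array(errors)

# Toy system identification: recover the taps of an unknown 4-tap filter
rng = np.random.default_rng(0)
true_w = np.array([0.8, -0.4, 0.2, 0.1])
X = rng.standard_normal((2000, 4))
d = X @ true_w + 0.01 * rng.standard_normal(2000)
w_hat, err = lms(X, d)
print(np.round(w_hat, 3))   # close to true_w after adaptation
```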

  20. Real-time data acquisition and parallel data processing solution for TJ-II Bolometer arrays diagnostic

    Energy Technology Data Exchange (ETDEWEB)

    Barrera, E. [Departamento de Sistemas Electronicos y de Control, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain)]. E-mail: eduardo.barrera@upm.es; Ruiz, M. [Grupo de Investigacion en Instrumentacion y Acustica Aplicada, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain); Lopez, S. [Departamento de Sistemas Electronicos y de Control, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain); Machon, D. [Departamento de Sistemas Electronicos y de Control, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain); Vega, J. [Asociacion EURATOM/CIEMAT para Fusion, 28040 Madrid (Spain); Ochando, M. [Asociacion EURATOM/CIEMAT para Fusion, 28040 Madrid (Spain)

    2006-07-15

    Maps of local plasma emissivity of TJ-II plasmas are determined using three camera arrays of silicon photodiodes (AXUV type from IRD). They are assigned to the top and side ports of the same sector of the vacuum vessel. Each array consists of 20 unfiltered detectors. The signals from each of these detectors are the inputs to an iterative algorithm of tomographic reconstruction. Currently, these signals are acquired by a PXI standard system at approximately 50 kS/s, with 12 bits of resolution, and are stored for off-line processing. A 0.5 s discharge generates 3 Mbytes of raw data. The algorithm's load exceeds the CPU capacity of the PXI system's controller in continuous mode, making it unfeasible to process the samples in parallel with their acquisition in a standard PXI system. A new architecture model has been developed, making it possible to add one or several processing cards to a standard PXI system. With this model, it is possible to define how to distribute, in real time, the data from all acquired signals in the system among the processing cards and the PXI controller. This way, by distributing the data processing among the system controller and two processing cards, the data processing can be done in parallel with the acquisition. Hence, this system configuration would be able to measure even in long-pulse devices.
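
    The quoted raw-data volume is easy to verify with a little arithmetic, assuming each 12-bit sample is stored in a 16-bit word (an assumption, since the storage format is not stated):

```python
# Sanity check of the quoted raw-data volume
channels = 3 * 20            # three AXUV arrays of 20 detectors each
sample_rate = 50_000         # samples per second per channel
discharge = 0.5              # seconds
bytes_per_sample = 2         # 12-bit samples stored in 16-bit words (assumed)

total_bytes = channels * sample_rate * discharge * bytes_per_sample
print(f"{total_bytes / 1e6:.1f} Mbytes")   # 3.0 Mbytes, matching the abstract
```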

  1. Preventing KPI Violations in Business Processes based on Decision Tree Learning and Proactive Runtime Adaptation

    Directory of Open Access Journals (Sweden)

    Dimka Karastoyanova

    2012-01-01

    Full Text Available The performance of business processes is measured and monitored in terms of Key Performance Indicators (KPIs. If the monitoring results show that the KPI targets are violated, the underlying reasons have to be identified and the process should be adapted accordingly to address the violations. In this paper we propose an integrated monitoring, prediction and adaptation approach for preventing KPI violations of business process instances. KPIs are monitored continuously while the process is executed. Additionally, based on KPI measurements of historical process instances we use decision tree learning to construct classification models which are then used to predict the KPI value of an instance while it is still running. If a KPI violation is predicted, we identify adaptation requirements and adaptation strategies in order to prevent the violation.
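
    A minimal sketch of the prediction step described here: a decision tree is trained on checkpoint measurements of historical instances labelled by whether their KPI target was eventually violated, and is then applied to a running instance. The feature names, toy data and use of scikit-learn are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical checkpoint features of historical process instances:
# [elapsed_time_min, queue_length, supplier_delay_min]
X_hist = np.array([
    [10, 2, 0], [12, 1, 5], [30, 6, 20], [28, 5, 15],
    [9, 1, 0], [35, 7, 25], [14, 2, 3], [31, 6, 18],
])
# Label: 1 if the instance eventually violated its KPI target, else 0
y_hist = np.array([0, 0, 1, 1, 0, 1, 0, 1])

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_hist, y_hist)

# Classify a still-running instance from its measurements so far; a predicted
# violation would trigger the selection of an adaptation strategy.
running_instance = np.array([[29, 5, 17]])
print("KPI violation predicted:", bool(model.predict(running_instance)[0]))
```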

  2. Fabrication process for CMUT arrays with polysilicon electrodes, nanometre precision cavity gaps and through-silicon vias

    International Nuclear Information System (INIS)

    Due-Hansen, J; Poppe, E; Summanwar, A; Jensen, G U; Breivik, L; Wang, D T; Schjølberg-Henriksen, K; Midtbø, K

    2012-01-01

    Capacitive micromachined ultrasound transducers (CMUTs) can be used to realize miniature ultrasound probes. Through-silicon vias (TSVs) allow for close integration of the CMUT and read-out electronics. A fabrication process enabling the realization of a CMUT array with TSVs is being developed. The integrated process requires the formation of highly doped polysilicon electrodes with low surface roughness. A process for polysilicon film deposition, doping, CMP, RIE and thermal annealing that resulted in a film with sheet resistance of 4.0 Ω/□ and a surface roughness of 1 nm rms has been developed. The surface roughness of the polysilicon film was found to increase with higher phosphorus concentrations. The surface roughness also increased when oxygen was present in the thermal annealing ambient. The RIE process for etching CMUT cavities in the doped polysilicon gave a mean etch depth of 59.2 ± 3.9 nm and a uniformity across the wafer ranging from 1.0 to 4.7%. The two presented processes are key to enabling the fabrication of CMUT arrays suitable for applications in, for instance, intravascular cardiology and gastrointestinal imaging. (paper)

  3. Free-running ADC- and FPGA-based signal processing method for brain PET using GAPD arrays

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Wei [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Department of Nuclear Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, 50 Ilwon-Dong, Gangnam-Gu, Seoul 135-710 (Korea, Republic of); Choi, Yong, E-mail: ychoi.image@gmail.com [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Hong, Key Jo [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Kang, Jihoon [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Department of Nuclear Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, 50 Ilwon-Dong, Gangnam-Gu, Seoul 135-710 (Korea, Republic of); Jung, Jin Ho [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Huh, Youn Suk [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Department of Nuclear Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, 50 Ilwon-Dong, Gangnam-Gu, Seoul 135-710 (Korea, Republic of); Lim, Hyun Keong; Kim, Sang Su [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Kim, Byung-Tae [Department of Nuclear Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, 50 Ilwon-Dong, Gangnam-Gu, Seoul 135-710 (Korea, Republic of); Chung, Yonghyun [Department of Radiological Science, Yonsei University College of Health Science, 234 Meaji, Heungup Wonju, Kangwon-Do 220-710 (Korea, Republic of)

    2012-02-01

    Currently, for most photomultiplier tube (PMT)-based PET systems, constant fraction discriminators (CFDs) and time-to-digital converters (TDCs) have been employed to detect the gamma ray signal arrival time, whereas Anger logic circuits and peak-detection analog-to-digital converters (ADCs) have been implemented to acquire the position and energy information of detected events. Compared to PMTs, Geiger-mode avalanche photodiodes (GAPDs) have a variety of advantages, such as compactness, low bias voltage requirement and MRI compatibility. Furthermore, the individual read-out method using a GAPD array coupled 1:1 with an array scintillator can provide better image uniformity than can be achieved using PMTs and Anger logic circuits. Recently, a brain PET using 72 GAPD arrays (4 × 4 array, pixel size: 3 mm × 3 mm) coupled 1:1 with LYSO scintillators (4 × 4 array, pixel size: 3 mm × 3 mm × 20 mm) has been developed for simultaneous PET/MRI imaging in our laboratory. Eighteen 64:1 position decoder circuits (PDCs) were used to reduce the number of GAPD channels, and three off-the-shelf free-running ADC and field programmable gate array (FPGA) combined data acquisition (DAQ) cards were used for data acquisition and processing. In this study, a free-running ADC- and FPGA-based signal processing method was developed for the detection of gamma ray signal arrival time, energy and position information all together for each GAPD channel. For the method developed herein, three DAQ cards continuously acquired 18 channels of pre-amplified analog gamma ray signals and 108-bit digital addresses from 18 PDCs. In the FPGA, the digitized gamma ray pulses and digital addresses were processed to generate data packages containing the pulse arrival time, baseline value, energy value and GAPD channel ID. Finally, these data packages were saved to a 128 Mbyte on-board synchronous dynamic random access memory (SDRAM) and
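
    A minimal software sketch of the per-channel pulse processing described (baseline from pre-trigger samples, energy as the baseline-subtracted pulse sum, arrival time from a threshold crossing) is given below; the synthetic waveform, threshold and dictionary layout are assumptions, not the FPGA firmware.

```python
import numpy as np

def process_pulse(samples, n_baseline=16, threshold=50.0, channel_id=0):
    """Extract arrival time (index of the first threshold crossing), baseline
    and energy (baseline-subtracted pulse sum) from one digitized gamma-ray
    pulse, mirroring the data-package fields described in the abstract."""
    baseline = samples[:n_baseline].mean()
    pulse = samples - baseline
    above = np.flatnonzero(pulse > threshold)
    arrival = int(above[0]) if above.size else -1   # -1 means no trigger
    return {"channel": channel_id, "arrival": arrival,
            "baseline": float(baseline), "energy": float(pulse.sum())}

# Synthetic pulse: flat baseline plus a fast-rising, slowly decaying signal
t = np.arange(256)
wave = 100.0 + 600.0 * np.exp(-(t - 40.0) / 30.0) * (t >= 40)
print(process_pulse(wave, channel_id=7))
```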

  4. Explicit and implicit processes in behavioural adaptation to road width.

    Science.gov (United States)

    Lewis-Evans, Ben; Charlton, Samuel G

    2006-05-01

    The finding that drivers may react to safety interventions in a way that is contrary to what was intended is the phenomenon of behavioural adaptation. This phenomenon has been demonstrated across various safety interventions and has serious implications for road safety programs the world over. The present research used a driving simulator to assess behavioural adaptation in drivers' speed and lateral displacement in response to manipulations of road width. Of interest was whether behavioural adaptation would occur and whether we could determine whether it was the result of explicit, conscious decisions or implicit perceptual processes. The results supported an implicit, zero perceived risk model of behavioural adaptation with reduced speeds on a narrowed road accompanied by increased ratings of risk and a marked inability of the participants to identify that any change in road width had occurred.

  5. Signal processing for solar array monitoring, fault detection, and optimization

    CERN Document Server

    Braun, Henry; Spanias, Andreas

    2012-01-01

    Although the solar energy industry has experienced rapid growth recently, high-level management of photovoltaic (PV) arrays has remained an open problem. As sensing and monitoring technology continues to improve, there is an opportunity to deploy sensors in PV arrays in order to improve their management. In this book, we examine the potential role of sensing and monitoring technology in a PV context, focusing on the areas of fault detection, topology optimization, and performance evaluation/data visualization. First, several types of commonly occurring PV array faults are considered and detection algorithms are described. Next, the potential for dynamic optimization of an array's topology is discussed, with a focus on mitigation of fault conditions and optimization of power output under non-fault conditions. Finally, monitoring system design considerations such as type and accuracy of measurements, sampling rate, and communication protocols are considered. It is our hope that the benefits of monitoring presen...

  6. Dependently typed array programs don’t go wrong

    NARCIS (Netherlands)

    Trojahner, K.; Grelck, C.

    2009-01-01

    The array programming paradigm adopts multidimensional arrays as the fundamental data structures of computation. Array operations process entire arrays instead of just single elements. This makes array programs highly expressive and introduces data parallelism in a natural way. Array programming

  7. Dependently typed array programs don't go wrong

    NARCIS (Netherlands)

    Trojahner, K.; Grelck, C.

    2008-01-01

    The array programming paradigm adopts multidimensional arrays as the fundamental data structures of computation. Array operations process entire arrays instead of just single elements. This makes array programs highly expressive and introduces data parallelism in a natural way. Array programming

  8. Fundamentals of adaptive signal processing

    CERN Document Server

    Uncini, Aurelio

    2015-01-01

    This book is an accessible guide to adaptive signal processing methods that equips the reader with advanced theoretical and practical tools for the study and development of circuit structures and provides robust algorithms relevant to a wide variety of application scenarios. Examples include multimodal and multimedia communications, the biological and biomedical fields, economic models, environmental sciences, acoustics, telecommunications, remote sensing, monitoring, and, in general, the modeling and prediction of complex physical phenomena. The reader will learn not only how to design and implement the algorithms but also how to evaluate their performance for specific applications utilizing the tools provided. While the mathematical language used is simple, the approach is very rigorous. The text will be of value both for research purposes and for courses of study.

  9. Fabrication of metal-matrix composites and adaptive composites using ultrasonic consolidation process

    International Nuclear Information System (INIS)

    Kong, C.Y.; Soar, R.C.

    2005-01-01

    Ultrasonic consolidation (UC) has been used to embed thermally sensitive and damage-intolerant fibres within aluminium matrix structures using high-frequency, low-amplitude mechanical vibrations. The UC process can induce plastic flow in the metal foils being bonded, allowing fibres to be embedded at typically 25% of the melting temperature of the base metal and at a fraction of the clamping force required by fusion processes. To date, the UC process has successfully embedded Sigma silicon carbide (SiC) fibres, shape memory alloy wires and optical fibres, which are presented in this paper. The eventual aim of this research is the fabrication of adaptive composite structures having the ability to measure external stimuli and respond by adapting their structure accordingly, through the action of active and passive functional fibres embedded within a freeform-fabricated metal-matrix structure. This paper presents the fundamental studies of this research to identify embedding methods and the working range for the fabrication of adaptive composite structures. The methods considered have produced embedded-fibre specimens in which large amounts of plastic flow have been observed within the matrix as it is deformed around the fibres, resulting in fully consolidated specimens without damage to the fibres. The microscopic observation techniques and macroscopic functionality tests confirm that the UC process could be applied to the fabrication of metal-matrix composites and adaptive composites, where fusion techniques are not feasible and where a 'cold' process is necessary

  10. Adaptive processes drive ecomorphological convergent evolution in antwrens (Thamnophilidae).

    Science.gov (United States)

    Bravo, Gustavo A; Remsen, J V; Brumfield, Robb T

    2014-10-01

    Phylogenetic niche conservatism (PNC) and convergence are contrasting evolutionary patterns that describe phenotypic similarity across independent lineages. Assessing whether and how adaptive processes give origin to these patterns represents a fundamental step toward understanding phenotypic evolution. Phylogenetic model-based approaches offer the opportunity not only to distinguish between PNC and convergence, but also to determine the extent to which adaptive processes explain phenotypic similarity. The Myrmotherula complex in the Neotropical family Thamnophilidae is a polyphyletic group of sexually dimorphic small insectivorous forest birds that are relatively homogeneous in size and shape. Here, we integrate a comprehensive species-level molecular phylogeny of the Myrmotherula complex with morphometric and ecological data within a comparative framework to test whether phenotypic similarity is described by a pattern of PNC or convergence, and to identify evolutionary mechanisms underlying body size and shape evolution. We show that antwrens in the Myrmotherula complex represent distantly related clades that exhibit adaptive convergent evolution in body size and divergent evolution in body shape. Phenotypic similarity in the group is primarily driven by their tendency to converge toward smaller body sizes. Differences in body size and shape across lineages are associated with ecological and behavioral factors. © 2014 The Author(s). Evolution © 2014 The Society for the Study of Evolution.

  11. A Spatiotemporal Indexing Approach for Efficient Processing of Big Array-Based Climate Data with MapReduce

    Science.gov (United States)

    Li, Zhenlong; Hu, Fei; Schnase, John L.; Duffy, Daniel Q.; Lee, Tsengdar; Bowen, Michael K.; Yang, Chaowei

    2016-01-01

    Climate observations and model simulations are producing vast amounts of array-based spatiotemporal data. Efficient processing of these data is essential for assessing global challenges such as climate change, natural disasters, and diseases. This is challenging not only because of the large data volume, but also because of the intrinsic high-dimensional nature of geoscience data. To tackle this challenge, we propose a spatiotemporal indexing approach to efficiently manage and process big climate data with MapReduce in a highly scalable environment. Using this approach, big climate data are directly stored in a Hadoop Distributed File System in their original, native file format. A spatiotemporal index is built to bridge the logical array-based data model and the physical data layout, which enables fast data retrieval when performing spatiotemporal queries. Based on the index, a data-partitioning algorithm is applied to enable MapReduce to achieve high data locality, as well as balancing the workload. The proposed indexing approach is evaluated using the National Aeronautics and Space Administration (NASA) Modern-Era Retrospective Analysis for Research and Applications (MERRA) climate reanalysis dataset. The experimental results show that the index can significantly accelerate querying and processing (a 10× speedup compared to the baseline test using the same computing cluster), while keeping the index-to-data ratio small (0.0328). The applicability of the indexing approach is demonstrated by a climate anomaly detection deployed on a NASA Hadoop cluster. This approach is also able to support efficient processing of general array-based spatiotemporal data in various geoscience domains without special configuration on a Hadoop cluster.
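
    The core idea of such an index can be sketched very simply: each (time step, latitude block, longitude block) chunk of a native-format file is mapped to its byte offset, so a spatiotemporal query only touches the matching splits. The grid layout, block size and contiguous ordering below are assumptions for illustration, not the MERRA/Hadoop implementation.

```python
def build_index(n_times, n_lat_blocks, n_lon_blocks, block_bytes):
    """Toy spatiotemporal index: key = (t, lat_block, lon_block), value = byte
    offset of that block inside the native array file, assuming the blocks are
    laid out contiguously in (t, lat, lon) order."""
    index, offset = {}, 0
    for t in range(n_times):
        for i in range(n_lat_blocks):
            for j in range(n_lon_blocks):
                index[(t, i, j)] = offset
                offset += block_bytes
    return index

def query(index, t_range, lat_blocks, lon_blocks):
    """Offsets a MapReduce job would read for one spatiotemporal query."""
    return [index[(t, i, j)]
            for t in t_range for i in lat_blocks for j in lon_blocks]

idx = build_index(n_times=24, n_lat_blocks=4, n_lon_blocks=8, block_bytes=4096)
print(query(idx, range(6, 9), [1, 2], [3]))   # a small space-time subset
```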

  12. Adaptive RAC codes employing statistical channel evaluation ...

    African Journals Online (AJOL)

    An adaptive encoding technique using row and column array (RAC) codes employing a different number of parity columns that depends on the channel state is proposed in this paper. The trellises of the proposed adaptive codes and a statistical channel evaluation technique employing these trellises are designed and ...

  13. Recommendations for elaboration, transcultural adaptation and validation process of tests in Speech, Hearing and Language Pathology.

    Science.gov (United States)

    Pernambuco, Leandro; Espelt, Albert; Magalhães, Hipólito Virgílio; Lima, Kenio Costa de

    2017-06-08

    To present a guide with recommendations for translation, adaptation, elaboration and process of validation of tests in Speech and Language Pathology. The recommendations were based on international guidelines with a focus on the elaboration, translation, cross-cultural adaptation and validation process of tests. The recommendations were grouped into two charts, one of them with procedures for translation and transcultural adaptation and the other for obtaining evidence of validity, reliability and measures of accuracy of the tests. A guide with norms for the organization and systematization of the process of elaboration, translation, cross-cultural adaptation and validation process of tests in Speech and Language Pathology was created.

  14. Interface for Barge-in Free Spoken Dialogue System Based on Sound Field Reproduction and Microphone Array

    Directory of Open Access Journals (Sweden)

    Hinamoto Yoichi

    2007-01-01

    Full Text Available A barge-in free spoken dialogue interface using sound field control and microphone array is proposed. In the conventional spoken dialogue system using an acoustic echo canceller, it is indispensable to estimate a room transfer function, especially when the transfer function is changed by various interferences. However, the estimation is difficult when the user and the system speak simultaneously. To resolve the problem, we propose a sound field control technique to prevent the response sound from being observed. Combined with a microphone array, the proposed method can achieve high elimination performance with no adaptive process. The efficacy of the proposed interface is ascertained in the experiments on the basis of sound elimination and speech recognition.

  15. Integration of spintronic interface for nanomagnetic arrays

    Directory of Open Access Journals (Sweden)

    Andrew Lyle

    2011-12-01

    Full Text Available An experimental demonstration utilizing a spintronic input/output (I/O) interface for arrays of closely spaced nanomagnets is presented. The free layers of magnetic tunnel junctions (MTJs) form dipole coupled nanomagnet arrays which can be applied to different contexts including Magnetic Quantum Cellular Automata (MQCA) for logic applications and self-biased devices for field sensing applications. Dipole coupled nanomagnet arrays demonstrate adaptability to a variety of contexts due to the ability to tune their magnetic response. Spintronics allows individual nanomagnets to be manipulated with spin transfer torque and monitored with magnetoresistance. This facilitates measurement of the magnetic coupling which is important for (yet to be demonstrated) data propagation reliability studies. In addition, the same magnetic coupling can be tuned to reduce coercivity for field sensing. Dipole coupled nanomagnet arrays have the potential to be thousands of times more energy efficient than CMOS technology for logic applications, and they also have the potential to form multi-axis field sensors.

  16. Processing-Efficient Distributed Adaptive RLS Filtering for Computationally Constrained Platforms

    Directory of Open Access Journals (Sweden)

    Noor M. Khan

    2017-01-01

    Full Text Available In this paper, a novel processing-efficient architecture of a group of inexpensive and computationally limited small platforms is proposed for parallely distributed adaptive signal processing (PDASP) operation. The proposed architecture runs computationally expensive procedures, such as the complex adaptive recursive least squares (RLS) algorithm, cooperatively. The proposed PDASP architecture operates properly even if perfect time alignment among the participating platforms is not available. An RLS algorithm for MIMO channel estimation is deployed on the proposed architecture. The complexity and processing time of the PDASP scheme with the MIMO RLS algorithm are compared with those of the sequentially operated MIMO RLS algorithm and the linear Kalman filter. It is observed that the PDASP scheme exhibits much lower computational complexity when operated in parallel than the sequential MIMO RLS algorithm as well as the Kalman filter. Moreover, the proposed architecture reduces processing time by 95.83% and 82.29% compared to the sequentially operated Kalman filter and MIMO RLS algorithm, respectively, for a low Doppler rate. Likewise, for a high Doppler rate, the proposed architecture reduces processing time by 94.12% and 77.28% compared to the Kalman and RLS algorithms, respectively.
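
    For readers unfamiliar with the recursion being distributed, the following minimal sketch shows a standard single-node RLS update identifying a short channel; it is not the PDASP architecture itself, and the forgetting factor, filter length and noise level are illustrative.

```python
# Standard recursive least squares (RLS) update, the building block the
# PDASP scheme runs cooperatively across platforms.
import numpy as np

def rls_update(w, P, x, d, lam=0.99):
    """One RLS step: w = weights, P = inverse correlation matrix,
    x = regressor vector, d = desired sample, lam = forgetting factor."""
    x = x.reshape(-1, 1)
    Px = P @ x
    k = Px / (lam + (x.T @ Px).item())   # gain vector
    e = d - (w.T @ x).item()             # a-priori error
    w = w + k * e
    P = (P - k @ (x.T @ P)) / lam
    return w, P, e

# toy identification of a 4-tap channel
rng = np.random.default_rng(0)
h_true = np.array([0.5, -0.3, 0.2, 0.1])
w = np.zeros((4, 1)); P = np.eye(4) * 100.0
for _ in range(500):
    x = rng.standard_normal(4)
    d = float(h_true @ x) + 0.01 * rng.standard_normal()
    w, P, _ = rls_update(w, P, x, d)
print(np.round(w.ravel(), 3))  # close to h_true
```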

  17. Fast Spectral Velocity Estimation Using Adaptive Techniques: In-Vivo Results

    DEFF Research Database (Denmark)

    Gran, Fredrik; Jakobsson, Andreas; Udesen, Jesper

    2007-01-01

    Adaptive spectral estimation techniques are known to provide good spectral resolution and contrast even when the observation window (OW) is very short. In this paper two adaptive techniques are tested and compared to the averaged periodogram (Welch) for blood velocity estimation. The Blood Power...... the blood process over slow-time and averaging over depth to find the power spectral density estimate. In this paper, the two adaptive methods are explained, and performance is assessed in controlled steady flow experiments and in-vivo measurements. The three methods were tested on a circulating flow rig...... with a blood mimicking fluid flowing in the tube. The scanning section is submerged in water to allow ultrasound data acquisition. Data was recorded using a BK8804 linear array transducer and the RASMUS ultrasound scanner. The controlled experiments showed that the OW could be significantly reduced when......
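
    The record is truncated, but the adaptive estimators compared against Welch are Capon-type methods. The sketch below is a generic Capon (minimum-variance) power spectral density estimator applied to a synthetic complex exponential; the filter order, diagonal loading and test signal are illustrative and do not reproduce the paper's blood-velocity processing.

```python
# Generic Capon (minimum-variance) spectral estimator.
import numpy as np

def capon_psd(x, order=16, n_freqs=256, load=1e-3):
    # sample covariance built from length-`order` snapshots
    snaps = np.array([x[i:i + order] for i in range(len(x) - order)])
    R = (snaps.T @ snaps.conj()) / snaps.shape[0]
    R += load * np.trace(R).real / order * np.eye(order)   # diagonal loading
    R_inv = np.linalg.inv(R)
    freqs = np.linspace(-0.5, 0.5, n_freqs, endpoint=False)
    psd = np.empty(n_freqs)
    for i, f in enumerate(freqs):
        a = np.exp(2j * np.pi * f * np.arange(order))       # steering vector
        psd[i] = 1.0 / np.real(a.conj() @ R_inv @ a)
    return freqs, psd

rng = np.random.default_rng(1)
n = np.arange(200)
x = np.exp(2j * np.pi * 0.12 * n) + 0.1 * (rng.standard_normal(200)
                                           + 1j * rng.standard_normal(200))
freqs, psd = capon_psd(x)
print(freqs[np.argmax(psd)])   # close to the true normalized frequency 0.12
```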

  18. Rapid prototyping of biodegradable microneedle arrays by integrating CO2 laser processing and polymer molding

    International Nuclear Information System (INIS)

    Tu, K T; Chung, C K

    2016-01-01

    An integrated technology of CO2 laser processing and polymer molding has been demonstrated for the rapid prototyping of biodegradable poly-lactic-co-glycolic acid (PLGA) microneedle arrays. Rapid and low-cost CO2 laser processing was used for the fabrication of a high-aspect-ratio microneedle master mold instead of conventional time-consuming and expensive photolithography and etching processes. It is crucial to use flexible polydimethylsiloxane (PDMS) to detach PLGA. However, the direct CO2 laser-ablated PDMS could generate poor surfaces with bulges, scorches, re-solidification and shrinkage. Here, we have combined the polymethyl methacrylate (PMMA) ablation and two-step PDMS casting process to form a PDMS female microneedle mold to eliminate the problem of direct ablation. A self-assembled monolayer polyethylene glycol was coated to prevent stiction between the two PDMS layers during the peeling-off step in the PDMS-to-PDMS replication. Then the PLGA microneedle array was successfully released by bending the second-cast PDMS mold with flexibility and hydrophobic property. The depth of the polymer microneedles can range from hundreds of micrometers to millimeters. It is linked to the PMMA pattern profile and can be adjusted by CO2 laser power and scanning speed. The proposed integration process is maskless, simple and low-cost for rapid prototyping with a reusable mold. (paper)

  19. Rapid prototyping of biodegradable microneedle arrays by integrating CO2 laser processing and polymer molding

    Science.gov (United States)

    Tu, K. T.; Chung, C. K.

    2016-06-01

    An integrated technology of CO2 laser processing and polymer molding has been demonstrated for the rapid prototyping of biodegradable poly-lactic-co-glycolic acid (PLGA) microneedle arrays. Rapid and low-cost CO2 laser processing was used for the fabrication of a high-aspect-ratio microneedle master mold instead of conventional time-consuming and expensive photolithography and etching processes. It is crucial to use flexible polydimethylsiloxane (PDMS) to detach PLGA. However, the direct CO2 laser-ablated PDMS could generate poor surfaces with bulges, scorches, re-solidification and shrinkage. Here, we have combined the polymethyl methacrylate (PMMA) ablation and two-step PDMS casting process to form a PDMS female microneedle mold to eliminate the problem of direct ablation. A self-assembled monolayer polyethylene glycol was coated to prevent stiction between the two PDMS layers during the peeling-off step in the PDMS-to-PDMS replication. Then the PLGA microneedle array was successfully released by bending the second-cast PDMS mold with flexibility and hydrophobic property. The depth of the polymer microneedles can range from hundreds of micrometers to millimeters. It is linked to the PMMA pattern profile and can be adjusted by CO2 laser power and scanning speed. The proposed integration process is maskless, simple and low-cost for rapid prototyping with a reusable mold.

  20. A conceptual model for the development process of confirmatory adaptive clinical trials within an emergency research network.

    Science.gov (United States)

    Mawocha, Samkeliso C; Fetters, Michael D; Legocki, Laurie J; Guetterman, Timothy C; Frederiksen, Shirley; Barsan, William G; Lewis, Roger J; Berry, Donald A; Meurer, William J

    2017-06-01

    Adaptive clinical trials use accumulating data from enrolled subjects to alter trial conduct in pre-specified ways based on quantitative decision rules. In this research, we sought to characterize the perspectives of key stakeholders during the development process of confirmatory-phase adaptive clinical trials within an emergency clinical trials network and to build a model to guide future development of adaptive clinical trials. We used an ethnographic, qualitative approach to evaluate key stakeholders' views about the adaptive clinical trial development process. Stakeholders participated in a series of multidisciplinary meetings during the development of five adaptive clinical trials and completed a Strengths-Weaknesses-Opportunities-Threats questionnaire. In the analysis, we elucidated overarching themes across the stakeholders' responses to develop a conceptual model. Four major overarching themes emerged during the analysis of stakeholders' responses to questioning: the perceived statistical complexity of adaptive clinical trials and the roles of collaboration, communication, and time during the development process. Frequent and open communication and collaboration were viewed by stakeholders as critical during the development process, as were the careful management of time and logistical issues related to the complexity of planning adaptive clinical trials. The Adaptive Design Development Model illustrates how statistical complexity, time, communication, and collaboration are moderating factors in the adaptive design development process. The intensity and iterative nature of this process underscores the need for funding mechanisms for the development of novel trial proposals in academic settings.

  1. Logarithmic Adaptive Neighborhood Image Processing (LANIP): Introduction, Connections to Human Brightness Perception, and Application Issues

    OpenAIRE

    J. Debayle; J.-C. Pinoli

    2007-01-01

    A new framework for image representation, processing, and analysis is introduced and exposed through practical applications. The proposed approach is called logarithmic adaptive neighborhood image processing (LANIP) since it is based on the logarithmic image processing (LIP) and on the general adaptive neighborhood image processing (GANIP) approaches, that allow several intensity and spatial properties of the human brightness perception to be mathematically modeled and operationalized, and c...

  2. A dual-directional light-control film with a high-sag and high-asymmetrical-shape microlens array fabricated by a UV imprinting process

    International Nuclear Information System (INIS)

    Lin, Ta-Wei; Liao, Yunn-Shiuan; Chen, Chi-Feng; Yang, Jauh-Jung

    2008-01-01

    A dual-directional light-control film with a high-sag and high-asymmetric-shape long gapless hexagonal microlens array fabricated by an ultraviolet (UV) imprinting process is presented. Such a lens array is designed by ray-tracing simulation and fabricated by a micro-replication process including gray-scale lithography, electroplating process and UV curing. The shape of the designed lens array is similar to that of a near half-cylindrical lens array with a periodical ripple. The measurement results of a prototype show that the incident light from a collimated LED with an FWHM of dispersion angle of 12° is diversified differently in the short and long axes. The numerical and experimental results show that the FWHMs of the view angle for angular brightness in the long and short axis directions through the long hexagonal lens are about 34.3° and 18.1° and 31° and 13°, respectively. Compared with the simulation result, the errors in the long and short axes are about 5% and 16%, respectively. Obviously, the asymmetric gapless microlens array can realize the aim of controlled asymmetric angular brightness. Such a light-control film can be used as a power-saving screen, compared with a conventional diffusing film, for rear-projection display applications

  3. Adaptive Convergence Rates of a Dirichlet Process Mixture of Multivariate Normals

    OpenAIRE

    Tokdar, Surya T.

    2011-01-01

    It is shown that a simple Dirichlet process mixture of multivariate normals offers Bayesian density estimation with adaptive posterior convergence rates. Toward this, a novel sieve for non-parametric mixture densities is explored, and its rate adaptability to various smoothness classes of densities in arbitrary dimension is demonstrated. This sieve construction is expected to offer a substantial technical advancement in studying Bayesian non-parametric mixture models based on stick-breaking p...

  4. Is adaptation. Truly an adaptation?

    Directory of Open Access Journals (Sweden)

    Thais Flores Nogueira Diniz

    2008-04-01

    Full Text Available The article begins by historicizing film adaptation from the arrival of cinema, pointing out the many theoretical approaches under which the process has been seen: from the concept of “the same story told in a different medium” to a comprehensible definition such as “the process through which works can be transformed, forming an intersection of textual surfaces, quotations, conflations and inversions of other texts”. To illustrate this new concept, the article discusses Spike Jonze’s film Adaptation. according to James Naremore’s proposal which considers the study of adaptation as part of a general theory of repetition, joined with the study of recycling, remaking, and every form of retelling. The film deals with the attempt by the scriptwriter Charles Kaufman, played by Nicolas Cage, to adapt/translate a non-fictional book to the cinema, but ends up with a kind of film which is by no means what it intended to be: a film of action in the model of Hollywood productions. During the process of creation, Charles and his twin brother, Donald, undergo a series of adventures involving some real persons from the world of film, the author and the protagonist of the book, all of them turning into fictional characters in the film. In the film, adaptation then signifies something different from its traditional meaning.

  5. Optimization of ultrasonic arrays design and setting using a differential evolution

    International Nuclear Information System (INIS)

    Puel, B.; Chatillon, S.; Calmon, P.; Lesselier, D.

    2011-01-01

    Optimizing both the design and the settings of phased arrays is not easy when it is performed manually via parametric studies. An optimization method based on an evolutionary algorithm and numerical simulation is proposed and evaluated. The Randomized Adaptive Differential Evolution algorithm has been adapted to meet the specificities of non-destructive testing applications. In particular, multi-objective problems are addressed through the implementation of the concept of Pareto-optimal sets of solutions. The algorithm has been implemented and connected to the ultrasonic simulation modules of the CIVA software, used as the forward model. The efficiency of the method is illustrated on two realistic cases of application: optimization of the position and delay laws of a flexible array inspecting a nozzle, considered as a mono-objective problem; and optimization of the design of a surrounded array and its delay laws, considered as a constrained bi-objective problem. (authors)
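
    As a generic illustration of evolutionary optimization of delay laws (not the Randomized Adaptive Differential Evolution coupled to CIVA described above), the sketch below uses SciPy's differential evolution to choose element firing delays that make a linear array's wavefronts arrive in phase at a chosen focal point; the array geometry, wave speed and delay bounds are assumed values.

```python
# Tune element delay laws with differential evolution so all wavefronts
# arrive simultaneously at a chosen focal point.
import numpy as np
from scipy.optimize import differential_evolution

c = 5900.0                        # m/s, assumed longitudinal wave speed in steel
pitch = 1e-3                      # element pitch, m
elements = np.arange(16) * pitch  # element x-positions
focus = np.array([8e-3, 20e-3])   # (x, z) focal point, m

def misfit(delays):
    # arrival time at the focus for each element, including its firing delay
    dist = np.hypot(elements - focus[0], focus[1])
    arrivals = dist / c + delays
    return np.var(arrivals)       # zero variance = perfectly coherent focusing

bounds = [(0.0, 2e-6)] * len(elements)
result = differential_evolution(misfit, bounds, seed=1, maxiter=300)
print("arrival-time variance at the focus:", result.fun)
```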

  6. X-ray imager using solution processed organic transistor arrays and bulk heterojunction photodiodes on thin, flexible plastic substrate

    NARCIS (Netherlands)

    Gelinck, G.H.; Kumar, A.; Moet, D.; Steen, J.L. van der; Shafique, U.; Malinowski, P.E.; Myny, K.; Rand, B.P.; Simon, M.; Rütten, W.; Douglas, A.; Jorritsma, J.; Heremans, P.L.; Andriessen, H.A.J.M.

    2013-01-01

    We describe the fabrication and characterization of large-area active-matrix X-ray/photodetector array of high quality using organic photodiodes and organic transistors. All layers with the exception of the electrodes are solution processed. Because it is processed on a very thin plastic substrate

  7. A self-adaptive thermal switch array for rapid temperature stabilization under various thermal power inputs

    International Nuclear Information System (INIS)

    Geng, Xiaobao; Patel, Pragnesh; Narain, Amitabh; Meng, Dennis Desheng

    2011-01-01

    A self-adaptive thermal switch array (TSA) based on actuation by low-melting-point alloy droplets is reported to stabilize the temperature of a heat-generating microelectromechanical system (MEMS) device within a predetermined range (i.e. the optimal working temperature of the device) with neither a control circuit nor electrical power consumption. When the temperature is below this range, the TSA stays off and works as a thermal insulator. Therefore, the MEMS device can quickly heat itself up to its optimal working temperature during startup. Once this temperature is reached, the TSA is automatically turned on to increase the thermal conductance, working as an effective thermal spreader. As a result, the MEMS device tends to stay at its optimal working temperature without complex thermal management components and the associated parasitic power loss. A prototype TSA was fabricated and characterized to prove the concept. The stabilization temperatures under various power inputs have been studied both experimentally and theoretically. Under the increment of power input from 3.8 to 5.8 W, the temperature of the device increased only by 2.5 °C due to the stabilization effect of the TSA

  8. Optimal control of stretching process of flexible solar arrays on spacecraft based on a hybrid optimization strategy

    Directory of Open Access Journals (Sweden)

    Qijia Yao

    2017-07-01

    Full Text Available The optimal control of multibody spacecraft during the stretching process of solar arrays is investigated, and a hybrid optimization strategy based on the Gauss pseudospectral method (GPM) and the direct shooting method (DSM) is presented. First, the elastic deformation of flexible solar arrays was described approximately by the assumed mode method, and a dynamic model was established using the Lagrange equations of the second kind. Then, the nonholonomic motion planning problem is transformed into a nonlinear programming problem by using GPM. By giving fewer LG points, initial values of the state variables and control variables were obtained. A serial optimization framework was adopted to obtain the approximate optimal solution from a feasible solution. Finally, the control variables were discretized at LG points, and the precise optimal control inputs were obtained by DSM. The optimal trajectory of the system can be obtained through numerical integration. Through numerical simulation, the stretching process of solar arrays is stable with no detours, and the control inputs match the various constraints of actual conditions. The results indicate that the method is effective with good robustness. Keywords: Motion planning, Multibody spacecraft, Optimal control, Gauss pseudospectral method, Direct shooting method

  9. A review of culturally adapted versions of the Oswestry Disability Index: the adaptation process, construct validity, test-retest reliability and internal consistency.

    Science.gov (United States)

    Sheahan, Peter J; Nelson-Wong, Erika J; Fischer, Steven L

    2015-01-01

    The Oswestry Disability Index (ODI) is a self-report-based outcome measure used to quantify the extent of disability related to low back pain (LBP), a substantial contributor to workplace absenteeism. The ODI tool has been adapted for use by patients in several non-English speaking nations. It is unclear, however, if these adapted versions of the ODI are as credible as the original ODI developed for English-speaking nations. The objective of this study was to conduct a review of the literature to identify culturally adapted versions of the ODI and to report on the adaptation process, construct validity, test-retest reliability and internal consistency of these ODIs. Following a pragmatic review process, data were extracted from each study with regard to these four outcomes. While most studies applied adaptation processes in accordance with best-practice guidelines, there were some deviations. However, all studies reported high-quality psychometric properties: group mean construct validity was 0.734 ± 0.094 (indicated via a correlation coefficient), test-retest reliability was 0.937 ± 0.032 (indicated via an intraclass correlation coefficient) and internal consistency was 0.876 ± 0.047 (indicated via Cronbach's alpha). Researchers can be confident when using any of these culturally adapted ODIs, or when comparing and contrasting results between cultures where these versions were employed. Implications for Rehabilitation Low back pain is the second leading cause of disability in the world, behind only cancer. The Oswestry Disability Index (ODI) has been developed as a self-report outcome measure of low back pain for administration to patients. An understanding of the various cross-cultural adaptations of the ODI is important for more concerted multi-national research efforts. This review examines 16 cross-cultural adaptations of the ODI and should inform the work of health care and rehabilitation professionals.

  10. Multi-Model Adaptive Fuzzy Controller for a CSTR Process

    Directory of Open Access Journals (Sweden)

    Shubham Gogoria

    2015-09-01

    Full Text Available Continuous stirred tank reactors (CSTRs) are intensively used to control exothermic reactions in the chemical industry. A CSTR is a very complex multi-variable system with non-linear characteristics. This paper deals with linearization of the mathematical model of a CSTR process. A multi-model adaptive fuzzy controller has been designed to control the reactor concentration and temperature of the CSTR process. This method combines the outputs of multiple fuzzy controllers, which are operated at various operating points. The proposed solution is a straightforward implementation of a fuzzy controller with a gain scheduler to control the linearly inseparable parameters of a highly non-linear process.
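
    A minimal sketch of the multi-model idea, under assumed gains and operating points: local PI gain sets tuned at different reactor concentrations are blended with triangular membership weights. The actual fuzzy controllers and CSTR model of the record are not reproduced.

```python
# Blend locally tuned PI controllers by membership of the current operating point.
import numpy as np

operating_points = np.array([0.02, 0.06, 0.10])   # illustrative concentration levels
Kp_local = np.array([120.0, 80.0, 50.0])          # one PI gain set per region (assumed)
Ki_local = np.array([15.0, 10.0, 6.0])

def memberships(c):
    """Triangular membership of the measured concentration in each region."""
    w = np.clip(1.0 - np.abs(c - operating_points) / 0.04, 0.0, None)
    return w / w.sum() if w.sum() > 0 else np.ones_like(w) / w.size

def blended_pi(error, integral, c):
    w = memberships(c)
    Kp = w @ Kp_local          # scheduled proportional gain
    Ki = w @ Ki_local          # scheduled integral gain
    return Kp * error + Ki * integral

print(blended_pi(error=0.005, integral=0.02, c=0.05))
```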

  11. Advances on Frequency Diverse Array Radar and Its Applications

    Directory of Open Access Journals (Sweden)

    Wang Wenqin

    2018-04-01

    Full Text Available Unlike the conventional phased array that provides only an angle-dependent transmit beampattern, a Frequency Diverse Array (FDA) employs a small frequency increment across its array elements to produce automatic beam scanning without requiring phase shifters or mechanical steering. FDA can produce both range-dependent and time-variant transmit beampatterns, which overcomes the disadvantage of conventional phased arrays that produce only an angle-dependent beampattern. Thus, FDA has many promising applications. Based on a previous study conducted by the author, “Frequency Diverse Array Radar: Concept, Principle and Application” (Journal of Electronics & Information Technology, 2016, 38(4): 1000–1011), the current study introduces basic FDA radar concepts, principles, and application characteristics and reviews recent advances on FDA radar and its applications. In addition, several new promising applications of FDA technology are discussed, such as radar electronic warfare and radar-communications, as well as open technical challenges such as beampattern variance, effective receiver design, adaptive signal detection and estimation, and the implementation of practical FDA radar demos.
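
    The range- and time-dependence mentioned above follows directly from the per-element frequency offset. The sketch below evaluates a standard narrowband FDA transmit array factor with illustrative parameters (16 elements, 10 GHz carrier, 30 kHz increment); at a fixed instant and angle, the response changes with range, unlike a conventional phased array.

```python
# Narrowband frequency diverse array (FDA) transmit array factor.
import numpy as np

def fda_array_factor(theta, R, t, N=16, f0=10e9, df=30e3, d=0.015, c=3e8):
    n = np.arange(N)
    # per-element phase: frequency-offset term (range/time) + spatial term (angle)
    phase = 2 * np.pi * n * (df * (t - R / c) + f0 * d * np.sin(theta) / c)
    return np.abs(np.sum(np.exp(1j * phase))) / N

# at a fixed instant and angle, the transmit gain depends on range
print(fda_array_factor(theta=0.0, R=10e3, t=0.0))    # near 1.0 (coherent)
print(fda_array_factor(theta=0.0, R=12.5e3, t=0.0))  # near 0.0 (out of phase)
```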

  12. CRISPRDetect: A flexible algorithm to define CRISPR arrays.

    Science.gov (United States)

    Biswas, Ambarish; Staals, Raymond H J; Morales, Sergio E; Fineran, Peter C; Brown, Chris M

    2016-05-17

    CRISPR (clustered regularly interspaced short palindromic repeats) RNAs provide the specificity for noncoding RNA-guided adaptive immune defence systems in prokaryotes. CRISPR arrays consist of repeat sequences separated by specific spacer sequences. CRISPR arrays have previously been identified in a large proportion of prokaryotic genomes. However, currently available detection algorithms do not utilise recently discovered features regarding CRISPR loci. We have developed a new approach to automatically detect, predict and interactively refine CRISPR arrays. It is available as a web program and command line from bioanalysis.otago.ac.nz/CRISPRDetect. CRISPRDetect discovers putative arrays, extends the array by detecting additional variant repeats, corrects the direction of arrays, refines the repeat/spacer boundaries, and annotates different types of sequence variations (e.g. insertion/deletion) in near-identical repeats. Due to these features, CRISPRDetect has significant advantages when compared to existing identification tools. As well as further support for small, medium and large repeats, CRISPRDetect identified a class of arrays with 'extra-large' repeats in bacteria (repeats 44-50 nt). The CRISPRDetect output is integrated with other analysis tools. Notably, the predicted spacers can be directly utilised by CRISPRTarget to predict targets. CRISPRDetect enables more accurate detection of arrays and spacers and its GFF output is suitable for inclusion in genome annotation pipelines and visualisation. It has been used to analyse all complete bacterial and archaeal reference genomes.
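
    As a toy illustration of the core idea only (CRISPRDetect itself handles variant repeats, array direction, boundary refinement and annotation), the sketch below looks for a k-mer that recurs at roughly regular intervals and reads off the spacers between copies; the k-mer length, gap bounds and test sequence are arbitrary assumptions.

```python
# Toy repeat/spacer finder: report k-mers that recur with spacer-sized gaps.
from collections import defaultdict

def candidate_arrays(seq, k=23, min_copies=3, min_gap=20, max_gap=72):
    positions = defaultdict(list)
    for i in range(len(seq) - k + 1):
        positions[seq[i:i + k]].append(i)
    hits = []
    for word, pos in positions.items():
        if len(pos) < min_copies:
            continue
        gaps = [b - a - k for a, b in zip(pos, pos[1:])]   # spacer lengths
        if all(min_gap <= g <= max_gap for g in gaps):
            spacers = [seq[a + k:b] for a, b in zip(pos, pos[1:])]
            hits.append((word, pos, spacers))
    return hits

repeat = "GTTTTAGAGCTATGCTGTTTTGA"
spacers = ["ACGT" * 8, "TTGCA" * 6, "CATG" * 8]
toy = "N" * 30 + repeat + spacers[0] + repeat + spacers[1] + repeat + spacers[2] + repeat + "N" * 30
for word, pos, sp in candidate_arrays(toy):
    print(word, pos, [len(s) for s in sp])   # recovered repeat, positions, spacer lengths
```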

  13. Adaptive nonparametric estimation for Lévy processes observed at low frequency

    OpenAIRE

    Kappus, Johanna

    2013-01-01

    This article deals with adaptive nonparametric estimation for Lévy processes observed at low frequency. For general linear functionals of the Lévy measure, we construct kernel estimators, provide upper risk bounds and derive rates of convergence under regularity assumptions. Our focus lies on the adaptive choice of the bandwidth, using model selection techniques. We face here a non-standard problem of model selection with unknown variance. A new approach towards this problem is proposed, ...

  14. Application of adaptive digital signal processing to speech enhancement for the hearing impaired.

    Science.gov (United States)

    Chabries, D M; Christiansen, R W; Brey, R H; Robinette, M S; Harris, R W

    1987-01-01

    A major complaint of individuals with normal hearing and hearing impairments is a reduced ability to understand speech in a noisy environment. This paper describes the concept of adaptive noise cancelling for removing noise from corrupted speech signals. Application of adaptive digital signal processing has long been known and is described from a historical as well as technical perspective. The Widrow-Hoff LMS (least mean square) algorithm developed in 1959 forms the introduction to modern adaptive signal processing. This method uses a "primary" input which consists of the desired speech signal corrupted with noise and a second "reference" signal which is used to estimate the primary noise signal. By subtracting the adaptively filtered estimate of the noise, the desired speech signal is obtained. Recent developments in the field as they relate to noise cancellation are described. These developments include more computationally efficient algorithms as well as algorithms that exhibit improved learning performance. A second method for removing noise from speech, for use when no independent reference for the noise exists, is referred to as single channel noise suppression. Both adaptive and spectral subtraction techniques have been applied to this problem--often with the result of decreased speech intelligibility. Current techniques applied to this problem are described, including signal processing techniques that offer promise in the noise suppression application.
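
    A minimal sketch of the Widrow-Hoff LMS noise canceller described above: the primary channel carries speech plus correlated noise, the reference channel carries the noise source, and the adaptively filtered reference is subtracted. The filter length, step size and synthetic signals are illustrative.

```python
# Two-channel LMS adaptive noise cancellation.
import numpy as np

def lms_cancel(primary, reference, taps=16, mu=0.01):
    w = np.zeros(taps)
    out = np.zeros_like(primary)
    for n in range(taps, len(primary)):
        x = reference[n - taps + 1:n + 1][::-1]   # most recent reference samples
        noise_est = w @ x
        e = primary[n] - noise_est                # error = cleaned speech estimate
        w += 2 * mu * e * x                       # Widrow-Hoff LMS update
        out[n] = e
    return out

rng = np.random.default_rng(0)
t = np.arange(8000) / 8000.0
speech = np.sin(2 * np.pi * 440 * t)              # stand-in for the speech signal
noise_src = rng.standard_normal(t.size)
# primary channel = speech + noise passed through an unknown causal path
primary = speech + np.convolve(noise_src, [0.6, 0.3, 0.1])[:t.size]
cleaned = lms_cancel(primary, noise_src)
print("residual noise power:", np.mean((cleaned[1000:] - speech[1000:]) ** 2))
```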

  15. [The Psychosocial Adaptation Process of Psychiatric Nurses Working in Community Mental Health Centers].

    Science.gov (United States)

    Min, So Young

    2015-12-01

    The aim of this study was to verify psychosocial issues faced by psychiatric and community mental health nurse practitioners (PCMHNP) working in community mental health centers, and to identify the adaptation processes used to resolve the issues. Data were collected through in-depth interviews between December 2013 and August 2014. Participants were 11 PCMHNP working in community mental health centers. Analysis was done using the grounded theory methodology. The first question was "How did you start working at a community mental health center; what were the difficulties you faced during your employment and how did you resolve them?" The core category was 'regulating within relationships.' The adaptation process was categorized into three sequential stages: 'nesting,' 'hanging around the nest,' and 'settling into the nest.' Various action/interaction strategies were employed in these stages. The adaptation results from using these strategies were 'psychiatric nursing within life' and 'a long way to go.' The results of this study are significant as they aid in understanding the psychosocial adaptation processes of PCMHNP working in community mental health centers, and indicate areas to be addressed in the future in order for PCMHNP to fulfill their professional role in the local community.

  16. An adaptive deep-coupled GNSS/INS navigation system with hybrid pre-filter processing

    Science.gov (United States)

    Wu, Mouyan; Ding, Jicheng; Zhao, Lin; Kang, Yingyao; Luo, Zhibin

    2018-02-01

    The deep-coupling of a global navigation satellite system (GNSS) with an inertial navigation system (INS) can provide accurate and reliable navigation information. There are several kinds of deeply-coupled structures. These can be divided mainly into coherent and non-coherent pre-filter based structures, each of which has its own advantages and disadvantages, especially in accuracy and robustness. In this paper, the existing pre-filters of the deeply-coupled structures are first analyzed and modified to improve them. Then, an adaptive GNSS/INS deeply-coupled algorithm with hybrid pre-filter processing is proposed to combine the advantages of coherent and non-coherent structures. An adaptive hysteresis controller is designed to implement the hybrid pre-filter processing strategy. The simulation and vehicle test results show that the adaptive deeply-coupled algorithm with hybrid pre-filter processing can effectively improve navigation accuracy and robustness, especially in a GNSS-challenged environment.

  17. Adaptive Layer Height During DLP Materials Processing

    DEFF Research Database (Denmark)

    Pedersen, David Bue; Zhang, Yang; Nielsen, Jakob Skov

    2016-01-01

    for considerable process speedup during the Additive Manufacture of components that contain areas of low cross-section variability, at no loss of surface quality. The adaptive slicing strategy was tested with a purpose built vat polymerisation system and numerical engine designed and constructed to serve as a Next......-Gen technology platform. By means of assessing hemispherical manufactured test specimen and through 3D surface mapping with variable-focus microscopy and confocal microscopy, a balance between minimal loss of surface quality with a maximal increase of manufacturing rate has been identified as a simple angle...

  18. Adaptive algorithm of magnetic heading detection

    Science.gov (United States)

    Liu, Gong-Xu; Shi, Ling-Feng

    2017-11-01

    Magnetic data obtained from a magnetic sensor usually fluctuate in a certain range, which makes it difficult to estimate the magnetic heading accurately. In fact, magnetic heading information is usually submerged in noise because of all kinds of electromagnetic interference and the diversity of the pedestrian’s motion states. In order to solve this problem, a new adaptive algorithm based on the (typically) right-angled corridors of a building or residential buildings is put forward to process heading information. First, a 3D indoor localization platform is set up based on MPU9250. Then, several groups of data are measured by changing the experimental environment and pedestrian’s motion pace. The raw data from the attached inertial measurement unit are calibrated and arranged into a time-stamped array and written to a data file. Later, the data file is imported into MATLAB for processing and analysis using the proposed adaptive algorithm. Finally, the algorithm is verified by comparison with the existing algorithm. The experimental results show that the algorithm has strong robustness and good fault tolerance, which can detect the heading information accurately and in real-time.

  19. ASIC Readout Circuit Architecture for Large Geiger Photodiode Arrays

    Science.gov (United States)

    Vasile, Stefan; Lipson, Jerold

    2012-01-01

    The objective of this work was to develop a new class of readout integrated circuit (ROIC) arrays to be operated with Geiger avalanche photodiode (GPD) arrays, by integrating multiple functions at the pixel level (smart-pixel or active pixel technology) in 250-nm CMOS (complementary metal oxide semiconductor) processes. In order to pack a maximum of functions within a minimum pixel size, the ROIC array is a full, custom application-specific integrated circuit (ASIC) design using a mixed-signal CMOS process with compact primitive layout cells. The ROIC array was processed to allow assembly in bump-bonding technology with photon-counting infrared detector arrays into 3-D imaging cameras (LADAR). The ROIC architecture was designed to work with either common- anode Si GPD arrays or common-cathode InGaAs GPD arrays. The current ROIC pixel design is hardwired prior to processing one of the two GPD array configurations, and it has the provision to allow soft reconfiguration to either array (to be implemented into the next ROIC array generation). The ROIC pixel architecture implements the Geiger avalanche quenching, bias, reset, and time to digital conversion (TDC) functions in full-digital design, and uses time domain over-sampling (vernier) to allow high temporal resolution at low clock rates, increased data yield, and improved utilization of the laser beam.

  20. Fiber optic modification of a diode array spectrophotometer

    International Nuclear Information System (INIS)

    Van Hare, D.R.; Prather, W.S.

    1986-01-01

    Fiber optics were adapted to a Hewlett-Packard diode array spectrophotometer to permit the analysis of radioactive samples without risking contamination of the instrument. Instrument performance was not compromised by the fiber optics. The instrument is in routine use at the Savannah River Plant control laboratories

  1. Remote online process measurements by a fiber optic diode array spectrometer

    International Nuclear Information System (INIS)

    Van Hare, D.R.; Prather, W.S.; O'Rourke, P.E.

    1986-01-01

    The development of remote online monitors for radioactive process streams is an active research area at the Savannah River Laboratory (SRL). A remote offline spectrophotometric measurement system has been developed and used at the Savannah River Plant (SRP) for the past year to determine the plutonium concentration of process solution samples. The system consists of a commercial diode array spectrophotometer modified with fiber optic cables that allow the instrument to be located remotely from the measurement cell. Recently, a fiber optic multiplexer has been developed for this instrument, which allows online monitoring of five locations sequentially. The multiplexer uses a motorized micrometer to drive one of five sets of optical fibers into the optical path of the instrument. A sixth optical fiber is used as an external reference and eliminates the need to flush out process lines to re-reference the spectrophotometer. The fiber optic multiplexer has been installed in a process prototype facility to monitor uranium loading and breakthrough of ion exchange columns. The design of the fiber optic multiplexer is discussed and data from the prototype facility are presented to demonstrate the capabilities of the measurement system

  2. Microneedle array electrode for human EEG recording.

    NARCIS (Netherlands)

    Lüttge, Regina; van Nieuwkasteele-Bystrova, Svetlana Nikolajevna; van Putten, Michel Johannes Antonius Maria; Vander Sloten, Jos; Verdonck, Pascal; Nyssen, Marc; Haueisen, Jens

    2009-01-01

    Microneedle array electrodes for EEG significantly reduce the mounting time, particularly by circumvention of the need for skin preparation by scrubbing. We designed a new replication process for numerous types of microneedle arrays. Here, polymer microneedle array electrodes with 64 microneedles,

  3. Design considerations for large roof-integrated photovoltaic arrays

    Energy Technology Data Exchange (ETDEWEB)

    Ropp, M.E.; Begovic, M.; Rohatgi, A. [Georgia Inst. of Tech., Atlanta, GA (United States); Long, R. [Georgia Institute of Technology, Atlanta (United States). Office of Facilities

    1997-01-01

    This paper describes calculations and modeling used in the design of the photovoltaic (PV) array built on the roof of the Georgia Tech Aquatic Center, the aquatic sports venue for the 1996 Olympic and Paralympic Games. The software package PVFORM (version 3.3) was extensively utilized; because of its importance to this work, it is thoroughly reviewed here. Procedures required to adapt PVFORM to this particular installation are described. The expected behavior and performance of the system, including maximum power output, annual energy output and maximum expected temperature, are then presented, and the use of this information in making informed design decisions is described. Finally, since the orientation of the PV array is not optimal, the effect of the unoptimized array orientation on the system's performance is quantified. (author)

  4. A fuzzy model based adaptive PID controller design for nonlinear and uncertain processes.

    Science.gov (United States)

    Savran, Aydogan; Kahraman, Gokalp

    2014-03-01

    We develop a novel adaptive tuning method for the classical proportional-integral-derivative (PID) controller that adjusts the PID gains when controlling nonlinear processes, a problem which is very difficult to overcome with classical PID controllers. By incorporating classical PID control, which is well known in industry, into the control of nonlinear processes, we introduce a method which can readily be used by industry. In this method, controller design does not require a first-principles model of the process, which is usually very difficult to obtain. Instead, it depends on a fuzzy process model which is constructed from the measured input-output data of the process. A soft limiter is used to impose industrial limits on the control input. The performance of the system is successfully tested on a bioreactor, a highly nonlinear process involving instabilities. Several tests showed the method's success in tracking, robustness to noise, and adaptation properties. We also compared our system's performance on a plant with altered parameters and measurement noise, and obtained less ringing and better tracking. To conclude, we present a novel adaptive control method, built upon the well-known PID architecture, that successfully controls highly nonlinear industrial processes, even under conditions such as strong parameter variations, noise, and instabilities. © 2013 Published by ISA on behalf of ISA.
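
    A minimal sketch of a discrete PID whose output passes through a soft limiter, as mentioned above; the fuzzy process model that adapts the gains is not reproduced, and the gains, limits and sample time are placeholder values.

```python
# Discrete PID with a tanh soft limiter on the control input.
import numpy as np

class SoftLimitedPID:
    def __init__(self, kp, ki, kd, u_max, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.u_max, self.dt = u_max, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        e = setpoint - measurement
        self.integral += e * self.dt
        derivative = (e - self.prev_error) / self.dt
        self.prev_error = e
        u = self.kp * e + self.ki * self.integral + self.kd * derivative
        return self.u_max * np.tanh(u / self.u_max)   # smooth saturation at +/- u_max

pid = SoftLimitedPID(kp=2.0, ki=0.5, kd=0.1, u_max=10.0, dt=0.1)
print(pid.step(setpoint=1.0, measurement=0.2))
```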

  5. A hidden Markov model approach for determining expression from genomic tiling micro arrays

    Directory of Open Access Journals (Sweden)

    Krogh Anders

    2006-05-01

    Full Text Available Abstract Background Genomic tiling microarrays have great potential for identifying previously undiscovered coding as well as non-coding transcription. To date, however, analyses of these data have been performed in an ad hoc fashion. Results We present a probabilistic procedure, ExpressHMM, that adaptively models tiling data prior to predicting expression on genomic sequence. A hidden Markov model (HMM) is used to model the distributions of tiling array probe scores in expressed and non-expressed regions. The HMM is trained on sets of probes mapped to regions of annotated expression and non-expression. Subsequently, prediction of transcribed fragments is made on tiled genomic sequence. The prediction is accompanied by an expression probability curve for visual inspection of the supporting evidence. We test ExpressHMM on data from the Cheng et al. (2005) tiling array experiments on ten human chromosomes. Results can be downloaded and viewed from our web site. Conclusion The value of adaptive modelling of fluorescence scores prior to categorisation into expressed and non-expressed probes is demonstrated. Our results indicate that our adaptive approach is superior to the previous analysis in terms of nucleotide sensitivity and transfrag specificity.
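
    As a toy version of the modelling idea (not ExpressHMM itself), the sketch below decodes a two-state HMM, with Gaussian emissions for non-expressed and expressed probes, over a short vector of probe scores using Viterbi; the emission parameters and transition probabilities are made-up values rather than trained ones.

```python
# Two-state (non-expressed / expressed) Gaussian HMM decoded with Viterbi.
import numpy as np

def viterbi_gauss(scores, means, stds, trans, start):
    n, k = len(scores), 2
    log_emit = -0.5 * ((scores[:, None] - means) / stds) ** 2 - np.log(stds)
    log_trans, log_start = np.log(trans), np.log(start)
    dp = np.zeros((n, k)); back = np.zeros((n, k), dtype=int)
    dp[0] = log_start + log_emit[0]
    for t in range(1, n):
        cand = dp[t - 1][:, None] + log_trans          # k x k transition scores
        back[t] = np.argmax(cand, axis=0)
        dp[t] = cand[back[t], np.arange(k)] + log_emit[t]
    path = np.zeros(n, dtype=int)
    path[-1] = np.argmax(dp[-1])
    for t in range(n - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path

probes = np.array([0.1, 0.2, 0.1, 1.9, 2.1, 2.0, 1.8, 0.2, 0.1])
path = viterbi_gauss(probes,
                     means=np.array([0.0, 2.0]), stds=np.array([0.5, 0.5]),
                     trans=np.array([[0.95, 0.05], [0.05, 0.95]]),
                     start=np.array([0.5, 0.5]))
print(path)   # the contiguous run of 1s marks the predicted transcribed fragment
```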

  6. The application of adaptive Luenberger observer concept in chemical process control: An algorithmic approach

    Science.gov (United States)

    Doko, Marthen Luther

    2017-05-01

    A wide class of on-line parameter estimation schemes is developed for estimating the unknown parameter vector that appears in general linear and bilinear parametric models, which arise as parametrizations of LTI processes or plants as well as of some special classes of nonlinear processes or plants. The result is used to design one of the important tools in control, the adaptive observer, for stable LTI processes or plants. This paper considers the design of schemes that simultaneously estimate the plant state variables and parameters by processing the plant I/O measurements on-line; such schemes are referred to as adaptive observers. The design of an adaptive observer combines a state observer, which estimates the state variables of a particular plant state-space representation, with an on-line estimation scheme. The choice of the plant state-space representation is crucial for the design and stability analysis of the adaptive observer. The paper discusses a class of observers called adaptive Luenberger observers and their application. Starting from the observable canonical form, one can find an observability matrix with n linearly independent rows. Using these rows, or linear combinations of them chosen as a basis, various canonical forms, also known as Luenberger canonical forms, can be obtained. This formulation also leads to various algorithms, including computation of the observable canonical form, the observable Hessenberg form, and reduced-order state observer design.
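
    A minimal sketch of the non-adaptive core, a discrete-time Luenberger observer reconstructing the state of a known LTI plant from its input and output; the system matrices and observer gain are illustrative, and the on-line parameter estimation that would make it adaptive is omitted.

```python
# Discrete-time Luenberger observer: x_hat[k+1] = A x_hat + B u + L (y - C x_hat).
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.6], [0.8]])              # observer gain (A - L C is stable)

def observer_step(x_hat, u, y):
    y_hat = C @ x_hat
    return A @ x_hat + B * u + L @ (y - y_hat)

x = np.array([[1.0], [0.5]]); x_hat = np.zeros((2, 1))
for k in range(50):
    u = 1.0
    y = C @ x                              # measured plant output
    x_hat = observer_step(x_hat, u, y)     # update estimate
    x = A @ x + B * u                      # true plant update
print(np.hstack([x, x_hat]))               # estimate converges to the true state
```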

  7. Acoustic array systems theory, implementation, and application

    CERN Document Server

    Bai, Mingsian R; Benesty, Jacob

    2013-01-01

    Presents a unified framework of far-field and near-field array techniques for noise source identification and sound field visualization, from theory to application. Acoustic Array Systems: Theory, Implementation, and Application provides an overview of microphone array technology with applications in noise source identification and sound field visualization. In the comprehensive treatment of microphone arrays, the topics covered include an introduction to the theory, far-field and near-field array signal processing algorithms, practical implementations, and common applic

  8. Networked Airborne Communications Using Adaptive Multi Beam Directional Links

    Science.gov (United States)

    2016-03-05

    New techniques are provided for increasing throughput in airborne adaptive directional networks. By adaptive directional linking, we mean systems that can...techniques can dramatically increase the capacity in airborne networks. Advances in digital array technology are beginning to put these gains within reach

  9. Adaptive capacity and human cognition: the process of individual adaptation to climate change

    Energy Technology Data Exchange (ETDEWEB)

    Grothmann, T. [Potsdam Institute for Climate Impact Research, Potsdam (Germany). Department of Global Change and Social Systems; Patt, A. [Boston University (United States). Department of Geography

    2005-10-01

    Adaptation has emerged as an important area of research and assessment among climate change scientists. Most scholarly work has identified resource constraints as being the most significant determinants of adaptation. However, empirical research on adaptation has so far mostly not addressed the importance of measurable and alterable psychological factors in determining adaptation. Drawing from the literature in psychology and behavioural economics, we develop a socio-cognitive Model of Private Proactive Adaptation to Climate Change (MPPACC). MPPACC separates out the psychological steps to taking action in response to perception, and allows one to see where the most important bottlenecks occur - including risk perception and perceived adaptive capacity, a factor largely neglected in previous climate change research. We then examine two case studies - one from urban Germany and one from rural Zimbabwe - to explore the validity of MPPACC to explaining adaptation. In the German study, we find that MPPACC provides better statistical power than traditional socio-economic models. In the Zimbabwean case study, we find a qualitative match between MPPACC and adaptive behaviour. Finally, we discuss the important implications of our findings both on vulnerability and adaptation assessments, and on efforts to promote adaptation through outside intervention. (author)

  10. Solar array flight dynamic experiment

    Science.gov (United States)

    Schock, Richard W.

    1987-01-01

    The purpose of the Solar Array Flight Dynamic Experiment (SAFDE) is to demonstrate the feasibility of on-orbit measurement and ground processing of large space structures' dynamic characteristics. Test definition or verification provides the dynamic characteristic accuracy required for control systems use. An illumination/measurement system was developed to fly on space shuttle flight STS-41D. The system was designed to dynamically evaluate a large solar array called the Solar Array Flight Experiment (SAFE) that had been scheduled for this flight. The SAFDE system consisted of a set of laser diode illuminators, retroreflective targets, an intelligent star tracker receiver and the associated equipment to power, condition, and record the results. In six tests on STS-41D, data was successfully acquired from 18 retroreflector targets and ground processed, post flight, to define the solar array's dynamic characteristic. The flight experiment proved the viability of on-orbit test definition of large space structures dynamic characteristics. Future large space structures controllability should be greatly enhanced by this capability.

  11. Optimized Adaptive Perturb and Observe Maximum Power Point Tracking Control for Photovoltaic Generation

    Directory of Open Access Journals (Sweden)

    Luigi Piegari

    2015-04-01

    Full Text Available The power extracted from PV arrays is usually maximized using maximum power point tracking algorithms. One of the most widely used techniques is the perturb & observe algorithm, which periodically perturbs the operating point of the PV array, sometimes with an adaptive perturbation step, and compares the PV power before and after the perturbation. This paper analyses the most suitable perturbation step to optimize maximum power point tracking performance and suggests a design criterion to select the parameters of the controller. Using this proposed adaptive step, the MPPT perturb & observe algorithm achieves an excellent dynamic response by adapting the perturbation step to the actual operating conditions of the PV array. The proposed algorithm has been validated and tested in a laboratory using a dual-input inductor push-pull converter. This particular converter topology is an efficient interface to boost the low voltage of PV arrays and effectively control the power flow when input or output voltages are variable. The experimental results have proved the superiority of the proposed algorithm in comparison with traditional perturb & observe and incremental conductance techniques.
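
    In the spirit of the abstract (though not the paper's exact design criterion), the sketch below runs perturb & observe with a step size scaled by the observed power slope, so the perturbation shrinks near the maximum power point; the toy P-V curve, gain and step bounds are assumptions.

```python
# Perturb & observe MPPT with an adaptive perturbation step.
def pv_power(v):
    """Toy PV array P-V curve with a single maximum."""
    i = 8.0 * (1.0 - ((v - 5.0) / 30.0) ** 8)   # crude current model
    return max(v * i, 0.0)

def adaptive_po(v=20.0, step=0.5, iters=80, gain=0.05, step_min=0.02, step_max=1.0):
    p_prev = pv_power(v)
    direction = 1.0
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        dp = p - p_prev
        if dp < 0:
            direction = -direction                       # classic P&O sign rule
        # step magnitude follows |dP/dV|, so it shrinks near the MPP
        step = min(max(gain * abs(dp) / max(step, 1e-9), step_min), step_max)
        p_prev = p
    return v, p

v_mpp, p_mpp = adaptive_po()
print(round(v_mpp, 2), round(p_mpp, 1))   # operating point near the maximum power point
```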

  12. Entrepreneurial adaptation processes. An industry-geographic working model, illustrated by the example of Saarbergwerke AG

    International Nuclear Information System (INIS)

    Doerrenbaecher, P.

    1992-01-01

    The study has two goals: Solutions based in industrial geography and chronogeography are to be synthesized in order to develop a model of entrepreneurial adaptation processes. On the basis of this model, the development of Saarbergwerke AG in the first phase of the coal crisis (1957-1962) is reconstructed as an entrepreneurial adaptation process. (orig.) [de

  13. Hierarchical adaptive experimental design for Gaussian process emulators

    International Nuclear Information System (INIS)

    Busby, Daniel

    2009-01-01

    Large computer simulators usually have complex and nonlinear input-output functions. This complicated input-output relation can be analyzed by global sensitivity analysis; however, this usually requires massive Monte Carlo simulations. To effectively reduce the number of simulations, statistical techniques such as Gaussian process emulators can be adopted. The accuracy and reliability of these emulators strongly depend on the experimental design, where suitable evaluation points are selected. In this paper a new sequential design strategy called hierarchical adaptive design is proposed to obtain an accurate emulator using the least possible number of simulations. The hierarchical design proposed in this paper is tested on various standard analytic functions and on a challenging reservoir forecasting application. Comparisons with standard one-stage designs such as maximin Latin hypercube designs show that the hierarchical adaptive design produces a more accurate emulator with the same number of computer experiments. Moreover, a stopping criterion is proposed that allows only the number of simulations necessary to obtain the required approximation accuracy to be performed.
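
    A generic sequential-design sketch, not the hierarchical criterion of the record: fit a Gaussian process emulator to the current design and repeatedly add the candidate point with the largest predictive standard deviation. The simulator stand-in, kernel length scale and budget are illustrative.

```python
# Sequential design for a Gaussian process emulator driven by predictive uncertainty.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(x):                       # stand-in for the expensive computer code
    return np.sin(3 * x) + 0.5 * x

X = np.array([[0.1], [0.9]])            # initial design
y = simulator(X).ravel()
candidates = np.linspace(0.0, 1.0, 101).reshape(-1, 1)

for _ in range(8):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2),
                                  alpha=1e-8, optimizer=None).fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_new = candidates[[np.argmax(std)]]          # most uncertain candidate
    X = np.vstack([X, x_new])
    y = np.append(y, simulator(x_new).ravel())

print(np.round(np.sort(X.ravel()), 2))  # points spread where the emulator was least certain
```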

  14. Intelligent Adaptation Process for Case Based Systems

    International Nuclear Information System (INIS)

    Nassar, A.M.; Mohamed, A.H.; Mohamed, A.H.

    2014-01-01

    Case Based Reasoning (CBR) systems are among the important decision making systems applied in many fields all over the world. The effectiveness of any CBR system is based on the quality of the cases stored in the case library. Similar cases can be retrieved and adapted to produce the solution for a new problem. One of the main issues facing CBR systems is the difficulty of obtaining useful cases. The proposed system introduces a new approach that uses the genetic algorithm (GA) technique to automate constructing the cases in the case library. Also, it can select the best cases to be stored in the library for future use. The proposed system can thereby avoid the problems of uncertain and noisy cases. Besides, it can simplify the retrieval and adaptation processes. So, it can improve the performance of the CBR system. The suggested system can be applied to many real-time problems. It has been applied to diagnosing faults of wireless networks, diagnosing cancer diseases, and software debugging as cases of study. The proposed system has proved its performance in these fields

  15. Modelling and L1 Adaptive Control of pH in Bioethanol Enzymatic Process

    DEFF Research Database (Denmark)

    Prunescu, Remus Mihail; Blanke, Mogens; Sin, Gürkan

    2013-01-01

    for pH level regulation: one is a classical PI controller; the other an L1 adaptive output feedback controller. Model-based feed-forward terms are added to the controllers to enhance their performances. A new tuning method of the L1 adaptive controller is also proposed. Further, a new performance...... function is formulated and tailored to this type of processes and is used to monitor the performances of the process in closed loop. The L1 design is found to outperform the PI controller in all tests....

  16. ORGANIZATIONAL CULTURE AND LEADERSHIP STYLE: KEY FACTORS IN THE ORGANIZATIONAL ADAPTATION PROCESS

    Directory of Open Access Journals (Sweden)

    Ivona Vrdoljak Raguž

    2017-01-01

    Full Text Available This paper intends to theorize about how the specific leadership style affects the organizational adaptation in terms of its external environment through fostering the desired organizational culture. Adaptation success, the dimensions of organizational culture and the executive leadership role in fostering the desired corporate culture conducive to the organizational adaptation process are discussed in this paper. The objective of this paper is to highlight the top executive managers’ crucial role and their leadership style in creating such an internal climate within an organization that, in turn, encourages and strengthens the implementation of changes and adaptation to its environment. The limitations of this paper lie in the consideration that this subject matter is discussed only at a theoretical level and that its validity should be proved through practical application.

  17. Implementation of a real-time adaptive digital shaping for nuclear spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Regadío, Alberto, E-mail: aregadio@srg.aut.uah.es [Department of Computer Engineering, Space Research Group, Universidad de Alcalá, 28805 Alcalá de Henares (Spain); Electronic Technology Area, Instituto Nacional de Técnica Aeroespacial, 28850 Torrejón de Ardoz (Spain); Sánchez-Prieto, Sebastián, E-mail: ssanchez@srg.aut.uah.es [Department of Computer Engineering, Space Research Group, Universidad de Alcalá, 28805 Alcalá de Henares (Spain); Prieto, Manuel, E-mail: mprieto@srg.aut.uah.es [Department of Computer Engineering, Space Research Group, Universidad de Alcalá, 28805 Alcalá de Henares (Spain); Tabero, Jesús, E-mail: taberogj@inta.es [Electronic Technology Area, Instituto Nacional de Técnica Aeroespacial, 28850 Torrejón de Ardoz (Spain)

    2014-01-21

    This paper presents the structure, design and implementation of a new adaptive digital shaper for processing the pulses generated in nuclear particle detectors. The proposed adaptive algorithm has the capacity to automatically adjust the coefficients for shaping an input signal with a desired profile in real-time. Typical shapers such as triangular, trapezoidal or cusp-like ones can be generated, but more exotic unipolar shaping could also be performed. A practical prototype was designed, implemented and tested in a Field Programmable Gate Array (FPGA). Particular attention was paid to the amount of internal FPGA resources required and to the sampling rate, making the design as simple as possible in order to minimize power consumption. Lastly, its performance and capabilities were measured using simulations and a real benchmark.
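
    For context on what such shapers compute, the sketch below implements a standard recursive trapezoidal shaper (Jordanov-style) applied to a synthetic exponentially decaying detector pulse; the rise time, flat top and decay constant are illustrative, and the adaptive coefficient adjustment described in the record is not included.

```python
# Recursive trapezoidal pulse shaper: k sets the rise time (samples), m the
# flat top, and M compensates the exponential decay of the detector pulse.
import numpy as np

def trapezoidal_shaper(v, k, m, tau):
    l = k + m
    M = 1.0 / (np.exp(1.0 / tau) - 1.0)         # decay (pole-zero) correction
    def at(i):
        return v[i] if i >= 0 else 0.0
    d = np.array([at(n) - at(n - k) - at(n - l) + at(n - k - l) for n in range(len(v))])
    p = np.cumsum(d)                            # p[n] = p[n-1] + d[n]
    s = np.cumsum(p + M * d)                    # s[n] = s[n-1] + p[n] + M*d[n]
    return s / (k * (M + 1.0))                  # flat top ~= pulse amplitude

# synthetic detector pulse: unit step at n = 50 decaying with time constant tau
tau, nsamp = 200.0, 600
t = np.arange(nsamp)
pulse = np.where(t >= 50, np.exp(-(t - 50) / tau), 0.0)
shaped = trapezoidal_shaper(pulse, k=40, m=20, tau=tau)
print(round(shaped.max(), 3))                   # ~1.0, the recovered pulse amplitude
```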

  18. Implementation of a real-time adaptive digital shaping for nuclear spectroscopy

    International Nuclear Information System (INIS)

    Regadío, Alberto; Sánchez-Prieto, Sebastián; Prieto, Manuel; Tabero, Jesús

    2014-01-01

    This paper presents the structure, design and implementation of a new adaptive digital shaper for processing the pulses generated in nuclear particle detectors. The proposed adaptive algorithm has the capacity to automatically adjust the coefficients for shaping an input signal with a desired profile in real-time. Typical shapers such as triangular, trapezoidal or cusp-like ones can be generated, but more exotic unipolar shaping could also be performed. A practical prototype was designed, implemented and tested in a Field Programmable Gate Array (FPGA). Particular attention was paid to the amount of internal FPGA resources required and to the sampling rate, making the design as simple as possible in order to minimize power consumption. Lastly, its performance and capabilities were measured using simulations and a real benchmark

  19. Dumand-array data-acquisition system

    International Nuclear Information System (INIS)

    Brenner, A.E.; Theriot, D.; Dau, W.D.; Geelhood, B.D.; Harris, F.; Learned, J.G.; Stenger, V.; March, R.; Roos, C.; Shumard, E.

    1982-04-01

    An overall data acquisition approach for DUMAND is described. The scheme assumes one array-to-shore optical fiber transmission line for each string of the array. The basic event sampling period is approx. 13 μsec. All potentially interesting data is transmitted to shore where the major processing is performed

  20. A Novel Self-aligned and Maskless Process for Formation of Highly Uniform Arrays of Nanoholes and Nanopillars

    Directory of Open Access Journals (Sweden)

    Wu Wei

    2008-01-01

    Full Text Available Fabrication of a large area of periodic structures with deep sub-wavelength features is required in many applications such as solar cells, photonic crystals, and artificial kidneys. We present a low-cost and high-throughput process for realization of 2D arrays of deep sub-wavelength features using a self-assembled monolayer of hexagonally close packed (HCP) silica and polystyrene microspheres. This method utilizes the microspheres as super-lenses to fabricate nanohole and pillar arrays over large areas on conventional positive and negative photoresist, and with a high aspect ratio. The period and diameter of the holes and pillars formed with this technique can be controlled precisely and independently. We demonstrate that the method can produce HCP arrays of holes of sub-250 nm size using a conventional photolithography system with a broadband UV source centered at 400 nm. We also present our 3D FDTD modeling, which shows a good agreement with the experimental results.

  1. In situ synthesis of protein arrays.

    Science.gov (United States)

    He, Mingyue; Stoevesandt, Oda; Taussig, Michael J

    2008-02-01

    In situ or on-chip protein array methods use cell free expression systems to produce proteins directly onto an immobilising surface from co-distributed or pre-arrayed DNA or RNA, enabling protein arrays to be created on demand. These methods address three issues in protein array technology: (i) efficient protein expression and availability, (ii) functional protein immobilisation and purification in a single step and (iii) protein on-chip stability over time. By simultaneously expressing and immobilising many proteins in parallel on the chip surface, the laborious and often costly processes of DNA cloning, expression and separate protein purification are avoided. Recently employed methods reviewed are PISA (protein in situ array) and NAPPA (nucleic acid programmable protein array) from DNA and puromycin-mediated immobilisation from mRNA.

  2. Automatic Defect Detection for TFT-LCD Array Process Using Quasiconformal Kernel Support Vector Data Description

    Directory of Open Access Journals (Sweden)

    Yi-Hung Liu

    2011-09-01

    Full Text Available Defect detection has been considered an efficient way to increase the yield rate of panels in thin film transistor liquid crystal display (TFT-LCD) manufacturing. In this study we focus on the array process since it is the first and key process in TFT-LCD manufacturing. Various defects occur in the array process, and some of them could cause great damage to the LCD panels. Thus, how to design a method that can robustly detect defects from the images captured from the surface of LCD panels has become crucial. Previously, support vector data description (SVDD) has been successfully applied to LCD defect detection. However, its generalization performance is limited. In this paper, we propose a novel one-class machine learning method, called quasiconformal kernel SVDD (QK-SVDD) to address this issue. The QK-SVDD can significantly improve generalization performance of the traditional SVDD by introducing the quasiconformal transformation into a predefined kernel. Experimental results, carried out on real LCD images provided by an LCD manufacturer in Taiwan, indicate that the proposed QK-SVDD not only obtains a high defect detection rate of 96%, but also greatly improves generalization performance of SVDD. The improvement is shown to be over 30%. In addition, results also show that the QK-SVDD defect detector is able to accomplish the task of defect detection on an LCD image within 60 ms.
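
    QK-SVDD is not available in standard libraries, so as a hedged stand-in the sketch below illustrates the one-class detection setup the paper describes (train on defect-free patches only, flag outliers as defect candidates) using scikit-learn's OneClassSVM with an RBF kernel, which is closely related to SVDD. The feature vectors and thresholds are synthetic placeholders, not the authors' features or kernel.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Hypothetical feature vectors extracted from defect-free LCD image patches
# (e.g. texture statistics); shapes and values are placeholders.
rng = np.random.default_rng(0)
normal_patches = rng.normal(loc=0.0, scale=1.0, size=(500, 16))

# A one-class SVM with an RBF kernel, closely related to SVDD, learns a
# boundary around the defect-free class only.
detector = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
detector.fit(normal_patches)

# At inspection time, patches predicted as -1 are flagged as defect candidates.
test_patches = np.vstack([
    rng.normal(0.0, 1.0, size=(10, 16)),   # defect-free-like patches
    rng.normal(4.0, 1.0, size=(3, 16)),    # simulated defective patches
])
labels = detector.predict(test_patches)    # +1 = normal, -1 = outlier
print("defect candidates:", np.flatnonzero(labels == -1))
```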

  3. Enterprise System Adaptation: a Combination of Institutional Structures and Sensemaking Processes

    DEFF Research Database (Denmark)

    Svejvig, Per; Jensen, Tina Blegind

    2009-01-01

    In this paper we set out to investigate how an Enterprise System (ES) adaptation in a Scandinavian high-tech organization, SCANDI, can be understood using a combination of institutional and sensemaking theory. Institutional theory is useful in providing an account for the role that the social and historical structures play in ES adaptations, and sensemaking can help us investigate how organizational members make sense of and enact ES in their local context. Based on an analytical framework, where we combine institutional theory and sensemaking theory to provide rich insights into ES adaptation, we show: 1) how changing institutional structures provide a shifting context for the way users make sense of and enact ES, 2) how users' sensemaking processes of the ES are played out in practice, and 3) how sensemaking reinforces institutional structures.

  4. Phased arrays techniques and split spectrum processing for inspection of thick titanium casting components

    International Nuclear Information System (INIS)

    Banchet, J.; Chahbaz, A.; Sicard, R.; Zellouf, D.E.

    2003-01-01

    In aircraft structures, titanium parts and engine members are critical structural components, and their inspection is crucial. However, these structures are very difficult to inspect ultrasonically because of their large grain structure, which increases noise drastically. In this work, phased array inspection setups were developed to detect small defects such as simulated inclusions and porosity contained in thick titanium casting blocks, which are frequently used in the aerospace industry. A Cut Spectrum Processing (CSP)-based algorithm was then implemented on the acquired data by employing a set of parallel bandpass filters with different center frequencies. This process led to a substantial improvement in the signal-to-noise ratio and thus in detectability.
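
    As a rough illustration of the filter-bank idea described above (parallel bandpass filters with different center frequencies whose outputs are recombined to suppress grain noise), here is a minimal sketch using the common minimum-amplitude recombination rule; the sampling rate, sub-band centers and bandwidths are assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def split_spectrum_min(signal, fs, centers_hz, bw_hz, order=4):
    """Filter the A-scan into several sub-bands and recombine them with the
    minimum-amplitude rule: flaw echoes persist in all bands, grain noise tends not to."""
    sub_bands = []
    for fc in centers_hz:
        lo = (fc - bw_hz / 2) / (fs / 2)
        hi = (fc + bw_hz / 2) / (fs / 2)
        b, a = butter(order, [lo, hi], btype="band")
        sub_bands.append(filtfilt(b, a, signal))
    return np.abs(np.array(sub_bands)).min(axis=0)

# Illustrative numbers only: 100 MHz sampling, five sub-bands around 5 MHz.
fs = 100e6
t = np.arange(2048) / fs
rng = np.random.default_rng(1)
ascan = rng.normal(0.0, 0.3, t.size)                                          # grain-like noise
ascan += np.sin(2 * np.pi * 5e6 * t) * np.exp(-((t - 10e-6) / 0.5e-6) ** 2)   # flaw echo
out = split_spectrum_min(ascan, fs, centers_hz=[3e6, 4e6, 5e6, 6e6, 7e6], bw_hz=1.5e6)
```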

  5. Catalyzing alignment processes - Impacts of local adaptations of EMS standards in Thailand

    DEFF Research Database (Denmark)

    Jørgensen, Ulrik; Lauridsen, Erik Hagelskjær

    2004-01-01

    ISO14000 as an EMS can be followed as a travelling standard that has to be adapted and domesticated in the local context, where it is applied. By following the processes of this adaptation and how it changes the coherence between the companies, the regulators and other stakeholders, the role of the standard is identified. The article is based on a number of case studies of implementation of EMS in Thai companies.

  6. Research in adaptive management: working relations and the research process.

    Science.gov (United States)

    Amanda C. Graham; Linda E. Kruger

    2002-01-01

    This report analyzes how a small group of Forest Service scientists participating in efforts to implement adaptive management approach working relations, and how they understand and apply the research process. Nine scientists completed a questionnaire to assess their preferred mode of thinking (the Herrmann Brain Dominance Instrument), engaged in a facilitated...

  7. Improving performance of natural language processing part-of-speech tagging on clinical narratives through domain adaptation.

    Science.gov (United States)

    Ferraro, Jeffrey P; Daumé, Hal; Duvall, Scott L; Chapman, Wendy W; Harkema, Henk; Haug, Peter J

    2013-01-01

    Natural language processing (NLP) tasks are commonly decomposed into subtasks, chained together to form processing pipelines. The residual error produced in these subtasks propagates, adversely affecting the end objectives. Limited availability of annotated clinical data remains a barrier to reaching state-of-the-art operating characteristics using statistically based NLP tools in the clinical domain. Here we explore the unique linguistic constructions of clinical texts and demonstrate the loss in operating characteristics when out-of-the-box part-of-speech (POS) tagging tools are applied to the clinical domain. We test a domain adaptation approach integrating a novel lexical-generation probability rule used in a transformation-based learner to boost POS performance on clinical narratives. Two target corpora from independent healthcare institutions were constructed from high frequency clinical narratives. Four leading POS taggers with their out-of-the-box models trained from general English and biomedical abstracts were evaluated against these clinical corpora. A high performing domain adaptation method, Easy Adapt, was compared to our newly proposed method ClinAdapt. The evaluated POS taggers drop in accuracy by 8.5-15% when tested on clinical narratives. The highest performing tagger reports an accuracy of 88.6%. Domain adaptation with Easy Adapt reports accuracies of 88.3-91.0% on clinical texts. ClinAdapt reports 93.2-93.9%. ClinAdapt successfully boosts POS tagging performance through domain adaptation requiring a modest amount of annotated clinical data. Improving the performance of critical NLP subtasks is expected to reduce pipeline error propagation leading to better overall results on complex processing tasks.
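
    The abstract names Easy Adapt (Daumé-style feature augmentation) as the comparison method; the details of ClinAdapt's lexical-generation rule are not given here. As a hedged illustration of the feature-augmentation idea only, the sketch below copies each token feature into a shared version and a domain-specific version so one classifier can learn both general and clinical weights. The feature names are invented for the example.

```python
def easy_adapt_features(token_feats, domain):
    """Daume-style feature augmentation: every feature is copied into a shared
    version and a domain-specific version, so a single tagger/classifier can
    learn both general and domain-specific weights."""
    augmented = {}
    for name, value in token_feats.items():
        augmented[f"shared::{name}"] = value
        augmented[f"{domain}::{name}"] = value
    return augmented

# Invented token features for illustration; any sparse linear model
# (e.g. via sklearn's DictVectorizer) can consume the augmented dicts.
general = easy_adapt_features({"word.lower": "taking", "suffix3": "ing"}, "general")
clinical = easy_adapt_features({"word.lower": "mg", "prev.word": "40"}, "clinical")
print(sorted(clinical))
```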

  8. Array capabilities and future arrays

    International Nuclear Information System (INIS)

    Radford, D.

    1993-01-01

    Early results from the new third-generation instruments GAMMASPHERE and EUROGAM are confirming the expectation that such arrays will have a revolutionary effect on the field of high-spin nuclear structure. When completed, GAMMASPHERE will have a resolving power an order of magnitude greater than that of the best second-generation arrays. When combined with other instruments such as particle-detector arrays and fragment mass analysers, the capabilities of the arrays for the study of more exotic nuclei will be further enhanced. In order to better understand the limitations of these instruments, and to design improved future detector systems, it is important to have some intelligible and reliable calculation for the relative resolving power of different instrument designs. The derivation of such a figure of merit will be briefly presented, and the relative sensitivities of arrays currently proposed or under construction presented. The design of TRIGAM, a new third-generation array proposed for Chalk River, will also be discussed. It is instructive to consider how far arrays of Compton-suppressed Ge detectors could be taken. For example, it will be shown that an idealised 'perfect' third-generation array of 1000 detectors has a sensitivity an order of magnitude higher again than that of GAMMASPHERE. Less conventional options for new arrays will also be explored.

  9. Solution-Processed Wide-Bandgap Organic Semiconductor Nanostructures Arrays for Nonvolatile Organic Field-Effect Transistor Memory.

    Science.gov (United States)

    Li, Wen; Guo, Fengning; Ling, Haifeng; Liu, Hui; Yi, Mingdong; Zhang, Peng; Wang, Wenjun; Xie, Linghai; Huang, Wei

    2018-01-01

    In this paper, the development of organic field-effect transistor (OFET) memory device based on isolated and ordered nanostructures (NSs) arrays of wide-bandgap (WBG) small-molecule organic semiconductor material [2-(9-(4-(octyloxy)phenyl)-9H-fluoren-2-yl)thiophene]3 (WG3) is reported. The WG3 NSs are prepared from phase separation by spin-coating blend solutions of WG3/trimethylolpropane (TMP), and then introduced as charge storage elements for nonvolatile OFET memory devices. Compared to the OFET memory device with smooth WG3 film, the device based on WG3 NSs arrays exhibits significant improvements in memory performance including larger memory window (≈45 V), faster switching speed (≈1 s), stable retention capability (>10^4 s), and reliable switching properties. A quantitative study of the WG3 NSs morphology reveals that enhanced memory performance is attributed to the improved charge trapping/charge-exciton annihilation efficiency induced by increased contact area between the WG3 NSs and pentacene layer. This versatile solution-processing approach to preparing WG3 NSs arrays as charge trapping sites allows for fabrication of high-performance nonvolatile OFET memory devices, which could be applicable to a wide range of WBG organic semiconductor materials. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Cantilever arrays with self-aligned nanotips of uniform height

    International Nuclear Information System (INIS)

    Koelmans, W W; Peters, T; Berenschot, E; De Boer, M J; Siekman, M H; Abelmann, L

    2012-01-01

    Cantilever arrays are employed to increase the throughput of imaging and manipulation at the nanoscale. We present a fabrication process to construct cantilever arrays with nanotips that show a uniform tip–sample distance. Such uniformity is crucial, because in many applications the cantilevers do not feature individual tip–sample spacing control. Uniform cantilever arrays lead to very similar tip–sample interaction within an array, enable non-contact modes for arrays and give better control over the load force in contact modes. The developed process flow uses a single mask to define both tips and cantilevers. An additional mask is required for the back side etch. The tips are self-aligned in the convex corner at the free end of each cantilever. Although we use standard optical contact lithography, we show that the convex corner can be sharpened to a nanometre scale radius by an isotropic underetch step. The process is robust and wafer-scale. The resonance frequencies of the cantilevers within an array are shown to be highly uniform with a relative standard error of 0.26% or lower. The tip–sample distance within an array of up to ten cantilevers is measured to have a standard error around 10 nm. An imaging demonstration using the AFM shows that all cantilevers in the array have a sharp tip with a radius below 10 nm. The process flow for the cantilever arrays finds application in probe-based nanolithography, probe-based data storage, nanomanufacturing and parallel scanning probe microscopy. (paper)

  11. Adaptive Rationality, Adaptive Behavior and Institutions

    Directory of Open Access Journals (Sweden)

    Volchik Vyacheslav, V.

    2015-12-01

    Full Text Available The economic literature focused on understanding decision-making and choice processes reveals a vast collection of approaches to human rationality. Theorists’ attention has moved from absolutely rational, utility-maximizing individuals to boundedly rational and adaptive ones. A number of economists have criticized the concepts of adaptive rationality and adaptive behavior. One of the recent trends in the economic literature is to consider humans irrational. This paper offers an approach which examines adaptive behavior in the context of existing institutions and constantly changing institutional environment. It is assumed that adaptive behavior is a process of evolutionary adjustment to fundamental uncertainty. We emphasize the importance of actors’ engagement in trial and error learning, since if they are involved in this process, they obtain experience and are able to adapt to existing and new institutions. The paper aims at identifying relevant institutions, adaptive mechanisms, informal working rules and practices that influence actors’ behavior in the field of Higher Education in Russia (Rostov Region education services market has been taken as an example. The paper emphasizes the application of qualitative interpretative methods (interviews and discourse analysis in examining actors’ behavior.

  12. Cognitive and social processes predicting partner psychological adaptation to early stage breast cancer.

    Science.gov (United States)

    Manne, Sharon; Ostroff, Jamie; Fox, Kevin; Grana, Generosa; Winkel, Gary

    2009-02-01

    The diagnosis and subsequent treatment for early stage breast cancer are stressful for partners. Little is known about the role of cognitive and social processes predicting the longitudinal course of partners' psychosocial adaptation. This study evaluated the role of cognitive and social processing in partner psychological adaptation to early stage breast cancer, evaluating both main and moderator effect models. Moderating effects of meaning making, acceptance, and positive reappraisal on the predictive association of searching for meaning, emotional processing, and emotional expression with partner psychological distress were examined. Partners of women diagnosed with early stage breast cancer were evaluated shortly after the ill partner's diagnosis (N=253), 9 (N=167), and 18 months (N=149) later. Partners completed measures of emotional expression, emotional processing, acceptance, meaning making, and general and cancer-specific distress at all time points. Lower satisfaction with partner support predicted greater global distress, and greater use of positive reappraisal was associated with greater distress. The predicted moderating effect of found meaning on the association between the search for meaning and cancer-specific distress was confirmed, as were similar moderating effects of positive reappraisal on the association between emotional expression and global distress and of acceptance on the association between emotional processing and cancer-specific distress. Results indicate that several cognitive-social processes directly predict partner distress. However, moderator effect models, in which the effects of partners' processing depend upon whether these efforts result in changes in perceptions of the cancer experience, may add to the understanding of partners' adaptation to cancer.

  13. Contextualizing Individual Competencies for Managing the Corporate Social Responsibility Adaptation Process

    NARCIS (Netherlands)

    Osagie, E.R.; Wesselink, R.; Blok, V.; Mulder, M.

    2016-01-01

    Companies committed to corporate social responsibility (CSR) should ensure that their managers possess the appropriate competencies to effectively manage the CSR adaptation process. The literature provides insights into the individual competencies these managers need but fails to prioritize them and

  14. Configurable multiplier modules for an adaptive computing system

    Directory of Open Access Journals (Sweden)

    O. A. Pfänder

    2006-01-01

    Full Text Available The importance of reconfigurable hardware is increasing steadily. For example, the primary approach of using adaptive systems based on programmable gate arrays and configurable routing resources has gone mainstream and high-performance programmable logic devices are rivaling traditional application-specific hardwired integrated circuits. Also, the idea of moving from the 2-D domain into a 3-D design which stacks several active layers above each other is gaining momentum in research and industry, to cope with the demand for smaller devices with a higher scale of integration. However, optimized arithmetic blocks in coarse-grain reconfigurable arrays as well as field-programmable architectures still play an important role. In countless digital systems and signal processing applications, multiplication is one of the critical challenges, where in many cases a trade-off between area usage and data throughput has to be made. But the a priori choice of word-length and number representation can also be replaced by a dynamic choice at run-time, in order to improve flexibility, area efficiency and the level of parallelism in computation. In this contribution, we look at an adaptive computing system called 3-D-SoftChip to point out what parameters are crucial to implement flexible multiplier blocks into optimized elements for accelerated processing. The 3-D-SoftChip architecture uses a novel approach to 3-dimensional integration based on flip-chip bonding with indium bumps. The modular construction, the introduction of interfaces to realize the exchange of intermediate data, and the reconfigurable sign handling approach will be explained, as well as a beneficial way to handle and distribute the numerous required control signals.
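
    To make the word-length/number-representation trade-off mentioned above concrete, here is a small software sketch (not the 3-D-SoftChip design) of a multiplier whose word length and fraction length are chosen at run time; all parameter values are illustrative.

```python
def fixed_point_multiply(a, b, word_len=8, frac_bits=4, signed=True):
    """Multiply two real numbers in a run-time configurable fixed-point format
    (word_len total bits, frac_bits fractional bits), with saturation."""
    scale = 1 << frac_bits
    if signed:
        lo, hi = -(1 << (word_len - 1)), (1 << (word_len - 1)) - 1
    else:
        lo, hi = 0, (1 << word_len) - 1

    def quantize(x):
        return max(lo, min(hi, int(round(x * scale))))

    product = (quantize(a) * quantize(b)) >> frac_bits   # rescale to frac_bits
    product = max(lo, min(hi, product))                  # saturate to word length
    return product / scale

# The same operands under two run-time configurations: wider words, less error.
print(fixed_point_multiply(1.57, 2.3, word_len=8, frac_bits=4))    # coarse
print(fixed_point_multiply(1.57, 2.3, word_len=16, frac_bits=10))  # fine
```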

  15. Uniform Circular Antenna Array Applications in Coded DS-CDMA Mobile Communication Systems

    National Research Council Canada - National Science Library

    Seow, Tian

    2003-01-01

    ...) has greatly increased. This thesis examines the use of an equally spaced circular adaptive antenna array at the mobile station for a typical coded direct sequence code division multiple access (DS-CDMA...
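
    The abstract is truncated, so as a hedged illustration of the array geometry it names, the sketch below computes the narrowband steering vector of a uniform circular array and a delay-and-sum beampattern; element count, radius and wavelength are arbitrary example values.

```python
import numpy as np

def uca_steering_vector(n_elements, radius, wavelength, azimuth_rad):
    """Narrowband steering vector of a uniform circular array for a source in
    the horizontal plane (elements equally spaced on a circle)."""
    k = 2 * np.pi / wavelength
    element_angles = 2 * np.pi * np.arange(n_elements) / n_elements
    return np.exp(1j * k * radius * np.cos(azimuth_rad - element_angles))

# Delay-and-sum beampattern of an 8-element circle steered towards 60 degrees.
n, radius, wavelength = 8, 0.5, 1.0
w = uca_steering_vector(n, radius, wavelength, np.deg2rad(60)) / n
scan = np.deg2rad(np.arange(360))
pattern = np.array([abs(np.vdot(w, uca_steering_vector(n, radius, wavelength, a)))
                    for a in scan])
print("peak response at", np.rad2deg(scan[pattern.argmax()]), "degrees")
```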

  16. Developing Quality Assurance Processes for Image-Guided Adaptive Radiation Therapy

    International Nuclear Information System (INIS)

    Yan Di

    2008-01-01

    Quality assurance has long been implemented in radiation treatment as systematic actions necessary to provide adequate confidence that the radiation oncology service will satisfy the given requirements for quality care. The existing reports from the American Association of Physicists in Medicine Task Groups 40 and 53 have provided highly detailed QA guidelines for conventional radiotherapy and treatment planning. However, advanced treatment processes recently developed with emerging high technology have introduced new QA requirements that have not been addressed previously in the conventional QA program. Therefore, it is necessary to expand the existing QA guidelines to also include new considerations. Image-guided adaptive radiation therapy (IGART) is a closed-loop treatment process that is designed to include the individual treatment information, such as patient-specific anatomic variation and delivered dose assessed during the therapy course in treatment evaluation and planning optimization. Clinical implementation of IGART requires high levels of automation in image acquisition, registration, segmentation, treatment dose construction, and adaptive planning optimization, which brings new challenges to the conventional QA program. In this article, clinical QA procedures for IGART are outlined. The discussion focuses on the dynamic or four-dimensional aspects of the IGART process, avoiding overlap with conventional QA guidelines

  17. Application of Seismic Array Processing to Tsunami Early Warning

    Science.gov (United States)

    An, C.; Meng, L.

    2015-12-01

    Tsunami wave predictions of the current tsunami warning systems rely on accurate earthquake source inversions of wave height data. They are of limited effectiveness for the near-field areas since the tsunami waves arrive before data are collected. Recent seismic and tsunami disasters have revealed the need for early warning to protect near-source coastal populations. In this work we developed the basis for a tsunami warning system based on rapid earthquake source characterisation through regional seismic array back-projections. We explored rapid earthquake source imaging using onshore dense seismic arrays located at regional distances on the order of 1000 km, which provides faster source images than conventional teleseismic back-projections. We implement this method in a simulated real-time environment, and analysed the 2011 Tohoku earthquake rupture with two clusters of Hi-net stations in Kyushu and Northern Hokkaido, and the 2014 Iquique event with the Earthscope USArray Transportable Array. The results yield reasonable estimates of rupture area, which is approximated by an ellipse and leads to the construction of simple slip models based on empirical scaling of the rupture area, seismic moment and average slip. The slip model is then used as the input of the tsunami simulation package COMCOT to predict the tsunami waves. In the example of the Tohoku event, the earthquake source model can be acquired within 6 minutes from the start of rupture and the simulation of tsunami waves takes less than 2 min, which could facilitate a timely tsunami warning. The predicted arrival time and wave amplitude reasonably fit observations. Based on this method, we propose to develop an automatic warning mechanism that provides rapid near-field warning for areas of high tsunami risk. The initial focus will be Japan, Pacific Northwest and Alaska, where dense seismic networks with the capability of real-time data telemetry and open data accessibility, such as the Japanese HiNet (>800
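
    The processing details are not given in the abstract, so the following is only a toy sketch of the core back-projection step: stacking array traces with travel-time delays over a grid of candidate source points and picking the most coherent point. The constant-velocity travel times, station layout and waveforms are synthetic assumptions.

```python
import numpy as np

def back_project(waveforms, fs, station_xy, grid_xy, velocity_km_s=8.0):
    """Delay-and-sum back-projection: for each candidate grid point, align the
    station traces by predicted travel time and stack their energy."""
    n_samples = waveforms.shape[1]
    power = np.zeros(len(grid_xy))
    for g, gp in enumerate(grid_xy):
        delays = np.linalg.norm(station_xy - gp, axis=1) / velocity_km_s   # seconds
        shifts = np.round((delays - delays.min()) * fs).astype(int)
        stack = np.zeros(n_samples)
        for trace, s in zip(waveforms, shifts):
            stack[:n_samples - s] += trace[s:]        # advance each trace by its delay
        power[g] = np.sum(stack ** 2)
    return power                                      # maximum marks the likely source

# Toy example: 5 stations, a 100 km x 100 km grid, impulsive synthetic arrivals.
rng = np.random.default_rng(2)
fs = 20.0
station_xy = rng.uniform(0, 100, size=(5, 2))         # km
source = np.array([40.0, 60.0])
arrivals = np.linalg.norm(station_xy - source, axis=1) / 8.0
waveforms = rng.normal(0, 0.1, size=(5, 400))
for i, ta in enumerate(arrivals):
    waveforms[i, int(ta * fs)] += 5.0
grid_xy = np.array([[x, y] for x in range(0, 101, 10) for y in range(0, 101, 10)])
best = grid_xy[np.argmax(back_project(waveforms, fs, station_xy, grid_xy))]
print("back-projected source near", best)
```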

  18. Modelling clustering of vertically aligned carbon nanotube arrays.

    Science.gov (United States)

    Schaber, Clemens F; Filippov, Alexander E; Heinlein, Thorsten; Schneider, Jörg J; Gorb, Stanislav N

    2015-08-06

    Previous research demonstrated that arrays of vertically aligned carbon nanotubes (VACNTs) exhibit strong frictional properties. Experiments indicated a strong decrease of the friction coefficient from the first to the second sliding cycle in repetitive measurements on the same VACNT spot, but stable values in consecutive cycles. VACNTs form clusters under shear applied during friction tests, and self-organization stabilizes the mechanical properties of the arrays. With increasing load in the range between 300 µN and 4 mN applied normally to the array surface during friction tests the size of the clusters increases, while the coefficient of friction decreases. To better understand the experimentally obtained results, we formulated and numerically studied a minimalistic model, which reproduces the main features of the system with a minimum of adjustable parameters. We calculate the van der Waals forces between the spherical friction probe and bunches of the arrays using the well-known Morse potential function to predict the number of clusters, their size, instantaneous and mean friction forces and the behaviour of the VACNTs during consecutive sliding cycles and at different normal loads. The data obtained by the model calculations coincide very well with the experimental data and can help in adapting VACNT arrays for biomimetic applications.
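
    For reference, the Morse potential named above and the force derived from it can be written as a short sketch; the well depth, width and equilibrium distance below are placeholder values, not the parameters fitted in the paper.

```python
import numpy as np

def morse_potential(r, d_e=1.0, a=1.5, r_e=1.0):
    """Morse potential U(r) = d_e * (1 - exp(-a*(r - r_e)))**2 - d_e."""
    return d_e * (1.0 - np.exp(-a * (r - r_e))) ** 2 - d_e

def morse_force(r, d_e=1.0, a=1.5, r_e=1.0):
    """Force F(r) = -dU/dr: attractive for r > r_e, repulsive for r < r_e."""
    exp_term = np.exp(-a * (r - r_e))
    return -2.0 * d_e * a * exp_term * (1.0 - exp_term)

r = np.linspace(0.5, 3.0, 6)     # probe-to-bunch separations (arbitrary units)
print(morse_potential(r))
print(morse_force(r))
```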

  19. Fully Integrated Linear Single Photon Avalanche Diode (SPAD) Array with Parallel Readout Circuit in a Standard 180 nm CMOS Process

    Science.gov (United States)

    Isaak, S.; Bull, S.; Pitter, M. C.; Harrison, Ian.

    2011-05-01

    This paper reports on the development of a SPAD device, fabricated in a UMC 0.18 μm CMOS process, and its subsequent use in an actively quenched single photon counting imaging system. A low-doped p- guard ring (t-well layer) encircles the active area to prevent premature reverse breakdown. The array is a 16×1 parallel-output SPAD array, which comprises an actively quenched SPAD circuit in each pixel, with the current value set by an external resistor RRef = 300 kΩ. The SPAD I-V response showed that ID increases slowly until VBD is reached at an excess bias voltage Ve = 11.03 V, and then increases rapidly due to avalanche multiplication. Digital circuitry to control the SPAD array and perform the necessary data processing was designed in VHDL and implemented on an FPGA chip. At room temperature, the dark count was found to be approximately 13 kHz for most of the 16 SPAD pixels, and the dead time was estimated to be 40 ns.

  20. Adaptability and specificity of inhibition processes in distractor-induced blindness.

    Science.gov (United States)

    Winther, Gesche N; Niedeggen, Michael

    2017-12-01

    In a rapid serial visual presentation task, inhibition processes cumulatively impair processing of a target possessing distractor properties. This phenomenon, known as distractor-induced blindness, has thus far only been elicited using dynamic visual features, such as motion and orientation changes. In three ERP experiments, we used a visual object feature, color, to test for the adaptability and specificity of the effect. In Experiment I, participants responded to a color change (target) in the periphery whose onset was signaled by a central cue. Presentation of irrelevant color changes prior to the cue (distractors) led to reduced target detection, accompanied by a frontal ERP negativity that increased with increasing number of distractors, similar to the effects previously found for dynamic targets. This suggests that distractor-induced blindness is adaptable to color features. In Experiment II, the target consisted of coherent motion contrasting the color distractors. Correlates of distractor-induced blindness were found neither in the behavioral nor in the ERP data, indicating a feature specificity of the process. Experiment III confirmed the strict distinction between congruent and incongruent distractors: A single color distractor was embedded in a stream of motion distractors with the target consisting of a coherent motion. While behavioral performance was affected by the distractors, the color distractor did not elicit a frontal negativity. The experiments show that distractor-induced blindness is also triggered by visual stimuli predominantly processed in the ventral stream. The strict specificity of the central inhibition process also applies to these stimulus features. © 2017 Society for Psychophysiological Research.

  1. Fabrication of CoZn alloy nanowire arrays: Significant improvement in magnetic properties by annealing process

    International Nuclear Information System (INIS)

    Koohbor, M.; Soltanian, S.; Najafi, M.; Servati, P.

    2012-01-01

    Highlights: ► Increasing the Zn concentration changes the structure of NWs from hcp to amorphous. ► Increasing the Zn concentration significantly reduces the Hc value of NWs. ► Magnetic properties of CoZn NWs can be significantly enhanced by appropriate annealing. ► The pH of electrolyte has no significant effect on the properties of the NW arrays. ► Deposition frequency has considerable effects on the magnetic properties of NWs. - Abstract: Highly ordered arrays of Co1−xZnx (0 ≤ x ≤ 0.74) nanowires (NWs) with diameters of ∼35 nm and high length-to-diameter ratios (up to 150) were fabricated by co-electrodeposition of Co and Zn into pores of anodized aluminum oxide (AAO) templates. The Co and Zn contents of the NWs were adjusted by varying the ratio of Zn and Co ion concentrations in the electrolyte. The effects of the Zn content, electrodeposition conditions (frequency and pH) and annealing on the structural and magnetic properties (e.g., coercivity (Hc) and squareness (Sq)) of NW arrays were investigated using X-ray diffraction (XRD), scanning electron microscopy, electron diffraction, and alternating gradient force magnetometer (AGFM). XRD patterns reveal that an increase in the concentration of Zn ions of the electrolyte forces the hcp crystal structure of Co NWs to change into an amorphous phase, resulting in a significant reduction in Hc. It was found that the magnetic properties of NWs can be significantly improved by an appropriate annealing process. The highest values for Hc (2050 Oe) and Sq (0.98) were obtained for NWs electrodeposited using 0.95/0.05 Co:Zn concentrations at 200 Hz and annealed at 575 °C. While the pH of electrolyte is found to have no significant effect on the structural and magnetic properties of the NW arrays, the electrodeposition frequency has considerable effects on the magnetic properties of the NW arrays. The changes in magnetic property of NWs are rooted in a competition between shape anisotropy and

  2. Road Sign Recognition with Fuzzy Adaptive Pre-Processing Models

    Science.gov (United States)

    Lin, Chien-Chuan; Wang, Ming-Shi

    2012-01-01

    A road sign recognition system based on adaptive image pre-processing models using two fuzzy inference schemes has been proposed. The first fuzzy inference scheme is to check the changes of the light illumination and rich red color of a frame image by the checking areas. The other is to check the variance of the vehicle's speed and steering wheel angle to select an adaptive size and position of the detection area. The Adaboost classifier was employed to detect the road sign candidates from an image and the support vector machine technique was employed to recognize the content of the road sign candidates. The prohibitory and warning road traffic signs are the processing targets in this research. The detection rate in the detection phase is 97.42%. In the recognition phase, the recognition rate is 93.04%. The total accuracy rate of the system is 92.47%. For video sequences, the best accuracy rate is 90.54%, and the average accuracy rate is 80.17%. The average computing time is 51.86 milliseconds per frame. The proposed system not only overcomes the problems of low illumination and rich red color around the road sign but also offers high detection rates and high computing performance. PMID:22778650

  3. Microlens array processor with programmable weight mask and direct optical input

    Science.gov (United States)

    Schmid, Volker R.; Lueder, Ernst H.; Bader, Gerhard; Maier, Gert; Siegordner, Jochen

    1999-03-01

    We present an optical feature extraction system with a microlens array processor. The system is suitable for online implementation of a variety of transforms such as the Walsh transform and DCT. Operating with incoherent light, our processor accepts direct optical input. Employing a sandwich-like architecture, we obtain a very compact design of the optical system. The key elements of the microlens array processor are a square array of 15 × 15 spherical microlenses on acrylic substrate and a spatial light modulator as transmissive mask. The light distribution behind the mask is imaged onto the pixels of a customized a-Si image sensor with adjustable gain. We obtain one output sample for each microlens image and its corresponding weight mask area as summation of the transmitted intensity within one sensor pixel. The resulting architecture is very compact and robust like a conventional camera lens while incorporating a high degree of parallelism. We successfully demonstrate a Walsh transform into the spatial frequency domain as well as the implementation of a discrete cosine transform with digitized gray values. We provide results showing the transformation performance for both synthetic image patterns and images of natural texture samples. The extracted frequency features are suitable for neural classification of the input image. Other transforms and correlations can be implemented in real-time allowing adaptive optical signal processing.
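
    As a software analogue of the processor described above, where each microlens image is weighted by a mask and summed into one output sample, the sketch below tiles an image into blocks and keeps a few low-order 2-D DCT coefficients per block as texture features; block size and coefficient count are assumptions for illustration.

```python
import numpy as np
from scipy.fft import dctn

def block_dct_features(image, block=15, n_coeffs=6):
    """Tile the image into blocks, transform each block with a 2-D DCT, and
    keep a few low-order coefficients per block as texture features."""
    h, w = image.shape
    feats = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            coeffs = dctn(image[r:r + block, c:c + block], norm="ortho")
            feats.append(coeffs[:n_coeffs, :n_coeffs].ravel())
    return np.array(feats)

rng = np.random.default_rng(3)
texture = rng.random((60, 60))          # stand-in for a camera image
features = block_dct_features(texture)  # one feature vector per "microlens" block
print(features.shape)
```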

  4. Beam pattern improvement by compensating array nonuniformities in a guided wave phased array

    International Nuclear Information System (INIS)

    Kwon, Hyu-Sang; Lee, Seung-Seok; Kim, Jin-Yeon

    2013-01-01

    This paper presents a simple data processing algorithm which can improve the performance of a uniform circular array based on guided wave transducers. The algorithm, being intended to be used with the delay-and-sum beamformer, effectively eliminates the effects of nonuniformities that can significantly degrade the beam pattern. Nonuniformities can arise intrinsically from the array geometry when the circular array is transformed to a linear array for beam steering and extrinsically from unequal conditions of transducers such as element-to-element variations of sensitivity and directivity. The effects of nonuniformities are compensated by appropriately imposing weight factors on the elements in the projected linear array. Different cases are simulated, where the improvements of the beam pattern, especially the level of the highest sidelobe, are clearly seen, and related issues are discussed. An experiment is performed which uses A0 mode Lamb waves in a steel plate, to demonstrate the usefulness of the proposed method. The discrepancy between theoretical and experimental beam patterns is explained by accounting for near-field effects. (paper)

  5. Continuous catchment-scale monitoring of geomorphic processes with a 2-D seismological array

    Science.gov (United States)

    Burtin, A.; Hovius, N.; Milodowski, D.; Chen, Y.-G.; Wu, Y.-M.; Lin, C.-W.; Chen, H.

    2012-04-01

    The monitoring of geomorphic processes during extreme climatic events is of primary interest to estimate their impact on landscape dynamics. However, available techniques to survey surface activity do not provide adequate time and/or space resolution. Furthermore, these methods can hardly investigate the dynamics of the events, since their detection is made a posteriori. To increase our knowledge of landscape evolution and the influence of extreme climatic events on catchment dynamics, we need to develop new tools and procedures. Many past works have shown that seismic signals are suitable for detecting and locating surface processes (landslides, debris flows). During the 2010 typhoon season, we deployed a network of 12 seismometers dedicated to monitoring the surface processes of the Chenyoulan catchment in Taiwan. We test the ability of a two-dimensional array with small inter-station distances (~11 km) to map the geomorphic activity continuously and at the catchment scale. The spectral analysis of continuous records shows high-frequency (>1 Hz) seismic energy that is coherent with the occurrence of hillslope and river processes. Using a basic detection algorithm and a location approach based on the analysis of seismic amplitudes, we manage to locate the catchment activity. We mainly observe short-duration events (>300 occurrences) associated with debris falls and bank collapses during daily convective storms, where 69% of occurrences are coherent with the time distribution of precipitation. We also identify a couple of debris flows during a large tropical storm. In contrast, the FORMOSAT imagery does not detect any activity, which partly reflects the lack of extreme climatic conditions during the experiment. However, high-resolution pictures confirm the existence of links between most geomorphic events and existing structures (landslide scars, gullies...). We thus conclude that the activity is dominated by reactivation processes. It

  6. Flexible eddy current coil arrays

    International Nuclear Information System (INIS)

    Krampfner, Y.; Johnson, D.P.

    1987-01-01

    A novel approach was devised to overcome certain limitations of conventional eddy current testing. The typical single-element hand-wound probe was replaced with a two dimensional array of spirally wound probe elements deposited on a thin, flexible polyimide substrate. This provides full and reliable coverage of the test area and eliminates the need for scanning. The flexible substrate construction of the array allows the probes to conform to irregular part geometries, such as turbine blades and tubing, thereby eliminating the need for specialized probes for each geometry. Additionally, the batch manufacturing process of the array can yield highly uniform and reproducible coil geometries. The array is driven by a portable computer-based eddy current instrument, smartEDDY™, capable of two-frequency operation, and offers a great deal of versatility and flexibility due to its software-based architecture. The array is coupled to the instrument via an 80-switch multiplexer that can be configured to address up to 1600 probes. The individual array elements may be addressed in any desired sequence, as defined by the software.

  7. DUAL POLARIZATION ANTENNA ARRAY WITH VERY LOW CROSS POLARIZATION AND LOW SIDE LOBES

    DEFF Research Database (Denmark)

    1997-01-01

    The present invention relates to an antenna array adapted to radiate or receive electromagnetic waves of one or two polarizations with very low cross polarization and low side lobes. An antenna array comprising many antenna elements, e.g. more than ten antenna elements, is provided in which formation of grating lobes is inhibited in selected directions of the radiation and cross polarization within the main lobe is suppressed at least 30 dB below the main lobe peak value. According to a preferred embodiment of the invention, the antenna elements of the antenna array comprise probe-fed patches...

  8. A mixed signal ECG processing platform with an adaptive sampling ADC for portable monitoring applications.

    Science.gov (United States)

    Kim, Hyejung; Van Hoof, Chris; Yazicioglu, Refet Firat

    2011-01-01

    This paper describes a mixed-signal ECG processing platform with a 12-bit ADC architecture that can adapt its sampling rate according to the input signal's rate of change. This enables the sampling of ECG signals with a significantly reduced data rate without loss of information. The presented adaptive sampling scheme reduces the ADC power consumption, enables the processing of ECG signals with lower power consumption, and reduces the power consumption of the radio while streaming the ECG signals. The test results show that running a CWT-based R peak detection algorithm using the adaptively sampled ECG signals consumes only 45.6 μW and leads to 36% less overall system power consumption.
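
    A hedged software model of the adaptive-sampling idea described above (sample densely where the ECG changes quickly, sparsely where it is flat) is sketched below; the slope threshold, decimation factor and synthetic ECG are illustrative, not the chip's actual parameters.

```python
import numpy as np

def adaptive_sample(signal, fs, slope_threshold, max_decimation=8):
    """Keep every sample where the local rate of change exceeds the threshold
    (e.g. around QRS complexes); elsewhere keep only every Nth sample."""
    slope = np.abs(np.gradient(signal)) * fs      # amplitude units per second
    keep = slope > slope_threshold
    keep[::max_decimation] = True                 # bound the worst-case gap
    indices = np.flatnonzero(keep)
    return indices, signal[indices]

# Synthetic ECG-like trace: slow baseline with sharp periodic spikes.
fs = 500.0
t = np.arange(0, 4, 1 / fs)
ecg = 0.05 * np.sin(2 * np.pi * 1.0 * t)
ecg[np.arange(t.size) % 400 < 3] += 1.0           # crude R-peak-like spikes
idx, samples = adaptive_sample(ecg, fs, slope_threshold=20.0)
print(f"kept {len(idx)} of {ecg.size} samples")
```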

  9. When noise is beneficial for sensory encoding: Noise adaptation can improve face processing.

    Science.gov (United States)

    Menzel, Claudia; Hayn-Leichsenring, Gregor U; Redies, Christoph; Németh, Kornél; Kovács, Gyula

    2017-10-01

    The presence of noise usually impairs the processing of a stimulus. Here, we studied the effects of noise on face processing and show, for the first time, that adaptation to noise patterns has beneficial effects on face perception. We used noiseless faces that were either surrounded by random noise or presented on a uniform background as stimuli. In addition, the faces were either preceded by noise adaptors or not. Moreover, we varied the statistics of the noise so that its spectral slope either matched that of the faces or it was steeper or shallower. Results of parallel ERP recordings showed that the background noise reduces the amplitude of the face-evoked N170, indicating less intensive face processing. Adaptation to a noise pattern, however, led to reduced P1 and enhanced N170 amplitudes as well as to a better behavioral performance in two of the three noise conditions. This effect was also augmented by the presence of background noise around the target stimuli. Additionally, the spectral slope of the noise pattern affected the size of the P1, N170 and P2 amplitudes. We reason that the observed effects are due to the selective adaptation of noise-sensitive neurons present in the face-processing cortical areas, which may enhance the signal-to-noise-ratio. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Logical Qubit in a Linear Array of Semiconductor Quantum Dots

    Directory of Open Access Journals (Sweden)

    Cody Jones

    2018-06-01

    Full Text Available We design a logical qubit consisting of a linear array of quantum dots, we analyze error correction for this linear architecture, and we propose a sequence of experiments to demonstrate components of the logical qubit on near-term devices. To avoid the difficulty of fully controlling a two-dimensional array of dots, we adapt spin control and error correction to a one-dimensional line of silicon quantum dots. Control speed and efficiency are maintained via a scheme in which electron spin states are controlled globally using broadband microwave pulses for magnetic resonance, while two-qubit gates are provided by local electrical control of the exchange interaction between neighboring dots. Error correction with two-, three-, and four-qubit codes is adapted to a linear chain of qubits with nearest-neighbor gates. We estimate an error correction threshold of 10^{-4}. Furthermore, we describe a sequence of experiments to validate the methods on near-term devices starting from four coupled dots.

  11. Uniform illumination rendering using an array of LEDs: a signal processing perspective

    NARCIS (Netherlands)

    Yang, Hongming; Bergmans, J.W.M.; Schenk, T.C.W.; Linnartz, J.P.M.G.; Rietman, R.

    2009-01-01

    An array of a large number of LEDs will be widely used in future indoor illumination systems. In this paper, we investigate the problem of rendering uniform illumination by a regular LED array on the ceiling of a room. We first present two general results on the scaling property of the basic

  12. Maximum power point tracking of partially shaded solar photovoltaic arrays

    Energy Technology Data Exchange (ETDEWEB)

    Roy Chowdhury, Shubhajit; Saha, Hiranmay [IC Design and Fabrication Centre, Department of Electronics and Telecommunication Engineering, Jadavpur University (India)

    2010-09-15

    The paper presents the simulation and hardware implementation of maximum power point (MPP) tracking of a partially shaded solar photovoltaic (PV) array using a variant of Particle Swarm Optimization known as Adaptive Perceptive Particle Swarm Optimization (APPSO). Under partially shaded conditions, the photovoltaic (PV) array characteristics become more complex, with multiple maxima in the power-voltage characteristic. The paper presents an algorithmic technique to accurately track the maximum power point (MPP) of a PV array using APPSO. The APPSO algorithm has also been validated in the current work. The proposed technique uses only one pair of sensors to control multiple PV arrays. This results in lower cost and a higher accuracy of 97.7%, compared to the previously obtained accuracy of 96.41% using Particle Swarm Optimization. The proposed tracking technique has been mapped onto an MSP430FG4618 microcontroller for tracking and control purposes. The whole system based on the proposed technique has been realized on a standard two-stage power electronic system configuration. (author)
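
    The APPSO-specific adaptations are not described in the abstract, so the sketch below only illustrates why a particle-swarm search can find the global maximum power point of a multi-peak P-V curve under partial shading, using a generic PSO and a synthetic two-peak curve; all constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def pv_power(v):
    """Synthetic P-V curve of a partially shaded array with two local maxima."""
    return (80.0 * np.exp(-((v - 12.0) / 4.0) ** 2) +
            120.0 * np.exp(-((v - 26.0) / 3.0) ** 2))

def pso_mppt(n_particles=6, iters=40, v_min=0.0, v_max=35.0, w=0.6, c1=1.5, c2=1.5):
    v = rng.uniform(v_min, v_max, n_particles)    # candidate operating voltages
    vel = np.zeros(n_particles)
    p_best_v, p_best_p = v.copy(), pv_power(v)
    g_best_v = p_best_v[p_best_p.argmax()]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = w * vel + c1 * r1 * (p_best_v - v) + c2 * r2 * (g_best_v - v)
        v = np.clip(v + vel, v_min, v_max)
        p = pv_power(v)                           # "measure" power at each voltage
        better = p > p_best_p
        p_best_v[better], p_best_p[better] = v[better], p[better]
        g_best_v = p_best_v[p_best_p.argmax()]
    return g_best_v, pv_power(g_best_v)

v_mpp, p_mpp = pso_mppt()
print(f"global MPP near {v_mpp:.1f} V, {p_mpp:.1f} W")
```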

  13. Improving Tumor Treating Fields Treatment Efficacy in Patients With Glioblastoma Using Personalized Array Layouts

    International Nuclear Information System (INIS)

    Wenger, Cornelia; Salvador, Ricardo; Basser, Peter J.; Miranda, Pedro C.

    2016-01-01

    Purpose: To investigate tumors of different size, shape, and location and the effect of varying transducer layouts on Tumor Treating Fields (TTFields) distribution in an anisotropic model. Methods and Materials: A realistic human head model was generated from MR images of 1 healthy subject. Four different virtual tumors were placed at separate locations. The transducer arrays were modeled to mimic the TTFields-delivering commercial device. For each tumor location, varying array layouts were tested. The finite element method was used to calculate the electric field distribution, taking into account tissue heterogeneity and anisotropy. Results: In all tumors, the average electric field induced by either of the 2 perpendicular array layouts exceeded the 1-V/cm therapeutic threshold value for TTFields effectiveness. Field strength within a tumor did not correlate with its size and shape but was higher in more superficial tumors. Additionally, it always increased when the array was adapted to the tumor's location. Compared with a default layout, the largest increase in field strength was 184%, and the highest average field strength induced in a tumor was 2.21 V/cm. Conclusions: These results suggest that adapting array layouts to specific tumor locations can significantly increase field strength within the tumor. Our findings support the idea of personalized treatment planning to increase TTFields efficacy for patients with GBM.

  14. Improving Tumor Treating Fields Treatment Efficacy in Patients With Glioblastoma Using Personalized Array Layouts

    Energy Technology Data Exchange (ETDEWEB)

    Wenger, Cornelia, E-mail: cwenger@fc.ul.pt [Institute of Biophysics and Biomedical Engineering, Faculdade de Ciências, Universidade de Lisboa, Lisbon (Portugal); Salvador, Ricardo [Institute of Biophysics and Biomedical Engineering, Faculdade de Ciências, Universidade de Lisboa, Lisbon (Portugal); Basser, Peter J. [Section on Tissue Biophysics and Biomimetics, Eunice Kennedy Shriver National Institute of Child Health and Human Development, National Institutes of Health, Bethesda, Maryland (United States); Miranda, Pedro C. [Institute of Biophysics and Biomedical Engineering, Faculdade de Ciências, Universidade de Lisboa, Lisbon (Portugal)

    2016-04-01

    Purpose: To investigate tumors of different size, shape, and location and the effect of varying transducer layouts on Tumor Treating Fields (TTFields) distribution in an anisotropic model. Methods and Materials: A realistic human head model was generated from MR images of 1 healthy subject. Four different virtual tumors were placed at separate locations. The transducer arrays were modeled to mimic the TTFields-delivering commercial device. For each tumor location, varying array layouts were tested. The finite element method was used to calculate the electric field distribution, taking into account tissue heterogeneity and anisotropy. Results: In all tumors, the average electric field induced by either of the 2 perpendicular array layouts exceeded the 1-V/cm therapeutic threshold value for TTFields effectiveness. Field strength within a tumor did not correlate with its size and shape but was higher in more superficial tumors. Additionally, it always increased when the array was adapted to the tumor's location. Compared with a default layout, the largest increase in field strength was 184%, and the highest average field strength induced in a tumor was 2.21 V/cm. Conclusions: These results suggest that adapting array layouts to specific tumor locations can significantly increase field strength within the tumor. Our findings support the idea of personalized treatment planning to increase TTFields efficacy for patients with GBM.

  15. Adaptation of instruments developed to study the effectiveness of psychotherapeutic processes

    Directory of Open Access Journals (Sweden)

    Shushanikova, Anastasia A.

    2016-06-01

    Full Text Available The objective of the research was to adapt for use in Russian-language contexts a set of instruments that assess the effectiveness of psychotherapeutic practices. The instruments explore the effectiveness of different types of therapy, without evaluating the abstract, idealized characteristics or specifics of each approach, specialist, or therapeutic case. The adapted instruments are based on reflective data about the significance of therapeutic events, from the point of view of both the client and the therapist. We translated, edited, and adapted forms developed by John McLeod and Mick Cooper: a "Goals Form", a "Goal Assessment Form", a "Post-Session Form", and a "Therapy Personalization Form". The adaptation was intended to cohere with the stylistic and cultural aspects of the Russian language. The research showed that the instruments and the methods have great potential for practical and theoretical application in qualitative studies to formulate hypotheses and to verify them in quantitative studies. The phenomenological analysis reveals the reliability, appropriateness, and validity of the adapted instruments for identifying specific meanings of the psychotherapeutic cases considered. The instruments can be used in studies exploring helpful aspects and effectiveness in different types of therapy (cognitive, existential, outdoor therapy, online counseling, etc.) with different groups of clients. It is reasonable to continue the use of the Russian-language version of the instruments in further studies exploring the effectiveness of psychological practices. The adapted instruments facilitate comparison and cross-cultural studies, and formulation of meaningful hypotheses about the effectiveness and quality of the psychotherapeutic process.

  16. NECTAr: New electronics for the Cherenkov Telescope Array

    International Nuclear Information System (INIS)

    Vorobiov, S.; Bolmont, J.; Corona, P.; Delagnes, E.; Feinstein, F.; Gascon, D.; Glicenstein, J.-F.; Naumann, C.L.; Nayman, P.; Sanuy, A.; Toussenel, F.; Vincent, P.

    2011-01-01

    The European astroparticle physics community aims to design and build the next generation array of Imaging Atmospheric Cherenkov Telescopes (IACTs), that will benefit from the experience of the existing H.E.S.S. and MAGIC detectors, and further expand the very-high energy astronomy domain. In order to gain an order of magnitude in sensitivity in the 10 GeV to >100TeV range, the Cherenkov Telescope Array (CTA) will employ 50-100 mirrors of various sizes equipped with 1000-4000 channels per camera, to be compared with the 6000 channels of the final H.E.S.S. array. A 3-year program, started in 2009, aims to build and test a demonstrator module of a generic CTA camera. We present here the NECTAr design of front-end electronics for the CTA, adapted to the trigger and data acquisition of a large IACTs array, with simple production and maintenance. Cost and camera performances are optimized by maximizing integration of the front-end electronics (amplifiers, fast analog samplers, ADCs) in an ASIC, achieving several GS/s and a few μs readout dead-time. We present preliminary results and extrapolated performances from Monte Carlo simulations.

  17. NECTAr: New electronics for the Cherenkov Telescope Array

    Energy Technology Data Exchange (ETDEWEB)

    Vorobiov, S., E-mail: vorobiov@lpta.in2p3.f [LPTA, Universite Montpellier II and IN2P3/CNRS, Montpellier (France); Bolmont, J.; Corona, P. [LPNHE, Universite Paris VI and IN2P3/CNRS, Paris (France); Delagnes, E. [IRFU/DSM/CEA, Saclay, Gif-sur-Yvette (France); Feinstein, F. [LPTA, Universite Montpellier II and IN2P3/CNRS, Montpellier (France); Gascon, D. [ICC-UB, Universitat Barcelona, Barcelona (Spain); Glicenstein, J.-F. [IRFU/DSM/CEA, Saclay, Gif-sur-Yvette (France); Naumann, C.L.; Nayman, P. [LPNHE, Universite Paris VI and IN2P3/CNRS, Paris (France); Sanuy, A. [ICC-UB, Universitat Barcelona, Barcelona (Spain); Toussenel, F.; Vincent, P. [LPNHE, Universite Paris VI and IN2P3/CNRS, Paris (France)

    2011-05-21

    The European astroparticle physics community aims to design and build the next generation array of Imaging Atmospheric Cherenkov Telescopes (IACTs), that will benefit from the experience of the existing H.E.S.S. and MAGIC detectors, and further expand the very-high energy astronomy domain. In order to gain an order of magnitude in sensitivity in the 10 GeV to >100TeV range, the Cherenkov Telescope Array (CTA) will employ 50-100 mirrors of various sizes equipped with 1000-4000 channels per camera, to be compared with the 6000 channels of the final H.E.S.S. array. A 3-year program, started in 2009, aims to build and test a demonstrator module of a generic CTA camera. We present here the NECTAr design of front-end electronics for the CTA, adapted to the trigger and data acquisition of a large IACTs array, with simple production and maintenance. Cost and camera performances are optimized by maximizing integration of the front-end electronics (amplifiers, fast analog samplers, ADCs) in an ASIC, achieving several GS/s and a few μs readout dead-time. We present preliminary results and extrapolated performances from Monte Carlo simulations.

  18. Chunking of Large Multidimensional Arrays

    Energy Technology Data Exchange (ETDEWEB)

    Rotem, Doron; Otoo, Ekow J.; Seshadri, Sridhar

    2007-02-28

    Data intensive scientific computations as well as on-line analytical processing applications are done on very large datasets that are modeled as k-dimensional arrays. The storage organization of such arrays on disks is done by partitioning the large global array into fixed size hyper-rectangular sub-arrays called chunks or tiles that form the units of data transfer between disk and memory. Typical queries involve the retrieval of sub-arrays in a manner that accesses all chunks that overlap the query results. An important metric of the storage efficiency is the expected number of chunks retrieved over all such queries. The question that immediately arises is "what shapes of array chunks give the minimum expected number of chunks over a query workload?" In this paper we develop two probabilistic mathematical models of the problem and provide exact solutions using steepest descent and geometric programming methods. Experimental results, using synthetic workloads on real life data sets, show that our chunking is much more efficient than the existing approximate solutions.
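
    As a hedged illustration of the cost model behind this chunk-shape question (not the paper's steepest-descent or geometric-programming solutions), the sketch below uses the standard estimate that a range query of size q_i per dimension touches roughly prod_i(q_i/c_i + 1) chunks of shape c_i, and brute-forces the best 2-D chunk shape for a fixed chunk volume.

```python
def expected_chunks(chunk_shape, query_shape):
    """Expected number of chunks touched by a range query placed uniformly at
    random: roughly (q_i / c_i + 1) per dimension, multiplied over dimensions."""
    cost = 1.0
    for c, q in zip(chunk_shape, query_shape):
        cost *= q / c + 1.0
    return cost

def best_chunk_shape(chunk_volume, query_shape, max_side=1024):
    """Brute-force search over 2-D integer chunk shapes with a fixed volume."""
    best = None
    for c0 in range(1, max_side + 1):
        if chunk_volume % c0:
            continue
        c1 = chunk_volume // c0
        if c1 > max_side:
            continue
        cost = expected_chunks((c0, c1), query_shape)
        if best is None or cost < best[1]:
            best = ((c0, c1), cost)
    return best

# 8192-element chunks (e.g. 64 KiB of 8-byte values), typical 1000 x 10 queries:
# elongating the chunk along the long query dimension minimizes chunks read.
print(best_chunk_shape(8192, (1000, 10)))
```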

  19. [Problems in the process of adapting to change among the family caregivers of elderly people with dementia].

    Science.gov (United States)

    Moreno-Cámara, Sara; Palomino-Moral, Pedro Ángel; Moral-Fernández, Lourdes; Frías-Osuna, Antonio; Del-Pino-Casado, Rafael

    2016-01-01

    To identify and analyse problems in adapting to change among the family caregivers of relatives with dementia. Qualitative study based on the methodology of Charmaz's Constructivist Grounded Theory. Seven focus groups were conducted in different primary health care centres in the province of Jaen (Spain). Eighty-two primary family caregivers of relatives with dementia participated by purposeful maximum variation sampling and theoretical sampling. Triangulation analysis was carried out to increase internal validity. We obtained three main categories: 'Changing Care', 'Problems in the process of adapting to change' and 'Facilitators of the process of adapting to change'. Family caregivers perform their role in a context characterized by personal change, both in the person receiving the care and in the social and cultural context. The challenge of adaptation lies in the balance between the problems that hamper adaptation of the caregiver to new situations of care and the factors that facilitate the caregiver role. The adaptation of family caregivers to caring for a person with dementia is hindered by the lack of formal support and under-diagnosis of dementia. The adaptation process could be improved by strengthening formal support in the early stages of care to reduce the stress of family caregivers who must teach themselves about their task and by interventions adapted to each phase in the development of the caregiver role. Copyright © 2016 SESPAS. Published by Elsevier Espana. All rights reserved.

  20. Arraying proteins by cell-free synthesis.

    Science.gov (United States)

    He, Mingyue; Wang, Ming-Wei

    2007-10-01

    Recent advances in life science have led to great motivation for the development of protein arrays to study functions of genome-encoded proteins. While traditional cell-based methods have been commonly used for generating protein arrays, they are usually a time-consuming process with a number of technical challenges. Cell-free protein synthesis offers an attractive system for making protein arrays: not only does it rapidly convert the genetic information into functional proteins without the need for DNA cloning, it also presents a flexible environment amenable to the production of folded proteins or proteins with defined modifications. Recent advancements have made it possible to rapidly generate protein arrays from PCR DNA templates through parallel on-chip protein synthesis. This article reviews current cell-free protein array technologies and their proteomic applications.

  1. Spacer capture and integration by a type I-F Cas1-Cas2-3 CRISPR adaptation complex.

    Science.gov (United States)

    Fagerlund, Robert D; Wilkinson, Max E; Klykov, Oleg; Barendregt, Arjan; Pearce, F Grant; Kieper, Sebastian N; Maxwell, Howard W R; Capolupo, Angela; Heck, Albert J R; Krause, Kurt L; Bostina, Mihnea; Scheltema, Richard A; Staals, Raymond H J; Fineran, Peter C

    2017-06-27

    CRISPR-Cas adaptive immune systems capture DNA fragments from invading bacteriophages and plasmids and integrate them as spacers into bacterial CRISPR arrays. In type I-E and II-A CRISPR-Cas systems, this adaptation process is driven by Cas1-Cas2 complexes. Type I-F systems, however, contain a unique fusion of Cas2, with the type I effector helicase and nuclease for invader destruction, Cas3. By using biochemical, structural, and biophysical methods, we present a structural model of the 400-kDa Cas1₄-Cas2-3₂ complex from Pectobacterium atrosepticum with bound protospacer substrate DNA. Two Cas1 dimers assemble on a Cas2 domain dimeric core, which is flanked by two Cas3 domains forming a groove where the protospacer binds to Cas1-Cas2. We developed a sensitive in vitro assay and demonstrated that Cas1-Cas2-3 catalyzed spacer integration into CRISPR arrays. The integrase domain of Cas1 was necessary, whereas integration was independent of the helicase or nuclease activities of Cas3. Integration required at least partially duplex protospacers with free 3'-OH groups, and leader-proximal integration was stimulated by integration host factor. In a coupled capture and integration assay, Cas1-Cas2-3 processed and integrated protospacers independent of Cas3 activity. These results provide insight into the structure of protospacer-bound type I Cas1-Cas2-3 adaptation complexes and their integration mechanism.

  2. Array processors based on Gaussian fraction-free method

    Energy Technology Data Exchange (ETDEWEB)

    Peng, S; Sedukhin, S [Aizu Univ., Aizuwakamatsu, Fukushima (Japan); Sedukhin, I

    1998-03-01

    The design of algorithmic array processors for solving linear systems of equations using the fraction-free Gaussian elimination method is presented. The design is based on a formal approach which constructs a family of planar array processors systematically. These array processors are synthesized and analyzed. It is shown that some array processors are optimal in the framework of linear allocation of computations and in terms of the number of processing elements and computing time. (author)
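    For readers unfamiliar with the underlying numerical method, the following scalar Python sketch shows fraction-free (Bareiss) Gaussian elimination, in which all intermediate quantities stay integral for integer inputs; the systolic array-processor mapping that is the subject of the paper is not reproduced here.

        from fractions import Fraction

        def bareiss_solve(A, b):
            # Solve A x = b with fraction-free (Bareiss) elimination: the one-step
            # recurrence uses exact integer division, which is what makes the method
            # attractive for fixed-width hardware.
            n = len(A)
            M = [list(map(int, row)) + [int(bi)] for row, bi in zip(A, b)]  # augmented matrix
            prev_pivot = 1
            for k in range(n - 1):
                if M[k][k] == 0:  # simple row swap for the sketch
                    swap = next(i for i in range(k + 1, n) if M[i][k] != 0)
                    M[k], M[swap] = M[swap], M[k]
                for i in range(k + 1, n):
                    for j in range(k + 1, n + 1):
                        M[i][j] = (M[i][j] * M[k][k] - M[i][k] * M[k][j]) // prev_pivot
                    M[i][k] = 0
                prev_pivot = M[k][k]
            # Back-substitution, done with rationals for clarity.
            x = [Fraction(0)] * n
            for i in reversed(range(n)):
                s = M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))
                x[i] = Fraction(s, M[i][i])
            return x

        if __name__ == "__main__":
            print(bareiss_solve([[2, 1, -1], [-3, -1, 2], [-2, 1, 2]], [8, -11, -3]))
            # expected solution: [2, 3, -1]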

  3. Co-Prime Frequency and Aperture Design for HF Surveillance, Wideband Radar Imaging, and Nonstationary Array Processing

    Science.gov (United States)

    2018-03-10

    A computational electromagnetics software package, FEKO [24], is used to model the antenna arrays, and the RMIM [12] is used to...

  4. Basal ganglia-dependent processes in recalling learned visual-motor adaptations.

    Science.gov (United States)

    Bédard, Patrick; Sanes, Jerome N

    2011-03-01

    Humans learn and remember motor skills to permit adaptation to a changing environment. During adaptation, the brain develops new sensory-motor relationships that become stored in an internal model (IM) that may be retained for extended periods. How the brain learns new IMs and transforms them into long-term memory remains incompletely understood since prior work has mostly focused on the learning process. A current model suggests that basal ganglia, cerebellum, and their neocortical targets actively participate in forming new IMs but that a cerebellar cortical network would mediate automatization. However, a recent study (Marinelli et al. 2009) reported that patients with Parkinson's disease (PD), who have basal ganglia dysfunction, had similar adaptation rates as controls but demonstrated no savings at recall tests (24 and 48 h). Here, we assessed whether a longer training session, a feature known to increase long-term retention of IM in healthy individuals, could allow PD patients to demonstrate savings. We recruited PD patients and age-matched healthy adults and used a visual-motor adaptation paradigm similar to the study by Marinelli et al. (2009), doubling the number of training trials and assessed recall after a short and a 24-h delay. We hypothesized that a longer training session would allow PD patients to develop an enhanced representation of the IM as demonstrated by savings at the recall tests. Our results showed that PD patients had similar adaptation rates as controls but did not demonstrate savings at both recall tests. We interpret these results as evidence that fronto-striatal networks have involvement in the early to late phase of motor memory formation, but not during initial learning.

  5. Maximum-likelihood methods for array processing based on time-frequency distributions

    Science.gov (United States)

    Zhang, Yimin; Mu, Weifeng; Amin, Moeness G.

    1999-11-01

    This paper proposes a novel time-frequency maximum likelihood (t-f ML) method for direction-of-arrival (DOA) estimation for non-stationary signals, and compares this method with conventional maximum likelihood DOA estimation techniques. Time-frequency distributions localize the signal power in the time-frequency domain, and as such enhance the effective SNR, leading to improved DOA estimation. The localization of signals with different t-f signatures permits the division of the time-frequency domain into smaller regions, each containing fewer signals than those incident on the array. The reduction of the number of signals within different time-frequency regions not only reduces the required number of sensors, but also decreases the computational load in multi-dimensional optimizations. Compared to the recently proposed time-frequency MUSIC (t-f MUSIC), the proposed t-f ML method can be applied in coherent environments, without the need to perform any type of preprocessing that is subject to both array geometry and array aperture.

  6. Analyzing CMOS/SOS fabrication for LSI arrays

    Science.gov (United States)

    Ipri, A. C.

    1978-01-01

    Report discusses set of design rules that have been developed as result of work with test arrays. Set of optimum dimensions is given that would maximize process output and would correspondingly minimize costs in fabrication of large-scale integration (LSI) arrays.

  7. The adaptation process of mothers raising a child with complex congenital heart disease.

    Science.gov (United States)

    Ahn, Jeong-Ah; Lee, Sunhee

    2018-01-01

    Mothers of children with congenital heart disease (CHD) tend to be concerned about their child's normal life. The majority of these mothers tend to experience negative psychological problems. In this study, the adaptation process of mothers raising a child with complex CHD was investigated based on the sociocultural context of Korea. The data collection was conducted by in-depth interviews and theoretical sampling was performed until the data were saturated. The collected data were analyzed using continuous theoretical comparisons. The results of the present study showed that the core category in the mothers' adaptation process was 'anxiety regarding the future', and the mothers' adaptation process consisted of the impact phase, standing against phase, and accepting phase. In the impact phase, the participants emotionally fluctuated between 'feelings of abandonment' and 'entertaining hope'. In the standing against phase, participants tended to dedicate everything to child-rearing while being affected by 'being encouraged by support' and 'being frustrated by tasks beyond their limits'. In the accepting phase, the subjects attempted to 'accept the child as is', 'resist hard feelings', and 'share hope'. Health-care providers need to develop programs that include information regarding CHD, how to care for a child with CHD, and effective child-rearing behaviors.

  8. System Realization of Broad Band Digital Beam Forming for Digital Array Radar

    Directory of Open Access Journals (Sweden)

    Wang Feng

    2013-09-01

    Full Text Available Broadband Digital Beam Forming (DBF) is the key technique for the realization of Digital Array Radar (DAR). We propose a method that combines the channel equalization and the DBF time-delay filter functions by using the adaptive Sample Matrix Inversion algorithm. The broadband DBF function is realized on a new DBF module based on parallel fiber-optic engines and a Field Programmable Gate Array (FPGA). Good performance is achieved when it is used in some radar products.
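    The following numpy sketch shows narrowband sample matrix inversion (SMI) beamforming for a uniform linear array, the algorithm named in the abstract; the broadband time-delay filtering, channel equalization and FPGA implementation of the paper are not modeled, and the array size, directions and noise levels are made up for illustration.

        import numpy as np

        def steering_vector(n_elems, spacing_wl, theta_deg):
            # Narrowband ULA steering vector; spacing_wl is element spacing in wavelengths.
            n = np.arange(n_elems)
            return np.exp(2j * np.pi * spacing_wl * n * np.sin(np.deg2rad(theta_deg)))

        def smi_weights(snapshots, steer, diag_load=1e-3):
            # Sample Matrix Inversion: estimate the covariance from the snapshots
            # (n_elems x n_snapshots) and form w = R^-1 a / (a^H R^-1 a).
            n_elems = snapshots.shape[0]
            R = snapshots @ snapshots.conj().T / snapshots.shape[1]
            R += diag_load * np.trace(R).real / n_elems * np.eye(n_elems)  # diagonal loading
            Rinv_a = np.linalg.solve(R, steer)
            return Rinv_a / (steer.conj() @ Rinv_a)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            N, K = 16, 512                        # hypothetical 16-element ULA, 512 snapshots
            a0 = steering_vector(N, 0.5, 0.0)     # look direction
            aj = steering_vector(N, 0.5, 35.0)    # interferer direction
            X = (10 * aj[:, None] * rng.standard_normal(K)
                 + 0.1 * (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))))
            w = smi_weights(X, a0)
            gain = lambda th: abs(w.conj() @ steering_vector(N, 0.5, th))
            print(f"gain at 0 deg: {gain(0.0):.2f}, at 35 deg: {gain(35.0):.4f}")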

  9. Comparison of candidate solar array maximum power utilization approaches. [for spacecraft propulsion

    Science.gov (United States)

    Costogue, E. N.; Lindena, S.

    1976-01-01

    A study was made of five potential approaches that can be utilized to detect the maximum power point of a solar array while sustaining operations at or near maximum power and without endangering stability or causing array voltage collapse. The approaches studied included: (1) dynamic impedance comparator, (2) reference array measurement, (3) onset of solar array voltage collapse detection, (4) parallel tracker, and (5) direct measurement. The study analyzed the feasibility and adaptability of these approaches to a future solar electric propulsion (SEP) mission, and, specifically, to a comet rendezvous mission. Such missions presented the most challenging requirements to a spacecraft power subsystem in terms of power management over large solar intensity ranges of 1.0 to 3.5 AU. The dynamic impedance approach was found to have the highest figure of merit, and the reference array approach followed closely behind. The results are applicable to terrestrial solar power systems as well as to space missions other than SEP.
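    As a worked illustration of the dynamic-impedance idea (at the maximum power point dP/dV = 0, i.e. the incremental impedance dV/dI equals the negative of the static impedance V/I), the following Python sketch climbs a hypothetical single-diode array model toward its maximum power point. The model and all parameter values are illustrative only, not those of the SEP study.

        import numpy as np

        def array_current(v, i_ph=5.0, i_0=1e-9, v_t=1.0):
            # Hypothetical single-diode solar array model: I = Iph - I0*(exp(V/Vt) - 1).
            return i_ph - i_0 * (np.exp(v / v_t) - 1.0)

        def dynamic_impedance_tracker(v_start=10.0, step=0.25, iters=300):
            # Compare the incremental conductance dI/dV with -I/V: at the MPP
            # dP/dV = I + V*dI/dV = 0, so the sign of (dI/dV + I/V) says which way to move.
            v = v_start
            for _ in range(iters):
                i = array_current(v)
                di_dv = (array_current(v + 1e-3) - array_current(v - 1e-3)) / 2e-3
                if di_dv + i / v > 0:       # dP/dV > 0: raise the operating voltage
                    v += step
                else:                       # dP/dV < 0: back off
                    v -= step
                step *= 0.99                # shrink the step so the operating point settles
            return v, v * array_current(v)

        if __name__ == "__main__":
            v_mpp, p_mpp = dynamic_impedance_tracker()
            print(f"operating point ~{v_mpp:.1f} V, ~{p_mpp:.1f} W")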

  10. Flexible Description and Adaptive Processing of Earth Observation Data through the BigEarth Platform

    Science.gov (United States)

    Gorgan, Dorian; Bacu, Victor; Stefanut, Teodor; Nandra, Cosmin; Mihon, Danut

    2016-04-01

    Earth Observation data repositories, which grow periodically by several terabytes, are becoming a critical issue for organizations. Managing the storage capacity of such big datasets, together with access policy, data protection, searching, and complex processing, entails high costs and calls for efficient solutions that balance the cost and value of data. Data create value only when they are used, and data protection has to be oriented toward allowing innovation, which often depends on creative people achieving unexpectedly valuable results in a flexible and adaptive manner. Users need to describe and experiment with different complex algorithms through analytics in order to extract value from the data. Analytics uses descriptive and predictive models to gain valuable knowledge and information from data analysis. Possible solutions for advanced processing of big Earth Observation data are given by HPC platforms such as the cloud. With platforms becoming more complex and heterogeneous, developing applications is even harder, and efficiently mapping these applications onto a suitable and optimal platform, working on huge distributed data repositories, is challenging and complex as well, even when using specialized software services. From the user point of view, an optimal environment gives acceptable execution times, offers a high level of usability by hiding the complexity of the computing infrastructure, and supports open accessibility and control of application entities and functionality. The BigEarth platform [1] supports the entire flow of flexible description of processing by basic operators and adaptive execution over cloud infrastructure [2]. The basic modules of the pipeline, such as the KEOPS [3] set of basic operators, the WorDeL language [4], the Planner for sequential and parallel processing, and the Executor based on virtual machines, are detailed as the main components of the BigEarth platform [5]. The presentation exemplifies the development

  11. Applying the sequential neural-network approximation and orthogonal array algorithm to optimize the axial-flow cooling system for rapid thermal processes

    International Nuclear Information System (INIS)

    Hung, Shih-Yu; Shen, Ming-Ho; Chang, Ying-Pin

    2009-01-01

    The sequential neural-network approximation and orthogonal array (SNAOA) were used to shorten the cooling time for the rapid cooling process such that the normalized maximum resolved stress in silicon wafer was always below one in this study. An orthogonal array was first conducted to obtain the initial solution set. The initial solution set was treated as the initial training sample. Next, a back-propagation sequential neural network was trained to simulate the feasible domain to obtain the optimal parameter setting. The size of the training sample was greatly reduced due to the use of the orthogonal array. In addition, a restart strategy was also incorporated into the SNAOA so that the searching process may have a better opportunity to reach a near global optimum. In this work, we considered three different cooling control schemes during the rapid thermal process: (1) downward axial gas flow cooling scheme; (2) upward axial gas flow cooling scheme; (3) dual axial gas flow cooling scheme. Based on the maximum shear stress failure criterion, the other control factors such as flow rate, inlet diameter, outlet width, chamber height and chamber diameter were also examined with respect to cooling time. The results showed that the cooling time could be significantly reduced using the SNAOA approach

  12. Host plant adaptation in Drosophila mettleri populations.

    Directory of Open Access Journals (Sweden)

    Sergio Castrezana

    Full Text Available The process of local adaptation creates diversity among allopatric populations, and may eventually lead to speciation. Plant-feeding insect populations that specialize on different host species provide an excellent opportunity to evaluate the causes of ecological specialization and the subsequent consequences for diversity. In this study, we used geographically separated Drosophila mettleri populations that specialize on different host cacti to examine oviposition preference for and larval performance on an array of natural and non-natural hosts (eight total). We found evidence of local adaptation in performance on saguaro cactus (Carnegiea gigantea) for populations that are typically associated with this host, and to chemically divergent prickly pear species (Opuntia spp.) in a genetically isolated population on Santa Catalina Island. Moreover, each population exhibited reduced performance on the alternative host. This finding is consistent with trade-offs associated with adaptation to these chemically divergent hosts, although we also discuss alternative explanations for this pattern. For oviposition preference, Santa Catalina Island flies were more likely to oviposit on some prickly pear species, but all populations readily laid eggs on saguaro. Experiments with non-natural hosts suggest that factors such as ecological opportunity may play a more important role than host plant chemistry in explaining the lack of natural associations with some hosts.

  13. Mosaic Process for the Fabrication of an Acoustic Transducer Array

    National Research Council Canada - National Science Library

    2005-01-01

    .... Deriving a geometric shape for the array based on the established performance level. Selecting piezoceramic materials based on considerations related to the performance level and derived geometry...

  14. Practical guidelines for implementing adaptive optics in fluorescence microscopy

    Science.gov (United States)

    Wilding, Dean; Pozzi, Paolo; Soloviev, Oleg; Vdovin, Gleb; Verhaegen, Michel

    2018-02-01

    In life sciences, interest in the microscopic imaging of increasingly complex three dimensional samples, such as cell spheroids, zebrafish embryos, and in vivo applications in small animals, is growing quickly. Due to the increasing complexity of samples, more and more life scientists are considering the implementation of adaptive optics in their experimental setups. While several approaches to adaptive optics in microscopy have been reported, it is often difficult and confusing for the microscopist to choose from the array of techniques and equipment. In this poster presentation we offer a small guide to adaptive optics providing general guidelines for successful adaptive optics implementation.

  15. A semi-automatic calibration method for seismic arrays applied to an Alaskan array

    Science.gov (United States)

    Lindquist, K. G.; Tibuleac, I. M.; Hansen, R. A.

    2001-12-01

    Well-calibrated, small (less than 22 km) aperture seismic arrays are of great importance for event location and characterization. We have implemented the crosscorrelation method of Tibuleac and Herrin (Seis. Res. Lett. 1997) as a semi-automatic procedure, applicable to any seismic array. With this we are able to process thousands of phases with several days of computer time on a Sun Blade 1000 workstation. Complicated geology beneath the elements and elevation differences amongst the array stations made station corrections necessary. 328 core phases (including PcP, PKiKP, PKP, PKKP) were used in order to determine the static corrections. To demonstrate this application and method, we have analyzed P and PcP arrivals at the ILAR array (Eielson, Alaska) between 1995 and 2000. The arrivals were picked by the PIDC, for events (mb>4.0) well located by the USGS. We calculated backazimuth and horizontal velocity residuals for all events. We observed large backazimuth residuals for regional and near-regional phases. We discuss the possibility of a dipping Moho (strike E-W, dip N) beneath the array versus other local structure that would produce the residuals.
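    For readers unfamiliar with how backazimuth and horizontal-velocity residuals are obtained from an array, the following sketch fits a plane wave to relative arrival times by least squares; the station coordinates and picks are synthetic, and the crosscorrelation picking and static corrections of the study are not implemented.

        import numpy as np

        def plane_wave_fit(coords_km, delays_s):
            # Least-squares plane-wave fit: delays ~ coords @ s + t0, where s is the
            # horizontal slowness vector (s/km, east and north components). Returns the
            # backazimuth (deg, clockwise from north) and the apparent horizontal velocity.
            A = np.column_stack([coords_km, np.ones(len(coords_km))])
            sol, *_ = np.linalg.lstsq(A, delays_s, rcond=None)
            sx, sy = sol[0], sol[1]
            backazimuth = np.degrees(np.arctan2(-sx, -sy)) % 360.0  # wave arrives from -s
            return backazimuth, 1.0 / np.hypot(sx, sy)

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            coords = rng.uniform(-5, 5, size=(9, 2))      # hypothetical 9-element array (E, N) in km
            az_prop = np.radians(230.0)                   # wave propagating toward 230 deg
            true_s = 0.08 * np.array([np.sin(az_prop), np.cos(az_prop)])
            delays = coords @ true_s + 1e-3 * rng.standard_normal(9)
            baz, v_app = plane_wave_fit(coords, delays)
            print(f"backazimuth ~{baz:.1f} deg, apparent velocity ~{v_app:.1f} km/s")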

  16. A biological inspired fuzzy adaptive window median filter (FAWMF) for enhancing DNA signal processing.

    Science.gov (United States)

    Ahmad, Muneer; Jung, Low Tan; Bhuiyan, Al-Amin

    2017-10-01

    Digital signal processing techniques commonly employ fixed-length window filters to process the signal contents. DNA signals differ in characteristics from common digital signals since they carry nucleotides as contents. The nucleotides carry genetic code context and exhibit fuzzy behaviors due to their special structure and order in the DNA strand. Employing conventional fixed-length window filters for DNA signal processing produces spectral leakage and hence results in signal noise. A biological-context-aware adaptive window filter is required to process DNA signals. This paper introduces a biologically inspired fuzzy adaptive window median filter (FAWMF) which computes the fuzzy membership strength of nucleotides in each slide of the window and filters nucleotides based on median filtering with a combination of s-shaped and z-shaped filters. Since coding regions cause 3-base periodicity through an unbalanced nucleotide distribution producing a relatively high bias in nucleotide usage, this fundamental characteristic of nucleotides has been exploited in the FAWMF to suppress the signal noise. Along with the adaptive response of the FAWMF, a strong correlation between median nucleotides and the Π-shaped filter was observed, which produced enhanced discrimination between coding and non-coding regions compared with fixed-length conventional window filters. The proposed FAWMF attains a significant enhancement in coding region identification, i.e., 40% to 125%, as compared to other conventional window filters tested over more than 250 benchmarked and randomly taken DNA datasets of different organisms. This study shows that conventional fixed-length window filters applied to DNA signals do not achieve significant results since the nucleotides carry genetic code context. The proposed FAWMF algorithm is adaptive and significantly outperforms them in processing DNA signal contents. The algorithm applied to a variety of DNA datasets produced noteworthy discrimination between coding and non-coding regions compared with conventional fixed-length window filters.
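    The FAWMF itself is not reproducible from the abstract, but the general ingredients it builds on (nucleotide indicator sequences, 3-base periodicity of coding regions, and median smoothing with a window that adapts to the local signal) can be illustrated with the much-simplified Python sketch below; window sizes, the periodicity measure and the toy sequence are all assumptions.

        import numpy as np
        from scipy.signal import medfilt

        def periodicity_3(seq, window=120):
            # Sliding-window strength of 3-base periodicity, computed from binary
            # indicator sequences of the four nucleotides (DFT evaluated at k = window/3).
            indicators = np.array([[c == b for c in seq] for b in "ACGT"], dtype=float)
            n = len(seq) - window + 1
            k = window // 3
            w = np.exp(-2j * np.pi * k * np.arange(window) / window)
            power = np.zeros(n)
            for ind in indicators:
                power += np.abs([np.dot(ind[i:i + window], w) for i in range(n)]) ** 2
            return power

        def adaptive_median(signal, short_win=5, long_win=21):
            # Median-smooth the periodicity track with a per-sample window: strong
            # regions get light smoothing, weak/noisy regions heavy smoothing.
            strong = signal > np.median(signal)
            return np.where(strong, medfilt(signal, short_win), medfilt(signal, long_win))

        if __name__ == "__main__":
            rng = np.random.default_rng(2)
            seq = "".join(rng.choice(list("ACGT"), size=600))   # random, non-coding-like stretch
            seq += "ATG" + "GCT" * 200                          # crude repeat with strong 3-base bias
            track = adaptive_median(periodicity_3(seq))
            print("mean periodicity, first vs last 500 windows:",
                  round(float(track[:500].mean()), 1), round(float(track[-500:].mean()), 1))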

  17. Hardware Implementation of LMS-Based Adaptive Noise Cancellation Core with Low Resource Utilization

    Directory of Open Access Journals (Sweden)

    Omid Sharifi Tehrani

    2011-10-01

    Full Text Available A hardware implementation of an adaptive noise cancellation (ANC) core is proposed. Adaptive filters are widely used in different applications such as adaptive noise cancellation, prediction, equalization, inverse modeling and system identification. FIR adaptive filters are mostly used because of their low computation costs and their linear phase. The least mean squares (LMS) algorithm is used to train the FIR adaptive filter weights. Advances in semiconductor technology, especially in digital signal processors (DSPs) and field programmable gate arrays (FPGAs) with hundreds of megahertz in speed, allow digital designers to embed essential digital signal processing units in small chips. But designing a synthesizable core on an FPGA is not always as simple as on DSP chips, due to the complexity and limitations of FPGAs. In this paper we design an LMS-based FIR adaptive filter for adaptive noise cancellation based on the VHDL97 hardware description language (HDL) and a Xilinx SPARTAN3E (XC3S500E); the design utilizes low resources, is high performance, and is FPGA-brand independent, so it can be implemented on different FPGA brands (Xilinx, ALTERA, ACTEL). Simulations are done in MODELSIM and MATLAB and implementation is done with Xilinx ISE. Finally, results are compared with other papers for better judgment.
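    The following floating-point Python model illustrates the LMS-based adaptive noise cancellation that the core implements; it is a behavioural reference only (the fixed-point arithmetic, VHDL architecture and FPGA resource aspects of the paper are not represented), and the signals and filter settings are made up.

        import numpy as np

        def lms_noise_canceller(primary, reference, n_taps=32, mu=0.01):
            # Classic ANC structure: an FIR filter learns to predict the noise component
            # of `primary` from the correlated `reference`; the error output is the
            # cleaned signal. LMS update: w <- w + mu * e * x.
            w = np.zeros(n_taps)
            x_buf = np.zeros(n_taps)
            cleaned = np.zeros(len(primary))
            for n in range(len(primary)):
                x_buf = np.roll(x_buf, 1)
                x_buf[0] = reference[n]
                y = w @ x_buf                  # noise estimate
                e = primary[n] - y             # error = signal estimate
                w += mu * e * x_buf
                cleaned[n] = e
            return cleaned

        if __name__ == "__main__":
            rng = np.random.default_rng(3)
            n = 5000
            t = np.arange(n) / 1000.0
            signal = np.sin(2 * np.pi * 3 * t)                            # stand-in for the desired signal
            noise = rng.standard_normal(n)
            primary = signal + np.convolve(noise, [0.6, -0.3, 0.1])[:n]   # noise through an unknown path
            cleaned = lms_noise_canceller(primary, noise)
            print("residual MSE after convergence:",
                  round(float(np.mean((cleaned[2000:] - signal[2000:]) ** 2)), 4))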

  18. The Owens Valley Millimeter Array

    International Nuclear Information System (INIS)

    Padin, S.; Scott, S.L.; Woody, D.P.; Scoville, N.Z.; Seling, T.V.

    1991-01-01

    The telescopes and signal processing systems of the Owens Valley Millimeter Array are considered, and improvements in the sensitivity and stability of the instrument are characterized. The instrument can be applied to map sources in the 85 to 115 GHz and 218 to 265 GHz bands with a resolution of about 1 arcsec in the higher frequency band. The operation of the array is fully automated. The current scientific programs for the array encompass high-resolution imaging of protoplanetary/protostellar disk structures, observations of molecular cloud complexes associated with spiral structure in nearby galaxies, and observations of molecular structures in the nuclei of spiral and luminous IRAS galaxies. 9 refs

  19. ADAPTATION OF TEACHING PROCESS BASED ON A STUDENT'S INDIVIDUAL LEARNING NEEDS

    Directory of Open Access Journals (Sweden)

    TAKÁCS, Ondřej

    2011-03-01

    Full Text Available The development of today's society requires the integration of information technology into every sector, including education. The idea of adaptive teaching in an e-learning environment is based on paying attention and giving support to various learning styles. More effective, user-friendly, and thus better-quality education can be achieved through such an environment. Learning can be influenced by many factors. In this paper we deal with such factors as the student's personality and qualities – particularly learning style and motivation. In addition we want to prepare study materials and a study environment that respect students' differences. Adaptive e-learning means an automated way of teaching which adapts to the different qualities of students that are characteristic of their learning styles. In the last few years we have seen a gradual individualization of study, not only in distance forms of study but also among full-time students. Instructional supports, namely those of e-learning, should take this trend into account and adapt the educational processes to individual students' qualities. The present learning management systems (LMS) offer this possibility only to a very limited extent. This paper deals with the design of intelligent virtual tutor behavior, which would adapt its teaching to both static and dynamically changing student qualities. In order to manage all that, the virtual tutor has to have a sufficiently rich supply of different styles and forms of teaching, with enough information about styles of learning, kinds of memory and other student qualities. This paper describes a draft adaptive education model and the results of the first part of the solution – definition of learning styles, pilot testing on students and an outline of further research.

  20. Fabrication of CoZn alloy nanowire arrays: Significant improvement in magnetic properties by annealing process

    Energy Technology Data Exchange (ETDEWEB)

    Koohbor, M. [Department of Physics, University of Kurdistan, Sanandaj (Iran, Islamic Republic of); Soltanian, S., E-mail: s.soltanian@gmail.com [Department of Physics, University of Kurdistan, Sanandaj (Iran, Islamic Republic of); Department of Electrical and Computer Engineering, University of British Columbia, Vancouver (Canada); Najafi, M. [Department of Physics, University of Kurdistan, Sanandaj (Iran, Islamic Republic of); Department of Physics, Hamadan University of Technology, Hamadan (Iran, Islamic Republic of); Servati, P. [Department of Electrical and Computer Engineering, University of British Columbia, Vancouver (Canada)

    2012-01-05

    Highlights: • Increasing the Zn concentration changes the structure of NWs from hcp to amorphous. • Increasing the Zn concentration significantly reduces the Hc value of NWs. • Magnetic properties of CoZn NWs can be significantly enhanced by appropriate annealing. • The pH of electrolyte has no significant effect on the properties of the NW arrays. • Deposition frequency has considerable effects on the magnetic properties of NWs. - Abstract: Highly ordered arrays of Co₁₋ₓZnₓ (0 ≤ x ≤ 0.74) nanowires (NWs) with diameters of ~35 nm and high length-to-diameter ratios (up to 150) were fabricated by co-electrodeposition of Co and Zn into pores of anodized aluminum oxide (AAO) templates. The Co and Zn contents of the NWs were adjusted by varying the ratio of Zn and Co ion concentrations in the electrolyte. The effect of the Zn content, electrodeposition conditions (frequency and pH) and annealing on the structural and magnetic properties (e.g., coercivity (Hc) and squareness (Sq)) of NW arrays was investigated using X-ray diffraction (XRD), scanning electron microscopy, electron diffraction, and alternating gradient force magnetometer (AGFM). XRD patterns reveal that an increase in the concentration of Zn ions of the electrolyte forces the hcp crystal structure of Co NWs to change into an amorphous phase, resulting in a significant reduction in Hc. It was found that the magnetic properties of NWs can be significantly improved by an appropriate annealing process. The highest values for Hc (2050 Oe) and Sq (0.98) were obtained for NWs electrodeposited using 0.95/0.05 Co:Zn concentrations at 200 Hz and annealed at 575 °C. While the pH of electrolyte is found to have no significant effect on the structural and magnetic properties of the NW arrays, the electrodeposition frequency has considerable effects on

  1. ESPRIT And Uniform Linear Arrays

    Science.gov (United States)

    Roy, R. H.; Goldburg, M.; Ottersten, B. E.; Swindlehurst, A. L.; Viberg, M.; Kailath, T.

    1989-11-01

    ESPRIT is a recently developed and patented technique for high-resolution estimation of signal parameters. It exploits an invariance structure designed into the sensor array to achieve a reduction in computational requirements of many orders of magnitude over previous techniques such as MUSIC, Burg's MEM, and Capon's ML, and in addition achieves performance improvement as measured by parameter estimate error variance. It is also manifestly more robust with respect to sensor errors (e.g. gain, phase, and location errors) than other methods as well. Whereas ESPRIT only requires that the sensor array possess a single invariance best visualized by considering two identical but otherwise arbitrary arrays of sensors displaced (but not rotated) with respect to each other, many arrays currently in use in various applications are uniform linear arrays of identical sensor elements. Phased array radars are commonplace in high-resolution direction finding systems, and uniform tapped delay lines (i.e., constant rate A/D converters) are the rule rather than the exception in digital signal processing systems. Such arrays possess many invariances, and are amenable to other types of analysis, which is one of the main reasons such structures are so prevalent. Recent developments in high-resolution algorithms of the signal/noise subspace genre including total least squares (TLS) ESPRIT applied to uniform linear arrays are summarized. ESPRIT is also shown to be a generalization of the root-MUSIC algorithm (applicable only to the case of uniform linear arrays of omni-directional sensors and unimodular cisoids). Comparisons with various estimator bounds, including Cramér-Rao bounds, are presented.
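    A minimal least-squares ESPRIT sketch for a uniform linear array is given below to make the shift-invariance idea concrete; the TLS variant, the root-MUSIC connection and the bound comparisons discussed in the paper are not reproduced, and the scenario (element count, directions, SNR) is hypothetical.

        import numpy as np

        def esprit_ula(X, n_sources, spacing_wl=0.5):
            # LS-ESPRIT for a ULA: X is (n_sensors, n_snapshots) complex data. The two
            # maximally overlapping subarrays (sensors 0..M-2 and 1..M-1) provide the
            # invariance that yields the DOAs from the eigenvalues of Psi.
            R = X @ X.conj().T / X.shape[1]
            eigval, eigvec = np.linalg.eigh(R)
            Es = eigvec[:, np.argsort(eigval)[::-1][:n_sources]]   # signal subspace
            E1, E2 = Es[:-1, :], Es[1:, :]
            Psi = np.linalg.lstsq(E1, E2, rcond=None)[0]
            phases = np.angle(np.linalg.eigvals(Psi))
            return np.degrees(np.arcsin(phases / (2 * np.pi * spacing_wl)))

        if __name__ == "__main__":
            rng = np.random.default_rng(4)
            M, K, doas = 10, 400, np.array([-20.0, 12.0])          # hypothetical scenario
            A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(M), np.sin(np.radians(doas))))
            S = (rng.standard_normal((2, K)) + 1j * rng.standard_normal((2, K))) / np.sqrt(2)
            X = A @ S + 0.1 * (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K)))
            print(np.sort(esprit_ula(X, n_sources=2)))             # close to -20 and 12 degrees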

  2. Big Data Challenges for Large Radio Arrays

    Science.gov (United States)

    Jones, Dayton L.; Wagstaff, Kiri; Thompson, David; D'Addario, Larry; Navarro, Robert; Mattmann, Chris; Majid, Walid; Lazio, Joseph; Preston, Robert; Rebbapragada, Umaa

    2012-01-01

    Future large radio astronomy arrays, particularly the Square Kilometre Array (SKA), will be able to generate data at rates far higher than can be analyzed or stored affordably with current practices. This is, by definition, a "big data" problem, and requires an end-to-end solution if future radio arrays are to reach their full scientific potential. Similar data processing, transport, storage, and management challenges face next-generation facilities in many other fields.

  3. Adapting adaptation: the English eco-town initiative as governance process

    Directory of Open Access Journals (Sweden)

    Daniel Tomozeiu

    2014-06-01

    Full Text Available Climate change adaptation and mitigation have become key policy drivers in the UK under its Climate Change Act of 2008. At the same time, urbanization has been high on the agenda, given the pressing need for substantial additional housing, particularly in southeast England. These twin policy objectives were brought together in the UK government's 'eco-town' initiative for England launched in 2007, which has since resulted in four eco-town projects currently under development. We critically analyze the eco-town initiative's policy evolution and early planning phase from a multilevel governance perspective by focusing on the following two interrelated aspects: (1) the evolving governance structures and resulting dynamics arising from the development of the eco-town initiative at UK governmental level, and the subsequent partial devolution to local stakeholders, including local authorities and nongovernmental actors, under the new 'localism' agenda; and (2) the effect of these governance dynamics on the conceptual and practical approach to adaptation through the emerging eco-town projects. As such, we problematize the impact of multilevel governance relations, and competing governance strategies and leadership, on shaping eco-town and related adaptation strategies and practice.

  4. The Process of Adaptation Following a New Diagnosis of Type 1 Diabetes in Adulthood

    DEFF Research Database (Denmark)

    Due-Christensen, Mette; Zoffmann, Vibeke; Willaing, Ingrid

    2018-01-01

    While Type 1 diabetes (T1D) is generally associated with childhood, half of all cases occur in adulthood. The adaptive strategies individuals employ during the initial adaptive phase may have an important impact on their risk of future diabetes complications and their psychosocial well-being. We conducted a systematic review of six databases and included nine qualitative studies in a meta-synthesis, the aims of which were to develop a better understanding of how adults newly diagnosed with T1D experience the diagnosis and the phenomena associated with the early process of adaptation to life...

  5. Process-morphology scaling relations quantify self-organization in capillary densified nanofiber arrays.

    Science.gov (United States)

    Kaiser, Ashley L; Stein, Itai Y; Cui, Kehang; Wardle, Brian L

    2018-02-07

    Capillary-mediated densification is an inexpensive and versatile approach to tune the application-specific properties and packing morphology of bulk nanofiber (NF) arrays, such as aligned carbon nanotubes. While NF length governs elasto-capillary self-assembly, the geometry of cellular patterns formed by capillary-densified NFs cannot be precisely predicted by existing theories. This originates from the effective axial elastic modulus (E) of NF arrays, recently quantified to be orders of magnitude lower than expected; here we show via parametric experimentation and modeling that E determines the width, area, and wall thickness of the resulting cellular pattern. Both experiments and models show that further tuning of the cellular pattern is possible by altering the NF-substrate adhesion strength, which could enable the broad use of this facile approach to predictably pattern NF arrays for high value applications.

  6. Post-Irradiation Examination of Array Targets - Part I

    Energy Technology Data Exchange (ETDEWEB)

    Icenhour, A.S.

    2004-01-23

    During FY 2001, two arrays, each containing seven neptunium-loaded targets, were irradiated at the Advanced Test Reactor in Idaho to examine the influence of multi-target self-shielding on ²³⁶Pu content and to evaluate fission product release data. One array consisted of seven targets that contained 10 vol% NpO₂ pellets, while the other array consisted of seven targets that contained 20 vol% NpO₂ pellets. The arrays were located in the same irradiation facility but were axially separated to minimize the influence of one array on the other. Each target also contained a dosimeter package, which consisted of a small NpO₂ wire that was inside a vanadium container. After completion of irradiation and shipment back to the Oak Ridge National Laboratory, nine of the targets (four from the 10 vol% array and five from the 20 vol% array) were punctured for pressure measurement and measurement of ⁸⁵Kr. These nine targets and the associated dosimeters were then chemically processed to measure the residual neptunium, total plutonium production, ²³⁸Pu production, and ²³⁶Pu concentration at discharge. The amount and isotopic composition of fission products were also measured. This report provides the results of the processing and analysis of the nine targets.

  7. [Super sweet corn hybrids adaptability for industrial processing. I freezing].

    Science.gov (United States)

    Alfonzo, Braunnier; Camacho, Candelario; Ortiz de Bertorelli, Ligia; De Venanzi, Frank

    2002-09-01

    With the purpose of evaluating adaptability to the freezing process of super sweet corn sh2 hybrids Krispy King, Victor and 324, 100 cobs of each type were frozen at -18 degrees C. After 120 days of storage, their chemical, microbiological and sensorial characteristics were compared with a sweet corn su. Industrial quality of the process of freezing and length and number of rows in cobs were also determined. Results revealed yields above 60% in frozen corns. Length and number of rows in cobs were acceptable. Most of the chemical characteristics of super sweet hybrids were not different from the sweet corn assayed at the 5% significance level. Moisture content and soluble solids of hybrid Victor, as well as total sugars of hybrid 324 were statistically different. All sh2 corns had higher pH values. During freezing, soluble solids concentration, sugars and acids decreased whereas pH increased. Frozen cobs exhibited acceptable microbiological rank, with low activities of mesophiles and total coliforms, absence of psychrophiles and fecal coliforms, and an appreciable amount of molds. In conclusion, sh2 hybrids adapted with no problems to the freezing process, they had lower contents of soluble solids and higher contents of total sugars, which almost doubled the amount of su corn; flavor, texture, sweetness and appearance of kernels were also better. Hybrid Victor was preferred by the evaluating panel and had an outstanding performance due to its yield and sensorial characteristics.

  8. Nanofabrication and characterization of ZnO nanorod arrays and branched microrods by aqueous solution route and rapid thermal processing

    International Nuclear Information System (INIS)

    Lupan, Oleg; Chow, Lee; Chai, Guangyu; Roldan, Beatriz; Naitabdi, Ahmed; Schulte, Alfons; Heinrich, Helge

    2007-01-01

    This paper presents an inexpensive and fast fabrication method for one-dimensional (1D) ZnO nanorod arrays and branched two-dimensional (2D), three-dimensional (3D) - nanoarchitectures. Our synthesis technique includes the use of an aqueous solution route and post-growth rapid thermal annealing. It permits rapid and controlled growth of ZnO nanorod arrays of 1D - rods, 2D - crosses, and 3D - tetrapods without the use of templates or seeds. The obtained ZnO nanorods are uniformly distributed on the surface of Si substrates and individual or branched nano/microrods can be easily transferred to other substrates. Process parameters such as concentration, temperature and time, type of substrate and the reactor design are critical for the formation of nanorod arrays with thin diameter and transferable nanoarchitectures. X-ray diffraction, scanning electron microscopy, X-ray photoelectron spectroscopy, transmission electron microscopy and Micro-Raman spectroscopy have been used to characterize the samples

  9. Intermediate view reconstruction using adaptive disparity search algorithm for real-time 3D processing

    Science.gov (United States)

    Bae, Kyung-hoon; Park, Changhan; Kim, Eun-soo

    2008-03-01

    In this paper, intermediate view reconstruction (IVR) using an adaptive disparity search algorithm (ASDA) is proposed for real-time 3-dimensional (3D) processing. The proposed algorithm can reduce the processing time of disparity estimation by selecting an adaptive disparity search range, and it can also increase the quality of the 3D imaging. That is, by adaptively predicting the mutual correlation between the images of a stereo pair, the bandwidth of the stereo input pair can be compressed to the level of a conventional 2D image, and a predicted image can be effectively reconstructed using a reference image and disparity vectors. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm improves the PSNR of the reconstructed image by about 4.8 dB and reduces the synthesizing time to about 7.02 s compared with conventional algorithms.
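    The authors' ASDA is not specified in the abstract, but the idea of restricting the disparity search range using previously estimated disparities can be illustrated with the simple block-matching sketch below; block size, ranges and the synthetic stereo pair are assumptions.

        import numpy as np

        def block_match_adaptive(left, right, block=8, full_range=16, local_range=3):
            # Row-wise block matching with an adaptive search range: the first block of
            # each row is searched over the full disparity range, later blocks only
            # around their left neighbour's disparity. Returns one disparity per block.
            h, w = left.shape
            n_bx = (w - full_range) // block
            dmap = np.zeros((h // block, n_bx), dtype=int)
            for by in range(h // block):
                prev_d = None
                for bx in range(n_bx):
                    y0, x0 = by * block, full_range + bx * block
                    patch = left[y0:y0 + block, x0:x0 + block].astype(float)
                    lo, hi = ((0, full_range + 1) if prev_d is None
                              else (max(0, prev_d - local_range), prev_d + local_range + 1))
                    best_d, best_cost = 0, np.inf
                    for d in range(lo, hi):
                        cand = right[y0:y0 + block, x0 - d:x0 - d + block].astype(float)
                        cost = np.abs(patch - cand).sum()       # SAD matching cost
                        if cost < best_cost:
                            best_d, best_cost = d, cost
                    dmap[by, bx] = prev_d = best_d
            return dmap

        if __name__ == "__main__":
            rng = np.random.default_rng(5)
            right = rng.integers(0, 255, size=(64, 160))
            left = np.roll(right, 7, axis=1)                    # synthetic pair, 7-pixel shift
            dmap = block_match_adaptive(left, right)
            print("estimated disparity (mode):", int(np.bincount(dmap.ravel()).argmax()))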

  10. Two-dimensional random arrays for real time volumetric imaging

    DEFF Research Database (Denmark)

    Davidsen, Richard E.; Jensen, Jørgen Arendt; Smith, Stephen W.

    1994-01-01

    Two-dimensional arrays are necessary for a variety of ultrasonic imaging techniques, including elevation focusing, 2-D phase aberration correction, and real time volumetric imaging. In order to reduce system cost and complexity, sparse 2-D arrays have been considered with element geometries selected ad hoc, by algorithm, or by random process. Two random sparse array geometries and a sparse array with a Mills cross receive pattern were simulated and compared to a fully sampled aperture with the same overall dimensions. The sparse arrays were designed to the constraints of the Duke University real time volumetric imaging system, which employs a wide transmit beam and receive mode parallel processing to increase image frame rate. Depth-of-field comparisons were made from simulated on-axis and off-axis beamplots at ranges from 30 to 160 mm for both coaxial and offset transmit and receive...

  11. Green Software Engineering Adaption In Requirement Elicitation Process

    Directory of Open Access Journals (Sweden)

    Umma Khatuna Jannat

    2015-08-01

    Full Text Available A recent line of work investigates the role of environmental concern in software, that is, green software systems. It is now widely accepted that green software can fit all processes of software development, and it is also suitable for the requirements elicitation process. Nowadays the large majority of software companies use requirements elicitation techniques, because this process plays an increasingly important role in software development. At present, most requirements elicitation processes are improved by using various techniques and tools. The intention of this research is therefore to adapt green software engineering to existing elicitation techniques and to recommend suitable actions for improvement. This research involved qualitative data. A few keywords were used in the search procedure across IEEE, ACM, Springer, Elsevier, Google Scholar, Scopus and Wiley, covering articles published between 2010 and 2016. The literature review identified 15 traditional requirements elicitation factors and 23 improvement techniques for conversion to green engineering. Lastly, the paper includes a short review of the literature, a description of the grounded theory, and some of the identified issues related to the need for requirements elicitation improvement techniques.

  12. Study and Design of Differential Microphone Arrays

    CERN Document Server

    Benesty, Jacob

    2013-01-01

    Microphone arrays have attracted a lot of interest over the last few decades since they have the potential to solve many important problems such as noise reduction/speech enhancement, source separation, dereverberation, spatial sound recording, and source localization/tracking, to name a few. However, the design and implementation of microphone arrays with beamforming algorithms is not a trivial task when it comes to processing broadband signals such as speech. Indeed, in most sensor arrangements, the beamformer tends to have a frequency-dependent response. One exception, perhaps, is the family of differential microphone arrays (DMAs) that have the promise to form frequency-independent responses. Moreover, they have the potential to attain high directional gains with small and compact apertures. As a result, this type of microphone arrays has drawn much research and development attention recently. This book is intended to provide a systematic study of DMAs from a signal processing perspective. The primary obj...
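    To make the frequency-independence property concrete, the sketch below computes the normalized magnitude response of a two-element delay-and-subtract first-order differential array (a cardioid when the electrical delay equals the propagation delay across the spacing); the spacing and frequencies are illustrative and the design methods of the book are not reproduced.

        import numpy as np

        def first_order_dma_response(theta_deg, freq_hz, spacing_m=0.01, c=343.0):
            # Two-element delay-and-subtract DMA: y = x1 - delay(x2, tau). With
            # tau = spacing/c the null sits at 180 degrees and the normalized pattern
            # is nearly frequency independent while spacing << wavelength.
            theta = np.radians(theta_deg)
            omega = 2 * np.pi * freq_hz
            tau_prop = spacing_m * np.cos(theta) / c          # inter-element propagation delay
            tau_el = spacing_m / c                            # electrical delay on the rear element
            h = 1.0 - np.exp(-1j * omega * (tau_prop + tau_el))
            h_axis = 1.0 - np.exp(-1j * omega * 2 * tau_el)   # on-axis response for normalization
            return np.abs(h / h_axis)

        if __name__ == "__main__":
            angles = np.arange(0, 181, 30)
            for f in (500.0, 2000.0, 4000.0):
                print(f, np.round(first_order_dma_response(angles, f), 2))  # similar patterns at all f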

  13. Seismometer array station processors

    International Nuclear Information System (INIS)

    Key, F.A.; Lea, T.G.; Douglas, A.

    1977-01-01

    A description is given of the design, construction and initial testing of two types of Seismometer Array Station Processor (SASP), one to work with data stored on magnetic tape in analogue form, the other with data in digital form. The purpose of a SASP is to detect the short period P waves recorded by a UK-type array of 20 seismometers and to edit these onto a digital library tape or disc. The edited data are then processed to obtain a rough location for the source and to produce seismograms (after optimum processing) for analysis by a seismologist. SASPs are an important component in the scheme for monitoring underground explosions advocated by the UK in the Conference of the Committee on Disarmament. With digital input a SASP can operate at 30 times real time using a linear detection process and at 20 times real time using the log detector of Weichert. Although the log detector is slower, it has the advantage over the linear detector that signals with lower signal-to-noise ratio can be detected and spurious large amplitudes are less likely to produce a detection. It is recommended, therefore, that where possible array data should be recorded in digital form for input to a SASP and that the log detector of Weichert be used. Trial runs show that a SASP is capable of detecting signals down to signal-to-noise ratios of about two with very few false detections, and at mid-continental array sites it should be capable of detecting most, if not all, the signals with magnitude above mb 4.5; the UK argues that, given a suitable network, it is realistic to hope that sources of this magnitude and above can be detected and identified by seismological means alone. (author)

  14. Self-adaptive Green-Ampt infiltration parameters obtained from measured moisture processes

    Directory of Open Access Journals (Sweden)

    Long Xiang

    2016-07-01

    Full Text Available The Green-Ampt (G-A) infiltration model (i.e., the G-A model) is often used to characterize the infiltration process in hydrology. The parameters of the G-A model are critical in applications for the prediction of infiltration and associated rainfall-runoff processes. Previous approaches to determining the G-A parameters have depended on pedotransfer functions (PTFs) or estimates from experimental results, usually without providing optimum values. In this study, rainfall simulators with soil moisture measurements were used to generate rainfall in various experimental plots. Observed runoff data and soil moisture dynamic data were jointly used to yield the infiltration processes, and an improved self-adaptive method was used to optimize the G-A parameters for various types of soil under different rainfall conditions. The two G-A parameters, i.e., the effective hydraulic conductivity and the effective capillary drive at the wetting front, were determined simultaneously to describe the relationships between rainfall, runoff, and infiltration processes. Through a designed experiment, the method for determining the G-A parameters was proved to be reliable in reflecting the effects of pedologic background in G-A type infiltration cases and deriving the optimum G-A parameters. Unlike PTF methods, this approach estimates the G-A parameters directly from infiltration curves obtained from rainfall simulation experiments so that it can be used to determine site-specific parameters. This study provides a self-adaptive method of optimizing the G-A parameters through designed field experiments. The parameters derived from field-measured rainfall-infiltration processes are more reliable and applicable to hydrological models.
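    For reference, the two parameters being optimized enter the G-A model as shown in the sketch below, which evaluates the implicit cumulative-infiltration relation and the infiltration capacity; the self-adaptive optimization of the paper is not reproduced and the parameter values are illustrative.

        import numpy as np

        def green_ampt_rate(F, K, psi_dtheta):
            # Infiltration capacity f = K * (1 + psi*dtheta / F), with F the cumulative
            # infiltration, K the effective hydraulic conductivity and psi*dtheta the
            # effective capillary drive times the moisture deficit.
            return K * (1.0 + psi_dtheta / F)

        def green_ampt_cumulative(t, K, psi_dtheta, n_iter=50):
            # Solve the implicit relation F = K*t + psi_dtheta*ln(1 + F/psi_dtheta)
            # by fixed-point iteration (a contraction for these monotone terms).
            F = np.maximum(K * t, 1e-9)
            for _ in range(n_iter):
                F = K * t + psi_dtheta * np.log1p(F / psi_dtheta)
            return F

        if __name__ == "__main__":
            K, psi_dtheta = 1.0, 4.0                    # illustrative, roughly loam-like (cm/h, cm)
            t = np.array([0.25, 0.5, 1.0, 2.0, 4.0])    # hours since ponding
            F = green_ampt_cumulative(t, K, psi_dtheta)
            f = green_ampt_rate(F, K, psi_dtheta)
            for ti, Fi, fi in zip(t, F, f):
                print(f"t={ti:4.2f} h  F={Fi:5.2f} cm  f={fi:5.2f} cm/h")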

  15. The process of adapting a universal dating abuse prevention program to adolescents exposed to domestic violence.

    Science.gov (United States)

    Foshee, Vangie A; Dixon, Kimberly S; Ennett, Susan T; Moracco, Kathryn E; Bowling, J Michael; Chang, Ling-Yin; Moss, Jennifer L

    2015-07-01

    Adolescents exposed to domestic violence are at increased risk of dating abuse, yet no evaluated dating abuse prevention programs have been designed specifically for this high-risk population. This article describes the process of adapting Families for Safe Dates (FSD), an evidenced-based universal dating abuse prevention program, to this high-risk population, including conducting 12 focus groups and 107 interviews with the target audience. FSD includes six booklets of dating abuse prevention information, and activities for parents and adolescents to do together at home. We adapted FSD for mothers who were victims of domestic violence, but who no longer lived with the abuser, to do with their adolescents who had been exposed to the violence. Through the adaptation process, we learned that families liked the program structure and valued being offered the program and that some of our initial assumptions about this population were incorrect. We identified practices and beliefs of mother victims and attributes of these adolescents that might increase their risk of dating abuse that we had not previously considered. In addition, we learned that some of the content of the original program generated negative family interactions for some. The findings demonstrate the utility of using a careful process to adapt evidence-based interventions (EBIs) to cultural sub-groups, particularly the importance of obtaining feedback on the program from the target audience. Others can follow this process to adapt EBIs to groups other than the ones for which the original EBI was designed. © The Author(s) 2014.

  16. Development and testing of methods for adaptive image processing in odontology and medicine

    International Nuclear Information System (INIS)

    Sund, Torbjoern

    2005-01-01

    Medical diagnostic imaging has undergone radical changes during the last ten years. In the early 1990'ies, the medical imaging department was almost exclusively film-based. Today, all major hospitals have converted to digital acquisition and handling of their diagnostic imaging, or are in the process of conversion. It is therefore important to investigate whether diagnostic reading of digitally acquired images on computer display screens can match or even surpass film recording and viewing. At the same time, the digitalisation opens new possibilities for image processing, which may challenge the traditional way of studying medical images. The current work explores some of the possibilities of digital processing techniques, and evaluates the results both by quantitative methods (ROC analysis) and by subjective qualification by real users. Summary of papers: Paper I: Locally adaptive image binarization with a sliding window threshold was used for the detection of bone ridges in radiotherapy portal images. A new thresholding criterion suitable for incremental update within the sliding window was developed, and it was shown that the algorithm gave better results on difficult portal images than various publicly available adaptive thresholding routines. For small windows the routine was also faster than an adaptive implementation of the Otsu algorithm that uses interpolation between fixed tiles, and the resulting images had equal quality. Paper II: It was investigated whether contrast enhancement by non-interactive, sliding window adaptive histogram equalization could enhance the diagnostic quality of intra-oral radiographs in the dental clinic. Three dentists read 22 periapical and 12 bitewing storage phosphor (SP) radiographs. For the periapical readings they graded the quality of the examination with regard to visually locating the root apex. For the bitewing readings they registered all occurrences of approximal caries on a confidence scale. Each reading was first
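    Paper I's specific incremental thresholding criterion is not given in the abstract, but sliding-window adaptive binarization in general can be sketched as below (local mean from an integral image plus a fixed offset); the synthetic "radiograph" and all constants are assumptions.

        import numpy as np

        def adaptive_binarize(image, window=31, offset=0.05):
            # Locally adaptive binarization: each pixel is compared with the mean of a
            # window centred on it (O(1) per pixel via an integral image) plus an offset.
            # A generic stand-in for the thesis' sliding-window threshold criterion.
            img = image.astype(float)
            h, w = img.shape
            pad = window // 2
            padded = np.pad(img, pad + 1, mode="edge")
            integral = padded.cumsum(0).cumsum(1)
            y, x = np.mgrid[0:h, 0:w]
            y1, x1 = y + window, x + window
            window_sum = (integral[y1, x1] - integral[y, x1]
                          - integral[y1, x] + integral[y, x])
            local_mean = window_sum / (window * window)
            return img > local_mean + offset

        if __name__ == "__main__":
            rng = np.random.default_rng(6)
            # Synthetic image: a faint bright ridge on a strong illumination gradient.
            img = np.linspace(0.2, 0.8, 256)[None, :] * np.ones((256, 1))
            img[:, 120:126] += 0.15
            img += 0.02 * rng.standard_normal((256, 256))
            mask = adaptive_binarize(img)
            print("ridge pixels detected:", int(mask[:, 120:126].sum()), "of", 256 * 6)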

  17. Development and testing of methods for adaptive image processing in odontology and medicine

    Energy Technology Data Exchange (ETDEWEB)

    Sund, Torbjoern

    2005-07-01

    Medical diagnostic imaging has undergone radical changes during the last ten years. In the early 1990'ies, the medical imaging department was almost exclusively film-based. Today, all major hospitals have converted to digital acquisition and handling of their diagnostic imaging, or are in the process of conversion. It is therefore important to investigate whether diagnostic reading of digitally acquired images on computer display screens can match or even surpass film recording and viewing. At the same time, the digitalisation opens new possibilities for image processing, which may challenge the traditional way of studying medical images. The current work explores some of the possibilities of digital processing techniques, and evaluates the results both by quantitative methods (ROC analysis) and by subjective qualification by real users. Summary of papers: Paper I: Locally adaptive image binarization with a sliding window threshold was used for the detection of bone ridges in radiotherapy portal images. A new thresholding criterion suitable for incremental update within the sliding window was developed, and it was shown that the algorithm gave better results on difficult portal images than various publicly available adaptive thresholding routines. For small windows the routine was also faster than an adaptive implementation of the Otsu algorithm that uses interpolation between fixed tiles, and the resulting images had equal quality. Paper II: It was investigated whether contrast enhancement by non-interactive, sliding window adaptive histogram equalization could enhance the diagnostic quality of intra-oral radiographs in the dental clinic. Three dentists read 22 periapical and 12 bitewing storage phosphor (SP) radiographs. For the periapical readings they graded the quality of the examination with regard to visually locating the root apex. For the bitewing readings they registered all occurrences of approximal caries on a confidence scale. Each reading was

  18. Deformable wire array: fiber drawn tunable metamaterials

    DEFF Research Database (Denmark)

    Fleming, Simon; Stefani, Alessio; Tang, Xiaoli

    2017-01-01

    By fiber drawing we fabricate a wire array metamaterial, the structure of which can be actively modified. The plasma frequency can be tuned by 50% by compressing the metamaterial; recovers when released and the process can be repeated.

  19. A real-time regional adaptive exposure method for saving dose-area product in x-ray fluoroscopy

    International Nuclear Information System (INIS)

    Burion, Steve; Funk, Tobias; Speidel, Michael A.

    2013-01-01

    Purpose: Reduction of radiation dose in x-ray imaging has been recognized as a high priority in the medical community. Here the authors show that a regional adaptive exposure method can reduce dose-area product (DAP) in x-ray fluoroscopy. The authors' method is particularly geared toward providing dose savings for the pediatric population. Methods: The scanning beam digital x-ray system uses a large-area x-ray source with 8000 focal spots in combination with a small photon-counting detector. An imaging frame is obtained by acquiring and reconstructing up to 8000 detector images, each viewing only a small portion of the patient. Regional adaptive exposure was implemented by varying the exposure of the detector images depending on the local opacity of the object. A family of phantoms ranging in size from infant to obese adult was imaged in anteroposterior view with and without adaptive exposure. The DAP delivered to each phantom was measured in each case, and noise performance was compared by generating noise arrays to represent regional noise in the images. These noise arrays were generated by dividing the image into regions of about 6 mm², calculating the relative noise in each region, and placing the relative noise value of each region in a one-dimensional array (noise array) sorted from highest to lowest. Dose-area product savings were calculated as one minus the ratio of DAP with adaptive exposure to DAP without adaptive exposure. The authors modified this value by a correction factor that matches the noise arrays where relative noise is the highest to report a final dose-area product savings. Results: The average dose-area product saving across the phantom family was (42 ± 8)% with the highest dose-area product saving in the child-sized phantom (50%) and the lowest in the phantom mimicking an obese adult (23%). Conclusions: Phantom measurements indicate that a regional adaptive exposure method can produce large DAP savings without compromising
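    The noise-array construction described above is straightforward to reproduce in outline; the sketch below tiles an image into small regions, computes the relative noise in each, and sorts the values from highest to lowest. The region size and the synthetic Poisson-noise images are assumptions, not the authors' phantom data.

        import numpy as np

        def noise_array(image, region=16):
            # Tile the image, compute relative noise (std/mean) per tile, and return the
            # values sorted from highest to lowest, as described in the abstract.
            h, w = image.shape
            vals = []
            for y in range(0, h - region + 1, region):
                for x in range(0, w - region + 1, region):
                    tile = image[y:y + region, x:x + region].astype(float)
                    if tile.mean() > 0:
                        vals.append(tile.std() / tile.mean())
            return np.sort(vals)[::-1]

        if __name__ == "__main__":
            rng = np.random.default_rng(7)
            low = rng.poisson(50, size=(256, 256))      # synthetic low-exposure frame
            high = rng.poisson(200, size=(256, 256))    # synthetic high-exposure frame
            na_low, na_high = noise_array(low), noise_array(high)
            print("worst-region relative noise:", round(float(na_low[0]), 3),
                  "vs", round(float(na_high[0]), 3))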

  20. Active Microstructured Optical Arrays of Grazing Incidence Reflectors

    International Nuclear Information System (INIS)

    Willingale, R.; Feldman, Ch.; Michette, A.; Hart, D.; McFaul, Ch; Morrison, G.R.; Pfauntsch, S.; Powell, A.K.; Sahraei, Sh.; Shand, M.T.; Button, T.; Rodriguez-Sanmartin, D.; Zhang, D.; Dunare, C.; Parkes, W.; Stevenson, T.; Folkard, M.; Vojnovic, B.; Vojnovic, B.

    2011-01-01

    The UK Smart X-Ray Optics (SXO) programme is developing active/adaptive optics for terrestrial applications. One of the technologies proposed is microstructured optical arrays (MOAs), which focus X-rays using grazing incidence reflection through consecutive aligned arrays of microscopic channels. Although such arrays are similar in concept to polycapillary and microchannel plate optics, they can be bent and adjusted using piezoelectric actuators providing control over the focusing and inherent aberrations. Custom configurations can be designed, using ray tracing and finite element analysis, for applications from sub-keV to several-keV X-rays, and the channels of appropriate aspect ratios can be made using deep silicon etching. An exemplar application will be in the micro probing of biological cells and tissue samples using Ti Kα radiation (4.5 keV) in studies related to radiation-induced cancers. This paper discusses the optical design, modelling, and manufacture of such optics.

  1. The influence of negative stimulus features on conflict adaption: Evidence from fluency of processing

    Directory of Open Access Journals (Sweden)

    Julia eFritz

    2015-02-01

    Full Text Available Cognitive control enables adaptive behavior in a dynamically changing environment. In this context, one prominent adaptation effect is the sequential conflict adjustment, i.e. the observation of reduced response interference on trials following conflict trials. Increasing evidence suggests that such response conflicts are registered as aversive signals. So far, however, the functional role of this aversive signal for conflict adaptation to occur has not been put to test directly. In two experiments, the affective valence of conflict stimuli was manipulated by fluency of processing (stimulus contrast). Experiment 1 used a flanker interference task, Experiment 2 a color-word Stroop task. In both experiments, conflict adaptation effects were only present in fluent, but absent in disfluent trials. Results thus speak against the simple idea that any aversive stimulus feature is suited to promote specific conflict adjustments. Two alternative but not mutually exclusive accounts, namely resource competition and adaptation-by-motivation, will be discussed.

  2. Is adaptation. Truly an adaptation?

    Directory of Open Access Journals (Sweden)

    Thais Flores Nogueira Diniz

    2006-04-01

    Full Text Available The article begins by historicizing film adaptation from the arrival of cinema, pointing out the many theoretical approaches under which the process has been seen: from the concept of “the same story told in a different medium” to a comprehensible definition such as “the process through which works can be transformed, forming an intersection of textual surfaces, quotations, conflations and inversions of other texts”. To illustrate this new concept, the article discusses Spike Jonze’s film Adaptation. according to James Naremore’s proposal which considers the study of adaptation as part of a general theory of repetition, joined with the study of recycling, remaking, and every form of retelling. The film deals with the attempt by the scriptwriter Charles Kaufman, played by Nicolas Cage, to adapt/translate a non-fictional book to the cinema, but ends up with a kind of film which is by no means what it intended to be: a film of action in the model of Hollywood productions. During the process of creation, Charles and his twin brother, Donald, undergo a series of adventures involving some real persons from the world of film, the author and the protagonist of the book, all of them turning into fictional characters in the film. In the film, adaptation then signifies something different from its traditional meaning.

  3. Network measures for characterising team adaptation processes

    NARCIS (Netherlands)

    Barth, S.K.; Schraagen, J.M.C.; Schmettow, M.

    2015-01-01

    The aim of this study was to advance the conceptualisation of team adaptation by applying social network analysis (SNA) measures in a field study of a paediatric cardiac surgical team adapting to changes in task complexity and ongoing dynamic complexity. Forty surgical procedures were observed by

  4. Micromirror array nanostructures for anticounterfeiting applications

    Science.gov (United States)

    Lee, Robert A.

    2004-06-01

    The optical characteristics of pixellated passive micro mirror arrays are derived and applied in the context of their use as reflective optically variable device (OVD) nanostructures for the protection of documents from counterfeiting. The traditional design variables of foil based diffractive OVDs are shown to be able to be mapped to a corresponding set of design parameters for reflective optical micro mirror array (OMMA) devices. The greatly increased depth characteristics of micro mirror array OVDs provide an opportunity for directly printing the OVD microstructure onto the security document in-line with the normal printing process. The micro mirror array OVD architecture therefore eliminates the need for hot stamping foil as the carrier of the OVD information, thereby reducing costs. The origination of micro mirror array devices via a palette based data format and a combination of electron beam lithography and photolithography techniques is discussed via an artwork example and experimental tests. Finally, the application of the technology to the design of a generic class of devices which have the interesting property of allowing for both application and customer specific OVD image encoding and data encoding at the end user stage of production is described. Because of the end user nature of the image and data encoding process these devices are particularly well suited to ID document applications and for this reason we refer to this new OVD concept as biometric OVD technology.

  5. Lithographic manufacturing of adaptive optics components

    Science.gov (United States)

    Scott, R. Phillip; Jean, Madison; Johnson, Lee; Gatlin, Ridley; Bronson, Ryan; Milster, Tom; Hart, Michael

    2017-09-01

    Adaptive optics systems and their laboratory test environments call for a number of unusual optical components. Examples include lenslet arrays, pyramids, and Kolmogorov phase screens. Because of their specialized application, the availability of these parts is generally limited, with high cost and long lead time, which can also significantly drive optical system design. These concerns can be alleviated by a fast and inexpensive method of optical fabrication. To that end, we are exploring direct-write lithographic techniques to manufacture three different custom elements. We report results from a number of prototype devices including 1, 2, and 3 wave Multiple Order Diffractive (MOD) lenslet arrays with 0.75 mm pitch and phase screens with near Kolmogorov structure functions with a Fried length r0 around 1 mm. We also discuss plans to expand our research to include a diffractive pyramid that is smaller, lighter, and more easily manufactured than glass versions presently used in pyramid wavefront sensors. We describe how these components can be produced within the limited dynamic range of the lithographic process, and with a rapid prototyping and manufacturing cycle. We discuss exploratory manufacturing methods, including replication, and potential observing techniques enabled by the ready availability of custom components.
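
    For readers who want a numerical stand-in for such a phase screen, the usual FFT-filtering recipe is short. The sketch below shapes white Gaussian noise with the Kolmogorov phase spectrum 0.023 r0^(-5/3) f^(-11/3); the grid size, spacing, and r0 ≈ 1 mm are example values, and the code is a textbook recipe rather than anything to do with the authors' lithographic write process.

    ```python
    import numpy as np

    def kolmogorov_phase_screen(n=256, dx=1e-4, r0=1e-3, seed=0):
        """n x n phase screen (radians) with an approximately Kolmogorov structure function."""
        rng = np.random.default_rng(seed)
        df = 1.0 / (n * dx)                         # frequency grid spacing (1/m)
        fx = np.fft.fftfreq(n, d=dx)
        f = np.hypot(*np.meshgrid(fx, fx))
        f[0, 0] = np.inf                            # suppress the undefined piston term
        psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)
        cn = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        cn *= np.sqrt(psd) * df                     # shape the white noise by sqrt(PSD)
        return np.real(np.fft.ifft2(cn)) * n * n    # undo ifft2's 1/n^2 factor
    ```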

  6. Si Wire-Array Solar Cells

    Science.gov (United States)

    Boettcher, Shannon

    2010-03-01

    Micron-scale Si wire arrays are three-dimensional photovoltaic absorbers that enable orthogonalization of light absorption and carrier collection and hence allow for the utilization of relatively impure Si in efficient solar cell designs. The wire arrays are grown by a vapor-liquid-solid-catalyzed process on a crystalline (111) Si wafer lithographically patterned with an array of metal catalyst particles. Following growth, such arrays can be embedded in polydimethylsiloxane (PDMS) and then peeled from the template growth substrate. The result is an unusual photovoltaic material: a flexible, bendable, wafer-thickness crystalline Si absorber. In this paper I will describe: 1. the growth of high-quality Si wires with controllable doping and the evaluation of their photovoltaic energy-conversion performance using a test electrolyte that forms a rectifying conformal semiconductor-liquid contact; 2. the observation of enhanced absorption in wire arrays exceeding the conventional light trapping limits for planar Si cells of equivalent material thickness; and 3. single-wire and large-area solid-state Si wire-array solar cell results obtained to date with directions for future cell designs based on optical and device physics. In collaboration with Michael Kelzenberg, Morgan Putnam, Joshua Spurgeon, Daniel Turner-Evans, Emily Warren, Nathan Lewis, and Harry Atwater, California Institute of Technology.

  7. Compressive sensing-based electrostatic sensor array signal processing and exhausted abnormal debris detecting

    Science.gov (United States)

    Tang, Xin; Chen, Zhongsheng; Li, Yue; Yang, Yongmin

    2018-05-01

    When faults happen at gas path components of gas turbines, some sparsely-distributed and charged debris will be generated and released into the exhaust gas. The debris is called abnormal debris. Electrostatic sensors can detect the debris online and further indicate the faults. It is generally considered that, under a specific working condition, a more serious fault generates more and larger debris, and a piece of larger debris carries more charge. Therefore, the amount and charge of the abnormal debris are important indicators of the fault severity. However, because an electrostatic sensor can only detect the superposed effect on the electrostatic field of all the debris, it can hardly identify the amount and position of the debris. Moreover, because signals of electrostatic sensors depend on not only charge but also position of debris, and the position information is difficult to acquire, measuring debris charge accurately using the electrostatic detecting method is still a technical difficulty. To solve these problems, a hemisphere-shaped electrostatic sensors' circular array (HSESCA) is used, and an array signal processing method based on compressive sensing (CS) is proposed in this paper. To work within the theoretical framework of CS, the measurement model of the HSESCA is discretized into a sparse representation form by meshing. In this way, the amount and charge of the abnormal debris are described as a sparse vector. It is further reconstructed by constraining the l1-norm when solving an underdetermined equation. In addition, a pre-processing method based on singular value decomposition and a result calibration method based on a weighted-centroid algorithm are applied to ensure the accuracy of the reconstruction. The proposed method is validated by both numerical simulations and experiments. Reconstruction errors, characteristics of the results and some related factors are discussed.
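
    The reconstruction step (recovering a sparse charge vector from an underdetermined measurement) can be illustrated with a generic l1 solver. The sketch below uses plain ISTA on a stand-in sensing matrix A; it is not the authors' pipeline, which additionally applies the SVD pre-processing and weighted-centroid calibration mentioned above.

    ```python
    import numpy as np

    def ista_l1(A, y, lam=0.01, n_iter=500):
        """Minimise 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft thresholding."""
        x = np.zeros(A.shape[1])
        t = 1.0 / np.linalg.norm(A, 2) ** 2          # step size 1/L with L = ||A||_2^2
        for _ in range(n_iter):
            z = x - t * (A.T @ (A @ x - y))          # gradient step on the quadratic term
            x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)   # soft threshold
        return x
    ```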

  8. Microfabricated hollow microneedle array using ICP etcher

    Science.gov (United States)

    Ji, Jing; Tay, Francis E. H.; Miao, Jianmin

    2006-04-01

    This paper presents a developed process for fabrication of hollow silicon microneedle arrays. The inner hollow hole and the fluidic reservoir are fabricated in deep reactive ion etching. The profile of outside needles is achieved by the developed fabrication process, which combined isotropic etching and anisotropic etching with inductively coupled plasma (ICP) etcher. Using the combination of SF6/O2 isotropic etching chemistry and Bosch process, the high aspect ratio 3D and high density microneedle arrays are fabricated. The generated needle external geometry can be controlled by etching variables in the isotropic and anisotropic cases.

  9. Microfabricated hollow microneedle array using ICP etcher

    International Nuclear Information System (INIS)

    Ji Jing; Tay, Francis E H; Miao Jianmin

    2006-01-01

    This paper presents a developed process for fabrication of hollow silicon microneedle arrays. The inner hollow hole and the fluidic reservoir are fabricated in deep reactive ion etching. The profile of outside needles is achieved by the developed fabrication process, which combined isotropic etching and anisotropic etching with inductively coupled plasma (ICP) etcher. Using the combination of SF6/O2 isotropic etching chemistry and Bosch process, the high aspect ratio 3D and high density microneedle arrays are fabricated. The generated needle external geometry can be controlled by etching variables in the isotropic and anisotropic cases

  10. Microfabricated hollow microneedle array using ICP etcher

    Energy Technology Data Exchange (ETDEWEB)

    Ji Jing [Mechanical Engineering National University of Singapore, 119260, Singapore (Singapore); Tay, Francis E H [Mechanical Engineering National University of Singapore, 119260, Singapore (Singapore); Miao Jianmin [MicroMachines Center, School of Mechanical and Aerospace Engineering, Nanyang Technological University, 50 Nanyang Avenue, 639798 (Singapore)

    2006-04-01

    This paper presents a developed process for fabrication of hollow silicon microneedle arrays. The inner hollow hole and the fluidic reservoir are fabricated in deep reactive ion etching. The profile of outside needles is achieved by the developed fabrication process, which combined isotropic etching and anisotropic etching with inductively coupled plasma (ICP) etcher. Using the combination of SF6/O2 isotropic etching chemistry and Bosch process, the high aspect ratio 3D and high density microneedle arrays are fabricated. The generated needle external geometry can be controlled by etching variables in the isotropic and anisotropic cases.

  11. Full image-processing pipeline in field-programmable gate array for a small endoscopic camera

    Science.gov (United States)

    Mostafa, Sheikh Shanawaz; Sousa, L. Natércia; Ferreira, Nuno Fábio; Sousa, Ricardo M.; Santos, Joao; Wäny, Martin; Morgado-Dias, F.

    2017-01-01

    Endoscopy is an imaging procedure used for diagnosis as well as for some surgical purposes. The camera used for the endoscopy should be small and able to produce a good quality image or video, to reduce discomfort of the patients, and to increase the efficiency of the medical team. To achieve these fundamental goals, a small endoscopy camera with a footprint of 1 mm×1 mm×1.65 mm is used. Due to the physical properties of the sensors and human vision system limitations, different image-processing algorithms, such as noise reduction, demosaicking, and gamma correction, among others, are needed to faithfully reproduce the image or video. A full image-processing pipeline is implemented using a field-programmable gate array (FPGA) to accomplish a high frame rate of 60 fps with minimum processing delay. Along with this, a viewer has also been developed to display and control the image-processing pipeline. The control and data transfer are done by a USB 3.0 end point in the computer. The full developed system achieves real-time processing of the image and fits in a Xilinx Spartan-6LX150 FPGA.
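
    A software model of such a pipeline helps make the stage ordering concrete. The sketch below is a NumPy/SciPy reference (denoise, bilinear demosaic of an assumed RGGB Bayer pattern, gamma correction), not the FPGA implementation described in the paper; the black level and gamma value are placeholders.

    ```python
    import numpy as np
    from scipy.ndimage import convolve, median_filter

    def software_isp(raw, black_level=64, gamma=2.2):
        """Denoise -> demosaic (RGGB assumed) -> gamma-correct a raw Bayer frame."""
        img = np.clip(raw.astype(float) - black_level, 0, None)
        img = median_filter(img, size=3)                     # simple noise reduction
        h, w = img.shape
        masks = np.zeros((h, w, 3), dtype=bool)
        masks[0::2, 0::2, 0] = True                          # R sites
        masks[0::2, 1::2, 1] = masks[1::2, 0::2, 1] = True   # G sites
        masks[1::2, 1::2, 2] = True                          # B sites
        k = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]])
        rgb = np.zeros((h, w, 3))
        for c in range(3):                                   # normalized-convolution demosaic
            num = convolve(np.where(masks[..., c], img, 0.0), k, mode="mirror")
            den = convolve(masks[..., c].astype(float), k, mode="mirror")
            rgb[..., c] = num / np.maximum(den, 1e-6)
        rgb /= max(rgb.max(), 1e-6)
        return rgb ** (1.0 / gamma)                          # gamma correction
    ```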

  12. Effects of practice schedule and task specificity on the adaptive process of motor learning.

    Science.gov (United States)

    Barros, João Augusto de Camargo; Tani, Go; Corrêa, Umberto Cesar

    2017-10-01

    This study investigated the effects of practice schedule and task specificity based on the perspective of the adaptive process of motor learning. For this purpose, tasks with temporal and force control learning requirements were manipulated in experiments 1 and 2, respectively. Specifically, the task consisted of touching with the dominant hand the three sequential targets with specific movement time or force for each touch. Participants were children (N=120), both boys and girls, with an average age of 11.2 years (SD=1.0). The design in both experiments involved four practice groups (constant, random, constant-random, and random-constant) and two phases (stabilisation and adaptation). The dependent variables included measures related to the task goal (accuracy and variability of error of the overall movement and force patterns) and movement pattern (macro- and microstructures). Results revealed a similar error of the overall patterns for all groups in both experiments and that they adapted themselves differently in terms of the macro- and microstructures of movement patterns. The study concludes that the effects of practice schedules on the adaptive process of motor learning were both general and specific to the task. That is, they were general to the task goal performance and specific regarding the movement pattern. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Configuration Considerations for Low Frequency Arrays

    Science.gov (United States)

    Lonsdale, C. J.

    2005-12-01

    The advance of digital signal processing capabilities has spurred a new effort to exploit the lowest radio frequencies observable from the ground, from ˜10 MHz to a few hundred MHz. Multiple scientifically and technically complementary instruments are planned, including the Mileura Widefield Array (MWA) in the 80-300 MHz range, and the Long Wavelength Array (LWA) in the 20-80 MHz range. The latter instrument will target relatively high angular resolution, and baselines up to a few hundred km. An important practical question for the design of such an array is how to distribute the collecting area on the ground. The answer to this question profoundly affects both cost and performance. In this contribution, the factors which determine the anticipated performance of any such array are examined, paying particular attention to the viability and accuracy of array calibration. It is argued that due to the severity of ionospheric effects in particular, it will be difficult or impossible to achieve routine, high dynamic range imaging with a geographically large low frequency array, unless a large number of physically separate array stations is built. This conclusion is general, is based on the need for adequate sampling of ionospheric irregularities, and is independent of the calibration algorithms and techniques that might be employed. It is further argued that array configuration figures of merit that are traditionally used for higher frequency arrays are inappropriate, and a different set of criteria are proposed.

  14. Improved chemical identification from sensor arrays using intelligent algorithms

    Science.gov (United States)

    Roppel, Thaddeus A.; Wilson, Denise M.

    2001-02-01

    Intelligent signal processing algorithms are shown to improve identification rates significantly in chemical sensor arrays. This paper focuses on the use of independently derived sensor status information to modify the processing of sensor array data by using a fast, easily-implemented "best-match" approach to filling in missing sensor data. Most fault conditions of interest (e.g., stuck high, stuck low, sudden jumps, excess noise, etc.) can be detected relatively simply by adjunct data processing, or by on-board circuitry. The objective then is to devise, implement, and test methods for using this information to improve the identification rates in the presence of faulted sensors. In one typical example studied, utilizing separately derived, a-priori knowledge about the health of the sensors in the array improved the chemical identification rate by an artificial neural network from below 10 percent correct to over 99 percent correct. While this study focuses experimentally on chemical sensor arrays, the results are readily extensible to other types of sensor platforms.
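
    The "best-match" fill-in described above amounts to a nearest-neighbour lookup over the healthy channels. The sketch below is a minimal version of that idea (the function name, library layout, and Euclidean distance are illustrative choices, not taken from the paper): the library row closest to the working sensors supplies values for the flagged ones before classification.

    ```python
    import numpy as np

    def best_match_fill(reading, faulty_mask, library):
        """Replace faulted sensor readings with values from the best-matching library pattern."""
        good = ~faulty_mask
        d = np.linalg.norm(library[:, good] - reading[good], axis=1)   # match on healthy sensors
        best = library[np.argmin(d)]
        filled = reading.copy()
        filled[faulty_mask] = best[faulty_mask]                        # fill in the bad channels
        return filled
    ```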

  15. Placental adaptations to the maternal-fetal environment: implications for fetal growth and developmental programming.

    Science.gov (United States)

    Sandovici, Ionel; Hoelle, Katharina; Angiolini, Emily; Constância, Miguel

    2012-07-01

    The placenta is a transient organ found in eutherian mammals that evolved primarily to provide nutrients for the developing fetus. The placenta exchanges a wide array of nutrients, endocrine signals, cytokines and growth factors with the mother and the fetus, thereby regulating intrauterine development. Recent studies show that the placenta is not just a passive organ mediating maternal-fetal exchange. It can adapt its capacity to supply nutrients in response to intrinsic and extrinsic variations in the maternal-fetal environment. These dynamic adaptations are thought to occur to maximize fetal growth and viability at birth in the prevailing conditions in utero. However, some of these adaptations may also affect the development of individual fetal tissues, with patho-physiological consequences long after birth. Here, this review summarizes current knowledge on the causes, possible mechanisms and consequences of placental adaptive responses, with a focus on the regulation of transporter-mediated processes for nutrients. This review also highlights the emerging roles that imprinted genes and epigenetic mechanisms of gene regulation may play in placental adaptations to the maternal-fetal environment. Copyright © 2012 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.

  16. Functional imaging of numerical processing in adults and 4-y-old children.

    Directory of Open Access Journals (Sweden)

    Jessica F Cantlon

    2006-05-01

    Full Text Available Adult humans, infants, pre-school children, and non-human animals appear to share a system of approximate numerical processing for non-symbolic stimuli such as arrays of dots or sequences of tones. Behavioral studies of adult humans implicate a link between these non-symbolic numerical abilities and symbolic numerical processing (e.g., similar distance effects in accuracy and reaction-time for arrays of dots and Arabic numerals). However, neuroimaging studies have remained inconclusive on the neural basis of this link. The intraparietal sulcus (IPS) is known to respond selectively to symbolic numerical stimuli such as Arabic numerals. Recent studies, however, have arrived at conflicting conclusions regarding the role of the IPS in processing non-symbolic, numerosity arrays in adulthood, and very little is known about the brain basis of numerical processing early in development. Addressing the question of whether there is an early-developing neural basis for abstract numerical processing is essential for understanding the cognitive origins of our uniquely human capacity for math and science. Using functional magnetic resonance imaging (fMRI) at 4-Tesla and an event-related fMRI adaptation paradigm, we found that adults showed a greater IPS response to visual arrays that deviated from standard stimuli in their number of elements, than to stimuli that deviated in local element shape. These results support previous claims that there is a neurophysiological link between non-symbolic and symbolic numerical processing in adulthood. In parallel, we tested 4-y-old children with the same fMRI adaptation paradigm as adults to determine whether the neural locus of non-symbolic numerical activity in adults shows continuity in function over development. We found that the IPS responded to numerical deviants similarly in 4-y-old children and adults. To our knowledge, this is the first evidence that the neural locus of adult numerical cognition takes form early in

  17. Prism adaptation does not alter configural processing of faces [v1; ref status: indexed, http://f1000r.es/1wk

    Directory of Open Access Journals (Sweden)

    Janet H. Bultitude

    2013-10-01

    Full Text Available Patients with hemispatial neglect (‘neglect’) following a brain lesion show difficulty responding or orienting to objects and events on the left side of space. Substantial evidence supports the use of a sensorimotor training technique called prism adaptation as a treatment for neglect. Reaching for visual targets viewed through prismatic lenses that induce a rightward shift in the visual image results in a leftward recalibration of reaching movements that is accompanied by a reduction of symptoms in patients with neglect. The understanding of prism adaptation has also been advanced through studies of healthy participants, in whom adaptation to leftward prismatic shifts results in temporary neglect-like performance. Interestingly, prism adaptation can also alter aspects of non-lateralised spatial attention. We previously demonstrated that prism adaptation alters the extent to which neglect patients and healthy participants process local features versus global configurations of visual stimuli. Since deficits in non-lateralised spatial attention are thought to contribute to the severity of neglect symptoms, it is possible that the effect of prism adaptation on these deficits contributes to its efficacy. This study examines the pervasiveness of the effects of prism adaptation on perception by examining the effect of prism adaptation on configural face processing using a composite face task. The composite face task is a persuasive demonstration of the automatic global-level processing of faces: the top and bottom halves of two familiar faces form a seemingly new, unknown face when viewed together. Participants identified the top or bottom halves of composite faces before and after prism adaptation. Sensorimotor adaptation was confirmed by a significant pointing aftereffect, however there was no significant change in the extent to which the irrelevant face half interfered with processing. The results support the proposal that the therapeutic effects

  18. Adaptive Fault Detection for Complex Dynamic Processes Based on JIT Updated Data Set

    Directory of Open Access Journals (Sweden)

    Jinna Li

    2012-01-01

    Full Text Available A novel fault detection technique is proposed to explicitly account for the nonlinear, dynamic, and multimodal problems existing in practical and complex dynamic processes. A just-in-time (JIT) detection method and a k-nearest neighbor (KNN) rule-based statistical process control (SPC) approach are integrated to construct a flexible and adaptive detection scheme for the control process with nonlinear, dynamic, and multimodal cases. Mahalanobis distance, representing the correlation among samples, is used to simplify and update the raw data set, which is the first merit of this paper. Based on it, the control limit is computed in terms of both the KNN rule and the SPC method, such that we can identify online whether the current data are normal or not. Note that the control limit changes as the database is updated, so that an adaptive fault detection technique is obtained which can effectively eliminate the impact of data drift and shift on the performance of the detection process; this is the second merit of this paper. The efficiency of the developed method is demonstrated by numerical examples and an industrial case.
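
    Stripped of the just-in-time updating, the core detection rule can be sketched in a few lines: the statistic is a sample's mean distance to its k nearest reference samples, and the control limit is an empirical quantile of that statistic over the normal reference data. This is a simplified illustration only; the paper's Mahalanobis-based set simplification and online updating are not shown.

    ```python
    import numpy as np

    def knn_spc_detect(train, x_new, k=5, alpha=0.99):
        """Flag x_new as faulty if its mean k-NN distance exceeds the alpha-quantile limit."""
        def stat(x, ref):
            return np.sort(np.linalg.norm(ref - x, axis=1))[:k].mean()
        # Leave-one-out statistics over the normal training set define the control limit.
        limit = np.quantile([stat(train[i], np.delete(train, i, axis=0))
                             for i in range(len(train))], alpha)
        return stat(x_new, train) > limit, limit
    ```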

  19. HIV-1 Adaptation to Antigen Processing Results in Population-Level Immune Evasion and Affects Subtype Diversification

    DEFF Research Database (Denmark)

    Tenzer, Stefan; Crawford, Hayley; Pymm, Phillip

    2014-01-01

    these regions encode epitopes presented by ~30 more common HLA variants. By combining epitope processing and computational analyses of the two HIV subtypes responsible for ~60% of worldwide infections, we identified a hitherto unrecognized adaptation to the antigen-processing machinery through substitutions...... of intrapatient adaptations, is predictable, facilitates viral subtype diversification, and increases global HIV diversity. Because low epitope abundance is associated with infrequent and weak T cell responses, this most likely results in both population-level immune evasion and inadequate responses in most...

  20. Parents of children with cerebral palsy : a review of factors related to the process of adaptation

    NARCIS (Netherlands)

    Rentinck, I. C. M.; Ketelaar, M.; Jongmans, M. J.; Gorter, J. W.

    Background Little is known about the way parents adapt to the situation when their child is diagnosed with cerebral palsy. Methods A literature search was performed to gain a deeper insight in the process of adaptation of parents with a child with cerebral palsy and on factors related to this

  1. Adapting the transtheoretical model of change to the bereavement process.

    Science.gov (United States)

    Calderwood, Kimberly A

    2011-04-01

    Theorists currently believe that bereaved people undergo some transformation of self rather than returning to their original state. To advance our understanding of this process, this article presents an adaptation of Prochaska and DiClemente's transtheoretical model of change as it could be applied to the journey that bereaved individuals experience. This theory is unique because it addresses attitudes, intentions, and behavioral processes at each stage; it allows for a focus on a broader range of emotions than just anger and depression; it allows for the recognition of two periods of regression during the bereavement process; and it adds a maintenance stage, which other theories lack. This theory can benefit bereaved individuals directly and through the increased awareness among counselors, family, friends, employers, and society at large. This theory may also be used as a tool for bereavement programs to consider whether they are meeting clients' needs throughout the transformation change bereavement process rather than only focusing on the initial stages characterized by intense emotion.

  2. Improvement of detection of stress corrosion cracks with ultrasonic phased array probes

    International Nuclear Information System (INIS)

    Wustenberg, H.; Mohrle, W.; Wegner, W.; Schenk, G.; Erhard, A.

    1986-01-01

    Probes with linear arrays can be used for the detection of stress corrosion cracks especially if the variability of the sound field is used to change the skewing angle of angle beam probes. The phased array concept can be used to produce a variable skewing angle or a variable angle of incidence depending on the orientation of the linear array on the wedge. This helps to adapt the direction of the ultrasonic beam to probable crack orientations. It has been demonstrated with artificial reflectors as well as with corrosion cracks, that the detection of misoriented cracks can be improved by this approach. The experiences gained during the investigations are encouraging the application of phased array probes for stress corrosion phenomena close to the heat-affected zone of welds. Probes with variable skewing angles may find some interesting applications on welds in tubular structures, e.g., at offshore constructions and on some difficult geometries within the primary circuit of nuclear power plants
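
    The variable skewing angle mentioned above comes down to a linear delay law across the array elements. The sketch below computes those per-element firing delays for a simple linear array in a homogeneous medium; the default wave speed is a rough shear-wave value for steel, and wedge refraction and focusing terms are deliberately left out, so this is an illustration rather than the probes' actual delay laws.

    ```python
    import numpy as np

    def steering_delays_us(n_elements, pitch_mm, steer_deg, c_mm_per_us=3.2):
        """Per-element firing delays (µs) that steer a linear array by steer_deg."""
        tau = np.arange(n_elements) * pitch_mm * np.sin(np.radians(steer_deg)) / c_mm_per_us
        return tau - tau.min()          # shift so the earliest element fires at t = 0

    # Example: 16 elements at 0.6 mm pitch, steered by 20 degrees.
    print(steering_delays_us(16, 0.6, 20.0))
    ```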

  3. Next-Generation Microshutter Arrays for Large-Format Imaging and Spectroscopy

    Science.gov (United States)

    Moseley, Samuel; Kutyrev, Alexander; Brown, Ari; Li, Mary

    2012-01-01

    A next-generation microshutter array, LArge Microshutter Array (LAMA), was developed as a multi-object field selector. LAMA consists of small-scaled microshutter arrays that can be combined to form large-scale microshutter array mosaics. Microshutter actuation is accomplished via electrostatic attraction between the shutter and a counter electrode, and 2D addressing can be accomplished by applying an electrostatic potential between a row of shutters and a column, orthogonal to the row, of counter electrodes. Microelectromechanical system (MEMS) technology is used to fabricate the microshutter arrays. The main feature of the microshutter device is to use a set of standard surface micromachining processes for device fabrication. Electrostatic actuation is used to eliminate the need for macromechanical magnet actuating components. A simplified electrostatic actuation with no macro components (e.g. moving magnets) required for actuation and latching of the shutters will make the microshutter arrays robust and less prone to mechanical failure. Smaller-size individual arrays will help to increase the yield and thus reduce the cost and improve robustness of the fabrication process. Reducing the size of the individual shutter array to about one square inch and building the large-scale mosaics by tiling these smaller-size arrays would further help to reduce the cost of the device due to the higher yield of smaller devices. The LAMA development is based on prior experience acquired while developing microshutter arrays for the James Webb Space Telescope (JWST), but it will have different features. The LAMA modular design permits large-format mosaicking to cover a field of view at least 50 times larger than JWST MSA. The LAMA electrostatic, instead of magnetic, actuation enables operation cycles at least 100 times faster and a mass significantly smaller compared to JWST MSA. Also, standard surface micromachining technology will simplify the fabrication process, increasing

  4. Preliminary Investigation of Transmedia Narratives and the Process of Narrative Brand Expansion: Transmedia Adaptation in Picturebooks

    Directory of Open Access Journals (Sweden)

    Yu-Chai Lai

    2016-01-01

    Full Text Available Transmedia narrators can use the intermediacy of images and text as a foundation to develop story networks. These narrators can also use various forms of technology to recreate a variety of aesthetic responses in readers. In this study, we analyzed the narrative strategies of adaptation in examples of transmedia adaptation among winners of international picture book awards. In artistic terms, the horizons of expectation of adapters, the readers of fiction, and the inviting structures extended from intermediacy play key roles in aesthetic communication. How adapters use the materials of intermediacy as filler or to expand on negative speculation also influences the relaying process. In this study, we clarified that in addition to considering aesthetic judgments, adaptation must also adhere to the economy of aesthetics.

  5. Uniform illumination rendering using an array of LEDs: a signal processing perspective

    OpenAIRE

    Yang, Hongming; Bergmans, J.W.M.; Schenk, T.C.W.; Linnartz, J.P.M.G.; Rietman, R.

    2009-01-01

    An array of a large number of LEDs will be widely used in future indoor illumination systems. In this paper, we investigate the problem of rendering uniform illumination by a regular LED array on the ceiling of a room. We first present two general results on the scaling property of the basic illumination pattern, i.e., the light pattern of a single LED, and the setting of LED illumination levels, respectively. Thereafter, we propose to use the relative mean squared error as the cost function ...
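
    Although the abstract is truncated, the cost function it names is straightforward to write down: render the illuminance produced by all LEDs on a floor grid and take the mean squared deviation from the mean, normalised by the squared mean. The sketch below assumes a Lambertian single-LED pattern of order m at mounting height h; the room size, grid resolution, and LED layout are example values, not the paper's.

    ```python
    import numpy as np

    def relative_mse(dim_levels, led_xy, h=2.5, m=1, grid_n=50, room=(4.0, 4.0)):
        """Relative mean-squared error of the floor illuminance w.r.t. a uniform field."""
        xs, ys = np.linspace(0, room[0], grid_n), np.linspace(0, room[1], grid_n)
        X, Y = np.meshgrid(xs, ys)
        E = np.zeros_like(X)
        for level, (lx, ly) in zip(dim_levels, led_xy):
            d2 = (X - lx) ** 2 + (Y - ly) ** 2 + h ** 2
            E += level * h ** (m + 1) / d2 ** ((m + 3) / 2.0)   # Lambertian LED pattern
        return np.mean((E - E.mean()) ** 2) / E.mean() ** 2

    # Example: a regular 4 x 4 LED grid, all LEDs driven equally.
    grid = [(x, y) for x in np.linspace(0.5, 3.5, 4) for y in np.linspace(0.5, 3.5, 4)]
    print(relative_mse(np.ones(16), grid))
    ```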

  6. Automated installation methods for photovoltaic arrays

    Science.gov (United States)

    Briggs, R.; Daniels, A.; Greenaway, R.; Oster, J., Jr.; Racki, D.; Stoeltzing, R.

    1982-11-01

    Since installation expenses constitute a substantial portion of the cost of a large photovoltaic power system, methods for reduction of these costs were investigated. The installation of the photovoltaic arrays includes all areas, starting with site preparation (i.e., trenching, wiring, drainage, foundation installation, lightning protection, grounding and installation of the panel) and concluding with the termination of the bus at the power conditioner building. To identify the optimum combination of standard installation procedures and automated/mechanized techniques, the installation process was investigated including the equipment and hardware available, the photovoltaic array structure systems and interfaces, and the array field and site characteristics. Preliminary designs of hardware for both the standard installation method, the automated/mechanized method, and a mix of standard installation procedures and mechanized procedures were identified to determine which process effectively reduced installation costs. In addition, costs associated with each type of installation method and with the design, development and fabrication of new installation hardware were generated.

  7. Sensor selection and chemo-sensory optimization: toward an adaptable chemo-sensory system

    Directory of Open Access Journals (Sweden)

    Alexander eVergara

    2012-01-01

    Full Text Available Over the past two decades, despite the tremendous research effort performed on chemical sensors and machine olfaction to develop micro-sensory systems that will accomplish the growing existent needs in personal health (implantable sensors), environment monitoring (widely distributed sensor networks), and security/threat detection (chemo/bio warfare agents), simple, low-cost molecular sensing platforms capable of long-term autonomous operation remain beyond the current state-of-the-art of chemical sensing. A fundamental issue within this context is that most of the chemical sensors depend on interactions between the targeted species and the surfaces functionalized with receptors that bind the target species selectively, and that these binding events are coupled with transduction processes that begin to change when they are exposed to the messy world of real samples. With the advent of fundamental breakthroughs at the intersection of materials science, micro/nano-technology, and signal processing, hybrid chemo-sensory systems have incorporated tunable, optimizable operating parameters, through which changes in the response characteristics can be modeled and compensated as the environmental conditions or application needs change. The objective of this article, in this context, is to bring together the key advances at the device, data processing, and system levels that enable chemo-sensory systems to adapt in response to their environments. Accordingly, in this review we will feature the research effort made by selected experts on chemical sensing and information theory, whose work has been devoted to develop strategies that provide tunability and adaptability to single sensor devices or sensory array systems. Particularly, we consider sensor-array selection, modulation of internal sensing parameters, and active sensing. The article ends with some conclusions drawn from the results presented and a visionary look toward the future in terms of how the

  8. Phased Array Ultrasonic Inspection of Titanium Forgings

    International Nuclear Information System (INIS)

    Howard, P.; Klaassen, R.; Kurkcu, N.; Barshinger, J.; Chalek, C.; Nieters, E.; Sun, Zongqi; Fromont, F. de

    2007-01-01

    Aerospace forging inspections typically use multiple, subsurface-focused sound beams in combination with digital C-scan image acquisition and display. Traditionally, forging inspections have been implemented using multiple single element, fixed focused transducers. Recent advances in phased array technology have made it possible to perform an equivalent inspection using a single phased array transducer. General Electric has developed a system to perform titanium forging inspection based on medical phased array technology and advanced image processing techniques. The components of that system and system performance for titanium inspection will be discussed

  9. Timed arrays wideband and time varying antenna arrays

    CERN Document Server

    Haupt, Randy L

    2015-01-01

    Introduces timed arrays and design approaches to meet the new high performance standards The author concentrates on any aspect of an antenna array that must be viewed from a time perspective. The first chapters briefly introduce antenna arrays and explain the difference between phased and timed arrays. Since timed arrays are designed for realistic time-varying signals and scenarios, the book also reviews wideband signals, baseband and passband RF signals, polarization and signal bandwidth. Other topics covered include time domain, mutual coupling, wideband elements, and dispersion. The auth

  10. Low-Noise CMOS Circuits for On-Chip Signal Processing in Focal-Plane Arrays

    Science.gov (United States)

    Pain, Bedabrata

    The performance of focal-plane arrays can be significantly enhanced through the use of on-chip signal processing. Novel, in-pixel, on-focal-plane, analog signal-processing circuits for high-performance imaging are presented in this thesis. The presence of a high background-radiation is a major impediment for infrared focal-plane array design. An in-pixel, background-suppression scheme, using dynamic analog current memory circuit, is described. The scheme also suppresses spatial noise that results from response non-uniformities of photo-detectors, leading to background limited infrared detector readout performance. Two new, low-power, compact, current memory circuits, optimized for operation at ultra-low current levels required in infrared-detection, are presented. The first one is a self-cascading current memory that increases the output impedance, and the second one is a novel, switch feed-through reducing current memory, implemented using error-current feedback. This circuit can operate with a residual absolute -error of less than 0.1%. The storage-time of the memory is long enough to also find applications in neural network circuits. In addition, a voltage-mode, accurate, low-offset, low-power, high-uniformity, random-access sample-and-hold cell, implemented using a CCD with feedback, is also presented for use in background-suppression and neural network applications. A new, low noise, ultra-low level signal readout technique, implemented by individually counting photo-electrons within the detection pixel, is presented. The output of each unit-cell is a digital word corresponding to the intensity of the photon flux, and the readout is noise free. This technique requires the use of unit-cell amplifiers that feature ultra-high-gain, low-power, self-biasing capability and noise in sub-electron levels. Both single-input and differential-input implementations of such amplifiers are investigated. A noise analysis technique is presented for analyzing sampled

  11. Lithography-free centimeter-long nanochannel fabrication method using an electrospun nanofiber array

    International Nuclear Information System (INIS)

    Park, Suk Hee; Shin, Hyun-Jun; Lee, Sangyoup; Kim, Yong-Hwan; Yang, Dong-Yol; Lee, Jong-Chul

    2012-01-01

    Novel cost-effective methods for polymeric and metallic nanochannel fabrication have been demonstrated using an electrospun nanofiber array. Like other electrospun nanofiber-based nanofabrication methods, our system also showed high throughput as well as cost-effective performances. Unlike other systems, however, our fabrication scheme provides a pseudo-parallel nanofiber array a few centimeters long at a speed of several tens of fibers per second based on our unique inclined-gap fiber collecting system. Pseudo-parallel nanofiber arrays were used either directly for the PDMS molding process or for the metal lift-off process followed by the SiO2 deposition process to produce the nanochannel array. While the PDMS molding process was a simple fabrication based on one-step casting, the metal lift-off process followed by SiO2 deposition allowed fine-tuning of the height and width of nanogrooves from a few micrometers down to sub-hundred nanometers. Nanogrooves were covered either with cover glass or with PDMS slab and nanochannel connectivity was investigated with a fluorescent dye. Also, nanochannel arrays were used to investigate mobility and conformations of λ-DNA. (paper)

  12. The adaptation process following acute onset disability: an interactive two-dimensional approach applied to acquired brain injury.

    Science.gov (United States)

    Brands, Ingrid M H; Wade, Derick T; Stapert, Sven Z; van Heugten, Caroline M

    2012-09-01

    To describe a new model of the adaptation process following acquired brain injury, based on the patient's goals, the patient's abilities and the emotional response to the changes and the possible discrepancy between goals and achievements. The process of adaptation after acquired brain injury is characterized by a continuous interaction of two processes: achieving maximal restoration of function and adjusting to the alterations and losses that occur in the various domains of functioning. Consequently, adaptation requires a balanced mix of restoration-oriented coping and loss-oriented coping. The commonly used framework to explain adaptation and coping, 'The Theory of Stress and Coping' of Lazarus and Folkman, does not capture this interactive duality. This model additionally considers theories concerned with self-regulation of behaviour, self-awareness and self-efficacy, and with the setting and achievement of goals. THE TWO-DIMENSIONAL MODEL: Our model proposes the simultaneous and continuous interaction of two pathways; goal pursuit (short term and long term) or revision as a result of success and failure in reducing distance between current state and expected future state and an affective response that is generated by the experienced goal-performance discrepancies. This affective response, in turn, influences the goals set. This two-dimensional representation covers the processes mentioned above: restoration of function and consideration of long-term limitations. We propose that adaptation centres on readjustment of long-term goals to new achievable but desired and important goals, and that this adjustment underlies re-establishing emotional stability. We discuss how the proposed model is related to actual rehabilitation practice.

  13. Subarray Processing for Projection-based RFI Mitigation in Radio Astronomical Interferometers

    Science.gov (United States)

    Burnett, Mitchell C.; Jeffs, Brian D.; Black, Richard A.; Warnick, Karl F.

    2018-04-01

    Radio Frequency Interference (RFI) is a major problem for observations in Radio Astronomy (RA). Adaptive spatial filtering techniques such as subspace projection are promising candidates for RFI mitigation; however, for radio interferometric imaging arrays, these have primarily been used in engineering demonstration experiments rather than mainstream scientific observations. This paper considers one reason that adoption of such algorithms is limited: RFI decorrelates across the interferometric array because of long baseline lengths. This occurs when the relative RFI time delay along a baseline is large compared to the frequency channel inverse bandwidth used in the processing chain. Maximum achievable excision of the RFI is limited by covariance matrix estimation error when identifying interference subspace parameters, and decorrelation of the RFI introduces errors that corrupt the subspace estimate, rendering subspace projection ineffective over the entire array. In this work, we present an algorithm that overcomes this challenge of decorrelation by applying subspace projection via subarray processing (SP-SAP). Each subarray is designed to have a set of elements with high mutual correlation in the interferer for better estimation of subspace parameters. In an RFI simulation scenario for the proposed ngVLA interferometric imaging array with 15 kHz channel bandwidth for correlator processing, we show that compared to the former approach of applying subspace projection on the full array, SP-SAP improves mitigation of the RFI on the order of 9 dB. An example of improved image synthesis and reduced RFI artifacts for a simulated image “phantom” using the SP-SAP algorithm is presented.
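
    The projection step itself is compact. The sketch below applies classical subspace projection to one subarray's data block for a single frequency channel: the dominant eigenvectors of the sample covariance are taken as the interference subspace and projected out. It illustrates the operation SP-SAP applies per subarray, not the paper's subarray selection or correlator integration.

    ```python
    import numpy as np

    def project_out_rfi(x, n_rfi=1):
        """Remove an estimated RFI subspace from x (shape: n_antennas x n_samples, complex)."""
        R = x @ x.conj().T / x.shape[1]              # sample covariance of the subarray
        _, V = np.linalg.eigh(R)                     # eigenvectors, ascending eigenvalues
        U = V[:, -n_rfi:]                            # dominant (interference) subspace
        P = np.eye(R.shape[0]) - U @ U.conj().T      # orthogonal projector onto its complement
        return P @ x
    ```
    Keeping each subarray short enough that the RFI stays correlated across its elements is what keeps the estimate of U usable, which is the point of the subarray partitioning described above.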

  14. Developing a gate-array capability at a research and development laboratory

    Science.gov (United States)

    Balch, J. W.; Current, K. W.; Magnuson, W. G., Jr.; Pocha, M. D.

    1983-03-01

    Experiences in developing a gate array capability for low volume applications in a research and development (R and D) laboratory are described. By purchasing unfinished wafers and doing the customization steps in-house, turnaround time was shortened to as little as one week and the direct costs reduced to as low as $5K per design. Designs generally require fast turnaround (a few weeks to a few months) and very low volumes (1 to 25). Design costs must be kept at a minimum. After reviewing available commercial gate array design and fabrication services, it was determined that objectives would best be met by using existing internal integrated circuit fabrication facilities, the COMPUTERVISION interactive graphics layout system, and extensive computational capabilities. The reasons for, and the approach taken to, selecting a particular gate array wafer, adapting a particular logic simulation program, and enhancing layout aids are discussed. Testing of the customized chips is described. The content, schedule, and results of the internal gate array course recently completed are discussed. Finally, problem areas and near term plans are presented.

  15. Extending CPN tools with ontologies to support the management of context-adaptive business processes

    OpenAIRE

    Serral Asensio, Estefanía; De Smedt, Johannes; Vanthienen, Jan

    2015-01-01

    Colored Petri Nets (CPN) are a widely used graphical modeling language to manage business processes. Business processes often appear in dynamic environments; therefore, context adaptation has recently emerged as a new challenge to explicitly address fitness between business process modeling and its execution environment. Although CPN can introduce data by defining internal data records, this is not enough to capture the complexity and dynamics of the execution context data. This paper ext...

  16. DNA electrophoresis through microlithographic arrays

    International Nuclear Information System (INIS)

    Sevick, E.M.; Williams, D.R.M.

    1996-01-01

    Electrophoresis is one of the most widely used techniques in biochemistry and genetics for size-separating charged molecular chains such as DNA or synthetic polyelectrolytes. The separation is achieved by driving the chains through a gel with an external electric field. As a result of the field and the obstacles that the medium provides, the chains have different mobilities and are physically separated after a given process time. The macroscopically observed mobility scales inversely with chain size: small molecules move through the medium quickly while larger molecules move more slowly. However, electrophoresis remains a tool that has yet to be optimised for most efficient size separation of polyelectrolytes, particularly large polyelectrolytes, e.g. DNA in excess of 30-50 kbp. Microlithographic arrays etched with an ordered pattern of obstacles provide an attractive alternative to gel media and provide wider avenues for size separation of polyelectrolytes and promote a better understanding of the separation process. Its advantages over gels are (1) the ordered array is durable and can be re-used, (2) the array morphology is ordered and can be standardized for specific separation, and (3) calibration with a marker polyelectrolyte is not required as the array is reproduced to high precision. Most importantly, the array geometry can be graduated along the chip so as to expand the size-dependent regime over larger chain lengths and postpone saturation. In order to predict the effect of obstacles upon the chain-length dependence in mobility and hence, size separation, we study the dynamics of single chains using theory and simulation. We present recent work describing: 1) the release kinetics of a single DNA molecule hooked around a point, frictionless obstacle and in both weak and strong field limits, 2) the mobility of a chain impinging upon point obstacles in an ordered array of obstacles, demonstrating the wide range of interactions possible between the chain and

  17. Processing Optimization of Typed Resources with Synchronized Storage and Computation Adaptation in Fog Computing

    Directory of Open Access Journals (Sweden)

    Zhengyang Song

    2018-01-01

    Full Text Available Wide application of the Internet of Things (IoT) system has been increasingly demanding more hardware facilities for processing various resources including data, information, and knowledge. With the rapid growth of generated resource quantity, it is difficult to adapt to this situation by using traditional cloud computing models. Fog computing enables storage and computing services to perform at the edge of the network to extend cloud computing. However, there are some problems such as restricted computation, limited storage, and expensive network bandwidth in Fog computing applications. It is a challenge to balance the distribution of network resources. We propose a processing optimization mechanism of typed resources with synchronized storage and computation adaptation in Fog computing. In this mechanism, we process typed resources in a wireless-network-based three-tier architecture consisting of Data Graph, Information Graph, and Knowledge Graph. The proposed mechanism aims to minimize processing cost over network, computation, and storage while maximizing the performance of processing in a business value driven manner. Simulation results show that the proposed approach improves the ratio of performance over user investment. Meanwhile, conversions between resource types deliver support for dynamically allocating network resources.

  18. Design of an effective energy receiving adapter for microwave wireless power transmission application

    Directory of Open Access Journals (Sweden)

    Peng Xu

    2016-10-01

    Full Text Available In this paper, we demonstrate the viability of an energy receiving adapter in an 8×8 array form with high power reception efficiency, with the resonator of an artificial electromagnetic absorber being used as the element. Unlike the conventional reported rectifying antenna resonators, both the size of the element and the separations between the elements are electrically small in our design. The energy collecting process is explained with an equivalent circuit model, and an RF combining network is designed to combine the captured AC power from each element to one main terminal for AC-to-DC conversion. The energy receiving adapter yields a total reception efficiency of 67% (including the wave capture efficiency of 86% and the AC-to-DC conversion efficiency of 78%), which is quite promising for microwave wireless power transmission.

  19. An Improved Manufacturing Approach for Discrete Silicon Microneedle Arrays with Tunable Height-Pitch Ratio

    Directory of Open Access Journals (Sweden)

    Renxin Wang

    2016-10-01

    Full Text Available Silicon microneedle arrays (MNAs) have been widely studied due to their potential in various transdermal applications. However, discrete MNAs, as a preferred choice for fabricating flexible penetrating devices that could adapt to curved and elastic tissue, are rarely reported. Furthermore, the reported discrete MNAs have disadvantages in uniformity and height-pitch ratio. Therefore, an improved technique is developed to manufacture discrete MNAs with tunable height-pitch ratio, which involves a KOH-dicing-KOH process. The detailed process is sketched and simulated to illustrate the formation of microneedles. Furthermore, the undercutting of the convex mask in the two KOH etching steps is mathematically analyzed, in order to reveal the relationship between etching depth and mask dimension. Subsequently, fabrication results demonstrate the KOH-dicing-KOH process. The {321} facet is identified as the surface of the octagonal pyramid microneedle. MNAs with diverse height and pitch are also presented to demonstrate the versatility of this approach. Finally, the metallization is realized via successive electroplating.

  20. Adapted diffusion processes for effective forging dies

    Science.gov (United States)

    Paschke, H.; Nienhaus, A.; Brunotte, K.; Petersen, T.; Siegmund, M.; Lippold, L.; Weber, M.; Mejauschek, M.; Landgraf, P.; Braeuer, G.; Behrens, B.-A.; Lampke, T.

    2018-05-01

    Hot forging is an effective production method for producing safety-relevant parts with excellent mechanical properties. The economic efficiency directly depends on the occurring wear of the tools, which limits service lifetime. Several approaches of the present research group aim at minimizing the wear caused by interacting mechanical and thermal loads by using enhanced nitriding technology. Thus, by modifying the surface zone layer it is possible to create a resistance against thermal softening provoking plastic deformation and pronounced abrasive wear. As a disadvantage, intensely nitrided surfaces may possibly include the risk of increased crack sensitivity and therefore feature the chipping of material at the treated surface. Recent projects (evaluated in several industrial applications) show the high technological potential of adapted treatments: A first approach evaluated localized treatments by preventing areas from nitrogen diffusion with applied pastes or other coverages. Now, further ideas are to use this principle to structure the surface with differently designed patterns generating smaller ductile zones beneath nitrided ones. The selection of suitable designs is subject to certain geometrical requirements though. The intention of this approach is to prevent the formation and propagation of cracks under thermal shock conditions. Analytical characterization methods for crack sensitivity of surface zone layers and an accurate system of testing rigs for thermal shock conditions verified the treatment concepts. Additionally, serial forging tests using adapted testing geometries and, finally, tests in the industrial production field were performed. Besides stabilizing the service lifetime and decreasing specific wear mechanisms caused by thermal influences, the crack behavior was influenced positively. This leads to a higher efficiency of the industrial production process and enables higher output in forging campaigns of industrial partners.

  1. Adaptive PCA based fault diagnosis scheme in imperial smelting process.

    Science.gov (United States)

    Hu, Zhikun; Chen, Zhiwen; Gui, Weihua; Jiang, Bin

    2014-09-01

    In this paper, an adaptive fault detection scheme based on recursive principal component analysis (PCA) is proposed to deal with the problem of false alarms caused by normal process changes in real processes. A fault isolation approach is further developed based on the Generalized Likelihood Ratio (GLR) test and Singular Value Decomposition (SVD), one of the general techniques underlying PCA, with which offset and scaling faults can be easily isolated with an explicit offset fault direction and scaling fault classification. The identification of offset and scaling faults is also addressed. The complete scheme of the PCA-based fault diagnosis procedure is proposed. The proposed scheme is first applied to the Imperial Smelting Process, and the results show that the proposed strategies are able to mitigate false alarms and isolate faults efficiently. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
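
    As a rough illustration of the kind of monitoring described above, the sketch below implements plain PCA-based fault detection with Hotelling T² and SPE (Q) statistics plus a simple exponentially weighted update of the mean and covariance standing in for the paper's recursive PCA; the component count, forgetting factor and the synthetic data are placeholder assumptions, not values from the study.

    ```python
    import numpy as np

    def fit_pca(X, n_comp):
        """Fit a PCA model on normal operating data X (rows = samples)."""
        mu = X.mean(axis=0)
        cov = np.cov(X - mu, rowvar=False)
        eigval, eigvec = np.linalg.eigh(cov)
        order = np.argsort(eigval)[::-1]          # largest eigenvalues first
        return mu, cov, eigvec[:, order[:n_comp]], eigval[order[:n_comp]]

    def monitor(x, mu, P, lam):
        """Return T^2 and SPE (Q) statistics for a new sample x."""
        xc = x - mu
        t = P.T @ xc                      # scores in the principal subspace
        t2 = np.sum(t**2 / lam)           # Hotelling T^2
        spe = np.sum((xc - P @ t)**2)     # squared prediction error (residual)
        return t2, spe

    def recursive_update(x, mu, cov, alpha=0.01):
        """Exponentially weighted update of mean and covariance (adaptation step)."""
        mu_new = (1 - alpha) * mu + alpha * x
        d = (x - mu_new).reshape(-1, 1)
        cov_new = (1 - alpha) * cov + alpha * (d @ d.T)
        return mu_new, cov_new

    # Usage with synthetic "normal" data and one offset-type test sample.
    rng = np.random.default_rng(0)
    X_normal = rng.normal(size=(500, 6))
    mu, cov, P, lam = fit_pca(X_normal, n_comp=3)
    t2, spe = monitor(rng.normal(size=6) + 2.0, mu, P, lam)
    print(f"T2 = {t2:.2f}, SPE = {spe:.2f}")
    ```

    Re-running fit_pca on the recursively updated covariance lets the model track slow, normal process changes instead of flagging them as faults.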

  2. Integration of Antibody Array Technology into Drug Discovery and Development.

    Science.gov (United States)

    Huang, Wei; Whittaker, Kelly; Zhang, Huihua; Wu, Jian; Zhu, Si-Wei; Huang, Ruo-Pan

    Antibody arrays represent a high-throughput technique that enables the parallel detection of multiple proteins with minimal sample volume requirements. In recent years, antibody arrays have been widely used to identify new biomarkers for disease diagnosis or prognosis. Moreover, many academic research laboratories and commercial biotechnology companies are starting to apply antibody arrays in the field of drug discovery. In this review, some technical aspects of antibody array development and the various platforms currently available will be addressed; however, the main focus will be on the discussion of antibody array technologies and their applications in drug discovery. Aspects of the drug discovery process, including target identification, mechanisms of drug resistance, molecular mechanisms of drug action, drug side effects, and the application in clinical trials and in managing patient care, which have been investigated using antibody arrays in recent literature will be examined and the relevance of this technology in progressing this process will be discussed. Protein profiling with antibody array technology, in addition to other applications, has emerged as a successful, novel approach for drug discovery because of the well-known importance of proteins in cell events and disease development.

  3. Backshort-Under-Grid arrays for infrared astronomy

    Science.gov (United States)

    Allen, C. A.; Benford, D. J.; Chervenak, J. A.; Chuss, D. T.; Miller, T. M.; Moseley, S. H.; Staguhn, J. G.; Wollack, E. J.

    2006-04-01

    We are developing a kilopixel, filled bolometer array for space infrared astronomy. The array consists of three individual components, to be merged into a single, working unit: (1) a transition edge sensor bolometer array, operating in the milliKelvin regime, (2) a quarter-wave backshort grid, and (3) superconducting quantum interference device multiplexer readout. The detector array is designed as a filled, square grid of suspended, silicon bolometers with superconducting sensors. The backshort arrays are fabricated separately and will be positioned in the cavities created behind each detector during fabrication. The grids have a unique interlocking feature machined into the walls for positioning and mechanical stability. The spacing of the backshort beneath the detector grid can be set from ˜30-300 μm, by independently adjusting two process parameters during fabrication. The ultimate goal is to develop a large-format array architecture with background-limited sensitivity, suitable for a wide range of wavelengths and applications, to be directly bump bonded to a multiplexer circuit. We have produced prototype two-dimensional arrays having 8×8 detector elements. We present detector design, fabrication overview, and assembly technologies.

  4. Detailed Diagnostics of the BIOMASS Feed Array Prototype

    DEFF Research Database (Denmark)

    Cappellin, C.; Pivnenko, Sergey; Pontoppidan, K.

    2013-01-01

    of the array had a significant influence on the measured feed pattern. The 3D reconstruction and further post-processing is therefore applied both to the feed array measured data, and a set of simulated data generated by the GRASP software which replicate the series of measurements. The results...

  5. Silver Nanowire Arrays: Fabrication and Applications

    OpenAIRE

    Feng, Yuyi

    2016-01-01

    Nanowire arrays have increasingly received attention for their use in a variety of applications such as surface-enhanced Raman scattering (SERS), plasmonic sensing, and electrodes for photoelectric devices. However, until now, large scale fabrication of device-suitable metallic nanowire arrays on supporting substrates has seen very limited success. This thesis describes my work first on the development of a novel successful processing route for the fabrication of uniform noble metallic (e.g. A...

  6. Apparatus-Program Complexes Processing and Creation of Essentially non-Format Documents on the Basis of Technology Auto-Adaptive Fonts

    Directory of Open Access Journals (Sweden)

    E. G. Andrianova

    2014-01-01

    Full Text Available The need to translate paper documents into electronic form has demanded the development of methods and algorithms for the automatic processing and web publishing of unformatted graphic documents in online libraries. Translating scanned images into modern electronic document formats using OCR programmes faces serious difficulties. These difficulties are connected with the standardization of fonts and the design of printed documents, and with the need to preserve the original appearance of such documents in electronic form. The article discusses the possibility of building an extensible adaptive dictionary of the graphic objects which constitute unformatted graphic documents. The dictionary is adjusted automatically as graphics are processed and statistical information is accumulated for each new document. This adaptive, extensible dictionary of graphic letters, fonts, and other objects for the automated processing of a particular document is called an "auto-adaptive font", and the set of methods for applying it is named "auto-adaptive font technology". Based on the theory of estimation algorithms, a mathematical model is designed. It allows all objects of an unformatted graphic document to be represented in a unified manner, a feature vector to be built for each object, and the similarity of these objects to be evaluated in the selected metric. An algorithm for the adaptive modelling of graphic images is developed, and a criterion for combining similar properties into one element to build an auto-adaptive font is offered, allowing a software core of the hardware-software complex for processing unformatted graphic documents to be built. A standard block diagram of the hardware-software complex for processing unformatted graphic documents is developed. The article presents a description of all the blocks of this complex, including the document processing station and its interaction with the web server for publishing electronic documents.

  7. Compensated readout for high-density MOS-gated memristor crossbar array

    KAUST Repository

    Zidan, Mohammed A.

    2015-01-01

    Leakage current is one of the main challenges facing high-density MOS-gated memristor arrays. In this study, we show that leakage current ruins the memory readout process for high-density arrays, and analyze the tradeoff between the array density and its power consumption. We propose a novel readout technique and its underlying circuitry, which is able to compensate for the transistor leakage-current effect in the high-density gated memristor array.

  8. Integration of Fiber-Optic Sensor Arrays into a Multi-Modal Tactile Sensor Processing System for Robotic End-Effectors

    Directory of Open Access Journals (Sweden)

    Peter Kampmann

    2014-04-01

    Full Text Available With the increasing complexity of robotic missions and the development towards long-term autonomous systems, the need for multi-modal sensing of the environment increases. Until now, the use of tactile sensor systems has been mostly based on sensing one modality of forces in the robotic end-effector. The use of a multi-modal tactile sensory system is motivated, which combines static and dynamic force sensor arrays together with an absolute force measurement system. This publication is focused on the development of a compact sensor interface for a fiber-optic sensor array, as optic measurement principles tend to have a bulky interface. Mechanical, electrical and software approaches are combined to realize an integrated structure that provides decentralized data pre-processing of the tactile measurements. Local behaviors are implemented using this setup to show the effectiveness of this approach.

  9. Suppression of 3D coherent noise by areal geophone array; Menteki jushinki array ni yoru sanjigen coherent noise no yokusei

    Energy Technology Data Exchange (ETDEWEB)

    Murayama, R; Nakagami, K; Tanaka, H [Japan National Oil Corp., Tokyo (Japan). Technology Research Center

    1996-05-01

    For improving the quality of data collected by reflection seismic exploration, a lattice was deployed at one point of a traverse line, and the data from it were used to study the 3D coherent noise suppression effect of an areal array. The test was conducted at a Japan National Oil Corporation test field in Kashiwazaki City, Niigata Prefecture. The deployed lattice had 144 receiving points arrayed at intervals of 8 m, composing an areal array, and 187 vibration source points arrayed at intervals of 20 m extending over 6.5 km. Data were collected at the receiving points in the lattice, each point recording independently of the others, and processed to compose a large areal array by summing the data from multiple receiving points. Analysis of the records collected at the receiving points in the lattice shows that an enlarged areal array leads to a higher S/N ratio and that different reflection waves are emphasized when the array direction is changed. 1 ref., 6 figs.

  10. The National NeuroAIDS Tissue Consortium brain gene array: two types of HIV-associated neurocognitive impairment.

    Directory of Open Access Journals (Sweden)

    Benjamin B Gelman

    Full Text Available The National NeuroAIDS Tissue Consortium (NNTC) performed a brain gene expression array to elucidate pathophysiologies of Human Immunodeficiency Virus type 1 (HIV-1)-associated neurocognitive disorders. Twenty-four human subjects in four groups were examined: (A) uninfected controls; (B) HIV-1-infected subjects with no substantial neurocognitive impairment (NCI); (C) infected with substantial NCI without HIV encephalitis (HIVE); (D) infected with substantial NCI and HIVE. RNA from neocortex, white matter, and neostriatum was processed with the Affymetrix® array platform. With HIVE the HIV-1 RNA load in brain tissue was three log10 units higher than other groups and over 1,900 gene probes were regulated. Interferon response genes (IFRGs), antigen presentation, complement components and CD163 antigen were strongly upregulated. In frontal neocortex downregulated neuronal pathways strongly dominated in HIVE, including GABA receptors, glutamate signaling, synaptic potentiation, axon guidance, clathrin-mediated endocytosis and 14-3-3 protein. Expression was completely different in neuropsychologically impaired subjects without HIVE. They had low brain HIV-1 loads, weak brain immune responses, lacked neuronally expressed changes in neocortex and exhibited upregulation of endothelial cell type transcripts. HIV-1-infected subjects with normal neuropsychological test results had upregulation of neuronal transcripts involved in synaptic transmission of neostriatal circuits. Two patterns of brain gene expression suggest that more than one pathophysiological process occurs in HIV-1-associated neurocognitive impairment. Expression in HIVE suggests that lowering brain HIV-1 replication might improve NCI, whereas NCI without HIVE may not respond in kind; array results suggest that modulation of transvascular signaling is a potentially promising approach. Striking brain regional differences highlighted the likely importance of circuit level disturbances in HIV/AIDS. In

  11. Optimised 'on demand' protein arraying from DNA by cell free expression with the 'DNA to Protein Array' (DAPA) technology.

    Science.gov (United States)

    Schmidt, Ronny; Cook, Elizabeth A; Kastelic, Damjana; Taussig, Michael J; Stoevesandt, Oda

    2013-08-02

    We have previously described a protein arraying process based on cell free expression from DNA template arrays (DNA Array to Protein Array, DAPA). Here, we have investigated the influence of different array support coatings (Ni-NTA, Epoxy, 3D-Epoxy and Polyethylene glycol methacrylate (PEGMA)). Their optimal combination yields an increased amount of detected protein and an optimised spot morphology on the resulting protein array compared to the previously published protocol. The specificity of protein capture was improved using a tag-specific capture antibody on a protein repellent surface coating. The conditions for protein expression were optimised to yield the maximum amount of protein or the best detection results using specific monoclonal antibodies or a scaffold binder against the expressed targets. The optimised DAPA system was able to increase by threefold the expression of a representative model protein while conserving recognition by a specific antibody. The amount of expressed protein in DAPA was comparable to those of classically spotted protein arrays. Reaction conditions can be tailored to suit the application of interest. DAPA represents a cost effective, easy and convenient way of producing protein arrays on demand. The reported work is expected to facilitate the application of DAPA for personalized medicine and screening purposes. Copyright © 2013 Elsevier B.V. All rights reserved.

  12. Hydrogen Detection With a Gas Sensor ArrayProcessing and Recognition of Dynamic Responses Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Gwiżdż Patryk

    2015-03-01

    Full Text Available An array consisting of four commercial gas sensors with target specifications for hydrocarbons, ammonia, alcohol, and explosive gases has been constructed and tested. The sensors in the array operate in the dynamic mode upon temperature modulation from 350°C to 500°C. Changes in the sensor operating temperature lead to distinct resistance responses affected by the gas type, its concentration and the humidity level. The measurements are performed upon various hydrogen (17-3000 ppm), methane (167-3000 ppm) and propane (167-3000 ppm) concentrations at relative humidity levels of 0-75%RH. The measured dynamic response signals are further processed with the Discrete Fourier Transform. Absolute values of the dc component and the first five harmonics of each sensor are analysed by a feed-forward back-propagation neural network. The ultimate aim of this research is to achieve reliable hydrogen detection despite interference from humidity and residual gases.
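
    The processing chain in this record (temperature-modulated responses, DFT, dc component plus the first five harmonics, feed-forward network) can be sketched compactly. The snippet below is an illustrative pipeline on synthetic data; scikit-learn's MLPClassifier stands in for the feed-forward back-propagation network, and the sensor count, sample length and labels are assumptions.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def harmonic_features(response, n_harmonics=5):
        """DC component plus the magnitudes of the first n harmonics of one
        temperature-modulation cycle of a single sensor's resistance response."""
        spectrum = np.fft.rfft(response)
        return np.abs(spectrum[:n_harmonics + 1])

    def array_features(responses):
        """Concatenate features of all sensors (responses: n_sensors x n_samples)."""
        return np.concatenate([harmonic_features(r) for r in responses])

    # Synthetic stand-in data: 4 sensors, 128 samples per modulation cycle,
    # 200 labelled cycles (labels 0 = hydrogen, 1 = methane, 2 = propane).
    rng = np.random.default_rng(1)
    X = np.array([array_features(rng.normal(size=(4, 128))) for _ in range(200)])
    y = rng.integers(0, 3, size=200)

    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    clf.fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```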

  13. rasdaman Array Database: current status

    Science.gov (United States)

    Merticariu, George; Toader, Alexandru

    2015-04-01

    rasdaman (Raster Data Manager) is a Free Open Source Array Database Management System which provides functionality for storing and processing massive amounts of raster data in the form of multidimensional arrays. The user can access, process and delete the data using SQL. The key features of rasdaman are: flexibility (datasets of any dimensionality can be processed with the help of SQL queries), scalability (rasdaman's distributed architecture enables it to seamlessly run on cloud infrastructures while offering an increase in performance with the increase of computation resources), performance (real-time access, processing, mixing and filtering of arrays of any dimensionality) and reliability (legacy communication protocol replaced with a new one based on cutting edge technology - Google Protocol Buffers and ZeroMQ). Among the data with which the system works, we can count 1D time series, 2D remote sensing imagery, 3D image time series, 3D geophysical data, and 4D atmospheric and climate data. Most of these representations cannot be stored only in the form of raw arrays, as the location information of the contents is also important for having a correct geoposition on Earth. This is defined by ISO 19123 as coverage data. rasdaman provides coverage data support through the Petascope service. Extensions were added on top of rasdaman in order to provide support for the Geoscience community. The following OGC standards are currently supported: Web Map Service (WMS), Web Coverage Service (WCS), and Web Coverage Processing Service (WCPS). The Web Map Service is an extension which provides zoom and pan navigation over images provided by a map server. Starting with version 9.1, rasdaman supports WMS version 1.3. The Web Coverage Service provides capabilities for downloading multi-dimensional coverage data. Support is also provided for several extensions of this service: Subsetting Extension, Scaling Extension, and, starting with version 9.1, Transaction Extension, which
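
    For readers unfamiliar with how such coverages are queried, the sketch below sends a WCPS query to a Petascope endpoint through the WCS ProcessCoverages operation. The endpoint URL, coverage name and axis names are hypothetical, and the exact request parameters may vary between rasdaman versions.

    ```python
    import requests  # assumes the 'requests' package is available

    # Hypothetical Petascope endpoint and coverage -- adjust to a real deployment.
    ENDPOINT = "http://example.org/rasdaman/ows"

    # A WCPS query that subsets a 3-D image time series and encodes one slice as PNG.
    wcps_query = """
    for c in (MyCoverage)
    return encode(c[ansi("2015-01-01"), Lat(40:50), Long(10:20)], "png")
    """

    resp = requests.get(ENDPOINT, params={
        "service": "WCS",
        "version": "2.0.1",
        "request": "ProcessCoverages",
        "query": wcps_query,
    })
    if resp.ok:
        with open("slice.png", "wb") as f:
            f.write(resp.content)   # save the returned image slice
    ```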

  14. A novel joint-processing adaptive nonlinear equalizer using a modular recurrent neural network for chaotic communication systems.

    Science.gov (United States)

    Zhao, Haiquan; Zeng, Xiangping; Zhang, Jiashu; Liu, Yangguang; Wang, Xiaomin; Li, Tianrui

    2011-01-01

    To eliminate nonlinear channel distortion in chaotic communication systems, a novel joint-processing adaptive nonlinear equalizer based on a pipelined recurrent neural network (JPRNN) is proposed, using a modified real-time recurrent learning (RTRL) algorithm. Furthermore, an adaptive amplitude RTRL algorithm is adopted to overcome the deteriorating effect introduced by the nesting process. Computer simulations illustrate that the proposed equalizer outperforms the pipelined recurrent neural network (PRNN) and recurrent neural network (RNN) equalizers. Copyright © 2010 Elsevier Ltd. All rights reserved.
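
    The JPRNN/RTRL equalizer itself is involved, but the underlying idea of adapting a nonlinear filter to undo channel distortion can be shown with a much simpler stand-in: an LMS-adapted equalizer with linear and squared taps. This is only an illustrative substitute, not the algorithm of the paper; the channel model and step size are made up.

    ```python
    import numpy as np

    def volterra_features(x, n, memory=3):
        """Linear + squared taps of the received signal around sample n."""
        taps = x[n - memory + 1:n + 1][::-1]
        return np.concatenate([taps, taps**2])

    def lms_equalizer(received, desired, memory=3, mu=0.01):
        """Adapt equalizer weights sample-by-sample with the LMS rule."""
        w = np.zeros(2 * memory)
        out = np.zeros_like(received)
        for n in range(memory - 1, len(received)):
            phi = volterra_features(received, n, memory)
            out[n] = w @ phi
            e = desired[n] - out[n]          # training error against the reference
            w += mu * e * phi                # LMS weight update
        return out, w

    # Toy example: a mildly nonlinear channel applied to a smooth test signal.
    rng = np.random.default_rng(2)
    s = np.tanh(np.cumsum(rng.normal(size=2000)) * 0.05)      # transmitted signal
    r = 0.8 * s + 0.2 * np.roll(s, 1) + 0.1 * s**2            # distorted signal
    y, w = lms_equalizer(r, s)
    print("residual MSE:", np.mean((y[100:] - s[100:])**2))
    ```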

  15. Simulating the Sky as Seen by the Square Kilometer Array using the MIT Array Performance Simulator (MAPS)

    Science.gov (United States)

    Matthews, Lynn D.; Cappallo, R. J.; Doeleman, S. S.; Fish, V. L.; Lonsdale, C. J.; Oberoi, D.; Wayth, R. B.

    2009-05-01

    The Square Kilometer Array (SKA) is a proposed next-generation radio telescope that will operate at frequencies of 0.1-30 GHz and be 50-100 times more sensitive than existing radio arrays. Meeting the performance goals of this instrument will require innovative new hardware and software developments, a variety of which are now under consideration. Key to evaluating the performance characteristics of proposed SKA designs and testing the feasibility of new data calibration and processing algorithms is the ability to carry out realistic simulations of radio wavelength arrays under a variety of observing conditions. The MIT Array Performance Simulator (MAPS) (http://www.haystack.mit.edu/ast/arrays/maps/index.html) is an observations simulation package designed to achieve this goal. MAPS accepts an input source list or sky model and generates a model visibility set for a user-defined "virtual observatory", incorporating such factors as array geometry, primary beam shape, field-of-view, and time and frequency resolution. Optionally, effects such as thermal noise, out-of-beam sources, variable station beams, and time/location-dependent ionospheric effects can be included. We will showcase current capabilities of MAPS for SKA applications by presenting results from an analysis of the effects of realistic sky backgrounds on the achievable image fidelity and dynamic range of SKA-like arrays comprising large numbers of small-diameter antennas.
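
    At its core, any such simulator turns a sky model and an array geometry into model visibilities. The toy sketch below computes point-source visibilities for an arbitrary antenna layout; the layout, source list and wavelength are invented, and a real package such as MAPS additionally models beams, noise and the ionosphere.

    ```python
    import numpy as np

    def model_visibilities(antennas, sources, wavelength):
        """Compute complex visibilities for every antenna pair (baseline).

        antennas : (N, 2) array of east/north positions in metres
        sources  : list of (flux, l, m) with (l, m) direction cosines
        """
        vis = {}
        n = len(antennas)
        for i in range(n):
            for j in range(i + 1, n):
                u, v = (antennas[j] - antennas[i]) / wavelength  # baseline in wavelengths
                vis[(i, j)] = sum(
                    flux * np.exp(-2j * np.pi * (u * l + v * m))
                    for flux, l, m in sources
                )
        return vis

    antennas = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 80.0], [120.0, 60.0]])
    sources = [(1.0, 0.01, 0.00), (0.5, -0.02, 0.015)]      # two point sources
    vis = model_visibilities(antennas, sources, wavelength=0.21)
    print(vis[(0, 1)])
    ```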

  16. Simulation Based Investigation of Focusing Phased Array Ultrasound in Dissimilar Metal Welds

    Directory of Open Access Journals (Sweden)

    Hun-Hee Kim

    2016-02-01

    Full Text Available Flaws have been found at dissimilar metal welds (DMWs) in nuclear power plants, such as reactor coolant system components, the Control Rod Drive Mechanism (CRDM), Bottom Mounted Instrumentation (BMI), etc. Notably, primary water stress corrosion cracking (PWSCC) in the DMWs could cause significant reliability problems at nuclear power plants. Therefore, phased array ultrasound is widely used for inspecting surface-breaking cracks and stress corrosion cracks in DMWs. However, inspection of DMWs using phased array ultrasound has a relatively low probability of detection of cracks, because the crystalline structure of the welds causes distortion and splitting of the ultrasonic beams propagating through the anisotropic medium. Therefore, advanced evaluation techniques for phased array ultrasound are needed to improve the probability of detection of flaws in DMWs. Thus, in this study, an investigation of focusing and steering phased array ultrasound in DMWs was carried out using a time reversal technique and an adaptive focusing technique based on finite element method (FEM) simulation. The focusing performance of three different focusing techniques was also evaluated by comparing the amplitude of phased array ultrasonic signals scattered from the targeted flaw under three different time delays.
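
    For context, the baseline (non-adaptive) focal law of a linear phased array is a simple geometric delay calculation that assumes a homogeneous, isotropic medium, exactly the assumption that anisotropic weld material violates. The sketch below computes such transmit delays; the element count, pitch, focal point and sound speed are arbitrary example values.

    ```python
    import numpy as np

    def focal_delays(n_elements, pitch, focus_x, focus_z, c):
        """Transmit delays (seconds) focusing a linear array at (focus_x, focus_z).

        Elements are centred on x = 0 with spacing `pitch`; c is the (assumed
        constant) sound speed. Delays are shifted so the earliest-firing
        element has zero delay.
        """
        x = (np.arange(n_elements) - (n_elements - 1) / 2) * pitch
        tof = np.hypot(focus_x - x, focus_z) / c     # time of flight per element
        return tof.max() - tof                       # far elements fire first

    # 32-element, 0.6 mm pitch array focusing 20 mm deep, 5 mm off-axis, c = 5900 m/s.
    delays = focal_delays(32, 0.6e-3, 5e-3, 20e-3, 5900.0)
    print(np.round(delays * 1e9, 1), "ns")
    ```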

  17. Multicoil resonance-based parallel array for smart wireless power delivery.

    Science.gov (United States)

    Mirbozorgi, S A; Sawan, M; Gosselin, B

    2013-01-01

    This paper presents a novel resonance-based multicoil structure as a smart power surface to wirelessly power up apparatus such as mobile devices, animal headstages, implanted devices, etc. The proposed powering system is based on a 4-coil resonance-based inductive link, the resonance coil of which is formed by an array of several paralleled coils acting as a smart power transmitter. The power transmitter employs simple circuit connections and includes only one power driver circuit per multicoil resonance-based array, which enables higher power transfer efficiency and power delivery to the load. The power transmitted by the driver circuit is proportional to the load seen by the individual coils in the array. Thus, the transmitted power scales with the load of the electric/electronic system to be powered, and does not divide equally over every parallel coil that forms the array. Instead, only the loaded coils of the parallel array transmit a significant part of the total transmitted power to the receiver. Such adaptive behavior enables superior power, size and cost efficiency compared with other solutions, since it does not need complex detection circuitry to find the location of the load. The performance of the proposed structure is verified by measurement results. Natural load detection and coverage of an area 4 times larger than conventional topologies, with a power transfer efficiency of 55%, are the novelties of the presented work.

  18. Investigation of cold extrusion process using coupled thermo-mechanical FEM analysis and adaptive friction modeling

    Science.gov (United States)

    Görtan, Mehmet Okan

    2017-10-01

    Cold extrusion processes are known for their excellent material usage as well as high efficiency in the production of large batches. Although the process starts at room temperature, workpiece temperatures may rise above 200°C. Moreover, contact normal stresses can exceed 2500 MPa, whereas surface enlargement values can reach up to 30. These changes affect the friction coefficients in cold extrusion processes. In the current study, friction coefficients between a plain carbon steel C4C (1.0303) and a tool steel (1.2379) are determined as functions of temperature and contact pressure using the sliding compression test (SCT). In order to represent the effects of contact normal stress and temperature on the friction coefficients, an empirical adaptive friction model is proposed. The validity of the model has been tested with experiments and finite element simulations of a cold forward extrusion process. By using the proposed adaptive friction model together with thermo-mechanical analysis, the deviation in process loads between numerical simulations and model experiments could be reduced from 18.6% to 3.3%.
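
    A pressure- and temperature-dependent friction law of the kind described can be supplied to an FE solver as a small user function. The sketch below uses a linear functional form with entirely hypothetical coefficients, only to illustrate how such an adaptive friction coefficient might be parameterized; it is not the calibrated model from the study.

    ```python
    def friction_coefficient(p_contact, temperature,
                             mu0=0.08, a=-1.0e-5, b=-2.0e-4,
                             p_ref=500.0, t_ref=20.0, mu_min=0.02):
        """Illustrative adaptive friction law: the coefficient decreases with
        contact normal stress (MPa) and temperature (deg C) relative to
        reference conditions. All coefficients are hypothetical placeholders,
        not the values calibrated from the sliding compression tests."""
        mu = mu0 + a * (p_contact - p_ref) + b * (temperature - t_ref)
        return max(mu, mu_min)

    # Example: high local pressure and elevated temperature lower the coefficient.
    print(friction_coefficient(p_contact=2500.0, temperature=200.0))
    ```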

  19. Fabrication of large NbSi bolometer arrays for CMB applications

    International Nuclear Information System (INIS)

    Ukibe, M.; Belier, B.; Camus, Ph.; Dobrea, C.; Dumoulin, L.; Fernandez, B.; Fournier, T.; Guillaudin, O.; Marnieros, S.; Yates, S.J.C.

    2006-01-01

    Future cosmic microwave background experiments for high-resolution anisotropy mapping and polarisation detection require large arrays of bolometers at low temperature. We have developed a process to build arrays of antenna-coupled bolometers for that purpose. With adjustment of the NbxSi1-x alloy composition, the array can be made of high impedance or superconductive (TES) sensors.

  20. Batch fabrication of disposable screen printed SERS arrays.

    Science.gov (United States)

    Qu, Lu-Lu; Li, Da-Wei; Xue, Jin-Qun; Zhai, Wen-Lei; Fossey, John S; Long, Yi-Tao

    2012-03-07

    A novel facile method of fabricating disposable and highly reproducible surface-enhanced Raman spectroscopy (SERS) arrays using screen printing was explored. The screen printing ink containing silver nanoparticles was prepared and printed on supporting materials by a screen printing process to fabricate SERS arrays (6 × 10 printed spots) in large batches. The fabrication conditions, SERS performance and application of these arrays were systematically investigated, and a detection limit of 1.6 × 10(-13) M for rhodamine 6G could be achieved. Moreover, the screen printed SERS arrays exhibited high reproducibility and stability, the spot-to-spot SERS signals showed that the intensity variation was less than 10% and SERS performance could be maintained over 12 weeks. Portable high-throughput analysis of biological samples was accomplished using these disposable screen printed SERS arrays.

  1. A Climate Change Adaptation Planning Process for Low-Lying, Communities Vulnerable to Sea Level Rise

    Directory of Open Access Journals (Sweden)

    Kristi Tatebe

    2012-09-01

    Full Text Available While the province of British Columbia (BC, Canada, provides guidelines for flood risk management, it is local governments’ responsibility to delineate their own flood vulnerability, assess their risk, and integrate these with planning policies to implement adaptive action. However, barriers such as the lack of locally specific data and public perceptions about adaptation options mean that local governments must address the need for adaptation planning within a context of scientific uncertainty, while building public support for difficult choices on flood-related climate policy and action. This research demonstrates a process to model, visualize and evaluate potential flood impacts and adaptation options for the community of Delta, in Metro Vancouver, across economic, social and environmental perspectives. Visualizations in 2D and 3D, based on hydrological modeling of breach events for existing dike infrastructure, future sea level rise and storm surges, are generated collaboratively, together with future adaptation scenarios assessed against quantitative and qualitative indicators. This ‘visioning package’ is being used with staff and a citizens’ Working Group to assess the performance, policy implications and social acceptability of the adaptation strategies. Recommendations based on the experience of the initiative are provided that can facilitate sustainable future adaptation actions and decision-making in Delta and other jurisdictions.

  2. Multi-Beam Radio Frequency (RF) Aperture Arrays Using Multiplierless Approximate Fast Fourier Transform (FFT)

    Science.gov (United States)

    2017-08-01

    [Report abstract not available in this record; recoverable front-matter keywords: Fourier transform, discrete Fourier transform, digital array processing, antenna beamformers. Recoverable figure title: "N-beam Array Processing System using a Linear Array".]

  3. Layout Optimisation of Wave Energy Converter Arrays

    DEFF Research Database (Denmark)

    Ruiz, Pau Mercadé; Nava, Vincenzo; Topper, Mathew B. R.

    2017-01-01

    This paper proposes an optimisation strategy for the layout design of wave energy converter (WEC) arrays. Optimal layouts are sought so as to maximise the absorbed power given a minimum q-factor, the minimum distance between WECs, and an area of deployment. To guarantee an efficient optimisation......, a four-parameter layout description is proposed. Three different optimisation algorithms are further compared in terms of performance and computational cost. These are the covariance matrix adaptation evolution strategy (CMA), a genetic algorithm (GA) and the glowworm swarm optimisation (GSO) algorithm...
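
    A layout optimisation of this kind can be prototyped with a few lines of code. The sketch below uses a hand-rolled (mu, lambda) evolution strategy as a simplified stand-in for CMA, together with a placeholder power objective and constraint checks; the four layout parameters, the objective and all constants are assumptions for illustration only.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def layout_from_params(params, n_wec=5):
        """Hypothetical 4-parameter layout: spacing dx, offset dy, row stagger
        and rotation angle define the WEC positions (one row of n_wec devices)."""
        dx, dy, stagger, angle = params
        base = np.stack([np.arange(n_wec) * dx,
                         (np.arange(n_wec) % 2) * stagger + dy], axis=1)
        rot = np.array([[np.cos(angle), -np.sin(angle)],
                        [np.sin(angle),  np.cos(angle)]])
        return base @ rot.T

    def absorbed_power(layout):
        """Placeholder objective that favours spread-out layouts (a real model
        would evaluate hydrodynamic interaction and the array q-factor)."""
        d = np.linalg.norm(layout[:, None] - layout[None, :], axis=-1)
        return np.sum(np.minimum(d[np.triu_indices(len(layout), 1)], 100.0))

    def feasible(layout, min_dist=50.0, area=500.0):
        d = np.linalg.norm(layout[:, None] - layout[None, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        return d.min() >= min_dist and np.all(np.abs(layout) <= area)

    def evolve(n_gen=60, pop=20, elite=5):
        """Simple (mu, lambda) evolution strategy over the 4 layout parameters."""
        mean = np.array([80.0, 0.0, 40.0, 0.3])
        sigma = np.array([20.0, 20.0, 20.0, 0.2])
        for _ in range(n_gen):
            cand = mean + sigma * rng.normal(size=(pop, 4))
            scores = [absorbed_power(layout_from_params(c))
                      if feasible(layout_from_params(c)) else -np.inf
                      for c in cand]
            best = cand[np.argsort(scores)[-elite:]]        # keep the elite
            mean, sigma = best.mean(axis=0), best.std(axis=0) + 1e-3
        return mean

    print("optimised layout parameters:", np.round(evolve(), 2))
    ```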

  4. Using a Modified ADAPTE Process to Enable Effective Implementation of Electrical Stimulation Therapy for Treating Pressure Ulcers in Persons With Spinal Cord Injury

    Directory of Open Access Journals (Sweden)

    Deena Lala

    2017-12-01

    Full Text Available Objectives: To apply a modified ADAPTE process to adapt best practices to a local context for successful implementation of electrical stimulation therapy (EST for treating pressure injuries in persons with spinal cord injury (SCI. Participants: An expert team of health care professionals and a consumer participated in a 2-day workshop to assist in the development of the locally adapted EST document in Southwest Ontario, Canada. Results: A process map illustrating the flow of activities to initiate EST for treating pressure injuries in persons with SCI based on the challenges and opportunities existing within this region was created. The team also developed a summary of roles and responsibilities delineating tasks specific to providing EST and identified a set of challenges likely to be encountered. Conclusions: The modified ADAPTE process provided a clear and flexible structure to adaptation when used for implementation planning. This article shares some challenges associated with using this process for local adaptation and shares strategies of improvement for future studies aimed at adapting a practice to their local environment.

  5. Microneedles array with biodegradable tips for transdermal drug delivery

    Science.gov (United States)

    Iliescu, Ciprian; Chen, Bangtao; Wei, Jiashen; Tay, Francis E. H.

    2008-12-01

    The paper presents an enhanced solution for transdermal drug delivery using a microneedle array with biodegradable tips. The microneedle array was fabricated using deep reactive ion etching (DRIE), and the biodegradable tips were made porous by an electrochemical etching process. The porous silicon microneedle tips can greatly enhance transdermal drug delivery in a minimally invasive, painless, and convenient manner; at the same time, they are breakable and biodegradable. The main problem with silicon microneedles is that the tips break during insertion. The solution proposed is to fabricate the microneedle tip from a biodegradable material - porous silicon. The silicon microneedles are fabricated using the DRIE notching effect of charges reflected on the mask. The process overcomes the difficulty of undercut control of the tips in the classical isotropic silicon etching process. Once the silicon tips were formed, the porous tips were generated using a classical electrochemical anodization process in MeCN/HF/H2O solution. The paper presents the experimental results of in vitro release of calcein and BSA through animal skins using a microneedle array with biodegradable tips. Compared to transdermal drug delivery without any enhancer, the microneedle array presented significant enhancement of drug release.

  6. Hybrid Arrays for Chemical Sensing

    Science.gov (United States)

    Kramer, Kirsten E.; Rose-Pehrsson, Susan L.; Johnson, Kevin J.; Minor, Christian P.

    In recent years, multisensory approaches to environment monitoring for chemical detection as well as other forms of situational awareness have become increasingly popular. A hybrid sensor is a multimodal system that incorporates several sensing elements and thus produces data that are multivariate in nature and may be significantly increased in complexity compared to data provided by single-sensor systems. Though a hybrid sensor is itself an array, hybrid sensors are often organized into more complex sensing systems through an assortment of network topologies. Part of the reason for the shift to hybrid sensors is due to advancements in sensor technology and computational power available for processing larger amounts of data. There is also ample evidence to support the claim that a multivariate analytical approach is generally superior to univariate measurements because it provides additional redundant and complementary information (Hall, D. L.; Linas, J., Eds., Handbook of Multisensor Data Fusion, CRC, Boca Raton, FL, 2001). However, the benefits of a multisensory approach are not automatically achieved. Interpretation of data from hybrid arrays of sensors requires the analyst to develop an application-specific methodology to optimally fuse the disparate sources of data generated by the hybrid array into useful information characterizing the sample or environment being observed. Consequently, multivariate data analysis techniques such as those employed in the field of chemometrics have become more important in analyzing sensor array data. Depending on the nature of the acquired data, a number of chemometric algorithms may prove useful in the analysis and interpretation of data from hybrid sensor arrays. It is important to note, however, that the challenges posed by the analysis of hybrid sensor array data are not unique to the field of chemical sensing. Applications in electrical and process engineering, remote sensing, medicine, and of course, artificial

  7. Analysis of an M/G/1 queue with customer impatience and an adaptive arrival process

    NARCIS (Netherlands)

    Boxma, O.J.; Prabhu, B.J.

    2009-01-01

    We study an M/G/1 queue with impatience and an adaptive arrival process. The rate of the arrival process changes according to whether an incoming customer is accepted or rejected. We analyse two different models for impatience: (i) based on workload, and (ii) based on queue length. For the
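
    A simulation sketch helps make the model concrete. The code below simulates a single-server queue with workload-based impatience and an arrival rate that increases on acceptance and decreases on rejection; the exponential service times, threshold and adaptation step are illustrative choices and not the exact model analysed in the paper.

    ```python
    import random

    def simulate(n_customers=100_000, mu_service=1.0, k_threshold=5.0,
                 lam=0.8, lam_step=0.05, lam_min=0.1, lam_max=2.0, seed=42):
        """Single-server queue with workload-based impatience and an adaptive
        arrival rate: the rate increases when a customer is accepted and
        decreases when one balks (is rejected)."""
        rng = random.Random(seed)
        workload, accepted = 0.0, 0
        for _ in range(n_customers):
            inter = rng.expovariate(lam)               # time to next arrival
            workload = max(0.0, workload - inter)      # server drains work
            service = rng.expovariate(mu_service)      # exponential service here
            if workload <= k_threshold:                # customer joins
                workload += service
                accepted += 1
                lam = min(lam_max, lam + lam_step)
            else:                                      # customer is impatient / balks
                lam = max(lam_min, lam - lam_step)
        return accepted / n_customers, lam

    frac, final_rate = simulate()
    print(f"fraction accepted: {frac:.3f}, final arrival rate: {final_rate:.3f}")
    ```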

  8. Application of Non-Kolmogorovian Probability and Quantum Adaptive Dynamics to Unconscious Inference in Visual Perception Process

    Science.gov (United States)

    Accardi, Luigi; Khrennikov, Andrei; Ohya, Masanori; Tanaka, Yoshiharu; Yamato, Ichiro

    2016-07-01

    Recently a novel quantum information formalism — quantum adaptive dynamics — was developed and applied to the modelling of information processing by bio-systems, including cognitive phenomena: from molecular biology (glucose-lactose metabolism for E.coli bacteria, epigenetic evolution) to cognition and psychology. From the foundational point of view, quantum adaptive dynamics describes the mutual adapting of the information states of two interacting systems (physical or biological) as well as the adapting of co-observations performed by the systems. In this paper we apply this formalism to model unconscious inference: the process of transition from sensation to perception. The paper combines theory and experiment. Statistical data collected in an experimental study on recognition of a particular ambiguous figure, the Schröder stairs, support the viability of the quantum(-like) model of unconscious inference, including the modelling of biases generated by rotation contexts. From the probabilistic point of view, we study (for concrete experimental data) the problem of contextuality of probability, i.e. its dependence on experimental contexts. Mathematically, contextuality leads to non-Kolmogorovness: probability distributions generated by various rotation contexts cannot be treated in the Kolmogorovian framework. At the same time they can be embedded in a “big Kolmogorov space” as conditional probabilities. However, such a Kolmogorov space has too complex a structure, and the operational quantum formalism in the form of quantum adaptive dynamics simplifies the modelling essentially.

  9. Self-assembly and optical properties of patterned ZnO nanodot arrays

    International Nuclear Information System (INIS)

    Song Yijian; Zheng Maojun; Ma Li

    2007-01-01

    Patterned ZnO nanodot (ND) arrays and a ND-cavity microstructure were realized on an anodic alumina membrane (AAM) surface through a spin-coating sol-gel process, which benefits from the morphology and localized negative charge surface of AAM as well as the optimized sol concentration. The growth mechanism is believed to be a self-assembly process. This provides a simple approach to fabricate semiconductor quantum dot (QD) arrays and a QD-cavity system with its advantage in low cost and mass production. Strong ultra-violet emission, a multi-phonon process, and its special structure-related properties were observed in the patterned ZnO ND arrays

  10. A Field Programmable Gate Array-Based Reconfigurable Smart-Sensor Network for Wireless Monitoring of New Generation Computer Numerically Controlled Machines

    Directory of Open Access Journals (Sweden)

    Ion Stiharu

    2010-08-01

    Full Text Available Computer numerically controlled (CNC) machines have evolved to adapt to increasing technological and industrial requirements. To cover these needs, new generation machines have to perform monitoring strategies by incorporating multiple sensors. Since in most applications the online processing of the variables is essential, the use of smart sensors is necessary. The contribution of this work is the development of a wireless network platform of reconfigurable smart sensors for CNC machine applications complying with the measurement requirements of new generation CNC machines. Four different smart sensors are put under test in the network and their corresponding signal processing techniques are implemented in a Field Programmable Gate Array (FPGA)-based sensor node.

  11. A Field Programmable Gate Array-Based Reconfigurable Smart-Sensor Network for Wireless Monitoring of New Generation Computer Numerically Controlled Machines

    Science.gov (United States)

    Moreno-Tapia, Sandra Veronica; Vera-Salas, Luis Alberto; Osornio-Rios, Roque Alfredo; Dominguez-Gonzalez, Aurelio; Stiharu, Ion; de Jesus Romero-Troncoso, Rene

    2010-01-01

    Computer numerically controlled (CNC) machines have evolved to adapt to increasing technological and industrial requirements. To cover these needs, new generation machines have to perform monitoring strategies by incorporating multiple sensors. Since in most applications the online processing of the variables is essential, the use of smart sensors is necessary. The contribution of this work is the development of a wireless network platform of reconfigurable smart sensors for CNC machine applications complying with the measurement requirements of new generation CNC machines. Four different smart sensors are put under test in the network and their corresponding signal processing techniques are implemented in a Field Programmable Gate Array (FPGA)-based sensor node. PMID:22163602

  12. Development of a scalable generic platform for adaptive optics real time control

    Science.gov (United States)

    Surendran, Avinash; Burse, Mahesh P.; Ramaprakash, A. N.; Parihar, Padmakar

    2015-06-01

    The main objective of the present project is to explore the viability of an adaptive optics control system based exclusively on Field Programmable Gate Arrays (FPGAs), making strong use of their parallel processing capability. In an Adaptive Optics (AO) system, the Deformable Mirror (DM) control voltages are usually generated from the Wavefront Sensor (WFS) measurements by multiplying the wavefront slopes with a predetermined reconstructor matrix. The ability to access several hundred hard multipliers and memories concurrently in an FPGA allows performance far beyond that of a modern CPU or GPU for tasks with a well-defined structure such as Adaptive Optics control. The target of the current project is to generate a real-time wavefront correction signal from the Wavefront Sensor signals, with the system flexible enough to accommodate all current wavefront sensing techniques as well as the different methods used for wavefront compensation. The system should also accommodate different data transmission protocols (such as Ethernet, USB, IEEE 1394, etc.) for transmitting data to and from the FPGA device, thus providing a more flexible platform for Adaptive Optics control. Preliminary simulation results for the formulation of the platform, and a design of a fully scalable slope computer, are presented.
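
    The real-time kernel that the FPGA is meant to accelerate is essentially a matrix-vector multiply applied every WFS frame. The sketch below shows that step with a pseudo-inverse reconstructor computed from a synthetic interaction matrix and a simple integrator law; the dimensions and gain are arbitrary example values.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    n_slopes, n_actuators = 2 * 16 * 16, 10 * 10   # e.g. 16x16 Shack-Hartmann, 10x10 DM

    # Interaction matrix: slopes measured while poking each actuator (synthetic here).
    interaction = rng.normal(size=(n_slopes, n_actuators))

    # Command (reconstructor) matrix via pseudo-inverse, computed offline.
    reconstructor = np.linalg.pinv(interaction)

    def real_time_step(slopes, commands, gain=0.5):
        """One control iteration: closed-loop integrator on the reconstructed error."""
        return commands - gain * (reconstructor @ slopes)

    commands = np.zeros(n_actuators)
    slopes = rng.normal(size=n_slopes)             # one frame of WFS measurements
    commands = real_time_step(slopes, commands)
    print(commands[:5])
    ```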

  13. Adaptive interpolation of discrete-time signals that can be modeled as autoregressive processes

    NARCIS (Netherlands)

    Janssen, A.J.E.M.; Veldhuis, R.N.J.; Vries, L.B.

    1986-01-01

    The authors present an adaptive algorithm for the restoration of lost sample values in discrete-time signals that can locally be described by means of autoregressive processes. The only restrictions are that the positions of the unknown samples should be known and that they should be embedded in a

  14. Adaptive interpolation of discrete-time signals that can be modeled as autoregressive processes

    NARCIS (Netherlands)

    Janssen, A.J.E.M.; Veldhuis, Raymond N.J.; Vries, Lodewijk B.

    1986-01-01

    This paper presents an adaptive algorithm for the restoration of lost sample values in discrete-time signals that can locally be described by means of autoregressive processes. The only restrictions are that the positions of the unknown samples should be known and that they should be embedded in a
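
    The idea shared by the two records above can be condensed as follows: fit an AR model to the surrounding samples and choose the missing samples that minimise the summed squared prediction error. The sketch below alternates those two steps; it is a simplified illustration of the approach, not the authors' exact algorithm, and the AR order and iteration count are arbitrary.

    ```python
    import numpy as np

    def ar_fit(x, order):
        """Least-squares AR coefficient estimate: x[n] ~ sum_k a[k] * x[n-1-k]."""
        rows = np.array([x[n - order:n][::-1] for n in range(order, len(x))])
        a, *_ = np.linalg.lstsq(rows, x[order:], rcond=None)
        return a

    def restore(x, missing, order=10, iterations=5):
        """Fill the samples indexed by `missing` by alternating AR estimation and
        least-squares minimisation of the AR prediction error."""
        x = x.copy()
        x[missing] = 0.0                                  # crude initialisation
        known = np.setdiff1d(np.arange(len(x)), missing)
        for _ in range(iterations):
            a = ar_fit(x, order)
            # Prediction-error operator B such that e = B @ x
            B = np.zeros((len(x) - order, len(x)))
            for i in range(len(x) - order):
                B[i, order + i] = 1.0
                B[i, i:order + i] = -a[::-1]
            # Minimise ||B_unknown x_u + B_known x_k||^2 over the unknown samples.
            x_u, *_ = np.linalg.lstsq(B[:, missing],
                                      -B[:, known] @ x[known], rcond=None)
            x[missing] = x_u
        return x

    # Example: a noisy sinusoid with a burst of 20 consecutive lost samples.
    t = np.arange(400)
    clean = np.sin(2 * np.pi * t / 37) + 0.05 * np.random.default_rng(5).normal(size=t.size)
    missing = np.arange(200, 220)
    restored = restore(clean, missing)
    print("max restoration error:", np.abs(restored[missing] - clean[missing]).max())
    ```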

  15. Multilevel photonic modules for millimeter-wave phased-array antennas

    Science.gov (United States)

    Paolella, Arthur C.; Bauerle, Athena; Joshi, Abhay M.; Wright, James G.; Coryell, Louis A.

    2000-09-01

    Millimeter wave phased array systems have antenna element sizes and spacings similar to MMIC chip dimensions by virtue of the operating wavelength. Modules designed with traditional planar packaging techniques are therefore difficult to implement. An advantageous way to maintain a small module footprint compatible with Ka-band and higher frequency systems is to take advantage of two leading edge technologies, opto-electronic integrated circuits (OEICs) and multilevel packaging technology. Under a Phase II SBIR these technologies are combined to form photonic modules for optically controlled millimeter wave phased array antennas. The proposed module, consisting of an OEIC integrated with a planar antenna array, will operate in the 40 GHz region. The OEIC consists of an InP-based dual-depletion PIN photodetector and a distributed amplifier. The multilevel module will be fabricated using an enhanced circuit-processing thick-film process. Since the modules are batch fabricated using standard commercial processes, they have the potential to be low cost while maintaining high performance, impacting both military and commercial communications systems.

  16. Three-dimensional lithographically-defined organotypic tissue arrays for quantitative analysis of morphogenesis and neoplastic progression

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, Celeste M.; Inman, Jamie L.; Bissell, Mina J.

    2008-02-13

    Here we describe a simple micromolding method to construct three-dimensional arrays of organotypic epithelial tissue structures that approximate in vivo histology. An elastomeric stamp containing an array of posts of defined geometry and spacing is used to mold microscale cavities into the surface of type I collagen gels. Epithelial cells are seeded into the cavities and covered with a second layer of collagen. The cells reorganize into hollow tissues corresponding to the geometry of the cavities. Patterned tissue arrays can be produced in 3-4 h and will undergo morphogenesis over the following one to three days. The protocol can easily be adapted to study a variety of tissues and aspects of normal and neoplastic development.

  17. Country, climate change adaptation and colonisation: insights from an Indigenous adaptation planning process, Australia.

    Science.gov (United States)

    Nursey-Bray, Melissa; Palmer, Robert

    2018-03-01

    Indigenous peoples are going to be disproportionately affected by climate change. Developing tailored, place-based, and culturally appropriate solutions will be necessary. Yet finding cultural and institutional 'fit' within and between competing values-based climate and environmental management governance regimes remains an ongoing challenge. This paper reports on a collaborative research project with the Arabana people of central Australia that resulted in the production of the first Indigenous community-based climate change adaptation strategy in Australia. We aimed to understand what conditions are needed to support Indigenous-driven adaptation initiatives, whether there are any cultural differences that need accounting for, and how, once developed, such initiatives can be integrated into existing governance arrangements. Our analysis found that climate change adaptation is based on the centrality of the connection to 'country' (traditional land), needs to be aligned with cultural values, and should focus on the building of adaptive capacity. We find that the development of climate change adaptation initiatives cannot be divorced from the historical context of how the Arabana experienced and collectively remember colonisation. We argue that in developing culturally responsive climate governance for and with Indigenous peoples, the history of colonisation and the ongoing dominance of entrenched Western governance regimes need acknowledging and redressing in contemporary environmental and climate management.

  18. Fabrication of large NbSi bolometer arrays for CMB applications

    Energy Technology Data Exchange (ETDEWEB)

    Ukibe, M. [AIST, Tsukuba Central 2, Tsukuba, Ibaraki 305-8568 (Japan); CNRS-CSNSM, Bat 104, Orsay Campus F-91405 (France); Belier, B. [CNRS-IEF, Bat 220, Orsay Campus F-91405 (France); Camus, Ph. [CNRS-CRTBT, 25 avenue des Martyrs, Grenoble F-38042 (France)]. E-mail: philippe.camus@grenoble.cnrs.fr; Dobrea, C. [CNRS-CSNSM, Bat 104, Orsay Campus F-91405 (France); Dumoulin, L. [CNRS-CSNSM, Bat 104, Orsay Campus F-91405 (France); Fernandez, B. [CNRS-CRTBT, 25 avenue des Martyrs, Grenoble F-38042 (France); Fournier, T. [CNRS-CRTBT, 25 avenue des Martyrs, Grenoble F-38042 (France); Guillaudin, O. [CNRS-LPSC, 53 avenue des Martyrs, Grenoble F-38042 (France); Marnieros, S. [CNRS-CSNSM, Bat 104, Orsay Campus F-91405 (France); Yates, S.J.C. [CNRS-CSNSM, Bat 104, Orsay Campus F-91405 (France)

    2006-04-15

    Future cosmic microwave background experiments for high-resolution anisotropy mapping and polarisation detection require large arrays of bolometers at low temperature. We have developed a process to build arrays of antenna-coupled bolometers for that purpose. With adjustment of the NbxSi1-x alloy composition, the array can be made of high impedance or superconductive (TES) sensors.

  19. Adaptive Jacobian Fuzzy Attitude Control for Flexible Spacecraft Combined Attitude and Sun Tracking System

    Science.gov (United States)

    Chak, Yew-Chung; Varatharajoo, Renuganth

    2016-07-01

    Many spacecraft attitude control systems today use reaction wheels to deliver precise torques to achieve three-axis attitude stabilization. However, irrecoverable mechanical failure of reaction wheels could potentially lead to mission interruption or total loss. The electrically-powered Solar Array Drive Assemblies (SADA), which are usually installed in the pitch axis and rotate the solar arrays to track the Sun, can produce torques to compensate for a pitch-axis wheel failure. In addition, the attitude control of a flexible spacecraft poses a difficult problem. The difficulties include the strongly nonlinear coupled dynamics between the rigid hub and the flexible solar arrays, and the imprecisely known system parameters, such as the inertia matrix, damping ratios, and flexible mode frequencies. In order to overcome these drawbacks, adaptive Jacobian tracking fuzzy control is proposed in this work for the combined attitude and sun-tracking control problem of a flexible spacecraft during attitude maneuvers. For the adaptation of kinematic and dynamic uncertainties, the proposed scheme uses an adaptive sliding vector based on the estimated attitude velocity via an approximate Jacobian matrix. The unknown nonlinearities are approximated by deriving fuzzy models with a set of linguistic If-Then rules, using the ideas of sector nonlinearity and local approximation in fuzzy partition spaces. The uncertain parameters of the estimated nonlinearities and the Jacobian matrix are adjusted online by an adaptive law to realize feedback control. The attitude of the spacecraft can be directly controlled with the Jacobian feedback control when the attitude pointing trajectory is designed with respect to the spacecraft coordinate frame itself. A significant feature of this work is that the proposed adaptive Jacobian tracking scheme results not only in the convergence of angular position and angular velocity tracking errors, but also in the convergence of estimated angular velocity to

  20. Cascadia Subduction Zone Earthquake Source Spectra from an Array of Arrays

    Science.gov (United States)

    Gomberg, J. S.; Vidale, J. E.

    2011-12-01

    It is generally accepted that spectral characteristics distinguish 'slow' seismic sources from those of 'ordinary' or 'fast' earthquakes. To explore this difference, we measure ordinary earthquake spectra of about 30 seismic events located near the Cascadia plate interface where ETS regularly occurs. We separate the effects of local site response, regional propagation (attenuation and spreading), and processes near or at the source for a dense dataset recorded on an array of eight seismic micro-arrays. The arrays have apertures of 1-2 km with 21-31 seismographs in each, and are separated by 10-20 km. We assume that the spectrum of each recorded signal may be described by the product of 1) a frequency-dependent site response, 2) propagation effects that include geometric spreading and an exponential decay that varies with distance and frequency, and 3) a frequency-dependent source spectrum. Using more than 1000 seismograms from all events recorded at all sites simultaneously, we solve for frequency-dependent site responses and source spectra, as well as a single regional Q value. We interpret only the slope of the source terms because most earthquakes have magnitudes less than 0, so we expect that their corner frequencies are higher than the recorded passband. The amplitude variation in the site response within the same array sometimes exceeds a factor of 3, which is consistent with the variation seen visually. We see variability in the slopes of the source spectra comparable to the difference between 'slow' and 'fast' events observed in other studies, and with a strong correlation with source location. Spectral slopes of spatially clustered sources are nearly identical but usually differ from those of clusters a few tens of km away, and spectral content varies systematically with location within the distribution of events. While these differences may reflect varying source processes (e.g., rupture velocity, stress drop), the strong correlation
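
    The separation described here is, in essence, a log-linear inversion. The sketch below sets up such a generalized inversion on a small synthetic data set at a single frequency, solving jointly for source terms, site terms and Q with a zero-mean constraint on the site terms; the parameterization and all numbers are illustrative assumptions, not the authors' exact formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    f, beta = 5.0, 3500.0                        # frequency (Hz) and shear velocity (m/s)
    n_events, n_sites = 6, 8

    # Synthetic "true" model used to generate observations.
    true_src = rng.normal(2.0, 0.5, n_events)    # log source terms
    true_site = rng.normal(0.0, 0.3, n_sites)    # log site terms
    true_site -= true_site.mean()                # make them zero-mean by construction
    true_Q = 300.0
    r = rng.uniform(2e4, 8e4, size=(n_events, n_sites))   # hypocentral distances (m)

    logA = (true_src[:, None] + true_site[None, :] - np.log(r)
            - np.pi * f * r / (true_Q * beta)
            + 0.05 * rng.normal(size=r.shape))            # observation noise

    # Linear system: log A + log r = src_e + site_s - (pi f r / beta) * (1/Q)
    rows, rhs = [], []
    for e in range(n_events):
        for s in range(n_sites):
            row = np.zeros(n_events + n_sites + 1)
            row[e] = 1.0
            row[n_events + s] = 1.0
            row[-1] = -np.pi * f * r[e, s] / beta          # multiplies 1/Q
            rows.append(row)
            rhs.append(logA[e, s] + np.log(r[e, s]))

    # Constraint removing the source/site trade-off: site terms sum to zero.
    constraint = np.zeros(n_events + n_sites + 1)
    constraint[n_events:n_events + n_sites] = 1.0
    A = np.vstack(rows + [constraint])
    b = np.append(rhs, 0.0)

    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("recovered Q:", 1.0 / m[-1])
    print("max source-term error:", np.max(np.abs(m[:n_events] - true_src)))
    ```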

  1. Process Development of Gallium Nitride Phosphide Core-Shell Nanowire Array Solar Cell

    Science.gov (United States)

    Chuang, Chen

    Dilute nitride GaNP is a promising material for opto-electronic applications due to its band gap tunability. The efficiency of a GaNxP1-x/GaNyP1-y core-shell nanowire solar cell (NWSC) is expected to reach as high as 44% with 1% N in the core and 9% N in the shell. By developing such high-efficiency NWSCs on silicon substrates, the cost of solar photovoltaics could be further reduced to $61/MWh, which is competitive with the levelized cost of electricity (LCOE) of fossil fuels. Therefore, a suitable NWSC structure and fabrication process need to be developed to realize this promising NWSC. This thesis is devoted to the development of a fabrication process for GaNxP1-x/GaNyP1-y core-shell nanowire solar cells. The thesis is divided into two major parts. In the first part, previously grown GaP/GaNyP1-y core-shell nanowire samples are used to develop the fabrication process of a gallium nitride phosphide nanowire solar cell. The designs for the nanowire arrays, passivation layer, polymeric filler spacer, transparent collecting layer and metal contact are discussed and fabricated. The properties of these NWSCs are also characterized to point toward the future development of gallium nitride phosphide NWSCs. In the second part, a nano-hole template made by nanosphere lithography is studied for selective area growth of nanowires to improve the structure of core-shell NWSCs. The fabrication process of the nano-hole templates and the results are presented. To obtain consistent features of the nano-hole template, the Taguchi method is used to optimize the fabrication process of the nano-hole templates.

  2. Brain computer interface learning for systems based on electrocorticography and intracortical microelectrode arrays.

    Science.gov (United States)

    Hiremath, Shivayogi V; Chen, Weidong; Wang, Wei; Foldes, Stephen; Yang, Ying; Tyler-Kabara, Elizabeth C; Collinger, Jennifer L; Boninger, Michael L

    2015-01-01

    A brain-computer interface (BCI) system transforms neural activity into control signals for external devices in real time. A BCI user needs to learn to generate specific cortical activity patterns to control external devices effectively. We call this process BCI learning, and it often requires significant effort and time. Therefore, it is important to study this process and develop novel and efficient approaches to accelerate BCI learning. This article reviews major approaches that have been used for BCI learning, including computer-assisted learning, co-adaptive learning, operant conditioning, and sensory feedback. We focus on BCIs based on electrocorticography and intracortical microelectrode arrays for restoring motor function. This article also explores the possibility of brain modulation techniques in promoting BCI learning, such as electrical cortical stimulation, transcranial magnetic stimulation, and optogenetics. Furthermore, as proposed by recent BCI studies, we suggest that BCI learning is in many ways analogous to motor and cognitive skill learning, and therefore skill learning should be a useful metaphor to model BCI learning.

  3. Developing infrared array controller with software real time operating system

    Science.gov (United States)

    Sako, Shigeyuki; Miyata, Takashi; Nakamura, Tomohiko; Motohara, Kentaro; Uchimoto, Yuka Katsuno; Onaka, Takashi; Kataza, Hirokazu

    2008-07-01

    Real-time capabilities are required for the controller of a large format array to reduce the dead time caused by readout and data transfer. Real-time processing has traditionally been achieved with dedicated processors, including DSP, CPLD, and FPGA devices. However, the dedicated processors have problems with memory resources, inflexibility, and high cost. Meanwhile, a recent PC has sufficient CPU and memory resources to control an infrared array and to process a large amount of frame data in real time. In this study, we have developed an infrared array controller with a software real-time operating system (RTOS) instead of dedicated processors. A Linux PC equipped with an RTAI extension and a dual-core CPU is used as the main computer, and one of the CPU cores is allocated to the real-time processing. A digital I/O board with DMA functions is used as the I/O interface. The signal-processing cores are integrated in the OS kernel as a real-time driver module, which is composed of two virtual devices, the clock processor and the frame processor tasks. The array controller with the RTOS realizes complicated operations easily, flexibly, and at a low cost.

  4. Co-Prime Frequency and Aperture Design for HF Surveillance, Wideband Radar Imaging, and Nonstationary Array Processing

    Science.gov (United States)

    2018-03-01

    to develop novel co-prime sampling and array design strategies that achieve high-resolution estimation of spectral power distributions and signal ... by the array geometry and the frequency offset. We overcome this limitation by introducing a novel sparsity-based multi-target localization approach ... estimation using a sparse uniform linear array with two CW signals of co-prime frequencies," IEEE International Workshop on Computational Advances
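    As background for the co-prime designs referenced above, the sketch below builds the standard co-prime array geometry (two interleaved uniform subarrays with co-prime element counts) and counts the distinct lags of its difference coarray, which is what yields the increased degrees of freedom. The pair (M, N) and unit spacing are illustrative choices, not values taken from the report.

```python
import numpy as np

# Standard co-prime array construction: two uniform subarrays with M and N
# elements (M, N co-prime), spaced N*d and M*d respectively, sharing an origin.
M, N = 3, 5          # co-prime pair (assumed for illustration)
d = 0.5              # unit spacing in wavelengths (assumed)

sub1 = np.arange(M) * N * d      # M elements at spacing N*d
sub2 = np.arange(N) * M * d      # N elements at spacing M*d
positions = np.unique(np.concatenate([sub1, sub2]))

# The difference coarray determines how many distinct spatial lags (and hence
# how many spectral components/sources) the sparse array can resolve.
lags = np.unique(np.round((positions[:, None] - positions[None, :]) / d).astype(int))
print("physical elements:", len(positions))
print("distinct coarray lags:", len(lags))
```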

  5. THE FORMATION OF SUBJECTIVITY AND NORMS IN THE PROCESS OF ADAPTATION OF YOUNG EMPLOYEES AT THE ENTERPRISE

    Directory of Open Access Journals (Sweden)

    Natalia V. Popova

    2016-01-01

    Full Text Available The aim of the publication is to determine how the formation of subjective qualities and of norms are interrelated in the process of adaptation of young employees at an enterprise. Methods. The research methodology involves a comprehensive combination of theoretical analysis and the results of applied research at enterprises of the Sverdlovsk region; the dialectical method and comparative analysis are used. Results and theoretical novelty. The questions of adaptation of young employees at the enterprise are considered, and the concepts of «subjectivity» and «norms» in philosophy are analyzed. Subjectivity is presented as the personal basis of the social activity of the young worker at the enterprise; norms, as a means by which the individual adapts to the community in which he or she finds himself or herself. On the basis of socio-philosophical analysis, the characteristics of young people working at industrial enterprises are described, youth policy at industrial enterprises is characterized, and the formation of the values and norms of young workers in the process of adaptation to the enterprise is examined. Personal subjectivity is shown to be the basis of the social activity of the young worker in the enterprise. It is also shown that the relevance of forming subjective qualities and norms in young people arises not only from the need to develop the identity of young workers, but also from the economic security and wellbeing of the industrial enterprises where their working careers begin. The practical significance lies in the socio-philosophical substantiation of the interrelation between the formation of subjective qualities and norms in the process of adaptation of young employees in a company, in the main provisions for developing adaptation programmes for young employees at the enterprise, and in supporting the teaching of social and humanitarian disciplines to bachelor's and master's students majoring in «Organization of Work with Youth».

  6. X-ray microcalorimeter arrays fabricated by surface micromachining

    International Nuclear Information System (INIS)

    Hilton, G.C.; Beall, J.A.; Deiker, S.; Vale, L.R.; Doriese, W.B.; Beyer, Joern; Ullom, J.N.; Reintsema, C.D.; Xu, Y.; Irwin, K.D.

    2004-01-01

    We are developing arrays of Mo/Cu transition edge sensor-based detectors for use as X-ray microcalorimeters and sub-millimeter bolometers. We have fabricated 8x8 pixel X-ray microcalorimeter arrays using surface micromachining. Surface-micromachining techniques hold the promise of scalability to much larger arrays and may allow for the integration of in-plane multiplexer elements. In this paper we describe the surface micromachining process and recent improvements in the device geometry that provide for increased mechanical strength. We also present X-ray and heat pulse spectra collected using these detectors

  7. Fabrication of microlens arrays using a CO2-assisted embossing technique

    International Nuclear Information System (INIS)

    Huang, Tzu-Chien; Chan, Bin-Da; Ciou, Jyun-Kai; Yang, Sen-Yeu

    2009-01-01

    This paper reports a method to fabricate microlens arrays at a low processing temperature and low pressure. The method is based on embossing a softened polymeric substrate over a mold with micro-hole arrays; owing to the effects of capillarity and surface tension, microlens arrays can be formed. The embossing medium is CO2 gas, which supplies a uniform pressing pressure so that large-area microlens arrays can be fabricated. The CO2 gas also acts as a solvent to plasticize the polymer substrates. With this special dissolving ability and the isotropic pressing capacity of CO2 gas, microlens arrays can be fabricated at a low temperature (lower than Tg) and free of thermally induced residual stress. Such a combined dissolving-and-embossing mechanism with CO2 gas makes the fabrication of microlens arrays direct compared with more complex processes, and more suitable for optical use. It is also found that the sag height of the microlenses changes when different CO2 dissolving pressures and times are used, which makes it easy to fabricate microlens arrays of different geometries without using different molds. The quality, uniformity and optical properties of the fabricated microlens arrays have been verified by measuring their dimensions, surface smoothness, focal length, transmittance and transmitted light intensity.

  8. Phased-array technology for automatic pipeline inspection; Phased Array-Technologie fuer automatisierte Pipeline-Inspektion

    Energy Technology Data Exchange (ETDEWEB)

    Bosch, J.; Hugger, A.; Franz, J. [GE Energy, PII Pipetronix GmbH, Stutensee (Germany); Falter, S.; Oberdoerfer, Y. [GE Inspection Technology Systems, Huerth (Germany)

    2004-07-01

    Pipeline inspection pigs with individual test probes are limited in their function by the fixed arrangement of the sensors on the sensor carrier. In contrast, phased-array technology enables several test tasks to be carried out simultaneously, e.g. crack and corrosion testing, which formerly required two different test runs with different sensor carriers. The angles of incidence can be adapted to the test medium, and virtual sensors can be matched in size and mutual overlap so that, for example, small pittings can be detected. The sensor set-up presented here enables a higher test speed and improved flaw detection. This contribution describes the measuring principle, the inspection pig (UltraScan DUO), and some results of prototype measurements.
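    For orientation, the sketch below shows the textbook delay law behind this kind of phased-array steering: a linear delay ramp across the elements sets the beam angle in the coupling medium, and Snell's law gives the refracted angle in the pipe wall. The element count, pitch, angles and sound speeds are illustrative values, not the UltraScan DUO's actual delay tables.

```python
import numpy as np

# Classic phased-array delay law for steering a linear array.
def steering_delays(n_elements, pitch_m, theta_deg, c_m_per_s):
    n = np.arange(n_elements)
    delays = n * pitch_m * np.sin(np.radians(theta_deg)) / c_m_per_s
    return delays - delays.min()              # shift so all delays are non-negative

# Snell's law: refracted (shear) angle in the pipe wall for a given incidence
# angle in the liquid coupling medium, using nominal sound speeds.
def refracted_angle(theta_inc_deg, c_liquid, c_steel_shear):
    s = np.sin(np.radians(theta_inc_deg)) * c_steel_shear / c_liquid
    return np.degrees(np.arcsin(np.clip(s, -1.0, 1.0)))

delays = steering_delays(n_elements=16, pitch_m=0.6e-3, theta_deg=17.0,
                         c_m_per_s=1480.0)    # water-like coupling medium (assumed)
print("element delays (ns):", np.round(delays * 1e9, 1))
print("shear angle in steel: %.1f deg" % refracted_angle(17.0, 1480.0, 3230.0))
```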

  9. Microfabricated Silicon Microneedle Array for Transdermal Drug Delivery

    International Nuclear Information System (INIS)

    Ji, J; Tay, F E; Miao Jianmin; Iliescu, C

    2006-01-01

    This paper presents processes developed for the microfabrication of silicon microneedle arrays. Three types of microneedle structures were achieved by isotropic etching in an inductively coupled plasma (ICP) using SF6/O2 gases, by a combination of isotropic and deep etching, and by wet etching, respectively. A microneedle array with biodegradable porous tips was further developed based on the fabricated microneedles.

  10. Microfabricated Silicon Microneedle Array for Transdermal Drug Delivery

    Energy Technology Data Exchange (ETDEWEB)

    Ji, J [Mechanical Engineering National University of Singapore, 119260, Singapore (Singapore); Tay, F E [Mechanical Engineering National University of Singapore, 119260, Singapore (Singapore); Miao Jianmin [MicroMachines Center, School of Mechanical and Aerospace Engineering, Nanyang Technological University, 50 Nanyang Avenue, 639798 (Singapore); Iliescu, C [Institute of Bioengineering and Nanotechnology, 31 Biopolis Way, Nanos, 04-01, 138669 (Singapore)

    2006-04-01

    This paper presents processes developed for the microfabrication of silicon microneedle arrays. Three types of microneedle structures were achieved by isotropic etching in an inductively coupled plasma (ICP) using SF{sub 6}/O{sub 2} gases, by a combination of isotropic and deep etching, and by wet etching, respectively. A microneedle array with biodegradable porous tips was further developed based on the fabricated microneedles.

  11. Assessing the Depth of Cognitive Processing as the Basis for Potential User-State Adaptation

    Directory of Open Access Journals (Sweden)

    Irina-Emilia Nicolae

    2017-10-01

    Full Text Available Objective: Decoding neurocognitive processes on a single-trial basis with Brain-Computer Interface (BCI) techniques can reveal the user's internal interpretation of the current situation. Such information can potentially be exploited to make devices and interfaces more user aware. In this line of research, we took a further step by studying neural correlates of different levels of cognitive processes and developing a method that allows us to quantify how deeply presented information is processed in the brain. Methods/Approach: Seventeen participants took part in an EEG study in which we evaluated different levels of cognitive processing (no processing, shallow, and deep processing) within three distinct domains (memory, language, and visual imagination). Our investigations showed gradual differences in the amplitudes of event-related potentials (ERPs) and in the extent and duration of event-related desynchronization (ERD), which both correlate with task difficulty. We performed multi-modal classification to map the measured correlates of neurocognitive processing to the corresponding level of processing. Results: Successful classification of the neural components was achieved, which reflects the level of cognitive processing performed by the participants. The results show performances above chance level for each participant and a mean performance of 70–90% for all conditions and classification pairs. Significance: The successful estimation of the level of cognition on a single-trial basis supports the feasibility of user-state adaptation based on ongoing neural activity. There is a variety of potential use cases, such as a user-friendly adaptive design of an interface or the development of assistance systems in safety-critical workplaces.

  12. Assessing the Depth of Cognitive Processing as the Basis for Potential User-State Adaptation

    Science.gov (United States)

    Nicolae, Irina-Emilia; Acqualagna, Laura; Blankertz, Benjamin

    2017-01-01

    Objective: Decoding neurocognitive processes on a single-trial basis with Brain-Computer Interface (BCI) techniques can reveal the user's internal interpretation of the current situation. Such information can potentially be exploited to make devices and interfaces more user aware. In this line of research, we took a further step by studying neural correlates of different levels of cognitive processes and developing a method that allows us to quantify how deeply presented information is processed in the brain. Methods/Approach: Seventeen participants took part in an EEG study in which we evaluated different levels of cognitive processing (no processing, shallow, and deep processing) within three distinct domains (memory, language, and visual imagination). Our investigations showed gradual differences in the amplitudes of event-related potentials (ERPs) and in the extent and duration of event-related desynchronization (ERD), which both correlate with task difficulty. We performed multi-modal classification to map the measured correlates of neurocognitive processing to the corresponding level of processing. Results: Successful classification of the neural components was achieved, which reflects the level of cognitive processing performed by the participants. The results show performances above chance level for each participant and a mean performance of 70–90% for all conditions and classification pairs. Significance: The successful estimation of the level of cognition on a single-trial basis supports the feasibility of user-state adaptation based on ongoing neural activity. There is a variety of potential use cases, such as a user-friendly adaptive design of an interface or the development of assistance systems in safety-critical workplaces. PMID:29046625

  13. Assessing the Depth of Cognitive Processing as the Basis for Potential User-State Adaptation.

    Science.gov (United States)

    Nicolae, Irina-Emilia; Acqualagna, Laura; Blankertz, Benjamin

    2017-01-01

    Objective: Decoding neurocognitive processes on a single-trial basis with Brain-Computer Interface (BCI) techniques can reveal the user's internal interpretation of the current situation. Such information can potentially be exploited to make devices and interfaces more user aware. In this line of research, we took a further step by studying neural correlates of different levels of cognitive processes and developing a method that allows us to quantify how deeply presented information is processed in the brain. Methods/Approach: Seventeen participants took part in an EEG study in which we evaluated different levels of cognitive processing (no processing, shallow, and deep processing) within three distinct domains (memory, language, and visual imagination). Our investigations showed gradual differences in the amplitudes of event-related potentials (ERPs) and in the extent and duration of event-related desynchronization (ERD), which both correlate with task difficulty. We performed multi-modal classification to map the measured correlates of neurocognitive processing to the corresponding level of processing. Results: Successful classification of the neural components was achieved, which reflects the level of cognitive processing performed by the participants. The results show performances above chance level for each participant and a mean performance of 70-90% for all conditions and classification pairs. Significance: The successful estimation of the level of cognition on a single-trial basis supports the feasibility of user-state adaptation based on ongoing neural activity. There is a variety of potential use cases, such as a user-friendly adaptive design of an interface or the development of assistance systems in safety-critical workplaces.

  14. Riems influenza a typing array (RITA): An RT-qPCR-based low density array for subtyping avian and mammalian influenza a viruses.

    Science.gov (United States)

    Hoffmann, Bernd; Hoffmann, Donata; Henritzi, Dinah; Beer, Martin; Harder, Timm C

    2016-06-03

    Rapid and sensitive diagnostic approaches are of the utmost importance for the detection of humans and animals infected by specific influenza virus subtype(s). Cascade-like diagnostics starting with the use of pan-influenza assays and subsequent subtyping devices are normally used. Here, we demonstrated a novel low density array combining 32 TaqMan(®) real-time RT-PCR systems in parallel for the specific detection of the haemagglutinin (HA) and neuraminidase (NA) subtypes of avian and porcine hosts. The sensitivity of the newly developed system was compared with that of the pan-influenza assay, and the specificity of all RT-qPCRs was examined using a broad panel of 404 different influenza A virus isolates representing 45 different subtypes. Furthermore, we analysed the performance of the RT-qPCR assays with diagnostic samples obtained from wild birds and swine. Due to the open format of the array, adaptations to detect newly emerging influenza A virus strains can easily be integrated. The RITA array represents a competitive, fast and sensitive subtyping tool that requires neither new machinery nor additional training of staff in a lab where RT-qPCR is already established.

  15. Array processors: an introduction to their architecture, software, and applications in nuclear medicine

    International Nuclear Information System (INIS)

    King, M.A.; Doherty, P.W.; Rosenberg, R.J.; Cool, S.L.

    1983-01-01

    Array processors are "number crunchers" that dramatically enhance the processing power of nuclear medicine computer systems for applications dealing with the repetitive operations involved in digital image processing of large segments of data. The general architecture and the programming of array processors are introduced, along with some applications of array processors to the reconstruction of emission tomographic images, digital image enhancement, and functional image formation.

  16. Multiengine Speech Processing Using SNR Estimator in Variable Noisy Environments

    Directory of Open Access Journals (Sweden)

    Ahmad R. Abu-El-Quran

    2012-01-01

    Full Text Available We introduce a multiengine speech processing system that can detect the location and the type of audio signal in variable noisy environments. The system detects the location of the audio source using a microphone array; it first examines the audio, determines whether it is speech or nonspeech, and then estimates the signal-to-noise ratio (SNR) using a Discrete-Valued SNR Estimator. Using this SNR value, instead of trying to adapt the speech signal to the speech processing system, we adapt the speech processing system to the surrounding environment of the captured speech signal. In this paper, we introduce the Discrete-Valued SNR Estimator and a multiengine classifier, using Multiengine Selection or Multiengine Weighted Fusion, and we use SI as an example of the speech processing. The Discrete-Valued SNR Estimator achieves an accuracy of 98.4% in characterizing the environment's SNR. Compared to a conventional single-engine SI system, the improvement in accuracy was as high as 9.0% and 10.0% for Multiengine Selection and Multiengine Weighted Fusion, respectively.
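    A minimal sketch of the idea of snapping an SNR estimate to a discrete grid and using it to select a processing engine is given below. The discrete levels, the engine names, and the assumption that a noise-only segment is available are all illustrative stand-ins for the paper's estimator and engine set.

```python
import numpy as np

def discrete_snr_db(observed, noise, levels=(0, 5, 10, 15, 20, 30)):
    """Estimate SNR from an observed segment and a noise-only segment, then
    snap the result to the nearest discrete level."""
    snr = 10.0 * np.log10(np.mean(observed**2) / np.mean(noise**2))
    levels = np.asarray(levels, dtype=float)
    return int(levels[np.argmin(np.abs(levels - snr))])

# One recognition "engine" per training condition (hypothetical names); pick
# the engine whose training SNR matches the estimated environment.
engines = {0: "engine_0dB", 10: "engine_10dB", 20: "engine_20dB", 30: "engine_clean"}

rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)           # surrogate speech segment
noise = 0.3 * rng.standard_normal(16000)      # surrogate background noise
est = discrete_snr_db(speech + noise, noise, levels=tuple(engines))
print("estimated SNR class:", est, "dB ->", engines[est])
```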

  17. Simulation tools for industrial applications of phased array inspection techniques

    International Nuclear Information System (INIS)

    Mahaut, St.; Roy, O.; Chatillon, S.; Calmon, P.

    2001-01-01

    Ultrasonic phased-array techniques have been developed at the French Atomic Energy Commission in order to improve defect characterization and adaptability to various inspection configurations (complex-geometry specimens). Such transducers allow 'standard' techniques (adjustable beam steering and focusing) or more 'advanced' techniques (self-focusing on defects, for instance). To estimate the performance of these techniques, models have been developed that can compute the ultrasonic field radiated by an arbitrary phased-array transducer through any complex specimen, and predict the ultrasonic response of various defects inspected with a known beam. Both modeling applications are gathered in the Civa software, dedicated to NDT expertise. The use of these complementary models makes it possible to evaluate the ability of a phased array to steer and focus the ultrasonic beam, and therefore its relevance for detecting and characterizing defects. These models are specifically developed to give accurate solutions for realistic inspection applications. This paper briefly describes the CIVA models and presents some applications dedicated to the inspection of complex specimens containing various defects with a phased array used to steer and focus the beam. Defect detection and characterization performances are discussed for the various configurations. Some experimental validations of both models are also presented. (authors)

  18. Automated Array Assembly, Phase 2. Quarterly technical progress report, April-June 1979

    Energy Technology Data Exchange (ETDEWEB)

    Carbajal, B.G.

    1979-07-01

    The Automated Array Assembly Task, Phase 2 of the Low Cost Solar Array (LSA) Project is a process development task. This contract provides for the fabrication of modules from large area Tandem Junction Cells (TJC). The key activities in this contract effort are (a) Large Area TJC including cell design, process verification and cell fabrication and (b) Tandem Junction Module (TJM) including definition of the cell-module interfaces, substrate fabrication, interconnect fabrication and module assembly. The overall goal is to advance solar cell module process technology to meet the 1986 goal of a production capability of 500 megawatts per year at a cost of less than $500 per peak kilowatt. This contract will focus on the Tandem Junction Module process. During this quarter, effort was focused on design and process verification. The large area TJC design was completed and the design verification was completed. Process variation experiments led to refinements in the baseline TJC process. Formed steel substrates were porcelainized. Cell array assembly techniques using infrared soldering are being checked out. Dummy cell arrays up to 5 cell by 5 cell have been assembled using all backside contacts.

  19. Adaptive model predictive process control using neural networks

    Science.gov (United States)

    Buescher, K.L.; Baum, C.C.; Jones, R.D.

    1997-08-19

    A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data. 46 figs.

  20. Processes for design, construction and utilisation of arrays of light-emitting diodes and light-emitting diode-coupled optical fibres for multi-site brain light delivery.

    Science.gov (United States)

    Bernstein, Jacob G; Allen, Brian D; Guerra, Alexander A; Boyden, Edward S

    2015-05-01

    Optogenetics enables light to be used to control the activity of genetically targeted cells in the living brain. Optical fibers can be used to deliver light to deep targets, and LEDs can be spatially arranged to enable patterned light delivery. In combination, arrays of LED-coupled optical fibers can enable patterned light delivery to deep targets in the brain. Here we describe the process flow for making LED arrays and LED-coupled optical fiber arrays, explaining key optical, electrical, thermal, and mechanical design principles to enable the manufacturing, assembly, and testing of such multi-site targetable optical devices. We also explore accessory strategies such as surgical automation approaches as well as innovations to enable low-noise concurrent electrophysiology.

  1. Dynamical analysis of surface-insulated planar wire array Z-pinches

    Science.gov (United States)

    Li, Yang; Sheng, Liang; Hei, Dongwei; Li, Xingwen; Zhang, Jinhai; Li, Mo; Qiu, Aici

    2018-05-01

    The ablation and implosion dynamics of planar wire array Z-pinches with and without surface insulation are compared and discussed in this paper. The paper first presents a phenomenological model named the ablation and cascade snowplow implosion (ACSI) model, which accounts for the ablation and implosion phases of a planar wire array Z-pinch in a single simulation. The comparison between experimental data and simulation results shows that the ACSI model gives a fairly good description of the dynamical characteristics of planar wire array Z-pinches. Surface insulation introduces notable differences in the ablation phase of planar wire array Z-pinches: the ablation phase is divided into two stages, insulation-layer ablation and tungsten-wire ablation. This two-stage ablation process of insulated wires is simulated in the ACSI model by updating the formulas describing the ablation process.

  2. Ecological opportunity and predator-prey interactions: linking eco-evolutionary processes and diversification in adaptive radiations.

    Science.gov (United States)

    Pontarp, Mikael; Petchey, Owen L

    2018-03-14

    Much of life's diversity has arisen through ecological opportunity and adaptive radiations, but the mechanistic underpinning of such diversification is not fully understood. Competition and predation can affect adaptive radiations, but contrasting theoretical and empirical results show that they can both promote and interrupt diversification. A mechanistic understanding of the link between microevolutionary processes and macroevolutionary patterns is thus needed, especially in trophic communities. Here, we use a trait-based eco-evolutionary model to investigate the mechanisms linking competition, predation and adaptive radiations. By combining available micro-evolutionary theory and simulations of adaptive radiations we show that intraspecific competition is crucial for diversification as it induces disruptive selection, in particular in early phases of radiation. The diversification rate is however decreased in later phases owing to interspecific competition, as niche availability and population sizes are decreased. We provide new insight into how predation tends to have a negative effect on prey diversification through decreased population sizes, decreased disruptive selection and the exclusion of prey from parts of niche space. The seemingly disparate effects of competition and predation on adaptive radiations, listed in the literature, may thus be acting and interacting in the same adaptive radiation at different relative strengths as the radiation progresses. © 2018 The Authors.

  3. OFDM Radar Space-Time Adaptive Processing by Exploiting Spatio-Temporal Sparsity

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Satyabrata [ORNL

    2013-01-01

    We propose a sparsity-based space-time adaptive processing (STAP) algorithm to detect a slowly moving target using an orthogonal frequency division multiplexing (OFDM) radar. We observe that the target and interference spectra are inherently sparse in the spatio-temporal domain. Hence, we exploit that sparsity to develop an efficient STAP technique that uses a considerably smaller number of secondary data and produces performance equivalent to that of other existing STAP techniques. In addition, the use of an OFDM signal increases the frequency diversity of our system, as different scattering centers of a target resonate at different frequencies, and thus improves target detectability. First, we formulate a realistic sparse-measurement model for an OFDM radar considering both clutter and jammers as the interfering sources. Then, we apply a residual sparse-recovery technique based on the LASSO estimator to estimate the target and interference covariance matrices, and subsequently compute the optimal STAP-filter weights. Our numerical results demonstrate a comparative performance analysis of the proposed sparse-STAP algorithm with four other existing STAP methods. Furthermore, we discover that the OFDM-STAP filter weights are adaptable to the frequency variabilities of the target and interference responses, in addition to the spatio-temporal variabilities. Hence, by better utilizing the frequency variabilities, we propose an adaptive OFDM-waveform design technique, and consequently gain a significant amount of STAP performance improvement.
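    The sketch below shows the step that follows covariance estimation in a STAP chain of this kind: forming the space-time steering vector and computing MVDR-type filter weights w = R^-1 v / (v^H R^-1 v). Here the covariance comes from simulated secondary data with diagonal loading rather than from the paper's LASSO-based sparse recovery, and the antenna/pulse counts are arbitrary.

```python
import numpy as np

# Space-time steering vector: Kronecker product of temporal (Doppler) and
# spatial steering vectors for normalized frequencies.
def space_time_steering(n_ant, n_pulse, spatial_freq, doppler_freq):
    a = np.exp(2j * np.pi * spatial_freq * np.arange(n_ant))      # spatial steering
    b = np.exp(2j * np.pi * doppler_freq * np.arange(n_pulse))    # temporal steering
    return np.kron(b, a)

# Standard STAP/MVDR weights given an interference covariance estimate.
def stap_weights(R_hat, v):
    Rinv_v = np.linalg.solve(R_hat, v)
    return Rinv_v / (v.conj() @ Rinv_v)

rng = np.random.default_rng(1)
N, M = 4, 8                                            # antennas, pulses (assumed)
X = rng.standard_normal((N * M, 100)) + 1j * rng.standard_normal((N * M, 100))
R_hat = X @ X.conj().T / 100 + 1e-2 * np.eye(N * M)    # sample covariance + loading
v = space_time_steering(N, M, spatial_freq=0.1, doppler_freq=0.25)
w = stap_weights(R_hat, v)
print("gain toward the target steering vector:", np.abs(w.conj() @ v))  # ~1.0
```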

  4. Navigating Earthquake Physics with High-Resolution Array Back-Projection

    Science.gov (United States)

    Meng, Lingsen

    Understanding earthquake source dynamics is a fundamental goal of geophysics. Progress toward this goal has been slow due to the gap between state-of-the-art earthquake simulations and the limited source imaging techniques based on conventional low-frequency finite fault inversions. Seismic array processing is an alternative source imaging technique that employs the higher-frequency content of the earthquakes and provides finer detail of the source process with few prior assumptions. While back-projection provides key observations of previous large earthquakes, the standard beamforming back-projection suffers from low resolution and severe artifacts. This thesis introduces the MUSIC technique, a high-resolution array processing method that aims to narrow the gap between the seismic observations and earthquake simulations. MUSIC is a high-resolution method that takes advantage of higher-order signal statistics. The method has not yet been widely used in seismology because of the nonstationary and incoherent nature of the seismic signal. We adapt MUSIC to transient seismic signals by incorporating multitaper cross-spectrum estimates. We also adopt a "reference window" strategy that mitigates the "swimming artifact," a systematic drift effect in back-projection. The improved MUSIC back-projections allow the imaging of recent large earthquakes in finer detail, which gives rise to new perspectives on dynamic simulations. In the 2011 Tohoku-Oki earthquake, we observe frequency-dependent rupture behaviors which relate to the material variation along the dip of the subduction interface. In the 2012 off-Sumatra earthquake, we image complicated ruptures involving an orthogonal fault system and an unusual branching direction. This result, along with our complementary dynamic simulations, probes the pressure-insensitive strength of the deep oceanic lithosphere. In another example, back-projection is applied to the 2010 M7 Haiti earthquake recorded at regional distance. The
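    The core of the MUSIC step described above can be sketched in a few lines: eigendecompose the array cross-spectral (covariance) matrix, keep the noise subspace, and scan candidate steering vectors. The toy below uses a uniform linear array and synthetic plane-wave sources rather than teleseismic data, and it omits the multitaper and reference-window refinements.

```python
import numpy as np

# MUSIC pseudospectrum: project candidate steering vectors onto the noise
# subspace of the array covariance matrix and take the reciprocal.
def music_spectrum(R, steering_matrix, n_sources):
    eigvals, eigvecs = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = eigvecs[:, :-n_sources]                    # noise-subspace eigenvectors
    proj = np.linalg.norm(En.conj().T @ steering_matrix, axis=0) ** 2
    return 1.0 / proj

# Example: 10-sensor half-wavelength ULA, two plane waves at -10 and 20 degrees.
rng = np.random.default_rng(2)
n_sensors, n_snapshots = 10, 200
angles = np.radians([-10.0, 20.0])
A = np.exp(1j * np.pi * np.outer(np.arange(n_sensors), np.sin(angles)))
S = rng.standard_normal((2, n_snapshots)) + 1j * rng.standard_normal((2, n_snapshots))
noise = 0.1 * (rng.standard_normal((n_sensors, n_snapshots))
               + 1j * rng.standard_normal((n_sensors, n_snapshots)))
X = A @ S + noise
R = X @ X.conj().T / n_snapshots

scan = np.radians(np.linspace(-90, 90, 721))
A_scan = np.exp(1j * np.pi * np.outer(np.arange(n_sensors), np.sin(scan)))
P = music_spectrum(R, A_scan, n_sources=2)

peaks = [i for i in range(1, len(P) - 1) if P[i] > P[i - 1] and P[i] > P[i + 1]]
top = sorted(peaks, key=lambda i: P[i], reverse=True)[:2]
print("estimated DOAs (deg):", sorted(np.degrees(scan[top])))
```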

  5. The data array, a tool to interface the user to a large data base

    Science.gov (United States)

    Foster, G. H.

    1974-01-01

    Aspects of the processing of spacecraft data are considered. Use of the data array in a large address space as an intermediate form in data processing for a large scientific data base is advocated. Techniques for efficient indexing in data arrays are reviewed, and the data array method for mapping an arbitrary structure onto a linear address space is shown. A compromise between the two forms is given. The impact of the data array on the user interface is considered along with implementation.
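    The basic mapping alluded to above is the familiar row-major address computation: each multidimensional index collapses to a single linear address. A minimal sketch, with a made-up telemetry array shape, is:

```python
# Row-major (C-order) mapping of an n-dimensional index onto a linear address,
# the basic operation behind placing a data array in a flat address space.
def linear_address(index, shape):
    addr = 0
    for i, n in zip(index, shape):
        if not 0 <= i < n:
            raise IndexError("index out of bounds")
        addr = addr * n + i
    return addr

# Hypothetical 3-D telemetry array of shape (orbits, frames, words):
shape = (16, 1024, 64)
print(linear_address((2, 10, 5), shape))   # 2*1024*64 + 10*64 + 5 = 131717
```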

  6. SNP Arrays

    Directory of Open Access Journals (Sweden)

    Jari Louhelainen

    2016-10-01

    Full Text Available The papers published in this Special Issue "SNP arrays" (Single Nucleotide Polymorphism Arrays) focus on several perspectives associated with arrays of this type. The papers range from a case report to reviews, thereby targeting wider audiences working in this field. The research focus of SNP arrays is often human cancers, but this Issue expands that focus to include areas such as rare conditions, animal breeding and bioinformatics tools. Given the limited scope, the spectrum of papers is nothing short of remarkable, and even from a technical point of view these papers will contribute to the field at a general level. Three of the papers published in this Special Issue focus on the use of various SNP array approaches in the analysis of three different cancer types. Two of the papers concentrate on two very different rare conditions, applying the SNP arrays slightly differently. Finally, two other papers evaluate the use of SNP arrays in the context of genetic analysis of livestock. The findings reported in these papers help to close gaps in the current literature and also give guidelines for future applications of SNP arrays.

  7. Time in Redox Adaptation Processes: From Evolution to Hormesis

    Directory of Open Access Journals (Sweden)

    Mireille M. J. P. E. Sthijns

    2016-09-01

    Full Text Available Life on Earth has to adapt to an ever-changing environment. For example, after the introduction of oxygen into the atmosphere, an antioxidant network evolved to cope with the exposure to oxygen. The adaptive mechanisms of the antioxidant network, specifically the glutathione (GSH) system, are reviewed with a special focus on time. The quickest adaptive response to oxidative stress is direct enzyme modification, increasing the GSH levels or activating the GSH-dependent protective enzymes. After several hours, a hormetic response is seen at the transcriptional level by up-regulating Nrf2-mediated expression of enzymes involved in GSH synthesis. In the long run, adaptations occur at the epigenetic and genomic level; for example, the ability of phototrophic bacteria to synthesize GSH. Apparently, in an adaptive hormetic response not only the dose or the compound, but also time, should be considered. This is essential for targeted interventions aimed at preventing disease by successfully coping with changes in the environment, e.g., oxidative stress.

  8. Stakeholder participation and sustainable fisheries: an integrative framework for assessing adaptive comanagement processes

    Directory of Open Access Journals (Sweden)

    Christian Stöhr

    2014-09-01

    Full Text Available Adaptive comanagement (ACM has been suggested as the way to successfully achieve sustainable environmental governance. Despite excellent research, the field still suffers from underdeveloped frameworks of causality. To address this issue, we suggest a framework that integrates the structural frame of Plummer and Fitzgibbons' "adaptive comanagement" with the specific process characteristics of Senecah's "Trinity of Voice." The resulting conceptual hybrid is used to guide the comparison of two cases of stakeholder participation in fisheries management - the Swedish Co-management Initiative and the Polish Fisheries Roundtable. We examine how different components of preconditions and the process led to the observed outcomes. The analysis shows that despite the different cultural and ecological contexts, the cases developed similar results. Triggered by a crisis, the participating stakeholders were successful in developing trust and better communication and enhanced learning. This can be traced back to a combination of respected leadership, skilled mediation, and a strong focus on deliberative approaches and the creation of respectful dialogue. We also discuss the difficulties of integrating outcomes of the work of such initiatives into the actual decision-making process. Finally, we specify the lessons learned for the cases and the benefits of applying our integrated framework.

  9. Adapting high-level language programs for parallel processing using data flow

    Science.gov (United States)

    Standley, Hilda M.

    1988-01-01

    EASY-FLOW, a very high-level data flow language, is introduced for the purpose of adapting programs written in a conventional high-level language to a parallel environment. The level of parallelism provided is of the large-grained variety in which parallel activities take place between subprograms or processes. A program written in EASY-FLOW is a set of subprogram calls as units, structured by iteration, branching, and distribution constructs. A data flow graph may be deduced from an EASY-FLOW program.

  10. Physical Limitations To Nonuniformity Correction In IR Focal Plane Arrays

    Science.gov (United States)

    Scribner, D. A.; Kruer, M. R.; Gridley, J. C.; Sarkady, K.

    1988-05-01

    Simple nonuniformity correction algorithms currently in use can be severely limited by the nonlinear response characteristics of the individual pixels in an IR focal plane array. Although more complicated multi-point algorithms improve the correction process, they too can be limited by nonlinearities. Furthermore, analysis of single-pixel noise power spectra usually shows some level of 1/f noise. This in turn causes pixel outputs to drift independently of each other, thus causing the spatial noise (often called fixed pattern noise) of the array to increase as a function of the time since the last calibration. Measurements are presented for two arrays (a HgCdTe hybrid and a Pt:Si CCD) describing pixel nonlinearities, 1/f noise, and residual spatial noise (after nonuniformity correction). Of particular emphasis is spatial noise as a function of the elapsed time since the last calibration and the calibration process selected. The resulting spatial noise is examined in terms of its effect on the NEΔT performance of each array tested, and comparisons are made. Finally, a discussion of implications for array developers is given.
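    As context for the correction algorithms discussed above, the sketch below implements the common two-point (gain/offset) nonuniformity correction from two flat-field frames. The simulated detector is perfectly linear and drift-free, which is exactly why the residual spatial noise vanishes here; the nonlinearity and 1/f drift measured in the paper are what make real residuals grow with time since calibration.

```python
import numpy as np

# Two-point nonuniformity correction: per-pixel gain and offset derived from
# flat-field frames recorded at two blackbody levels.
def two_point_nuc(raw, flat_low, flat_high):
    gain = (flat_high.mean() - flat_low.mean()) / (flat_high - flat_low)
    offset = flat_low.mean() - gain * flat_low
    return gain * raw + offset

rng = np.random.default_rng(3)
true_gain = 1.0 + 0.05 * rng.standard_normal((64, 64))   # pixel-to-pixel gain spread
true_off = 10.0 * rng.standard_normal((64, 64))          # pixel-to-pixel offset spread
scene = 500.0 * np.ones((64, 64))                        # uniform scene (arbitrary units)

flat_low = true_gain * 300.0 + true_off                  # calibration frame 1
flat_high = true_gain * 700.0 + true_off                 # calibration frame 2
raw = true_gain * scene + true_off
corrected = two_point_nuc(raw, flat_low, flat_high)
print("spatial std before: %.2f  after: %.4f" % (raw.std(), corrected.std()))
```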

  11. A novel method to design sparse linear arrays for ultrasonic phased array.

    Science.gov (United States)

    Yang, Ping; Chen, Bin; Shi, Ke-Ren

    2006-12-22

    In ultrasonic phased array testing, a sparse array can increase the resolution by enlarging the aperture without adding system complexity. Designing a sparse array involves choosing the best, or at least a better, configuration from a large number of candidate arrays. We first designed sparse arrays using a genetic algorithm alone, but found that the resulting arrays had poor performance and poor consistency. A method based on the Minimum Redundancy Linear Array was therefore adopted: some elements are first fixed by the minimum-redundancy array in order to ensure spatial resolution, and a genetic algorithm is then used to optimize the remaining elements. Sparse arrays designed by this method have much better performance and consistency than arrays designed by a genetic algorithm alone. Both simulation and experiment confirm the effectiveness of the approach.
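    A sparse-array candidate is typically scored by the peak sidelobe level of its array factor, which is the kind of fitness a hybrid minimum-redundancy/genetic search would evaluate. The element layout below is an arbitrary illustration, not one of the paper's optimized designs.

```python
import numpy as np

# Far-field array factor (in dB) of an array of point elements at the given
# positions, expressed in wavelengths.
def array_factor_db(positions_wl, angles_deg):
    u = np.sin(np.radians(angles_deg))
    af = np.exp(2j * np.pi * np.outer(positions_wl, u)).sum(axis=0)
    af = np.abs(af) / len(positions_wl)
    return 20.0 * np.log10(np.maximum(af, 1e-12))

angles = np.linspace(-90.0, 90.0, 1801)
full = np.arange(16) * 0.5                              # 16-element, half-wavelength array
sparse = np.array([0, 1, 2, 5, 8, 11, 13, 15]) * 0.5    # 8 active elements (illustrative)

for name, pos in [("full", full), ("sparse", sparse)]:
    af_db = array_factor_db(pos, angles)
    mainlobe = np.abs(angles) < 10.0                    # exclude the broadside main lobe
    psl = af_db[~mainlobe].max()
    print(f"{name:6s}: aperture = {pos.max():.1f} wavelengths, peak sidelobe = {psl:.1f} dB")
```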

  12. Robotic inspection of fiber reinforced composites using phased array UT

    Science.gov (United States)

    Stetson, Jeffrey T.; De Odorico, Walter

    2014-02-01

    Ultrasound is the current NDE method of choice to inspect large fiber reinforced airframe structures. Over the last 15 years Cartesian based scanning machines using conventional ultrasound techniques have been employed by all airframe OEMs and their top tier suppliers to perform these inspections. Technical advances in both computing power and commercially available, multi-axis robots now facilitate a new generation of scanning machines. These machines use multiple end effector tools taking full advantage of phased array ultrasound technologies yielding substantial improvements in inspection quality and productivity. This paper outlines the general architecture for these new robotic scanning systems as well as details the variety of ultrasonic techniques available for use with them including advances such as wide area phased array scanning and sound field adaptation for non-flat, non-parallel surfaces.

  13. Free-floating epithelial micro-tissue arrays: a low cost and versatile technique.

    Science.gov (United States)

    Flood, P; Alvarez, L; Reynaud, E G

    2016-10-11

    Three-dimensional (3D) tissue models are invaluable tools that can closely reflect the in vivo physiological environment. However, they are usually difficult to develop, have a low throughput and are often costly; limiting their utility to most laboratories. The recent availability of inexpensive additive manufacturing printers and open source 3D design software offers us the possibility to easily create affordable 3D cell culture platforms. To demonstrate this, we established a simple, inexpensive and robust method for producing arrays of free-floating epithelial micro-tissues. Using a combination of 3D computer aided design and 3D printing, hydrogel micro-moulding and collagen cell encapsulation we engineered microenvironments that consistently direct the growth of micro-tissue arrays. We described the adaptability of this technique by testing several immortalised epithelial cell lines (MDCK, A549, Caco-2) and by generating branching morphology and micron to millimetre scaled micro-tissues. We established by fluorescence and electron microscopy that micro-tissues are polarised, have cell type specific differentiated phenotypes and regain native in vivo tissue qualities. Finally, using Salmonella typhimurium we show micro-tissues display a more physiologically relevant infection response compared to epithelial monolayers grown on permeable filter supports. In summary, we have developed a robust and adaptable technique for producing arrays of epithelial micro-tissues. This in vitro model has the potential to be a valuable tool for studying epithelial cell and tissue function/architecture in a physiologically relevant context.

  14. Optical technology for microwave applications VI and optoelectronic signal processing for phased-array antennas III; Proceedings of the Meeting, Orlando, FL, Apr. 20-23, 1992

    Science.gov (United States)

    Yao, Shi-Kay; Hendrickson, Brian M.

    The following topics related to optical technology for microwave applications are discussed: advanced acoustooptic devices, signal processing device technologies, optical signal processor technologies, microwave and optomicrowave devices, advanced lasers and sources, wideband electrooptic modulators, and wideband optical communications. The topics considered in the discussion of optoelectronic signal processing for phased-array antennas include devices, signal processing, and antenna systems.

  15. Towards Measuring the Adaptability of an AO4BPEL Process

    OpenAIRE

    Botangen, Khavee Agustus; Yu, Jian; Sheng, Michael

    2017-01-01

    Adaptability is a significant property which enables software systems to continuously provide the required functionality and achieve optimal performance. The recognised importance of adaptability makes its evaluation an essential task. However, the various adaptability dimensions and implementation mechanisms make adaptive strategies difficult to evaluate. In service oriented computing, several frameworks that extend the WS-BPEL, the de facto standard in composing distributed business applica...

  16. A low-density SNP array for analyzing differential selection in freshwater and marine populations of threespine stickleback (Gasterosteus aculeatus).

    Science.gov (United States)

    Ferchaud, Anne-Laure; Pedersen, Susanne H; Bekkevold, Dorte; Jian, Jianbo; Niu, Yongchao; Hansen, Michael M

    2014-10-06

    The threespine stickleback (Gasterosteus aculeatus) has become an important model species for studying both contemporary and parallel evolution. In particular, differential adaptation to freshwater and marine environments has led to high differentiation between freshwater and marine stickleback populations at the phenotypic trait of lateral plate morphology and the underlying candidate gene Ectodysplasin (EDA). Many studies have focused on this trait and candidate gene, although other genes involved in marine-freshwater adaptation may be equally important. In order to develop a resource for rapid and cost-efficient analysis of genetic divergence between freshwater and marine sticklebacks, we generated a low-density SNP (Single Nucleotide Polymorphism) array encompassing markers of chromosome regions under putative directional selection, along with neutral markers for background. RAD (Restriction site Associated DNA) sequencing of sixty individuals representing two freshwater and one marine population led to the identification of 33,993 SNP markers. Ninety-six of these were chosen for the low-density SNP array, among which 70 represented SNPs under putative directional selection in freshwater vs. marine environments, whereas 26 SNPs were assumed to be neutral. Annotation of these regions revealed several genes that are candidates for affecting stickleback phenotypic variation, some of which have been observed in previous studies whereas others are new. We have developed a cost-efficient low-density SNP array that allows for rapid screening of polymorphisms in threespine stickleback. The array provides a valuable tool for analyzing adaptive divergence between freshwater and marine stickleback populations beyond the well-established candidate gene Ectodysplasin (EDA).

  17. Learning to Adapt. Organisational Adaptation to Climate Change Impacts

    International Nuclear Information System (INIS)

    Berkhout, F.; Hertin, J.; Gann, D.M.

    2006-01-01

    Analysis of human adaptation to climate change should be based on realistic models of adaptive behaviour at the level of organisations and individuals. The paper sets out a framework for analysing adaptation to the direct and indirect impacts of climate change in business organisations with new evidence presented from empirical research into adaptation in nine case-study companies. It argues that adaptation to climate change has many similarities with processes of organisational learning. The paper suggests that business organisations face a number of obstacles in learning how to adapt to climate change impacts, especially in relation to the weakness and ambiguity of signals about climate change and the uncertainty about benefits flowing from adaptation measures. Organisations rarely adapt 'autonomously', since their adaptive behaviour is influenced by policy and market conditions, and draws on resources external to the organisation. The paper identifies four adaptation strategies that pattern organisational adaptive behaviour

  18. Adaptive neural network controller for the molten steel level control of strip casting processes

    International Nuclear Information System (INIS)

    Chen, Hung Yi; Huang, Shiuh Jer

    2010-01-01

    The twin-roll strip casting process is a steel-strip production method which combines continuous casting and hot rolling processes. The production line from molten liquid steel to the final steel strip is shortened and the production cost is reduced significantly as compared to conventional continuous casting. The quality of the strip casting process depends on many process parameters, such as the molten steel level in the pool, the solidification position, and the roll gap. Their relationships are complex and the strip casting process has the properties of nonlinear uncertainty and time-varying characteristics. It is difficult to establish an accurate process model for designing a model-based controller to monitor the strip quality. In this paper, a model-free adaptive neural network controller is developed to overcome this problem. The proposed control strategy is based on a neural network structure combined with a sliding-mode control scheme. An adaptive rule is employed to adjust the weights of the radial basis functions on-line by using the reaching condition of a specified sliding surface. This surface has the on-line learning ability to respond to the system's nonlinear and time-varying behaviors. Since this model-free controller has a simple control structure and a small number of control parameters, it is easy to implement. Simulation results, based on a semi-experimental system dynamic model and parameters, are presented to show the control performance of the proposed intelligent controller. In addition, the control performance is compared with that of a traditional PID controller.
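    To make the idea concrete, here is a minimal sketch of a radial-basis-function network whose output weights are adapted from a sliding surface. The toy second-order plant, gains, and basis-centre placement are invented for illustration and are not the paper's strip-casting model or adaptive law.

```python
import numpy as np

# Gaussian radial basis functions evaluated at the current error state.
def rbf(x, centers, width=1.0):
    return np.exp(-np.linalg.norm(x - centers, axis=1) ** 2 / (2.0 * width ** 2))

rng = np.random.default_rng(5)
centers = rng.uniform(-2.0, 2.0, size=(25, 2))   # basis centres over (e, e_dot) space
W = np.zeros(25)                                 # network output weights
lam, gamma, k_s, dt = 2.0, 5.0, 10.0, 1e-3       # surface slope, adaptation gain, feedback gain, step

level, level_dot, target = 0.0, 0.0, 1.0         # toy second-order "pool level" plant
for k in range(5000):
    e = target - level
    e_dot = -level_dot
    s = e_dot + lam * e                          # sliding surface s = e_dot + lam*e
    phi = rbf(np.array([e, e_dot]), centers)
    u = W @ phi + k_s * s                        # network term plus robust feedback term
    W += gamma * s * phi * dt                    # adaptive law driven by the sliding surface
    # Toy plant dynamics with a slow disturbance (not a casting model):
    level_ddot = -0.5 * level_dot + u + 0.2 * np.sin(0.01 * k)
    level_dot += level_ddot * dt
    level += level_dot * dt

print("final level error: %.4f" % (target - level))
```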

  19. Improvements on Fresnel arrays for high contrast imaging

    Science.gov (United States)

    Wilhem, Roux; Laurent, Koechlin

    2018-03-01

    The Fresnel Diffractive Array Imager (FDAI) is based on a new optical concept for space telescopes, developed at the Institut de Recherche en Astrophysique et Planétologie (IRAP), Toulouse, France. In the visible and near-infrared it has already proven its performance in resolution and dynamic range. We now propose it for astrophysical applications in the ultraviolet with apertures from 6 to 30 meters, aimed at imaging faint astrophysical sources in the UV close to bright ones, as well as other applications requiring high dynamic range. Of course the project first needs a probatory mission at small aperture to validate the concept in space. In collaboration with institutes in Spain and Russia, we will propose flying a small prototype of the Fresnel imager on the International Space Station (ISS), with a program combining technical tests and astrophysical targets. The spectral domain should contain the Lyman-α line (λ = 121 nm). As part of its preparation, we are improving the Fresnel array design for a better point spread function in the UV, presently on a small laboratory prototype working at 260 nm. Moreover, we plan to validate a new optical design and chromatic correction adapted to the UV. In this article we present the results of numerical propagations showing the improvement in dynamic range obtained by combining and adapting three methods: central obturation, optimization of the bar mesh holding the Fresnel rings, and orthogonal apodization. We briefly present the proposed astrophysical program of a probatory mission with such UV optics.

  20. Ultrasonic phased arrays for nondestructive inspection of forgings

    International Nuclear Information System (INIS)

    Wuestenberg, H.; Rotter, B.; Klanke, H.P.; Harbecke, D.

    1993-01-01

    Ultrasonic examinations of large forgings, such as rotor shafts for turbines or components for nuclear reactors, are carried out at various manufacturing stages and during in-service inspections. During manufacture, most of the inspections are carried out manually. Special in-service conditions, such as those at nuclear pressure vessels, have resulted in the development of mechanized scanning equipment. Ultrasonic probes have improved, and well-adapted sound fields and pulse shapes, together with special imaging procedures for the representation of reportable reflectors, have been applied. Since the geometry of many forgings requires the use of a multitude of angles for the inspections in service and during manufacture, phased-array probes can be used successfully. The main advantages of the phased-array concept, e.g. the generation of a multitude of angles with the typical increase of redundancy in detection and quantitative evaluation, and the possibility of producing pictures of defect situations, will be described in this contribution.

  1. Strategic Adaptation

    DEFF Research Database (Denmark)

    Andersen, Torben Juul

    2015-01-01

    This article provides an overview of theoretical contributions that have influenced the discourse around strategic adaptation including contingency perspectives, strategic fit reasoning, decision structure, information processing, corporate entrepreneurship, and strategy process. The related concepts of strategic renewal, dynamic managerial capabilities, dynamic capabilities, and strategic response capabilities are discussed and contextualized against strategic responsiveness. The insights derived from this article are used to outline the contours of a dynamic process of strategic adaptation. This model incorporates elements of central strategizing, autonomous entrepreneurial behavior, interactive information processing, and open communication systems that enhance the organization's ability to observe exogenous changes and respond effectively to them.

  2. Compensation for the signal processing characteristics of ultrasound B-mode scanners in adaptive speckle reduction.

    Science.gov (United States)

    Crawford, D C; Bell, D S; Bamber, J C

    1993-01-01

    A systematic method to compensate for nonlinear amplification of individual ultrasound B-scanners has been investigated in order to optimise performance of an adaptive speckle reduction (ASR) filter for a wide range of clinical ultrasonic imaging equipment. Three potential methods have been investigated: (1) a method involving an appropriate selection of the speckle recognition feature was successful when the scanner signal processing executes simple logarithmic compressions; (2) an inverse transform (decompression) of the B-mode image was effective in correcting for the measured characteristics of image data compression when the algorithm was implemented in full floating point arithmetic; (3) characterising the behaviour of the statistical speckle recognition feature under conditions of speckle noise was found to be the method of choice for implementation of the adaptive speckle reduction algorithm in limited precision integer arithmetic. In this example, the statistical features of variance and mean were investigated. The third method may be implemented on commercially available fast image processing hardware and is also better suited for transfer into dedicated hardware to facilitate real-time adaptive speckle reduction. A systematic method is described for obtaining ASR calibration data from B-mode images of a speckle producing phantom.
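    A minimal sketch of approach (2) above: undo an assumed logarithmic compression law to recover linear envelope data, apply a local-statistics (Lee-type) adaptive filter that smooths only where the local variance is consistent with fully developed speckle, and report the speckle index. The compression law, window size, and noise model are generic stand-ins, not a measured scanner characteristic.

```python
import numpy as np

def compress(envelope, dyn_range_db=60.0):
    """Map the linear envelope to an 8-bit log-compressed image (assumed law)."""
    db = 20.0 * np.log10(envelope / envelope.max() + 1e-12)
    return np.clip(255.0 * (db + dyn_range_db) / dyn_range_db, 0.0, 255.0)

def decompress(img8, dyn_range_db=60.0):
    """Invert the assumed compression law back to a (relative) linear envelope."""
    return 10.0 ** ((img8 * dyn_range_db / 255.0 - dyn_range_db) / 20.0)

def lee_filter(img, win=5, speckle_var_ratio=0.27):
    """Local-statistics adaptive filter: out = mean + k * (in - mean)."""
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = padded[i:i + win, j:j + win]
            m, v = w.mean(), w.var()
            noise_v = speckle_var_ratio * m ** 2      # fully developed speckle: var ~ mean^2
            k = max(v - noise_v, 0.0) / (v + 1e-12)   # k -> 0 in pure speckle, -> 1 at structure
            out[i, j] = m + k * (img[i, j] - m)
    return out

rng = np.random.default_rng(4)
envelope = rng.rayleigh(scale=1.0, size=(64, 64))     # speckle-only test region
linear = decompress(compress(envelope))               # scanner round-trip, then decompression
smoothed = lee_filter(linear)
print("speckle index before: %.2f  after: %.2f"
      % (linear.std() / linear.mean(), smoothed.std() / smoothed.mean()))
```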

  3. Synthesis and characterization of Mn-doped ZnO column arrays

    International Nuclear Information System (INIS)

    Yang Mei; Guo Zhixing; Qiu Kehui; Long Jianping; Yin Guangfu; Guan Denggao; Liu Sutian; Zhou Shijie

    2010-01-01

    Mn-doped ZnO column arrays were successfully synthesized by a conventional sol-gel process. The effects of the Mn/Zn atomic ratio and reaction time were investigated, and the morphology, orientation and optical properties of the Mn-doped ZnO column arrays were characterized by SEM, XRD and photoluminescence (PL) spectroscopy. The results show that a Mn/Zn atomic ratio of 0.1 and a growth time of 12 h are the optimal conditions for the preparation of densely distributed ZnO column arrays. XRD analysis shows that the Mn-doped ZnO column arrays are highly c-axis oriented. For the Mn-doped ZnO column arrays, an obvious increase in photoluminescence intensity is observed at wavelengths of ∼395 nm and ∼413 nm compared to pure ZnO column arrays.

  4. Resonance spectra of diabolo optical antenna arrays

    Directory of Open Access Journals (Sweden)

    Hong Guo

    2015-10-01

    Full Text Available A complete set of diabolo optical antenna arrays with different waist widths and periods was fabricated on a sapphire substrate by using a standard e-beam lithography and lift-off process. Fabricated diabolo optical antenna arrays were characterized by measuring the transmittance and reflectance with a microscope-coupled FTIR spectrometer. It was found experimentally that reducing the waist width significantly shifts the resonance to longer wavelength and narrowing the waist of the antennas is more effective than increasing the period of the array for tuning the resonance wavelength. Also it is found that the magnetic field enhancement near the antenna waist is correlated to the shift of the resonance wavelength.

  5. Microbial Warfare: Illuminating CRISPR adaptive immunity using single-molecule fluorescence

    NARCIS (Netherlands)

    Loeff, L.

    2017-01-01

    Bacteria and archaea are constantly threatened by a large array of viruses and other genetic elements. Driven by evolution, these organisms have acquired a wide arsenal of defense mechanisms that allow the host organism to fight off the invaders. Among these defense mechanisms is an adaptive and

  6. Conducting polymer nanowire arrays for high performance supercapacitors.

    Science.gov (United States)

    Wang, Kai; Wu, Haiping; Meng, Yuena; Wei, Zhixiang

    2014-01-15

    This Review provides a brief summary of the most recent research developments in the fabrication and application of one-dimensional ordered conducting polymers nanostructure (especially nanowire arrays) and their composites as electrodes for supercapacitors. By controlling the nucleation and growth process of polymerization, aligned conducting polymer nanowire arrays and their composites with nano-carbon materials can be prepared by employing in situ chemical polymerization or electrochemical polymerization without a template. This kind of nanostructure (such as polypyrrole and polyaniline nanowire arrays) possesses high capacitance, superior rate capability ascribed to large electrochemical surface, and an optimal ion diffusion path in the ordered nanowire structure, which is proved to be an ideal electrode material for high performance supercapacitors. Furthermore, flexible, micro-scale, threadlike, and multifunctional supercapacitors are introduced based on conducting polyaniline nanowire arrays and their composites. These prototypes of supercapacitors utilize the high flexibility, good processability, and large capacitance of conducting polymers, which efficiently extend the usage of supercapacitors in various situations, and even for a complicated integration system of different electronic devices. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. A qualitative approach of psychosocial adaptation process in patients undergoing long-term hemodialysis.

    Science.gov (United States)

    Lin, Chun-Chih; Han, Chin-Yen; Pan, I-Ju

    2015-03-01

    Professional hemodialysis (HD) nursing tends to be task-oriented and to lack consideration of the client's viewpoint. This study aims to interpret the process of psychosocial adaptation to dealing with HD in people with end-stage renal disease (ESRD). A grounded theory approach guided this study. Theoretical sampling included 15 people receiving HD at the HD center of a hospital from July to November 2010. Participants received a written information sheet, a verbal invitation, and informed consent forms before interviews were conducted. Data were analyzed by constant comparison using open, axial, and selective coding. The computer software ATLAS.ti assisted data management. Credibility, transferability, dependability, and confirmability ensured the rigor of the study process. This study identified "adopting life with hemodialysis", which captures the process of psychosocial adaptation in people with ESRD as a transformation. Four categories that evolved from "adopting HD life" are (a) slipping into, (b) restricted to a renal world, (c) losing self control, and (d) stuck in an endless process. The findings of this investigation indicate the multidimensional requirements of people receiving maintenance dialysis, with an emphasis on the deficiency in psychosocial and emotional care. The study's findings contribute to clinical practice by increasing the understanding of the experience of chronic HD treatment from the recipient's viewpoint. The better our understanding, the better the care provided will meet the needs of the people receiving HD. Copyright © 2015. Published by Elsevier B.V.

  8. Wideband Low Side Lobe Aperture Coupled Patch Phased Array Antennas

    Science.gov (United States)

    Poduval, Dhruva

    Low profile printed antenna arrays with wide bandwidth, high gain, and low Side Lobe Level (SLL) are in great demand for current and future commercial and military communication systems and radar. Aperture coupled patch antennas have been proposed in the past to obtain wide impedance bandwidths. Aperture coupling is preferred particularly for phased arrays because of the ease of integration with other active devices and circuits, e.g. phase shifters, power amplifiers, low noise amplifiers, and mixers. However, when designing such arrays, the interplay between array performance characteristics, such as gain, side lobe level, back lobe level, and mutual coupling, must be understood and optimized under multiple design constraints, e.g. substrate material properties and thicknesses, element-to-element spacing, and the feed lines and their orientation and arrangement with respect to the antenna elements. The focus of this thesis is to investigate, design, and develop an aperture coupled patch array with wide operating bandwidth (30%), high gain (17.5 dBi), low side lobe level (20 dB), and high Forward to Backward (F/B) ratio (21.8 dB). The target frequency range is 2.4 to 3 GHz, given its wide application in WLAN, LTE (Long Term Evolution) and other communication systems, although the design concept can readily be adapted to other frequencies. Specifically, a 16-element, 4 by 4 planar microstrip patch array is designed using HFSS and experimentally developed and tested. Starting from mutual coupling minimization, a corporate feeding scheme is designed to achieve the needed performance. To reduce the SLL, the corporate feeding network is redesigned to obtain a specific amplitude taper. Studies are conducted to determine the optimum location for a metallic reflector under the feed line to improve the F/B ratio. An experimental prototype of the antenna was built and tested, validating the performance levels predicted by simulation.
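
    As an illustration of the amplitude-taper idea described above, the following Python sketch compares the array factor of a uniformly excited 4-element linear cut of such a planar array with a tapered excitation. The frequency, element spacing, and Hamming-style taper are assumptions chosen for demonstration and are not taken from the thesis.

        import numpy as np

        c = 3e8
        f = 2.7e9                          # mid-band of the 2.4-3 GHz target range (assumed)
        lam = c / f
        d = 0.5 * lam                      # assumed half-wavelength element spacing
        N = 4                              # one 4-element cut of the 4 x 4 array
        k = 2 * np.pi / lam
        theta = np.radians(np.linspace(-90.0, 90.0, 1801))

        def array_factor_db(weights):
            # Array factor of a broadside linear array with the given excitation.
            n = np.arange(N)
            steering = np.exp(1j * k * d * np.outer(np.sin(theta), n))
            af = np.abs(steering @ weights)
            return 20 * np.log10(af / af.max())

        uniform = np.ones(N)
        tapered = np.hamming(N + 2)[1:-1]  # illustrative amplitude taper, not the thesis design

        for name, w in (("uniform", uniform), ("tapered", tapered)):
            af_db = array_factor_db(w)
            # Look beyond the main beam of either pattern for the peak side lobe.
            sidelobes = af_db[np.abs(np.degrees(theta)) > 50.0]
            print(f"{name:8s}  approx. peak side lobe level = {sidelobes.max():6.1f} dB")

    Running the sketch shows the familiar trade-off: the tapered excitation lowers the side lobes by well over 10 dB at the cost of a broader main beam and slightly reduced gain.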

  9. Analysis and Design of a Context Adaptable SAD/MSE Architecture

    Directory of Open Access Journals (Sweden)

    Arvind Sudarsanam

    2009-01-01

    Full Text Available Design of flexible multimedia accelerators that can cater to multiple algorithms is being aggressively pursued in the media processors community. Such an approach is justified in the era of sub-45 nm technology, where an increasingly dominant leakage power component is forcing designers to make the best possible use of on-chip resources. In this paper we present an analysis of two commonly used window-based operations (sum of absolute differences and mean squared error) across a variety of search patterns and block sizes (2×3, 5×5, etc.). We propose a context adaptable architecture that has (i) a configurable 2D systolic array and (ii) a 2D Configurable Register Array (CRA). The CRA can cater to variable pixel access patterns while reusing fetched pixels across search windows. The benefits of the proposed architecture when compared to 15 other published architectures are adaptability, high throughput, and low latency, at the cost of an increased footprint, when ported to a Xilinx FPGA.
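
    To make the two window-based operations concrete, here is a hedged software reference (plain Python/NumPy, not the systolic hardware described above) that evaluates SAD and MSE for every candidate offset in a small search window; the block size and search range are illustrative assumptions.

        import numpy as np

        def sad(block_a, block_b):
            # Sum of absolute differences between two equally sized blocks.
            return int(np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum())

        def mse(block_a, block_b):
            # Mean squared error between two equally sized blocks.
            diff = block_a.astype(np.int32) - block_b.astype(np.int32)
            return float((diff * diff).mean())

        def full_search(reference, frame, top, left, search_range=4):
            # Exhaustively evaluate every +/- search_range offset around (top, left).
            h, w = reference.shape
            best = None
            for dy in range(-search_range, search_range + 1):
                for dx in range(-search_range, search_range + 1):
                    y, x = top + dy, left + dx
                    if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                        continue
                    candidate = frame[y:y + h, x:x + w]
                    cost = sad(reference, candidate)
                    if best is None or cost < best[0]:
                        best = (cost, dy, dx, mse(reference, candidate))
            return best   # (SAD, dy, dx, MSE of the winning candidate)

        rng = np.random.default_rng(0)
        frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
        ref = frame[20:25, 20:23].copy()   # an odd-sized block, echoing the 2x3 / 5x5 examples
        print(full_search(ref, frame, top=20, left=20))

    A systolic-array implementation pipelines exactly these absolute-difference and squared-difference accumulations across processing elements, which is why the access pattern of the register array matters so much for throughput.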

  10. Materials preparation and fabrication of pyroelectric polymer/silicon MOSFET detector arrays. Final report

    International Nuclear Information System (INIS)

    Bloomfield, P.

    1992-01-01

    The authors have delivered several 64-element linear arrays of pyroelectric elements fully integrated on silicon wafers with MOS readout devices. They have delivered detailed drawings of the linear arrays to LANL. They have processed a series of two-inch wafers per submitted design. Each two-inch wafer contains two 64-element arrays. After spin-coating copolymer onto the arrays, vacuum depositing the top electrodes, and polarizing the copolymer films so as to make them pyroelectrically active, each wafer was split in half. The authors developed a thicker oxide coating separating the extended gate electrode (beneath the polymer detector) from the silicon. This should reduce its parasitic capacitance and hence improve the S/N. They provided LANL three processed 64-element sensor arrays. Each array was affixed to a connector panel and selected solder pads of the common ground, the common source voltage supply connections, the 64 individual drain connections, and the 64 drain connections (for direct pyroelectric sensing response rather than the MOSFET action) were wire bonded to the connector panel solder pads. This entails (64 + 64 + 1 + 1) = 130 possible bond connections per 64-element array. This report now details the processing steps and the progress of the individual wafers as they were carried through from beginning to end.

  11. Mathematical analysis of the real time array PCR (RTA PCR) process

    NARCIS (Netherlands)

    Dijksman, Johan Frederik; Pierik, A.

    2012-01-01

    Real time array PCR (RTA PCR) is a recently developed biochemical technique that measures amplification curves (like with quantitative real time Polymerase Chain Reaction (qRT PCR)) of a multitude of different templates in a sample. It combines two different methods in order to profit from the

  12. Resource and Performance Evaluations of Fixed Point QRD-RLS Systolic Array through FPGA Implementation

    Science.gov (United States)

    Yokoyama, Yoshiaki; Kim, Minseok; Arai, Hiroyuki

    At present, space-time processing techniques with multiple antennas for mobile radio communication require real-time weight adaptation. Owing to the progress of integrated circuit technology, dedicated processor implementations in ASIC or FPGA can be employed for various wireless applications. This paper presents a resource and performance evaluation of a QRD-RLS systolic array processor based on a fixed-point CORDIC algorithm, implemented on an FPGA. To save hardware resources, we propose a shared architecture for the complex CORDIC processor. The required precision of the internal calculations, the circuit area as a function of the number of antenna elements and the wordlength, and the processing speed are evaluated. The resource estimation indicates a feasible processor configuration on a current commercial FPGA. Computer simulations assuming a fading channel show fast convergence with a finite number of training symbols. The proposed architecture has also been implemented, and its operation was verified by beamforming evaluation in a radio propagation experiment.
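
    For readers unfamiliar with the building block evaluated here, the sketch below is a hedged floating-point model of CORDIC in vectoring mode, the operation a QRD boundary cell performs using only shifts and adds; the iteration count is an arbitrary choice and no attempt is made to reproduce the paper's fixed-point word lengths or its complex-valued variant.

        import math

        def cordic_vectoring(x, y, iterations=16):
            # Rotate (x, y) onto the x-axis with shift-and-add style updates.
            # Returns the vector magnitude and its angle; a Givens rotation in a
            # QRD-RLS array would then apply the negative of this angle to the
            # remaining elements of the row.
            angle = 0.0
            for i in range(iterations):
                d = -1.0 if y > 0 else 1.0              # drive y toward zero
                x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
                angle -= d * math.atan(2.0 ** -i)
            # Compensate the constant CORDIC gain for this iteration count.
            gain = math.prod(math.sqrt(1.0 + 2.0 ** (-2 * i)) for i in range(iterations))
            return x / gain, angle

        r, theta = cordic_vectoring(3.0, 4.0)
        print(round(r, 4), round(math.degrees(theta), 2))   # ~5.0 and ~53.13 degrees

    In hardware the same datapath is reused in rotation mode for the internal cells, which is what makes a shared CORDIC processor an attractive way to shrink the systolic array.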

  13. Graphical Environment Tools for Application to Gamma-Ray Energy Tracking Arrays

    Energy Technology Data Exchange (ETDEWEB)

    Todd, Richard A. [RIS Corp.; Radford, David C. [ORNL Physics Div.

    2013-12-30

    Highly segmented, position-sensitive germanium detector systems are being developed for nuclear physics research where traditional electronic signal processing with mixed analog and digital function blocks would be enormously complex and costly. Future systems will be constructed using pipelined processing of high-speed digitized signals as is done in the telecommunications industry. Techniques which provide rapid algorithm and system development for future systems are desirable. This project has used digital signal processing concepts and existing graphical system design tools to develop a set of re-usable modular functions and libraries targeted for the nuclear physics community. Researchers working with complex nuclear detector arrays such as the Gamma-Ray Energy Tracking Array (GRETA) have been able to construct advanced data processing algorithms for implementation in field programmable gate arrays (FPGAs) through application of these library functions using intuitive graphical interfaces.

  14. Improving Accuracy of Processing by Adaptive Control Techniques

    Directory of Open Access Journals (Sweden)

    N. N. Barbashov

    2016-01-01

    Full Text Available When machining work-pieces, the scatter range of the work-piece dimensions is displaced relative to the tolerance limits in response to errors. To improve machining accuracy and prevent defective products, it is necessary to reduce the machining error components, i.e. to improve the accuracy of the machine tool, the tool life, the rigidity of the system, and the accuracy of adjustment. It is also necessary to provide on-machine readjustment after a certain time. However, an increasing number of readjustments reduces productivity, and high machine and tool requirements lead to a significant increase in machining cost. To improve accuracy and machining rate, various active control devices (in-process gaging devices), as well as controlled machining through adaptive systems for technological process control, are now becoming widely used. The accuracy improvement in this case is achieved by compensating the majority of technological errors. Active control sensors can improve the accuracy of processing by one or two quality classes and allow simultaneous operation of several machines. For the efficient use of active control sensors it is necessary to develop accuracy control methods that introduce the appropriate adjustments. Methods based on the moving average appear to be the most promising for accuracy control, since they contain information on the change in the last several measured values of the parameter under control. In the proposed method, the first three members of the sequence of deviations remain unchanged, x'_1 = x_1, x'_2 = x_2, x'_3 = x_3. Each subsequent member is then calculated as x'_i = x_i − k·x̄_i, where x̄_i is the average of the three previous members of the sequence: x̄_i = (x_{i−1} + x_{i−2} + x_{i−3})/3. As a criterion for the estimate of the control
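
    A hedged Python sketch of the correction scheme as reconstructed above (the formula is partially garbled in the source abstract, so the gain k and the exact sign convention are assumptions): the first three deviations pass through unchanged, and every later deviation is corrected using the mean of the three previous measured values.

        def moving_average_correction(deviations, k=0.5):
            # First three members of the deviation sequence remain unchanged.
            corrected = list(deviations[:3])
            for i in range(3, len(deviations)):
                # Mean of the three previous measured deviations (assumed form).
                avg3 = sum(deviations[i - 3:i]) / 3.0
                corrected.append(deviations[i] - k * avg3)
            return corrected

        # Illustrative drifting tool-wear error sequence (made-up numbers).
        measured = [0.010, 0.012, 0.015, 0.020, 0.024, 0.030, 0.037]
        print([round(v, 4) for v in moving_average_correction(measured)])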

  15. Smart photodetector arrays for error control in page-oriented optical memory

    Science.gov (United States)

    Schaffer, Maureen Elizabeth

    1998-12-01

    Page-oriented optical memories (POMs) have been proposed to meet high speed, high capacity storage requirements for input/output intensive computer applications. This technology offers the capability for storage and retrieval of optical data in two-dimensional pages resulting in high throughput data rates. Since currently measured raw bit error rates for these systems fall several orders of magnitude short of industry requirements for binary data storage, powerful error control codes must be adopted. These codes must be designed to take advantage of the two-dimensional memory output. In addition, POMs require an optoelectronic interface to transfer the optical data pages to one or more electronic host systems. Conventional charge coupled device (CCD) arrays can receive optical data in parallel, but the relatively slow serial electronic output of these devices creates a system bottleneck thereby eliminating the POM advantage of high transfer rates. Also, CCD arrays are "unintelligent" interfaces in that they offer little data processing capabilities. The optical data page can be received by two-dimensional arrays of "smart" photo-detector elements that replace conventional CCD arrays. These smart photodetector arrays (SPAs) can perform fast parallel data decoding and error control, thereby providing an efficient optoelectronic interface between the memory and the electronic computer. This approach optimizes the computer memory system by combining the massive parallelism and high speed of optics with the diverse functionality, low cost, and local interconnection efficiency of electronics. In this dissertation we examine the design of smart photodetector arrays for use as the optoelectronic interface for page-oriented optical memory. We review options and technologies for SPA fabrication, develop SPA requirements, and determine SPA scalability constraints with respect to pixel complexity, electrical power dissipation, and optical power limits. Next, we examine data
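
    The dissertation's specific error control codes are not reproduced in this abstract; as a minimal, hedged illustration of page-oriented error control, the sketch below applies even row/column parity to a binary data page, the kind of parallel check a smart photodetector array could evaluate across an entire page, and corrects a single flipped bit.

        import numpy as np

        def encode_page(page):
            # Append one even-parity column and one even-parity row to the page.
            with_cols = np.hstack([page, page.sum(axis=1, keepdims=True) % 2])
            return np.vstack([with_cols, with_cols.sum(axis=0, keepdims=True) % 2])

        def correct_single_error(coded):
            # A single flipped bit shows up as exactly one odd row and one odd column.
            row_err = np.flatnonzero(coded.sum(axis=1) % 2)
            col_err = np.flatnonzero(coded.sum(axis=0) % 2)
            if row_err.size == 1 and col_err.size == 1:
                coded[row_err[0], col_err[0]] ^= 1    # flip the located bit
            return coded[:-1, :-1]                    # strip the parity row/column

        rng = np.random.default_rng(1)
        page = rng.integers(0, 2, size=(8, 8))        # illustrative 8 x 8 data page
        coded = encode_page(page)
        coded[3, 5] ^= 1                              # inject a single bit error
        assert np.array_equal(correct_single_error(coded.copy()), page)
        print("single-bit error corrected")

    Practical POM codes are considerably stronger than this, but the example shows why decoding maps naturally onto per-pixel logic: each row and column check is local and can be evaluated in parallel by the detector array itself.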

  16. Micropatterned arrays of porous silicon: toward sensory biointerfaces.

    Science.gov (United States)

    Flavel, Benjamin S; Sweetman, Martin J; Shearer, Cameron J; Shapter, Joseph G; Voelcker, Nicolas H

    2011-07-01

    We describe the fabrication of arrays of porous silicon spots by means of photolithography where a positive photoresist serves as a mask during the anodization process. In particular, photoluminescent arrays and porous silicon spots suitable for further chemical modification and the attachment of human cells were created. The produced arrays of porous silicon were chemically modified by means of a thermal hydrosilylation reaction that facilitated immobilization of the fluorescent dye lissamine, and alternatively, the cell adhesion peptide arginine-glycine-aspartic acid-serine. The latter modification enabled the selective attachment of human lens epithelial cells on the peptide functionalized regions of the patterns. This type of surface patterning, using etched porous silicon arrays functionalized with biological recognition elements, presents a new format of interfacing porous silicon with mammalian cells. Porous silicon arrays with photoluminescent properties produced by this patterning strategy also have potential applications as platforms for in situ monitoring of cell behavior.

  17. Simulation of an Electromagnetic Acoustic Transducer Array by Using Analytical Method and FDTD

    Directory of Open Access Journals (Sweden)

    Yuedong Xie

    2016-01-01

    Full Text Available Previously, we developed a method based on FEM and FDTD for the study of an Electromagnetic Acoustic Transducer Array (EMAT). This paper presents a new analytical solution to the eddy current problem for the meander coil used in an EMAT, adapted from the classic Deeds and Dodd solution originally intended for circular coils. The analytical solution resulting from this novel adaptation exploits the large-radius extrapolation and shows several advantages over the finite element method (FEM), especially in the higher frequency regime. The calculated Lorentz force density from the analytical EM solver is then coupled to the ultrasonic simulations, which exploit the finite-difference time-domain (FDTD) method to describe the propagation of ultrasound waves, in particular Rayleigh waves. The radiation pattern, obtained by applying the Hilbert transform to the time-domain waveforms, is proposed to characterise the sensor in terms of its beam directivity and field distribution along the steering angle, which yields performance parameters for an EMAT array and facilitates the optimum design of such sensors.
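
    As a hedged, heavily simplified illustration of the FDTD half of that workflow (not the paper's EMAT model), the sketch below advances a 1-D scalar wave equation with the standard leapfrog update and a Hann-windowed tone-burst source; the material parameters, grid, and source are demonstration values only.

        import numpy as np

        c = 3000.0                  # assumed wave speed, m/s
        dx = 1.0e-4                 # 0.1 mm grid spacing
        dt = 0.9 * dx / c           # time step satisfying the 1-D Courant condition
        nx, nt = 400, 600
        f0 = 2.0e6                  # 2 MHz tone-burst centre frequency (assumed)
        coeff = (c * dt / dx) ** 2

        u_prev = np.zeros(nx)       # displacement at step n-1
        u_curr = np.zeros(nx)       # displacement at step n

        for n in range(nt):
            u_next = np.zeros(nx)
            # Second-order leapfrog update of the interior points.
            u_next[1:-1] = (2.0 * u_curr[1:-1] - u_prev[1:-1]
                            + coeff * (u_curr[2:] - 2.0 * u_curr[1:-1] + u_curr[:-2]))
            t = n * dt
            if t < 5.0 / f0:        # 5-cycle Hann-windowed tone burst
                envelope = 0.5 * (1.0 - np.cos(2.0 * np.pi * f0 * t / 5.0))
                u_next[nx // 4] += envelope * np.sin(2.0 * np.pi * f0 * t)
            u_prev, u_curr = u_curr, u_next

        print("peak displacement on the grid:", float(np.abs(u_curr).max()))

    In the paper's workflow the point source above would be replaced by the Lorentz force density computed analytically for the meander coil, and the time-domain waveforms recorded on the grid would then be Hilbert-transformed to extract the radiation pattern.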

  18. Developing barbed microtip-based electrode arrays for biopotential measurement.

    Science.gov (United States)

    Hsu, Li-Sheng; Tung, Shu-Wei; Kuo, Che-Hsi; Yang, Yao-Joe

    2014-07-10

    This study involved fabricating barbed microtip-based electrode arrays by using silicon wet etching. KOH anisotropic wet etching was employed to form a standard pyramidal microtip array and HF/HNO3 isotropic etching was used to fabricate barbs on these microtips. To improve the electrical conductance between the tip array on the front side of the wafer and the electrical contact on the back side, a through-silicon via was created during the wet etching process. The experimental results show that the forces required to detach the barbed microtip arrays from human skin, a polydimethylsiloxane (PDMS) polymer, and a polyvinylchloride (PVC) film were larger compared with those required to detach microtip arrays that lacked barbs. The impedances of the skin-electrode interface were measured and the performance levels of the proposed dry electrode were characterized. Electrode prototypes that employed the proposed tip arrays were implemented. Electroencephalogram (EEG) and electrocardiography (ECG) recordings using these electrode prototypes were also demonstrated.

  19. Developing Barbed Microtip-Based Electrode Arrays for Biopotential Measurement

    Directory of Open Access Journals (Sweden)

    Li-Sheng Hsu

    2014-07-01

    Full Text Available This study involved fabricating barbed microtip-based electrode arrays by using silicon wet etching. KOH anisotropic wet etching was employed to form a standard pyramidal microtip array and HF/HNO3 isotropic etching was used to fabricate barbs on these microtips. To improve the electrical conductance between the tip array on the front side of the wafer and the electrical contact on the back side, a through-silicon via was created during the wet etching process. The experimental results show that the forces required to detach the barbed microtip arrays from human skin, a polydimethylsiloxane (PDMS) polymer, and a polyvinylchloride (PVC) film were larger compared with those required to detach microtip arrays that lacked barbs. The impedances of the skin-electrode interface were measured and the performance levels of the proposed dry electrode were characterized. Electrode prototypes that employed the proposed tip arrays were implemented. Electroencephalogram (EEG) and electrocardiography (ECG) recordings using these electrode prototypes were also demonstrated.

  20. Sonochemically Fabricated Microelectrode Arrays for Use as Sensing Platforms

    Directory of Open Access Journals (Sweden)

    Stuart D. Collyer

    2010-05-01

    Full Text Available The development, manufacture, modification and subsequent utilisation of sonochemically formed microelectrode arrays are described for a range of applications. Initial fabrication of the sensing platform utilises ultrasonic ablation of electrochemically insulating polymers deposited upon conductive carbon substrates, forming an array of up to 70,000 microelectrode pores per cm². Electrochemical and optical analyses using these arrays, their enhanced signal response and their stir independence are all discussed. The growth of conducting polymeric “mushroom” protrusion arrays with entrapped biological entities, thereby forming biosensors, is detailed. The simplicity and low cost of this approach, which lends itself ideally to mass fabrication, coupled with unrivalled sensitivity and stir independence, make the commercial viability of this process a reality. Applications of microelectrode arrays as functional components within sensors include devices for the detection of chlorine, glucose, ethanol and pesticides. Immunosensors based on microelectrode arrays are described within this monograph for antigens associated with prostate cancer and transient ischemic attacks (strokes).