WorldWideScience

Sample records for adaptive array processing

  1. Principles of Adaptive Array Processing

    Science.gov (United States)

    2006-09-01

    ACE with and without tapering (homogeneous case). These analytical results are less suited to predicting the detection performance of a real system. ...Nickel: Adaptive Beamforming for Phased Array Radars. Proc. Int. Radar Symposium IRS'98 (Munich, Sept. 1998), DGON and VDE/ITG, pp. 897-906. (Reprint also...) ...strategies for airborne radar. Asilomar Conf. on Signals, Systems and Computers, Pacific Grove, CA, 1998, IEEE Cat. Nr. 0-7803-5148-7/98, pp. 1327-1331.

  2. Adaptive motion compensation in sonar array processing

    NARCIS (Netherlands)

    Groen, J.

    2006-01-01

    In recent years, sonar performance has mainly improved via a significant increase in array aperture, signal bandwidth and computational power. This thesis aims at improving sonar array processing techniques based on these three steps forward. In applications such as anti-submarine warfare and mine

  3. Application of optical processing to adaptive phased array radar

    Science.gov (United States)

    Carroll, C. W.; Vijaya Kumar, B. V. K.

    1988-01-01

    The results of the investigation of the applicability of optical processing to Adaptive Phased Array Radar (APAR) data processing will be summarized. Subjects that are covered include: (1) new iterative Fourier transform based technique to determine the array antenna weight vector such that the resulting antenna pattern has nulls at desired locations; (2) obtaining the solution of the optimal Wiener weight vector by both iterative and direct methods on two laboratory Optical Linear Algebra Processing (OLAP) systems; and (3) an investigation of the effects of errors present in OLAP systems on the solution vectors.
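
    The optimal Wiener weight vector referenced in item (2) solves the Wiener-Hopf equation R w = p, where R is the input covariance matrix and p the cross-correlation with the desired signal. A hedged numerical sketch (synthetic R and p, solved digitally rather than on the paper's optical processors) comparing the direct solution with an iterative steepest-descent solution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, well-conditioned covariance matrix R and cross-correlation p
# (stand-ins for quantities an adaptive array would estimate from data).
A = rng.standard_normal((8, 4))
R = A.T @ A + np.eye(4)            # symmetric positive definite
p = rng.standard_normal(4)

# Direct method: solve the Wiener-Hopf equation R w = p.
w_direct = np.linalg.solve(R, p)

# Iterative method: steepest descent, w <- w + mu (p - R w),
# stable for mu < 2 / lambda_max(R).
mu = 0.9 / np.linalg.eigvalsh(R).max()
w_iter = np.zeros(4)
for _ in range(1000):
    w_iter += mu * (p - R @ w_iter)

assert np.allclose(w_iter, w_direct, atol=1e-8)
```

    The iterative route converges whenever the step size stays below 2/λmax(R); the direct solve is exact but costs a matrix factorization.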

  4. Using adaptive antenna array in LTE with MIMO for space-time processing

    Directory of Open Access Journals (Sweden)

    Abdourahamane Ahmed Ali

    2015-04-01

    Full Text Available Methods for improving existing wireless transmission systems are proposed. The mathematical apparatus for using an adaptive antenna array in LTE with MIMO for space-time processing is developed and validated by models, whose graphs are shown. The results show that the improvements associated with space-time processing have a positive effect on LTE cell size and throughput

  5. Introduction to adaptive arrays

    CERN Document Server

    Monzingo, Bob; Haupt, Randy

    2011-01-01

    This second edition is an extensive modernization of the bestselling introduction to the subject of adaptive array sensor systems. With the number of applications of adaptive array sensor systems growing each year, this look at the principles and fundamental techniques that are critical to these systems is more important than ever before. Introduction to Adaptive Arrays, 2nd Edition is organized as a tutorial, taking the reader by the hand and leading them through the maze of jargon that often surrounds this highly technical subject. It is easy to read and easy to follow as fundamental concept

  6. Proceedings of the Adaptive Sensor Array Processing Workshop (12th) Held in Lexington, MA on 16-18 March 2004 (CD-ROM)

    National Research Council Canada - National Science Library

    James, F

    2004-01-01

    ...: The twelfth annual workshop on Adaptive Sensor Array Processing presented a diverse agenda featuring new work on adaptive methods for communications, radar and sonar, algorithmic challenges posed...

  7. Proceedings of the Adaptive Sensor Array Processing (ASAP) Workshop 12-14 March 1997. Volume 1

    National Research Council Canada - National Science Library

    O'Donovan, G

    1997-01-01

    ... was included in the first and third ASAP workshops, ASAP has traditionally concentrated on radar; core topics include airborne radar testbed systems, space-time adaptive processing, multipath jamming...

  8. Adaptive ground implemented phase array

    Science.gov (United States)

    Spearing, R. E.

    1973-01-01

    The simulation of an adaptive ground implemented phased array of five antenna elements is reported for a very high frequency system design that is tolerant of the radio frequency interference environment encountered by a tracking data relay satellite. Signals originating from satellites are received by the VHF ring array, and both horizontal and vertical polarizations from each of the five elements are multiplexed and transmitted down to the ground station. A panel on the transmitting end of the simulation chamber contains up to 10 S-band RFI sources along with the desired signal to simulate the dynamic relationship between user and TDRS. The 10 input channels are summed, and the desired and interference signals are separated and corrected until the resultant sum signal-to-interference ratio is maximized. Testing performed with this simulation equipment demonstrates good correlation between predicted and actual results.

  9. Sensor array signal processing

    CERN Document Server

    Naidu, Prabhakar S

    2009-01-01

    Chapter One: An Overview of Wavefields. 1.1 Types of Wavefields and the Governing Equations; 1.2 Wavefield in open space; 1.3 Wavefield in bounded space; 1.4 Stochastic wavefield; 1.5 Multipath propagation; 1.6 Propagation through random medium; 1.7 Exercises.
    Chapter Two: Sensor Array Systems. 2.1 Uniform linear array (ULA); 2.2 Planar array; 2.3 Distributed sensor array; 2.4 Broadband sensor array; 2.5 Source and sensor arrays; 2.6 Multi-component sensor array; 2.7 Exercises.
    Chapter Three: Frequency Wavenumber Processing. 3.1 Digital filters in the ω-k domain; 3.2 Mapping of 1D into 2D filters; 3.3 Multichannel Wiener filters; 3.4 Wiener filters for ULA and UCA; 3.5 Predictive noise cancellation; 3.6 Exercises.
    Chapter Four: Source Localization: Frequency Wavenumber Spectrum. 4.1 Frequency wavenumber spectrum; 4.2 Beamformation; 4.3 Capon's ω-k spectrum; 4.4 Maximum entropy ω-k spectrum; 4.5 Doppler-Azimuth Processing; 4.6 Exercises.
    Chapter Five: Source Localization: Subspace Methods. 5.1 Subspace methods (Narrowband); 5.2 Subspace methods (B...

  10. Micromirror Arrays for Adaptive Optics; TOPICAL

    International Nuclear Information System (INIS)

    Carr, E.J.

    2000-01-01

    The long-range goal of this project is to develop the optical and mechanical design of a micromirror array for adaptive optics that will meet the following criteria: flat mirror surface (λ/20), high fill factor (> 95%), large stroke (5-10 µm), and pixel size ≈ 200 µm. This will be accomplished by optimizing the mirror surface and actuators independently and then combining them using bonding technologies that are currently being developed

  11. Subband Adaptive Array for DS-CDMA Mobile Radio

    Directory of Open Access Journals (Sweden)

    Tran Xuan Nam

    2004-01-01

    Full Text Available We propose a novel subband adaptive array (SBAA) scheme for direct-sequence code division multiple access (DS-CDMA). The scheme exploits the spreading code and pilot signal as the reference signal to estimate the propagation channel. Moreover, instead of combining the array outputs at each output tap using a synthesis filter and then despreading them, we despread the array outputs at each output tap directly by the desired user's code, eliminating the synthesis filter. Although its configuration differs considerably from that of 2D RAKE receivers, the proposed scheme achieves roughly equivalent performance while having a lower computational load, owing to adaptive signal processing in subbands. Simulations are carried out to explore the performance of the scheme and compare it with that of the standard 2D RAKE.

  12. Parallelism and array processing

    International Nuclear Information System (INIS)

    Zacharov, V.

    1983-01-01

    Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and that of applications that can exploit such systems. The lectures seek to discuss parallelism in a historical context, and to identify all the main aspects of concurrency in computation right up to the present time. Included will be consideration of the important question as to what use parallelism might be in the field of data processing. (orig.)

  13. Adaptive Processes in Hearing

    DEFF Research Database (Denmark)

    Santurette, Sébastien; Christensen-Dalsgaard, Jakob; Tranebjærg, Lisbeth

    2018-01-01

    , and is essential to achieve successful speech communication, correct orientation in our full environment, and eventually survival. These adaptive processes may differ in individuals with hearing loss, whose auditory system may cope via ‘‘readapting’’ itself over a longer time scale to the changes in sensory input...... induced by hearing impairment and the compensation provided by hearing devices. These devices themselves are now able to adapt to the listener’s individual environment, attentional state, and behavior. These topics related to auditory adaptation, in the broad sense of the term, were central to the 6th...... International Symposium on Auditory and Audiological Research held in Nyborg, Denmark, in August 2017. The symposium addressed adaptive processes in hearing from different angles, together with a wide variety of other auditory and audiological topics. The papers in this special issue result from some...

  14. Adaptive Beamforming Based on Complex Quaternion Processes

    Directory of Open Access Journals (Sweden)

    Jian-wu Tao

    2014-01-01

    Full Text Available Motivated by the benefits of array signal processing in quaternion domain, we investigate the problem of adaptive beamforming based on complex quaternion processes in this paper. First, a complex quaternion least-mean squares (CQLMS algorithm is proposed and its performance is analyzed. The CQLMS algorithm is suitable for adaptive beamforming of vector-sensor array. The weight vector update of CQLMS algorithm is derived based on the complex gradient, leading to lower computational complexity. Because the complex quaternion can exhibit the orthogonal structure of an electromagnetic vector-sensor in a natural way, a complex quaternion model in time domain is provided for a 3-component vector-sensor array. And the normalized adaptive beamformer using CQLMS is presented. Finally, simulation results are given to validate the performance of the proposed adaptive beamformer.
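
    For comparison with the CQLMS algorithm above, a minimal sketch of the ordinary complex-valued LMS beamformer (the non-quaternion special case; the array geometry, arrival angles and signal powers below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, d = 6, 5000, 0.5              # elements, snapshots, spacing (wavelengths)

def steer(theta):
    """ULA steering vector for arrival angle theta (radians)."""
    return np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta))

# Pilot signal from 0.3 rad, a strong interferer from -0.6 rad, plus noise.
s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2.0)
i = 2.0 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
X = (np.outer(steer(0.3), s) + np.outer(steer(-0.6), i)
     + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))))

# Complex LMS: y = w^H x, e = s - y, w <- w + mu * x * conj(e).
w = np.zeros(M, dtype=complex)
mu = 0.002
for k in range(N):
    e = s[k] - w.conj() @ X[:, k]
    w += mu * X[:, k] * np.conj(e)

# The adapted beam keeps the pilot direction and nulls the interferer.
gain_ratio = abs(w.conj() @ steer(0.3)) / abs(w.conj() @ steer(-0.6))
assert gain_ratio > 3.0
```

    After adaptation the weight vector approximates the Wiener solution: near-unit gain toward the pilot direction and a deep null on the interferer.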

  15. Design of Robust Adaptive Array Processors for Non-Stationary Ocean Environments

    National Research Council Canada - National Science Library

    Wage, Kathleen E

    2009-01-01

    The overall goal of this project is to design adaptive array processing algorithms that have good transient performance, are robust to mismatch, work with low sample support, and incorporate waveguide...

  16. Biomimetic micromechanical adaptive flow-sensor arrays

    Science.gov (United States)

    Krijnen, Gijs; Floris, Arjan; Dijkstra, Marcel; Lammerink, Theo; Wiegerink, Remco

    2007-05-01

    We report current developments in biomimetic flow-sensors based on the flow-sensitive mechano-sensors of crickets. Crickets have one form of acoustic sensing evolved in the form of mechanoreceptive sensory hairs. These filiform hairs are highly sensitive to low-frequency sound, with energy sensitivities close to the thermal threshold. In this work we describe hair-sensors fabricated by a combination of sacrificial poly-silicon technology, to form silicon-nitride suspended membranes, and SU8 polymer processing for fabrication of hairs with diameters of about 50 μm and up to 1 mm length. The membranes have thin chromium electrodes on top, forming variable capacitors with the substrate that allow for capacitive read-out. Previously these sensors have been shown to exhibit acoustic sensitivity. As in crickets, the MEMS hair-sensors are positioned on elongated structures resembling the cercus of crickets. In this work we present optical measurements on acoustically and electrostatically excited hair-sensors. We present adaptive control of flow-sensitivity and resonance frequency by electrostatic spring-stiffness softening. Experimental data and simple analytical models derived from transduction theory are shown to exhibit good correspondence, confirming both the theory and the applicability of the presented approach to adaptation.

  17. Fundamentals of spherical array processing

    CERN Document Server

    Rafaely, Boaz

    2015-01-01

    This book provides a comprehensive introduction to the theory and practice of spherical microphone arrays. It is written for graduate students, researchers and engineers who work with spherical microphone arrays in a wide range of applications.   The first two chapters provide the reader with the necessary mathematical and physical background, including an introduction to the spherical Fourier transform and the formulation of plane-wave sound fields in the spherical harmonic domain. The third chapter covers the theory of spatial sampling, employed when selecting the positions of microphones to sample sound pressure functions in space. Subsequent chapters present various spherical array configurations, including the popular rigid-sphere-based configuration. Beamforming (spatial filtering) in the spherical harmonics domain, including axis-symmetric beamforming, and the performance measures of directivity index and white noise gain are introduced, and a range of optimal beamformers for spherical arrays, includi...

  18. Integrating Scientific Array Processing into Standard SQL

    Science.gov (United States)

    Misev, Dimitar; Bachhuber, Johannes; Baumann, Peter

    2014-05-01

    We live in a time that is dominated by data. Data storage is cheap, and more applications than ever accrue vast amounts of data. Storing the emerging multidimensional data sets efficiently, however, and allowing them to be queried by their inherent structure, is a challenge many databases have to face today. Despite the fact that multidimensional array data is almost always linked to additional, non-array information, array databases have mostly developed separately from relational systems, resulting in a disparity between the two database categories. The current SQL standard and SQL DBMSs support arrays - and in an extension also multidimensional arrays - but do so in a very rudimentary and inefficient way. This poster demonstrates the practicality of an SQL extension for array processing, implemented in a proof-of-concept multi-faceted system that manages a federation of array and relational database systems, providing transparent, efficient and scalable access to the heterogeneous data in them.

  19. ALMA Array Operations Group process overview

    Science.gov (United States)

    Barrios, Emilio; Alarcon, Hector

    2016-07-01

    ALMA science operations activities in Chile are the responsibility of the Department of Science Operations, which consists of three groups: the Array Operations Group (AOG), the Program Management Group (PMG) and the Data Management Group (DMG). The AOG includes the array operators and has the mission of providing support for science observations, operating the array safely and efficiently. The poster describes the AOG's processes and its management and operational tools.

  20. Dynamic Adaptive Neural Network Arrays: A Neuromorphic Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Disney, Adam [University of Tennessee (UT); Reynolds, John [University of Tennessee (UT)

    2015-01-01

    Dynamic Adaptive Neural Network Array (DANNA) is a neuromorphic hardware implementation. It differs from most other neuromorphic projects in that it allows for programmability of structure, and it is trained or designed using evolutionary optimization. This paper describes the DANNA structure, how DANNA is trained using evolutionary optimization, and an application of DANNA to a very simple classification task.

  1. Adaptive antenna array algorithms and their impact on code division ...

    African Journals Online (AJOL)

    In this paper four blind adaptive array algorithms are developed, and their performance under different test situations (e.g. an AWGN (additive white Gaussian noise) channel and a multipath environment) is studied. A MATLAB test bed is created to show their performance in these two test situations, and an optimum one ...
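
    Blind algorithms of this kind adapt without a training sequence. A hedged sketch of one classic blind scheme, the constant modulus algorithm (CMA), which is representative of, but not necessarily identical to, the algorithms developed in the paper (array size, arrival angle and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, d = 4, 6000, 0.5               # elements, snapshots, spacing (wavelengths)

a = np.exp(2j * np.pi * d * np.arange(M) * np.sin(0.4))   # source steering vector
s = np.exp(2j * np.pi * rng.random(N))                    # unit-modulus symbols
X = np.outer(a, s) + 0.05 * (rng.standard_normal((M, N))
                             + 1j * rng.standard_normal((M, N)))

# CMA(2,2): cost E[(|y|^2 - 1)^2] with y = w^H x;
# stochastic-gradient update w <- w - mu (|y|^2 - 1) y* x.
w = np.zeros(M, dtype=complex)
w[0] = 1.0                           # single-element initialisation
mu = 0.01
y_out = np.empty(N, dtype=complex)
for k in range(N):
    y = w.conj() @ X[:, k]
    w -= mu * (abs(y) ** 2 - 1.0) * np.conj(y) * X[:, k]
    y_out[k] = y

# After adaptation the output modulus is close to constant,
# without any reference signal having been used.
assert np.mean((np.abs(y_out[-1000:]) - 1.0) ** 2) < 0.02
```

    The constant modulus property of the source replaces the training signal: the update drives the beamformer output toward unit modulus up to an unresolved phase rotation.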

  2. Adaptive Injection-locking Oscillator Array for RF Spectrum Analysis

    International Nuclear Information System (INIS)

    Leung, Daniel

    2011-01-01

    A highly parallel radio frequency receiver using an array of injection-locking oscillators for on-chip, rapid estimation of signal amplitudes and frequencies is considered. The oscillators are tuned to different natural frequencies, and variable-gain amplifiers are used to provide negative feedback that adapts the locking bandwidth to the input signal, yielding a combined measure of input signal amplitude and frequency detuning. To further this effort, an array of 16 two-stage differential ring oscillators and 16 Gilbert-cell mixers is designed for 40-400 MHz operation. The injection-locking oscillator array is assembled on a custom printed-circuit board. Control and calibration are achieved by an on-board microcontroller.

  3. A recurrent neural network for adaptive beamforming and array correction.

    Science.gov (United States)

    Che, Hangjun; Li, Chuandong; He, Xing; Huang, Tingwen

    2016-08-01

    In this paper, a recurrent neural network (RNN) is proposed for solving the adaptive beamforming problem. In order to minimize sidelobe interference, the problem is described as a convex optimization problem based on a linear array model. The RNN is designed to optimize the system's weight vector within the feasible region, which is derived from the array's state and the plane wave's information. The new algorithm is proven to be stable and to converge to the optimal solution in the sense of Lyapunov. To verify the new algorithm's performance, we apply it to beamforming under an array-mismatch situation. Compared with other optimization algorithms, simulations suggest that the RNN has a strong ability to search for exact solutions under large-scale constraints. Copyright © 2016 Elsevier Ltd. All rights reserved.
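
    The paper's RNN evolves toward the minimizer of a convex program. A rough discrete-time analogue, a plain projected-gradient iteration for the classical minimum-variance (MVDR) beamformer with a unit-gain look-direction constraint rather than the authors' network, can be sketched as:

```python
import numpy as np

M, d = 6, 0.5
a = np.exp(2j * np.pi * d * np.arange(M) * np.sin(0.2))    # look direction
ai = np.exp(2j * np.pi * d * np.arange(M) * np.sin(-0.7))  # interferer direction
R = 4.0 * np.outer(ai, ai.conj()) + np.eye(M)              # interference-plus-noise covariance

# Minimise w^H R w subject to a^H w = 1 by gradient steps followed by
# projection back onto the affine constraint set.
def project(v):
    return v + a * (1.0 - a.conj() @ v) / (a.conj() @ a)

w = project(np.zeros(M, dtype=complex))
mu = 1.0 / np.linalg.eigvalsh(R).max()
for _ in range(1000):
    w = project(w - mu * (R @ w))

# Compare with the closed-form MVDR solution w = R^-1 a / (a^H R^-1 a).
w_closed = np.linalg.solve(R, a)
w_closed /= a.conj() @ w_closed
assert np.allclose(w, w_closed, atol=1e-8)
```

    Each step descends the output power w^H R w and then projects back onto the constraint a^H w = 1, so the iterate remains a feasible beamformer throughout, loosely mirroring how the network's state stays in the feasible region.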

  4. Adaptive lesion formation using dual mode ultrasound array system

    Science.gov (United States)

    Liu, Dalong; Casper, Andrew; Haritonova, Alyona; Ebbini, Emad S.

    2017-03-01

    We present the results from an ultrasound-guided focused ultrasound platform designed to perform real-time monitoring and control of lesion formation. Real-time signal processing of echogenicity changes during lesion formation allows for identification of signature events indicative of tissue damage. The detection of these events triggers the cessation or the reduction of the exposure (intensity and/or time) to prevent overexposure. A dual mode ultrasound array (DMUA) is used for forming single- and multiple-focus patterns in a variety of tissues. The DMUA approach allows for inherent registration between the therapeutic and imaging coordinate systems, providing instantaneous, spatially accurate feedback on lesion formation dynamics. The beamformed RF data has been shown to have high sensitivity and specificity to tissue changes during lesion formation, including in vivo. In particular, the beamformed echo data from the DMUA is very sensitive to cavitation activity in response to HIFU in a variety of modes, e.g. boiling cavitation. This form of feedback is characterized by a sudden increase in echogenicity that can occur within milliseconds of the application of HIFU (see http://youtu.be/No2wh-ceTLs for an example). The real-time beamforming and signal processing allowing the adaptive control of lesion formation are enabled by a high-performance GPU platform (response time within 10 msec). We present results from a series of experiments in bovine cardiac tissue demonstrating the robustness and increased speed of volumetric lesion formation for a range of clinically relevant exposures. Gross histology demonstrates clearly that adaptive lesion formation results in tissue damage consistent with the size of the focal spot and the raster scan in 3 dimensions. In contrast, uncontrolled volumetric lesions exhibit significant pre-focal buildup due to excessive exposure from multiple full-exposure HIFU shots. Stopping or reducing the HIFU exposure upon the detection of such an

  5. Resource-adaptive cognitive processes

    CERN Document Server

    Crocker, Matthew W

    2010-01-01

    This book investigates the adaptation of cognitive processes to limited resources. The central topics of this book are heuristics considered as results of the adaptation to resource limitations, through natural evolution in the case of humans, or through artificial construction in the case of computational systems; the construction and analysis of resource control in cognitive processes; and an analysis of resource-adaptivity within the paradigm of concurrent computation. The editors integrated the results of a collaborative 5-year research project that involved over 50 scientists. After a mot

  6. Adaptive algorithm based on antenna arrays for radio communication systems

    Directory of Open Access Journals (Sweden)

    Fedosov Valentin

    2017-01-01

    Full Text Available Trends in the modern world increasingly lead to the growing popularity of wireless technologies, driven by the rapid development of mobile communications, the high popularity of the Internet, and the use of wireless networks in enterprises, offices, buildings, etc. This requires advanced network technologies with high throughput to meet users' needs. A popular research direction today is the development of spatial signal processing techniques that increase the spatial bandwidth of communication channels. The most popular method is MIMO spatial coding, which increases the data transmission speed by emitting several spatial streams from several antennas. Another advantage of this technology is that the bandwidth increase is achieved without expanding the assigned frequency range, which makes spatial coding methods all the more attractive given a limited frequency resource. Currently, wireless communications (for example, WiFi and WiMAX) are increasingly used in information transmission networks. One of the main problems of evolving wireless systems is the need to increase bandwidth and improve the quality of service (reducing the error probability). Bandwidth can be increased by widening the frequency band or increasing the radiated power; nevertheless, these methods have drawbacks: owing to the requirements of biological protection and electromagnetic compatibility, increases in power and frequency band are limited. This problem is especially relevant in mobile (cellular) communication systems and wireless networks operating in difficult signal propagation conditions. One of the most effective ways to solve this problem is to use adaptive antenna arrays with weakly correlated antenna elements. Communication systems using such antennas are called MIMO (Multiple Input Multiple Output) systems. At the moment, existing MIMO-idea implementations do not

  7. Adaptive Port-Starboard Beamforming of Triplet Sonar Arrays

    NARCIS (Netherlands)

    Groen, J.; Beerens, S.P.; Been, R.; Doisy, Y.

    2005-01-01

    For a low-frequency active sonar (LFAS) with a triplet receiver array, it is not clear in advance which signal processing techniques optimize its performance. Here, several advanced beamformers are analyzed theoretically, and the results are compared to experimental data obtained in sea

  8. A Background Noise Reduction Technique Using Adaptive Noise Cancellation for Microphone Arrays

    Science.gov (United States)

    Spalt, Taylor B.; Fuller, Christopher R.; Brooks, Thomas F.; Humphreys, William M., Jr.

    2011-01-01

    Background noise in wind tunnel environments poses a challenge to acoustic measurements due to possible low or negative Signal to Noise Ratios (SNRs) present in the testing environment. This paper overviews the application of time-domain Adaptive Noise Cancellation (ANC) to microphone array signals, with an intended application of background noise reduction in wind tunnels. An experiment was conducted to simulate background noise from a wind tunnel circuit measured by an out-of-flow microphone array in the tunnel test section. A reference microphone was used to acquire a background noise signal which interfered with the desired primary noise source signal at the array. The technique's efficacy was investigated using frequency spectra from the array microphones, array beamforming of the point source region, and subsequent deconvolution using the Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS) algorithm. Comparisons were made with the conventional techniques for improving SNR: spectral subtraction and Cross-Spectral Matrix subtraction. The method was seen to recover the primary signal level in SNRs as low as -29 dB and to outperform the conventional methods. A second processing approach, using the center array microphone as the noise reference, was investigated for more general applicability of the ANC technique. It outperformed the conventional methods at the -29 dB SNR but yielded less accurate results when coherence over the array dropped. This approach could possibly improve conventional testing methodology but must be investigated further under more realistic testing conditions.
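
    In outline, time-domain ANC subtracts an adaptively filtered copy of the reference-microphone signal from the primary channel. A minimal single-reference NLMS sketch on synthetic signals (not the wind-tunnel data; the noise path h is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(4)
N, L = 20000, 8                      # samples, adaptive filter length
s = np.sin(2 * np.pi * 0.01 * np.arange(N))         # desired source signal
v = rng.standard_normal(N)                           # background noise (reference mic)
h = np.array([0.8, -0.4, 0.2, 0.1])                  # unknown path: reference -> primary
primary = s + np.convolve(v, h)[:N]                  # primary mic = source + coloured noise

w = np.zeros(L)
mu = 0.05                                            # NLMS step size
e = np.zeros(N)
for n in range(L - 1, N):
    x = v[n - L + 1 : n + 1][::-1]                   # most recent L reference samples
    e[n] = primary[n] - w @ x                        # subtract the noise estimate
    w += mu * e[n] * x / (x @ x + 1e-8)              # normalised LMS update

# The error signal recovers the source: residual noise power sits far
# below the ~0.85 coloured-noise power present in the primary channel.
assert np.mean((e[-5000:] - s[-5000:]) ** 2) < 0.05
```

    The adaptive filter converges toward the unknown path h, so everything in the primary channel that is correlated with the reference is cancelled, while the uncorrelated source signal passes through in the error signal.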

  9. Array processing for seismic surface waves

    Energy Technology Data Exchange (ETDEWEB)

    Marano, S.

    2013-07-01

    This dissertation submitted to the Swiss Federal Institute of Technology ETH in Zurich takes a look at the analysis of surface wave properties which allows geophysicists to gain insight into the structure of the subsoil, thus avoiding more expensive invasive techniques such as borehole drilling. This thesis aims at improving signal processing techniques for the analysis of surface waves in various directions. One main contribution of this work is the development of a method for the analysis of seismic surface waves. The method also deals with the simultaneous presence of multiple waves. Several computational approaches to minimize costs are presented and compared. Finally, numerical experiments that verify the effectiveness of the proposed cost function and resulting array geometry designs are presented. These lead to greatly improved estimation performance in comparison to arbitrary array geometries.

  10. Array processing for seismic surface waves

    International Nuclear Information System (INIS)

    Marano, S.

    2013-01-01

    This dissertation submitted to the Swiss Federal Institute of Technology ETH in Zurich takes a look at the analysis of surface wave properties which allows geophysicists to gain insight into the structure of the subsoil, thus avoiding more expensive invasive techniques such as borehole drilling. This thesis aims at improving signal processing techniques for the analysis of surface waves in various directions. One main contribution of this work is the development of a method for the analysis of seismic surface waves. The method also deals with the simultaneous presence of multiple waves. Several computational approaches to minimize costs are presented and compared. Finally, numerical experiments that verify the effectiveness of the proposed cost function and resulting array geometry designs are presented. These lead to greatly improved estimation performance in comparison to arbitrary array geometries

  11. The process of organisational adaptation through innovations, and organisational adaptability

    OpenAIRE

    Tikka, Tommi

    2010-01-01

    This study is about the process of organisational adaptation and organisational adaptability. The study generates a theoretical framework about organisational adaptation behaviour and conditions that have influence on success of organisational adaptation. The research questions of the study are: How does an organisation adapt through innovations, and which conditions enhance or impede organisational adaptation through innovations? The data were gathered from five case organisations withi...

  12. ArrayBridge: Interweaving declarative array processing with high-performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Xing, Haoyuan [The Ohio State Univ., Columbus, OH (United States); Floratos, Sofoklis [The Ohio State Univ., Columbus, OH (United States); Blanas, Spyros [The Ohio State Univ., Columbus, OH (United States); Byna, Suren [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Prabhat, Prabhat [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Wu, Kesheng [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Brown, Paul [Paradigm4, Inc., Waltham, MA (United States)

    2017-05-04

    Scientists are increasingly turning to datacenter-scale computers to produce and analyze massive arrays. Despite decades of database research that extols the virtues of declarative query processing, scientists still write, debug and parallelize imperative HPC kernels even for the most mundane queries. This impedance mismatch has been partly attributed to the cumbersome data loading process; in response, the database community has proposed in situ mechanisms to access data in scientific file formats. Scientists, however, desire more than a passive access method that reads arrays from files. This paper describes ArrayBridge, a bi-directional array view mechanism for scientific file formats, that aims to make declarative array manipulations interoperable with imperative file-centric analyses. Our prototype implementation of ArrayBridge uses HDF5 as the underlying array storage library and seamlessly integrates into the SciDB open-source array database system. In addition to fast querying over external array objects, ArrayBridge produces arrays in the HDF5 file format just as easily as it can read from it. ArrayBridge also supports time travel queries from imperative kernels through the unmodified HDF5 API, and automatically deduplicates between array versions for space efficiency. Our extensive performance evaluation in NERSC, a large-scale scientific computing facility, shows that ArrayBridge exhibits statistically indistinguishable performance and I/O scalability to the native SciDB storage engine.

  13. Superresolution with Seismic Arrays using Empirical Matched Field Processing

    Energy Technology Data Exchange (ETDEWEB)

    Harris, D B; Kvaerna, T

    2010-03-24

    Scattering and refraction of seismic waves can be exploited with empirical matched field processing of array observations to distinguish sources separated by much less than the classical resolution limit. To describe this effect, we use the term 'superresolution', a term widely used in the optics and signal processing literature to denote systems that break the diffraction limit. We illustrate superresolution with Pn signals recorded by the ARCES array in northern Norway, using them to identify the origins with 98.2% accuracy of 549 explosions conducted by closely-spaced mines in northwest Russia. The mines are observed at 340-410 kilometers range and are separated by as little as 3 kilometers. When viewed from ARCES many are separated by just tenths of a degree in azimuth. This classification performance results from an adaptation to transient seismic signals of techniques developed in underwater acoustics for localization of continuous sound sources. Matched field processing is a potential competitor to frequency-wavenumber and waveform correlation methods currently used for event detection, classification and location. It operates by capturing the spatial structure of wavefields incident from a particular source in a series of narrow frequency bands. In the rich seismic scattering environment, closely-spaced sources far from the observing array nonetheless produce distinct wavefield amplitude and phase patterns across the small array aperture. With observations of repeating events, these patterns can be calibrated over a wide band of frequencies (e.g. 2.5-12.5 Hertz) for use in a power estimation technique similar to frequency-wavenumber analysis. The calibrations enable coherent processing at high frequencies at which wavefields normally are considered incoherent under a plane wave model.
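
    The calibration idea can be caricatured numerically: learn each source's empirical array signature from repeated observations, then classify new snapshots by Bartlett matched-field power. Synthetic complex signatures stand in for the calibrated ARCES wavefields below:

```python
import numpy as np

rng = np.random.default_rng(6)
M = 9                                 # array elements
# Unknown wavefield signatures of two closely spaced sources (synthetic).
g = {k: rng.standard_normal(M) + 1j * rng.standard_normal(M) for k in (0, 1)}

def snapshot(k, snr=10.0):
    """One narrowband array snapshot from source k, with fading and noise."""
    amp = rng.standard_normal() + 1j * rng.standard_normal()
    noise = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2.0 * snr)
    return amp * g[k] + noise

# Calibration: the empirical template is the principal eigenvector of each
# source's sample covariance, estimated from repeating events.
templates = {}
for k in (0, 1):
    X = np.stack([snapshot(k) for _ in range(50)], axis=1)
    vals, vecs = np.linalg.eigh(X @ X.conj().T / 50.0)
    templates[k] = vecs[:, -1]        # unit-norm signature estimate

def classify(x):
    """Pick the source whose matched-field power |t^H x|^2 is largest."""
    return max((abs(templates[k].conj() @ x) ** 2, k) for k in (0, 1))[1]

correct = sum(classify(snapshot(k)) == k for k in (0, 1) for _ in range(100))
assert correct >= 180
```

    Because the templates are learned from data rather than from a plane-wave model, distinct scattered wavefield patterns separate sources far closer together than the classical resolution limit, which is the essence of the superresolution effect described above.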

  14. Adaptive port-starboard beamforming of triplet arrays

    NARCIS (Netherlands)

    Beerens, S.P.; Been, R.; Groen, J.; Noutary, E.; Doisy, Y.

    2000-01-01

    Triplet arrays are single line arrays with three hydrophones on a circular section of the array. The triplet structure provides immediate port-starboard (PS) discrimination. This paper discusses the theoretical and experimental performance of triplet arrays. Results are obtained on detection gain

  15. Array signal processing in the NASA Deep Space Network

    Science.gov (United States)

    Pham, Timothy T.; Jongeling, Andre P.

    2004-01-01

    In this paper, we describe the benefits of arraying and its past as well as expected future use. The signal processing aspects of an array system are described. Field measurements from actual spacecraft tracking are also presented.

  16. Adaptive Processing for Sequence Alignment

    KAUST Repository

    Zidan, Mohammed A.; Bonny, Talal; Salama, Khaled N.

    2012-01-01

    Disclosed are various embodiments for adaptive processing for sequence alignment. In one embodiment, among others, a method includes obtaining a query sequence and a plurality of database sequences. A first portion of the plurality of database sequences is distributed to a central processing unit (CPU) and a second portion of the plurality of database sequences is distributed to a graphical processing unit (GPU) based upon a predetermined splitting ratio associated with the plurality of database sequences, where the database sequences of the first portion are shorter than the database sequences of the second portion. A first alignment score for the query sequence is determined with the CPU based upon the first portion of the plurality of database sequences and a second alignment score for the query sequence is determined with the GPU based upon the second portion of the plurality of database sequences.
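The length-based CPU/GPU split described above can be illustrated with a small sketch. Treating the predetermined splitting ratio as the fraction of (shorter) sequences assigned to the CPU is an assumed interpretation made for this example.

```python
def split_for_devices(db_seqs, ratio):
    """Split database sequences between CPU and GPU by a splitting ratio.

    Shorter sequences go to the CPU portion and longer ones to the GPU,
    as in the scheme described above; `ratio` (the fraction of sequences
    handled by the CPU) is an assumed interpretation of the predetermined
    splitting ratio.
    """
    ordered = sorted(db_seqs, key=len)        # shortest first
    k = int(len(ordered) * ratio)             # boundary index for the CPU share
    return ordered[:k], ordered[k:]           # (cpu_portion, gpu_portion)
```

Each portion would then be aligned against the query on its respective device, and the two alignment-score sets merged.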

  17. Adaptive Processing for Sequence Alignment

    KAUST Repository

    Zidan, Mohammed A.

    2012-01-26

    Disclosed are various embodiments for adaptive processing for sequence alignment. In one embodiment, among others, a method includes obtaining a query sequence and a plurality of database sequences. A first portion of the plurality of database sequences is distributed to a central processing unit (CPU) and a second portion of the plurality of database sequences is distributed to a graphical processing unit (GPU) based upon a predetermined splitting ratio associated with the plurality of database sequences, where the database sequences of the first portion are shorter than the database sequences of the second portion. A first alignment score for the query sequence is determined with the CPU based upon the first portion of the plurality of database sequences and a second alignment score for the query sequence is determined with the GPU based upon the second portion of the plurality of database sequences.

  18. The Very Large Array Data Processing Pipeline

    Science.gov (United States)

    Kent, Brian R.; Masters, Joseph S.; Chandler, Claire J.; Davis, Lindsey E.; Kern, Jeffrey S.; Ott, Juergen; Schinzel, Frank K.; Medlin, Drew; Muders, Dirk; Williams, Stewart; Geers, Vincent C.; Momjian, Emmanuel; Butler, Bryan J.; Nakazato, Takeshi; Sugimoto, Kanako

    2018-01-01

    We present the VLA Pipeline, software that is part of the larger pipeline processing framework used for the Karl G. Jansky Very Large Array (VLA) and the Atacama Large Millimeter/sub-millimeter Array (ALMA) for both interferometric and single dish observations. Through a collection of base code jointly used by the VLA and ALMA, the pipeline builds a hierarchy of classes to execute individual atomic pipeline tasks within the Common Astronomy Software Applications (CASA) package. Each pipeline task contains heuristics designed by the team to actively decide the best processing path and execution parameters for calibration and imaging. The pipeline code is developed and written in Python and uses a "context" structure for tracking the heuristic decisions and processing results. The pipeline "weblog" acts as the user interface for verifying the quality assurance of each calibration and imaging stage. The majority of VLA scheduling blocks above 1 GHz are now processed with the standard continuum recipe of the pipeline and offer a calibrated measurement set as a basic data product to observatory users. In addition, the pipeline is used for processing data from the VLA Sky Survey (VLASS), a seven-year community-driven endeavor started in September 2017 to survey the entire sky down to a declination of -40 degrees at S-band (2-4 GHz). This 5500-hour next-generation large radio survey will explore the time and spectral domains, relying on pipeline processing to generate calibrated measurement sets, polarimetry, and imaging data products that are available to the astronomical community with no proprietary period. Here we present an overview of the pipeline design philosophy, heuristics, and calibration and imaging results produced by the pipeline. Future development will include the testing of spectral line recipes, low signal-to-noise heuristics, and serving as a testing platform for science-ready data products. The pipeline is developed as part of the CASA software package by an

  19. Processes and Materials for Flexible PV Arrays

    National Research Council Canada - National Science Library

    Gierow, Paul

    2002-01-01

    .... A parallel incentive for the development of flexible PV arrays is the possibility of synergistic advantages for certain types of spacecraft, in particular the Solar Thermal Propulsion (STP) Vehicle...

  20. Fundamentals of adaptive signal processing

    CERN Document Server

    Uncini, Aurelio

    2015-01-01

    This book is an accessible guide to adaptive signal processing methods that equips the reader with advanced theoretical and practical tools for the study and development of circuit structures and provides robust algorithms relevant to a wide variety of application scenarios. Examples include multimodal and multimedia communications, the biological and biomedical fields, economic models, environmental sciences, acoustics, telecommunications, remote sensing, monitoring, and, in general, the modeling and prediction of complex physical phenomena. The reader will learn not only how to design and implement the algorithms but also how to evaluate their performance for specific applications utilizing the tools provided. While using a simple mathematical language, the employed approach is very rigorous. The text will be of value both for research purposes and for courses of study.

  1. Digital image processing software system using an array processor

    International Nuclear Information System (INIS)

    Sherwood, R.J.; Portnoff, M.R.; Journeay, C.H.; Twogood, R.E.

    1981-01-01

    A versatile array processor-based system for general-purpose image processing was developed. At the heart of this system is an extensive, flexible software package that incorporates the array processor for effective interactive image processing. The software system is described in detail, and its application to a diverse set of applications at LLNL is briefly discussed. 4 figures, 1 table

  2. Theory and applications of spherical microphone array processing

    CERN Document Server

    Jarrett, Daniel P; Naylor, Patrick A

    2017-01-01

    This book presents the signal processing algorithms that have been developed to process the signals acquired by a spherical microphone array. Spherical microphone arrays can be used to capture the sound field in three dimensions and have received significant interest from researchers and audio engineers. Algorithms for spherical array processing are different to corresponding algorithms already known in the literature of linear and planar arrays because the spherical geometry can be exploited to great beneficial effect. The authors aim to advance the field of spherical array processing by helping those new to the field to study it efficiently and from a single source, as well as by offering a way for more experienced researchers and engineers to consolidate their understanding, adding either or both of breadth and depth. The level of the presentation corresponds to graduate studies at MSc and PhD level. This book begins with a presentation of some of the essential mathematical and physical theory relevant to ...

  3. An SDR-Based Real-Time Testbed for GNSS Adaptive Array Anti-Jamming Algorithms Accelerated by GPU

    Directory of Open Access Journals (Sweden)

    Hailong Xu

    2016-03-01

    Nowadays, software-defined radio (SDR) has become a common approach to evaluating new algorithms. However, in the field of Global Navigation Satellite System (GNSS) adaptive array anti-jamming, previous work has been limited by the high computational power demanded by adaptive algorithms, and has often lacked flexibility and configurability. In this paper, the design and implementation of an SDR-based real-time testbed for GNSS adaptive array anti-jamming, accelerated by a Graphics Processing Unit (GPU), are documented. This testbed is a feature-rich and extensible platform with great flexibility and configurability, as well as high computational performance. Both Space-Time Adaptive Processing (STAP) and Space-Frequency Adaptive Processing (SFAP) are implemented with a wide range of parameters. Raw data from as many as eight antenna elements can be processed in real time in either an adaptive nulling or beamforming mode. To fully exploit the parallelism provided by the GPU, a batched programming method is proposed. Tests and experiments are conducted to evaluate both the computational and anti-jamming performance. This platform can be used for research and prototyping, as well as a real product in certain applications.

  4. The optimal configuration of photovoltaic module arrays based on adaptive switching controls

    International Nuclear Information System (INIS)

    Chao, Kuei-Hsiang; Lai, Pei-Lun; Liao, Bo-Jyun

    2015-01-01

    Highlights: • We propose a strategy for determining the optimal configuration of a PV array. • The proposed strategy was based on particle swarm optimization (PSO) method. • It can identify the optimal module array connection scheme in the event of shading. • It can also find the optimal connection of a PV array even in module malfunctions. - Abstract: This study proposes a strategy for determining the optimal configuration of photovoltaic (PV) module arrays in shading or malfunction conditions. This strategy was based on particle swarm optimization (PSO). If shading or malfunctions of the photovoltaic module array occur, the module array immediately undergoes adaptive reconfiguration to increase the power output of the PV power generation system. First, the maximal power generated at various irradiation levels and temperatures was recorded during normal array operation. Subsequently, the irradiation level and module temperature, regardless of operating conditions, were used to recall the maximal power previously recorded. This previous maximum was compared with the maximal power value obtained using the maximum power point tracker to assess whether the PV module array was experiencing shading or malfunctions. After determining that the array was experiencing shading or malfunctions, PSO was used to identify the optimal module array connection scheme in abnormal conditions, and connection switches were used to implement optimal array reconfiguration. Finally, experiments were conducted to assess the strategy for identifying the optimal reconfiguration of a PV module array in the event of shading or malfunctions
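A minimal particle swarm optimizer of the kind this strategy relies on might look as follows. The fitness function standing in for "measured output power of one candidate switch configuration", and the PSO constants, are illustrative assumptions rather than the authors' settings; a real reconfiguration system would map each particle position to a discrete set of connection switches.

```python
import numpy as np

def pso_maximize(fitness, dim, n_particles=20, iters=50, seed=0):
    """Basic particle swarm optimization (maximization) over [0, 1]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.random((n_particles, dim))          # particle positions
    v = np.zeros_like(x)                        # particle velocities
    pbest = x.copy()                            # per-particle best positions
    pbest_val = np.array([fitness(p) for p in x])
    g = pbest[np.argmax(pbest_val)].copy()      # global best position
    w, c1, c2 = 0.7, 1.5, 1.5                   # inertia / cognitive / social weights
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, 0.0, 1.0)
        vals = np.array([fitness(p) for p in x])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[np.argmax(pbest_val)].copy()
    return g, pbest_val.max()
```

In the reconfiguration setting, `fitness` would evaluate (or simulate) the PV array power for the connection scheme encoded by a particle, and the global best would drive the connection switches.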

  5. Generic nano-imprint process for fabrication of nanowire arrays

    NARCIS (Netherlands)

    Pierret, A.; Hocevar, M.; Diedenhofen, S.L.; Algra, R.E.; Vlieg, E.; Timmering, E.C.; Verschuuren, M.A.; Immink, W.G.G.; Verheijen, M.A.; Bakkers, E.P.A.M.

    2010-01-01

    A generic process has been developed to grow nearly defect-free arrays of (heterostructured) InP and GaP nanowires. Soft nano-imprint lithography has been used to pattern gold particle arrays on full 2-inch substrates. After lift-off, organic residues remain on the surface, which induce the growth of

  6. An adaptive signal-processing approach to online adaptive tutoring.

    Science.gov (United States)

    Bergeron, Bryan; Cline, Andrew

    2011-01-01

    Conventional intelligent or adaptive tutoring online systems rely on domain-specific models of learner behavior based on rules, deep domain knowledge, and other resource-intensive methods. We have developed and studied a domain-independent methodology of adaptive tutoring based on domain-independent signal-processing approaches that obviate the need for the construction of explicit expert and student models. A key advantage of our method over conventional approaches is a lower barrier to entry for educators who want to develop adaptive online learning materials.

  7. Computationally Efficient Blind Code Synchronization for Asynchronous DS-CDMA Systems with Adaptive Antenna Arrays

    Directory of Open Access Journals (Sweden)

    Chia-Chang Hu

    2005-04-01

    A novel space-time adaptive near-far robust code-synchronization array detector for asynchronous DS-CDMA systems is developed in this paper. It has the same basic requirements as the conventional matched filter of an asynchronous DS-CDMA system. For real-time applicability, a computationally efficient architecture of the proposed detector is developed based on the concept of the multistage Wiener filter (MWF) of Goldstein and Reed. This multistage technique results in a self-synchronizing detection criterion that requires no inversion or eigendecomposition of a covariance matrix. As a consequence, this detector achieves a complexity that is only a linear function of the size of the antenna array (J), the rank of the MWF (M), the system processing gain (N), and the number of samples in a chip interval (S), that is, 𝒪(JMNS). The complexity of the equivalent detector based on the minimum mean-squared error (MMSE) criterion or on subspace-based eigenstructure analysis is of order 𝒪((JNS)³). Moreover, this multistage scheme provides rapid adaptive convergence under limited observation-data support. Simulations are conducted to evaluate the performance and convergence behavior of the proposed detector with the size of the J-element antenna array, the amount of the L-sample support, and the rank of the M-stage MWF. The performance advantage of the proposed detector over other DS-CDMA detectors is investigated as well.

  8. Interactive Teaching of Adaptive Signal Processing

    OpenAIRE

    Stewart, R W; Harteneck, M; Weiss, S

    2000-01-01

    Over the last 30 years adaptive digital signal processing has progressed from being a strictly graduate level advanced class in signal processing theory to a topic that is part of the core curriculum for many undergraduate signal processing classes. The key reason is the continued advance of communications technology, with its need for echo control and equalisation, and the widespread use of adaptive filters in audio, biomedical, and control applications. In this paper we will review the basi...

  9. Experimental investigation of the ribbon-array ablation process

    International Nuclear Information System (INIS)

    Li Zhenghong; Xu Rongkun; Chu Yanyun; Yang Jianlun; Xu Zeping; Ye Fan; Chen Faxin; Xue Feibiao; Ning Jiamin; Qin Yi; Meng Shijian; Hu Qingyuan; Si Fenni; Feng Jinghua; Zhang Faqiang; Chen Jinchuan; Li Linbo; Chen Dingyang; Ding Ning; Zhou Xiuwen

    2013-01-01

    Ablation processes of ribbon-array loads, as well as wire-array loads for comparison, were investigated on Qiangguang-1 accelerator. The ultraviolet framing images indicate that the ribbon-array loads have stable passages of currents, which produce axially uniform ablated plasma. The end-on x-ray framing camera observed the azimuthally modulated distribution of the early ablated ribbon-array plasma and the shrink process of the x-ray radiation region. Magnetic probes measured the total and precursor currents of ribbon-array and wire-array loads, and there exists no evident difference between the precursor currents of the two types of loads. The proportion of the precursor current to the total current is 15% to 20%, and the start time of the precursor current is about 25 ns later than that of the total current. The melting time of the load material is about 16 ns, when the inward drift velocity of the ablated plasma is taken to be 1.5 × 10⁷ cm/s.

  10. Removing Background Noise with Phased Array Signal Processing

    Science.gov (United States)

    Podboy, Gary; Stephens, David

    2015-01-01

    Preliminary results are presented from a test conducted to determine how well microphone phased array processing software could pull an acoustic signal out of background noise. The array consisted of 24 microphones in an aerodynamic fairing designed to be mounted in-flow. The processing was conducted using Functional Beamforming software developed by OptiNav combined with cross-spectral matrix subtraction. The test was conducted in the free-jet of the Nozzle Acoustic Test Rig at NASA GRC. The background noise was produced by the interaction of the free-jet flow with the solid surfaces in the flow. The acoustic signals were produced by acoustic drivers. The results show that the phased array processing was able to pull the acoustic signal out of the background noise provided the signal was no more than 20 dB below the background noise level measured using a conventional single microphone equipped with an aerodynamic forebody.
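The cross-spectral matrix (CSM) subtraction step, combined with conventional beamforming, can be sketched as below. This is a generic illustration of the subtraction idea, not the Functional Beamforming algorithm itself, and the snapshot layout is an assumption.

```python
import numpy as np

def csm(X):
    """Cross-spectral matrix from narrowband snapshots X (n_snapshots x n_mics)."""
    return X.T @ X.conj() / X.shape[0]          # average of x x^H over snapshots

def beamform_power(C_total, C_background, a):
    """Conventional beamformer power after cross-spectral matrix subtraction."""
    C = C_total - C_background                  # remove the background-noise CSM
    a = a / np.linalg.norm(a)                   # unit-norm steering vector
    return np.real(a.conj() @ C @ a)            # beam power toward direction a
```

The background CSM would be measured with the acoustic drivers off (flow only), so that subtracting it leaves mainly the driver signal contribution in the steered beam power.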

  11. Robust adaptive multichannel SAR processing based on covariance matrix reconstruction

    Science.gov (United States)

    Tan, Zhen-ya; He, Feng

    2018-04-01

    With the combination of digital beamforming (DBF) processing, multichannel synthetic aperture radar (SAR) systems with multiple channels in azimuth hold great promise for high-resolution and wide-swath imaging, whereas conventional processing methods do not take the nonuniformity of the scattering coefficient into consideration. This paper presents a robust adaptive multichannel SAR processing method that first uses the Capon spatial spectrum estimator to obtain the spatial spectrum distribution over all ambiguous directions, and then reconstructs the interference-plus-noise covariance matrix from its definition to obtain the multichannel SAR processing filter. The novel method improves processing performance under a nonuniform scattering coefficient and is robust against array errors. Experiments with real measured data demonstrate the effectiveness and robustness of the proposed method.
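The reconstruction idea, integrating a Capon spatial spectrum over the ambiguous/interference sectors to rebuild the interference-plus-noise covariance, can be sketched for a generic uniform linear array. The array geometry, sector grid, and loading constant are assumptions made for this example, not the paper's SAR-specific formulation.

```python
import numpy as np

def steering(n, theta):
    # uniform linear array, half-wavelength spacing (assumed geometry)
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta))

def reconstruction_beamformer(R, theta0, interference_sectors, grid=200):
    """Adaptive weights via interference-plus-noise covariance reconstruction."""
    n = R.shape[0]
    Rinv = np.linalg.inv(R)
    Rin = 1e-6 * np.eye(n, dtype=complex)        # small loading for invertibility
    for lo, hi in interference_sectors:          # integrate Capon spectrum over sectors
        for theta in np.linspace(lo, hi, grid):
            a = steering(n, theta)
            p = 1.0 / np.real(a.conj() @ Rinv @ a)   # Capon spatial spectrum
            Rin += p * np.outer(a, a.conj())         # accumulate p * a a^H
    a0 = steering(n, theta0)
    w = np.linalg.solve(Rin, a0)
    return w / (a0.conj() @ w)                   # MVDR-style distortionless normalization
```

Because the weights are built from the reconstructed covariance rather than the sample covariance, the desired-signal component does not contaminate the filter, which is the source of the robustness the abstract claims.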

  12. Radar techniques using array antennas

    CERN Document Server

    Wirth, Wulf-Dieter

    2013-01-01

    Radar Techniques Using Array Antennas is a thorough introduction to the possibilities of radar technology based on electronic steerable and active array antennas. Topics covered include array signal processing, array calibration, adaptive digital beamforming, adaptive monopulse, superresolution, pulse compression, sequential detection, target detection with long pulse series, space-time adaptive processing (STAP), moving target detection using synthetic aperture radar (SAR), target imaging, energy management and system parameter relations. The discussed methods are confirmed by simulation stud

  13. High Dynamic Range adaptive ΔΣ-based Focal Plane Array architecture

    KAUST Repository

    Yao, Shun; Kavusi, Sam; Salama, Khaled N.

    2012-01-01

    In this paper, an Adaptive Delta-Sigma based architecture for High Dynamic Range (HDR) Focal Plane Arrays is presented. The noise shaping effect of the Delta-Sigma modulation in the low end, and the distortion noise induced in the high end of Photo

  14. Multiple wall-reflection effect in adaptive-array differential-phase reflectometry on QUEST

    International Nuclear Information System (INIS)

    Idei, H.; Fujisawa, A.; Nagashima, Y.; Onchi, T.; Hanada, K.; Zushi, H.; Mishra, K.; Hamasaki, M.; Hayashi, Y.; Yamamoto, M.K.

    2016-01-01

    A phased array antenna and Software-Defined Radio (SDR) heterodyne-detection systems have been developed for adaptive array approaches in reflectometry on the QUEST. In the QUEST device, which can be considered a large oversized cavity, a standing-wave (multiple wall-reflection) effect was clearly observed, with distorted amplitude and phase evolution even when the adaptive array analyses were applied. The distorted fields were analyzed by Fast Fourier Transform (FFT) in the wavenumber domain to treat separately the components with and without wall reflections. The differential phase evolution was properly obtained from the distorted field evolution by the FFT procedures. A frequency derivative method has been proposed to overcome the multiple wall-reflection effect, and SDR super-heterodyned components with the small frequency difference required for the derivative method were correctly obtained using the FFT analysis

  15. Generic nano-imprint process for fabrication of nanowire arrays

    Energy Technology Data Exchange (ETDEWEB)

    Pierret, Aurelie; Hocevar, Moira; Algra, Rienk E; Timmering, Eugene C; Verschuuren, Marc A; Immink, George W G; Verheijen, Marcel A; Bakkers, Erik P A M [Philips Research Laboratories Eindhoven, High Tech Campus 11, 5656 AE Eindhoven (Netherlands); Diedenhofen, Silke L [FOM Institute for Atomic and Molecular Physics c/o Philips Research Laboratories, High Tech Campus 4, 5656 AE Eindhoven (Netherlands); Vlieg, E, E-mail: e.p.a.m.bakkers@tue.nl [IMM, Solid State Chemistry, Radboud University Nijmegen, Heyendaalseweg 135, 6525 AJ Nijmegen (Netherlands)

    2010-02-10

    A generic process has been developed to grow nearly defect-free arrays of (heterostructured) InP and GaP nanowires. Soft nano-imprint lithography has been used to pattern gold particle arrays on full 2-inch substrates. After lift-off, organic residues remain on the surface, which induce the growth of additional undesired nanowires. We show that cleaning of the samples before growth with piranha solution, in combination with a thermal anneal at 550 °C for InP and 700 °C for GaP, results in uniform nanowire arrays with 1% variation in nanowire length, and without undesired extra nanowires. Our chemical cleaning procedure is applicable to other lithographic techniques such as e-beam lithography, and therefore represents a generic process.

  16. A Versatile Multichannel Digital Signal Processing Module for Microcalorimeter Arrays

    Science.gov (United States)

    Tan, H.; Collins, J. W.; Walby, M.; Hennig, W.; Warburton, W. K.; Grudberg, P.

    2012-06-01

    Different techniques have been developed for reading out microcalorimeter sensor arrays: individual outputs for small arrays, and time-division, frequency-division, or code-division multiplexing for large arrays. Typically, raw waveform data are first read out from the arrays using one of these techniques and then stored on computer hard drives for offline optimum filtering, leading not only to requirements for large storage space but also to limitations on the achievable count rate. Thus, a read-out module that is capable of processing microcalorimeter signals in real time would be highly desirable. We have developed multichannel digital signal processing electronics that are capable of on-board, real-time processing of microcalorimeter sensor signals from multiplexed or individual pixel arrays. It is a 3U PXI module consisting of a standardized core processor board and a set of daughter boards. Each daughter board is designed to interface a specific type of microcalorimeter array to the core processor. The combination of the standardized core plus this set of easily designed and modified daughter boards results in a versatile data acquisition module that can not only easily expand to future detector systems but is also low cost. In this paper, we first present the core processor/daughter board architecture, and then report the performance of an 8-channel daughter board, which digitizes individual pixel outputs at 1 MSPS with 16-bit precision. We also introduce a time-division multiplexing type daughter board, which takes in time-division multiplexing signals through fiber-optic cables and then processes the digital signals to generate energy spectra in real time.

  17. Optimal adaptive normalized matched filter for large antenna arrays

    KAUST Repository

    Kammoun, Abla

    2016-09-13

    This paper focuses on the problem of detecting a target in the presence of a compound Gaussian clutter with unknown statistics. To this end, we focus on the design of the adaptive normalized matched filter (ANMF) detector which uses the regularized Tyler estimator (RTE) built from N-dimensional observations x_1, · · ·, x_n in order to estimate the clutter covariance matrix. The choice for the RTE is motivated by its possessing two major attributes: first its resilience to the presence of outliers, and second its regularization parameter that makes it more suitable to handle the scarcity in observations. In order to facilitate the design of the ANMF detector, we consider the regime in which n and N are both large. This allows us to derive closed-form expressions for the asymptotic false alarm and detection probabilities. Based on these expressions, we propose an asymptotically optimal setting for the regularization parameter of the RTE that maximizes the asymptotic detection probability while keeping the asymptotic false alarm probability below a certain threshold. Numerical results are provided in order to illustrate the gain of the proposed detector over a recently proposed setting of the regularization parameter.
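A compact sketch of the two ingredients, the regularized Tyler estimator (a fixed-point iteration with trace normalization) and the ANMF detection statistic, is given below. The iteration count and the particular normalization convention are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def regularized_tyler(X, rho, iters=50):
    """Regularized Tyler covariance estimate from the n columns of X (N x n)."""
    N, n = X.shape
    R = np.eye(N, dtype=complex)
    for _ in range(iters):
        Rinv = np.linalg.inv(R)
        # quadratic forms x_i^H R^-1 x_i for every observation
        q = np.real(np.einsum('ij,jk,ki->i', X.conj().T, Rinv, X))
        R = (1 - rho) * (N / n) * (X / q) @ X.conj().T + rho * np.eye(N)
        R *= N / np.trace(R).real            # fix the trace to N each iteration
    return R

def anmf_statistic(y, p, R):
    """Adaptive normalized matched filter statistic in [0, 1] for steering p."""
    Rinv = np.linalg.inv(R)
    num = np.abs(p.conj() @ Rinv @ y) ** 2
    den = np.real(p.conj() @ Rinv @ p) * np.real(y.conj() @ Rinv @ y)
    return num / den
```

The statistic is scale-invariant in both `y` and `p`, which is what makes the detector suitable for compound Gaussian clutter with unknown texture.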

  18. Optimal adaptive normalized matched filter for large antenna arrays

    KAUST Repository

    Kammoun, Abla; Couillet, Romain; Pascal, Fré dé ric; Alouini, Mohamed-Slim

    2016-01-01

    This paper focuses on the problem of detecting a target in the presence of a compound Gaussian clutter with unknown statistics. To this end, we focus on the design of the adaptive normalized matched filter (ANMF) detector which uses the regularized Tyler estimator (RTE) built from N-dimensional observations x_1, · · ·, x_n in order to estimate the clutter covariance matrix. The choice for the RTE is motivated by its possessing two major attributes: first its resilience to the presence of outliers, and second its regularization parameter that makes it more suitable to handle the scarcity in observations. In order to facilitate the design of the ANMF detector, we consider the regime in which n and N are both large. This allows us to derive closed-form expressions for the asymptotic false alarm and detection probabilities. Based on these expressions, we propose an asymptotically optimal setting for the regularization parameter of the RTE that maximizes the asymptotic detection probability while keeping the asymptotic false alarm probability below a certain threshold. Numerical results are provided in order to illustrate the gain of the proposed detector over a recently proposed setting of the regularization parameter.

  19. Radiation-induced adaptive response in fetal mice: a micro-array study

    International Nuclear Information System (INIS)

    Vares, G.; Bing, Wang; Mitsuru, Nenoi; Tetsuo, Nakajima; Kaoru, Tanaka; Isamu, Hayata

    2006-01-01

    Exposure to sublethal doses of ionizing radiation can induce protective mechanisms against a subsequent higher-dose irradiation. This phenomenon, called radio-adaptation (or adaptive response, AR), has been described in a wide range of biological models. In a series of studies, we demonstrated the existence of a radiation-induced AR in mice during late organogenesis. For a better understanding of the molecular mechanisms underlying AR in our model, we performed a global analysis of transcriptome regulations in cells collected from whole mouse fetuses. Using cDNA micro-arrays, we studied gene expression in these cells after in utero priming exposure to irradiation. Several combinations of radiation dose and dose-rate were applied to induce or not induce an AR in our system. Gene regulation was observed after exposure to priming radiation in each condition. Student's t-test was performed in order to identify genes whose expression modulation was specifically different in AR-inducing and non-AR-inducing conditions. Genes were ranked according to their ability to discriminate AR-specific modulations. Since AR genes were implicated in a variety of functions and cellular processes, we applied a functional classification algorithm, which clustered genes into a limited number of functionally related groups. We established that AR genes are significantly enriched for specific keywords. Our results show a significant modulation of genes implicated in signal transduction pathways. No AR-specific alteration of DNA repair could be observed. Nevertheless, it is likely that modulation of DNA repair activity results, at least partly, from post-transcriptional regulation. One major hypothesis is that de-regulations of signal transduction pathways and apoptosis may be responsible for the AR phenotype. In previous work, we demonstrated that radiation-induced AR in mice during organogenesis is related to Trp53 gene status and to the occurrence of radiation-induced apoptosis. Other work proposed that p53

  20. The Applicability of Incoherent Array Processing to IMS Seismic Array Stations

    Science.gov (United States)

    Gibbons, S. J.

    2012-04-01

    The seismic arrays of the International Monitoring System for the CTBT differ greatly in size and geometry, with apertures ranging from below 1 km to over 60 km. Large- and medium-aperture arrays with large inter-site spacings complicate the detection and estimation of high-frequency phases, since signals are often incoherent between sensors. Many such phases, typically from events at regional distances, remain undetected since pipeline algorithms often consider only frequencies low enough to allow coherent array processing. High-frequency phases that are detected are frequently attributed qualitatively incorrect backazimuth and slowness estimates and are consequently not associated with the correct event hypotheses. This can lead to missed events, both through a lack of contributing phase detections and through corruption of event hypotheses by spurious detections. Continuous spectral estimation can be used for phase detection and parameter estimation on the largest-aperture arrays, with phase arrivals identified as local maxima on beams of transformed spectrograms. The estimation procedure in effect measures group velocity rather than phase velocity, and the ability to estimate backazimuth and slowness requires that the spatial extent of the array is large enough to resolve time-delays between envelopes with a period of approximately 4 or 5 seconds. The NOA, AKASG, YKA, WRA, and KURK arrays have apertures in excess of 20 km, and spectrogram beamforming on these stations provides high-quality slowness estimates for regional phases without additional post-processing. Seven arrays with apertures between 10 and 20 km (MJAR, ESDC, ILAR, KSRS, CMAR, ASAR, and EKA) can provide robust parameter estimates subject to a smoothing of the resulting slowness grids, most effectively achieved by convolving the measured slowness grids with the array response function for a 4 or 5 second period signal. The MJAR array in Japan recorded high-SNR Pn signals for both the 2006 and 2009 North Korea
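The incoherent (spectrogram) beamforming idea, aligning and summing per-sensor envelopes so that group delays rather than phase delays are measured, can be sketched as follows. The envelope construction and the integer-sample delays are simplifying assumptions for illustration; an actual pipeline would derive the delays from a trial backazimuth/slowness pair.

```python
import numpy as np

def spectrogram_beam(envelopes, delays):
    """Incoherent delay-and-sum beam over per-sensor spectrogram envelopes.

    envelopes: (n_sensors, n_times) array of smoothed band envelopes.
    delays:    integer sample delays aligning the envelopes for one trial
               backazimuth/slowness (group-velocity alignment, not phase).
    """
    beam = np.zeros(envelopes.shape[1])
    for env, d in zip(envelopes, delays):
        beam += np.roll(env, -d)          # advance each trace by its delay
    return beam / len(envelopes)
```

Scanning the delays over a slowness grid and taking the beam maximum yields the envelope-based backazimuth/slowness estimate described in the abstract.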

  1. Optimal and adaptive methods of processing hydroacoustic signals (review)

    Science.gov (United States)

    Malyshkin, G. S.; Sidel'nikov, G. B.

    2014-09-01

Different methods of optimal and adaptive processing of hydroacoustic signals for multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector based on classical or fast projection algorithms is analyzed, which estimates the background using either median filtering or the method of bilateral spatial contrast.
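The background-estimation idea — track a robust noise floor with a median filter and flag excursions above it — can be sketched as follows. The function name, window length, and threshold factor are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def median_background_detector(power, win=51, k=4.0):
    """Flag samples whose power exceeds k times a median-filtered background.

    A running median is robust to the strong transients we want to detect,
    so it tracks the noise floor rather than the signals themselves.
    """
    half = win // 2
    padded = np.pad(power, half, mode="edge")
    background = np.array([np.median(padded[i:i + win])
                           for i in range(len(power))])
    return power > k * background

# Toy trace: flat noise floor with one strong arrival
rng = np.random.default_rng(0)
power = np.abs(rng.normal(1.0, 0.1, 500))
power[250:255] += 20.0                      # injected strong signal
hits = median_background_detector(power)
```

Because the median window is much longer than the transient, the injected arrival barely perturbs the background estimate and is flagged cleanly.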

  2. An adaptive orienting theory of error processing.

    Science.gov (United States)

    Wessel, Jan R

    2018-03-01

    The ability to detect and correct action errors is paramount to safe and efficient goal-directed behaviors. Existing work on the neural underpinnings of error processing and post-error behavioral adaptations has led to the development of several mechanistic theories of error processing. These theories can be roughly grouped into adaptive and maladaptive theories. While adaptive theories propose that errors trigger a cascade of processes that will result in improved behavior after error commission, maladaptive theories hold that error commission momentarily impairs behavior. Neither group of theories can account for all available data, as different empirical studies find both impaired and improved post-error behavior. This article attempts a synthesis between the predictions made by prominent adaptive and maladaptive theories. Specifically, it is proposed that errors invoke a nonspecific cascade of processing that will rapidly interrupt and inhibit ongoing behavior and cognition, as well as orient attention toward the source of the error. It is proposed that this cascade follows all unexpected action outcomes, not just errors. In the case of errors, this cascade is followed by error-specific, controlled processing, which is specifically aimed at (re)tuning the existing task set. This theory combines existing predictions from maladaptive orienting and bottleneck theories with specific neural mechanisms from the wider field of cognitive control, including from error-specific theories of adaptive post-error processing. The article aims to describe the proposed framework and its implications for post-error slowing and post-error accuracy, propose mechanistic neural circuitry for post-error processing, and derive specific hypotheses for future empirical investigations. © 2017 Society for Psychophysiological Research.

  3. Neural Adaptation Effects in Conceptual Processing

    Directory of Open Access Journals (Sweden)

    Barbara F. M. Marino

    2015-07-01

Full Text Available We investigated the conceptual processing of nouns referring to objects characterized by a highly typical color and orientation. We used a go/no-go task in which we asked participants to categorize each noun as referring or not to natural entities (e.g., animals) after a selective adaptation of color-edge neurons in the posterior LV4 region of the visual cortex was induced by means of a McCollough effect procedure. This manipulation affected categorization: the green-vertical adaptation led to slower responses than the green-horizontal adaptation, regardless of the specific color and orientation of the to-be-categorized noun. This result suggests that the conceptual processing of natural entities may entail the activation of modality-specific neural channels with weights proportional to the reliability of the signals produced by these channels during actual perception. This finding is discussed with reference to the debate about the grounded cognition view.

  4. Total focusing method with correlation processing of antenna array signals

    Science.gov (United States)

    Kozhemyak, O. A.; Bortalevich, S. I.; Loginov, E. L.; Shinyakov, Y. A.; Sukhorukov, M. P.

    2018-03-01

    The article proposes a method of preliminary correlation processing of a complete set of antenna array signals used in the image reconstruction algorithm. The results of experimental studies of 3D reconstruction of various reflectors using and without correlation processing are presented in the article. Software ‘IDealSystem3D’ by IDeal-Technologies was used for experiments. Copper wires of different diameters located in a water bath were used as a reflector. The use of correlation processing makes it possible to obtain more accurate reconstruction of the image of the reflectors and to increase the signal-to-noise ratio. The experimental results were processed using an original program. This program allows varying the parameters of the antenna array and sampling frequency.

  5. Bayesian nonparametric adaptive control using Gaussian processes.

    Science.gov (United States)

    Chowdhary, Girish; Kingravi, Hassan A; How, Jonathan P; Vela, Patricio A

    2015-03-01

Most current model reference adaptive control (MRAC) methods rely on parametric adaptive elements, in which the number of parameters of the adaptive element is fixed a priori, often through expert judgment. An example of such an adaptive element is radial basis function networks (RBFNs), with RBF centers preallocated based on the expected operating domain. If the system operates outside of the expected operating domain, this adaptive element can become noneffective in capturing and canceling the uncertainty, thus rendering the adaptive controller only semiglobal in nature. This paper investigates a Gaussian process-based Bayesian MRAC architecture (GP-MRAC), which leverages the power and flexibility of GP Bayesian nonparametric models of uncertainty. The GP-MRAC does not require the centers to be preallocated, can inherently handle measurement noise, and enables MRAC to handle a broader set of uncertainties, including those that are defined as distributions over functions. We use stochastic stability arguments to show that GP-MRAC guarantees good closed-loop performance with no prior domain knowledge of the uncertainty. Online implementable GP inference methods are compared in numerical simulations against RBFN-MRAC with preallocated centers and are shown to provide better tracking and improved long-term learning.
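The regression core behind such a GP model of uncertainty — the posterior mean of a zero-mean GP with an RBF kernel — can be sketched in plain numpy. This is batch regression with made-up hyperparameters for illustration, not the online GP-MRAC inference the paper benchmarks.

```python
import numpy as np

def gp_posterior_mean(X_train, y_train, X_test, length=1.0, noise=1e-2):
    """Posterior mean of a zero-mean GP with an RBF (squared-exponential) kernel."""
    def rbf(A, B):
        d2 = (A[:, None] - B[None, :]) ** 2
        return np.exp(-0.5 * d2 / length ** 2)

    # Regularized train-train kernel and test-train cross kernel
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    K_star = rbf(X_test, X_train)
    return K_star @ np.linalg.solve(K, y_train)

# Recover a 1-D sine from noisy samples
rng = np.random.default_rng(1)
X = np.linspace(0, 2 * np.pi, 30)
y = np.sin(X) + 0.05 * rng.normal(size=30)
X_new = np.array([np.pi / 2, 3 * np.pi / 2])
mu = gp_posterior_mean(X, y, X_new)
```

Unlike a fixed-center RBFN, nothing here is preallocated: every training point contributes a kernel column, which is the nonparametric flexibility the abstract refers to.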

  6. Model-based processing for underwater acoustic arrays

    CERN Document Server

    Sullivan, Edmund J

    2015-01-01

    This monograph presents a unified approach to model-based processing for underwater acoustic arrays. The use of physical models in passive array processing is not a new idea, but it has been used on a case-by-case basis, and as such, lacks any unifying structure. This work views all such processing methods as estimation procedures, which then can be unified by treating them all as a form of joint estimation based on a Kalman-type recursive processor, which can be recursive either in space or time, depending on the application. This is done for three reasons. First, the Kalman filter provides a natural framework for the inclusion of physical models in a processing scheme. Second, it allows poorly known model parameters to be jointly estimated along with the quantities of interest. This is important, since in certain areas of array processing already in use, such as those based on matched-field processing, the so-called mismatch problem either degrades performance or, indeed, prevents any solution at all. Third...

  7. Processing and Linguistics Properties of Adaptable Systems

    Directory of Open Access Journals (Sweden)

    Dumitru TODOROI

    2006-01-01

Full Text Available Continuation and development of the research in Adaptable Programming Initialization [Tod-05.1,2,3] is presented. As a continuation of [Tod-05.2,3], this paper develops the metalinguistic tools used in the process of introducing new constructions (data, operations, instructions, and controls). The generalization schemes for the evaluation of adaptable languages and systems are discussed. These results, like those of [Tod-05.2,3], were obtained by a team composed of the researchers D. Todoroi [Tod-05.4], Z. Todoroi [ZTod-05], and D. Micusa [Mic-03]. The presented results will be included in the book [Tod-06].

  8. Directional hearing aid using hybrid adaptive beamformer (HAB) and binaural ITE array

    Science.gov (United States)

    Shaw, Scott T.; Larow, Andy J.; Gibian, Gary L.; Sherlock, Laguinn P.; Schulein, Robert

    2002-05-01

A directional hearing aid algorithm called the Hybrid Adaptive Beamformer (HAB), developed for NIH/NIA, can be applied to many different microphone array configurations. In this project the HAB algorithm was applied to a new array employing in-the-ear microphones at each ear (HAB-ITE), to see if previous HAB performance could be achieved with a more cosmetically acceptable package. With diotic output, the average benefit in threshold SNR was 10.9 dB for three hard-of-hearing (HoH) and 11.7 dB for five normal-hearing subjects. These results are slightly better than previous results of equivalent tests with a 3-in. array. With an innovative binaural fitting, a small benefit beyond that provided by diotic adaptive beamforming was observed: 12.5 dB for HoH and 13.3 dB for normal-hearing subjects, a 1.6 dB improvement over the diotic presentation. Subjectively, the binaural fitting preserved binaural hearing abilities, giving the user a sense of space, and providing left-right localization. Thus the goal of creating an adaptive beamformer that simultaneously provides excellent noise reduction and binaural hearing was achieved. Further work remains before the HAB-ITE can be incorporated into a real product: optimizing binaural adaptive beamforming, and integrating the concept with other technologies to produce a viable product prototype. [Work supported by NIH/NIDCD.]

  9. Adapting Controlled-source Coherence Analysis to Dense Array Data in Earthquake Seismology

    Science.gov (United States)

    Schwarz, B.; Sigloch, K.; Nissen-Meyer, T.

    2017-12-01

Exploration seismology deals with highly coherent wave fields generated by repeatable controlled sources and recorded by dense receiver arrays, whose geometry is tailored to back-scattered energy normally neglected in earthquake seismology. Owing to these favorable conditions, stacking and coherence analysis are routinely employed to suppress incoherent noise and regularize the data, thereby strongly contributing to the success of subsequent processing steps, including migration for the imaging of back-scattering interfaces or waveform tomography for the inversion of velocity structure. Attempts have been made to utilize wave field coherence on the length scales of passive-source seismology, e.g. for the imaging of transition-zone discontinuities or the core-mantle-boundary using reflected precursors. Results are however often deteriorated due to the sparse station coverage and interference of faint back-scattered with transmitted phases. USArray sampled wave fields generated by earthquake sources at an unprecedented density and similar array deployments are ongoing or planned in Alaska, the Alps and Canada. This makes the local coherence of earthquake data an increasingly valuable resource to exploit. Building on the experience in controlled-source surveys, we aim to extend the well-established concept of beam-forming to the richer toolbox that is nowadays used in seismic exploration. We suggest adapted strategies for local data coherence analysis, where summation is performed with operators that extract the local slope and curvature of wave fronts emerging at the receiver array. Besides estimating wave front properties, we demonstrate that the inherent data summation can also be used to generate virtual station responses at intermediate locations where no actual deployment was performed. Owing to the fact that stacking acts as a directional filter, interfering coherent wave fields can be efficiently separated from each other by means of coherent subtraction. We

  10. Highly scalable parallel processing of extracellular recordings of Multielectrode Arrays.

    Science.gov (United States)

    Gehring, Tiago V; Vasilaki, Eleni; Giugliano, Michele

    2015-01-01

Technological advances of Multielectrode Arrays (MEAs) used for multisite, parallel electrophysiological recordings lead to an ever increasing amount of raw data being generated. Arrays with hundreds up to a few thousand electrodes are slowly seeing widespread use, and the expectation is that more sophisticated arrays will become available in the near future. In order to process the large data volumes resulting from MEA recordings there is a pressing need for new software tools able to process many data channels in parallel. Here we present a new tool for processing MEA data recordings that makes use of new programming paradigms and recent technology developments to unleash the power of modern highly parallel hardware, such as multi-core CPUs with vector instruction sets or GPGPUs. Our tool builds on and complements existing MEA data analysis packages. It shows high scalability and can be used to speed up some performance critical pre-processing steps such as data filtering and spike detection, helping to make the analysis of larger data sets tractable.
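A spike-detection pre-processing step of the kind mentioned above can be vectorized over all channels at once, which is exactly the shape of workload that maps well onto SIMD CPUs or GPUs. The sketch below is a generic threshold-crossing detector with an assumed robust-noise convention, not the tool's actual implementation.

```python
import numpy as np

def detect_spikes(traces, k=5.0):
    """Negative-threshold spike detection on all channels at once.

    traces : (n_channels, n_samples). The per-channel threshold is k times
    the robust noise estimate sigma = median(|x|) / 0.6745, a common
    convention in extracellular spike detection.
    """
    sigma = np.median(np.abs(traces), axis=1, keepdims=True) / 0.6745
    thr = -k * sigma
    above = traces[:, :-1] > thr            # sample before the crossing
    below = traces[:, 1:] <= thr            # sample at the crossing
    onsets = above & below                  # negative-going threshold crossings
    return [np.nonzero(row)[0] + 1 for row in onsets]

rng = np.random.default_rng(2)
traces = rng.normal(0.0, 1.0, (4, 1000))   # 4 channels of unit noise
traces[2, 500] = -40.0                     # one large negative spike
spike_times = detect_spikes(traces)
```

Because every operation is an elementwise array expression, the same code parallelizes trivially across channels (e.g. by swapping numpy for a GPU array library with the same API).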

  11. CR-Calculus and adaptive array theory applied to MIMO random vibration control tests

    Science.gov (United States)

    Musella, U.; Manzato, S.; Peeters, B.; Guillaume, P.

    2016-09-01

    Performing Multiple-Input Multiple-Output (MIMO) tests to reproduce the vibration environment in a user-defined number of control points of a unit under test is necessary in applications where a realistic environment replication has to be achieved. MIMO tests require vibration control strategies to calculate the required drive signal vector that gives an acceptable replication of the target. This target is a (complex) vector with magnitude and phase information at the control points for MIMO Sine Control tests while in MIMO Random Control tests, in the most general case, the target is a complete spectral density matrix. The idea behind this work is to tailor a MIMO random vibration control approach that can be generalized to other MIMO tests, e.g. MIMO Sine and MIMO Time Waveform Replication. In this work the approach is to use gradient-based procedures over the complex space, applying the so called CR-Calculus and the adaptive array theory. With this approach it is possible to better control the process performances allowing the step-by-step Jacobian Matrix update. The theoretical bases behind the work are followed by an application of the developed method to a two-exciter two-axis system and by performance comparisons with standard methods.

  12. Signal Processing for a Lunar Array: Minimizing Power Consumption

    Science.gov (United States)

    D'Addario, Larry; Simmons, Samuel

    2011-01-01

Motivation for the study is: (1) a Lunar Radio Array for low frequency, high redshift Dark Ages/Epoch of Reionization observations (z = 6-50, f = 30-200 MHz); (2) high precision cosmological measurements of 21 cm H I line fluctuations; (3) probing the universe before first star formation and providing information about the Intergalactic Medium and the evolution of large scale structures; (4) testing whether the current cosmological model accurately describes the Universe before reionization. The Lunar Radio Array is (1) a radio interferometer based on the far side of the moon — (1a) necessary for precision measurements, (1b) shielded from earth-based and solar RFI, (1c) with no permanent ionosphere; (2) a minimum collecting area of approximately 1 square km and brightness sensitivity 10 mK; (3) several technologies must be developed before deployment. The power needed to process signals from a large array of nonsteerable elements is not prohibitive, even for the Moon, and even in current technology. Two different concepts have been proposed: (1) the Dark Ages Radio Interferometer (DALI) and (2) the Lunar Array for Radio Cosmology (LARC).

  13. Physics-based signal processing algorithms for micromachined cantilever arrays

    Science.gov (United States)

    Candy, James V; Clague, David S; Lee, Christopher L; Rudd, Robert E; Burnham, Alan K; Tringe, Joseph W

    2013-11-19

    A method of using physics-based signal processing algorithms for micromachined cantilever arrays. The methods utilize deflection of a micromachined cantilever that represents the chemical, biological, or physical element being detected. One embodiment of the method comprises the steps of modeling the deflection of the micromachined cantilever producing a deflection model, sensing the deflection of the micromachined cantilever and producing a signal representing the deflection, and comparing the signal representing the deflection with the deflection model.

  14. Adaptive multiple importance sampling for Gaussian processes

    Czech Academy of Sciences Publication Activity Database

    Xiong, X.; Šmídl, Václav; Filippone, M.

    2017-01-01

    Roč. 87, č. 8 (2017), s. 1644-1665 ISSN 0094-9655 R&D Projects: GA MŠk(CZ) 7F14287 Institutional support: RVO:67985556 Keywords : Gaussian Process * Bayesian estimation * Adaptive importance sampling Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability Impact factor: 0.757, year: 2016 http://library.utia.cas.cz/separaty/2017/AS/smidl-0469804.pdf

  15. Systolic array processing of the sequential decoding algorithm

    Science.gov (United States)

    Chang, C. Y.; Yao, K.

    1989-01-01

    A systolic array processing technique is applied to implementing the stack algorithm form of the sequential decoding algorithm. It is shown that sorting, a key function in the stack algorithm, can be efficiently realized by a special type of systolic arrays known as systolic priority queues. Compared to the stack-bucket algorithm, this approach is shown to have the advantages that the decoding always moves along the optimal path, that it has a fast and constant decoding speed and that its simple and regular hardware architecture is suitable for VLSI implementation. Three types of systolic priority queues are discussed: random access scheme, shift register scheme and ripple register scheme. The property of the entries stored in the systolic priority queue is also investigated. The results are applicable to many other basic sorting type problems.
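The role the systolic priority queue plays — always surfacing the best partial path for extension — can be illustrated with an ordinary binary heap standing in for the hardware queue. The metric function and toy values below are assumptions for the sketch, not the paper's decoder.

```python
import heapq

def stack_decode(metrics, depth):
    """Skeleton of the stack (sequential) decoding loop.

    metrics(path) -> branch metrics for extending `path` by bit 0 or 1.
    The best partial path (largest cumulative metric) is always extended
    first; that sorting job is what the paper maps onto a systolic
    priority queue, emulated here with Python's heapq (a min-heap on the
    negated metric).
    """
    heap = [(-0.0, ())]                      # (negated metric, path)
    while heap:
        neg_m, path = heapq.heappop(heap)    # best path so far
        if len(path) == depth:
            return path, -neg_m              # reached a full-length path
        for bit, m in enumerate(metrics(path)):
            heapq.heappush(heap, (neg_m - m, path + (bit,)))

# Toy metric: extending by bit 1 is always the better branch
best = stack_decode(lambda path: (0.0, 1.0), depth=4)
```

Since the best path is extended at every step, the decoder always moves along the optimal path, which is the advantage over the stack-bucket algorithm noted in the abstract.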

  16. Multivariable adaptive control of bio process

    Energy Technology Data Exchange (ETDEWEB)

    Maher, M.; Bahhou, B.; Roux, G. [Centre National de la Recherche Scientifique (CNRS), 31 - Toulouse (France); Maher, M. [Faculte des Sciences, Rabat (Morocco). Lab. de Physique

    1995-12-31

This paper presents a multivariable adaptive control of a continuous-flow fermentation process for the alcohol production. The linear quadratic control strategy is used for the regulation of substrate and ethanol concentrations in the bioreactor. The control inputs are the dilution rate and the influent substrate concentration. A robust identification algorithm is used for the on-line estimation of linear MIMO model's parameters. Experimental results of a pilot-plant fermenter application are reported and show the control performances. (authors) 8 refs.

  17. Adaptive Layer Height During DLP Materials Processing

    DEFF Research Database (Denmark)

    Pedersen, David Bue; Zhang, Yang; Nielsen, Jakob Skov

    2016-01-01

for considerable process speedup during the Additive Manufacture of components that contain areas of low cross-section variability, at no loss of surface quality. The adaptive slicing strategy was tested with a purpose built vat polymerisation system and numerical engine designed and constructed to serve as a Next-Gen technology platform. By means of assessing hemispherical manufactured test specimen and through 3D surface mapping with variable-focus microscopy and confocal microscopy, a balance between minimal loss of surface quality with a maximal increase of manufacturing rate has been identified as a simple angle

  18. SAR processing with stepped chirps and phased array antennas.

    Energy Technology Data Exchange (ETDEWEB)

    Doerry, Armin Walter

    2006-09-01

    Wideband radar signals are problematic for phased array antennas. Wideband radar signals can be generated from series or groups of narrow-band signals centered at different frequencies. An equivalent wideband LFM chirp can be assembled from lesser-bandwidth chirp segments in the data processing. The chirp segments can be transmitted as separate narrow-band pulses, each with their own steering phase operation. This overcomes the problematic dilemma of steering wideband chirps with phase shifters alone, that is, without true time-delay elements.
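Generating the narrow-band chirp segments that such an equivalent wideband LFM chirp is assembled from can be sketched in numpy. This shows only segment synthesis with assumed parameters; the coherent reassembly in the data processing (and the per-pulse steering phases) is the part the abstract actually addresses and is not reproduced here.

```python
import numpy as np

def stepped_chirp(n_seg, seg_bw, seg_dur, fs):
    """Concatenate narrow-band LFM chirp segments into a stepped chirp.

    Each segment sweeps seg_bw Hz in seg_dur s; its centre frequency steps
    by seg_bw between pulses, so the pieces together cover n_seg * seg_bw
    of total bandwidth around baseband.
    """
    t = np.arange(round(seg_dur * fs)) / fs
    rate = seg_bw / seg_dur                          # chirp rate, Hz/s
    segments = []
    for k in range(n_seg):
        f0 = (k - (n_seg - 1) / 2) * seg_bw          # segment centre offset
        phase = 2 * np.pi * (f0 * t + 0.5 * rate * t ** 2)
        segments.append(np.exp(1j * phase))          # unit-amplitude LFM piece
    return np.concatenate(segments)

# Four 25 kHz segments -> 100 kHz total bandwidth at a 200 kHz complex rate
sig = stepped_chirp(n_seg=4, seg_bw=25e3, seg_dur=1e-3, fs=200e3)
```

Each segment is narrow-band enough for phase-shifter steering on its own, which is the dilemma-avoiding property the abstract describes.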

  19. A Simple Approach in Estimating the Effectiveness of Adapting Mirror Concentrator and Tracking Mechanism for PV Arrays in the Tropics

    Directory of Open Access Journals (Sweden)

    M. E. Ya’acob

    2014-01-01

Full Text Available Mirror concentrating elements and tracking mechanisms have been seriously investigated and widely adapted in solar PV technology. In this study, a practical in-field method is conducted in Serdang, Selangor, Malaysia, for the two technologies in comparison to the common fixed flat PV arrays. The data sampling process is measured under stochastic weather characteristics with the main target of calculating the effectiveness of PV power output. The data are monitored, recorded, and analysed in real time via a GPRS online monitoring system for 10 consecutive months. The analysis is based on a simple comparison of the actual daily power generation from each PV generator with statistical analysis of multiple linear regression (MLR) and analysis of variance tests (ANOVA). From the analysis, it is shown that the tracking mechanism generates approximately 88 Watts (9.4%) and the mirror concentrator generates 144 Watts (23.4%) of additional cumulative dc power for different array configurations at standard testing condition (STC) references. The significant increase in power generation shows the feasibility of applying both mechanisms to PV generators and thus contributes an additional reference for PV array design.

  20. Wavefront sensing and adaptive control in phased array of fiber collimators

    Science.gov (United States)

    Lachinova, Svetlana L.; Vorontsov, Mikhail A.

    2011-03-01

A new wavefront control approach for mitigation of atmospheric turbulence-induced wavefront phase aberrations in coherent fiber-array-based laser beam projection systems is introduced and analyzed. This approach is based on integration of wavefront sensing capabilities directly into the fiber-array transmitter aperture. In the coherent fiber array considered, we assume that each fiber collimator (subaperture) of the array is capable of precompensation of local (on-subaperture) wavefront phase tip and tilt aberrations using controllable rapid displacement of the tip of the delivery fiber at the collimating lens focal plane. In the technique proposed, this tip and tilt phase aberration control is based on maximization of the optical power received through the same fiber collimator using the stochastic parallel gradient descent (SPGD) technique. The coordinates of the fiber tip after the local tip and tilt aberrations are mitigated correspond to the coordinates of the focal-spot centroid of the optical wave backscattered off the target. Similar to a conventional Shack-Hartmann wavefront sensor, phase function over the entire fiber-array aperture can then be retrieved using the coordinates obtained. The piston phases that are required for coherent combining (phase locking) of the outgoing beams at the target plane can be further calculated from the reconstructed wavefront phase. Results of analysis and numerical simulations are presented. Performance of adaptive precompensation of phase aberrations in this laser beam projection system type is compared for various system configurations characterized by the number of fiber collimators and atmospheric turbulence conditions.
The wavefront control concept presented can be effectively applied to long-range laser beam projection scenarios in which the time delay associated with the double-pass laser beam propagation to the target and back is comparable to or even exceeds the characteristic time of the atmospheric turbulence change.
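The SPGD technique named above is a concrete, simple algorithm: dither all control channels at once with random ±perturbations and step along the dither weighted by the observed metric change. The sketch below applies it to a toy scalar metric; the gain, perturbation size, and "received power" function are assumptions for the example, not values from the paper.

```python
import numpy as np

def spgd_maximize(metric, u0, gain=5.0, perturb=0.05, iters=500, seed=3):
    """Stochastic parallel gradient descent (here, ascent) on a scalar metric.

    All channels are dithered simultaneously with random +/- perturbations;
    the resulting metric difference, multiplied channel-wise by the dither,
    estimates the gradient direction without any per-channel probing.
    """
    rng = np.random.default_rng(seed)
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(iters):
        du = perturb * rng.choice([-1.0, 1.0], size=u.shape)
        dJ = metric(u + du) - metric(u - du)   # two-sided metric probe
        u += gain * dJ * du                    # step uphill on the metric
    return u

# Toy metric: negative squared distance to an unknown optimum control setting
target = np.array([1.0, -2.0])
received_power = lambda u: -np.sum((u - target) ** 2)
u_opt = spgd_maximize(received_power, np.zeros(2))
```

The appeal for fiber arrays is that the cost per iteration is just two metric evaluations regardless of the number of control channels.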

  1. The fabrication techniques of Z-pinch targets. Techniques of fabricating self-adapted Z-pinch wire-arrays

    International Nuclear Information System (INIS)

    Qiu Longhui; Wei Yun; Liu Debin; Sun Zuoke; Yuan Yuping

    2002-01-01

In order to fabricate wire arrays for use in Z-pinch physical experiments, the following fabrication techniques are investigated: gold of about 1-1.5 μm thickness is electroplated on the surface of ultra-fine tungsten wires; fibers of deuterated-polystyrene (DPS) with diameters from 30 to 100 microns are made from molten DPS; and two kinds of planar wire-arrays and four types of annular wire-arrays are designed, which are able to adapt to the variation of the distance between the cathode and anode inside the target chamber. Furthermore, wire-arrays with wire diameters from 5-24 μm are fabricated with tungsten wires. The on-site test shows that the wire-arrays can self-adapt to the distance changes perfectly.

  2. Performance Analysis of Blind Beamforming Algorithms in Adaptive Antenna Array in Rayleigh Fading Channel Model

    International Nuclear Information System (INIS)

    Yasin, M; Akhtar, Pervez; Pathan, Amir Hassan

    2013-01-01

In this paper, we analyze the performance of adaptive blind algorithms – i.e. the Kaiser Constant Modulus Algorithm (KCMA) and Hamming CMA (HAMCMA) – in comparison with CMA in a wireless cellular communication system using a digital modulation technique. These blind algorithms are used in the digital signal processor of an adaptive antenna to make it smart and to change the weights of the antenna array system dynamically. The simulation results revealed that KCMA and HAMCMA provide minimum mean square error (MSE) with 1.247 dB and 1.077 dB antenna gain enhancement and a 75% reduction in bit error rate (BER), respectively, over that of CMA. Therefore, KCMA and HAMCMA algorithms give a cost effective solution for a communication system.
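The baseline CMA that the windowed variants build on is compact enough to sketch directly. The tap count, step size, and toy two-tap channel below are assumptions for the example; the update rule is the standard CMA(2,2) stochastic gradient, not the KCMA/HAMCMA variants of the paper.

```python
import numpy as np

def cma_equalize(x, n_taps=5, mu=1e-3, R=1.0):
    """Blind CMA(2,2) equalizer: adapt FIR taps so that |y|^2 -> R.

    Needs no training sequence, only the constant-modulus property of the
    transmitted constellation (here QPSK).
    """
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                    # centre-spike initialisation
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]           # regressor, newest sample first
        y = w @ u                           # equalizer output
        e = y * (np.abs(y) ** 2 - R)        # CMA error term
        w -= mu * e * np.conj(u)            # stochastic gradient step
    return w

# QPSK symbols through a mild two-tap channel, then blind equalization
rng = np.random.default_rng(4)
sym = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 5000)))
x = np.convolve(sym, [1.0, 0.3], mode="same")
w = cma_equalize(x)

dispersion = lambda v: float(np.mean((np.abs(v) ** 2 - 1.0) ** 2))
disp_before = dispersion(x)
disp_after = dispersion(np.convolve(x, w)[5:-5])
```

After adaptation the output modulus dispersion drops well below that of the received signal, which is the property the MSE/BER comparisons in the paper quantify for the tapered variants.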

  3. ADAPTATION PROCESS TO CLIMATE CHANGE IN AGRICULTURE- AN EMPIRICAL STUDY

    Directory of Open Access Journals (Sweden)

    Ghulam Mustafa

    2017-10-01

Full Text Available Climatic variations affect agriculture in an ongoing process with no known end. Adaptations help to reduce the adverse impacts of climate change. Unfortunately, adaptation has never been considered as a process. The current study empirically identified the adaptation process and its different stages. Moreover, little is known about farm level adaptation strategies and their determinants. The study in hand identified farm level adaptation strategies and the determinants of these strategies. The study identified three stages of adaptation, i.e. perception, intention and adaptation. It was found that 71.4% of farmers perceived climate change, 58.5% intended to adapt, while 40.2% actually adapted. The study further explored that farmers adapt through changing crop variety (56.3%), changing planting dates (44.6%), tree plantation (37.5%), increasing/conserving irrigation (39.3%) and crop diversification (49.2%). The adaptation strategies used by farmers were autonomous and mostly determined by their perception of climate change. It was also noted that the adaptation strategies move in a circular process, and once they are adapted they remain adapted for a longer period of time. Some constraints slow the adaptation process, so we recommend that farmers be given price incentives to speed up this process.

  4. Intelligent Adaptation Process for Case Based Systems

    International Nuclear Information System (INIS)

    Nassar, A.M.; Mohamed, A.H.; Mohamed, A.H.

    2014-01-01

Case Based Reasoning (CBR) systems are among the important decision making systems applied in many fields all over the world. The effectiveness of any CBR system depends on the quality of the cases stored in the case library. Similar cases can be retrieved and adapted to produce the solution for a new problem. One of the main issues facing CBR systems is the difficulty of acquiring useful cases. The proposed system introduces a new approach that uses the genetic algorithm (GA) technique to automate constructing the cases in the case library. Also, it can select the best ones to be stored in the library for future use. The proposed system can thereby avoid the problems of uncertain and noisy cases. Besides, it can simplify the retrieval and adaptation processes. So, it can improve the performance of the CBR system. The suggested system can be applied to many real-time problems. It has been applied to diagnosing faults of a wireless network, diagnosing cancer diseases, and software debugging as cases of study. The proposed system has proved its performance in this field.

  5. Adapted diffusion processes for effective forging dies

    Science.gov (United States)

    Paschke, H.; Nienhaus, A.; Brunotte, K.; Petersen, T.; Siegmund, M.; Lippold, L.; Weber, M.; Mejauschek, M.; Landgraf, P.; Braeuer, G.; Behrens, B.-A.; Lampke, T.

    2018-05-01

Hot forging is an effective production method producing safety relevant parts with excellent mechanical properties. The economic efficiency directly depends on the occurring wear of the tools, which limits service lifetime. Several approaches of the present research group aim at minimizing the wear caused by interacting mechanical and thermal loads by using enhanced nitriding technology. Thus, by modifying the surface zone layer it is possible to create a resistance against thermal softening provoking plastic deformation and pronounced abrasive wear. As a disadvantage, intensely nitrided surfaces may possibly include the risk of increased crack sensitivity and therefore feature the chipping of material at the treated surface. Recent projects (evaluated in several industrial applications) show the high technological potential of adapted treatments: A first approach evaluated localized treatments by preventing areas from nitrogen diffusion with applied pastes or other coverages. Now, further ideas are to use this principle to structure the surface with differently designed patterns generating smaller ductile zones beneath nitrided ones. The selection of suitable designs is subject to certain geometrical requirements though. The intention of this approach is to prevent the formation and propagation of cracks under thermal shock conditions. Analytical characterization methods for crack sensitivity of surface zone layers and an accurate system of testing rigs for thermal shock conditions verified the treatment concepts. Additionally, serial forging tests using adapted testing geometries and, finally, tests in the industrial production field were performed. Besides stabilizing the service lifetime and decreasing specific wear mechanisms caused by thermal influences, the crack behavior was influenced positively. This leads to a higher efficiency of the industrial production process and enables higher output in forging campaigns of industrial partners.

  6. The Urban Adaptation and Adaptation Process of Urban Migrant Children: A Qualitative Study

    Science.gov (United States)

    Liu, Yang; Fang, Xiaoyi; Cai, Rong; Wu, Yang; Zhang, Yaofang

    2009-01-01

    This article employs qualitative research methods to explore the urban adaptation and adaptation processes of Chinese migrant children. Through twenty-one in-depth interviews with migrant children, the researchers discovered: The participant migrant children showed a fairly high level of adaptation to the city; their process of urban adaptation…

  7. High Dynamic Range adaptive ΔΣ-based Focal Plane Array architecture

    KAUST Repository

    Yao, Shun

    2012-10-16

    In this paper, an Adaptive Delta-Sigma based architecture for High Dynamic Range (HDR) Focal Plane Arrays is presented. The noise-shaping effect of the Delta-Sigma modulation at the low end, and the distortion noise induced at the high end, of the photodiode current were analyzed in detail. The proposed architecture can extend the DR by about 20N log10(2) dB at the high end of the photodiode current with an N-bit Up-Down counter. At the low end, it can compensate for the larger readout noise by employing Extended Counting. The Adaptive Delta-Sigma architecture employing a 4-bit Up-Down counter achieved about 160 dB of DR, with a Peak SNR (PSNR) of 80 dB at the high end. Compared to the other HDR architectures, the Adaptive Delta-Sigma based architecture provides the widest DR with the best SNR performance in the extended range.
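As a quick sanity check on the quoted figure, the dynamic-range extension contributed by an N-bit up-down counter can be computed directly (a sketch; we interpret the abstract's expression as 20·N·log10(2) dB, i.e. roughly 6 dB per counter bit):

```python
import math

def dr_extension_db(n_bits):
    """DR gained by an N-bit up-down counter: 20*log10(2**N) dB,
    i.e. about 6.02 dB per counter bit."""
    return 20 * n_bits * math.log10(2)

print(round(dr_extension_db(4), 1))  # 4-bit counter, as in the abstract
```

For the 4-bit counter this gives roughly 24 dB of extension on top of the base converter range.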

  8. Mosaic Process for the Fabrication of an Acoustic Transducer Array

    National Research Council Canada - National Science Library

    2005-01-01

    .... Deriving a geometric shape for the array based on the established performance level. Selecting piezoceramic materials based on considerations related to the performance level and derived geometry...

  9. Network measures for characterising team adaptation processes

    NARCIS (Netherlands)

    Barth, S.K.; Schraagen, J.M.C.; Schmettow, M.

    2015-01-01

    The aim of this study was to advance the conceptualisation of team adaptation by applying social network analysis (SNA) measures in a field study of a paediatric cardiac surgical team adapting to changes in task complexity and ongoing dynamic complexity. Forty surgical procedures were observed by

  10. Impact of Antenna Placement on Frequency Domain Adaptive Antenna Array in Hybrid FRF Cellular System

    Directory of Open Access Journals (Sweden)

    Sri Maldia Hari Asti

    2012-01-01

    Frequency domain adaptive antenna array (FDAAA) processing is an effective method to suppress interference caused by frequency-selective fading and multiple-access interference (MAI) in single-carrier (SC) transmission. However, the performance of the FDAAA receiver is affected by antenna placement parameters such as antenna separation and the spread of the angle of arrival (AOA). On the other hand, hybrid frequency reuse can be adopted in cellular systems to improve cellular capacity; the optimal frequency reuse factor (FRF), however, depends on the channel propagation and on the transceiver scheme as well. In this paper, we analyze the impact of antenna separation and AOA spread on the FDAAA receiver and optimize the cellular capacity by using hybrid FRF.
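The dependence on antenna separation and AOA spread can be illustrated with a small Monte Carlo sketch: the fading correlation between two antennas falls as either the separation or the angular spread grows (our illustration with a uniform AOA model and hypothetical numbers, not the paper's channel):

```python
import numpy as np

def fading_correlation(d_over_lambda, aoa_spread_deg, n_paths=20000, seed=0):
    """Magnitude of the fading correlation between two antennas separated by
    d (in wavelengths) when paths arrive uniformly within +/- spread around
    broadside. Illustrative Monte Carlo sketch, not the paper's model."""
    rng = np.random.default_rng(seed)
    phi = np.deg2rad(rng.uniform(-aoa_spread_deg, aoa_spread_deg, n_paths))
    return abs(np.mean(np.exp(2j * np.pi * d_over_lambda * np.sin(phi))))

# A wider AOA spread decorrelates the antennas at a fixed half-wavelength spacing.
print(fading_correlation(0.5, 5), fading_correlation(0.5, 60))
```

Low correlation between elements is what gives an adaptive array independent looks at the interference, so placement directly shapes FDAAA performance.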

  11. On Organizational Adaptation via Dynamic Process Selection

    National Research Council Canada - National Science Library

    Handley, Holly A; Levis, Alexander H

    2000-01-01

    .... An executable organizational model composed of individual models of a five stage interacting decision maker is used to evaluate the effectiveness of the different adaptation strategies on organizational performance...

  12. Blind Time-Frequency Analysis for Source Discrimination in Multisensor Array Processing

    National Research Council Canada - National Science Library

    Amin, Moeness

    1999-01-01

    .... We have clearly demonstrated, through analysis and simulations, the offerings of time-frequency distributions in solving key problems in sensor array processing, including direction finding, source...

  13. Modular and Adaptive Control of Sound Processing

    Science.gov (United States)

    van Nort, Douglas

    parameters. In this view, desired gestural dynamics and sonic response are achieved through modular construction of mapping layers that are themselves subject to parametric control. Complementing this view of the design process, the work concludes with an approach in which the creation of gestural control/sound dynamics is considered at the low level of the underlying sound model. The result is an adaptive system that is specialized to noise-based transformations that are particularly relevant in an electroacoustic music context. Taken together, these different approaches to design and evaluation result in a unified framework for the creation of an instrumental system. The key point is that this framework addresses the influence that mapping structure and control dynamics have on the perceived feel of the instrument. Each of the results illustrates this using either top-down or bottom-up approaches that consider the musical control context, thereby pointing to the greater potential for refined sonic articulation that can be had by combining them in the design process.

  14. Adaptive enhancement of learning protocol in hippocampal cultured networks grown on multielectrode arrays

    Science.gov (United States)

    Pimashkin, Alexey; Gladkov, Arseniy; Mukhina, Irina; Kazantsev, Victor

    2013-01-01

    Learning in neuronal networks can be investigated using dissociated cultures on multielectrode arrays supplied with appropriate closed-loop stimulation. Previous studies showed that weakly respondent neurons on the electrodes can be trained to increase their evoked spiking rate within a predefined time window after the stimulus. Such neurons can be associated with weak synaptic connections in the nearby culture network. The stimulation leads to an increase in the connectivity and in the response. However, it was not possible to perform the learning protocol for neurons on electrodes with relatively strong synaptic inputs that respond at higher rates. We proposed an adaptive closed-loop stimulation protocol capable of achieving learning even for highly respondent electrodes. This means that the culture network can appropriately reorganize its synaptic connectivity to generate a desired response. We introduced an adaptive reinforcement condition accounting for the response variability under control stimulation. It significantly extended the learning protocol to a large number of responding electrodes, independently of their base response level. We also found that the learning effect was preserved 4–6 h after training. PMID:23745105
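A minimal sketch of such an adaptive reinforcement condition (synthetic numbers; the mean-plus-k-sigma rule below is our illustrative choice, not necessarily the authors' exact criterion):

```python
import numpy as np

rng = np.random.default_rng(1)

# Evoked spike counts recorded during control (non-reinforced) stimulation.
control = rng.poisson(lam=8.0, size=100)

# Adaptive criterion: reinforce only responses that exceed the electrode's own
# control variability, so strongly responding electrodes still must improve.
k = 2.0
criterion = control.mean() + k * control.std()

def reinforce(response):
    """Closed-loop decision: does this evoked response earn reinforcement?"""
    return response > criterion

print(round(float(criterion), 2))
```

Because the threshold is derived from each electrode's own control distribution, the same protocol applies to weakly and strongly responding electrodes alike, which is the point of the adaptive condition.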

  15. Signal processing for solar array monitoring, fault detection, and optimization

    CERN Document Server

    Braun, Henry; Spanias, Andreas

    2012-01-01

    Although the solar energy industry has experienced rapid growth recently, high-level management of photovoltaic (PV) arrays has remained an open problem. As sensing and monitoring technology continues to improve, there is an opportunity to deploy sensors in PV arrays in order to improve their management. In this book, we examine the potential role of sensing and monitoring technology in a PV context, focusing on the areas of fault detection, topology optimization, and performance evaluation/data visualization. First, several types of commonly occurring PV array faults are considered and detection algorithms are described. Next, the potential for dynamic optimization of an array's topology is discussed, with a focus on mitigation of fault conditions and optimization of power output under non-fault conditions. Finally, monitoring system design considerations such as type and accuracy of measurements, sampling rate, and communication protocols are considered. It is our hope that the benefits of monitoring presen...
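As a flavor of the fault-detection theme, a simple robust outlier test on per-string currents can be sketched as follows (an illustrative detector with made-up numbers, not one of the book's algorithms):

```python
import numpy as np

def flag_faulty_strings(currents, k=4.0):
    """Flag PV strings whose current deviates from the array median by more
    than k robust standard deviations (median absolute deviation).
    Illustrative sketch only; real monitoring uses richer detectors."""
    currents = np.asarray(currents, dtype=float)
    med = np.median(currents)
    mad = np.median(np.abs(currents - med)) * 1.4826  # ~sigma for Gaussian data
    return np.abs(currents - med) > k * mad

# String 4 is shaded or faulted and carries far less current than its peers.
print(flag_faulty_strings([8.1, 8.0, 8.2, 3.5, 7.9, 8.1]).tolist())
# → [False, False, False, True, False, False]
```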

  16. Sampling phased array a new technique for signal processing and ultrasonic imaging

    OpenAIRE

    Bulavinov, A.; Joneit, D.; Kröning, M.; Bernus, L.; Dalichow, M.H.; Reddy, K.M.

    2006-01-01

    Different signal processing and image reconstruction techniques are applied in ultrasonic non-destructive material evaluation. In recent years, rapid development in the fields of microelectronics and computer engineering has led to wide application of phased array systems. A new phased array technique, called "Sampling Phased Array", has been developed at the Fraunhofer Institute for non-destructive testing. It realizes a unique approach to the measurement and processing of ultrasonic signals. The sampling...

  17. A systematic process for adaptive concept exploration

    Science.gov (United States)

    Nixon, Janel Nicole

    several common challenges to the creation of quantitative modeling and simulation environments. Namely, a greater number of alternative solutions implies a greater number of design variables as well as larger ranges on those variables. This translates to a high-dimensional combinatorial problem. As the size and dimensionality of the solution space grow, the number of physically impossible solutions within that space greatly increases. Thus, the ratio of feasible to infeasible design space decreases, making it much harder not only to obtain a good quantitative sample of the space, but also to make sense of that data. This is especially the case in the early stages of design, where it is not practical to dedicate a great deal of resources to performing thorough, high-fidelity analyses on all the potential solutions. To make quantitative analyses feasible in these early stages of design, a method is needed that allows a relatively sparse set of information to be collected quickly and efficiently, and yet that information needs to be meaningful enough on which to base a decision. The method developed to address this need uses a Systematic Process for Adaptive Concept Exploration (SPACE). In the SPACE method, design space exploration occurs in a sequential fashion; as data is acquired, the sampling scheme adapts to the specific problem at hand. Previously gathered data is used to make inferences about the nature of the problem so that future samples can be taken from the more interesting portions of the design space. Furthermore, the SPACE method identifies those analyses that have significant impacts on the relationships being modeled, so that effort can be focused on acquiring only the most pertinent information. The SPACE method uses a four-part sampling scheme to efficiently uncover the parametric relationships between the design variables and responses.
Step 1 aims to identify the location of infeasible space within the region of interest using an initial

  18. Adaptive Space-Time, Processing for High Performance, Robust Military Wireless Communications

    National Research Council Canada - National Science Library

    Haimovich, Alexander

    2000-01-01

    ...: (I) performance of adaptive arrays for wireless communications over fading channels in the presence of cochannel interference particularly the case when the number of interference sources exceeds...

  19. An adaptive management process for forest soil conservation.

    Science.gov (United States)

    Michael P. Curran; Douglas G. Maynard; Ronald L. Heninger; Thomas A. Terry; Steven W. Howes; Douglas M. Stone; Thomas Niemann; Richard E. Miller; Robert F. Powers

    2005-01-01

    Soil disturbance guidelines should be based on comparable disturbance categories adapted to specific local soil conditions, validated by monitoring and research. Guidelines, standards, and practices should be continually improved based on an adaptive management process, which is presented in this paper. Core components of this process include: reliable monitoring...

  20. Adaptive Process Management with ADEPT2

    NARCIS (Netherlands)

    Reichert, M.U.; Rinderle, S.B.; Kreher, U; Dadam, P.

    2005-01-01

    This demo paper describes core functions of the ADEPT2 process management system. In the ADEPT project we have been working on the design and implementation of a next generation process management software. Based on a conceptual framework for dynamic process changes, on novel process support

  1. A novel scalable manufacturing process for the production of hydrogel-forming microneedle arrays.

    Science.gov (United States)

    Lutton, Rebecca E M; Larrañeta, Eneko; Kearney, Mary-Carmel; Boyd, Peter; Woolfson, A David; Donnelly, Ryan F

    2015-10-15

    A novel manufacturing process for fabricating microneedle (MN) arrays has been designed and evaluated. The prototype is able to successfully produce 14×14 MN arrays and is easily capable of scale-up, enabling the transition from laboratory to industry and subsequent commercialisation. The method requires the custom design of metal MN master templates to produce silicone MN moulds using an injection moulding process. The MN arrays produced using this novel method were compared with those made by centrifugation, the traditional method of producing aqueous hydrogel-forming MN arrays. The results showed negligible difference between the two methods, each producing MN arrays of comparable quality. Both types of MN arrays can be successfully inserted into a skin simulant; in both cases the insertion depth was approximately 60% of the needle length and the height reduction after insertion was approximately 3%. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. A self-adaptive thermal switch array for rapid temperature stabilization under various thermal power inputs

    International Nuclear Information System (INIS)

    Geng, Xiaobao; Patel, Pragnesh; Narain, Amitabh; Meng, Dennis Desheng

    2011-01-01

    A self-adaptive thermal switch array (TSA) based on actuation by low-melting-point alloy droplets is reported to stabilize the temperature of a heat-generating microelectromechanical system (MEMS) device within a predetermined range (i.e. the optimal working temperature of the device) with neither a control circuit nor electrical power consumption. When the temperature is below this range, the TSA stays off and works as a thermal insulator, so the MEMS device can quickly heat itself up to its optimal working temperature during startup. Once this temperature is reached, the TSA automatically turns on to increase the thermal conductance, working as an effective thermal spreader. As a result, the MEMS device tends to stay at its optimal working temperature without complex thermal management components and the associated parasitic power loss. A prototype TSA was fabricated and characterized to prove the concept, and the stabilization temperatures under various power inputs were studied both experimentally and theoretically. As the power input increased from 3.8 to 5.8 W, the temperature of the device rose by only 2.5 °C, owing to the stabilization effect of the TSA.

  3. Adaptive constructive processes and the future of memory

    OpenAIRE

    Schacter, Daniel L.

    2012-01-01

    Memory serves critical functions in everyday life, but is also prone to error. This article examines adaptive constructive processes, which play a functional role in memory and cognition but can also produce distortions, errors, or illusions. The article describes several types of memory errors that are produced by adaptive constructive processes, and focuses in particular on the process of imagining or simulating events that might occur in one’s personal future. Simulating future events reli...

  4. Adaptive Algorithms for Automated Processing of Document Images

    Science.gov (United States)

    2011-01-01

    Title of dissertation: Adaptive Algorithms for Automated Processing of Document Images. Mudit Agrawal, Doctor of Philosophy, 2011. Dissertation submitted to the Faculty of the Graduate School of the University

  5. General purpose graphic processing unit implementation of adaptive pulse compression algorithms

    Science.gov (United States)

    Cai, Jingxiao; Zhang, Yan

    2017-07-01

    This study introduces a practical approach to implementing real-time signal processing algorithms for general surveillance radar based on NVIDIA graphics processing units (GPUs). The pulse compression algorithms are implemented using compute unified device architecture (CUDA) libraries such as the CUDA basic linear algebra subroutines and the CUDA fast Fourier transform library, which are adopted from open-source libraries and optimized for NVIDIA GPUs. For more advanced adaptive processing algorithms such as adaptive pulse compression, customized kernel optimization is needed and is investigated here. A statistical optimization approach is developed for this purpose that requires little knowledge of the physical configurations of the kernels. It was found that the kernel optimization approach can significantly improve performance. Benchmark performance is compared with CPU performance in terms of processing acceleration. The proposed implementation framework can be used in various radar systems including ground-based phased array radar, airborne sense-and-avoid radar, and aerospace surveillance radar.
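The core of pulse compression, an FFT-based matched filter, can be sketched in NumPy; on a GPU the same three steps map onto cuFFT transforms plus an element-wise multiply kernel (the waveform and delay below are hypothetical):

```python
import numpy as np

fs, pulse_len, n = 1.0, 64, 256
t = np.arange(pulse_len)
# Linear FM (chirp) transmit pulse -- a common pulse-compression waveform.
tx = np.exp(1j * np.pi * 0.1 * t**2 / pulse_len)

delay = 50
rx = np.zeros(n, dtype=complex)
rx[delay:delay + pulse_len] = tx  # echo from a single point target

# Frequency-domain matched filter: IFFT(FFT(rx) * conj(FFT(tx))).
# On a GPU this is two cuFFT calls and one element-wise kernel.
TX = np.fft.fft(tx, n)
compressed = np.fft.ifft(np.fft.fft(rx) * np.conj(TX))
print(int(np.argmax(np.abs(compressed))))  # → 50, the target delay
```

Adaptive pulse compression replaces the fixed `conj(TX)` filter with a data-dependent one, which is why it needs the customized kernels the study investigates.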

  6. Adaptive Constructive Processes and the Future of Memory

    Science.gov (United States)

    Schacter, Daniel L.

    2012-01-01

    Memory serves critical functions in everyday life but is also prone to error. This article examines adaptive constructive processes, which play a functional role in memory and cognition but can also produce distortions, errors, and illusions. The article describes several types of memory errors that are produced by adaptive constructive processes…

  7. Application of Seismic Array Processing to Tsunami Early Warning

    Science.gov (United States)

    An, C.; Meng, L.

    2015-12-01

    Tsunami wave predictions of the current tsunami warning systems rely on accurate earthquake source inversions of wave height data. They are of limited effectiveness for near-field areas since the tsunami waves arrive before data are collected. Recent seismic and tsunami disasters have revealed the need for early warning to protect near-source coastal populations. In this work we developed the basis for a tsunami warning system based on rapid earthquake source characterisation through regional seismic array back-projections. We explored rapid earthquake source imaging using onshore dense seismic arrays located at regional distances on the order of 1000 km, which provide faster source images than conventional teleseismic back-projections. We implemented this method in a simulated real-time environment and analysed the 2011 Tohoku earthquake rupture with two clusters of Hi-net stations in Kyushu and Northern Hokkaido, and the 2014 Iquique event with the EarthScope USArray Transportable Array. The results yield reasonable estimates of rupture area, which is approximated by an ellipse and leads to the construction of simple slip models based on empirical scaling of the rupture area, seismic moment and average slip. The slip model is then used as the input of the tsunami simulation package COMCOT to predict the tsunami waves. In the example of the Tohoku event, the earthquake source model can be acquired within 6 minutes from the start of rupture and the simulation of tsunami waves takes less than 2 minutes, which could facilitate a timely tsunami warning. The predicted arrival time and wave amplitude reasonably fit observations. Based on this method, we propose to develop an automatic warning mechanism that provides rapid near-field warning for areas of high tsunami risk.
The initial focus will be Japan, Pacific Northwest and Alaska, where dense seismic networks with the capability of real-time data telemetry and open data accessibility, such as the Japanese HiNet (>800
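The delay-and-stack idea behind back-projection can be sketched on a toy 2-D geometry: shift each station's trace back by its predicted travel time to a candidate grid point and keep the point where the stack is most coherent (all station positions, velocities and the source location below are hypothetical):

```python
import numpy as np

fs, v = 20.0, 6.0            # sample rate (Hz) and wave speed (km/s) -- toy values
t = np.arange(0.0, 60.0, 1.0 / fs)
stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]  # km
source = (120.0, 80.0)        # "true" epicentre we try to recover

def synthetic_trace(stn):
    d = np.hypot(stn[0] - source[0], stn[1] - source[1])
    return np.exp(-0.5 * ((t - d / v) / 0.5) ** 2)  # Gaussian arrival at d/v

traces = [synthetic_trace(s) for s in stations]

def stack_power(pt):
    """Shift each trace back by its predicted travel time and stack."""
    stack = np.zeros_like(t)
    for stn, tr in zip(stations, traces):
        d = np.hypot(stn[0] - pt[0], stn[1] - pt[1])
        stack += np.roll(tr, -int(round(d / v * fs)))
    return stack.max()

grid = [(x, y) for x in range(0, 201, 20) for y in range(0, 201, 20)]
best = max(grid, key=stack_power)
print(best)  # → (120, 80): the grid point with the most coherent stack
```

Real back-projection does the same stacking over time windows along the rupture, which is what yields the rupture-area estimate the abstract describes.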

  8. Adaptive Memory: Is Survival Processing Special?

    Science.gov (United States)

    Nairne, James S.; Pandeirada, Josefa N. S.

    2008-01-01

    Do the operating characteristics of memory continue to bear the imprints of ancestral selection pressures? Previous work in our laboratory has shown that human memory may be specially tuned to retain information processed in terms of its survival relevance. A few seconds of survival processing in an incidental learning context can produce recall…

  9. Oxide nano-rod array structure via a simple metallurgical process

    International Nuclear Information System (INIS)

    Nanko, M; Do, D T M

    2011-01-01

    A simple method for fabricating oxide nano-rod array structure via metallurgical process is reported. Some dilute alloys such as Ni(Al) solid solution shows internal oxidation with rod-like oxide precipices during high-temperature oxidation with low oxygen partial pressure. By removing a metal part in internal oxidation zone, oxide nano-rod array structure can be developed on the surface of metallic components. In this report, Al 2 O 3 or NiAl 2 O 4 nano-rod array structures were prepared by using Ni(Al) solid solution. Effects of Cr addition into Ni(Al) solid solution on internal oxidation were also reported. Pack cementation process for aluminizing of Ni surface was applied to prepare nano-rod array components with desired shape. Near-net shape Ni components with oxide nano-rod array structure on their surface can be prepared by using the pack cementation process and internal oxidation,

  10. Algorithm-structured computer arrays and networks architectures and processes for images, percepts, models, information

    CERN Document Server

    Uhr, Leonard

    1984-01-01

    Computer Science and Applied Mathematics: Algorithm-Structured Computer Arrays and Networks: Architectures and Processes for Images, Percepts, Models, Information examines the parallel-array, pipeline, and other network multi-computers.This book describes and explores arrays and networks, those built, being designed, or proposed. The problems of developing higher-level languages for systems and designing algorithm, program, data flow, and computer structure are also discussed. This text likewise describes several sequences of successively more general attempts to combine the power of arrays wi

  11. Adaptive process triage system cannot identify patients with gastrointestinal perforation

    DEFF Research Database (Denmark)

    Bohm, Aske Mathias; Tolstrup, Mai-Britt; Gögenur, Ismail

    2017-01-01

    INTRODUCTION: Adaptive process triage (ADAPT) is a triage tool developed to assess the severity and address the priority of emergency patients. In 2009-2011, ADAPT was the most frequently used triage system in Denmark. Until now, no Danish triage system has been evaluated based on a selective group...... triaged as green or yellow had a GIP that was not identified by the triage system. CONCLUSION: ADAPT is incapable of identifying one of the most critically ill patient groups in need of emergency abdominal surgery. FUNDING: none. TRIAL REGISTRATION: HEH-2013-034 I-Suite: 02336....

  12. Cas4-Dependent Prespacer Processing Ensures High-Fidelity Programming of CRISPR Arrays.

    Science.gov (United States)

    Lee, Hayun; Zhou, Yi; Taylor, David W; Sashital, Dipali G

    2018-04-05

    CRISPR-Cas immune systems integrate short segments of foreign DNA as spacers into the host CRISPR locus to provide molecular memory of infection. Cas4 proteins are widespread in CRISPR-Cas systems and are thought to participate in spacer acquisition, although their exact function remains unknown. Here we show that Bacillus halodurans type I-C Cas4 is required for efficient prespacer processing prior to Cas1-Cas2-mediated integration. Cas4 interacts tightly with the Cas1 integrase, forming a heterohexameric complex containing two Cas1 dimers and two Cas4 subunits. In the presence of Cas1 and Cas2, Cas4 processes double-stranded substrates with long 3' overhangs through site-specific endonucleolytic cleavage. Cas4 recognizes PAM sequences within the prespacer and prevents integration of unprocessed prespacers, ensuring that only functional spacers will be integrated into the CRISPR array. Our results reveal the critical role of Cas4 in maintaining fidelity during CRISPR adaptation, providing a structural and mechanistic model for prespacer processing and integration. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. Millimeter-Wave Microstrip Antenna Array Design and an Adaptive Algorithm for Future 5G Wireless Communication Systems

    Directory of Open Access Journals (Sweden)

    Cheng-Nan Hu

    2016-01-01

    This paper presents a high-gain millimeter-wave (mmW) low-temperature cofired ceramic (LTCC) microstrip antenna array with a compact, simple, and low-profile structure. Incorporating minimum mean square error (MMSE) adaptive algorithms with the proposed 64-element microstrip antenna array, the numerical investigation reveals substantial improvements in interference reduction. A prototype is presented with a simple design suited for mass production. As an experiment, HFSS was used to simulate an antenna with a width of 1 mm and a length of 1.23 mm, resonating at 38 GHz. Two identical mmW LTCC microstrip antenna arrays were built for measurement, and the center element was excited. The results demonstrated a return loss better than 15 dB and a peak gain higher than 6.5 dBi at the frequencies of interest, which verified the feasibility of the design concept.
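A minimal example of the MMSE adaptive-array idea combined with such an antenna (an idealized narrowband sketch with assumed angles, powers, and an 8-element array, not the paper's 64-element configuration):

```python
import numpy as np

def steering(theta_deg, n=8):
    """Half-wavelength ULA steering vector (narrowband assumption)."""
    k = np.arange(n)
    return np.exp(1j * np.pi * k * np.sin(np.deg2rad(theta_deg)))

n = 8
a_d, a_i = steering(0.0, n), steering(40.0, n)  # desired signal and interferer

# Ideal covariance: unit-power desired, strong interferer, white noise.
R = np.outer(a_d, a_d.conj()) + 100 * np.outer(a_i, a_i.conj()) + 0.1 * np.eye(n)
w = np.linalg.solve(R, a_d)  # MMSE/Wiener weights: w = R^-1 p, with p = a_d

gain_d = abs(w.conj() @ a_d)
gain_i = abs(w.conj() @ a_i)
print(gain_i < 0.05 * gain_d)  # interferer is nulled while the desired look is kept
```

In practice `R` and `p` are estimated from received snapshots rather than constructed, but the weight computation is the same linear solve.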

  14. Adaptive multiscale processing for contrast enhancement

    Science.gov (United States)

    Laine, Andrew F.; Song, Shuwu; Fan, Jian; Huda, Walter; Honeyman, Janice C.; Steinbach, Barbara G.

    1993-07-01

    This paper introduces a novel approach for accomplishing mammographic feature analysis through overcomplete multiresolution representations. We show that efficient representations may be identified from digital mammograms within a continuum of scale space and used to enhance features of importance to mammography. Choosing analyzing functions that are well localized in both space and frequency results in a powerful methodology for image analysis. We describe methods of contrast enhancement based on two overcomplete (redundant) multiscale representations: (1) the dyadic wavelet transform and (2) the φ-transform. Mammograms are reconstructed from transform coefficients modified at one or more levels by non-linear, logarithmic, and constant scale-space weight functions. Multiscale edges identified within distinct levels of transform space provide local support for enhancement throughout each decomposition. We demonstrate that features extracted from wavelet spaces can provide an adaptive mechanism for accomplishing local contrast enhancement. We suggest that multiscale detection and local enhancement of singularities may be effectively employed for the visualization of breast pathology without excessive noise amplification.
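The enhancement principle, amplify detail (edge) coefficients in a multiscale decomposition and then reconstruct, can be sketched with a one-level Haar transform (a deliberate simplification; the paper uses the dyadic wavelet and φ-transforms with more elaborate weight functions):

```python
import numpy as np

def haar_decompose(x):
    """One-level orthonormal Haar analysis: approximation and detail bands."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_reconstruct(a, d):
    """Inverse of haar_decompose (perfect reconstruction)."""
    x = np.empty(a.size * 2)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def enhance(x, gain=2.0):
    """Amplify detail (edge) coefficients before reconstruction."""
    a, d = haar_decompose(x)
    return haar_reconstruct(a, gain * d)

x = np.array([1.0, 1.0, 2.0, 4.0, 4.0, 4.0, 1.0, 1.0])
print(np.allclose(enhance(x, gain=1.0), x))  # gain 1 gives perfect reconstruction
```

With `gain > 1` the local jumps (edges) in `x` are steepened while smooth regions are untouched, which is the contrast-enhancement effect described above.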

  15. Collective Mindfulness in Post-implementation IS Adaptation Processes

    DEFF Research Database (Denmark)

    Aanestad, Margun; Jensen, Tina Blegind

    2016-01-01

    identify the way in which the organizational capability we call "collective mindfulness" was achieved. Being aware of how to practically achieve collective mindfulness, managers may be able to better facilitate mindful handling of post-implementation IS adaptation processes....

  16. Behavioral training promotes multiple adaptive processes following acute hearing loss.

    Science.gov (United States)

    Keating, Peter; Rosenior-Patten, Onayomi; Dahmen, Johannes C; Bell, Olivia; King, Andrew J

    2016-03-23

    The brain possesses a remarkable capacity to compensate for changes in inputs resulting from a range of sensory impairments. Developmental studies of sound localization have shown that adaptation to asymmetric hearing loss can be achieved either by reinterpreting altered spatial cues or by relying more on those cues that remain intact. Adaptation to monaural deprivation in adulthood is also possible, but appears to lack such flexibility. Here we show, however, that appropriate behavioral training enables monaurally-deprived adult humans to exploit both of these adaptive processes. Moreover, cortical recordings in ferrets reared with asymmetric hearing loss suggest that these forms of plasticity have distinct neural substrates. An ability to adapt to asymmetric hearing loss using multiple adaptive processes is therefore shared by different species and may persist throughout the lifespan. This highlights the fundamental flexibility of neural systems, and may also point toward novel therapeutic strategies for treating sensory disorders.

  17. Adaption of the Magnetometer Towed Array geophysical system to meet Department of Energy needs for hazardous waste site characterization

    International Nuclear Information System (INIS)

    Cochran, J.R.; McDonald, J.R.; Russell, R.J.; Robertson, R.; Hensel, E.

    1995-10-01

    This report documents US Department of Energy (DOE)-funded activities that have adapted the US Navy's Surface Towed Ordnance Locator System (STOLS) to meet DOE needs for a ''... better, faster, safer and cheaper ...'' system for characterizing inactive hazardous waste sites. These activities were undertaken by Sandia National Laboratories (Sandia), the Naval Research Laboratory, Geo-Centers Inc., New Mexico State University and others under the title of the Magnetometer Towed Array (MTA)

  18. Implementation of an Antenna Array Signal Processing Breadboard for the Deep Space Network

    Science.gov (United States)

    Navarro, Robert

    2006-01-01

    The Deep Space Network Large Array will replace/augment the 34- and 70-meter antenna assets. The array will mainly be used to support NASA's deep space telemetry, radio science, and navigation requirements. The array project will deploy three complexes, in the western U.S., Australia, and at European longitude, each with 400 12-m downlink antennas, plus a DSN central facility at JPL. This facility will remotely conduct all real-time monitoring and control for the network. Signal processing objectives include: providing a means to evaluate the performance of the Breadboard Array's antenna subsystem; designing and building prototype hardware; demonstrating and evaluating proposed signal processing techniques; and gaining experience with various technologies that may be used in the Large Array. Results are summarized.

  19. Efficient processing of two-dimensional arrays with C or C++

    Science.gov (United States)

    Donato, David I.

    2017-07-20

    Because fast and efficient serial processing of raster-graphic images and other two-dimensional arrays is a requirement in land-change modeling and other applications, the effects of 10 factors on the runtimes for processing two-dimensional arrays with C and C++ are evaluated in a comparative factorial study. This study’s factors include the choice among three C or C++ source-code techniques for array processing; the choice of Microsoft Windows 7 or a Linux operating system; the choice of 4-byte or 8-byte array elements and indexes; and the choice of 32-bit or 64-bit memory addressing. This study demonstrates how programmer choices can reduce runtimes by 75 percent or more, even after compiler optimizations. Ten points of practical advice for faster processing of two-dimensional arrays are offered to C and C++ programmers. Further study and the development of a C and C++ software test suite are recommended. Key words: array processing, C, C++, compiler, computational speed, land-change modeling, raster-graphic image, two-dimensional array, software efficiency
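One of the C idioms such studies benchmark, addressing a 2-D array through a flat buffer with row-major offset arithmetic (`a[i*ncols + j]`), can be mirrored in Python for illustration (our sketch, not the study's benchmark code):

```python
import numpy as np

nrows, ncols = 4, 5
a = np.arange(nrows * ncols, dtype=np.int64).reshape(nrows, ncols)

# C-style technique: walk a flat row-major buffer, a[i][j] == flat[i*ncols + j].
flat = a.ravel()
total = 0
for i in range(nrows):
    base = i * ncols              # hoist the row offset out of the inner loop
    for j in range(ncols):
        total += int(flat[base + j])

print(total == int(a.sum()))  # → True
```

Iterating rows in the outer loop keeps accesses sequential in memory (cache-friendly in C); swapping the loops would stride by `ncols` elements per access.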

  20. Sensory Processing Subtypes in Autism: Association with Adaptive Behavior

    Science.gov (United States)

    Lane, Alison E.; Young, Robyn L.; Baker, Amy E. Z.; Angley, Manya T.

    2010-01-01

    Children with autism are frequently observed to experience difficulties in sensory processing. This study examined specific patterns of sensory processing in 54 children with autistic disorder and their association with adaptive behavior. Model-based cluster analysis revealed three distinct sensory processing subtypes in autism. These subtypes…

  1. Functional Dual Adaptive Control with Recursive Gaussian Process Model

    International Nuclear Information System (INIS)

    Prüher, Jakub; Král, Ladislav

    2015-01-01

    The paper deals with the dual adaptive control problem, where the functional uncertainties in the system description are modelled by a non-parametric Gaussian process regression model. Current approaches to adaptive control based on Gaussian process models are severely limited in their practical applicability, because the model is re-adjusted using all the currently available data, which keeps growing with every time step. We propose the use of a recursive Gaussian process regression algorithm for a significant reduction in computational requirements, thus bringing Gaussian process-based adaptive controllers closer to practical applicability. In this work, we design a bi-criterial dual controller based on a recursive Gaussian process model for discrete-time stochastic dynamic systems given in an affine-in-control form. Using Monte Carlo simulations, we show that the proposed controller achieves performance comparable to the full Gaussian process-based controller in terms of control quality while keeping the computational demands bounded. (paper)

  2. Bone Adaptation as an Evolutionary Process

    DEFF Research Database (Denmark)

    Bagge, Mette

    1998-01-01

    . The memory of past loadings is included in the model to account for the delay in the bone response from the load changes. The remodeling rate equation is derived from the structural optimization task of maximizing the stiffness in each time step. Additional information can be extracted from the optimization...... process; the remodeling equilibrium parameter where no apposition or resorption takes place is defined as the element optimality conditions, and the optimal design is used as an initial design for the onset of the remodeling simulation. Some examples of bone adaptation resulting from load changes are given....

  3. Sampling phased array - a new technique for ultrasonic signal processing and imaging

    OpenAIRE

    Verkooijen, J.; Boulavinov, A.

    2008-01-01

    Over the past 10 years, the improvement in the field of microelectronics and computer engineering has led to significant advances in ultrasonic signal processing and image construction techniques that are currently being applied to non-destructive material evaluation. A new phased array technique, called 'Sampling Phased Array', has been developed in the Fraunhofer Institute for Non-Destructive Testing([1]). It realises a unique approach of measurement and processing of ultrasonic signals. Th...

  4. Sampling phased array, a new technique for ultrasonic signal processing and imaging now available to industry

    OpenAIRE

    Verkooijen, J.; Bulavinov, A.

    2008-01-01

    Over the past 10 years the improvement in the field of microelectronics and computer engineering has led to significant advances in ultrasonic signal processing and image construction techniques that are currently being applied to non-destructive material evaluation. A new phased array technique, called "Sampling Phased Array" has been developed in the Fraunhofer Institute for non-destructive testing [1]. It realizes a unique approach of measurement and processing of ultrasonic signals. The s...

  5. Studies of implosion processes of nested tungsten wire-array Z-pinch

    International Nuclear Information System (INIS)

    Ning Cheng; Ding Ning; Liu Quan; Yang Zhenhua

    2006-01-01

    A nested wire-array is a promising kind of structured load because it can improve the quality of the Z-pinch plasma and enhance the radiation power of the X-ray source. Based on the zero-dimensional model, the assumption of wire-array collision, and the criterion of an optimized load (maximal load kinetic energy), optimization of the typical nested wire-array as a load of the Z machine at Sandia Laboratory was carried out. It was shown that the load had been essentially optimized. The Z-pinch process of the typical load was numerically studied by means of a one-dimensional three-temperature radiation magneto-hydrodynamics (RMHD) code. The obtained results reproduce the dynamic process of the Z-pinch and show the implosion trajectory of the nested wire-array and the transfer of drive current between the inner and outer arrays. The experimental and computed X-ray pulses were compared, and it is suggested that the assumption of wire-array collision is reasonable in nested wire-array Z-pinches, at least for the current level of the Z machine. (authors)

  6. High speed vision processor with reconfigurable processing element array based on full-custom distributed memory

    Science.gov (United States)

    Chen, Zhe; Yang, Jie; Shi, Cong; Qin, Qi; Liu, Liyuan; Wu, Nanjian

    2016-04-01

    In this paper, a hybrid vision processor based on a compact full-custom distributed memory for near-sensor high-speed image processing is proposed. The proposed processor consists of a reconfigurable processing element (PE) array, a row processor (RP) array, and a dual-core microprocessor. The PE array includes two-dimensional processing elements with a compact full-custom distributed memory. It supports real-time reconfiguration between the PE array and a self-organizing map (SOM) neural network. The vision processor is fabricated using a 0.18 µm CMOS technology. The circuit area of the distributed memory is reduced markedly, to 1/3 of that of the conventional memory, so that the circuit area of the vision processor is reduced by 44.2%. Experimental results demonstrate that the proposed design achieves correct functions.

  7. Modeling of processes of an adaptive business management

    Directory of Open Access Journals (Sweden)

    Karev Dmitry Vladimirovich

    2011-04-01

    Full Text Available Based on an analysis of adaptive business management systems, an original version of a real adaptive management system is proposed, built on a dynamic recursive model of cash flow forecasts and real data. Definitions and the simulation of the scales and intervals of model time in the control system are proposed, as well as the observation thresholds and the conditions for changing (correcting) management decisions. The adaptive management process is illustrated with a business development scenario proposed by the author.

  8. Beamspace Adaptive Beamforming for Hydrodynamic Towed Array Self-Noise Cancellation

    National Research Council Canada - National Science Library

    Premus, Vincent

    2001-01-01

    ... against signal self-nulling associated with steering vector mismatch. Particular attention is paid to the definition of white noise gain as the metric that reflects the level of mainlobe adaptive nulling for an adaptive beamformer...

  9. Beamspace Adaptive Beamforming for Hydrodynamic Towed Array Self-Noise Cancellation

    National Research Council Canada - National Science Library

    Premus, Vincent

    2000-01-01

    ... against signal self-nulling associated with steering vector mismatch. Particular attention is paid to the definition of white noise gain as the metric that reflects the level of mainlobe adaptive nulling for an adaptive beamformer...

  10. Adapting the unified software development process for user interface development

    NARCIS (Netherlands)

    Obrenovic, Z.; Starcevic, D.

    2006-01-01

    In this paper we describe how existing software developing processes, such as Rational Unified Process, can be adapted in order to allow disciplined and more efficient development of user interfaces. The main objective of this paper is to demonstrate that standard modeling environments, based on the

  11. ADAPTIVE CONTEXT PROCESSING IN ON-LINE HANDWRITTEN CHARACTER RECOGNITION

    NARCIS (Netherlands)

    Iwayama, N.; Ishigaki, K.

    2004-01-01

    We propose a new approach to context processing in on-line handwritten character recognition (OLCR). Based on the observation that writers often repeat the strings that they input, we take the approach of adaptive context processing (ACP). In ACP, the strings input by a writer are automatically

  12. Adaptive smart simulator for characterization and MPPT construction of PV array

    International Nuclear Information System (INIS)

    Ouada, Mehdi; Meridjet, Mohamed Salah; Dib, Djalel

    2016-01-01

    Partial shading is among the most important problems in large photovoltaic arrays, and much of the literature addresses the modeling, control and optimization of photovoltaic solar energy conversion under partial shading conditions. The aim of this study is to build a software simulator, analogous to a hardware simulator, that produces a shading pattern of the proposed photovoltaic array, so that the delivered information can be used to obtain an optimal configuration of the PV array and to construct an MPPT algorithm. A graphical user interface (Matlab GUI) is built using a developed script; the tool is simple, easy to use and responsive, and the simulator supports large-array simulations that can be interfaced with MPPT and power electronic converters.

  13. Adaptive smart simulator for characterization and MPPT construction of PV array

    Science.gov (United States)

    Ouada, Mehdi; Meridjet, Mohamed Salah; Dib, Djalel

    2016-07-01

    Partial shading is among the most important problems in large photovoltaic arrays, and much of the literature addresses the modeling, control and optimization of photovoltaic solar energy conversion under partial shading conditions. The aim of this study is to build a software simulator, analogous to a hardware simulator, that produces a shading pattern of the proposed photovoltaic array, so that the delivered information can be used to obtain an optimal configuration of the PV array and to construct an MPPT algorithm. A graphical user interface (Matlab GUI) is built using a developed script; the tool is simple, easy to use and responsive, and the simulator supports large-array simulations that can be interfaced with MPPT and power electronic converters.

  14. Adaptive smart simulator for characterization and MPPT construction of PV array

    Energy Technology Data Exchange (ETDEWEB)

    Ouada, Mehdi, E-mail: mehdi.ouada@univ-annaba.org; Meridjet, Mohamed Salah [Electromechanical engineering department, Electromechanical engineering laboratory, Badji Mokhtar University, B.P. 12, Annaba (Algeria); Dib, Djalel [Department of Electrical Engineering, University of Tebessa, Tebessa (Algeria)

    2016-07-25

    Partial shading is among the most important problems in large photovoltaic arrays, and much of the literature addresses the modeling, control and optimization of photovoltaic solar energy conversion under partial shading conditions. The aim of this study is to build a software simulator, analogous to a hardware simulator, that produces a shading pattern of the proposed photovoltaic array, so that the delivered information can be used to obtain an optimal configuration of the PV array and to construct an MPPT algorithm. A graphical user interface (Matlab GUI) is built using a developed script; the tool is simple, easy to use and responsive, and the simulator supports large-array simulations that can be interfaced with MPPT and power electronic converters.

  15. Signal and array processing techniques for RFID readers

    Science.gov (United States)

    Wang, Jing; Amin, Moeness; Zhang, Yimin

    2006-05-01

    Radio Frequency Identification (RFID) has recently attracted much attention in both the technical and business communities. It has found wide applications in, for example, toll collection, supply-chain management, access control, localization tracking, real-time monitoring, and object identification. Situations may arise where the movement directions of the tagged RFID items through a portal is of interest and must be determined. Doppler estimation may prove complicated or impractical to perform by RFID readers. Several alternative approaches, including the use of an array of sensors with arbitrary geometry, can be applied. In this paper, we consider direction-of-arrival (DOA) estimation techniques for application to near-field narrowband RFID problems. Particularly, we examine the use of a pair of RFID antennas to track moving RFID tagged items through a portal. With two antennas, the near-field DOA estimation problem can be simplified to a far-field problem, yielding a simple way for identifying the direction of the tag movement, where only one parameter, the angle, needs to be considered. In this case, tracking of the moving direction of the tag simply amounts to computing the spatial cross-correlation between the data samples received at the two antennas. It is pointed out that the radiation patterns of the reader and tag antennas, particularly their phase characteristics, have a significant effect on the performance of DOA estimation. Indoor experiments are conducted in the Radar Imaging and RFID Labs at Villanova University for validating the proposed technique for target movement direction estimations.

  16. Improvement of resolution in full-view linear-array photoacoustic computed tomography using a novel adaptive weighting method

    Science.gov (United States)

    Omidi, Parsa; Diop, Mamadou; Carson, Jeffrey; Nasiriavanaki, Mohammadreza

    2017-03-01

    Linear-array-based photoacoustic computed tomography is a popular methodology for deep, high-resolution imaging. However, issues such as phase aberration, side-lobe effects, and propagation limitations deteriorate the resolution. The effect of phase aberration, due to acoustic attenuation and the assumption of a constant speed of sound (SoS), can be reduced by applying an adaptive weighting method such as the coherence factor (CF). Utilizing an adaptive beamforming algorithm such as minimum variance (MV) can improve the resolution at the focal point by suppressing the side-lobes. Moreover, the invisibility of directional objects emitting parallel to the detection plane, such as vessels and other absorbing structures stretched in the direction perpendicular to the detection plane, can degrade resolution. In this study, we propose a full-view, array-level weighting algorithm in which different weights are assigned to different positions of the linear array based on an orientation algorithm that uses the histogram of oriented gradients (HOG). Simulation results obtained from a synthetic phantom show the superior performance of the proposed method over existing reconstruction methods.

  17. Multi-mode sensor processing on a dynamically reconfigurable massively parallel processor array

    Science.gov (United States)

    Chen, Paul; Butts, Mike; Budlong, Brad; Wasson, Paul

    2008-04-01

    This paper introduces a novel computing architecture that can be reconfigured in real time to adapt on demand to multi-mode sensor platforms' dynamic computational and functional requirements. This 1 teraOPS reconfigurable Massively Parallel Processor Array (MPPA) has 336 32-bit processors. The programmable 32-bit communication fabric provides streamlined inter-processor connections with deterministically high performance. Software programmability, scalability, ease of use, and fast reconfiguration time (ranging from microseconds to milliseconds) are the most significant advantages over FPGAs and DSPs. This paper introduces the MPPA architecture, its programming model, and methods of reconfigurability. An MPPA platform for reconfigurable computing is based on a structural object programming model. Objects are software programs running concurrently on hundreds of 32-bit RISC processors and memories. They exchange data and control through a network of self-synchronizing channels. A common application design pattern on this platform, called a work farm, is a parallel set of worker objects, with one input and one output stream. Statically configured work farms with homogeneous and heterogeneous sets of workers have been used in video compression and decompression, network processing, and graphics applications.

  18. Explicit and implicit processes in behavioural adaptation to road width.

    Science.gov (United States)

    Lewis-Evans, Ben; Charlton, Samuel G

    2006-05-01

    The finding that drivers may react to safety interventions in a way that is contrary to what was intended is the phenomenon of behavioural adaptation. This phenomenon has been demonstrated across various safety interventions and has serious implications for road safety programs the world over. The present research used a driving simulator to assess behavioural adaptation in drivers' speed and lateral displacement in response to manipulations of road width. Of interest was whether behavioural adaptation would occur and whether we could determine whether it was the result of explicit, conscious decisions or implicit perceptual processes. The results supported an implicit, zero perceived risk model of behavioural adaptation with reduced speeds on a narrowed road accompanied by increased ratings of risk and a marked inability of the participants to identify that any change in road width had occurred.

  19. Process Development And Simulation For Cold Fabrication Of Doubly Curved Metal Plate By Using Line Array Roll Set

    International Nuclear Information System (INIS)

    Shim, D. S.; Jung, C. G.; Seong, D. Y.; Yang, D. Y.; Han, J. M.; Han, M. S.

    2007-01-01

    For effective manufacturing of a doubly curved sheet metal, a novel sheet metal forming process is proposed. The suggested process uses a Line Array Roll Set (LARS) composed of a pair of upper and lower roll assemblies in a symmetric manner. The process offers flexibility as compared with the conventional manufacturing processes, because it does not require any complex-shaped die and loss of material by blank-holding is minimized. LARS allows flexibility of the incremental forming process and adopts the principle of bending deformation, resulting in a slight deformation in thickness. Rolls composed of line array roll sets are divided into a driving roll row and two idle roll rows. The arrayed rolls in the central lines of the upper and lower roll assemblies are motor-driven so that they deform and transfer the sheet metal using friction between the rolls and the sheet metal. The remaining rolls are idle rolls, generating bending deformation with driving rolls. Furthermore, all the rolls are movable in any direction so that they are adaptable to any size or shape of the desired three-dimensional configuration. In the process, the sheet is deformed incrementally as deformation proceeds simultaneously in rolling and transverse directions step by step. Consequently, it can be applied to the fabrication of doubly curved ship hull plates by undergoing several passes. In this work, FEM simulations are carried out for verification of the proposed incremental forming system using the chosen design parameters. Based on the results of the simulation, the relationship between the roll set configuration and the curvature of a sheet metal is determined. The process information such as the forming loads and torques acting on every roll is analyzed as important data for the design and development of the manufacturing system

  20. A Readout Integrated Circuit (ROIC) employing self-adaptive background current compensation technique for Infrared Focal Plane Array (IRFPA)

    Science.gov (United States)

    Zhou, Tong; Zhao, Jian; He, Yong; Jiang, Bo; Su, Yan

    2018-05-01

    A novel self-adaptive background current compensation circuit for infrared focal plane arrays is proposed in this paper, which can compensate the background current generated under different conditions. The designed double-threshold detection strategy estimates and eliminates the background currents, which significantly reduces the hardware overhead and improves uniformity among pixels. In addition, the circuit is compatible with various categories of infrared thermo-sensitive materials. Testing results from a 4 × 4 experimental chip show that the proposed circuit achieves high precision, wide applicability and high intelligence. Tape-out of the 320 × 240 readout circuit, as well as the bonding, encapsulation and imaging verification of the uncooled infrared focal plane array, have also been completed.

  1. APD arrays and large-area APDs via a new planar process

    CERN Document Server

    Farrell, R; Vanderpuye, K; Grazioso, R; Myers, R; Entine, G

    2000-01-01

    A fabrication process has been developed which allows the beveled-edge type of avalanche photodiode (APD) to be made without the need for the artful bevel formation steps. This new process, applicable to both APD arrays and discrete detectors, greatly simplifies manufacture and should lead to significant cost reduction for such photodetectors. This is achieved through a simple innovation that allows isolation around the device or array pixel to be brought into the plane of the surface of the silicon wafer, hence a planar process. A description of the new process is presented along with performance data for a variety of APD device and array configurations. APD array pixel gains in excess of 10 000 have been measured. Array pixel coincidence timing resolution of less than 5 ns has been demonstrated. An energy resolution of 6% for 662 keV gamma-rays using a CsI(Tl) scintillator on a planar-processed large-area APD has been recorded. Discrete APDs with active areas up to 13 cm² have been operated.

  2. Assembly and Integration Process of the First High Density Detector Array for the Atacama Cosmology Telescope

    Science.gov (United States)

    Li, Yaqiong; Choi, Steve; Ho, Shuay-Pwu; Crowley, Kevin T.; Salatino, Maria; Simon, Sara M.; Staggs, Suzanne T.; Nati, Federico; Wollack, Edward J.

    2016-01-01

    The Advanced ACTPol (AdvACT) upgrade on the Atacama Cosmology Telescope (ACT) consists of multichroic Transition Edge Sensor (TES) detector arrays to measure the Cosmic Microwave Background (CMB) polarization anisotropies in multiple frequency bands. The first AdvACT detector array, sensitive to both 150 and 230 GHz, is fabricated on a 150 mm diameter wafer and read out with a completely different scheme compared to ACTPol. Approximately 2000 TES bolometers are packed into the wafer, leading to both a much denser detector density and readout circuitry. The demonstration of the assembly and integration of the AdvACT arrays is important for the next generation of CMB experiments, which will continue to increase the pixel number and density. We present the detailed assembly process of the first AdvACT detector array.

  3. Research in adaptive management: working relations and the research process.

    Science.gov (United States)

    Amanda C. Graham; Linda E. Kruger

    2002-01-01

    This report analyzes how a small group of Forest Service scientists participating in efforts to implement adaptive management approach working relations, and how they understand and apply the research process. Nine scientists completed a questionnaire to assess their preferred mode of thinking (the Herrmann Brain Dominance Instrument), engaged in a facilitated...

  4. Dissociating Face Identity and Facial Expression Processing Via Visual Adaptation

    Directory of Open Access Journals (Sweden)

    Hong Xu

    2012-10-01

    Full Text Available Face identity and facial expression are processed in two distinct neural pathways. However, most of the existing face adaptation literature studies them separately, despite the fact that they are two aspects from the same face. The current study conducted a systematic comparison between these two aspects by face adaptation, investigating how top- and bottom-half face parts contribute to the processing of face identity and facial expression. A real face (sad, “Adam” and its two size-equivalent face parts (top- and bottom-half were used as the adaptor in separate conditions. For face identity adaptation, the test stimuli were generated by morphing Adam's sad face with another person's sad face (“Sam”. For facial expression adaptation, the test stimuli were created by morphing Adam's sad face with his neutral face and morphing the neutral face with his happy face. In each trial, after exposure to the adaptor, observers indicated the perceived face identity or facial expression of the following test face via a key press. They were also tested in a baseline condition without adaptation. Results show that the top- and bottom-half face each generated a significant face identity aftereffect. However, the aftereffect by top-half face adaptation is much larger than that by the bottom-half face. On the contrary, only the bottom-half face generated a significant facial expression aftereffect. This dissociation of top- and bottom-half face adaptation suggests that face parts play different roles in face identity and facial expression. It thus provides further evidence for the distributed systems of face perception.

  5. Adaptive Moving Object Tracking Integrating Neural Networks And Intelligent Processing

    Science.gov (United States)

    Lee, James S. J.; Nguyen, Dziem D.; Lin, C.

    1989-03-01

    A real-time adaptive scheme is introduced to detect and track moving objects under noisy, dynamic conditions including moving sensors. This approach integrates the adaptiveness and incremental learning characteristics of neural networks with intelligent reasoning and process control. Spatiotemporal filtering is used to detect and analyze motion, exploiting the speed and accuracy of multiresolution processing. A neural network algorithm constitutes the basic computational structure for classification. A recognition and learning controller guides the on-line training of the network, and invokes pattern recognition to determine processing parameters dynamically and to verify detection results. A tracking controller acts as the central control unit, so that tracking goals direct the over-all system. Performance is benchmarked against the Widrow-Hoff algorithm, for target detection scenarios presented in diverse FLIR image sequences. Efficient algorithm design ensures that this recognition and control scheme, implemented in software and commercially available image processing hardware, meets the real-time requirements of tracking applications.

  6. Adaptation.

    Science.gov (United States)

    Broom, Donald M

    2006-01-01

    The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms varies. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and

  7. A FPGA-based signal processing unit for a GEM array detector

    International Nuclear Information System (INIS)

    Yen, W.W.; Chou, H.P.

    2013-06-01

    In the present study, a signal processing unit for a GEM one-dimensional array detector is presented to measure the trajectory of photoelectrons produced by cosmic X-rays. The present GEM array detector system has 16 signal channels. The front-end unit provides timing signals from trigger units and energy signals from charge-sensitive amplifiers. The prototype of the processing unit is implemented using commercial field programmable gate array circuit boards. The FPGA-based system is linked to a personal computer for testing and data analysis. Tests using simulated signals indicated that the FPGA-based signal processing unit has good linearity and is flexible for parameter adjustment under various experimental conditions (authors)

  8. Characterization of diffusivity based on spherical array processing

    DEFF Research Database (Denmark)

    Nolan, Melanie; Fernandez Grande, Efren; Jeong, Cheol-Ho

    2015-01-01

    -dimensional domain and consequently examine some of its fundamental properties: spatial distribution of sound pressure levels, particle velocity and sound intensity. The study allows for visualization of the intensity field inside a reverberant space, and successfully illustrates the behavior of the sound field...... in such an environment. This initial investigation shows the validity of the suggested processing and reveals interesting perspectives for future work. Ultimately, the aim is to define a proper and reliable measure of the diffuse sound field conditions in a reverberation chamber, with the prospect of improving...

  9. A multi-step electrochemical etching process for a three-dimensional micro probe array

    International Nuclear Information System (INIS)

    Kim, Yoonji; Youn, Sechan; Cho, Young-Ho; Park, HoJoon; Chang, Byeung Gyu; Oh, Yong Soo

    2011-01-01

    We present a simple, fast, and cost-effective process for three-dimensional (3D) micro probe array fabrication using multi-step electrochemical metal foil etching. Compared to the previous electroplating (add-on) process, the present electrochemical (subtractive) process results in well-controlled material properties of the metallic microstructures. In the experimental study, we describe the single-step and multi-step electrochemical aluminum foil etching processes. In the single-step process, the depth etch rate and the bias etch rate of an aluminum foil have been measured as 1.50 ± 0.10 and 0.77 ± 0.03 µm min⁻¹, respectively. On the basis of the single-step process results, we have designed and performed the two-step electrochemical etching process for the 3D micro probe array fabrication. The fabricated 3D micro probe array shows vertical and lateral fabrication errors of 15.5 ± 5.8% and 3.3 ± 0.9%, respectively, with a surface roughness of 37.4 ± 9.6 nm. The contact force and the contact resistance of the 3D micro probe array have been measured to be 24.30 ± 0.98 mN and 2.27 ± 0.11 Ω, respectively, for an overdrive of 49.12 ± 1.25 µm.

  10. DBPM signal processing with field programmable gate arrays

    International Nuclear Information System (INIS)

    Lai Longwei; Yi Xing; Zhang Ning; Yang Guisen; Wang Baopeng; Xiong Yun; Leng Yongbin; Yan Yingbing

    2011-01-01

    DBPM system performance is determined by the design and implementation of the beam position signal processing algorithm. To develop the system, a beam position signal processing algorithm was implemented on an FPGA. The hardware is a PMC board, ICS-1554A-002 (GE Corp.), with an FPGA chip XC5VSX95T. This paper adopts quadrature mixing to down-convert the high-frequency signal to baseband. Unlike the conventional method, the mixing is implemented with the CORDIC algorithm. The algorithm theory and implementation details are discussed in this paper. As the board contains no front-end gain controller, this paper introduces a published patent-pending technique that has been adopted to realize this function in digital logic. The whole design is implemented in VHDL. An on-line evaluation has been carried out on the SSRF (Shanghai Synchrotron Radiation Facility) storage ring. Results indicate that the system's turn-by-turn data accurately track real beam motion, and the system resolution is 1.1 μm. (authors)
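
As background to the CORDIC-based mixing described above, here is a minimal software sketch (the record's actual implementation is VHDL on an FPGA; all parameters here are illustrative): rotation-mode CORDIC replaces the quadrature mixer's multipliers with shift-and-add iterations.

```python
import math

def cordic_rotate(x, y, angle, iters=16):
    """Rotate vector (x, y) by `angle` radians using shift-add CORDIC
    iterations (rotation mode), as an FPGA would with no hardware
    multipliers. Returns the gain-compensated (x', y')."""
    angle = math.remainder(angle, 2.0 * math.pi)   # wrap into [-pi, pi]
    if angle > math.pi / 2:                        # fold into [-pi/2, pi/2]
        x, y, angle = -x, -y, angle - math.pi      # where CORDIC converges
    elif angle < -math.pi / 2:
        x, y, angle = -x, -y, angle + math.pi
    z = angle
    for i in range(iters):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2**-i, y + d * x * 2**-i
        z -= d * math.atan(2**-i)
    k = 1.0                                        # compensate CORDIC gain
    for i in range(iters):
        k /= math.sqrt(1 + 2**(-2 * i))
    return x * k, y * k

def mix_to_baseband(samples, omega):
    """Down-convert a real IF sample stream to baseband I/Q by rotating
    each sample through the negated local-oscillator phase."""
    return [cordic_rotate(s, 0.0, -omega * n) for n, s in enumerate(samples)]
```

Each sample is rotated instead of multiplied, so only a small table of arctangent constants is needed in hardware.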

  11. High-resolution imaging methods in array signal processing

    DEFF Research Database (Denmark)

    Xenaki, Angeliki

    in active sonar signal processing for detection and imaging of submerged oil contamination in sea water from a deep-water oil leak. The submerged oil field is modeled as a fluid medium exhibiting spatial perturbations in the acoustic parameters from their mean ambient values, which cause weak scattering of the incident acoustic energy. A high-frequency active sonar is selected to insonify the medium and receive the backscattered waves. High-frequency acoustic methods can both overcome the optical opacity of water (unlike methods based on electromagnetic waves) and resolve the small-scale structure of the submerged oil field (unlike low-frequency acoustic methods). The study shows that high-frequency acoustic methods are suitable not only for large-scale localization of the oil contamination in the water column but also for statistical characterization of the submerged oil field through inference…

  12. Data array acquisition and joint processing in local plasma spectroscopy

    International Nuclear Information System (INIS)

    Ekimov, K.; Luizova, L.; Soloviev, A.; Khakhaev, A.

    2005-01-01

    The setup and software for optical emission spectroscopy with spatial and temporal resolution were developed. The automated installation includes LabView-compatible instrument interfaces. The joint data-processing algorithm is based on the principal component method; it increases the stability of the radial-transform results and eliminates instrument distortion in the presence of noise. The system is applied to diagnostics of the arc discharge in mercury vapor with the addition of thallium. The distributions of ground-state and excited mercury atoms, excited thallium atoms and electron density over the arc cross section have been measured on the basis of analysis of spectral line shapes. The Saha balance between the electron density and the densities of high-lying excited states was checked. An unexpected broadening of some thallium spectral lines was found.

  13. Adaptation of the Biolog Phenotype MicroArray™ Technology to Profile the Obligate Anaerobe Geobacter metallireducens

    Energy Technology Data Exchange (ETDEWEB)

    Joyner, Dominique; Fortney, Julian; Chakraborty, Romy; Hazen, Terry

    2010-05-17

    The Biolog OmniLog® Phenotype MicroArray (PM) plate technology was successfully adapted to generate a select phenotypic profile of the strict anaerobe Geobacter metallireducens (G.m.). The profile generated for G.m. provides insight into the chemical sensitivity of the organism as well as some of its metabolic capabilities when grown with a basal medium containing acetate and Fe(III). The PM technology was developed for aerobic organisms. The reduction of a tetrazolium dye by the test organism represents metabolic activity on the array, which is detected and measured by the OmniLog® system. We have previously adapted the technology for the anaerobic sulfate-reducing bacterium Desulfovibrio vulgaris. In this work, we have taken the technology a step further by adapting it for the iron-reducing obligate anaerobe Geobacter metallireducens. In an osmotic stress microarray it was determined that the organism has higher sensitivity to the impermeable solutes 3–6% KCl and 2–5% NaNO3, which stress the cell by osmosis, than to permeable non-ionic solutes, represented by 5–20% ethylene glycol and 2–3% urea. The osmotic stress microarray also includes an array of osmoprotectants and precursor molecules that were screened to identify substrates that would provide osmotic protection against NaCl stress. None of the substrates tested conferred resistance to elevated concentrations of salt. Verification studies in which G.m. was grown in defined medium amended with 100 mM NaCl (MIC) and the common osmoprotectants betaine, glycine and proline supported the PM findings. Further verification was done by analysis of transcriptomic profiles of G.m. grown under 100 mM NaCl stress, which revealed up-regulation of genes related to degradation rather than accumulation of the above-mentioned osmoprotectants. The phenotypic profile, supported by additional analysis, indicates that the accumulation of these osmoprotectants as a response to salt stress does not

  14. Numerical Simulation of the Diffusion Processes in Nanoelectrode Arrays Using an Axial Neighbor Symmetry Approximation.

    Science.gov (United States)

    Peinetti, Ana Sol; Gilardoni, Rodrigo S; Mizrahi, Martín; Requejo, Felix G; González, Graciela A; Battaglini, Fernando

    2016-06-07

    Nanoelectrode arrays have introduced a complete new battery of devices with fascinating electrocatalytic, sensitivity, and selectivity properties. To understand and predict the electrochemical response of these arrays, a theoretical framework is needed. Cyclic voltammetry is a well-suited experimental technique for understanding the underlying diffusion and kinetic processes. Previous works describing microelectrode arrays have exploited the interelectrode distance to simulate array behavior as the summation of individual electrodes. This approach becomes limited when the size of the electrodes decreases to the nanometer scale, due to their strong radial effect and the consequent overlapping of the diffusional fields. In this work, we present a computational model able to simulate the electrochemical behavior of arrays working either as the summation of individual electrodes or as affected by the overlapping of the diffusional fields, without prior assumptions. Our computational model relies on dividing a regular electrode array into cells. In each cell, there is a central electrode surrounded by neighbor electrodes; these neighbor electrodes are transformed into a ring maintaining the same active electrode area as the sum of the closest neighbor electrodes. Using this axial neighbor symmetry approximation, the problem acquires cylindrical symmetry and becomes applicable to any diffusion pattern. The model is validated against micro- and nanoelectrode arrays, showing its ability to predict their behavior and therefore to be used as a design tool.

  15. Frequency Adaptability and Waveform Design for OFDM Radar Space-Time Adaptive Processing

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Satyabrata [ORNL; Glover, Charles Wayne [ORNL

    2012-01-01

    We propose an adaptive waveform design technique for an orthogonal frequency division multiplexing (OFDM) radar signal employing a space-time adaptive processing (STAP) technique. We observe that there are inherent variabilities in the target and interference responses in the frequency domain. The use of an OFDM signal can therefore not only increase the frequency diversity of the system, but also improve target detectability by adaptively modifying the OFDM coefficients to exploit the frequency variabilities of the scenario. First, we formulate a realistic OFDM-STAP measurement model considering the sparse nature of the target and interference spectra in the spatio-temporal domain. Then, we show that the optimal STAP filter weight vector is the generalized eigenvector corresponding to the minimum generalized eigenvalue of the interference and target covariance matrices. With numerical examples we demonstrate that the resultant OFDM-STAP filter weights adapt to the frequency variabilities of the target and interference responses, in addition to the spatio-temporal variabilities. Hence, by better utilizing the frequency variabilities, we propose an adaptive OFDM waveform design technique and achieve a significant STAP performance improvement.
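
The weight-vector characterization in this record (optimal weights as the generalized eigenvector for the minimum generalized eigenvalue of the interference and target covariances, equivalently the maximum-SINR eigenvector of the reversed pair) can be sketched with numpy alone via Cholesky whitening. The 4-channel covariances below are synthetic stand-ins, not the paper's model:

```python
import numpy as np

def max_sinr_weights(R_t, R_i):
    """Return the weight vector maximizing the generalized Rayleigh
    quotient w^H R_t w / w^H R_i w, i.e. the generalized eigenvector
    of (R_t, R_i) with the largest generalized eigenvalue.  Cholesky
    whitening keeps the computation numpy-only."""
    L = np.linalg.cholesky(R_i)              # R_i = L L^H
    Linv = np.linalg.inv(L)
    M = Linv @ R_t @ Linv.conj().T           # whitened target covariance
    _, vecs = np.linalg.eigh(M)              # eigenvalues ascending
    w = Linv.conj().T @ vecs[:, -1]          # de-whiten dominant vector
    return w / np.linalg.norm(w)

# Hypothetical 4-channel example (not from the paper): a rank-one
# target covariance along steering vector `a`, and interference made
# of white noise plus a strong jammer along `j`.
n = np.arange(4)
a = np.exp(1j * np.pi * 0.3 * n)
j = np.exp(1j * np.pi * 0.8 * n)
R_t = np.outer(a, a.conj())
R_i = np.eye(4) + 10.0 * np.outer(j, j.conj())
w = max_sinr_weights(R_t, R_i)
```

Substituting w = L^{-H} v turns the generalized problem into an ordinary Hermitian eigenproblem for the whitened matrix M, which is why a plain `eigh` suffices.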

  16. Accelerating adaptive inverse distance weighting interpolation algorithm on a graphics processing unit.

    Science.gov (United States)

    Mei, Gang; Xu, Liangliang; Xu, Nengxiong

    2017-09-01

    This paper focuses on designing and implementing parallel adaptive inverse distance weighting (AIDW) interpolation algorithms by using the graphics processing unit (GPU). The AIDW is an improved version of the standard IDW, which can adaptively determine the power parameter according to the data points' spatial distribution pattern and achieve more accurate predictions than those predicted by IDW. In this paper, we first present two versions of the GPU-accelerated AIDW, i.e. the naive version without profiting from the shared memory and the tiled version taking advantage of the shared memory. We also implement the naive version and the tiled version using two data layouts, structure of arrays and array of aligned structures, on both single and double precision. We then evaluate the performance of parallel AIDW by comparing it with its corresponding serial algorithm on three different machines equipped with the GPUs GT730M, M5000 and K40c. The experimental results indicate that: (i) there is no significant difference in the computational efficiency when different data layouts are employed; (ii) the tiled version is always slightly faster than the naive version; and (iii) on single precision the achieved speed-up can be up to 763 (on the GPU M5000), while on double precision the obtained highest speed-up is 197 (on the GPU K40c). To benefit the community, all source code and testing data related to the presented parallel AIDW algorithm are publicly available.
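
A serial sketch may clarify the adaptive idea in this record: the interpolation is ordinary IDW, but the power parameter is chosen per query point from how densely samples surround it. The density-to-power mapping below is an illustrative assumption, not the exact rule of the cited algorithm (which the paper accelerates on a GPU).

```python
import numpy as np

def aidw_interpolate(xy, z, query, alpha_min=1.0, alpha_max=5.0):
    """Adaptive IDW sketch: sparse neighbourhoods get a low power
    (smoother surface), dense neighbourhoods a high power (more local
    influence).  The density measure here is an assumption."""
    d = np.linalg.norm(xy - query, axis=1)
    if np.any(d < 1e-12):                  # query coincides with a sample
        return float(z[np.argmin(d)])
    # Local density: mean distance to the 4 nearest samples relative
    # to the mean distance over all samples.
    k = min(4, len(d))
    local = np.mean(np.sort(d)[:k])
    density = np.clip(1.0 - local / np.mean(d), 0.0, 1.0)
    alpha = alpha_min + (alpha_max - alpha_min) * density
    w = d ** -alpha                        # inverse-distance weights
    return float(np.sum(w * z) / np.sum(w))
```

The GPU versions in the paper parallelize exactly this per-query computation, one thread per prediction point.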

  17. Multi-Model Adaptive Fuzzy Controller for a CSTR Process

    Directory of Open Access Journals (Sweden)

    Shubham Gogoria

    2015-09-01

    Continuous stirred tank reactors are intensively used to control exothermic reactions in chemical industries. The CSTR is a complex multi-variable system with non-linear characteristics. This paper deals with linearization of the mathematical model of a CSTR process. A multi-model adaptive fuzzy controller has been designed to control the reactor concentration and temperature of the CSTR process. This method combines the outputs of multiple fuzzy controllers, each operated at a different operating point. The proposed solution is a straightforward implementation of a fuzzy controller with a gain scheduler to control the parameters of a highly non-linear process.
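
The output-combination idea can be sketched with a membership-weighted blend of local controllers, a plain gain-scheduling stand-in for the paper's fuzzy controller bank (the triangular memberships and toy control laws below are assumptions):

```python
def tri_membership(x, center, width):
    """Triangular membership of operating variable x around a local
    model's design point."""
    return max(0.0, 1.0 - abs(x - center) / width)

def blended_control(x, controllers, centers, width=1.0):
    """Multi-model control sketch: each local controller is designed
    at one operating point; the applied control is the membership-
    weighted average of the local outputs."""
    w = [tri_membership(x, c, width) for c in centers]
    s = sum(w)
    if s == 0.0:                       # outside all regions: nearest model
        i = min(range(len(centers)), key=lambda i: abs(x - centers[i]))
        return controllers[i](x)
    return sum(wi * ctrl(x) for wi, ctrl in zip(w, controllers)) / s
```

Between two operating points the control law interpolates smoothly, which is the practical benefit of combining local controllers rather than switching between them.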

  18. Adoption: biological and social processes linked to adaptation.

    Science.gov (United States)

    Grotevant, Harold D; McDermott, Jennifer M

    2014-01-01

    Children join adoptive families through domestic adoption from the public child welfare system, infant adoption through private agencies, and international adoption. Each pathway presents distinctive developmental opportunities and challenges. Adopted children are at higher risk than the general population for problems with adaptation, especially externalizing, internalizing, and attention problems. This review moves beyond the field's emphasis on adoptee-nonadoptee differences to highlight biological and social processes that affect adaptation of adoptees across time. The experience of stress, whether prenatal, postnatal/preadoption, or during the adoption transition, can have significant impacts on the developing neuroendocrine system. These effects can contribute to problems with physical growth, brain development, and sleep, activating cascading effects on social, emotional, and cognitive development. Family processes involving contact between adoptive and birth family members, co-parenting in gay and lesbian adoptive families, and racial socialization in transracially adoptive families affect social development of adopted children into adulthood.

  19. Assessment of low-cost manufacturing process sequences. [photovoltaic solar arrays

    Science.gov (United States)

    Chamberlain, R. G.

    1979-01-01

    An extensive research and development activity to reduce the cost of manufacturing photovoltaic solar arrays by a factor of approximately one hundred is discussed. Proposed and actual manufacturing process descriptions were compared to manufacturing costs. An overview of this methodology is presented.

  20. Assessment of Measurement Distortions in GNSS Antenna Array Space-Time Processing

    Directory of Open Access Journals (Sweden)

    Thyagaraja Marathe

    2016-01-01

    Antenna array processing techniques are studied in GNSS as effective tools to mitigate interference in the spatial and spatiotemporal domains. However, without specific considerations, the array processing results in biases and distortions in the cross-ambiguity function (CAF) of the ranging codes. In space-time processing (STP), CAF distortion can arise from the combined effect of the space-time filtering itself and unintentional signal attenuation by the filters. This paper focuses on characterizing these degradations for different controlled signal scenarios and for live data from an antenna array. The antenna array simulation method introduced in this paper enables accurate analyses in the field of STP. The effects of the relative placement of the interference source with respect to the desired signal direction are shown using overall measurement errors and the profile of the signal strength. Analyses of the contributions from each source of distortion are conducted individually and collectively. Effects of distortions on GNSS pseudorange errors and position errors are compared for blind, semi-distortionless, and distortionless beamforming methods. The results of this characterization can be useful for designing low-distortion filters, which are especially important for high-accuracy GNSS applications in challenging environments.
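
The distortionless beamforming compared in this record can be illustrated with the textbook MVDR solution; the 4-element array, steering vectors and jammer power below are hypothetical stand-ins, not values from the paper.

```python
import numpy as np

def mvdr_weights(R, a):
    """Minimum-variance distortionless-response weights: minimize
    w^H R w subject to w^H a = 1, giving
    w = R^{-1} a / (a^H R^{-1} a).  The constraint preserves unit
    gain toward the desired signal (hence 'distortionless') while
    the minimization suppresses interference."""
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / np.vdot(a, Ri_a)

# Hypothetical 4-element array: desired signal from broadside, one
# strong jammer from another direction.
n = np.arange(4)
a = np.exp(1j * np.pi * 0.0 * n)          # assumed desired steering
j = np.exp(1j * np.pi * 0.6 * n)          # assumed jammer steering
R = np.eye(4) + 100.0 * np.outer(j, j.conj())
w = mvdr_weights(R, a)
```

A blind beamformer has no such constraint and may attenuate the desired code itself, which is one source of the CAF distortion the paper measures.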

  1. Solution processed bismuth sulfide nanowire array core/silver sulfide shell solar cells

    NARCIS (Netherlands)

    Cao, Y.; Bernechea, M.; Maclachlan, A.; Zardetto, V.; Creatore, M.; Haque, S.A.; Konstantatos, G.

    2015-01-01

    Low-bandgap inorganic semiconductor nanowires have served as building blocks in solution-processed solar cells to improve their power conversion capacity and reduce fabrication cost. In this work, we report, for the first time, bismuth sulfide nanowire arrays grown from colloidal seeds on a transparent

  2. Increasing the specificity and function of DNA microarrays by processing arrays at different stringencies

    DEFF Research Database (Denmark)

    Dufva, Martin; Petersen, Jesper; Poulsen, Lena

    2009-01-01

    DNA microarrays have for a decade been the only platform for genome-wide analysis and have provided a wealth of information about living organisms. DNA microarrays are processed today under one condition only, which puts large demands on assay development because all probes on the array need to f...

  3. Solution-processed single-wall carbon nanotube transistor arrays for wearable display backplanes

    Directory of Open Access Journals (Sweden)

    Byeong-Cheol Kang

    2018-01-01

    In this paper, we demonstrate solution-processed single-wall carbon nanotube thin-film transistor (SWCNT-TFT) arrays with polymeric gate dielectrics on polymeric substrates for wearable display backplanes, which can be directly attached to the human body. The optimized SWCNT-TFTs, without any buffer layer on the flexible substrates, exhibit a linear field-effect mobility of 1.5 cm²/V·s and a threshold voltage of around 0 V. The statistics of the key device metrics extracted from 35 SWCNT-TFTs fabricated in different batches at different times conclusively support the successful demonstration of high-performance solution-processed SWCNT-TFT arrays with the excellent device-to-device uniformity such backplanes demand. We also investigate the operational stability of the wearable SWCNT-TFT arrays against applied strains of up to 40%, covering the harsh strains expected on the human body. We believe that the demonstration of flexible SWCNT-TFT arrays, fabricated entirely by solution processing except for the deposition of the metal electrodes at process temperatures below 130 °C, can open up new routes for wearable display backplanes.

  4. Astronomical Data Processing Using SciQL, an SQL Based Query Language for Array Data

    Science.gov (United States)

    Zhang, Y.; Scheers, B.; Kersten, M.; Ivanova, M.; Nes, N.

    2012-09-01

    SciQL (pronounced as ‘cycle’) is a novel SQL-based array query language for scientific applications with both tables and arrays as first class citizens. SciQL lowers the entrance fee of adopting relational DBMS (RDBMS) in scientific domains, because it includes functionality often only found in mathematics software packages. In this paper, we demonstrate the usefulness of SciQL for astronomical data processing using examples from the Transient Key Project of the LOFAR radio telescope. In particular, how the LOFAR light-curve database of all detected sources can be constructed, by correlating sources across the spatial, frequency, time and polarisation domains.

  5. Frequency Diverse Array Radar Signal Processing via Space-Range-Doppler Focus (SRDF) Method

    Directory of Open Access Journals (Sweden)

    Chen Xiaolong

    2018-04-01

    To meet the urgent demand for low-observable moving target detection in complex environments, a novel Frequency Diverse Array (FDA) radar signal processing method based on Space-Range-Doppler Focusing (SRDF) is proposed in this paper. The current development status of FDA radar, the design of the array structure, beamforming, and joint estimation of range and angle are systematically reviewed. The extra degrees of freedom (DOFs) provided by FDA radar are fully utilized: those of the transmitted waveform, the locations of the array elements, the coupling of beam azimuth and range, and the long dwell time, which together constitute DOFs in the joint spatial (angle, range) and frequency (Doppler) dimensions. Simulation results show that the proposed method has the potential to improve target detection and parameter estimation for weak moving targets in complex environments and has broad application prospects in clutter and interference suppression, moving-target refinement, etc.

  6. Processing and display of three-dimensional arrays of numerical data using octree encoding

    International Nuclear Information System (INIS)

    Amans, J.L.; Antoine, M.; Darier, P.

    1986-04-01

    The analysis of three-dimensional (3-D) arrays of numerical data from medical, industrial or scientific imaging, by synthetic generation of realistic images, has been widely developed. Octree encoding, which organizes volume data in a hierarchical tree structure, has some interesting features for the processing of 3-D data arrays. The Octree encoding method, based on recursive subdivision of a 3-D array, is the three-dimensional extension of Quadtree encoding in the two-dimensional plane. We have developed a software package to validate the basic Octree encoding methodology for manipulation and display operations on volume data. This contribution introduces the 'overlay technique' we have used to project an Octree onto a Quadtree-encoded image plane. The application of this technique to hidden-surface display is presented.
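
The recursive subdivision at the heart of octree encoding is easy to sketch. Assuming a cubic volume whose side is a power of two (an assumption of this sketch, not a statement about the cited software), a block collapses to a leaf when all of its voxels agree and otherwise splits into eight octants:

```python
import numpy as np

def octree_encode(vol):
    """Encode a cubic 3-D array (side a power of two) as an octree:
    ('L', value) for a homogeneous block, ('N', [8 children]) for a
    split block -- the 3-D analogue of the quadtree."""
    first = vol.flat[0]
    if np.all(vol == first):
        return ('L', first)
    h = vol.shape[0] // 2
    children = [octree_encode(vol[x:x+h, y:y+h, z:z+h])
                for x in (0, h) for y in (0, h) for z in (0, h)]
    return ('N', children)

def octree_decode(node, size):
    """Rebuild the dense array from its octree encoding."""
    if node[0] == 'L':
        return np.full((size, size, size), node[1])
    h = size // 2
    out = np.empty((size, size, size))
    it = iter(node[1])
    for x in (0, h):
        for y in (0, h):
            for z in (0, h):
                out[x:x+h, y:y+h, z:z+h] = octree_decode(next(it), h)
    return out
```

Homogeneous regions, common in segmented medical volumes, collapse into single leaves, which is what makes the representation compact for manipulation and display.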

  7. Adaptive processes drive ecomorphological convergent evolution in antwrens (Thamnophilidae).

    Science.gov (United States)

    Bravo, Gustavo A; Remsen, J V; Brumfield, Robb T

    2014-10-01

    Phylogenetic niche conservatism (PNC) and convergence are contrasting evolutionary patterns that describe phenotypic similarity across independent lineages. Assessing whether and how adaptive processes give origin to these patterns represent a fundamental step toward understanding phenotypic evolution. Phylogenetic model-based approaches offer the opportunity not only to distinguish between PNC and convergence, but also to determine the extent that adaptive processes explain phenotypic similarity. The Myrmotherula complex in the Neotropical family Thamnophilidae is a polyphyletic group of sexually dimorphic small insectivorous forest birds that are relatively homogeneous in size and shape. Here, we integrate a comprehensive species-level molecular phylogeny of the Myrmotherula complex with morphometric and ecological data within a comparative framework to test whether phenotypic similarity is described by a pattern of PNC or convergence, and to identify evolutionary mechanisms underlying body size and shape evolution. We show that antwrens in the Myrmotherula complex represent distantly related clades that exhibit adaptive convergent evolution in body size and divergent evolution in body shape. Phenotypic similarity in the group is primarily driven by their tendency to converge toward smaller body sizes. Differences in body size and shape across lineages are associated to ecological and behavioral factors. © 2014 The Author(s). Evolution © 2014 The Society for the Study of Evolution.

  8. Adaptive Dynamic Process Scheduling on Distributed Memory Parallel Computers

    Directory of Open Access Journals (Sweden)

    Wei Shu

    1994-01-01

    One of the challenges in programming distributed memory parallel machines is deciding how to allocate work to processors. This problem is particularly important for computations with unpredictable dynamic behaviors or irregular structures. We present a scheme for dynamic scheduling of medium-grained processes that is useful in this context. The adaptive contracting within neighborhood (ACWN) scheme is dynamic, distributed, load-dependent, and scalable. It deals with dynamic and unpredictable creation of processes and adapts to different systems. The scheme is described and contrasted with two other schemes that have been proposed in this context, namely randomized allocation and the gradient model. The performance of the three schemes on an Intel iPSC/2 hypercube is presented and analyzed. The experimental results show that even though the ACWN algorithm incurs somewhat larger overhead than randomized allocation, it achieves better performance in most cases due to its adaptiveness. Its feature of quickly spreading the work helps it outperform the gradient model in performance and scalability.
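
A toy, synchronous rendition of the neighborhood idea may help (the real ACWN is an asynchronous, message-driven protocol; the ring topology, threshold, and one-task transfers below are illustrative assumptions):

```python
def balance_step(load, neighbors, threshold=2):
    """One toy round of neighborhood load balancing in the spirit of
    ACWN: every node compares its load with its least-loaded
    neighbor and transfers one task if the gap exceeds a threshold.
    `load` is a list of task counts; `neighbors[i]` lists node i's
    neighbor indices.  Mutates and returns `load`."""
    moves = []
    for i, nbrs in enumerate(neighbors):
        j = min(nbrs, key=lambda k: load[k])   # least-loaded neighbor
        if load[i] - load[j] > threshold:
            moves.append((i, j))
    for i, j in moves:                         # apply transfers together
        load[i] -= 1
        load[j] += 1
    return load
```

Repeated rounds spread a load spike outward through the neighborhood graph, which is the "quickly spreading the work" behavior the abstract credits for ACWN's performance.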

  9. The CHARA array adaptive optics I: common-path optical and mechanical design, and preliminary on-sky results

    Science.gov (United States)

    Che, Xiao; Sturmann, Laszlo; Monnier, John D.; ten Brummelaar, Theo A.; Sturmann, Judit; Ridgway, Stephen T.; Ireland, Michael J.; Turner, Nils H.; McAlister, Harold A.

    2014-07-01

    The CHARA array is an optical interferometer with six 1-meter diameter telescopes, providing baselines from 33 to 331 meters. With sub-milliarcsecond angular resolution, its versatile visible and near infrared combiners offer a unique angle of studying nearby stellar systems by spatially resolving their detailed structures. To improve the sensitivity and scientific throughput, the CHARA array was funded by NSF-ATI in 2011 to install adaptive optics (AO) systems on all six telescopes. The initial grant covers Phase I of the AO systems, which includes on-telescope Wavefront Sensors (WFS) and non-common-path (NCP) error correction. Meanwhile we are seeking funding for Phase II which will add large Deformable Mirrors on telescopes to close the full AO loop. The corrections of NCP error and static aberrations in the optical system beyond the WFS are described in the second paper of this series. This paper describes the design of the common-path optical system and the on-telescope WFS, and shows the on-sky commissioning results.

  10. Seismic array processing and computational infrastructure for improved monitoring of Alaskan and Aleutian seismicity and volcanoes

    Science.gov (United States)

    Lindquist, Kent Gordon

    We constructed a near-real-time system, called Iceworm, to automate seismic data collection, processing, storage, and distribution at the Alaska Earthquake Information Center (AEIC). Phase-picking, phase association, and interprocess communication components come from Earthworm (U.S. Geological Survey). A new generic, internal format for digital data supports unified handling of data from diverse sources. A new infrastructure for applying processing algorithms to near-real-time data streams supports automated information extraction from seismic wavefields. Integration of Datascope (U. of Colorado) provides relational database management of all automated measurements, parametric information for located hypocenters, and waveform data from Iceworm. Data from 1997 yield 329 earthquakes located by both Iceworm and the AEIC. Of these, 203 have location residuals under 22 km, sufficient for hazard response. Regionalized inversions for local magnitude in Alaska yield ML calibration curves (log A0) that differ from the Californian Richter magnitude. The new curve is 0.2 ML units more attenuative than the Californian curve at 400 km for earthquakes north of the Denali fault. South of the fault, and for a region north of Cook Inlet, the difference is 0.4 ML. A curve for deep events differs by 0.6 ML at 650 km. We expand geographic coverage of Alaskan regional seismic monitoring to the Aleutians, the Bering Sea, and the entire Arctic by initiating the processing of four short-period Alaskan seismic arrays. To show the array stations' sensitivity, we detect and locate two microearthquakes that were missed by the AEIC. An empirical study of the location sensitivity of the arrays predicts improvements over the Alaskan regional network that are shown as map-view contour plots. We verify these predictions by detecting an ML 3.2 event near Unimak Island with one array. The detection and location of four representative earthquakes illustrates the expansion
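
The magnitude-calibration point in this record can be illustrated with the standard local-magnitude definition, ML = log10 A − log10 A0(r); the log A0 values below are assumed for illustration, not the paper's calibration coefficients.

```python
import math

def local_magnitude(amp_mm, logA0):
    """Richter-style local magnitude: ML = log10(A) - log10(A0(r)),
    where -log10(A0) is the distance-dependent attenuation term of a
    regional calibration curve."""
    return math.log10(amp_mm) - logA0

# Hypothetical illustration of the abstract's point: at 400 km, a
# curve 0.2 units more attenuative than the Californian one assigns
# a magnitude 0.2 units larger to the same recorded amplitude.
ml_california = local_magnitude(1.0, -3.0)   # assumed logA0 at 400 km
ml_alaska = local_magnitude(1.0, -3.2)       # assumed, 0.2 more attenuative
```

The regionalized inversions in the study amount to re-estimating the log A0(r) term so that the same amplitude maps to a consistent magnitude across Alaskan travel paths.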

  11. Structural control of ultra-fine CoPt nanodot arrays via electrodeposition process

    Energy Technology Data Exchange (ETDEWEB)

    Wodarz, Siggi [Department of Applied Chemistry, Waseda University, Shinjuku, Tokyo 169-8555 (Japan); Hasegawa, Takashi; Ishio, Shunji [Department of Materials Science, Akita University, Akita City 010-8502 (Japan); Homma, Takayuki, E-mail: t.homma@waseda.jp [Department of Applied Chemistry, Waseda University, Shinjuku, Tokyo 169-8555 (Japan)

    2017-05-15

    CoPt nanodot arrays were fabricated by combining electrodeposition and electron beam lithography (EBL) for use in bit-patterned media (BPM). To achieve precise control of the deposition uniformity and coercivity of the CoPt nanodot arrays, their crystal structure and magnetic properties were controlled by managing the diffusion state of metal ions from the initial deposition stage through the application of bath agitation. With bath agitation, the composition gradient of the CoPt alloy with thickness was mitigated toward a near-ideal alloy composition of Co:Pt = 80:20, which induces epitaxial-like growth from the Ru substrate, thus improving the crystal orientation of the hcp (002) structure from the initial deposition stages. Furthermore, cross-sectional transmission electron microscope (TEM) analysis of the nanodots deposited with bath agitation showed CoPt growth along the c-axis, oriented in the perpendicular direction, with uniform lattice fringes on the hcp (002) plane from the Ru underlayer interface, a significant factor in inducing perpendicular magnetic anisotropy. Magnetic characterization of the CoPt nanodot arrays showed an increase in the perpendicular coercivity and squareness of the hysteresis loops from 2.0 kOe and 0.64 (without agitation) to 4.0 kOe and 0.87 with bath agitation. Based on this detailed characterization, precise crystal-structure control of ultra-high-recording-density nanodot arrays by an electrochemical process was successfully demonstrated. - Highlights: • Ultra-fine CoPt nanodot arrays were fabricated by electrodeposition. • Crystallinity of hcp (002) was improved with uniform composition formation. • Uniform formation of hcp lattices leads to an increase in the coercivity.

  12. Adaptive PCA based fault diagnosis scheme in imperial smelting process.

    Science.gov (United States)

    Hu, Zhikun; Chen, Zhiwen; Gui, Weihua; Jiang, Bin

    2014-09-01

    In this paper, an adaptive fault detection scheme based on recursive principal component analysis (PCA) is proposed to deal with the problem of false alarms due to normal process changes in real processes. A fault isolation approach is also developed, based on the Generalized Likelihood Ratio (GLR) test and the Singular Value Decomposition (SVD) underlying PCA, with which offset and scaling faults can be isolated, giving explicit offset-fault directions and a scaling-fault classification. Identification of offset and scaling faults is addressed as well. The complete PCA-based fault diagnosis procedure is presented. The proposed scheme is applied to the Imperial Smelting Process, and the results show that the proposed strategies can mitigate false alarms and isolate faults efficiently. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
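
A minimal, non-recursive sketch of PCA-based detection with the SPE (Q) statistic may help fix ideas; the recursive updating and GLR-based isolation of the paper are not reproduced here, and the 3-sigma control limit and synthetic data are assumptions.

```python
import numpy as np

def fit_pca_monitor(X, n_pc=2):
    """Fit a PCA monitoring model on normal operating data X (rows =
    samples): autoscale, keep n_pc principal components, and derive a
    simple SPE (Q-statistic) control limit.  The 3-sigma limit is an
    ad-hoc assumption, not the paper's threshold."""
    mu, sd = X.mean(axis=0), X.std(axis=0)
    Xs = (X - mu) / sd
    _, _, Vt = np.linalg.svd(Xs, full_matrices=False)
    P = Vt[:n_pc].T                          # loading matrix
    resid = Xs - Xs @ P @ P.T                # residual-subspace part
    spe = np.sum(resid ** 2, axis=1)
    return mu, sd, P, spe.mean() + 3.0 * spe.std()

def spe_alarm(x, mu, sd, P, limit):
    """SPE of one new sample; True flags a potential fault."""
    xs = (x - mu) / sd
    r = xs - P @ (P.T @ xs)
    return float(r @ r) > limit

# Purely synthetic "normal operation": two latent factors driving
# four correlated measurements.
rng = np.random.default_rng(1)
t = rng.normal(size=(200, 2))
A = np.array([[1.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 1.0]])
X = t @ A + 0.05 * rng.normal(size=(200, 4))
mu, sd, P, limit = fit_pca_monitor(X)
```

A fault that breaks the correlation structure inflates the residual and trips the alarm; the recursive variant of the paper additionally re-estimates the model so that slow normal drift does not.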

  13. CONSTRUCTIVE MODEL OF ADAPTATION OF DATA STRUCTURES IN RAM. PART II. CONSTRUCTORS OF SCENARIOS AND ADAPTATION PROCESSES

    Directory of Open Access Journals (Sweden)

    V. I. Shynkarenko

    2016-04-01

    Purpose. The second part of the paper completes the presentation of constructive-productive structures (CPS) that model the adaptation of data structures in random-access memory (RAM). The purpose of this part is to develop a model of the process of adapting data in RAM functioning in different hardware and software environments and under different data-processing scenarios. Methodology. The methodology of mathematical and algorithmic constructionism was applied. In this part of the paper, the constructors of scenarios and of adaptation processes were developed on the basis of a generalized CPS through its transformational conversions. Constructors are interpreted, specialized CPS. The terminal alphabets of the scenario constructor were defined as data-processing algorithms, and those of the adaptation constructor as algorithmic components of the adaptation process. The methodology involves the development of substitution rules that determine the derivation of the relevant structures. Findings. In the second part of the paper, the system is represented by CPS modeling the adaptation of data placement in RAM, namely, the constructors of scenarios and of adaptation processes. The result of executing the scenario constructor is a set of data-processing operations in the form of text in the C# programming language; the result of the adaptation-process constructor is an adaptation process; and the result of the adaptation process is the adapted binary code for processing the data structures. Originality. For the first time, a constructive model of data processing is proposed: a scenario that takes into account the order and number of accesses to the various elements of data structures, together with adaptation of the data structures to different hardware and software environments. The placement of data in RAM and the processing algorithms are adapted at the same time. Applying constructionism in modeling makes it possible to link data models and algorithms for

  14. [Super sweet corn hybrids adaptability for industrial processing. I freezing].

    Science.gov (United States)

    Alfonzo, Braunnier; Camacho, Candelario; Ortiz de Bertorelli, Ligia; De Venanzi, Frank

    2002-09-01

    With the purpose of evaluating the adaptability of the super sweet corn sh2 hybrids Krispy King, Victor and 324 to the freezing process, 100 cobs of each type were frozen at -18 degrees C. After 120 days of storage, their chemical, microbiological and sensorial characteristics were compared with those of a sweet su corn. The industrial quality of the freezing process and the length and number of rows of the cobs were also determined. Results revealed yields above 60% in frozen corn. Length and number of rows in cobs were acceptable. Most of the chemical characteristics of the super sweet hybrids were not different from those of the sweet corn assayed at the 5% significance level. The moisture content and soluble solids of hybrid Victor, as well as the total sugars of hybrid 324, were statistically different. All sh2 corns had higher pH values. During freezing, soluble solids concentration, sugars and acids decreased whereas pH increased. Frozen cobs exhibited an acceptable microbiological profile, with low activities of mesophiles and total coliforms, absence of psychrophiles and fecal coliforms, and an appreciable amount of molds. In conclusion, the sh2 hybrids adapted with no problems to the freezing process; they had lower contents of soluble solids and higher contents of total sugars, almost double those of the su corn; flavor, texture, sweetness and appearance of kernels were also better. Hybrid Victor was preferred by the evaluating panel and had an outstanding performance due to its yield and sensorial characteristics.

  15. Processing and display of medical three dimensional arrays of numerical data using octree encoding

    International Nuclear Information System (INIS)

    Amans, J.L.; Darier, P.

    1985-01-01

    Imaging modalities such as X-ray Computerized Tomography (CT), Nuclear Medicine and Nuclear Magnetic Resonance can produce three-dimensional (3-D) arrays of numerical data describing the internal structures of medical objects. The analysis of 3-D data by synthetic generation of realistic images is an important area of computer graphics and imaging. We are currently developing experimental software for the analysis, processing and display of 3-D arrays of numerical data organized in a hierarchical data structure using the OCTREE (octal-tree) encoding technique, based on a recursive subdivision of the data volume. The OCTREE encoding structure is an extension of the two-dimensional tree structure, the quadtree, developed for image processing applications. Before any operation, the 3-D array of data is OCTREE encoded; thereafter all processing is performed on the encoded object. The elementary process for the elaboration of a synthetic image includes conditioning the volume (volume partition by numerical and spatial segmentation, choice of the view-point, ...) and two-dimensional display, either by spatial integration (radiography) or by shaded surface representation. This paper introduces these different concepts and specifies the advantages of OCTREE encoding techniques in realizing these operations. Furthermore, the application of the OCTREE encoding scheme to the display of 3-D medical volumes generated from multiple CT scans is presented
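The recursive subdivision at the heart of OCTREE encoding can be sketched in a few lines: a homogeneous block becomes a leaf, and anything else splits into eight octants. This is a minimal illustration only (the `octree_encode` name and the list-of-eight representation are assumptions, not the authors' implementation):

```python
import numpy as np

def octree_encode(vol):
    """Recursively encode a cubic 3-D array: a block whose voxels all share
    one value becomes a leaf; otherwise it splits into 8 octants."""
    if vol.min() == vol.max():          # homogeneous block -> leaf node
        return int(vol.flat[0])
    h = vol.shape[0] // 2               # side length assumed a power of two
    return [octree_encode(vol[x:x + h, y:y + h, z:z + h])
            for x in (0, h) for y in (0, h) for z in (0, h)]

# Example: a 4x4x4 volume with a single bright 2x2x2 corner region
vol = np.zeros((4, 4, 4), dtype=int)
vol[:2, :2, :2] = 7
tree = octree_encode(vol)
```

For this example volume, only the first octant is non-zero, so the encoding collapses 64 voxels into eight leaves.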

  16. NeuroSeek dual-color image processing infrared focal plane array

    Science.gov (United States)

    McCarley, Paul L.; Massie, Mark A.; Baxter, Christopher R.; Huynh, Buu L.

    1998-09-01

    Several technologies have been developed in recent years to advance the state of the art of IR sensor systems, including affordable dual-color focal planes, on-focal-plane biologically inspired image and signal processing techniques, and spectral sensing techniques. Pacific Advanced Technology (PAT) and the Air Force Research Lab Munitions Directorate have developed a system which incorporates the best of these capabilities into a single device. The 'NeuroSeek' device integrates these technologies into an IR focal plane array (FPA) which combines multicolor midwave-IR/longwave-IR radiometric response with on-focal-plane 'smart' neuromorphic analog image processing. The readout and processing very-large-scale-integration (VLSI) chip developed under this effort will be hybridized to a dual-color detector array to produce the NeuroSeek FPA, which will have the capability to fuse multiple pixel-based sensor inputs directly on the focal plane. Great advantages are afforded by the application of massively parallel processing algorithms to image data in the analog domain; the high speed and low power consumption of this device mimic operations performed in the human retina.

  17. Effect of Source, Surfactant, and Deposition Process on Electronic Properties of Nanotube Arrays

    Directory of Open Access Journals (Sweden)

    Dheeraj Jain

    2011-01-01

    Full Text Available The electronic properties of arrays of carbon nanotubes from several different sources are studied; the sources differ in the manufacturing process used and in average properties such as length, diameter, and chirality. We used several common surfactants to disperse each of these nanotubes and then deposited them on Si wafers from their aqueous solutions using dielectrophoresis. Transport measurements were performed to compare and determine the effect of different surfactants, deposition processes, and synthesis processes on nanotubes synthesized using CVD, CoMoCAT, laser ablation, and HiPCO.

  18. Subspace Dimensionality: A Tool for Automated QC in Seismic Array Processing

    Science.gov (United States)

    Rowe, C. A.; Stead, R. J.; Begnaud, M. L.

    2013-12-01

    Because of the great resolving power of seismic arrays, the application of automated processing to array data is critically important in treaty verification work. A significant problem in array analysis is the inclusion of bad sensor channels in the beamforming process. We are testing an approach to automated, on-the-fly quality control (QC) to aid in the identification of poorly performing sensor channels prior to beamforming in routine event detection or location processing. The idea stems from methods used for large computer servers, where monitoring traffic at enormous numbers of nodes on a node-by-node basis is impractical, so the dimensionality of the node traffic is instead monitored for anomalies that could represent malware, cyber-attacks or other problems. The technique relies upon the use of subspace dimensionality or principal components of the overall system traffic. The subspace technique is not new to seismology, but its most common application has been limited to comparing waveforms to an a priori collection of templates for detecting highly similar events in a swarm or seismic cluster. In the established template application, a detector functions in a manner analogous to waveform cross-correlation, applying a statistical test to assess the similarity of the incoming data stream to known templates for events of interest. In our approach, we seek not to detect matching signals; instead, we examine the signal subspace dimensionality in much the same way that the method addresses node traffic anomalies in large computer systems. Signal anomalies recorded on seismic arrays affect the dimensional structure of the array-wide time series. We have shown previously that this observation is useful in identifying real seismic events, either by looking at the raw signal or derivatives thereof (entropy, kurtosis), but here we explore the effects of malfunctioning channels on the dimension of the data and its derivatives, and how to leverage this effect for
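As an illustration of the underlying idea, the effective dimensionality of an array-wide time series can be estimated from the singular values of the channels-by-samples matrix; a malfunctioning channel that no longer shares the coherent wavefield raises that dimension. This is a hedged sketch of the general principle, not the authors' algorithm (the 95% energy threshold and function name are assumptions):

```python
import numpy as np

def effective_dimension(data, energy=0.95):
    """Number of principal components needed to capture `energy` of the
    total variance in a channels-x-samples array."""
    centered = data - data.mean(axis=1, keepdims=True)
    s = np.linalg.svd(centered, compute_uv=False)
    frac = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(frac, energy) + 1)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
signal = np.sin(2 * np.pi * 5 * t)
good = np.vstack([signal + 0.01 * rng.standard_normal(t.size)
                  for _ in range(8)])      # 8 coherent channels
dim_good = effective_dimension(good)
bad = good.copy()
bad[3] = rng.standard_normal(t.size)       # one dead/noisy channel
dim_bad = effective_dimension(bad)
```

With all channels coherent the array-wide dimension is one; replacing a single channel with uncorrelated noise raises it, which is the anomaly a QC monitor would flag.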

  19. Lightweight solar array blanket tooling, laser welding and cover process technology

    Science.gov (United States)

    Dillard, P. A.

    1983-01-01

    A two phase technology investigation was performed to demonstrate effective methods for integrating 50 micrometer thin solar cells into ultralightweight module designs. During the first phase, innovative tooling was developed which allows lightweight blankets to be fabricated in a manufacturing environment with acceptable yields. During the second phase, the tooling was improved and the feasibility of laser processing of lightweight arrays was confirmed. The development of the cell/interconnect registration tool and interconnect bonding by laser welding is described.

  20. Monitoring and Evaluation of Alcoholic Fermentation Processes Using a Chemocapacitor Sensor Array

    Science.gov (United States)

    Oikonomou, Petros; Raptis, Ioannis; Sanopoulou, Merope

    2014-01-01

    The alcoholic fermentation of the Savatiano must variety was initiated under laboratory conditions and monitored daily with a gas sensor array without any pre-treatment steps. The sensor array consisted of eight interdigitated chemocapacitors (IDCs) coated with specific polymers. Two batches of fermented must were tested and also subjected daily to standard chemical analysis. The chemical composition of the two fermenting musts differed from day one of laboratory monitoring (due to different storage conditions of the musts) and due to a deliberate increase of the acetic acid content of one of the musts during the course of the process, in an effort to spoil the fermenting medium. Sensor array responses to the headspace of the fermenting medium were compared with those obtained either for pure samples or for samples contaminated with controlled concentrations of impurities in standard ethanol solutions. Results of data processing with Principal Component Analysis (PCA) demonstrate that this sensing system could discriminate between a normal and a potentially spoiled grape must fermentation process, so this gas sensing system could potentially be applied during wine production as an auxiliary qualitative control instrument. PMID:25184490
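As a rough illustration of the PCA step, a numpy-only sketch (the simulated eight-sensor responses, sample counts, and offsets are invented for illustration, not the paper's data) shows how responses from normal and spoiled samples separate along the first principal component:

```python
import numpy as np

def pca_scores(X, n=2):
    """Project the rows of X onto the first n principal components."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n].T

rng = np.random.default_rng(1)
# Simulated 8-capacitor responses: spoiled samples shifted on sensors 0-3,
# mimicking an acetic-acid-like offset in the headspace signature
normal  = rng.normal(0.0, 0.1, size=(20, 8))
spoiled = rng.normal(0.0, 0.1, size=(20, 8))
spoiled[:, :4] += 1.0
scores = pca_scores(np.vstack([normal, spoiled]))
pc1 = scores[:, 0]   # the two classes separate along this axis
```

Because the between-class shift dominates the within-class noise, the first principal component aligns with the spoilage direction and the two groups of scores do not overlap.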

  1. Road Sign Recognition with Fuzzy Adaptive Pre-Processing Models

    Science.gov (United States)

    Lin, Chien-Chuan; Wang, Ming-Shi

    2012-01-01

    A road sign recognition system based on adaptive image pre-processing models using two fuzzy inference schemes has been proposed. The first fuzzy inference scheme checks the changes in light illumination and rich red color of a frame image using the checking areas. The other checks the variance of the vehicle's speed and the angle of the steering wheel to select an adaptive size and position of the detection area. The Adaboost classifier was employed to detect road sign candidates in an image, and the support vector machine technique was employed to recognize the content of the road sign candidates. Prohibitory and warning road traffic signs are the processing targets in this research. The detection rate in the detection phase is 97.42%. In the recognition phase, the recognition rate is 93.04%. The total accuracy rate of the system is 92.47%. For video sequences, the best accuracy rate is 90.54%, and the average accuracy rate is 80.17%. The average computing time is 51.86 milliseconds per frame. The proposed system can not only overcome the problems of low illumination and rich red color around road signs but also offer high detection rates and high computing performance. PMID:22778650

  2. High density processing electronics for superconducting tunnel junction x-ray detector arrays

    Energy Technology Data Exchange (ETDEWEB)

    Warburton, W.K., E-mail: bill@xia.com [XIA LLC, 31057 Genstar Road, Hayward, CA 94544 (United States); Harris, J.T. [XIA LLC, 31057 Genstar Road, Hayward, CA 94544 (United States); Friedrich, S. [Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States)

    2015-06-01

    Superconducting tunnel junctions (STJs) are excellent soft x-ray (100–2000 eV) detectors, particularly for synchrotron applications, because of their ability to obtain energy resolutions below 10 eV at count rates approaching 10 kcps. In order to achieve useful solid detection angles with these very small detectors, they are typically deployed in large arrays – currently with 100+ elements, but with 1000 elements being contemplated. In this paper we review a 5-year effort to develop compact, computer controlled low-noise processing electronics for STJ detector arrays, focusing on the major issues encountered and our solutions to them. Of particular interest are our preamplifier design, which can set the STJ operating points under computer control and achieve 2.7 eV energy resolution; our low noise power supply, which produces only 2 nV/√Hz noise at the preamplifier's critical cascode node; our digital processing card that digitizes and digitally processes 32 channels; and an STJ I–V curve scanning algorithm that computes noise as a function of offset voltage, allowing an optimum operating point to be easily selected. With 32 preamplifiers laid out on a custom 3U EuroCard, and the 32 channel digital card in a 3U PXI card format, electronics for a 128 channel array occupy only two small chassis, each the size of a National Instruments 5-slot PXI crate, and allow full array control with simple extensions of existing beam line data collection packages.
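The operating-point selection described above amounts to sweeping the bias offset, recording noise at each point, and picking the minimum. A toy sketch (the function name and the quadratic noise model are purely illustrative stand-ins, not the actual scanning algorithm):

```python
import numpy as np

def scan_operating_point(offsets, measure_noise):
    """Sweep the STJ bias offsets, record noise at each, and return the
    offset with minimum noise (hypothetical stand-in for the I-V scan)."""
    noise = np.array([measure_noise(v) for v in offsets])
    best = offsets[int(np.argmin(noise))]
    return best, noise

# Toy noise model: a quiet bias region around 0.2 mV with noisier edges
model = lambda v: 2.0 + 50.0 * (v - 0.2) ** 2
offsets = np.linspace(0.0, 0.4, 81)
best, noise = scan_operating_point(offsets, model)
```

In the real system `measure_noise` would be a hardware measurement at each offset voltage; the selection logic is the same argmin over the recorded noise curve.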

  3. Improved adaptive input voltage control of a solar array interfacing current mode controlled boost power stage

    International Nuclear Information System (INIS)

    Sitbon, Moshe; Schacham, Shmuel; Suntio, Teuvo; Kuperman, Alon

    2015-01-01

    Highlights: • A photovoltaic generator dynamic resistance online estimation method is proposed. • A control method allowing nominal performance to be achieved at all times is presented. • The method is suitable for any type of photovoltaic system. - Abstract: The nonlinear characteristics of photovoltaic generators were recently shown to significantly influence the dynamics of interfacing power stages. Moreover, since the dynamic resistance of photovoltaic generators depends both on the operating point and on environmental variables, the combined dynamics exhibits these dependencies as well, increasing the control challenge. Typically, linear time-invariant input voltage loop controllers (e.g. Proportional-Integral-Derivative) are utilized in photovoltaic applications, designed according to nominal operating conditions. Nevertheless, since the actual dynamics is seldom nominal, the closed-loop performance of such systems varies as well. In this paper, an adaptive control method is proposed, allowing the photovoltaic generator resistance to be estimated online and utilized to modify the controller parameters so that the closed-loop performance remains nominal throughout the whole operating range. Unlike a previously proposed method, which utilizes the double-grid-frequency component for estimation purposes and suffers from various drawbacks such as operating-point dependence and applicability to single-phase grid-connected systems only, the proposed method is based on harmonic current injection and is independent of operating point and system topology
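The harmonic-injection idea can be illustrated with a single-bin DFT: inject a small sinusoidal current perturbation and take the ratio of the voltage to current amplitudes at the injection frequency as the dynamic resistance estimate. This is a hedged numerical sketch (the sample rate, injection frequency, and function name are assumptions, not the paper's implementation):

```python
import numpy as np

def estimate_dynamic_resistance(i_inj, v_resp, f, fs):
    """Estimate r_pv = |V(f)| / |I(f)| from the injected harmonic current
    and the resulting PV voltage ripple (single-bin DFT at frequency f)."""
    t = np.arange(i_inj.size) / fs
    probe = np.exp(-2j * np.pi * f * t)
    v_amp = abs(np.dot(v_resp, probe))
    i_amp = abs(np.dot(i_inj, probe))
    return v_amp / i_amp

fs, f, r_true = 10_000, 100, 3.7        # assumed sample rate / injection freq
t = np.arange(0, 0.1, 1 / fs)           # 0.1 s window = 10 whole cycles
i_inj = 0.05 * np.sin(2 * np.pi * f * t)
v_resp = -r_true * i_inj + 0.001 * np.random.default_rng(2).standard_normal(t.size)
r_est = estimate_dynamic_resistance(i_inj, v_resp, f, fs)
```

The estimate could then be used to rescale the voltage-loop controller gains so that the closed loop stays at its nominal design point as the operating conditions drift.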

  4. Phased arrays techniques and split spectrum processing for inspection of thick titanium casting components

    International Nuclear Information System (INIS)

    Banchet, J.; Chahbaz, A.; Sicard, R.; Zellouf, D.E.

    2003-01-01

    In aircraft structures, titanium parts and engine members are critical structural components, and their inspection is crucial. However, these structures are very difficult to inspect ultrasonically because of their large grain structure, which increases noise drastically. In this work, phased array inspection setups were developed to detect small defects, such as simulated inclusions and porosity, contained in thick titanium casting blocks, which are frequently used in the aerospace industry. A Split Spectrum Processing (SSP)-based algorithm was then applied to the acquired data, employing a set of parallel bandpass filters with different center frequencies. This process led to a substantial improvement of the signal-to-noise ratio and thus of detectability
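The filter-bank idea behind split spectrum processing can be sketched as follows: the A-scan is passed through several Gaussian bandpass filters, the band envelopes are computed, and the pointwise minimum across bands is taken, since a real echo appears at the same instant in every band while grain noise decorrelates between bands. A minimal numpy sketch (minimisation recombination is one common SSP variant; the filter shapes and parameters here are assumptions, not the paper's settings):

```python
import numpy as np

def band_envelope(half_spec, n):
    """Envelope of one filtered band via the analytic signal."""
    full = np.zeros(n, dtype=complex)
    full[:half_spec.size] = half_spec
    full[1:(n + 1) // 2] *= 2            # build the analytic spectrum
    return np.abs(np.fft.ifft(full))

def split_spectrum_min(signal, fs, centers, bw):
    """SSP sketch: Gaussian bandpass bank plus minimisation recombination."""
    n = signal.size
    freqs = np.fft.rfftfreq(n, 1 / fs)
    spec = np.fft.rfft(signal)
    envs = [band_envelope(spec * np.exp(-0.5 * ((freqs - fc) / bw) ** 2), n)
            for fc in centers]
    return np.min(np.stack(envs), axis=0)

# Toy A-scan: broadband 5 MHz echo at t = 4 us, sampled at 100 MHz
fs = 100.0                               # MHz -> time axis in microseconds
t = np.arange(1024) / fs
echo = np.exp(-((t - 4.0) / 0.15) ** 2) * np.cos(2 * np.pi * 5.0 * t)
out = split_spectrum_min(echo, fs, centers=[3, 4, 5, 6, 7], bw=0.8)
```

On grainy material the minimum operator suppresses narrowband noise spikes that appear in only some bands, while the echo, present in all bands at the same time, survives.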

  5. Post-processing Free Quantum Random Number Generator Based on Avalanche Photodiode Array

    International Nuclear Information System (INIS)

    Li Yang; Liao Sheng-Kai; Liang Fu-Tian; Shen Qi; Liang Hao; Peng Cheng-Zhi

    2016-01-01

    Quantum random number generators adopting single photon detection have been restricted by the non-negligible dead time of avalanche photodiodes (APDs). We propose a new approach based on an APD array to improve the generation rate of random numbers significantly. This method compares the detectors' responses to consecutive optical pulses and generates the random sequence. We implement a demonstration experiment to show its simplicity, compactness and scalability. The generated numbers are proved to be unbiased, post-processing free and ready to use, and their randomness is verified by using the National Institute of Standards and Technology statistical test suite. The random bit generation efficiency is as high as 32.8% and the potential generation rate adopting the 32 × 32 APD array is up to tens of Gbit/s. (paper)
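One plausible reading of the comparison method can be sketched as follows: for each detector, the responses to two consecutive pulses are compared, with ties discarded (the simulated Poisson counts and the function name are assumptions; the paper's detector model may differ):

```python
import numpy as np

def bits_from_pulse_pairs(counts_a, counts_b):
    """Comparison sketch: per detector, compare responses to two consecutive
    optical pulses; a > b emits 1, a < b emits 0, ties are discarded.
    Comparing a detector with itself cancels fixed per-pixel biases."""
    bits = []
    for a, b in zip(counts_a, counts_b):
        if a > b:
            bits.append(1)
        elif a < b:
            bits.append(0)
    return bits

rng = np.random.default_rng(7)
# Simulated photon counts for a 32 x 32 = 1024-pixel APD array, two pulses
pulse1 = rng.poisson(2.0, size=1024)
pulse2 = rng.poisson(2.0, size=1024)
bits = bits_from_pulse_pairs(pulse1, pulse2)
```

Because both pulses illuminate the same detector under identical conditions, the two outcomes are exchangeable and the emitted bits are unbiased without post-processing; the discarded ties account for the sub-100% generation efficiency.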

  6. Controllable 3D architectures of aligned carbon nanotube arrays by multi-step processes

    Science.gov (United States)

    Huang, Shaoming

    2003-06-01

    An effective way to fabricate large-area three-dimensional (3D) aligned CNT patterns based on the pyrolysis of iron(II) phthalocyanine (FePc) by a two-step process is reported. The controllable generation of different lengths and the selective growth of aligned CNT arrays on metal-patterned (e.g., Ag and Au) substrates are the bases for generating such 3D aligned CNT architectures. By controlling the experimental conditions, 3D aligned CNT arrays with different lengths/densities and morphologies/structures, as well as multi-layered architectures, can be fabricated on a large scale by multi-step pyrolysis of FePc. These 3D architectures could have interesting properties and be applied to developing novel nanotube-based devices.

  7. Fabrication of Aligned Polyaniline Nanofiber Array via a Facile Wet Chemical Process.

    Science.gov (United States)

    Sun, Qunhui; Bi, Wu; Fuller, Thomas F; Ding, Yong; Deng, Yulin

    2009-06-17

    In this work, we demonstrate for the first time a template-free approach to synthesize an aligned polyaniline nanofiber (PN) array on a passivated gold (Au) substrate via a facile wet chemical process. The Au surface was first modified using 4-aminothiophenol (4-ATP) to afford the surface functionality, followed by oxidative polymerization of aniline (AN) monomer in an aqueous medium using ammonium persulfate as the oxidant and tartaric acid as the doping agent. The results show that a vertically aligned PANI nanofiber array, with individual fiber diameters of ca. 100 nm, heights of ca. 600 nm and a packing density of ca. 40 fibers·µm⁻², was synthesized. Copyright © 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Arrays of surface-normal electroabsorption modulators for the generation and signal processing of microwave photonics signals

    NARCIS (Netherlands)

    Noharet, Bertrand; Wang, Qin; Platt, Duncan; Junique, Stéphane; Marpaung, D.A.I.; Roeloffzen, C.G.H.

    2011-01-01

    The development of an array of 16 surface-normal electroabsorption modulators operating at 1550nm is presented. The modulator array is dedicated to the generation and processing of microwave photonics signals, targeting a modulation bandwidth in excess of 5GHz. The hybrid integration of the

  9. Adapting the transtheoretical model of change to the bereavement process.

    Science.gov (United States)

    Calderwood, Kimberly A

    2011-04-01

    Theorists currently believe that bereaved people undergo some transformation of self rather than returning to their original state. To advance our understanding of this process, this article presents an adaptation of Prochaska and DiClemente's transtheoretical model of change as it could be applied to the journey that bereaved individuals experience. This theory is unique because it addresses attitudes, intentions, and behavioral processes at each stage; it allows for a focus on a broader range of emotions than just anger and depression; it allows for the recognition of two periods of regression during the bereavement process; and it adds a maintenance stage, which other theories lack. This theory can benefit bereaved individuals directly and through increased awareness among counselors, family, friends, employers, and society at large. This theory may also be used as a tool for bereavement programs to consider whether they are meeting clients' needs throughout the transformative change process of bereavement, rather than only focusing on the initial stages characterized by intense emotion.

  10. Green Software Engineering Adaption In Requirement Elicitation Process

    Directory of Open Access Journals (Sweden)

    Umma Khatuna Jannat

    2015-08-01

    Full Text Available A recent line of research investigates the role of environmental concern in software, that is, green software systems. It is now widely accepted that green software can fit all processes of software development, and it is also suitable for the requirements elicitation process. Nowadays software companies use requirements elicitation techniques in the vast majority of projects, because this process plays an increasingly important role in software development. At present, most requirements elicitation processes are improved by using various techniques and tools. The intention of this research is therefore to adapt green software engineering to existing elicitation techniques and to recommend suitable actions for improvement. This research involved qualitative data. I used a few keywords in my search procedure and then searched IEEE, ACM, Springer, Elsevier, Google Scholar, Scopus and Wiley for articles published from 2010 until 2016. From the literature review, 15 traditional requirements elicitation factors and 23 improvement techniques for conversion to green engineering were identified. Lastly, the paper includes a short review of the literature, a description of the grounded theory, and some of the identified issues related to the need for requirements elicitation improvement techniques.

  11. Hierarchical adaptive experimental design for Gaussian process emulators

    International Nuclear Information System (INIS)

    Busby, Daniel

    2009-01-01

    Large computer simulators usually have complex and nonlinear input-output functions. This complicated input-output relation can be analyzed by global sensitivity analysis; however, this usually requires massive Monte Carlo simulations. To effectively reduce the number of simulations, statistical techniques such as Gaussian process emulators can be adopted. The accuracy and reliability of these emulators strongly depend on the experimental design, where suitable evaluation points are selected. In this paper a new sequential design strategy called hierarchical adaptive design is proposed to obtain an accurate emulator using the least possible number of simulations. The hierarchical design proposed in this paper is tested on various standard analytic functions and on a challenging reservoir forecasting application. Comparisons with standard one-stage designs such as maximin Latin hypercube designs show that the hierarchical adaptive design produces a more accurate emulator with the same number of computer experiments. Moreover, a stopping criterion is proposed that enables performing only the number of simulations necessary to obtain the required approximation accuracy.
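The sequential idea, fitting a Gaussian process to the runs so far and then placing the next simulation where the predictive variance is largest, can be sketched in numpy (the RBF kernel, length-scale, and the toy "simulator" are all assumptions; the paper's hierarchical design is more elaborate than this one-point-at-a-time loop):

```python
import numpy as np

def gp_posterior(X, y, Xs, ls=0.3, noise=1e-6):
    """GP posterior mean/variance with an RBF kernel (minimal 1-D sketch)."""
    def k(A, B):
        d = A[:, None] - B[None, :]
        return np.exp(-0.5 * (d / ls) ** 2)
    K = k(X, X) + noise * np.eye(X.size)
    Ks = k(Xs, X)
    mean = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var

f = lambda x: np.sin(4 * x) + 0.5 * x          # stand-in for the simulator
cand = np.linspace(0, 1, 101)                  # candidate design points
X = np.array([0.0, 1.0]); y = f(X)             # initial two-point design
_, var0 = gp_posterior(X, y, cand)
for _ in range(6):                             # adaptive stage: always add
    _, var = gp_posterior(X, y, cand)          # the candidate point with
    xn = cand[int(np.argmax(var))]             # the largest predictive
    X = np.append(X, xn); y = np.append(y, f(xn))   # variance
_, var_final = gp_posterior(X, y, cand)
```

Each added run shrinks the emulator's worst-case predictive variance, which is also the quantity a stopping criterion of the kind described above would monitor.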

  12. Rapid prototyping of biodegradable microneedle arrays by integrating CO2 laser processing and polymer molding

    International Nuclear Information System (INIS)

    Tu, K T; Chung, C K

    2016-01-01

    An integrated technology of CO2 laser processing and polymer molding has been demonstrated for the rapid prototyping of biodegradable poly-lactic-co-glycolic acid (PLGA) microneedle arrays. Rapid and low-cost CO2 laser processing was used for the fabrication of a high-aspect-ratio microneedle master mold instead of conventional time-consuming and expensive photolithography and etching processes. It is crucial to use flexible polydimethylsiloxane (PDMS) to detach PLGA. However, the direct CO2 laser-ablated PDMS could generate poor surfaces with bulges, scorches, re-solidification and shrinkage. Here, we have combined the polymethyl methacrylate (PMMA) ablation and two-step PDMS casting process to form a PDMS female microneedle mold to eliminate the problem of direct ablation. A self-assembled monolayer polyethylene glycol was coated to prevent stiction between the two PDMS layers during the peeling-off step in the PDMS-to-PDMS replication. Then the PLGA microneedle array was successfully released by bending the second-cast PDMS mold with flexibility and hydrophobic property. The depth of the polymer microneedles can range from hundreds of micrometers to millimeters. It is linked to the PMMA pattern profile and can be adjusted by CO2 laser power and scanning speed. The proposed integration process is maskless, simple and low-cost for rapid prototyping with a reusable mold. (paper)

  13. Rapid prototyping of biodegradable microneedle arrays by integrating CO2 laser processing and polymer molding

    Science.gov (United States)

    Tu, K. T.; Chung, C. K.

    2016-06-01

    An integrated technology of CO2 laser processing and polymer molding has been demonstrated for the rapid prototyping of biodegradable poly-lactic-co-glycolic acid (PLGA) microneedle arrays. Rapid and low-cost CO2 laser processing was used for the fabrication of a high-aspect-ratio microneedle master mold instead of conventional time-consuming and expensive photolithography and etching processes. It is crucial to use flexible polydimethylsiloxane (PDMS) to detach PLGA. However, the direct CO2 laser-ablated PDMS could generate poor surfaces with bulges, scorches, re-solidification and shrinkage. Here, we have combined the polymethyl methacrylate (PMMA) ablation and two-step PDMS casting process to form a PDMS female microneedle mold to eliminate the problem of direct ablation. A self-assembled monolayer polyethylene glycol was coated to prevent stiction between the two PDMS layers during the peeling-off step in the PDMS-to-PDMS replication. Then the PLGA microneedle array was successfully released by bending the second-cast PDMS mold with flexibility and hydrophobic property. The depth of the polymer microneedles can range from hundreds of micrometers to millimeters. It is linked to the PMMA pattern profile and can be adjusted by CO2 laser power and scanning speed. The proposed integration process is maskless, simple and low-cost for rapid prototyping with a reusable mold.

  14. Uncertainty during pain anticipation: the adaptive value of preparatory processes.

    Science.gov (United States)

    Seidel, Eva-Maria; Pfabigan, Daniela M; Hahn, Andreas; Sladky, Ronald; Grahl, Arvina; Paul, Katharina; Kraus, Christoph; Küblböck, Martin; Kranz, Georg S; Hummer, Allan; Lanzenberger, Rupert; Windischberger, Christian; Lamm, Claus

    2015-02-01

    Anticipatory processes prepare the organism for upcoming experiences. The aim of this study was to investigate neural responses related to anticipation and processing of painful stimuli occurring with different levels of uncertainty. Twenty-five participants (13 females) took part in an electroencephalography and functional magnetic resonance imaging (fMRI) experiment at separate times. A visual cue announced the occurrence of an electrical painful or nonpainful stimulus, delivered with certainty or uncertainty (50% chance), at some point during the following 15 s. During the first 2 s of the anticipation phase, a strong effect of uncertainty was reflected in a pronounced frontal stimulus-preceding negativity (SPN) and increased fMRI activation in higher visual processing areas. In the last 2 s before stimulus delivery, we observed stimulus-specific preparatory processes indicated by a centroparietal SPN and posterior insula activation that was most pronounced for the certain pain condition. Uncertain anticipation was associated with attentional control processes. During stimulation, the results revealed that unexpected painful stimuli produced the strongest activation in the affective pain processing network and a more pronounced offset-P2. Our results reflect that during early anticipation uncertainty is strongly associated with affective mechanisms and seems to be a more salient event compared to certain anticipation. During the last 2 s before stimulation, attentional control mechanisms are initiated related to the increased salience of uncertainty. Furthermore, stimulus-specific preparatory mechanisms during certain anticipation also shaped the response to stimulation, underlining the adaptive value of stimulus-targeted preparatory activity which is less likely when facing an uncertain event. © 2014 Wiley Periodicals, Inc.

  15. Adaptive model predictive process control using neural networks

    Science.gov (United States)

    Buescher, K.L.; Baum, C.C.; Jones, R.D.

    1997-08-19

    A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provide for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from the initial off-line training data. 46 figs.
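The dual-rate construction, fast-rate sampling with control inputs averaged over each gapped period before entering the slow-rate network state vector, might be sketched like this (the function name, the layout of the state vector, and the toy data are assumptions for illustration, not the patented method):

```python
import numpy as np

def gapped_state_vector(u_fast, y_fast, ratio, depth):
    """Build the slow-rate network state vector from fast-rate samples:
    control inputs are averaged over each gap of `ratio` fast samples,
    and the last `depth` averaged inputs/outputs form the state."""
    m = len(u_fast) // ratio * ratio
    u_avg = u_fast[:m].reshape(-1, ratio).mean(axis=1)   # gap averages
    y_slow = y_fast[ratio - 1::ratio][:u_avg.size]       # slow-rate outputs
    return np.concatenate([u_avg[-depth:], y_slow[-depth:]])

u = np.arange(20, dtype=float)       # fast-rate control inputs (toy data)
y = np.arange(20, dtype=float) * 2   # fast-rate plant outputs (toy data)
state = gapped_state_vector(u, y, ratio=5, depth=2)
```

The averaging smooths out the fast control activity so that the neural-network model sees inputs on the same time scale as the state vector it predicts.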

  16. Adaptive Automatic Gauge Control of a Cold Strip Rolling Process

    Directory of Open Access Journals (Sweden)

    ROMAN, N.

    2010-02-01

    Full Text Available The paper deals with the thickness control structure of cold rolled strips. This structure is based on the roll position control of a reversible quarto rolling mill. The main feature of the system proposed in the paper consists in the compensation of the errors introduced by the deficient dynamics of the hydraulic servo-system used for roll positioning, by means of a dynamic compensator that approximates the inverse of the servo-system. Because the servo-system is considered time-variant, on-line identification of the servo-system and parameter adaptation of the compensator are performed. The results obtained by numerical simulation are presented together with data taken from the real process. These results illustrate the efficiency of the proposed solutions.
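The on-line identification step can be illustrated with recursive least squares on a first-order servo model; the compensator then inverts the identified model (u = (y_ref − a·y)/b) as an adaptive feedforward. This is a hedged sketch with invented parameters (the real servo-system and compensator are of higher order and the identification details are not given in the abstract):

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.99):
    """One recursive-least-squares update of the servo-model parameters."""
    K = P @ phi / (lam + phi @ P @ phi)
    theta = theta + K * (y - phi @ theta)
    P = (P - np.outer(K, phi @ P)) / lam
    return theta, P

# Toy first-order servo y[k+1] = a*y[k] + b*u[k] with unknown a, b
a_true, b_true = 0.9, 0.5
rng = np.random.default_rng(3)
u = rng.standard_normal(500)         # persistently exciting input
y = np.zeros(501)
for k in range(500):
    y[k + 1] = a_true * y[k] + b_true * u[k]

theta = np.zeros(2)                  # estimate of [a, b]
P = 1e3 * np.eye(2)
for k in range(500):
    phi = np.array([y[k], u[k]])
    theta, P = rls_step(theta, P, phi, y[k + 1])
```

The forgetting factor `lam < 1` lets the estimate track slow drift in the servo-system, which is exactly what the adaptive compensator needs for a time-variant plant.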

  17. Towards Measuring the Adaptability of an AO4BPEL Process

    OpenAIRE

    Botangen, Khavee Agustus; Yu, Jian; Sheng, Michael

    2017-01-01

    Adaptability is a significant property which enables software systems to continuously provide the required functionality and achieve optimal performance. The recognised importance of adaptability makes its evaluation an essential task. However, the various adaptability dimensions and implementation mechanisms make adaptive strategies difficult to evaluate. In service oriented computing, several frameworks that extend the WS-BPEL, the de facto standard in composing distributed business applica...

  18. Information-educational environment with adaptive control of learning process

    Science.gov (United States)

    Modjaev, A. D.; Leonova, N. M.

    2017-01-01

    In recent years, a new scientific branch connected with management activities in the social sphere, called "Social Cybernetics", has been developing intensively. Within this branch, the theory and methods of social sphere management are formed. Considerable attention is paid to management directly in real time. However, the solution of such management tasks is largely constrained by the absence, or insufficiently deep study, of the relevant sections of management theory and methods. The article discusses the use of cybernetic principles in solving control problems in social systems. A model of composite interrelated objects representing the behaviour of students at various stages of the educational process is introduced and applied to educational activities. Statistical processing of experimental data obtained during the actual learning process is performed. When the number of features used is increased, additionally taking into account the degree and nature of variability of the students' current progress during various types of studies, new grouping properties of the students are discovered. L-clusters were identified, reflecting the behaviour of learners with similar characteristics during lectures. It was established that the characteristics of the clusters contain information about the dynamics of learners' behaviour, allowing them to be used in additional lessons. Ways of solving the adaptive control problem based on the identified dynamic characteristics of the learners are outlined.
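The grouping of learners with similar progress dynamics described in this record is, in essence, a clustering task. A minimal toy k-means sketch on synthetic two-feature "progress" data follows; the features, cluster count, and k-means itself are illustrative assumptions, not the authors' actual L-cluster procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy k-means over synthetic per-lecture "progress" features: a sketch of
# the kind of grouping that yields clusters of similarly-behaving learners.
def kmeans(X, init, iters=20):
    centers = X[init].astype(float)
    for _ in range(iters):
        # assign each learner to the nearest cluster center
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # move each center to the mean of its assigned learners
        centers = np.array([X[labels == j].mean(axis=0) for j in range(len(init))])
    return labels, centers

# two synthetic groups of learners (feature values are assumptions)
X = np.vstack([rng.normal(0.2, 0.05, (20, 2)),   # slow, steady progress
               rng.normal(0.8, 0.05, (20, 2))])  # fast, variable progress
labels, centers = kmeans(X, init=[0, len(X) - 1])
```

With well-separated synthetic groups, the two recovered labels coincide with the two generating groups.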

  19. Sound Is Sound: Film Sound Techniques and Infrasound Data Array Processing

    Science.gov (United States)

    Perttu, A. B.; Williams, R.; Taisne, B.; Tailpied, D.

    2017-12-01

    A multidisciplinary collaboration between earth scientists and a sound designer/composer was established to explore the possibilities of audification analysis of infrasound array data. Through the process of audification of the infrasound we began to experiment with techniques and processes borrowed from cinema to manipulate the noise content of the signal. The results posed the question: "Would the accuracy of infrasound data array processing be enhanced by employing these techniques?" A new area of research was thus born from this collaboration, highlighting the value of such interactions and the unintended paths that can arise from them. Using a reference event database, infrasound data were processed using these new techniques and the results were compared with existing techniques to assess whether there was any improvement in detection capability for the array. With just under one thousand volcanoes, and a high probability of eruption, Southeast Asia offers a unique opportunity to develop and test techniques for regional monitoring of volcanoes with different technologies. While these volcanoes are monitored locally (e.g. seismometer, infrasound, geodetic and geochemistry networks) and remotely (e.g. satellite and infrasound), there are challenges and limitations to the current monitoring capability. Not only is there a high fraction of cloud cover in the region, making plume observation via satellite more difficult; there have also been examples of local monitoring networks and telemetry being destroyed early in an eruptive sequence. The success of local infrasound studies in identifying explosions at volcanoes, and in calculating plume heights from these signals, has led to an interest in retrieving source parameters for the purpose of ash modeling with a regional network independent of cloud cover.

  20. Multilevel processes and cultural adaptation: Examples from past and present small-scale societies

    OpenAIRE

    Reyes-García, V.; Balbo, A. L.; Gomez-Baggethun, E.; Gueze, M.; Mesoudi, A.; Richerson, P.; Rubio-Campillo, X.; Ruiz-Mallén, I.; Shennan, S.

    2016-01-01

    Cultural adaptation has become central in the context of accelerated global change with authors increasingly acknowledging the importance of understanding multilevel processes that operate as adaptation takes place. We explore the importance of multilevel processes in explaining cultural adaptation by describing how processes leading to cultural (mis)adaptation are linked through a complex nested hierarchy, where the lower levels combine into new units with new organizations, functions, and e...

  1. An application of neural network for Structural Health Monitoring of an adaptive wing with an array of FBG sensors

    International Nuclear Information System (INIS)

    Mieloszyk, Magdalena; Skarbek, Lukasz; Ostachowicz, Wieslaw; Krawczuk, Marek

    2011-01-01

    This paper presents an application of neural networks to determine the level of activation of shape memory alloy actuators of an adaptive wing. In this concept the shape of the wing can be controlled and altered thanks to the wing design and the use of integrated shape memory alloy actuators. The wing is assembled from a number of wing sections whose relative positions can be controlled independently by thermal activation of the shape memory actuators. The investigated wing is equipped with an array of Fibre Bragg Grating sensors. The Fibre Bragg Grating sensors, in combination with a neural network, have been used for Structural Health Monitoring of the wing condition. The FBG sensors are a great tool for monitoring the condition of composite structures due to their immunity to electromagnetic fields as well as their small size and weight. They can be mounted onto the surface or embedded into the wing composite material without any significant influence on the wing strength. The paper concentrates on the determination of the twisting moment produced by an activated shape memory alloy actuator. This has been analysed both numerically, using the finite element method in the commercial code ABAQUS®, and experimentally, using Fibre Bragg Grating sensor measurements. The results of the analysis have then been used by a neural network to determine the twisting moments produced by each shape memory alloy actuator.

  2. Estimation, filtering and adaptive control of a wastewater treatment process

    Energy Technology Data Exchange (ETDEWEB)

    Ben Youssef, C; Dahhou, B; Roux, G [Centre National de la Recherche Scientifique (CNRS), 31 - Toulouse (France); Rols, J L [Institut National des Sciences Appliquees (INSA), 31 - Toulouse (France)

    1996-12-31

    Controlling a fixed-bed bioreactor implies solving filtering and adaptive control problems. Estimation procedures have been developed for unmeasurable parameters. An adaptive nonlinear controller has been built, instead of the conventional approach of linearizing the system and applying linear control. (D.L.) 10 refs.

  3. Adaptive capacity and human cognition: the process of individual adaptation to climate change

    Energy Technology Data Exchange (ETDEWEB)

    Grothmann, T. [Potsdam Institute for Climate Impact Research, Potsdam (Germany). Department of Global Change and Social Systems; Patt, A. [Boston University (United States). Department of Geography

    2005-10-01

    Adaptation has emerged as an important area of research and assessment among climate change scientists. Most scholarly work has identified resource constraints as being the most significant determinants of adaptation. However, empirical research on adaptation has so far mostly not addressed the importance of measurable and alterable psychological factors in determining adaptation. Drawing from the literature in psychology and behavioural economics, we develop a socio-cognitive Model of Private Proactive Adaptation to Climate Change (MPPACC). MPPACC separates out the psychological steps to taking action in response to perception, and allows one to see where the most important bottlenecks occur - including risk perception and perceived adaptive capacity, a factor largely neglected in previous climate change research. We then examine two case studies - one from urban Germany and one from rural Zimbabwe - to explore the validity of MPPACC to explaining adaptation. In the German study, we find that MPPACC provides better statistical power than traditional socio-economic models. In the Zimbabwean case study, we find a qualitative match between MPPACC and adaptive behaviour. Finally, we discuss the important implications of our findings both on vulnerability and adaptation assessments, and on efforts to promote adaptation through outside intervention. (author)

  4. Automatic Defect Detection for TFT-LCD Array Process Using Quasiconformal Kernel Support Vector Data Description

    Directory of Open Access Journals (Sweden)

    Yi-Hung Liu

    2011-09-01

    Full Text Available Defect detection has been considered an efficient way to increase the yield rate of panels in thin film transistor liquid crystal display (TFT-LCD) manufacturing. In this study we focus on the array process since it is the first and key process in TFT-LCD manufacturing. Various defects occur in the array process, and some of them can cause great damage to the LCD panels. Thus, how to design a method that can robustly detect defects from images captured from the surface of LCD panels has become crucial. Previously, support vector data description (SVDD) has been successfully applied to LCD defect detection. However, its generalization performance is limited. In this paper, we propose a novel one-class machine learning method, called quasiconformal kernel SVDD (QK-SVDD), to address this issue. The QK-SVDD can significantly improve the generalization performance of the traditional SVDD by introducing the quasiconformal transformation into a predefined kernel. Experimental results, carried out on real LCD images provided by an LCD manufacturer in Taiwan, indicate that the proposed QK-SVDD not only obtains a high defect detection rate of 96%, but also greatly improves the generalization performance of SVDD. The improvement is shown to be over 30%. In addition, results also show that the QK-SVDD defect detector is able to accomplish defect detection on an LCD image within 60 ms.
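The core of the quasiconformal kernel idea is the transform k̃(x, y) = c(x)·c(y)·k(x, y) with a positive factor c(·). A minimal numpy sketch follows; the RBF base kernel and the particular choice of c here are assumptions for illustration, whereas the paper derives c from the data to reshape resolution near the SVDD boundary.

```python
import numpy as np

def rbf_gram(X, gamma=1.0):
    """Gram matrix of a Gaussian (RBF) base kernel."""
    d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def quasiconformal(K, c):
    """Quasiconformal transform: K~[i, j] = c[i] * c[j] * K[i, j]."""
    return np.outer(c, c) * K

X = np.random.default_rng(1).normal(size=(6, 2))
K = rbf_gram(X)
c = 1.0 / (1.0 + (X ** 2).sum(axis=1))   # an illustrative positive factor
Kt = quasiconformal(K, c)

# the transform is a congruence K~ = D K D with D = diag(c), so positive
# semidefiniteness of the kernel matrix is preserved
print(np.linalg.eigvalsh(Kt).min() > -1e-10)   # -> True
```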

  5. Advanced ACTPol Multichroic Polarimeter Array Fabrication Process for 150 mm Wafers

    Science.gov (United States)

    Duff, S. M.; Austermann, J.; Beall, J. A.; Becker, D.; Datta, R.; Gallardo, P. A.; Henderson, S. W.; Hilton, G. C.; Ho, S. P.; Hubmayr, J.; Koopman, B. J.; Li, D.; McMahon, J.; Nati, F.; Niemack, M. D.; Pappas, C. G.; Salatino, M.; Schmitt, B. L.; Simon, S. M.; Staggs, S. T.; Stevens, J. R.; Van Lanen, J.; Vavagiakis, E. M.; Ward, J. T.; Wollack, E. J.

    2016-08-01

    Advanced ACTPol (AdvACT) is a third-generation cosmic microwave background receiver to be deployed in 2016 on the Atacama Cosmology Telescope (ACT). Spanning five frequency bands from 25 to 280 GHz and having just over 5600 transition-edge sensor (TES) bolometers, this receiver will exhibit increased sensitivity and mapping speed compared to previously fielded ACT instruments. This paper presents the fabrication processes developed by NIST to scale to large arrays of feedhorn-coupled multichroic AlMn-based TES polarimeters on 150-mm diameter wafers. In addition to describing the streamlined fabrication process which enables high yields of densely packed detectors across larger wafers, we report the details of process improvements for sensor (AlMn) and insulator (SiN_x) materials and microwave structures, and the resulting performance improvements.

  6. Fabrication of CoZn alloy nanowire arrays: Significant improvement in magnetic properties by annealing process

    International Nuclear Information System (INIS)

    Koohbor, M.; Soltanian, S.; Najafi, M.; Servati, P.

    2012-01-01

    Highlights: ► Increasing the Zn concentration changes the structure of NWs from hcp to amorphous. ► Increasing the Zn concentration significantly reduces the Hc value of NWs. ► Magnetic properties of CoZn NWs can be significantly enhanced by appropriate annealing. ► The pH of electrolyte has no significant effect on the properties of the NW arrays. ► Deposition frequency has considerable effects on the magnetic properties of NWs. - Abstract: Highly ordered arrays of Co₁₋ₓZnₓ (0 ≤ x ≤ 0.74) nanowires (NWs) with diameters of ∼35 nm and high length-to-diameter ratios (up to 150) were fabricated by co-electrodeposition of Co and Zn into pores of anodized aluminum oxide (AAO) templates. The Co and Zn contents of the NWs were adjusted by varying the ratio of Zn and Co ion concentrations in the electrolyte. The effect of the Zn content, electrodeposition conditions (frequency and pH) and annealing on the structural and magnetic properties (e.g., coercivity (Hc) and squareness (Sq)) of NW arrays was investigated using X-ray diffraction (XRD), scanning electron microscopy, electron diffraction, and alternating gradient force magnetometer (AGFM). XRD patterns reveal that an increase in the concentration of Zn ions of the electrolyte forces the hcp crystal structure of Co NWs to change into an amorphous phase, resulting in a significant reduction in Hc. It was found that the magnetic properties of NWs can be significantly improved by an appropriate annealing process. The highest values for Hc (2050 Oe) and Sq (0.98) were obtained for NWs electrodeposited using 0.95/0.05 Co:Zn concentrations at 200 Hz and annealed at 575 °C. While the pH of the electrolyte is found to have no significant effect on the structural and magnetic properties of the NW arrays, the electrodeposition frequency has considerable effects on the magnetic properties of the NW arrays. The changes in magnetic property of NWs are rooted in a competition between shape anisotropy and

  7. Fabrication of CoZn alloy nanowire arrays: Significant improvement in magnetic properties by annealing process

    Energy Technology Data Exchange (ETDEWEB)

    Koohbor, M. [Department of Physics, University of Kurdistan, Sanandaj (Iran, Islamic Republic of); Soltanian, S., E-mail: s.soltanian@gmail.com [Department of Physics, University of Kurdistan, Sanandaj (Iran, Islamic Republic of); Department of Electrical and Computer Engineering, University of British Columbia, Vancouver (Canada); Najafi, M. [Department of Physics, University of Kurdistan, Sanandaj (Iran, Islamic Republic of); Department of Physics, Hamadan University of Technology, Hamadan (Iran, Islamic Republic of); Servati, P. [Department of Electrical and Computer Engineering, University of British Columbia, Vancouver (Canada)

    2012-01-05

    Highlights: ► Increasing the Zn concentration changes the structure of NWs from hcp to amorphous. ► Increasing the Zn concentration significantly reduces the Hc value of NWs. ► Magnetic properties of CoZn NWs can be significantly enhanced by appropriate annealing. ► The pH of electrolyte has no significant effect on the properties of the NW arrays. ► Deposition frequency has considerable effects on the magnetic properties of NWs. - Abstract: Highly ordered arrays of Co₁₋ₓZnₓ (0 ≤ x ≤ 0.74) nanowires (NWs) with diameters of ~35 nm and high length-to-diameter ratios (up to 150) were fabricated by co-electrodeposition of Co and Zn into pores of anodized aluminum oxide (AAO) templates. The Co and Zn contents of the NWs were adjusted by varying the ratio of Zn and Co ion concentrations in the electrolyte. The effect of the Zn content, electrodeposition conditions (frequency and pH) and annealing on the structural and magnetic properties (e.g., coercivity (Hc) and squareness (Sq)) of NW arrays was investigated using X-ray diffraction (XRD), scanning electron microscopy, electron diffraction, and alternating gradient force magnetometer (AGFM). XRD patterns reveal that an increase in the concentration of Zn ions of the electrolyte forces the hcp crystal structure of Co NWs to change into an amorphous phase, resulting in a significant reduction in Hc. It was found that the magnetic properties of NWs can be significantly improved by an appropriate annealing process. The highest values for Hc (2050 Oe) and Sq (0.98) were obtained for NWs electrodeposited using 0.95/0.05 Co:Zn concentrations at 200 Hz and annealed at 575 °C. While the pH of the electrolyte is found to have no significant effect on the structural and magnetic properties of the NW arrays, the electrodeposition frequency has considerable effects on

  8. High-throughput fabrication of micrometer-sized compound parabolic mirror arrays by using parallel laser direct-write processing

    International Nuclear Information System (INIS)

    Yan, Wensheng; Gu, Min; Cumming, Benjamin P

    2015-01-01

    Micrometer-sized parabolic mirror arrays have significant applications in both light emitting diodes and solar cells. However, low fabrication throughput has been identified as a major obstacle to large-scale application of the mirror arrays, due to the serial nature of the conventional method. Here, the mirror arrays are fabricated using parallel laser direct-write processing, which addresses this barrier. In addition, it is demonstrated that parallel writing is able to fabricate complex arrays as well as simple ones, and thus offers wider applications. Optical measurements show that each single mirror confines the full-width at half-maximum value to as small as 17.8 μm at a height of 150 μm whilst providing a transmittance of up to 68.3% at a wavelength of 633 nm, in good agreement with the calculated values. (paper)

  9. Improving Accuracy of Processing by Adaptive Control Techniques

    Directory of Open Access Journals (Sweden)

    N. N. Barbashov

    2016-01-01

    Full Text Available When machining work-pieces, the range of scatter of the work-piece dimensions is displaced towards the tolerance limit in response to the errors. To improve the accuracy of machining and prevent defective products it is necessary to diminish the machining error components, i.e. to improve the accuracy of the machine tool, the tool life, the rigidity of the system, and the accuracy of adjustment. It is also necessary to provide on-machine adjustment after a certain time. However, an increasing number of readjustments reduces performance, and high machine and tool requirements lead to a significant increase in the machining cost. To improve accuracy and machining rate, various devices of active control (in-process gaging devices), as well as controlled machining through adaptive systems for technological process control, are now widely used. The accuracy improvement in this case is reached by compensation of the majority of technological errors. The sensors of active control can provide an accuracy improvement of one or two quality classes, and simultaneous operation of several machines. For efficient use of sensors of active control it is necessary to develop accuracy control methods that introduce the appropriate adjustments. Methods based on the moving average appear to be the most promising for accuracy control, since they contain information on the change in the last several measured values of the parameter under control. In the proposed method the first three members of the sequence of deviations remain unchanged, therefore x′₁ = x₁, x′₂ = x₂, x′₃ = x₃. Then each subsequent i-th member of the sequence is corrected as x′ᵢ = xᵢ − k·x̄ᵢ, where x̄ᵢ is the average of the three previous corrected members: x̄ᵢ = (x′ᵢ₋₁ + x′ᵢ₋₂ + x′ᵢ₋₃)/3. As a criterion for the estimate of the control
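A minimal sketch of a moving-average-based adjustment of this kind, assuming the first three deviations pass through unchanged and each later value is corrected by a gain k times the mean of the three previous corrected values; the gain value and the sign convention are illustrative assumptions.

```python
# Moving-average adjustment sketch: the first three deviations pass through
# unchanged; each later deviation is corrected by a gain k times the mean
# of the three previous corrected values. Gain value and sign convention
# here are illustrative assumptions.
def adjust(x, k=0.5):
    out = list(x[:3])                    # first three members unchanged
    for i in range(3, len(x)):
        avg = sum(out[i - 3:i]) / 3.0    # mean of three previous corrected values
        out.append(x[i] - k * avg)
    return out

print(adjust([1.0, 1.0, 1.0, 1.0, 1.0]))
```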

  10. Simpler Adaptive Optics using a Single Device for Processing and Control

    Science.gov (United States)

    Zovaro, A.; Bennet, F.; Rye, D.; D'Orgeville, C.; Rigaut, F.; Price, I.; Ritchie, I.; Smith, C.

    As satellite and debris densities climb, the management of low Earth orbit is becoming more urgent in order to avoid a Kessler syndrome. A key part of this management is to precisely measure the orbits of both active satellites and debris. The Research School of Astronomy and Astrophysics at the Australian National University has been developing an adaptive optics (AO) system to image and range orbiting objects. The AO system provides atmospheric correction for imaging and laser ranging, allowing for the detection of smaller angular targets and drastically increasing the number of detectable objects. AO systems are by nature very complex and high-cost systems, often costing millions of dollars and taking years to design. It is not unusual for AO systems to comprise multiple servers, digital signal processors (DSPs) and field programmable gate arrays (FPGAs), with dedicated tasks such as wavefront sensor data processing or wavefront reconstruction. While this multi-platform approach has been necessary in AO systems to date due to computation and latency requirements, this may no longer be the case for those with less demanding processing needs. In recent years, large strides have been made in FPGA and microcontroller technology, with today's devices having clock speeds in excess of 200 MHz, capable of running an AO control loop at high rates (>1 kHz) with low latency. A single device handling both processing and control may therefore suffice for general AO applications, such as in 1-3 m telescopes for space surveillance, or even for amateur astronomy.

  11. Fabricating process of hollow out-of-plane Ni microneedle arrays and properties of the integrated microfluidic device

    Science.gov (United States)

    Zhu, Jun; Cao, Ying; Wang, Hong; Li, Yigui; Chen, Xiang; Chen, Di

    2013-07-01

    Although microfluidic devices that integrate microfluidic chips with hollow out-of-plane microneedle arrays have many advantages in transdermal drug delivery applications, difficulties exist in their fabrication due to the special three-dimensional structures of hollow out-of-plane microneedles. A new, cost-effective process for the fabrication of a hollow out-of-plane Ni microneedle array is presented. The integration of PDMS microchips with the Ni hollow microneedle array and the properties of microfluidic devices are also presented. The integrated microfluidic devices provide a new approach for transdermal drug delivery.

  12. Teaching foreign language during adaptation process to European Union

    Directory of Open Access Journals (Sweden)

    Hidayet TOK

    2008-06-01

    Full Text Available This study aims to give information about language teaching in the EU and in Turkey, which has started negotiations for full EU membership. Language teaching in the EU countries and in Turkey is studied comparatively with respect to several variables. These variables are: the age at which pupils are first taught foreign languages as a compulsory subject; the number of languages taught during compulsory education; the use of "CLIL"-type provision in education; the percentage distribution of all pupils according to the number of foreign languages learnt in primary and secondary education; the percentage of all pupils in primary and secondary education who are learning English, German and/or French; the relative priority given to the aims associated with the four major skills in curricula for compulsory foreign languages in full-time compulsory education; the minimum number of hours recommended for teaching foreign languages as a compulsory subject during a notional year in primary and secondary education; the minimum number of hours recommended for teaching the first foreign language as a compulsory subject in a notional year in full-time compulsory general education, and the number of years spent teaching it; the proportion of the minimum total time prescribed for the teaching of foreign languages as a compulsory subject, as a percentage of total teaching time in primary and full-time compulsory general secondary education; and regulations or recommendations regarding maximum class sizes for foreign languages in primary education. The study concludes with regulations and recommendations about foreign language teaching that are foreseen to be implemented during the EU adaptation process.

  13. Full image-processing pipeline in field-programmable gate array for a small endoscopic camera

    Science.gov (United States)

    Mostafa, Sheikh Shanawaz; Sousa, L. Natércia; Ferreira, Nuno Fábio; Sousa, Ricardo M.; Santos, Joao; Wäny, Martin; Morgado-Dias, F.

    2017-01-01

    Endoscopy is an imaging procedure used for diagnosis as well as for some surgical purposes. The camera used for endoscopy should be small and able to produce a good quality image or video, to reduce discomfort of the patients and to increase the efficiency of the medical team. To achieve these fundamental goals, a small endoscopy camera with a footprint of 1 mm×1 mm×1.65 mm is used. Due to the physical properties of the sensors and the limitations of the human vision system, different image-processing algorithms, such as noise reduction, demosaicking, and gamma correction, among others, are needed to faithfully reproduce the image or video. A full image-processing pipeline is implemented using a field-programmable gate array (FPGA) to accomplish a high frame rate of 60 fps with minimum processing delay. Along with this, a viewer has also been developed to display and control the image-processing pipeline. The control and data transfer are done by a USB 3.0 endpoint in the computer. The full developed system achieves real-time processing of the image and fits in a Xilinx Spartan-6LX150 FPGA.
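One of the pipeline stages named in this record, gamma correction, is typically realised in an FPGA as a per-pixel lookup into a precomputed table (one ROM read per pixel, no per-pixel arithmetic). A small numpy sketch of that table-based approach; the gamma value and bit depth are assumptions, not the paper's exact implementation.

```python
import numpy as np

# Gamma correction via a precomputed lookup table, as typically done in
# FPGA image pipelines: the per-pixel work reduces to one table lookup.
def gamma_lut(gamma=2.2, bits=8):
    levels = 2 ** bits
    x = np.arange(levels) / (levels - 1)
    return np.round((x ** (1.0 / gamma)) * (levels - 1)).astype(np.uint8)

lut = gamma_lut()
raw = np.array([[0, 64], [128, 255]], dtype=np.uint8)   # tiny test "image"
corrected = lut[raw]                                    # pure table lookup
print(corrected)
```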

  14. Recommendations for elaboration, transcultural adaptation and validation process of tests in Speech, Hearing and Language Pathology.

    Science.gov (United States)

    Pernambuco, Leandro; Espelt, Albert; Magalhães, Hipólito Virgílio; Lima, Kenio Costa de

    2017-06-08

    To present a guide with recommendations for the translation, adaptation, elaboration and validation of tests in Speech and Language Pathology. The recommendations were based on international guidelines with a focus on the elaboration, translation, cross-cultural adaptation and validation of tests. The recommendations were grouped into two charts, one with procedures for translation and transcultural adaptation and the other for obtaining evidence of validity, reliability and measures of accuracy of the tests. A guide with norms for the organization and systematization of the process of elaboration, translation, cross-cultural adaptation and validation of tests in Speech and Language Pathology was created.

  15. Compressive sensing-based electrostatic sensor array signal processing and exhausted abnormal debris detecting

    Science.gov (United States)

    Tang, Xin; Chen, Zhongsheng; Li, Yue; Yang, Yongmin

    2018-05-01

    When faults happen at gas path components of gas turbines, some sparsely-distributed, charged debris is generated and released into the exhaust gas. This debris is called abnormal debris. Electrostatic sensors can detect the debris online and thereby indicate the faults. It is generally considered that, under a specific working condition, a more serious fault generates more and larger debris, and a larger piece of debris carries more charge. Therefore, the amount and charge of the abnormal debris are important indicators of fault severity. However, because an electrostatic sensor can only detect the superposed effect of all the debris on the electrostatic field, it can hardly identify the amount and position of the debris. Moreover, because the signals of electrostatic sensors depend not only on the charge but also on the position of the debris, and the position information is difficult to acquire, measuring debris charge accurately with the electrostatic detecting method remains a technical difficulty. To solve these problems, a hemisphere-shaped electrostatic sensor circular array (HSESCA) is used, and an array signal processing method based on compressive sensing (CS) is proposed in this paper. To work within the theoretical framework of CS, the measurement model of the HSESCA is discretized into a sparse representation form by meshing. In this way, the amount and charge of the abnormal debris are described as a sparse vector. This vector is further reconstructed by constraining its l1-norm when solving an underdetermined equation. In addition, a pre-processing method based on singular value decomposition and a result calibration method based on a weighted-centroid algorithm are applied to ensure the accuracy of the reconstruction. The proposed method is validated by both numerical simulations and experiments. Reconstruction errors, characteristics of the results and some related factors are discussed.
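The reconstruction step described in this record can be sketched as a toy l1-constrained recovery: a sparse charge vector q is recovered from underdetermined readings y = A·q. Here iterative soft-thresholding (ISTA), a standard l1 solver, stands in for the paper's method, and the random matrix A stands in for the discretised (meshed) sensor model; sizes, sparsity, and solver choice are all illustrative assumptions.

```python
import numpy as np

# Toy compressive-sensing recovery: two charged debris pieces (a 2-sparse
# charge vector over 32 mesh cells) reconstructed from 12 sensor readings.
rng = np.random.default_rng(0)
m, n = 12, 32
A = rng.normal(size=(m, n)) / np.sqrt(m)   # stand-in measurement model
q_true = np.zeros(n)
q_true[[3, 17]] = [1.0, -0.5]              # two pieces of charged debris
y = A @ q_true

def ista(A, y, lam=0.005, iters=1000):
    """Iterative soft-thresholding for min ||A q - y||^2 + lam * ||q||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    q = np.zeros(A.shape[1])
    for _ in range(iters):
        g = q - (A.T @ (A @ q - y)) / L    # gradient step
        q = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # shrink
    return q

q_hat = ista(A, y)
print(np.argsort(np.abs(q_hat))[-2:])      # indices of the two largest entries
```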

  16. Remote online process measurements by a fiber optic diode array spectrometer

    International Nuclear Information System (INIS)

    Van Hare, D.R.; Prather, W.S.; O'Rourke, P.E.

    1986-01-01

    The development of remote online monitors for radioactive process streams is an active research area at the Savannah River Laboratory (SRL). A remote offline spectrophotometric measurement system has been developed and used at the Savannah River Plant (SRP) for the past year to determine the plutonium concentration of process solution samples. The system consists of a commercial diode array spectrophotometer modified with fiber optic cables that allow the instrument to be located remotely from the measurement cell. Recently, a fiber optic multiplexer has been developed for this instrument, which allows online monitoring of five locations sequentially. The multiplexer uses a motorized micrometer to drive one of five sets of optical fibers into the optical path of the instrument. A sixth optical fiber is used as an external reference and eliminates the need to flush out process lines to re-reference the spectrophotometer. The fiber optic multiplexer has been installed in a process prototype facility to monitor uranium loading and breakthrough of ion exchange columns. The design of the fiber optic multiplexer is discussed and data from the prototype facility are presented to demonstrate the capabilities of the measurement system

  17. Time in Redox Adaptation Processes: From Evolution to Hormesis

    Directory of Open Access Journals (Sweden)

    Mireille M. J. P. E. Sthijns

    2016-09-01

    Full Text Available Life on Earth has to adapt to an ever-changing environment. For example, due to the introduction of oxygen into the atmosphere, an antioxidant network evolved to cope with exposure to oxygen. The adaptive mechanisms of the antioxidant network, specifically the glutathione (GSH) system, are reviewed with a special focus on time. The quickest adaptive response to oxidative stress is direct enzyme modification, increasing GSH levels or activating the GSH-dependent protective enzymes. After several hours, a hormetic response is seen at the transcriptional level by up-regulating Nrf2-mediated expression of enzymes involved in GSH synthesis. In the long run, adaptations occur at the epigenetic and genomic level; for example, the ability of phototrophic bacteria to synthesize GSH. Apparently, in an adaptive hormetic response not only the dose or the compound, but also time should be considered. This is essential for targeted interventions aimed at preventing diseases by successfully coping with changes in the environment, e.g., oxidative stress.

  18. Country, climate change adaptation and colonisation: insights from an Indigenous adaptation planning process, Australia.

    Science.gov (United States)

    Nursey-Bray, Melissa; Palmer, Robert

    2018-03-01

    Indigenous peoples are going to be disproportionately affected by climate change. Developing tailored, place-based, and culturally appropriate solutions will be necessary. Yet finding cultural and institutional 'fit' within and between competing values-based climate and environmental management governance regimes remains an ongoing challenge. This paper reports on a collaborative research project with the Arabana people of central Australia that resulted in the production of the first Indigenous community-based climate change adaptation strategy in Australia. We aimed to understand what conditions are needed to support Indigenous-driven adaptation initiatives, whether there are any cultural differences that need accounting for, and how, once developed, such initiatives can be integrated into existing governance arrangements. Our analysis found that climate change adaptation is based on the centrality of the connection to 'country' (traditional land), needs to be aligned with cultural values, and should focus on the building of adaptive capacity. We find that the development of climate change adaptation initiatives cannot be divorced from the historical context of how the Arabana experienced and collectively remember colonisation. We argue that in developing culturally responsive climate governance for and with Indigenous peoples, the history of colonisation and the ongoing dominance of entrenched Western governance regimes need acknowledging and redressing in contemporary environmental and climate management.

  19. Adaptive frequency-difference matched field processing for high frequency source localization in a noisy shallow ocean.

    Science.gov (United States)

    Worthmann, Brian M; Song, H C; Dowling, David R

    2017-01-01

    Remote source localization in the shallow ocean at frequencies significantly above 1 kHz is virtually impossible for conventional array signal processing techniques due to environmental mismatch. A recently proposed technique called frequency-difference matched field processing (Δf-MFP) [Worthmann, Song, and Dowling (2015). J. Acoust. Soc. Am. 138(6), 3549-3562] overcomes imperfect environmental knowledge by shifting the signal processing to frequencies below the signal's band through the use of a quadratic product of frequency-domain signal amplitudes called the autoproduct. This paper extends these prior Δf-MFP results to various adaptive MFP processors found in the literature, with particular emphasis on minimum variance distortionless response, multiple constraint method, multiple signal classification, and matched mode processing at signal-to-noise ratios (SNRs) from -20 to +20 dB. Using measurements from the 2011 Kauai Acoustic Communications Multiple University Research Initiative experiment, the localization performance of these techniques is analyzed and compared to Bartlett Δf-MFP. The results show that a source broadcasting a frequency sweep from 11.2 to 26.2 kHz through a 106-m-deep sound channel over a distance of 3 km and recorded on a 16-element sparse vertical array can be localized using Δf-MFP techniques within average range and depth errors of 200 and 10 m, respectively, at SNRs down to 0 dB.
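The frequency-difference autoproduct at the core of Δf-MFP is simply a quadratic product of two frequency-domain field samples; its phase behaves like that of a field at the (much lower) difference frequency. A minimal sketch, assuming a single-path field with an invented 2 ms travel time (none of these numbers come from the experiment above):

```python
import cmath

def autoproduct(P, k_hi, k_lo):
    """Frequency-difference autoproduct: P(f2) * conj(P(f1))."""
    return P[k_hi] * P[k_lo].conjugate()

# Synthetic single-path field: P(f) = exp(-i 2*pi*f*tau), tau = travel time.
tau = 2.0e-3                                         # 2 ms delay (assumed)
freqs = [11200.0 + 100.0 * k for k in range(151)]    # 11.2-26.2 kHz band
P = [cmath.exp(-1j * 2 * cmath.pi * f * tau) for f in freqs]

# An autoproduct at a 500 Hz difference frequency mimics a field at 500 Hz,
# far below the signal band, where environmental mismatch matters less.
df = 500.0
ap = autoproduct(P, 5, 0)                # freqs[5] - freqs[0] = 500 Hz
expected = cmath.exp(-1j * 2 * cmath.pi * df * tau)
assert abs(ap - expected) < 1e-9
```

Because the autoproduct phase depends only on the difference frequency, matched field processing can then be carried out at 500 Hz even though the signal occupies 11.2-26.2 kHz.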

  20. Continuous catchment-scale monitoring of geomorphic processes with a 2-D seismological array

    Science.gov (United States)

    Burtin, A.; Hovius, N.; Milodowski, D.; Chen, Y.-G.; Wu, Y.-M.; Lin, C.-W.; Chen, H.

    2012-04-01

    The monitoring of geomorphic processes during extreme climatic events is of primary interest for estimating their impact on landscape dynamics. However, available techniques for surveying surface activity do not provide adequate time and/or space resolution, and they hardly capture the dynamics of the events, since detection is made a posteriori. To increase our knowledge of landscape evolution and of the influence of extreme climatic events on catchment dynamics, we need to develop new tools and procedures. Many past works have shown that seismic signals are relevant for detecting and locating surface processes (landslides, debris flows). During the 2010 typhoon season, we deployed a network of 12 seismometers dedicated to monitoring the surface processes of the Chenyoulan catchment in Taiwan. We test the ability of a two-dimensional array with small inter-station distances (~11 km) to map the geomorphic activity continuously and at catchment scale. The spectral analysis of continuous records shows high-frequency (> 1 Hz) seismic energy that is coherent with the occurrence of hillslope and river processes. Using a basic detection algorithm and a location approach based on the analysis of seismic amplitudes, we manage to locate the catchment activity. We mainly observe short-duration events (> 300 occurrences) associated with debris falls and bank collapses during daily convective storms, with 69% of occurrences coherent with the time distribution of precipitation. We also identify a couple of debris flows during a large tropical storm. In contrast, the FORMOSAT imagery does not detect any activity, which partly reflects the lack of extreme climatic conditions during the experiment. However, high-resolution pictures confirm links between most geomorphic events and existing structures (landslide scars, gullies...). We thus conclude that the activity is dominated by reactivation processes.
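The "basic detection algorithm" run on seismic amplitudes is not specified in the record; a common amplitude-based trigger for this kind of catchment monitoring is the STA/LTA (short-term average over long-term average) ratio, sketched here on synthetic data. The window lengths, threshold, and trace are illustrative assumptions:

```python
def sta_lta(trace, nsta, nlta):
    """Short-term / long-term average ratio on a rectified trace."""
    env = [abs(x) for x in trace]
    ratios = [0.0] * len(trace)
    for i in range(nlta, len(trace)):
        sta = sum(env[i - nsta:i]) / nsta      # recent energy
        lta = sum(env[i - nlta:i]) / nlta      # background energy
        ratios[i] = sta / lta if lta > 0 else 0.0
    return ratios

def detect(ratios, threshold=3.0):
    """Indices where the STA/LTA ratio exceeds the trigger threshold."""
    return [i for i, r in enumerate(ratios) if r >= threshold]

# Quiet background with a burst of ground motion starting at sample 300.
trace = [0.1] * 400
for i in range(300, 320):
    trace[i] = 2.0

picks = detect(sta_lta(trace, nsta=5, nlta=100))
assert picks and 300 <= picks[0] <= 305
```

In practice the same trigger, run across several stations, lets the relative seismic amplitudes at each station be used to locate the source, as described above.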

  1. X-ray imager using solution processed organic transistor arrays and bulk heterojunction photodiodes on thin, flexible plastic substrate

    NARCIS (Netherlands)

    Gelinck, G.H.; Kumar, A.; Moet, D.; Steen, J.L. van der; Shafique, U.; Malinowski, P.E.; Myny, K.; Rand, B.P.; Simon, M.; Rütten, W.; Douglas, A.; Jorritsma, J.; Heremans, P.L.; Andriessen, H.A.J.M.

    2013-01-01

    We describe the fabrication and characterization of large-area active-matrix X-ray/photodetector array of high quality using organic photodiodes and organic transistors. All layers with the exception of the electrodes are solution processed. Because it is processed on a very thin plastic substrate

  2. Entrepreneural adaptation processes. An industry-geographic working model, illustrated by the example of Saarbergwerke AG

    International Nuclear Information System (INIS)

    Doerrenbaecher, P.

    1992-01-01

    The study has two goals: approaches based on industrial geography and chronogeography are to be synthesized in order to develop a model of entrepreneurial adaptation processes. On the basis of this model, the development of Saarbergwerke AG in the first phase of the coal crisis (1957-1962) is reconstructed as an entrepreneurial adaptation process. (orig.) [de

  3. Adaptation as process: the future of Darwinism and the legacy of Theodosius Dobzhansky.

    Science.gov (United States)

    Depew, David J

    2011-03-01

    Conceptions of adaptation have varied in the history of genetic Darwinism depending on whether what is taken to be focal is the process of adaptation, adapted states of populations, or discrete adaptations in individual organisms. I argue that Theodosius Dobzhansky's view of adaptation as a dynamical process contrasts with so-called "adaptationist" views of natural selection figured as "design-without-a-designer" of relatively discrete, enumerable adaptations. Correlated with these respectively process- and product-oriented approaches to adaptive natural selection are divergent pictures of organisms themselves as developmental wholes or as "bundles" of adaptations. While even process versions of genetical Darwinism are insufficiently sensitive to the fact that much of the variation on which adaptive selection works consists of changes in the timing, rate, or location of ontogenetic events, I argue that articulations of the Modern Synthesis influenced by Dobzhansky are more easily reconciled with the recent shift to evolutionary developmentalism than are versions that make discrete adaptations central. Copyright © 2010 Elsevier Ltd. All rights reserved.

  4. Magnetic alloy nanowire arrays with different lengths: Insights into the crossover angle of magnetization reversal process

    Energy Technology Data Exchange (ETDEWEB)

    Samanifar, S.; Alikhani, M. [Department of Physics, University of Kashan, Kashan 87317-51167 (Iran, Islamic Republic of); Almasi Kashi, M., E-mail: almac@kashanu.ac.ir [Department of Physics, University of Kashan, Kashan 87317-51167 (Iran, Islamic Republic of); Institute of Nanoscience and Nanotechnology, University of Kashan, Kashan 87317-51167 (Iran, Islamic Republic of); Ramazani, A. [Department of Physics, University of Kashan, Kashan 87317-51167 (Iran, Islamic Republic of); Institute of Nanoscience and Nanotechnology, University of Kashan, Kashan 87317-51167 (Iran, Islamic Republic of); Montazer, A.H. [Institute of Nanoscience and Nanotechnology, University of Kashan, Kashan 87317-51167 (Iran, Islamic Republic of)

    2017-05-15

    Nanoscale magnetic alloy wires are being actively investigated, providing fundamental insights into tuning properties in magnetic data storage and processing technologies. However, previous studies give only limited information about the crossover angle of the magnetization reversal process in alloy nanowires (NWs). Here, magnetic alloy NW arrays with different compositions, composed of Fe, Co and Ni, have been electrochemically deposited into hard-anodic aluminum oxide templates with a pore diameter of approximately 150 nm. Under optimized conditions of alumina barrier layer and deposition bath concentrations, the resulting alloy NWs with aspect ratio and saturation magnetization (M_s) up to 550 and 1900 emu cm^-3, respectively, are systematically investigated in terms of composition, crystalline structure and magnetic properties. Using the angular dependence of coercivity extracted from hysteresis loops, the reversal processes are evaluated, indicating non-monotonic behavior. The crossover angle (θ_c) is found to depend on NW length and M_s. At a constant M_s, increasing NW length decreases θ_c, thereby decreasing the involvement of the vortex mode during the magnetization reversal process. On the other hand, decreasing M_s decreases θ_c in large-aspect-ratio (>300) alloy NWs. Phenomenologically, it is newly found that increasing the Ni content in the composition decreases θ_c. Angular first-order reversal curve (AFORC) measurements, including the irreversibility of magnetization, are also investigated to gain a more detailed insight into θ_c. - Highlights: • Magnetic alloy NWs with aspect ratios up to 550 were fabricated into hard-AAO templates. • Morphology, composition, crystal structure and magnetic properties were investigated. • Angular dependence of coercivity was used to describe the magnetization reversal process. • The crossover angle of magnetization reversal was found to depend on NW length and M_s.

  5. Process Development of Gallium Nitride Phosphide Core-Shell Nanowire Array Solar Cell

    Science.gov (United States)

    Chuang, Chen

    Dilute nitride GaNP is a promising material for opto-electronic applications due to its band gap tunability. The efficiency of a GaNxP1-x/GaNyP1-y core-shell nanowire solar cell (NWSC) is expected to reach as high as 44% with 1% N in the core and 9% N in the shell. By developing such high-efficiency NWSCs on silicon substrates, the cost of solar photovoltaics could be reduced to 61 $/MWh, which is competitive with the levelized cost of electricity (LCOE) of fossil fuels. A suitable NWSC structure and fabrication process therefore need to be developed to achieve this promising NWSC. This thesis is devoted to the development of the fabrication process of GaNxP1-x/GaNyP1-y core-shell nanowire solar cells. The thesis is divided into two major parts. In the first part, previously grown GaP/GaNyP1-y core-shell nanowire samples are used to develop the fabrication process of the gallium nitride phosphide nanowire solar cell. The design of the nanowire arrays, passivation layer, polymeric filler spacer, transparent collecting layer and metal contacts is discussed, and the devices are fabricated. The properties of these NWSCs are also characterized to point out future development directions for gallium nitride phosphide NWSCs. In the second part, a nano-hole template made by nanosphere lithography is studied for selective-area growth of nanowires to improve the structure of the core-shell NWSC. The fabrication process of the nano-hole templates and the results are presented. To obtain consistent features of the nano-hole template, the Taguchi method is used to optimize the fabrication process of the nano-hole templates.

  6. The Earthscope USArray Array Network Facility (ANF): Evolution of Data Acquisition, Processing, and Storage Systems

    Science.gov (United States)

    Davis, G. A.; Battistuz, B.; Foley, S.; Vernon, F. L.; Eakins, J. A.

    2009-12-01

    Since April 2004 the Earthscope USArray Transportable Array (TA) network has grown to over 400 broadband seismic stations that stream multi-channel data in near real-time to the Array Network Facility in San Diego. In total, over 1.7 terabytes per year of 24-bit, 40 samples-per-second seismic and state of health data is recorded from the stations. The ANF provides analysts access to real-time and archived data, as well as state-of-health data, metadata, and interactive tools for station engineers and the public via a website. Additional processing and recovery of missing data from on-site recorders (balers) at the stations is performed before the final data is transmitted to the IRIS Data Management Center (DMC). Assembly of the final data set requires additional storage and processing capabilities to combine the real-time data with baler data. The infrastructure supporting these diverse computational and storage needs currently consists of twelve virtualized Sun Solaris Zones executing on nine physical server systems. The servers are protected against failure by redundant power, storage, and networking connections. Storage needs are provided by a hybrid iSCSI and Fiber Channel Storage Area Network (SAN) with access to over 40 terabytes of RAID 5 and 6 storage. Processing tasks are assigned to systems based on parallelization and floating-point calculation needs. On-site buffering at the data-loggers provide protection in case of short-term network or hardware problems, while backup acquisition systems at the San Diego Supercomputer Center and the DMC protect against catastrophic failure of the primary site. Configuration management and monitoring of these systems is accomplished with open-source (Cfengine, Nagios, Solaris Community Software) and commercial tools (Intermapper). In the evolution from a single server to multiple virtualized server instances, Sun Cluster software was evaluated and found to be unstable in our environment. Shared filesystem

  7. Low-Noise CMOS Circuits for On-Chip Signal Processing in Focal-Plane Arrays

    Science.gov (United States)

    Pain, Bedabrata

    The performance of focal-plane arrays can be significantly enhanced through the use of on-chip signal processing. Novel, in-pixel, on-focal-plane, analog signal-processing circuits for high-performance imaging are presented in this thesis. The presence of high background radiation is a major impediment to infrared focal-plane array design. An in-pixel background-suppression scheme, using a dynamic analog current memory circuit, is described. The scheme also suppresses the spatial noise that results from response non-uniformities of photo-detectors, leading to background-limited infrared detector readout performance. Two new, low-power, compact current memory circuits, optimized for operation at the ultra-low current levels required in infrared detection, are presented. The first is a self-cascading current memory that increases the output impedance, and the second is a novel, switch feed-through reducing current memory, implemented using error-current feedback. This circuit can operate with a residual absolute error of less than 0.1%. The storage time of the memory is long enough to also find applications in neural network circuits. In addition, a voltage-mode, accurate, low-offset, low-power, high-uniformity, random-access sample-and-hold cell, implemented using a CCD with feedback, is also presented for use in background-suppression and neural network applications. A new, low-noise, ultra-low-level signal readout technique, implemented by individually counting photo-electrons within the detection pixel, is presented. The output of each unit-cell is a digital word corresponding to the intensity of the photon flux, and the readout is noise free. This technique requires the use of unit-cell amplifiers that feature ultra-high gain, low power, self-biasing capability and noise at sub-electron levels. Both single-input and differential-input implementations of such amplifiers are investigated. A noise analysis technique is presented for analyzing sampled

  8. Logarithmic Adaptive Neighborhood Image Processing (LANIP): Introduction, Connections to Human Brightness Perception, and Application Issues

    OpenAIRE

    J. Debayle; J.-C. Pinoli

    2007-01-01

    A new framework for image representation, processing, and analysis is introduced and exposed through practical applications. The proposed approach is called logarithmic adaptive neighborhood image processing (LANIP) since it is based on the logarithmic image processing (LIP) and on the general adaptive neighborhood image processing (GANIP) approaches, that allow several intensity and spatial properties of the human brightness perception to be mathematically modeled and operationalized, and c...

  9. Adaptation

    International Development Research Centre (IDRC) Digital Library (Canada)

    building skills, knowledge or networks on adaptation, ... the African partners leading the AfricaAdapt network, together with the UK-based Institute of Development Studies; and ... UNCCD Secretariat, Regional Coordination Unit for Africa, Tunis, Tunisia .... 26 Rural–urban Cooperation on Water Management in the Context of.

  10. Adapting adaptation: the English eco-town initiative as governance process

    Directory of Open Access Journals (Sweden)

    Daniel Tomozeiu

    2014-06-01

    Full Text Available Climate change adaptation and mitigation have become key policy drivers in the UK under its Climate Change Act of 2008. At the same time, urbanization has been high on the agenda, given the pressing need for substantial additional housing, particularly in southeast England. These twin policy objectives were brought together in the UK government's 'eco-town' initiative for England launched in 2007, which has since resulted in four eco-town projects currently under development. We critically analyze the eco-town initiative's policy evolution and early planning phase from a multilevel governance perspective by focusing on the following two interrelated aspects: (1) the evolving governance structures and resulting dynamics arising from the development of the eco-town initiative at UK governmental level, and the subsequent partial devolution to local stakeholders, including local authorities and nongovernmental actors, under the new 'localism' agenda; and (2) the effect of these governance dynamics on the conceptual and practical approach to adaptation through the emerging eco-town projects. As such, we problematize the impact of multilevel governance relations, and of competing governance strategies and leadership, on shaping eco-town and related adaptation strategies and practice.

  11. Uniform illumination rendering using an array of LEDs: a signal processing perspective

    NARCIS (Netherlands)

    Yang, Hongming; Bergmans, J.W.M.; Schenk, T.C.W.; Linnartz, J.P.M.G.; Rietman, R.

    2009-01-01

    An array of a large number of LEDs will be widely used in future indoor illumination systems. In this paper, we investigate the problem of rendering uniform illumination by a regular LED array on the ceiling of a room. We first present two general results on the scaling property of the basic
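The signal-processing view alluded to above is that the total illuminance produced by a regular LED array is, to first order, the superposition of identical single-LED footprints, i.e. a discrete convolution of the per-LED dimming levels with the footprint. A 1-D toy sketch (the triangular footprint shape is an assumption for illustration):

```python
def illumination(levels, footprint):
    """Illuminance along a line: superposition (discrete convolution)
    of identical single-LED footprints scaled by each LED's dimming level."""
    n = len(levels) + len(footprint) - 1
    out = [0.0] * n
    for i, lv in enumerate(levels):
        for j, f in enumerate(footprint):
            out[i + j] += lv * f
    return out

# A regular array of 8 identical LEDs with a triangular footprint gives a
# perfectly flat profile away from the edges of the array.
footprint = [0.25, 0.5, 1.0, 0.5, 0.25]
profile = illumination([1.0] * 8, footprint)
interior = profile[len(footprint) - 1 : -(len(footprint) - 1)]
assert max(interior) - min(interior) < 1e-9
```

The uniformity problem then becomes choosing the dimming levels so that the convolution output is as flat as possible over the room, including near the edges where the simple superposition argument breaks down.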

  12. A novel, substrate independent three-step process for the growth of uniform ZnO nanorod arrays

    International Nuclear Information System (INIS)

    Byrne, D.; McGlynn, E.; Henry, M.O.; Kumar, K.; Hughes, G.

    2010-01-01

    We report a three-step deposition process for uniform arrays of ZnO nanorods, involving chemical bath deposition of aligned seed layers, followed by nanorod nucleation sites and subsequent vapour phase transport growth of nanorods. This combines chemical bath deposition techniques, which enable substrate-independent seeding and nucleation site generation, with vapour phase transport growth of high crystalline and optical quality ZnO nanorod arrays. Our data indicate that the three-step process produces uniform nanorod arrays with narrow and rather monodisperse rod diameters (∼ 70 nm) across substrates of centimetre dimensions. X-ray photoelectron spectroscopy, scanning electron microscopy and X-ray diffraction were used to study the growth mechanism and characterise the nanostructures.

  13. On Representing Instance Changes in Adaptive Process Management Systems.

    NARCIS (Netherlands)

    Rinderle, S.B.; Kreher, U; Lauer, M.; Dadam, P.; Reichert, M.U.

    2006-01-01

    By separating the process logic from the application code process management systems (PMS) offer promising perspectives for automation and management of business processes. However, the added value of PMS strongly depends on their ability to support business process changes which can affect the

  14. Gap processing for adaptive maximal poisson-disk sampling

    KAUST Repository

    Yan, Dongming

    2013-10-17

    In this article, we study the generation of maximal Poisson-disk sets with varying radii. First, we present a geometric analysis of gaps in such disk sets. This analysis is the basis for maximal and adaptive sampling in Euclidean space and on manifolds. Second, we propose efficient algorithms and data structures to detect gaps and update gaps when disks are inserted, deleted, moved, or when their radii are changed.We build on the concepts of regular triangulations and the power diagram. Third, we show how our analysis contributes to the state-of-the-art in surface remeshing. © 2013 ACM.
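The gap-detection machinery described above builds on power diagrams and regular triangulations; as a much simpler baseline, a Poisson-disk set with varying radii can be approximated by brute-force dart throwing. The sketch below uses one common conflict rule (separation at least the larger of the two radii) and is not the authors' algorithm:

```python
import math
import random

def vardisk_sample(width, height, radius_fn, attempts=3000, seed=1):
    """Brute-force dart throwing for a Poisson-disk set with varying radii.
    A candidate is accepted only if it keeps every pairwise distance at
    least the larger of the two disks' radii."""
    rng = random.Random(seed)
    pts = []
    for _ in range(attempts):
        x, y = rng.uniform(0, width), rng.uniform(0, height)
        r = radius_fn(x, y)
        if all(math.hypot(x - px, y - py) >= max(r, pr)
               for px, py, pr in pts):
            pts.append((x, y, r))
    return pts

# Radii grow from left to right, so the left side packs more densely.
pts = vardisk_sample(1.0, 1.0, lambda x, y: 0.05 + 0.10 * x)

# Every accepted pair respects the varying-radius separation rule.
for i, (xi, yi, ri) in enumerate(pts):
    for xj, yj, rj in pts[i + 1:]:
        assert math.hypot(xi - xj, yi - yj) >= max(ri, rj)
```

Dart throwing stalls long before maximality, which is exactly why the article's explicit gap detection and incremental gap updates matter: they find and fill the residual uncovered regions directly.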

  16. Adaptive Smoothing in fMRI Data Processing Neural Networks

    DEFF Research Database (Denmark)

    Vilamala, Albert; Madsen, Kristoffer Hougaard; Hansen, Lars Kai

    2017-01-01

    in isolation. With the advent of new tools for deep learning, recent work has proposed to turn these pipelines into end-to-end learning networks. This change of paradigm offers new avenues for improvement, as it allows for a global optimisation. The current work aims at benefitting from this paradigm shift by defining a smoothing step as a layer in these networks, able to adaptively modulate the degree of smoothing required by each brain volume to better accomplish a given data analysis task. The viability is evaluated on real fMRI data where subjects alternated between left and right finger-tapping tasks.

  17. Context-Aware Design for Process Flexibility and Adaptation

    Science.gov (United States)

    Yao, Wen

    2012-01-01

    Today's organizations face continuous and unprecedented changes in their business environment. Traditional process design tools tend to be inflexible and can only support rigidly defined processes (e.g., order processing in the supply chain). This considerably restricts their real-world application value, especially in the dynamic and…

  18. Improving source discrimination performance by using an optimized acoustic array and adaptive high-resolution CLEAN-SC beamforming

    NARCIS (Netherlands)

    Luesutthiviboon, S.; Malgoezar, A.M.N.; Snellen, M.; Sijtsma, P.; Simons, D.G.

    2018-01-01

    Beamforming performance can be improved in two ways: optimizing the location of microphones on the acoustic array and applying advanced beamforming algorithms. In this study, the effects of the two approaches are studied. An optimization method is developed to optimize the location of microphones

  19. Adaptation as a political process: adjusting to drought and conflict in Kenya's drylands.

    Science.gov (United States)

    Eriksen, Siri; Lind, Jeremy

    2009-05-01

    In this article, we argue that people's adjustments to multiple shocks and changes, such as conflict and drought, are intrinsically political processes that have uneven outcomes. Strengthening local adaptive capacity is a critical component of adapting to climate change. Based on fieldwork in two areas in Kenya, we investigate how people seek to access livelihood adjustment options and promote particular adaptation interests through forming social relations and political alliances to influence collective decision-making. First, we find that, in the face of drought and conflict, relations are formed among individuals, politicians, customary institutions, and government administration aimed at retaining or strengthening power bases in addition to securing material means of survival. Second, national economic and political structures and processes affect local adaptive capacity in fundamental ways, such as through the unequal allocation of resources across regions, development policy biased against pastoralism, and competition for elected political positions. Third, conflict is part and parcel of the adaptation process, not just an external factor inhibiting local adaptation strategies. Fourth, there are relative winners and losers of adaptation, but whether or not local adjustments to drought and conflict compound existing inequalities depends on power relations at multiple geographic scales that shape how conflicting interests are negotiated locally. Climate change adaptation policies are unlikely to be successful or minimize inequity unless the political dimensions of local adaptation are considered; however, existing power structures and conflicts of interests represent political obstacles to developing such policies.

  20. Adaptive algorithms of position and energy reconstruction in Anger-camera type detectors: experimental data processing in ANTS

    Energy Technology Data Exchange (ETDEWEB)

    Morozov, A; Fraga, F A F; Fraga, M M F R; Margato, L M S; Pereira, L [LIP-Coimbra and Departamento de Física, Universidade de Coimbra, Rua Larga, Coimbra (Portugal); Defendi, I; Jurkovic, M [Forschungs-Neutronenquelle Heinz Maier-Leibnitz (FRM II), TUM, Lichtenbergstr. 1, Garching (Germany); Engels, R; Kemmerling, G [Zentralinstitut für Elektronik, Forschungszentrum Jülich GmbH, Wilhelm-Johnen-Straße, Jülich (Germany); Gongadze, A; Guerard, B; Manzin, G; Niko, H; Peyaud, A; Piscitelli, F [Institut Laue Langevin, 6 Rue Jules Horowitz, Grenoble (France); Petrillo, C; Sacchetti, F [Istituto Nazionale per la Fisica della Materia, Unità di Perugia, Via A. Pascoli, Perugia (Italy); Raspino, D; Rhodes, N J; Schooneveld, E M, E-mail: andrei@coimbra.lip.pt [Science and Technology Facilities Council, Rutherford Appleton Laboratory, Harwell Oxford, Didcot (United Kingdom); and others

    2013-05-01

    The software package ANTS (Anger-camera type Neutron detector: Toolkit for Simulations), developed for the simulation of Anger-type gaseous detectors for thermal neutron imaging, was extended to include a module for experimental data processing. Data recorded with a sensor array containing up to 100 photomultiplier tubes (PMT) or silicon photomultipliers (SiPM) in a custom configuration can be loaded, and the positions and energies of the events can be reconstructed using the Center-of-Gravity, Maximum Likelihood or Least Squares algorithm. A particular strength of the new module is the ability to reconstruct the light response functions and relative gains of the photomultipliers from flood field illumination data using adaptive algorithms. The performance of the module is demonstrated with simulated data generated in ANTS and experimental data recorded with a 19-PMT neutron detector. The package executables are publicly available at http://coimbra.lip.pt/~andrei/.
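Of the three reconstruction algorithms mentioned, Center-of-Gravity is the simplest: the event position is the signal-weighted mean of the sensor positions. A sketch with an invented 3x3 PMT grid and an invented light-response model (neither is taken from ANTS):

```python
def center_of_gravity(signals, positions):
    """Anger-camera CoG estimate: signal-weighted mean of sensor positions."""
    total = sum(signals)
    x = sum(s * p[0] for s, p in zip(signals, positions)) / total
    y = sum(s * p[1] for s, p in zip(signals, positions)) / total
    return x, y

# 3x3 PMT grid on a 10 mm pitch; light response falls off with distance
# (a Lorentzian-like toy model, purely illustrative).
positions = [(ix * 10.0, iy * 10.0) for iy in range(3) for ix in range(3)]
true_xy = (12.0, 8.0)
signals = [1.0 / (1.0 + ((px - true_xy[0]) ** 2 +
                         (py - true_xy[1]) ** 2) / 50.0)
           for px, py in positions]

est = center_of_gravity(signals, positions)   # biased toward array centre
```

The systematic pull of the CoG estimate toward the array centre is the kind of bias that the Maximum Likelihood reconstruction, fed with adaptively reconstructed light response functions and gains, is meant to remove.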

  1. Preventing KPI Violations in Business Processes based on Decision Tree Learning and Proactive Runtime Adaptation

    Directory of Open Access Journals (Sweden)

    Dimka Karastoyanova

    2012-01-01

    Full Text Available The performance of business processes is measured and monitored in terms of Key Performance Indicators (KPIs. If the monitoring results show that the KPI targets are violated, the underlying reasons have to be identified and the process should be adapted accordingly to address the violations. In this paper we propose an integrated monitoring, prediction and adaptation approach for preventing KPI violations of business process instances. KPIs are monitored continuously while the process is executed. Additionally, based on KPI measurements of historical process instances we use decision tree learning to construct classification models which are then used to predict the KPI value of an instance while it is still running. If a KPI violation is predicted, we identify adaptation requirements and adaptation strategies in order to prevent the violation.
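The prediction step described above can be illustrated with a one-level decision tree (a stump) learned in pure Python. A real deployment would train a full decision tree on many process metrics; the single feature, the data, and the threshold rule here are invented for illustration:

```python
def learn_stump(values, labels):
    """One-level decision tree over a single numeric feature: pick the
    threshold that best separates violated from met KPIs (either polarity)."""
    best_t, best_correct = None, -1
    for t in sorted(set(values)):
        correct = sum((x >= t) == y for x, y in zip(values, labels))
        correct = max(correct, len(labels) - correct)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

# Historical instances: duration of an early process step (hours) vs.
# whether the end-to-end KPI (e.g. a delivery deadline) was violated.
durations = [1.0, 1.5, 2.0, 2.5, 6.0, 7.0, 8.0, 9.0]
violated  = [False, False, False, False, True, True, True, True]
threshold = learn_stump(durations, violated)

def predict_violation(duration):
    # For this data the learned split is "long early step => violation".
    return duration >= threshold

assert predict_violation(7.5) and not predict_violation(1.2)
```

A prediction made while the instance is still running is what gives the monitoring component time to select an adaptation strategy before the KPI is actually violated.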

  2. Parents of children with cerebral palsy : a review of factors related to the process of adaptation

    NARCIS (Netherlands)

    Rentinck, I. C. M.; Ketelaar, M.; Jongmans, M. J.; Gorter, J. W.

    Background: Little is known about the way parents adapt to the situation when their child is diagnosed with cerebral palsy. Methods: A literature search was performed to gain a deeper insight into the process of adaptation of parents of a child with cerebral palsy and into factors related to this

  3. A quantitative formulation of the dynamic behaviour of adaptation processes to ionizing radiation

    International Nuclear Information System (INIS)

    Pfandler, S.

    1999-12-01

    The discovery of adaptation processes in cells (i.e., increased resistance to the effects of a challenge dose administered after a lower adapting dose) has fuelled the debate on possible cellular processes relevant for low-dose exposures. However, numerous experiments on the radioadaptive response do not provide a clear picture of the nature of the adaptive response and the conditions under which it occurs. This work proposes a model that succeeds in modelling data obtained from various experiments on radioadaptation. The model assumes impaired DNA integrity as the triggering signal for the induction of adaptation. Induction of the adaptive response is seen as a two-phase process. First, ionizing radiation induces radicals by water radiolysis, which give rise to specific DNA lesions. Second, these lesions must be perceived and, in a way, processed by the cell, thereby creating the final signal necessary for the comprehensive adaptive response. This processing occurs through some event in S-phase and can be halted by local conformational changes of chromatin induced by ionizing radiation. Thus, the model assumes two counteracting processes that have to be balanced for the triggering signal of adaptation to occur, each of them related to different target volumes. This work comprises a mathematical treatment of radical formation, DNA lesion induction and inhibition of local initiation of replication, which finally provides functions that quantify the reduction of double strand breaks introduced by challenge doses in adapted cells as compared to non-adapted cells. Non-linear regression analyses based upon data from experiments on radioadaptation yield regression curves which describe existing data satisfactorily. This corroborates the existence of the adaptive response as, in principle, a universal feature of cells and specifies conditions which favor the development of radioadaptation. (author)

  4. Adapt

    Science.gov (United States)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted
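A Granule description of the kind sketched above (file, parent resource, identifier, access URL) amounts to a few lines of XML building. The element names below are simplified stand-ins, not the actual SPASE schema, and the identifiers and URLs are invented:

```python
import xml.etree.ElementTree as ET

def make_granule(parent_id, file_name, url):
    """Build a minimal SPASE-style Granule description for one data file.
    Element names are simplified here; the real SPASE schema is richer."""
    spase = ET.Element("Spase")
    gran = ET.SubElement(spase, "Granule")
    ET.SubElement(gran, "ResourceID").text = parent_id + "/" + file_name
    ET.SubElement(gran, "ParentID").text = parent_id
    src = ET.SubElement(gran, "Source")
    ET.SubElement(src, "SourceType").text = "Data"
    ET.SubElement(src, "URL").text = url
    return ET.tostring(spase, encoding="unicode")

doc = make_granule("spase://Example/NumericalData/MissionA/Instrument1",
                   "mission_a_20150101.cdf",
                   "https://example.org/data/mission_a_20150101.cdf")
```

Running such a generator over a nightly file listing, and diffing against the previous run, is the kind of bulk registration workflow the record describes.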

  5. Adaptive Process Management in Highly Dynamic and Pervasive Scenarios

    Directory of Open Access Journals (Sweden)

    Massimiliano de Leoni

    2009-06-01

    Full Text Available Process Management Systems (PMSs) are currently more and more used as supporting tools for cooperative processes in pervasive and highly dynamic situations, such as emergency situations, pervasive healthcare or domotics/home automation. In all such situations, however, designed processes can easily be invalidated, since the execution environment may change continuously due to frequent unforeseeable events. This paper aims at illustrating the theoretical framework and the concrete implementation of SmartPM, a PMS that features a set of sound and complete techniques to automatically cope with unplanned exceptions. SmartPM is based on a general framework which adopts the Situation Calculus and Indigolog.

  6. Adaptive memory: the comparative value of survival processing.

    Science.gov (United States)

    Nairne, James S; Pandeirada, Josefa N S; Thompson, Sarah R

    2008-02-01

    We recently proposed that human memory systems are "tuned" to remember information that is processed for survival, perhaps as a result of fitness advantages accrued in the ancestral past. This proposal was supported by experiments in which participants showed superior memory when words were rated for survival relevance, at least relative to when words received other forms of deep processing. The current experiments tested the mettle of survival memory by pitting survival processing against conditions that are universally accepted as producing excellent retention, including conditions in which participants rated words for imagery, pleasantness, and self-reference; participants also generated words, studied words with the intention of learning them, or rated words for relevance to a contextually rich (but non-survival-related) scenario. Survival processing yielded the best retention, which suggests that it may be one of the best encoding procedures yet discovered in the memory field.

  7. Adaptive Control of Freeze-Form Extrusion Fabrication Processes (Preprint)

    National Research Council Canada - National Science Library

    Zhao, Xiyue; Landers, Robert G; Leu, Ming C

    2008-01-01

    Freeze-form Extrusion Fabrication (FEF) is an additive manufacturing process that extrudes high solids loading aqueous ceramic pastes in a layer-by-layer fashion below the paste freezing temperature for component fabrication...

  8. Adapting bilateral directional processing to individual and situational influences

    DEFF Research Database (Denmark)

    Neher, Tobias; Wagener, Kirsten C.; Latzel, Matthias

    2017-01-01

    This study examined differences in benefit from bilateral directional processing. Groups of listeners with symmetric or asymmetric audiograms … (binaural intelligibility level difference, BILD), and no difference in age or overall degree of hearing loss took part. Aided speech reception was measured using virtual acoustics together with a simulation of a linked pair of closed-fit behind-the-ear hearing aids. Five processing schemes and three acoustic scenarios were used. The processing schemes differed in the trade-off between signal-to-noise ratio (SNR) improvement and binaural cue preservation. The acoustic scenarios consisted of a frontal target talker and two lateral speech maskers or spatially diffuse noise. For both groups, a significant interaction between BILD, processing scheme and acoustic scenario …

  9. A single-rate context-dependent learning process underlies rapid adaptation to familiar object dynamics.

    Science.gov (United States)

    Ingram, James N; Howard, Ian S; Flanagan, J Randall; Wolpert, Daniel M

    2011-09-01

    Motor learning has been extensively studied using dynamic (force-field) perturbations. These induce movement errors that result in adaptive changes to the motor commands. Several state-space models have been developed to explain how trial-by-trial errors drive the progressive adaptation observed in such studies. These models have been applied to adaptation involving novel dynamics, which typically occurs over tens to hundreds of trials, and which appears to be mediated by a dual-rate adaptation process. In contrast, when manipulating objects with familiar dynamics, subjects adapt rapidly within a few trials. Here, we apply state-space models to familiar dynamics, asking whether adaptation is mediated by a single-rate or dual-rate process. Previously, we reported a task in which subjects rotate an object with known dynamics. By presenting the object at different visual orientations, adaptation was shown to be context-specific, with limited generalization to novel orientations. Here we show that a multiple-context state-space model, with a generalization function tuned to visual object orientation, can reproduce the time-course of adaptation and de-adaptation as well as the observed context-dependent behavior. In contrast to the dual-rate process associated with novel dynamics, we show that a single-rate process mediates adaptation to familiar object dynamics. The model predicts that during exposure to the object across multiple orientations, there will be a degree of independence for adaptation and de-adaptation within each context, and that the states associated with all contexts will slowly de-adapt during exposure in one particular context. We confirm these predictions in two new experiments. Results of the current study thus highlight similarities and differences in the processes engaged during exposure to novel versus familiar dynamics. In both cases, adaptation is mediated by multiple context-specific representations. In the case of familiar object dynamics
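The single-rate state-space update at the heart of this account can be sketched in a few lines; the retention factor and learning rate below are illustrative values, not the parameters fitted in the study.

```python
import numpy as np

def simulate_single_rate(perturbation, retention=0.95, learning_rate=0.3):
    """Single-rate state-space model of adaptation: motor memory x is
    updated from each trial's error e = f - x and decays via a
    retention factor:  x[n+1] = retention * x[n] + learning_rate * e[n].
    """
    x = 0.0
    states, errors = [], []
    for f in perturbation:
        e = f - x                        # movement error on this trial
        x = retention * x + learning_rate * e
        states.append(x)
        errors.append(e)
    return np.array(states), np.array(errors)

# 40 trials of a constant perturbation, then 20 washout trials
f = np.concatenate([np.ones(40), np.zeros(20)])
states, errors = simulate_single_rate(f)
```

Adaptation approaches an asymptote set by the ratio of the learning rate to the net forgetting, and de-adapts geometrically during washout; a multiple-context version maintains one such state per visual orientation, coupled through a generalization function.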

  10. A single-rate context-dependent learning process underlies rapid adaptation to familiar object dynamics.

    Directory of Open Access Journals (Sweden)

    James N Ingram

    2011-09-01

    Full Text Available Motor learning has been extensively studied using dynamic (force-field) perturbations. These induce movement errors that result in adaptive changes to the motor commands. Several state-space models have been developed to explain how trial-by-trial errors drive the progressive adaptation observed in such studies. These models have been applied to adaptation involving novel dynamics, which typically occurs over tens to hundreds of trials, and which appears to be mediated by a dual-rate adaptation process. In contrast, when manipulating objects with familiar dynamics, subjects adapt rapidly within a few trials. Here, we apply state-space models to familiar dynamics, asking whether adaptation is mediated by a single-rate or dual-rate process. Previously, we reported a task in which subjects rotate an object with known dynamics. By presenting the object at different visual orientations, adaptation was shown to be context-specific, with limited generalization to novel orientations. Here we show that a multiple-context state-space model, with a generalization function tuned to visual object orientation, can reproduce the time-course of adaptation and de-adaptation as well as the observed context-dependent behavior. In contrast to the dual-rate process associated with novel dynamics, we show that a single-rate process mediates adaptation to familiar object dynamics. The model predicts that during exposure to the object across multiple orientations, there will be a degree of independence for adaptation and de-adaptation within each context, and that the states associated with all contexts will slowly de-adapt during exposure in one particular context. We confirm these predictions in two new experiments. Results of the current study thus highlight similarities and differences in the processes engaged during exposure to novel versus familiar dynamics. In both cases, adaptation is mediated by multiple context-specific representations. In the case of familiar

  11. Final Scientific Report, Integrated Seismic Event Detection and Location by Advanced Array Processing

    Energy Technology Data Exchange (ETDEWEB)

    Kvaerna, T.; Gibbons. S.J.; Ringdal, F; Harris, D.B.

    2007-01-30

    primarily the result of spurious identification and incorrect association of phases, and of excessive variability in estimates for the velocity and direction of incoming seismic phases. The mitigation of these causes has led to the development of two complementary techniques for classifying seismic sources by testing detected signals under mutually exclusive event hypotheses. Both of these techniques require appropriate calibration data from the region to be monitored, and are therefore ideally suited to mining areas or other sites with recurring seismicity. The first such technique is a classification and location algorithm where a template is designed for each site being monitored which defines which phases should be observed, and at which times, for all available regional array stations. For each phase, the variability of measurements (primarily the azimuth and apparent velocity) from previous events is examined and it is determined which processing parameters (array configuration, data window length, frequency band) provide the most stable results. This allows us to define optimal diagnostic tests for subsequent occurrences of the phase in question. The calibration of templates for this project revealed significant results with major implications for seismic processing in both automatic and analyst-reviewed contexts: • one or more fixed frequency bands should be chosen for each phase tested for; • the frequency band providing the most stable parameter estimates varies from site to site, and a frequency band which provides optimal measurements for one site may give substantially worse measurements for a nearby site; • the slowness corrections applied depend strongly on the frequency band chosen; • the frequency band providing the most stable estimates is often neither the band providing the greatest SNR nor the band providing the best array gain. For this reason, the automatic template location estimates provided here are frequently far better than those obtained by

  12. Final Scientific Report, Integrated Seismic Event Detection and Location by Advanced Array Processing

    International Nuclear Information System (INIS)

    Kvaerna, T.; Gibbons. S.J.; Ringdal, F; Harris, D.B.

    2007-01-01

    primarily the result of spurious identification and incorrect association of phases, and of excessive variability in estimates for the velocity and direction of incoming seismic phases. The mitigation of these causes has led to the development of two complementary techniques for classifying seismic sources by testing detected signals under mutually exclusive event hypotheses. Both of these techniques require appropriate calibration data from the region to be monitored, and are therefore ideally suited to mining areas or other sites with recurring seismicity. The first such technique is a classification and location algorithm where a template is designed for each site being monitored which defines which phases should be observed, and at which times, for all available regional array stations. For each phase, the variability of measurements (primarily the azimuth and apparent velocity) from previous events is examined and it is determined which processing parameters (array configuration, data window length, frequency band) provide the most stable results. This allows us to define optimal diagnostic tests for subsequent occurrences of the phase in question. The calibration of templates for this project revealed significant results with major implications for seismic processing in both automatic and analyst-reviewed contexts: (1) one or more fixed frequency bands should be chosen for each phase tested for; (2) the frequency band providing the most stable parameter estimates varies from site to site, and a frequency band which provides optimal measurements for one site may give substantially worse measurements for a nearby site; (3) the slowness corrections applied depend strongly on the frequency band chosen; (4) the frequency band providing the most stable estimates is often neither the band providing the greatest SNR nor the band providing the best array gain. For this reason, the automatic template location estimates provided here are frequently far better than those obtained by

  13. CCD and IR array controllers

    Science.gov (United States)

    Leach, Robert W.; Low, Frank J.

    2000-08-01

    A family of controllers has been developed that is powerful and flexible enough to operate a wide range of CCD and IR focal plane arrays in a variety of ground-based applications. These include fast readout of small CCD and IR arrays for adaptive optics applications, slow readout of large CCD and IR mosaics, and single CCD and IR array operation in low background/low noise regimes as well as high background/high speed regimes. The CCD and IR controllers have a common digital core based on user-programmable digital signal processors that are used to generate the array clocking and signal processing signals customized for each application. A fiber optic link passes image data and commands between the controller and VME or PCI interface boards resident in a host computer. CCD signal processing is done with a dual slope integrator operating at speeds of up to one megapixel per second per channel. Signal processing of IR arrays is done either with a dual channel video processor or a four channel video processor that has built-in image memory and a coadder to 32-bit precision for operating high background arrays. Recent developments underway include the implementation of a fast fiber optic data link operating at a speed of 12.5 megapixels per second for fast image transfer from the controller to the host computer, and supporting image acquisition software and device drivers for the PCI interface board for the Sun Solaris, Linux and Windows 2000 operating systems.

  14. "The Gaze Heuristic:" Biography of an Adaptively Rational Decision Process.

    Science.gov (United States)

    Hamlin, Robert P

    2017-04-01

    This article is a case study that describes the natural and human history of the gaze heuristic. The gaze heuristic is an interception heuristic that utilizes a single input (deviation from a constant angle of approach) repeatedly as a task is performed. Its architecture, advantages, and limitations are described in detail. A history of the gaze heuristic is then presented. In natural history, the gaze heuristic is the only known technique used by predators to intercept prey. In human history the gaze heuristic was discovered accidentally by Royal Air Force (RAF) fighter command just prior to World War II. As it was never discovered by the Luftwaffe, the technique conferred a decisive advantage upon the RAF throughout the war. After the end of the war in America, German technology was combined with the British heuristic to create the Sidewinder AIM9 missile, the most successful autonomous weapon ever built. There are no plans to withdraw it or replace its guiding gaze heuristic. The case study demonstrates that the gaze heuristic is a specific heuristic type that takes a single best input at the best time (take-the-best). Its use is an adaptively rational response to specific, rapidly evolving decision environments that has allowed those animals/humans/machines who use it to survive, prosper, and multiply relative to those who do not. Copyright © 2017 Cognitive Science Society, Inc.
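As a rough illustration of the heuristic's architecture, the toy 2D pursuit below steers only to null the change in bearing to the target (holding a constant angle of approach). The speeds, gain, and geometry are invented for the example and are not taken from the article.

```python
import math

def intercept(target_pos, target_vel, pursuer_pos, pursuer_speed,
              dt=0.05, steps=2000, gain=5.0):
    """Pursue a constant-velocity target using the gaze heuristic: the
    single control input is the change in bearing to the target, and
    the pursuer turns so as to hold that bearing constant.
    Returns the closest approach distance achieved."""
    tx, ty = target_pos
    tvx, tvy = target_vel
    px, py = pursuer_pos
    heading = math.atan2(ty - py, tx - px)   # start by facing the target
    prev_bearing = heading
    closest = float("inf")
    for _ in range(steps):
        tx += tvx * dt
        ty += tvy * dt
        bearing = math.atan2(ty - py, tx - px)
        # null the bearing drift: turn in proportion to its change
        drift = math.atan2(math.sin(bearing - prev_bearing),
                           math.cos(bearing - prev_bearing))
        heading += gain * drift
        prev_bearing = bearing
        px += pursuer_speed * math.cos(heading) * dt
        py += pursuer_speed * math.sin(heading) * dt
        closest = min(closest, math.hypot(tx - px, ty - py))
    return closest

# a faster pursuer starting 50 m from a crossing target closes on it
d = intercept((50.0, 0.0), (0.0, 3.0), (0.0, 0.0), pursuer_speed=6.0)
```

Because the controller consumes a single repeated input, it needs no estimate of the target's range, speed, or trajectory.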

  15. Parallel Block Structured Adaptive Mesh Refinement on Graphics Processing Units

    Energy Technology Data Exchange (ETDEWEB)

    Beckingsale, D. A. [Atomic Weapons Establishment (AWE), Aldermaston (United Kingdom); Gaudin, W. P. [Atomic Weapons Establishment (AWE), Aldermaston (United Kingdom); Hornung, R. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gunney, B. T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gamblin, T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Herdman, J. A. [Atomic Weapons Establishment (AWE), Aldermaston (United Kingdom); Jarvis, S. A. [Atomic Weapons Establishment (AWE), Aldermaston (United Kingdom)

    2014-11-17

    Block-structured adaptive mesh refinement is a technique that can be used when solving partial differential equations to reduce the number of zones necessary to achieve the required accuracy in areas of interest. These areas (shock fronts, material interfaces, etc.) are recursively covered with finer mesh patches that are grouped into a hierarchy of refinement levels. Despite the potential for large savings in computational requirements and memory usage without a corresponding reduction in accuracy, AMR adds overhead in managing the mesh hierarchy, adding complex communication and data movement requirements to a simulation. In this paper, we describe the design and implementation of a native GPU-based AMR library, including: the classes used to manage data on a mesh patch, the routines used for transferring data between GPUs on different nodes, and the data-parallel operators developed to coarsen and refine mesh data. We validate the performance and accuracy of our implementation using three test problems and two architectures: an eight-node cluster, and over four thousand nodes of Oak Ridge National Laboratory’s Titan supercomputer. Our GPU-based AMR hydrodynamics code performs up to 4.87× faster than the CPU-based implementation, and has been scaled to over four thousand GPUs using a combination of MPI and CUDA.
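As a deliberately minimal illustration of the coarsen/refine operators such a library provides, the 1D sketch below flags cells on an undivided gradient, refines by piecewise-constant prolongation, and coarsens by averaging children; a production AMR library additionally manages patch hierarchies, ghost regions, and inter-node transfers.

```python
import numpy as np

def flag_cells(u, threshold):
    """Flag cells for refinement where the undivided gradient is large."""
    grad = np.abs(np.diff(u))
    flags = np.zeros(u.size, dtype=bool)
    flags[:-1] |= grad > threshold   # cell left of a steep face
    flags[1:] |= grad > threshold    # cell right of a steep face
    return flags

def refine(u, flags):
    """Split each flagged cell into two children (piecewise-constant
    prolongation); unflagged cells are kept as-is."""
    out = []
    for value, f in zip(u, flags):
        out.extend([value, value] if f else [value])
    return np.array(out)

def coarsen(children):
    """Average pairs of children back onto their parent cells."""
    return 0.5 * (children[0::2] + children[1::2])

# a step profile: only the cells adjacent to the jump get refined
u = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
flags = flag_cells(u, threshold=0.5)
fine = refine(u, flags)
```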

  16. A Comprehensive Approach to Adaptive Processing Both on Transmit and Receive Including Presence of Waveform Diversity

    National Research Council Canada - National Science Library

    Sarkar, Tapan

    2006-01-01

    .... Use of waveform diversity and a comprehensive approach to adaptive processing may not be useful if the sensors deviate from their true positions, due to environmental effects or due to mechanical...

  17. Contextualizing Individual Competencies for Managing the Corporate Social Responsibility Adaptation Process

    NARCIS (Netherlands)

    Osagie, E.R.; Wesselink, R.; Blok, V.; Mulder, M.

    2016-01-01

    Companies committed to corporate social responsibility (CSR) should ensure that their managers possess the appropriate competencies to effectively manage the CSR adaptation process. The literature provides insights into the individual competencies these managers need but fails to prioritize them and

  18. Adaptive Soa Stack-Based Business Process Monitoring Platform

    Directory of Open Access Journals (Sweden)

    Przemysław Dadel

    2014-01-01

    Full Text Available Executable business processes that formally describe company activities are well placed in the SOA environment, as they allow for declarative organization of high-level system logic. However, for both technical and non-technical users to fully benefit from that element of abstraction, appropriate business process monitoring systems are required, and existing solutions remain unsatisfactory. The paper discusses the problem of business process monitoring in the context of the service orientation paradigm in order to propose an architectural solution and provide an implementation of a system for business process monitoring that alleviates the shortcomings of the existing solutions. Various platforms are investigated to obtain a broader view of the monitoring problem and to gather functional and non-functional requirements. These requirements constitute input for the further analysis and the system design. The monitoring software is then implemented and evaluated according to the specified criteria. An extensible business process monitoring system was designed and built on top of OSGiMM - a dynamic, event-driven, configurable communications layer that provides real-time monitoring capabilities for various types of resources. The system was tested against the stated functional requirements, and its implementation provides a starting point for further work. It is concluded that providing a uniform business process monitoring solution that satisfies a wide range of users and business process platform vendors is a difficult endeavor. It is furthermore reasoned that only an extensible, open-source monitoring platform built on top of a scalable communication core has a chance to address all the stated and future requirements.

  19. Fast But Fleeting: Adaptive Motor Learning Processes Associated with Aging and Cognitive Decline

    Science.gov (United States)

    Trewartha, Kevin M.; Garcia, Angeles; Wolpert, Daniel M.

    2014-01-01

    Motor learning has been shown to depend on multiple interacting learning processes. For example, learning to adapt when moving grasped objects with novel dynamics involves a fast process that adapts and decays quickly—and that has been linked to explicit memory—and a slower process that adapts and decays more gradually. Each process is characterized by a learning rate that controls how strongly motor memory is updated based on experienced errors and a retention factor determining the movement-to-movement decay in motor memory. Here we examined whether fast and slow motor learning processes involved in learning novel dynamics differ between younger and older adults. In addition, we investigated how age-related decline in explicit memory performance influences learning and retention parameters. Although the groups adapted equally well, they did so with markedly different underlying processes. Whereas the groups had similar fast processes, they had different slow processes. Specifically, the older adults exhibited decreased retention in their slow process compared with younger adults. Within the older group, who exhibited considerable variation in explicit memory performance, we found that poor explicit memory was associated with reduced retention in the fast process, as well as the slow process. These findings suggest that explicit memory resources are a determining factor in impairments in both the fast and slow processes for motor learning but that aging effects on the slow process are independent of explicit memory declines. PMID:25274819

  20. Fast but fleeting: adaptive motor learning processes associated with aging and cognitive decline.

    Science.gov (United States)

    Trewartha, Kevin M; Garcia, Angeles; Wolpert, Daniel M; Flanagan, J Randall

    2014-10-01

    Motor learning has been shown to depend on multiple interacting learning processes. For example, learning to adapt when moving grasped objects with novel dynamics involves a fast process that adapts and decays quickly (and that has been linked to explicit memory) and a slower process that adapts and decays more gradually. Each process is characterized by a learning rate that controls how strongly motor memory is updated based on experienced errors and a retention factor determining the movement-to-movement decay in motor memory. Here we examined whether fast and slow motor learning processes involved in learning novel dynamics differ between younger and older adults. In addition, we investigated how age-related decline in explicit memory performance influences learning and retention parameters. Although the groups adapted equally well, they did so with markedly different underlying processes. Whereas the groups had similar fast processes, they had different slow processes. Specifically, the older adults exhibited decreased retention in their slow process compared with younger adults. Within the older group, who exhibited considerable variation in explicit memory performance, we found that poor explicit memory was associated with reduced retention in the fast process, as well as the slow process. These findings suggest that explicit memory resources are a determining factor in impairments in both the fast and slow processes for motor learning but that aging effects on the slow process are independent of explicit memory declines. Copyright © 2014 the authors 0270-6474/14/3413411-11$15.00/0.
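The fast and slow processes described here can be written down as the standard two-state model; the retention factors and learning rates below are illustrative, not the values fitted to the younger and older groups.

```python
import numpy as np

def dual_rate(perturbation, a_fast=0.90, b_fast=0.20,
              a_slow=0.99, b_slow=0.05):
    """Dual-rate state-space model (illustrative parameters): a fast
    state that learns and forgets quickly, and a slow state that
    learns and forgets gradually; the motor output is their sum."""
    x_fast = x_slow = 0.0
    fast, slow = [], []
    for f in perturbation:
        e = f - (x_fast + x_slow)            # trial error
        x_fast = a_fast * x_fast + b_fast * e
        x_slow = a_slow * x_slow + b_slow * e
        fast.append(x_fast)
        slow.append(x_slow)
    return np.array(fast), np.array(slow)

# adaptation to a constant perturbation followed by washout
f = np.concatenate([np.ones(100), np.zeros(50)])
fast, slow = dual_rate(f)
```

Early in learning the fast state dominates, while at asymptote the slow state carries most of the adaptation; during washout the fast state unlearns past zero while the slow state remains adapted, the model's account of effects such as spontaneous recovery. Age- or memory-related deficits would appear as reductions in the corresponding retention factors.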

  1. Framework for adaptive multiscale analysis of nonhomogeneous point processes.

    Science.gov (United States)

    Helgason, Hannes; Bartroff, Jay; Abry, Patrice

    2011-01-01

    We develop the methodology for hypothesis testing and model selection in nonhomogeneous Poisson processes, with an eye toward the application of modeling and variability detection in heart beat data. Modeling the process' non-constant rate function using templates of simple basis functions, we develop the generalized likelihood ratio statistic for a given template and a multiple testing scheme to model-select from a family of templates. A dynamic programming algorithm inspired by network flows is used to compute the maximum likelihood template in a multiscale manner. In a numerical example, the proposed procedure is nearly as powerful as the super-optimal procedures that know the true template size and true partition, respectively. Extensions to general history-dependent point processes are discussed.
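For the special case of a piecewise-constant rate on known bins, the generalized likelihood ratio statistic has a closed form. The sketch below illustrates only that statistic; the paper's dynamic-programming search over templates is not reproduced.

```python
import math

def glr_piecewise_poisson(counts, durations):
    """Generalized likelihood ratio statistic for testing a constant
    Poisson rate (H0) against a piecewise-constant rate with one rate
    per bin (H1).  counts[i] events were observed in a bin of length
    durations[i].  Returns 2*(logL1 - logL0); under H0 this is
    asymptotically chi-squared with len(counts) - 1 degrees of freedom."""
    lam0 = sum(counts) / sum(durations)   # H0 MLE: one global rate

    def loglik(n, lam, t):
        if lam <= 0.0:
            return 0.0 if n == 0 else float("-inf")
        return n * math.log(lam) - lam * t   # log(n!) terms cancel in the ratio

    l0 = sum(loglik(n, lam0, t) for n, t in zip(counts, durations))
    l1 = sum(loglik(n, n / t, t) for n, t in zip(counts, durations))
    return 2.0 * (l1 - l0)

# homogeneous-looking data gives a small statistic,
# a clear rate change gives a large one
small = glr_piecewise_poisson([10, 10], [1.0, 1.0])
large = glr_piecewise_poisson([10, 40], [1.0, 1.0])
```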

  2. Sonar waveforms for reverberation rejection, Part IV: Adaptive processing.

    NARCIS (Netherlands)

    IJsselmuide, S.P. van; Deruaz, L.; Been, R.; Doisy, Y.; Beerens, S.P.

    2002-01-01

    For littoral ASW, reverberation is a big problem and rejection of reverberation is of utmost importance. The influence of the transmitted signal on the signal to reverberation ratio has been presented in three preceding papers. In this paper, the influence of improved signal processing on the

  3. Psychosocial intervention effects on adaptation, disease course and biobehavioral processes in cancer.

    Science.gov (United States)

    Antoni, Michael H

    2013-03-01

    A diagnosis of cancer and subsequent treatments place demands on psychological adaptation. Behavioral research suggests the importance of cognitive, behavioral, and social factors in facilitating adaptation during active treatment and throughout cancer survivorship, which forms the rationale for the use of many psychosocial interventions in cancer patients. This cancer experience may also affect physiological adaptation systems (e.g., neuroendocrine) in parallel with psychological adaptation changes (negative affect). Changes in adaptation may alter tumor growth-promoting processes (increased angiogenesis, migration and invasion, and inflammation) and tumor defense processes (decreased cellular immunity) relevant for cancer progression and the quality of life of cancer patients. Some evidence suggests that psychosocial intervention can improve psychological and physiological adaptation indicators in cancer patients. However, less is known about whether these interventions can influence tumor activity and tumor growth-promoting processes and whether changes in these processes could explain the psychosocial intervention effects on recurrence and survival documented to date. Documenting that psychosocial interventions can modulate molecular activities (e.g., transcriptional indicators of cell signaling) that govern tumor promoting and tumor defense processes on the one hand, and clinical disease course on the other is a key challenge for biobehavioral oncology research. This mini-review will summarize current knowledge on psychological and physiological adaptation processes affected throughout the stress of the cancer experience, and the effects of psychosocial interventions on psychological adaptation, cancer disease progression, and changes in stress-related biobehavioral processes that may mediate intervention effects on clinical cancer outcomes. Very recent intervention work in breast cancer will be used to illuminate emerging trends in molecular probes of

  4. Process-morphology scaling relations quantify self-organization in capillary densified nanofiber arrays.

    Science.gov (United States)

    Kaiser, Ashley L; Stein, Itai Y; Cui, Kehang; Wardle, Brian L

    2018-02-07

    Capillary-mediated densification is an inexpensive and versatile approach to tune the application-specific properties and packing morphology of bulk nanofiber (NF) arrays, such as aligned carbon nanotubes. While NF length governs elasto-capillary self-assembly, the geometry of cellular patterns formed by capillary densified NFs cannot be precisely predicted by existing theories. This originates from the recently quantified orders of magnitude lower than expected NF array effective axial elastic modulus (E), and here we show via parametric experimentation and modeling that E determines the width, area, and wall thickness of the resulting cellular pattern. Both experiments and models show that further tuning of the cellular pattern is possible by altering the NF-substrate adhesion strength, which could enable the broad use of this facile approach to predictably pattern NF arrays for high value applications.

  5. Adaptive digital image processing in real time: First clinical experiences

    International Nuclear Information System (INIS)

    Andre, M.P.; Baily, N.A.; Hier, R.G.; Edwards, D.K.; Tainer, L.B.; Sartoris, D.J.

    1986-01-01

    The promise of computer image processing has generally not been realized in radiology, partly because the methods advanced to date have been expensive, time-consuming, or inconvenient for clinical use. The authors describe a low-cost system which performs complex image processing operations on-line at video rates. The method uses a combination of unsharp mask subtraction (for low-frequency suppression) and statistical differencing (which adjusts the gain at each point of the image on the basis of its variation from a local mean). The operator interactively adjusts aperture size, contrast gain, background subtraction, and spatial noise reduction. The system is being evaluated for on-line fluoroscopic enhancement, for which phantom measurements and clinical results, including lithotripsy, are presented. When used with a video camera, postprocessing of radiographs was advantageous in a variety of studies, including neonatal chest studies. Real-time speed allows use of the system in the reading room as a "variable view box."
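The two operations can be sketched offline as follows; the aperture, gain, and background weights are illustrative stand-ins for the system's interactively adjusted parameters.

```python
import numpy as np

def box_mean(img, radius):
    """Local mean over a (2*radius+1)^2 window (edge-padded)."""
    k = 2 * radius + 1
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def enhance(image, radius=7, gain=1.5, background=0.7, eps=1e-6):
    """Unsharp mask subtraction (low-frequency suppression) plus
    statistical differencing, which scales each pixel's deviation from
    its local mean by the inverse of the local standard deviation."""
    img = image.astype(float)
    mean = box_mean(img, radius)
    var = box_mean(img**2, radius) - mean**2
    std = np.sqrt(np.clip(var, 0.0, None))
    unsharp = img - background * mean               # suppress low frequencies
    detail = (img - mean) / (std + eps)             # locally normalized contrast
    return unsharp + gain * detail

# a synthetic step image: edges are boosted, flat regions pass through
image = np.zeros((20, 20))
image[:, 10:] = 1.0
enhanced = enhance(image)
```

In flat regions the local mean equals the pixel value, so the statistical-differencing term vanishes and only the background-subtracted image remains.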

  6. Physical exercise, inflammatory process and adaptive condition: an overview

    OpenAIRE

    Silva, Fernando Oliveira Catanho da; Macedo, Denise Vaz

    2011-01-01

    Physical exercise induces inflammation, a physiological response that is part of immune system activity and promotes tissue remodeling after exercise overload. The activation of the inflammatory process is local and systemic and is mediated by different cells and secreted compounds. The objective is to reestablish organ homeostasis after a single bout of exercise or after several exercise sessions. The acute-phase response involves the combined actions of activated leukocytes, cytokines, acut...

  7. Distributed Sensing and Processing Adaptive Collaboration Environment (D-SPACE)

    Science.gov (United States)

    2014-07-01

    (AFRL/RI report AFRL-RI-RS-TR-2014-195, Rome NY.) "Cloud" technologies are not appropriate for situation understanding in areas of denial, where computation resources are limited, data not easily ... graph matching process. D-SPACE distributes graph exploitation among a network of autonomous computational resources, designs the collaboration policy

  8. Uniform illumination rendering using an array of LEDs: a signal processing perspective

    OpenAIRE

    Yang, Hongming; Bergmans, J.W.M.; Schenk, T.C.W.; Linnartz, J.P.M.G.; Rietman, R.

    2009-01-01

    An array of a large number of LEDs will be widely used in future indoor illumination systems. In this paper, we investigate the problem of rendering uniform illumination by a regular LED array on the ceiling of a room. We first present two general results on the scaling property of the basic illumination pattern, i.e., the light pattern of a single LED, and the setting of LED illumination levels, respectively. Thereafter, we propose to use the relative mean squared error as the cost function ...
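Once a basic illumination pattern is assumed, choosing LED levels to minimize the mean squared error against a uniform target reduces to a least-squares problem. The Gaussian footprint, grid, and clipping below are illustrative assumptions, not the paper's model of the basic illumination pattern.

```python
import numpy as np

def led_levels(positions, grid, target, sigma=1.0):
    """Choose LED dimming levels that minimize the mean squared error
    between the rendered illuminance and a uniform target level.  The
    single-LED pattern is modeled as a Gaussian footprint (an
    assumption for illustration)."""
    # A[i, j] = contribution of LED j to evaluation point i
    d2 = ((grid[:, None, :] - positions[None, :, :]) ** 2).sum(axis=-1)
    A = np.exp(-d2 / (2.0 * sigma**2))
    levels, *_ = np.linalg.lstsq(A, np.full(len(grid), float(target)),
                                 rcond=None)
    return np.clip(levels, 0.0, None), A

# a regular 4x4 LED array rendering onto a finer evaluation grid
leds = np.stack(np.meshgrid(np.linspace(0, 3, 4),
                            np.linspace(0, 3, 4)), -1).reshape(-1, 2)
pts = np.stack(np.meshgrid(np.linspace(0, 3, 13),
                           np.linspace(0, 3, 13)), -1).reshape(-1, 2)
levels, A = led_levels(leds, pts, target=1.0)
rendered = A @ levels
```

Clipping enforces the physical constraint that dimming levels cannot be negative; a constrained solver would fold this into the optimization itself.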

  9. Adaptive multiparameter control: application to a Rapid Thermal Processing process; Commande Adaptative Multivariable: Application a un Procede de Traitement Thermique Rapide

    Energy Technology Data Exchange (ETDEWEB)

    Morales Mago, S J

    1995-12-20

    In this work the problem of temperature uniformity control in rapid thermal processing is addressed by means of multivariable adaptive control. Rapid Thermal Processing (RTP) is a set of techniques proposed for semiconductor fabrication processes such as annealing, oxidation, chemical vapour deposition and others. The product quality depends on two main issues: precise trajectory following and spatial temperature uniformity. RTP is a fabrication technique that requires a sophisticated real-time multivariable control system to achieve acceptable results. Modelling of the thermal behaviour of the process leads to very complex mathematical models. These are the reasons why adaptive control techniques are chosen. A multivariable linear discrete-time model of the highly non-linear process is identified on-line, using an identification scheme which includes supervisory actions. This identified model, combined with a multivariable predictive control law, makes the controller robust to system variations. The control laws are obtained by minimization of a quadratic cost function or by pole placement. In some of these control laws, a partial state reference model was included. This reference model incorporates an appropriate tracking capability into the control law. Experimental results of the application of the involved multivariable adaptive control laws on an RTP system are presented. (author) refs

  10. Influencing adaptation processes on the Australian rangelands for social and ecological resilience

    Directory of Open Access Journals (Sweden)

    Nadine A. Marshall

    2014-06-01

    Resource users require the capacity to cope and adapt to climate changes affecting resource condition if they, and their industries, are to remain viable. Understanding individual-scale responses to a changing climate will be an important component of designing well-targeted, broad-scale strategies and policies. Because of the interdependencies between people and ecosystems, understanding and supporting resilience of resource-dependent people may be as important an aspect of effective resource management as managing the resilience of ecological components. We refer to the northern Australian rangelands as an example of a system that is particularly vulnerable to the impacts of climate change and look for ways to enhance the resilience of the system. Vulnerability of the social system comprises elements of adaptive capacity and sensitivity to change (resource dependency) as well as exposure, which is not examined here. We assessed the adaptive capacity of 240 cattle producers, using four established dimensions, and investigated the association between adaptive capacity and climate sensitivity (or resource dependency) as measured through 14 established dimensions. We found that occupational identity, employability, networks, strategic approach, environmental awareness, dynamic resource use, and use of technology were all positively correlated with at least one dimension of adaptive capacity and that place attachment was negatively correlated with adaptive capacity. These results suggest that adaptation processes could be influenced by focusing on adaptive capacity and these aspects of climate sensitivity. Managing the resilience of individuals is critical to processes of adaptation at higher levels and needs greater attention if adaptation processes are to be shaped and influenced.

  11. Root locus analysis and design of the adaptation process in active noise control.

    Science.gov (United States)

    Tabatabaei Ardekani, Iman; Abdulla, Waleed H

    2012-10-01

    This paper applies root locus theory to develop a graphical tool for the analysis and design of adaptive active noise control systems. It is shown that the poles of the adaptation process performed in these systems move on typical trajectories in the z-plane as the adaptation step-size varies. Based on this finding, the dominant root of the adaptation process and its trajectory can be determined. The first contribution of this paper is formulating parameters of the adaptation process root locus. The next contribution is introducing a mechanism for modifying the trajectory of the dominant root in the root locus. This mechanism creates a single open loop zero in the original root locus. It is shown that appropriate localization of this zero can cause the dominant root of the locus to be pushed toward the origin, and thereby the adaptation process becomes faster. The validity of the theoretical findings is confirmed in an experimental setup which is implemented using real-time multi-threading and multi-core processing techniques.
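    The pole-trajectory idea can be illustrated with the textbook model of an LMS-type adaptation process, in which each eigenvalue lam of the reference-signal correlation matrix contributes a real pole at z = 1 - mu*lam. This scalar model, and the eigenvalues used, are simplifications for illustration, not the paper's secondary-path analysis:

```python
# Each eigenvalue lam of the reference-signal correlation matrix contributes
# a real pole of the adaptation process at z = 1 - mu*lam; the slowest
# (dominant) pole governs convergence speed. Eigenvalues here are invented.
lams = [0.2, 1.0, 3.0]

def dominant_pole(mu):
    return max(abs(1.0 - mu * lam) for lam in lams)

mus = [0.01 * k for k in range(1, 70)]
poles = [dominant_pole(mu) for mu in mus]
best_mu = mus[poles.index(min(poles))]
# The optimum balances the slowest and fastest modes:
# 1 - mu*0.2 = mu*3.0 - 1  =>  mu = 2/3.2 = 0.625; beyond mu = 2/3 the
# fastest mode's pole leaves the unit circle and adaptation diverges.
```

    As the step size grows, the dominant pole first moves toward the origin (faster adaptation) and then back out toward instability, which is exactly the kind of trajectory the root-locus analysis formalizes.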

  12. Processing And Display Of Medical Three Dimensional Arrays Of Numerical Data Using Octree Encoding

    Science.gov (United States)

    Amans, Jean-Louis; Darier, Pierre

    1986-05-01

    Imaging modalities such as X-ray Computerized Tomography (CT), Nuclear Medicine and Nuclear Magnetic Resonance can produce three-dimensional (3-D) arrays of numerical data of the internal structures of medical objects. The analysis of 3-D data by synthetic generation of realistic images is an important area of computer graphics and imaging.

  13. Mathematical analysis of the real time array PCR (RTA PCR) process

    NARCIS (Netherlands)

    Dijksman, Johan Frederik; Pierik, A.

    2012-01-01

    Real time array PCR (RTA PCR) is a recently developed biochemical technique that measures amplification curves (like with quantitative real time Polymerase Chain Reaction (qRT PCR)) of a multitude of different templates in a sample. It combines two different methods in order to profit from the

  14. Co-Prime Frequency and Aperture Design for HF Surveillance, Wideband Radar Imaging, and Nonstationary Array Processing

    Science.gov (United States)

    2018-03-10

    ...circuit boards. A computational electromagnetics software package, FEKO [24], is used to model the antenna arrays, and the RMIM [12] is used to...

  15. Preliminary Investigation of Transmedia Narratives and the Process of Narrative Brand Expansion: Transmedia Adaptation in Picturebooks

    Directory of Open Access Journals (Sweden)

    Yu-Chai Lai

    2016-01-01

    Transmedia narrators can use the intermediacy of images and text as a foundation to develop story networks. These narrators can also use various forms of technology to recreate a variety of aesthetic responses in readers. In this study, we analyzed the narrative strategies of adaptation in examples of transmedia adaptation among winners of international picture book awards. In artistic terms, the horizons of expectation of adapters, the readers of fiction, and the inviting structures extended from intermediacy play key roles in aesthetic communication. How adapters use the materials of intermediacy as filler or to expand on negative speculation also influences the relaying process. In this study, we clarified that in addition to considering aesthetic judgments, adaptation must also adhere to the economy of aesthetics.

  16. The influence of negative stimulus features on conflict adaptation: Evidence from fluency of processing

    Directory of Open Access Journals (Sweden)

    Julia eFritz

    2015-02-01

    Cognitive control enables adaptive behavior in a dynamically changing environment. In this context, one prominent adaptation effect is the sequential conflict adjustment, i.e. the observation of reduced response interference on trials following conflict trials. Increasing evidence suggests that such response conflicts are registered as aversive signals. So far, however, the functional role of this aversive signal for conflict adaptation to occur has not been put to test directly. In two experiments, the affective valence of conflict stimuli was manipulated by fluency of processing (stimulus contrast). Experiment 1 used a flanker interference task, Experiment 2 a color-word Stroop task. In both experiments, conflict adaptation effects were only present in fluent, but absent in disfluent trials. Results thus speak against the simple idea that any aversive stimulus feature is suited to promote specific conflict adjustments. Two alternative but not mutually exclusive accounts, namely resource competition and adaptation-by-motivation, will be discussed.

  17. Linear-Array Photoacoustic Imaging Using Minimum Variance-Based Delay Multiply and Sum Adaptive Beamforming Algorithm

    OpenAIRE

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2017-01-01

    In Photoacoustic imaging (PA), Delay-and-Sum (DAS) beamformer is a common beamforming algorithm having a simple implementation. However, it results in a poor resolution and high sidelobes. To address these challenges, a new algorithm namely Delay-Multiply-and-Sum (DMAS) was introduced having lower sidelobes compared to DAS. To improve the resolution of DMAS, a novel beamformer is introduced using Minimum Variance (MV) adaptive beamforming combined with DMAS, so-called Minimum Variance-Based D...

  18. Real-time data acquisition and parallel data processing solution for TJ-II Bolometer arrays diagnostic

    Energy Technology Data Exchange (ETDEWEB)

    Barrera, E. [Departamento de Sistemas Electronicos y de Control, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain)]. E-mail: eduardo.barrera@upm.es; Ruiz, M. [Grupo de Investigacion en Instrumentacion y Acustica Aplicada, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain); Lopez, S. [Departamento de Sistemas Electronicos y de Control, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain); Machon, D. [Departamento de Sistemas Electronicos y de Control, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain); Vega, J. [Asociacion EURATOM/CIEMAT para Fusion, 28040 Madrid (Spain); Ochando, M. [Asociacion EURATOM/CIEMAT para Fusion, 28040 Madrid (Spain)

    2006-07-15

    Maps of local plasma emissivity of TJ-II plasmas are determined using three-array cameras of silicon photodiodes (AXUV type from IRD). They are assigned to the top and side ports of the same sector of the vacuum vessel. Each array consists of 20 unfiltered detectors. The signals from each of these detectors are the inputs to an iterative algorithm of tomographic reconstruction. Currently, these signals are acquired by a PXI standard system at approximately 50 kS/s, with 12 bits of resolution, and are stored for off-line processing. A 0.5 s discharge generates 3 Mbytes of raw data. The algorithm's load exceeds the CPU capacity of the PXI system's controller in continuous mode, making it unfeasible to process the samples in parallel with their acquisition in a PXI standard system. A new architecture model has been developed, making it possible to add one or several processing cards to a standard PXI system. With this model, it is possible to define how to distribute, in real-time, the data from all acquired signals in the system among the processing cards and the PXI controller. This way, by distributing the data processing among the system controller and two processing cards, the data processing can be done in parallel with the acquisition. Hence, this system configuration would be able to measure even in long pulse devices.

  19. The Process of Adaptation Following a New Diagnosis of Type 1 Diabetes in Adulthood

    DEFF Research Database (Denmark)

    Due-Christensen, Mette; Zoffmann, Vibeke; Willaing, Ingrid

    2018-01-01

    While Type 1 diabetes (T1D) is generally associated with childhood, half of all cases occur in adulthood. The adaptive strategies individuals employ during the initial adaptive phase may have an important impact on their risk of future diabetes complications and their psychosocial well-being. We conducted a systematic review of six databases and included nine qualitative studies in a meta-synthesis, the aims of which were to develop a better understanding of how adults newly diagnosed with T1D experience the diagnosis and the phenomena associated with the early process of adaptation to life...

  20. Adaptive nonparametric estimation for Lévy processes observed at low frequency

    OpenAIRE

    Kappus, Johanna

    2013-01-01

    This article deals with adaptive nonparametric estimation for Lévy processes observed at low frequency. For general linear functionals of the Lévy measure, we construct kernel estimators, provide upper risk bounds and derive rates of convergence under regularity assumptions. Our focus lies on the adaptive choice of the bandwidth, using model selection techniques. We face here a non-standard problem of model selection with unknown variance. A new approach towards this problem is proposed, ...

  1. Adaptive Convergence Rates of a Dirichlet Process Mixture of Multivariate Normals

    OpenAIRE

    Tokdar, Surya T.

    2011-01-01

    It is shown that a simple Dirichlet process mixture of multivariate normals offers Bayesian density estimation with adaptive posterior convergence rates. Toward this, a novel sieve for non-parametric mixture densities is explored, and its rate adaptability to various smoothness classes of densities in arbitrary dimension is demonstrated. This sieve construction is expected to offer a substantial technical advancement in studying Bayesian non-parametric mixture models based on stick-breaking p...

  2. Catalyzing alignment processes - Impacts of local adaptations of EMS standards in Thailand

    DEFF Research Database (Denmark)

    Jørgensen, Ulrik; Lauridsen, Erik Hagelskjær

    2004-01-01

    ISO14000 as an EMS can be followed as a travelling standard that has to be adapted and domesticated in the local context, where it is applied. By following the processes of this adaptation and how it changes the coherence between the companies, the regulators and other stakeholders the role...... of the standard is identified. The article is based on a number of case-studies of implementation of EMS in Thai companies....

  3. Analysis of an M/G/1 queue with customer impatience and an adaptive arrival process

    NARCIS (Netherlands)

    Boxma, O.J.; Prabhu, B.J.

    2009-01-01

    We study an M/G/1 queue with impatience and an adaptive arrival process. The rate of the arrival process changes according to whether an incoming customer is accepted or rejected. We analyse two different models for impatience: (i) based on workload, and (ii) based on queue length. For the
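    A discrete-event sketch of model (i), workload-based impatience with an adaptive arrival rate, illustrates the feedback between rejections and the arrival intensity. All numeric values, the exponential service law (which makes this toy an M/M/1 special case), and the rate bounds are illustrative assumptions; the paper treats these models analytically:

```python
import random

random.seed(1)

def simulate(n_customers=20000, lam0=1.0, mean_service=0.8, patience=2.0,
             up=1.05, down=0.8):
    # M/G/1 with workload-based impatience: an arrival joins only if the
    # server's remaining workload is below its (deterministic) patience.
    # The arrival rate adapts: multiplied by `up` on acceptance and by
    # `down` on rejection, bounded to keep the process well behaved.
    lam, t, completion, accepted = lam0, 0.0, 0.0, 0
    for _ in range(n_customers):
        t += random.expovariate(lam)                 # next arrival epoch
        workload = max(0.0, completion - t)          # unfinished work in system
        if workload < patience:                      # customer joins
            service = random.expovariate(1.0 / mean_service)
            completion = max(completion, t) + service
            accepted += 1
            lam = min(lam * up, 5.0)
        else:                                        # customer balks
            lam = max(lam * down, 0.1)
    return accepted / n_customers

p_accept = simulate()
```

    The multiplicative up/down rule settles near the acceptance probability at which up-moves and down-moves balance, which is the kind of equilibrium behavior the adaptive-arrival analysis captures.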

  4. [Role adaptation process of elementary school health teachers: establishing their own positions].

    Science.gov (United States)

    Lee, Jeong Hee; Lee, Byoung Sook

    2014-06-01

    The purpose of this study was to explore and identify patterns from the phenomenon of the role adaptation process in elementary school health teachers and finally, suggest a model to describe the process. Grounded theory methodology and focus group interviews were used. Data were collected from 24 participants of four focus groups. The questions used were about their experience of role adaptation including situational contexts and interactional coping strategies. Transcribed data and field notes were analyzed with continuous comparative analysis. The core category was 'establishing their own positions', an interactional coping strategy. The phenomenon identified by participants was confusion and wandering in their role performance. Influencing contexts were unclear beliefs for their role as health teachers and non-supportive job environments. The result of the adaptation process was consolidation of their positions. Pride as health teachers and social recognition and supports intervened to produce that result. The process had three stages; entry, growth, and maturity. The role adaptation process of elementary school health teachers can be explained as establishing, strengthening and consolidating their own positions. Results of this study can be used as fundamental information for developing programs to support the role adaptation of health teachers.

  5. Fabrication process for CMUT arrays with polysilicon electrodes, nanometre precision cavity gaps and through-silicon vias

    International Nuclear Information System (INIS)

    Due-Hansen, J; Poppe, E; Summanwar, A; Jensen, G U; Breivik, L; Wang, D T; Schjølberg-Henriksen, K; Midtbø, K

    2012-01-01

    Capacitive micromachined ultrasound transducers (CMUTs) can be used to realize miniature ultrasound probes. Through-silicon vias (TSVs) allow for close integration of the CMUT and read-out electronics. A fabrication process enabling the realization of a CMUT array with TSVs is being developed. The integrated process requires the formation of highly doped polysilicon electrodes with low surface roughness. A process for polysilicon film deposition, doping, CMP, RIE and thermal annealing has been developed that resulted in a film with a sheet resistance of 4.0 Ω/□ and a surface roughness of 1 nm rms. The surface roughness of the polysilicon film was found to increase with higher phosphorus concentrations. The surface roughness also increased when oxygen was present in the thermal annealing ambient. The RIE process for etching CMUT cavities in the doped polysilicon gave a mean etch depth of 59.2 ± 3.9 nm and a uniformity across the wafer ranging from 1.0 to 4.7%. The two presented processes are key processes that enable the fabrication of CMUT arrays suitable for applications such as intravascular cardiology and gastrointestinal imaging. (paper)

  6. Conversion of electromagnetic energy in Z-pinch process of single planar wire arrays at 1.5 MA

    International Nuclear Information System (INIS)

    Liangping, Wang; Mo, Li; Juanjuan, Han; Ning, Guo; Jian, Wu; Aici, Qiu

    2014-01-01

    The electromagnetic energy conversion in the Z-pinch process of single planar wire arrays was studied on Qiangguang generator (1.5 MA, 100 ns). Electrical diagnostics were established to monitor the voltage of the cathode-anode gap and the load current for calculating the electromagnetic energy. Lumped-element circuit model of wire arrays was employed to analyze the electromagnetic energy conversion. Inductance as well as resistance of a wire array during the Z-pinch process was also investigated. Experimental data indicate that the electromagnetic energy is mainly converted to magnetic energy and kinetic energy and ohmic heating energy can be neglected before the final stagnation. The kinetic energy can be responsible for the x-ray radiation before the peak power. After the stagnation, the electromagnetic energy coupled by the load continues increasing and the resistance of the load achieves its maximum of 0.6–1.0 Ω in about 10–20 ns

  7. Optimal control of stretching process of flexible solar arrays on spacecraft based on a hybrid optimization strategy

    Directory of Open Access Journals (Sweden)

    Qijia Yao

    2017-07-01

    The optimal control of multibody spacecraft during the stretching process of solar arrays is investigated, and a hybrid optimization strategy based on Gauss pseudospectral method (GPM) and direct shooting method (DSM) is presented. First, the elastic deformation of flexible solar arrays was described approximately by the assumed mode method, and a dynamic model was established by the second Lagrangian equation. Then, the nonholonomic motion planning problem is transformed into a nonlinear programming problem by using GPM. By giving fewer LG points, initial values of the state variables and control variables were obtained. A serial optimization framework was adopted to obtain the approximate optimal solution from a feasible solution. Finally, the control variables were discretized at LG points, and the precise optimal control inputs were obtained by DSM. The optimal trajectory of the system can be obtained through numerical integration. Through numerical simulation, the stretching process of solar arrays is stable with no detours, and the control inputs match the various constraints of actual conditions. The results indicate that the method is effective with good robustness. Keywords: Motion planning, Multibody spacecraft, Optimal control, Gauss pseudospectral method, Direct shooting method

  8. Fully Integrated Linear Single Photon Avalanche Diode (SPAD) Array with Parallel Readout Circuit in a Standard 180 nm CMOS Process

    Science.gov (United States)

    Isaak, S.; Bull, S.; Pitter, M. C.; Harrison, Ian.

    2011-05-01

    This paper reports on the development of a SPAD device and its subsequent use in an actively quenched single photon counting imaging system, fabricated in a UMC 0.18 μm CMOS process. A low-doped p- guard ring (t-well layer) encircles the active area to prevent premature reverse breakdown. The array is a 16×1 parallel output SPAD array, which comprises an actively quenched SPAD circuit in each pixel, with the current value being set by an external resistor RRef = 300 kΩ. The SPAD I-V response ID was found to increase slowly until VBD was reached at an excess bias voltage Ve = 11.03 V, and then to increase rapidly due to avalanche multiplication. Digital circuitry to control the SPAD array and perform the necessary data processing was designed in VHDL and implemented on an FPGA chip. At room temperature, the dark count was found to be approximately 13 kHz for most of the 16 SPAD pixels, and the dead time was estimated to be 40 ns.

  9. Maximum-likelihood methods for array processing based on time-frequency distributions

    Science.gov (United States)

    Zhang, Yimin; Mu, Weifeng; Amin, Moeness G.

    1999-11-01

    This paper proposes a novel time-frequency maximum likelihood (t-f ML) method for direction-of-arrival (DOA) estimation for non-stationary signals, and compares this method with conventional maximum likelihood DOA estimation techniques. Time-frequency distributions localize the signal power in the time-frequency domain, and as such enhance the effective SNR, leading to improved DOA estimation. The localization of signals with different t-f signatures permits the division of the time-frequency domain into smaller regions, each containing fewer signals than those incident on the array. The reduction of the number of signals within different time-frequency regions not only reduces the required number of sensors, but also decreases the computational load in multi-dimensional optimizations. Compared to the recently proposed time-frequency MUSIC (t-f MUSIC), the proposed t-f ML method can be applied in coherent environments, without the need to perform any type of preprocessing that is subject to both array geometry and array aperture.
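    For the special case of a single narrowband source in white noise, the ML DOA estimate reduces to the peak of the conventional beamformer spectrum, which gives a compact baseline example. The array geometry, SNR, and scenario values below are assumptions for illustration, and no time-frequency processing is applied:

```python
import numpy as np

rng = np.random.default_rng(7)
n_el, n_snap, theta_true = 8, 200, 20.0     # elements, snapshots, degrees

def steering(theta_deg):
    # Half-wavelength uniform linear array steering vector.
    k = np.arange(n_el)
    return np.exp(1j * np.pi * k * np.sin(np.radians(theta_deg)))

# One narrowband source in white noise.
s = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
noise = 0.3 * (rng.standard_normal((n_el, n_snap))
               + 1j * rng.standard_normal((n_el, n_snap)))
X = np.outer(steering(theta_true), s) + noise
R = X @ X.conj().T / n_snap                 # sample covariance matrix

# Single-source ML estimate: peak of the conventional beamformer spectrum.
grid = np.arange(-90.0, 90.5, 0.5)
power = [float(np.real(steering(th).conj() @ R @ steering(th))) for th in grid]
theta_hat = float(grid[int(np.argmax(power))])
```

    The multi-source ML problem replaces this one-dimensional search with a joint optimization over all DOAs, which is exactly the computational load the time-frequency partitioning in the paper reduces.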

  10. Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm.

    Science.gov (United States)

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2018-02-01

    In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, namely delay-multiply-and-sum (DMAS), was introduced, having lower sidelobes than DAS. To improve the resolution of DMAS, a beamformer is introduced using minimum variance (MV) adaptive beamforming combined with DMAS, so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra. It is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at a depth of 45 mm, MVB-DMAS results in about 31, 18, and 8 dB sidelobe reduction compared to DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS leads to improvement in full-width-half-maximum of about 96%, 94%, and 45% and in signal-to-noise ratio of about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. In particular, at a depth of 33 mm in the experimental images, MVB-DMAS results in about 20 dB sidelobe reduction in comparison with the other beamformers. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
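    The DAS and DMAS combiners contrasted above can be sketched on delay-aligned channel data. This is a toy NumPy example: the pulse shape, array size, and noise level are invented, and the MV weighting stage of MVB-DMAS is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def das(ch):
    # Delay-and-sum on already delay-aligned channel samples.
    return ch.sum(axis=0)

def dmas(ch):
    # Delay-multiply-and-sum: combine every element pair (i < j) with a
    # signed square root so the product keeps the dimensionality of pressure.
    n, out = ch.shape[0], np.zeros(ch.shape[1])
    for i in range(n):
        for j in range(i + 1, n):
            prod = ch[i] * ch[j]
            out += np.sign(prod) * np.sqrt(np.abs(prod))
    return out

# 16 delay-aligned channels containing the same pulse plus independent noise.
n_el, n_t = 16, 200
t = np.arange(n_t)
pulse = np.exp(-0.5 * ((t - 100) / 5.0) ** 2) * np.cos(0.6 * (t - 100))
coherent = np.tile(pulse, (n_el, 1)) + 0.1 * rng.standard_normal((n_el, n_t))

# Both beamformers peak at the pulse; DMAS weights coherent pairs more
# heavily, which is what lowers its sidelobes relative to DAS.
b_das, b_dmas = das(coherent), dmas(coherent)
```

    Expanding the pairwise sum shows DMAS equals 0.5 * ((sum of signed-root samples)^2 - sum of their squares), the "multiple terms representing a DAS algebra" that MVB-DMAS replaces with MV-weighted sums.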

  12. Environmentally adaptive processing for shallow ocean applications: A sequential Bayesian approach.

    Science.gov (United States)

    Candy, J V

    2015-09-01

    The shallow ocean is a changing environment primarily due to temperature variations in its upper layers directly affecting sound propagation throughout. The need to develop processors capable of tracking these changes implies a stochastic as well as an environmentally adaptive design. Bayesian techniques have evolved to enable a class of processors capable of performing in such an uncertain, nonstationary (varying statistics), non-Gaussian, variable shallow ocean environment. A solution to this problem is addressed by developing a sequential Bayesian processor capable of providing a joint solution to the modal function tracking and environmental adaptivity problem. Here, the focus is on the development of both a particle filter and an unscented Kalman filter capable of providing reasonable performance for this problem. These processors are applied to hydrophone measurements obtained from a vertical array. The adaptivity problem is attacked by allowing the modal coefficients and/or wavenumbers to be jointly estimated from the noisy measurement data along with tracking of the modal functions while simultaneously enhancing the noisy pressure-field measurements.
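    A bootstrap particle filter of the kind mentioned above, reduced to a scalar random-walk state observed in noise, shows the predict-weight-resample cycle. All model parameters are illustrative stand-ins for the modal-function tracking problem, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

T, N = 200, 500                       # time steps, particles
q, r = 0.05, 0.5                      # process / measurement noise std

truth = np.cumsum(q * rng.standard_normal(T))       # slowly drifting "mode"
obs = truth + r * rng.standard_normal(T)            # noisy hydrophone-like data

particles = 0.5 * rng.standard_normal(N)
est = np.empty(T)
for k in range(T):
    particles = particles + q * rng.standard_normal(N)      # predict
    w = np.exp(-0.5 * ((obs[k] - particles) / r) ** 2)      # likelihood weights
    w = w / w.sum()
    est[k] = float(w @ particles)                           # posterior mean
    particles = particles[rng.choice(N, size=N, p=w)]       # resample

rmse_pf = float(np.sqrt(np.mean((est - truth) ** 2)))
rmse_obs = float(np.sqrt(np.mean((obs - truth) ** 2)))
```

    The environmentally adaptive version augments the state with modal coefficients and wavenumbers so they are jointly estimated by the same recursion.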

  13. Design of a phased array for the generation of adaptive radiation force along a path surrounding a breast lesion for dynamic ultrasound elastography imaging.

    Science.gov (United States)

    Ekeom, Didace; Hadj Henni, Anis; Cloutier, Guy

    2013-03-01

    This work demonstrates, with numerical simulations, the potential of an octagonal probe for the generation of radiation forces in a set of points following a path surrounding a breast lesion in the context of dynamic ultrasound elastography imaging. Because of the in-going wave adaptive focusing strategy, the proposed method is adapted to induce shear wave fronts to interact optimally with complex lesions. Transducer elements were based on 1-3 piezocomposite material. Three-dimensional simulations combining the finite element method and boundary element method with periodic boundary conditions in the elevation direction were used to predict acoustic wave radiation in a targeted region of interest. The coupling factor of the piezocomposite material and the radiated power of the transducer were optimized. The transducer's electrical impedance was targeted to 50 Ω. The probe was simulated by assembling the designed transducer elements to build an octagonal phased-array with 256 elements on each edge (for a total of 2048 elements). The central frequency is 4.54 MHz; simulated transducer elements are able to deliver enough power and can generate the radiation force with a relatively low level of voltage excitation. Using dynamic transmitter beamforming techniques, the radiation force along a path and resulting acoustic pattern in the breast were simulated assuming a linear isotropic medium. Magnitude and orientation of the acoustic intensity (radiation force) at any point of a generation path could be controlled for the case of an example representing a heterogeneous medium with an embedded soft mechanical inclusion.

  14. Adaption of an array spectroradiometer for total ozone column retrieval using direct solar irradiance measurements in the UV spectral range

    Science.gov (United States)

    Zuber, Ralf; Sperfeld, Peter; Riechelmann, Stefan; Nevas, Saulius; Sildoja, Meelis; Seckmeyer, Gunther

    2018-04-01

    A compact array spectroradiometer that enables precise and robust measurements of solar UV spectral direct irradiance is presented. We show that this instrument can retrieve total ozone column (TOC) accurately. The internal stray light, which is often the limiting factor for measurements in the UV spectral range and increases the uncertainty for TOC analysis, is physically reduced so that no other stray-light reduction methods, such as mathematical corrections, are necessary. The instrument has been extensively characterised at the Physikalisch-Technische Bundesanstalt (PTB) in Germany. During an international total ozone measurement intercomparison at the Izaña Atmospheric Observatory in Tenerife, the instrument's suitability for high-quality measurements was verified with measurements of the direct solar irradiance and subsequent TOC evaluations based on the spectral data measured between 12 and 30 September 2016. The results showed TOC deviations of less than 1.5 % from most other instruments in most situations, not exceeding 3 % from established TOC measurement systems such as Dobson or Brewer.
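    The algebra underlying a direct-sun TOC retrieval is Beer-Lambert absorption at wavelengths with differing ozone cross sections. The following is a toy two-channel inversion: the cross sections, air-mass factor, and irradiance ratio are invented numbers to illustrate the calculation, not characterizations of the instrument:

```python
import math

sigma_a, sigma_b = 3.0e-19, 0.5e-19   # O3 cross sections, cm^2 (invented)
mu = 1.5                              # relative ozone air-mass factor (invented)
DU = 2.687e16                         # molecules per cm^2 in one Dobson unit

# Forward model: ratio of two direct-sun channels attenuated by ozone only.
toc_true = 300.0                      # DU
ratio_top = 1.0                       # extraterrestrial channel ratio (invented)
ratio = ratio_top * math.exp(-(sigma_a - sigma_b) * toc_true * DU * mu)

# Retrieval inverts the same Beer-Lambert law:
toc = math.log(ratio_top / ratio) / ((sigma_a - sigma_b) * DU * mu)
```

    Internal stray light corrupts the measured channel ratio, which is why reducing it physically, as the abstract emphasizes, directly lowers the TOC uncertainty.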

  15. The application of adaptive Luenberger observer concept in chemical process control: An algorithmic approach

    Science.gov (United States)

    Doko, Marthen Luther

    2017-05-01

    This paper considers a wide class of on-line parameter estimation schemes for estimating the unknown parameter vector that appears in certain general linear and bilinear parametric models; such models arise as parametrizations of LTI processes or plants, as well as of some special classes of nonlinear processes or plants. The result is used to design one of the important tools in control for stable LTI processes or plants: the adaptive observer. The paper considers the design of schemes that simultaneously estimate the plant state variables and parameters by processing the plant I/O measurements on-line; such schemes are referred to as adaptive observers. The design of an adaptive observer is based on the combination of a state observer, which estimates the state variables of a particular plant state-space representation, with an on-line estimation scheme. The choice of the plant state-space representation is crucial for the design and stability analysis of the adaptive observer. The paper discusses a class of observers called adaptive Luenberger observers and their application. Beginning with the observable canonical form, one can find an observability matrix of n linearly independent rows. Using these rows, or linear combinations of them, as a basis, various canonical forms, also known as Luenberger canonical forms, can be obtained. This formulation also leads to various algorithms, including computation of the observable canonical form, the observable Hessenberg form, and reduced-order state observer design.
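    The non-adaptive building block, a Luenberger observer for a known discrete-time plant, can be sketched as follows. The plant matrices and the pole placement are illustrative assumptions; the adaptive version would additionally update the model parameters on-line:

```python
import numpy as np

# Known discrete-time plant x[k+1] = A x[k], measurement y[k] = C x[k].
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
C = np.array([[1.0, 0.0]])
# Output-injection gain chosen so that A - L C has both poles at z = 0.2
# (trace 0.4, determinant 0.04), making the estimation error decay quickly.
L = np.array([[1.5],
              [4.9]])

x = np.array([1.0, -1.0])            # true state, unknown to the observer
xh = np.zeros(2)                     # observer estimate
for _ in range(30):
    y = C @ x                        # measured output
    xh = A @ xh + L @ (y - C @ xh)   # Luenberger update: model + output injection
    x = A @ x

err = float(np.linalg.norm(x - xh))
```

    The estimation error obeys e[k+1] = (A - L C) e[k], so placing the eigenvalues of A - L C inside the unit circle is the whole design freedom; canonical forms make that placement systematic.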

  16. An adaptive deep-coupled GNSS/INS navigation system with hybrid pre-filter processing

    Science.gov (United States)

    Wu, Mouyan; Ding, Jicheng; Zhao, Lin; Kang, Yingyao; Luo, Zhibin

    2018-02-01

    The deep coupling of a global navigation satellite system (GNSS) with an inertial navigation system (INS) can provide accurate and reliable navigation information. There are several kinds of deeply-coupled structures. These can be divided mainly into coherent and non-coherent pre-filter based structures, each with its own advantages and disadvantages, especially in accuracy and robustness. In this paper, the existing pre-filters of the deeply-coupled structures are first analyzed and modified to improve them. Then, an adaptive GNSS/INS deeply-coupled algorithm with hybrid pre-filter processing is proposed to combine the advantages of coherent and non-coherent structures. An adaptive hysteresis controller is designed to implement the hybrid pre-filter processing strategy. The simulation and vehicle test results show that the adaptive deeply-coupled algorithm with hybrid pre-filter processing can effectively improve navigation accuracy and robustness, especially in a GNSS-challenged environment.
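
    The hysteresis-controlled switching idea can be illustrated with a minimal sketch. The signal-quality metric (a carrier-to-noise density estimate) and both thresholds below are invented for illustration, not values from the paper; the point is that two thresholds prevent rapid toggling between the coherent and non-coherent pre-filters near the switching boundary.

```python
# Hysteresis switch between pre-filter modes (thresholds are assumed, in dB-Hz).
LOW_CN0, HIGH_CN0 = 28.0, 34.0

def select_prefilter(cn0, current):
    """Pick the pre-filter mode; inside the hysteresis band, keep the current mode."""
    if cn0 >= HIGH_CN0:
        return "coherent"      # strong signal: use the more accurate coherent pre-filter
    if cn0 <= LOW_CN0:
        return "noncoherent"   # degraded signal: fall back to the robust non-coherent one
    return current             # inside the band: no switch

mode, trace = "coherent", []
for cn0 in [40.0, 36.0, 31.0, 27.0, 31.0, 33.0, 36.0, 31.0]:
    mode = select_prefilter(cn0, mode)
    trace.append(mode)
```

    Note how the dips to 31 dB-Hz do not cause a switch in either direction: only crossing the outer thresholds does.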

  17. A Climate Change Adaptation Planning Process for Low-Lying, Communities Vulnerable to Sea Level Rise

    Directory of Open Access Journals (Sweden)

    Kristi Tatebe

    2012-09-01

    Full Text Available While the province of British Columbia (BC, Canada, provides guidelines for flood risk management, it is local governments’ responsibility to delineate their own flood vulnerability, assess their risk, and integrate these with planning policies to implement adaptive action. However, barriers such as the lack of locally specific data and public perceptions about adaptation options mean that local governments must address the need for adaptation planning within a context of scientific uncertainty, while building public support for difficult choices on flood-related climate policy and action. This research demonstrates a process to model, visualize and evaluate potential flood impacts and adaptation options for the community of Delta, in Metro Vancouver, across economic, social and environmental perspectives. Visualizations in 2D and 3D, based on hydrological modeling of breach events for existing dike infrastructure, future sea level rise and storm surges, are generated collaboratively, together with future adaptation scenarios assessed against quantitative and qualitative indicators. This ‘visioning package’ is being used with staff and a citizens’ Working Group to assess the performance, policy implications and social acceptability of the adaptation strategies. Recommendations based on the experience of the initiative are provided that can facilitate sustainable future adaptation actions and decision-making in Delta and other jurisdictions.

  18. Interconnection of socio-cultural adaptation and identity in the socialization process

    Directory of Open Access Journals (Sweden)

    L Y Rakhmanova

    2015-12-01

    Full Text Available The article considers the influence of socio-cultural adaptation on the individual's personality and identity structure; it analyzes the processes of primary and secondary socialization in comparison with subsequent adaptation processes, as well as the possibility of a compromise between an unchanging, rigid identity and the ability to adapt flexibly to a changing context. The author identifies positive and negative aspects of adaptation in contemporary society while testing the hypothesis that adaptation, when successful and proceeding within the normal range, helps to preserve the stability of social structures but does not contribute to their development, since it is the maladaptive behavior of individuals and groups that stimulates social transformations. In the second part of the article, the author shows the relationship between socio-cultural identity and individual status in various social communities and tries to answer the question of whether the existence and functioning of a social community as a pure ‘form’ without individuals (its members) is possible. The author describes the identity phenomenon in the context of the opposition of the universal and the unique, of similarities and differences. The article also introduces the concept of involvement in the socio-cultural context as an indicator of the completeness and depth of individual socio-cultural adaptation to a given environment, which is quite important for the internal hierarchy of individual identity.

  19. Dependently typed array programs don’t go wrong

    NARCIS (Netherlands)

    Trojahner, K.; Grelck, C.

    2009-01-01

    The array programming paradigm adopts multidimensional arrays as the fundamental data structures of computation. Array operations process entire arrays instead of just single elements. This makes array programs highly expressive and introduces data parallelism in a natural way. Array programming

  20. Dependently typed array programs don't go wrong

    NARCIS (Netherlands)

    Trojahner, K.; Grelck, C.

    2008-01-01

    The array programming paradigm adopts multidimensional arrays as the fundamental data structures of computation. Array operations process entire arrays instead of just single elements. This makes array programs highly expressive and introduces data parallelism in a natural way. Array programming

  1. ORGANIZATIONAL CULTURE AND LEADERSHIP STYLE: KEY FACTORS IN THE ORGANIZATIONAL ADAPTATION PROCESS

    Directory of Open Access Journals (Sweden)

    Ivona Vrdoljak Raguž

    2017-01-01

    Full Text Available This paper theorizes about how a specific leadership style affects organizational adaptation to the external environment through fostering the desired organizational culture. Adaptation success, the dimensions of organizational culture, and the executive leadership role in fostering a corporate culture conducive to the organizational adaptation process are discussed. The objective of this paper is to highlight the crucial role of top executive managers and their leadership style in creating an internal climate that encourages and strengthens the implementation of changes and adaptation to the environment. The limitations of this paper lie in the fact that the subject matter is discussed only at a theoretical level; its validity should be proved through practical application.

  2. Extending CPN tools with ontologies to support the management of context-adaptive business processes

    OpenAIRE

    Serral Asensio, Estefanía; De Smedt, Johannes; Vanthienen, Jan

    2015-01-01

    Colored Petri Nets (CPN) are a widely used graphical modeling language to manage business processes. Business processes often appear in dynamic environments; therefore, context adaptation has recently emerged as a new challenge to explicitly address fitness between business process modeling and its execution environment. Although CPN can introduce data by defining internal data records, this is not enough to capture the complexity and dynamics of the execution context data. This paper ext...

  3. Modelling and L1 Adaptive Control of pH in Bioethanol Enzymatic Process

    DEFF Research Database (Denmark)

    Prunescu, Remus Mihail; Blanke, Mogens; Sin, Gürkan

    2013-01-01

    for pH level regulation: one is a classical PI controller; the other an L1 adaptive output feedback controller. Model-based feed-forward terms are added to the controllers to enhance their performances. A new tuning method of the L1 adaptive controller is also proposed. Further, a new performance...... function is formulated and tailored to this type of processes and is used to monitor the performances of the process in closed loop. The L1 design is found to outperform the PI controller in all tests....

  4. A Spatiotemporal Indexing Approach for Efficient Processing of Big Array-Based Climate Data with MapReduce

    Science.gov (United States)

    Li, Zhenlong; Hu, Fei; Schnase, John L.; Duffy, Daniel Q.; Lee, Tsengdar; Bowen, Michael K.; Yang, Chaowei

    2016-01-01

    Climate observations and model simulations are producing vast amounts of array-based spatiotemporal data. Efficient processing of these data is essential for assessing global challenges such as climate change, natural disasters, and diseases. This is challenging not only because of the large data volume, but also because of the intrinsic high-dimensional nature of geoscience data. To tackle this challenge, we propose a spatiotemporal indexing approach to efficiently manage and process big climate data with MapReduce in a highly scalable environment. Using this approach, big climate data are directly stored in a Hadoop Distributed File System in its original, native file format. A spatiotemporal index is built to bridge the logical array-based data model and the physical data layout, which enables fast data retrieval when performing spatiotemporal queries. Based on the index, a data-partitioning algorithm is applied to enable MapReduce to achieve high data locality, as well as balancing the workload. The proposed indexing approach is evaluated using the National Aeronautics and Space Administration (NASA) Modern-Era Retrospective Analysis for Research and Applications (MERRA) climate reanalysis dataset. The experimental results show that the index can significantly accelerate querying and processing (a 10× speedup compared to the baseline test using the same computing cluster), while keeping the index-to-data ratio small (0.0328). The applicability of the indexing approach is demonstrated by a climate anomaly detection deployed on a NASA Hadoop cluster. This approach is also able to support efficient processing of general array-based spatiotemporal data in various geoscience domains without special configuration on a Hadoop cluster.
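
    The logical idea behind such a spatiotemporal index can be sketched as a toy grid index. The key scheme, grid resolution, and chunk identifiers below are illustrative assumptions, not MERRA or Hadoop specifics: each array chunk is registered under a (time, lat-band, lon-band) key, so a spatiotemporal query touches only the matching chunks instead of scanning every file.

```python
from collections import defaultdict

index = defaultdict(list)
LAT_STEP = LON_STEP = 10   # coarse grid bands in degrees (illustrative resolution)

def register_chunk(t, lat, lon, offset):
    """Record where the array chunk covering (time, lat, lon) lives on disk."""
    index[(t, int(lat // LAT_STEP), int(lon // LON_STEP))].append(offset)

def query(t_range, lat_range, lon_range):
    """Return offsets of all chunks overlapping the requested box and time span."""
    hits = []
    for t in range(t_range[0], t_range[1] + 1):
        for lb in range(int(lat_range[0] // LAT_STEP), int(lat_range[1] // LAT_STEP) + 1):
            for gb in range(int(lon_range[0] // LON_STEP), int(lon_range[1] // LON_STEP) + 1):
                hits.extend(index.get((t, lb, gb), []))
    return hits

register_chunk(0, 35.0, 120.0, "fileA:0")
register_chunk(0, 75.0, 120.0, "fileA:1")
register_chunk(1, 35.0, 120.0, "fileB:0")
hits = query((0, 0), (30, 39), (120, 129))   # one time step, one lat/lon band
```

    In the paper's setting the values stored per key would be byte offsets into native-format files in HDFS, and the same key grid drives the partitioning that gives each MapReduce task local data.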

  5. A Novel Self-aligned and Maskless Process for Formation of Highly Uniform Arrays of Nanoholes and Nanopillars

    Directory of Open Access Journals (Sweden)

    Wu Wei

    2008-01-01

    Full Text Available Fabrication of a large area of periodic structures with deep sub-wavelength features is required in many applications such as solar cells, photonic crystals, and artificial kidneys. We present a low-cost and high-throughput process for realization of 2D arrays of deep sub-wavelength features using a self-assembled monolayer of hexagonally close packed (HCP) silica and polystyrene microspheres. This method utilizes the microspheres as super-lenses to fabricate nanohole and pillar arrays over large areas on conventional positive and negative photoresist, and with a high aspect ratio. The period and diameter of the holes and pillars formed with this technique can be controlled precisely and independently. We demonstrate that the method can produce HCP arrays of holes of sub-250 nm size using a conventional photolithography system with a broadband UV source centered at 400 nm. We also present our 3D FDTD modeling, which shows good agreement with the experimental results.

  6. Solution-Processed Wide-Bandgap Organic Semiconductor Nanostructures Arrays for Nonvolatile Organic Field-Effect Transistor Memory.

    Science.gov (United States)

    Li, Wen; Guo, Fengning; Ling, Haifeng; Liu, Hui; Yi, Mingdong; Zhang, Peng; Wang, Wenjun; Xie, Linghai; Huang, Wei

    2018-01-01

    In this paper, the development of an organic field-effect transistor (OFET) memory device based on isolated and ordered nanostructure (NS) arrays of the wide-bandgap (WBG) small-molecule organic semiconductor material [2-(9-(4-(octyloxy)phenyl)-9H-fluoren-2-yl)thiophene]3 (WG3) is reported. The WG3 NSs are prepared by phase separation via spin-coating blend solutions of WG3/trimethylolpropane (TMP), and then introduced as charge storage elements for nonvolatile OFET memory devices. Compared to the OFET memory device with a smooth WG3 film, the device based on WG3 NS arrays exhibits significant improvements in memory performance, including a larger memory window (≈45 V), faster switching speed (≈1 s), stable retention capability (>10⁴ s), and reliable switching properties. A quantitative study of the WG3 NS morphology reveals that the enhanced memory performance is attributed to the improved charge trapping/charge-exciton annihilation efficiency induced by the increased contact area between the WG3 NSs and the pentacene layer. This versatile solution-processing approach to preparing WG3 NS arrays as charge trapping sites allows for fabrication of high-performance nonvolatile OFET memory devices, and could be applicable to a wide range of WBG organic semiconductor materials. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Nanofabrication and characterization of ZnO nanorod arrays and branched microrods by aqueous solution route and rapid thermal processing

    International Nuclear Information System (INIS)

    Lupan, Oleg; Chow, Lee; Chai, Guangyu; Roldan, Beatriz; Naitabdi, Ahmed; Schulte, Alfons; Heinrich, Helge

    2007-01-01

    This paper presents an inexpensive and fast fabrication method for one-dimensional (1D) ZnO nanorod arrays and branched two-dimensional (2D) and three-dimensional (3D) nanoarchitectures. Our synthesis technique combines an aqueous solution route with post-growth rapid thermal annealing. It permits rapid and controlled growth of ZnO arrays of 1D rods, 2D crosses, and 3D tetrapods without the use of templates or seeds. The obtained ZnO nanorods are uniformly distributed on the surface of Si substrates, and individual or branched nano/microrods can be easily transferred to other substrates. Process parameters such as concentration, temperature and time, type of substrate, and the reactor design are critical for the formation of nanorod arrays with thin diameters and transferable nanoarchitectures. X-ray diffraction, scanning electron microscopy, X-ray photoelectron spectroscopy, transmission electron microscopy and micro-Raman spectroscopy have been used to characterize the samples

  8. Anti-Hebbian Spike Timing Dependent Plasticity and Adaptive Sensory Processing

    Directory of Open Access Journals (Sweden)

    Patrick D Roberts

    2010-12-01

    Full Text Available Adaptive processing influences the central nervous system's interpretation of incoming sensory information. One of the functions of this adaptive sensory processing is to allow the nervous system to ignore predictable sensory information so that it may focus on important new information needed to improve performance of specific tasks. The mechanism of spike timing-dependent plasticity (STDP) has proven to be intriguing in this context because of its dual role in long-term memory and ongoing adaptation to maintain optimal tuning of neural responses. Some of the clearest links between STDP and adaptive sensory processing have come from in vitro, in vivo, and modeling studies of the electrosensory systems of fish. Plasticity in such systems is anti-Hebbian, i.e. presynaptic inputs that repeatedly precede and hence could contribute to a postsynaptic neuron’s firing are weakened. The learning dynamics of anti-Hebbian STDP learning rules are stable if the timing relations obey strict constraints. The stability of these learning rules leads to clear predictions of how functional consequences can arise from the detailed structure of the plasticity. Here we review the connection between theoretical predictions and functional consequences of anti-Hebbian STDP, focusing on adaptive processing in the electrosensory system of weakly electric fish. After introducing electrosensory adaptive processing and the dynamics of anti-Hebbian STDP learning rules, we address issues of predictive sensory cancellation and novelty detection, descending control of plasticity, synaptic scaling, and optimal sensory tuning. We conclude with examples in other systems where these principles may apply.

  9. Anti-hebbian spike-timing-dependent plasticity and adaptive sensory processing.

    Science.gov (United States)

    Roberts, Patrick D; Leen, Todd K

    2010-01-01

    Adaptive sensory processing influences the central nervous system's interpretation of incoming sensory information. One of the functions of this adaptive sensory processing is to allow the nervous system to ignore predictable sensory information so that it may focus on important novel information needed to improve performance of specific tasks. The mechanism of spike-timing-dependent plasticity (STDP) has proven to be intriguing in this context because of its dual role in long-term memory and ongoing adaptation to maintain optimal tuning of neural responses. Some of the clearest links between STDP and adaptive sensory processing have come from in vitro, in vivo, and modeling studies of the electrosensory systems of weakly electric fish. Plasticity in these systems is anti-Hebbian, so that presynaptic inputs that repeatedly precede, and possibly could contribute to, a postsynaptic neuron's firing are weakened. The learning dynamics of anti-Hebbian STDP learning rules are stable if the timing relations obey strict constraints. The stability of these learning rules leads to clear predictions of how functional consequences can arise from the detailed structure of the plasticity. Here we review the connection between theoretical predictions and functional consequences of anti-Hebbian STDP, focusing on adaptive processing in the electrosensory system of weakly electric fish. After introducing electrosensory adaptive processing and the dynamics of anti-Hebbian STDP learning rules, we address issues of predictive sensory cancelation and novelty detection, descending control of plasticity, synaptic scaling, and optimal sensory tuning. We conclude with examples in other systems where these principles may apply.

  10. Application of adaptive digital signal processing to speech enhancement for the hearing impaired.

    Science.gov (United States)

    Chabries, D M; Christiansen, R W; Brey, R H; Robinette, M S; Harris, R W

    1987-01-01

    A major complaint of individuals with normal hearing and hearing impairments is a reduced ability to understand speech in a noisy environment. This paper describes the concept of adaptive noise cancelling for removing noise from corrupted speech signals. Application of adaptive digital signal processing has long been known and is described from a historical as well as technical perspective. The Widrow-Hoff LMS (least mean square) algorithm developed in 1959 forms the introduction to modern adaptive signal processing. This method uses a "primary" input which consists of the desired speech signal corrupted with noise and a second "reference" signal which is used to estimate the primary noise signal. By subtracting the adaptively filtered estimate of the noise, the desired speech signal is obtained. Recent developments in the field as they relate to noise cancellation are described. These developments include more computationally efficient algorithms as well as algorithms that exhibit improved learning performance. A second method for removing noise from speech, for use when no independent reference for the noise exists, is referred to as single channel noise suppression. Both adaptive and spectral subtraction techniques have been applied to this problem--often with the result of decreased speech intelligibility. Current techniques applied to this problem are described, including signal processing techniques that offer promise in the noise suppression application.
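
    The Widrow-Hoff LMS noise canceller described above can be sketched directly. The signal shapes, filter length, step size, and noise path below are invented for illustration, not taken from the paper: the primary input is speech plus filtered noise, the reference input is the raw noise, and the LMS filter learns the noise path so the cancellation error converges to the speech alone.

```python
import math, random

random.seed(0)
N, TAPS, MU = 4000, 4, 0.01
speech = [0.5 * math.sin(2 * math.pi * 0.01 * n) for n in range(N)]   # stand-in speech
noise = [random.uniform(-1.0, 1.0) for _ in range(N)]                 # reference input
path = [0.6, -0.3]   # unknown acoustic path from the noise source to the primary mic
primary = [speech[n] + path[0] * noise[n] + (path[1] * noise[n - 1] if n > 0 else 0.0)
           for n in range(N)]

w = [0.0] * TAPS
out = []
for n in range(TAPS, N):
    ref = [noise[n - k] for k in range(TAPS)]           # recent reference samples
    y = sum(wk * rk for wk, rk in zip(w, ref))          # estimate of the noise in primary
    e = primary[n] - y                                  # error = cleaned-speech estimate
    w = [wk + MU * e * rk for wk, rk in zip(w, ref)]    # Widrow-Hoff LMS update
    out.append(e)

# mean squared deviation of the cleaned output from the clean speech (last 500 samples)
tail_err = sum((out[i] - speech[i + TAPS]) ** 2
               for i in range(len(out) - 500, len(out))) / 500
```

    Because the speech is uncorrelated with the reference noise, minimizing the error power drives the filter toward the unknown path, and the error itself becomes the recovered speech.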

  11. Fabrication of metal-matrix composites and adaptive composites using ultrasonic consolidation process

    International Nuclear Information System (INIS)

    Kong, C.Y.; Soar, R.C.

    2005-01-01

    Ultrasonic consolidation (UC) has been used to embed thermally sensitive and damage-intolerant fibres within aluminium matrix structures using high frequency, low amplitude, mechanical vibrations. The UC process can induce plastic flow in the metal foils being bonded, allowing fibres to be embedded at typically 25% of the melting temperature of the base metal and at a fraction of the clamping force required by fusion processes. To date, the UC process has successfully embedded Sigma silicon carbide (SiC) fibres, shape memory alloy wires and optical fibres, which are presented in this paper. The eventual aim of this research is the fabrication of adaptive composite structures having the ability to measure external stimuli and respond by adapting their structure accordingly, through the action of embedded active and passive functional fibres within a freeform fabricated metal-matrix structure. This paper presents the fundamental studies of this research to identify embedding methods and the working range for the fabrication of adaptive composite structures. The methods considered have produced embedded fibre specimens in which large amounts of plastic flow have been observed within the matrix as it deforms around the fibres, resulting in fully consolidated specimens without damage to the fibres. The microscopic observation techniques and macroscopic functionality tests confirm that the UC process could be applied to the fabrication of metal-matrix composites and adaptive composites, where fusion techniques are not feasible and where a 'cold' process is necessary

  12. A new process for fabricating nanodot arrays on selective regions with diblock copolymer thin film

    Energy Technology Data Exchange (ETDEWEB)

    Park, Dae-Ho [Department of Materials Science and Engineering, Polymer Research Institute, Pohang University of Science and Technology, San 31, Hyoja-Dong, Nam-Gu, Pohang 790-784 (Korea, Republic of)

    2007-09-12

    A procedure for micropatterning a single layer of nanodot arrays in selective regions is demonstrated by using thin films of polystyrene-b-poly(t-butyl acrylate) (PS-b-PtBA) diblock copolymer. The thin-film self-assembled into hexagonally arranged PtBA nanodomains in a PS matrix on a substrate by solvent annealing with 1,4-dioxane. The PtBA nanodomains were converted into poly(acrylic acid) (PAA) having carboxylic-acid-functionalized nanodomains by exposure to hydrochloric acid vapor, or were removed by ultraviolet (UV) irradiation to generate vacant sites without any functional groups due to the elimination of PtBA domains. By sequential treatment with aqueous sodium bicarbonate and aqueous zinc acetate solution, zinc cations were selectively loaded only on the carboxylic-acid-functionalized nanodomains prepared via hydrolysis. Macroscopic patterning through a photomask via UV irradiation, hydrolysis, sequential zinc cation loading and calcination left a nanodot array of zinc oxide on a selectively UV-shaded region.

  13. An adaptive algorithm for simulation of stochastic reaction-diffusion processes

    International Nuclear Information System (INIS)

    Ferm, Lars; Hellander, Andreas; Loetstedt, Per

    2010-01-01

    We propose an adaptive hybrid method suitable for stochastic simulation of diffusion dominated reaction-diffusion processes. For such systems, simulation of the diffusion requires the predominant part of the computing time. In order to reduce the computational work, the diffusion in parts of the domain is treated macroscopically, in other parts with the tau-leap method and in the remaining parts with Gillespie's stochastic simulation algorithm (SSA) as implemented in the next subvolume method (NSM). The chemical reactions are handled by SSA everywhere in the computational domain. A trajectory of the process is advanced in time by an operator splitting technique and the timesteps are chosen adaptively. The spatial adaptation is based on estimates of the errors in the tau-leap method and the macroscopic diffusion. The accuracy and efficiency of the method are demonstrated in examples from molecular biology where the domain is discretized by unstructured meshes.
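
    The SSA component of the hybrid scheme can be illustrated with a bare-bones Gillespie simulation of a single reversible isomerisation A ↔ B (the species and rate constants are illustrative, not from the paper); the hybrid method applies this exact sampling only in the subdomains where copy numbers are too small for the tau-leap or macroscopic diffusion treatment.

```python
import random

random.seed(1)
a_count, b_count = 100, 0        # copy numbers of species A and B
k_fwd, k_rev = 1.0, 0.5          # rate constants for A -> B and B -> A
t, t_end = 0.0, 5.0

while t < t_end:
    prop_fwd, prop_rev = k_fwd * a_count, k_rev * b_count   # reaction propensities
    total = prop_fwd + prop_rev
    if total == 0.0:
        break                                # no reaction can fire
    t += random.expovariate(total)           # exponentially distributed waiting time
    if random.random() * total < prop_fwd:
        a_count, b_count = a_count - 1, b_count + 1   # A -> B fires
    else:
        a_count, b_count = a_count + 1, b_count - 1   # B -> A fires
```

    Each iteration draws the time to the next reaction from the total propensity and then picks which reaction fired in proportion to its share; the next subvolume method organizes exactly this sampling across mesh cells.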

  14. A mixed signal ECG processing platform with an adaptive sampling ADC for portable monitoring applications.

    Science.gov (United States)

    Kim, Hyejung; Van Hoof, Chris; Yazicioglu, Refet Firat

    2011-01-01

    This paper describes a mixed-signal ECG processing platform with a 12-bit ADC architecture that can adapt its sampling rate according to the input signal's rate of change. This enables the sampling of ECG signals at a significantly reduced data rate without loss of information. The presented adaptive sampling scheme reduces the ADC power consumption, enables the processing of ECG signals with lower power consumption, and reduces the power consumption of the radio while streaming the ECG signals. The test results show that running a CWT-based R-peak detection algorithm on the adaptively sampled ECG signals consumes only 45.6 μW and leads to 36% less overall system power consumption.
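
    A software stand-in for rate-of-change-driven sampling is a send-on-delta scheme (the threshold and the stand-in waveform below are assumptions, not the paper's ADC design): a sample is kept only when the input has moved by more than a threshold since the last kept sample, so slow baseline segments produce few samples while steep edges, such as a QRS complex, are sampled densely.

```python
import math

DELTA = 0.05                     # assumed level-crossing threshold
signal = [math.sin(2 * math.pi * n / 200.0) for n in range(400)]   # stand-in waveform

kept = [(0, signal[0])]          # always keep the first sample
for n in range(1, len(signal)):
    if abs(signal[n] - kept[-1][1]) > DELTA:
        kept.append((n, signal[n]))

# zero-order-hold reconstruction from the kept samples
recon, j = [], 0
for n in range(len(signal)):
    if j + 1 < len(kept) and kept[j + 1][0] <= n:
        j += 1
    recon.append(kept[j][1])

compression = len(kept) / len(signal)                 # fraction of samples retained
max_err = max(abs(s - r) for s, r in zip(signal, recon))
```

    By construction the hold error never exceeds the threshold: a larger excursion would itself have triggered a kept sample, which is what lets the data rate drop without losing waveform information.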

  15. Intermediate view reconstruction using adaptive disparity search algorithm for real-time 3D processing

    Science.gov (United States)

    Bae, Kyung-hoon; Park, Changhan; Kim, Eun-soo

    2008-03-01

    In this paper, intermediate view reconstruction (IVR) using an adaptive disparity search algorithm (ADSA) for real-time 3-dimensional (3D) processing is proposed. The proposed algorithm reduces the processing time of disparity estimation by selecting an adaptive disparity search range, and it also increases the quality of the 3D imaging. That is, by adaptively predicting the mutual correlation between the stereo image pair, the bandwidth of the stereo input image pair can be compressed to the level of a conventional 2D image, and a predicted image can be effectively reconstructed using a reference image and disparity vectors. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm improves the PSNR of a reconstructed image by about 4.8 dB compared with conventional algorithms, and reduces the synthesizing time of a reconstructed image by about 7.02 s.
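
    A one-dimensional toy sketch of the adaptive-search-range idea (window sizes, ranges, and the synthetic image rows are illustrative, not the paper's parameters): instead of scanning the full disparity range at every pixel, the search is centred on the previous pixel's disparity, which cuts per-pixel matching cost wherever the disparity field is smooth.

```python
def sad(left, right, i, d, w=2):
    """Sum of absolute differences over a (2w+1)-pixel window at disparity d."""
    return sum(abs(left[i + k] - right[i - d + k]) for k in range(-w, w + 1))

def match_row(left, right, max_d=8, local_r=1, w=2):
    disp, prev = [], None
    for i in range(max_d + w, len(left) - w - max_d):
        if prev is None:
            lo, hi = 0, max_d                         # first pixel: full search
        else:
            lo, hi = max(0, prev - local_r), min(max_d, prev + local_r)  # adaptive band
        best = min(range(lo, hi + 1), key=lambda d: sad(left, right, i, d, w))
        disp.append(best)
        prev = best
    return disp

# synthetic pair: the right row is the left row shifted by a constant disparity of 2
left = [float((7 * i) % 13) for i in range(40)]
right = [left[j + 2] for j in range(len(left) - 2)] + [0.0, 0.0]
disp = match_row(left, right)
```

    After the initial full search, each pixel tests only 2·local_r + 1 candidate disparities instead of max_d + 1, which is where the reduction in disparity-estimation time comes from.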

  16. Psychological and socio-cultural adaptation of international journalism students in Russia: The role of communication skills in the adaptation process

    Directory of Open Access Journals (Sweden)

    Gladkova A.A.

    2017-12-01

    Full Text Available Background. The study of both Russian and international publications issued in the last twenty years revealed a significant gap between the number of studies examining adaptation (general living, psychological, socio-cultural, etc.) in general, i.e., without regard to specific characteristics of the audience, and those describing the adaptation of a particular group of people (a specific age, ethnic, or professional group, etc.). Objective. The current paper aims to overcome this gap by offering a closer look at the adaptation processes of international journalism students at Russian universities, in particular their psychological and socio-cultural adaptation. The question that interests us most is how the psychological and socio-cultural adaptation of international journalists-to-be can be made easier, and whether communication-oriented techniques can facilitate this process. Design. In this paper, we provide an overview of current research analyzing adaptation from different angles, which is essential for creating a context for further, narrower studies. Results. We discuss the adaptation of international journalism students in Russia, suggesting ways to ease their adaptation in the host country and arguing that the development of communication skills can be important for successful adaptation to new living and learning conditions. Conclusion. We argue that there is a need for more detailed, narrowly focused research discussing the specifics of adaptation of different groups of people to a new environment (since different people tend to adapt to new conditions in different ways), as well as research outlining the role of communication competences in the adaptation process.

  17. Free-running ADC- and FPGA-based signal processing method for brain PET using GAPD arrays

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Wei [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Department of Nuclear Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, 50 Ilwon-Dong, Gangnam-Gu, Seoul 135-710 (Korea, Republic of); Choi, Yong, E-mail: ychoi.image@gmail.com [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Hong, Key Jo [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Kang, Jihoon [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Department of Nuclear Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, 50 Ilwon-Dong, Gangnam-Gu, Seoul 135-710 (Korea, Republic of); Jung, Jin Ho [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Huh, Youn Suk [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Department of Nuclear Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, 50 Ilwon-Dong, Gangnam-Gu, Seoul 135-710 (Korea, Republic of); Lim, Hyun Keong; Kim, Sang Su [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Kim, Byung-Tae [Department of Nuclear Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, 50 Ilwon-Dong, Gangnam-Gu, Seoul 135-710 (Korea, Republic of); Chung, Yonghyun [Department of Radiological Science, Yonsei University College of Health Science, 234 Meaji, Heungup Wonju, Kangwon-Do 220-710 (Korea, Republic of)

    2012-02-01

    Currently, for most photomultiplier tube (PMT)-based PET systems, constant fraction discriminators (CFD) and time-to-digital converters (TDC) have been employed to detect gamma ray signal arrival time, whereas Anger logic circuits and peak detection analog-to-digital converters (ADCs) have been implemented to acquire position and energy information of detected events. Compared to PMTs, Geiger-mode avalanche photodiodes (GAPDs) have a variety of advantages, such as compactness, low bias voltage requirement and MRI compatibility. Furthermore, the individual read-out method using a GAPD array coupled 1:1 with an array scintillator can provide better image uniformity than can be achieved using PMT and Anger logic circuits. Recently, a brain PET using 72 GAPD arrays (4 × 4 array, pixel size: 3 mm × 3 mm) coupled 1:1 with LYSO scintillators (4 × 4 array, pixel size: 3 mm × 3 mm × 20 mm) has been developed for simultaneous PET/MRI imaging in our laboratory. Eighteen 64:1 position decoder circuits (PDCs) were used to reduce the GAPD channel number, and three off-the-shelf free-running ADC and field programmable gate array (FPGA) combined data acquisition (DAQ) cards were used for data acquisition and processing. In this study, a free-running ADC- and FPGA-based signal processing method was developed for the detection of gamma ray signal arrival time, energy and position information together for each GAPD channel. In the method developed herein, three DAQ cards continuously acquired 18 channels of pre-amplified analog gamma ray signals and 108-bit digital addresses from 18 PDCs. In the FPGA, the digitized gamma ray pulses and digital addresses were processed to generate data packages containing pulse arrival time, baseline value, energy value and GAPD channel ID. Finally, these data packages were saved to a 128 Mbyte on-board synchronous dynamic random access memory (SDRAM) and

  18. A fuzzy model based adaptive PID controller design for nonlinear and uncertain processes.

    Science.gov (United States)

    Savran, Aydogan; Kahraman, Gokalp

    2014-03-01

    We develop a novel adaptive tuning method for the classical proportional-integral-derivative (PID) controller to control nonlinear processes, adjusting the PID gains, a problem which is very difficult to overcome with classical PID controllers. By incorporating classical PID control, which is well known in industry, into the control of nonlinear processes, we introduce a method which can readily be used by industry. In this method, controller design does not require a first-principles model of the process, which is usually very difficult to obtain. Instead, it depends on a fuzzy process model which is constructed from the measured input-output data of the process. A soft limiter is used to impose industrial limits on the control input. The performance of the system is successfully tested on the bioreactor, a highly nonlinear process involving instabilities. Several tests showed the method's success in tracking, robustness to noise, and adaptation properties. We also compared our system's performance on a plant with altered parameters under measurement noise, and obtained less ringing and better tracking. To conclude, we present a novel adaptive control method, built upon the well-known PID architecture, that successfully controls highly nonlinear industrial processes, even under conditions such as strong parameter variations, noise, and instabilities. © 2013 Published by ISA on behalf of ISA.
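    As a concrete, deliberately simplified illustration of the idea above, the sketch below pairs a textbook PID loop with a crude error-magnitude gain-scaling heuristic standing in for the paper's fuzzy process model, plus a tanh-based soft limiter on the control input. The gain-scaling rule and all numeric values are illustrative assumptions, not the authors' design.

```python
import math

class AdaptivePID:
    """Sketch of an adaptive PID loop with a soft output limiter.

    The `scale` heuristic below is a stand-in for the fuzzy-model-based
    gain tuning described in the abstract (illustrative assumption)."""

    def __init__(self, kp, ki, kd, dt, u_max):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.dt, self.u_max = dt, u_max
        self.integral = 0.0
        self.prev_e = 0.0

    def soft_limit(self, u):
        # smooth saturation ("soft limiter") instead of a hard clip
        return self.u_max * math.tanh(u / self.u_max)

    def step(self, setpoint, y):
        e = setpoint - y
        scale = 1.0 + min(abs(e), 1.0)   # crude adaptation: larger error, hotter gains
        self.integral += e * self.dt
        deriv = (e - self.prev_e) / self.dt
        self.prev_e = e
        u = scale * (self.kp * e + self.ki * self.integral + self.kd * deriv)
        return self.soft_limit(u)

# drive a first-order plant y' = (u - y) / tau toward a setpoint of 1.0
pid = AdaptivePID(kp=2.0, ki=1.0, kd=0.05, dt=0.01, u_max=5.0)
y, tau = 0.0, 0.5
for _ in range(2000):
    u = pid.step(1.0, y)
    y += (u - y) / tau * pid.dt
```

    Unlike a hard clip, the tanh limiter keeps the control signal and its derivative continuous, which is one common way to impose actuator limits smoothly.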

  19. Approaches to evaluating climate change impacts on species: A guide to initiating the adaptation planning process

    Science.gov (United States)

    Erika L. Rowland; Jennifer E. Davison; Lisa J. Graumlich

    2011-01-01

    Assessing the impact of climate change on species and associated management objectives is a critical initial step for engaging in the adaptation planning process. Multiple approaches are available. While all possess limitations to their application associated with the uncertainties inherent in the data and models that inform their results, conducting and incorporating...

  20. Radiation adaptation as one of the factors for microevolution processes in animals populations

    Directory of Open Access Journals (Sweden)

    V. A. Gaychenko

    2013-03-01

    Study of the main directions of the responses of populations of individual species and of faunistic complexes provides an opportunity to address the relationship between animal adaptation to radiation stress and a raised level of epigenetic variability and particular survival strategies. An increased rate of microevolutionary processes under conditions of radioactive contamination of the biocenosis is suggested.

  1. Adaptive interpolation of discrete-time signals that can be modeled as autoregressive processes

    NARCIS (Netherlands)

    Janssen, A.J.E.M.; Veldhuis, R.N.J.; Vries, L.B.

    1986-01-01

    The authors present an adaptive algorithm for the restoration of lost sample values in discrete-time signals that can locally be described by means of autoregressive processes. The only restrictions are that the positions of the unknown samples should be known and that they should be embedded in a

  2. Adaptive interpolation of discrete-time signals that can be modeled as autoregressive processes

    NARCIS (Netherlands)

    Janssen, A.J.E.M.; Veldhuis, Raymond N.J.; Vries, Lodewijk B.

    1986-01-01

    This paper presents an adaptive algorithm for the restoration of lost sample values in discrete-time signals that can locally be described by means of autoregressive processes. The only restrictions are that the positions of the unknown samples should be known and that they should be embedded in a
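    A minimal sketch of the restoration idea in the two records above: if the signal locally follows an AR(1) model x[n] ≈ a·x[n-1] with a known coefficient a, a single lost sample can be restored by minimising the squared prediction errors it appears in. The published algorithm is more general (it estimates the AR coefficients from the data and handles bursts of adjacent missing samples); the closed form below is an illustrative special case.

```python
def restore_ar1(x, m, a):
    """Restore the single lost sample x[m], assuming an AR(1) model
    x[n] ~ a * x[n-1].  Minimising the two squared prediction errors
    that involve x[m] gives the closed form below."""
    estimate = a * (x[m - 1] + x[m + 1]) / (1.0 + a * a)
    y = list(x)
    y[m] = estimate
    return y

# noiseless AR(1) signal with a = 0.9; pretend sample 3 was lost
a = 0.9
x = [1.0]
for _ in range(6):
    x.append(a * x[-1])
x[3] = None                      # position of the unknown sample is known
restored = restore_ar1(x, 3, a)  # exact for a noiseless AR(1) signal
```

    For a noiseless AR(1) signal the estimate is exact, since a·(x[m-1] + x[m+1]) = x[m]·(1 + a²).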

  3. Normalized value coding explains dynamic adaptation in the human valuation process.

    Science.gov (United States)

    Khaw, Mel W; Glimcher, Paul W; Louie, Kenway

    2017-11-28

    The notion of subjective value is central to choice theories in ecology, economics, and psychology, serving as an integrated decision variable by which options are compared. Subjective value is often assumed to be an absolute quantity, determined in a static manner by the properties of an individual option. Recent neurobiological studies, however, have shown that neural value coding dynamically adapts to the statistics of the recent reward environment, introducing an intrinsic temporal context dependence into the neural representation of value. Whether valuation exhibits this kind of dynamic adaptation at the behavioral level is unknown. Here, we show that the valuation process in human subjects adapts to the history of previous values, with current valuations varying inversely with the average value of recently observed items. The dynamics of this adaptive valuation are captured by divisive normalization, linking these temporal context effects to spatial context effects in decision making as well as spatial and temporal context effects in perception. These findings suggest that adaptation is a universal feature of neural information processing and offer a unifying explanation for contextual phenomena in fields ranging from visual psychophysics to economic choice.
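    The divisive-normalization account described above can be written in one line: the subjective value of an option is its raw value divided by a term that grows with the average value of recently observed items, so an identical item feels less valuable after a high-value history. The sketch below is a generic form of the model; the semisaturation constant `sigma` and the plain running average are illustrative assumptions, not the fitted model from the study.

```python
def normalized_value(raw_value, recent_values, sigma=1.0):
    """Divisive normalization: value is scaled down by (sigma + mean of
    the recent value history), producing the inverse dependence on
    recently observed values reported in the abstract."""
    context = sum(recent_values) / len(recent_values)
    return raw_value / (sigma + context)

# the same raw value is rated lower after a streak of high-value items
after_low_history = normalized_value(10.0, [1.0, 1.0, 1.0])   # 10 / 2  = 5.0
after_high_history = normalized_value(10.0, [9.0, 9.0, 9.0])  # 10 / 10 = 1.0
```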

  4. Scalable stacked array piezoelectric deformable mirror for astronomy and laser processing applications

    Energy Technology Data Exchange (ETDEWEB)

    Wlodarczyk, Krystian L., E-mail: K.L.Wlodarczyk@hw.ac.uk; Maier, Robert R. J.; Hand, Duncan P. [Institute of Photonics and Quantum Sciences, Heriot-Watt University, Edinburgh EH14 4AS (United Kingdom); Bryce, Emma; Hutson, David; Kirk, Katherine [School of Engineering and Science, University of the West of Scotland, Paisley PA1 2BE (United Kingdom); Schwartz, Noah; Atkinson, David; Beard, Steven; Baillie, Tom; Parr-Burman, Phil [UK Astronomy Technology Centre, Royal Observatory, Edinburgh EH9 3HJ (United Kingdom); Strachan, Mel [Institute of Photonics and Quantum Sciences, Heriot-Watt University, Edinburgh EH14 4AS (United Kingdom); UK Astronomy Technology Centre, Royal Observatory, Edinburgh EH9 3HJ (United Kingdom)

    2014-02-15

    A prototype of a scalable and potentially low-cost stacked array piezoelectric deformable mirror (SA-PDM) with 35 active elements is presented in this paper. This prototype is characterized by a 2 μm maximum actuator stroke, a 1.4 μm mirror sag (measured for a 14 mm × 14 mm area of the unpowered SA-PDM), and a ±200 nm hysteresis error. The initial proof of concept experiments described here show that this mirror can be successfully used for shaping a high power laser beam in order to improve laser machining performance. Various beam shapes have been obtained with the SA-PDM and examples of laser machining with the shaped beams are presented.

  5. A High Performance Backend for Array-Oriented Programming on Next-Generation Processing Units

    DEFF Research Database (Denmark)

    Lund, Simon Andreas Frimann

    The financial crisis, which started in 2008, spawned the HIPERFIT research center as a preventive measure against future financial crises. The goal of prevention is to be met by improving mathematical models for finance, the verifiable description of them in domain-specific languages... and the efficient execution of them on high performance systems. This work investigates the requirements for, and the implementation of, a high performance backend supporting these goals. This involves an outline of the hardware available today, in the near future and how to program it for high performance... The main challenge is to bridge the gaps between performance, productivity and portability. A declarative high-level array-oriented programming model is explored to achieve this goal and a backend implemented to support it. Different strategies to the backend design and application of optimizations...

  6. Primary Dendrite Array Morphology: Observations from Ground-based and Space Station Processed Samples

    Science.gov (United States)

    Tewari, Surendra; Rajamure, Ravi; Grugel, Richard; Erdmann, Robert; Poirier, David

    2012-01-01

    Influence of natural convection on primary dendrite array morphology during directional solidification is being investigated under a collaborative European Space Agency-NASA joint research program, "Microstructure Formation in Castings of Technical Alloys under Diffusive and Magnetically Controlled Convective Conditions (MICAST)". Two Aluminum-7 wt pct Silicon alloy samples, MICAST6 and MICAST7, were directionally solidified in microgravity on the International Space Station. Terrestrially grown dendritic monocrystal cylindrical samples were remelted and directionally solidified at 18 K/cm (MICAST6) and 28 K/cm (MICAST7). Directional solidification involved a growth speed step increase (MICAST6-from 5 to 50 micron/s) and a speed decrease (MICAST7-from 20 to 10 micron/s). Distribution and morphology of primary dendrites is currently being characterized in these samples, and also in samples solidified on earth under nominally similar thermal gradients and growth speeds. Primary dendrite spacing and trunk diameter measurements from this investigation will be presented.

  7. Focal plane array with modular pixel array components for scalability

    Science.gov (United States)

    Kay, Randolph R; Campbell, David V; Shinde, Subhash L; Rienstra, Jeffrey L; Serkland, Darwin K; Holmes, Michael L

    2014-12-09

    A modular, scalable focal plane array is provided as an array of integrated circuit dice, wherein each die includes a given amount of modular pixel array circuitry. The array of dice effectively multiplies the amount of modular pixel array circuitry to produce a larger pixel array without increasing die size. Desired pixel pitch across the enlarged pixel array is preserved by forming die stacks with each pixel array circuitry die stacked on a separate die that contains the corresponding signal processing circuitry. Techniques for die stack interconnections and die stack placement are implemented to ensure that the desired pixel pitch is preserved across the enlarged pixel array.

  8. Adaptation of the continuous cold trap system of fluidized-bed to the fluoride volatility process

    International Nuclear Information System (INIS)

    1976-02-01

    A continuous cold trap system consisting of a fluidized condenser and stripper has been evaluated with a view to adapting it to the Fluoride Volatility Process, in establishing a continuous purification process without radiation decomposition of PuF6. Its feasibility is shown by the test with UF6-air. Necessary conditions for the cold trap, and performance of the two inch-dia. fluidized bed cold trap system are presented, and also a model of mist formation in the condenser. (auth.)

  9. [The Psychosocial Adaptation Process of Psychiatric Nurses Working in Community Mental Health Centers].

    Science.gov (United States)

    Min, So Young

    2015-12-01

    The aim of this study was to verify psychosocial issues faced by psychiatric and community mental health nurse practitioners (PCMHNP) working in community mental health centers, and to identify the adaptation processes used to resolve the issues. Data were collected through in-depth interviews between December 2013 and August 2014. Participants were 11 PCMHNP working in community mental health centers. Analysis was done using the grounded theory methodology. The first question was "How did you start working at a community mental health center; what were the difficulties you faced during your employment and how did you resolve them?" The core category was 'regulating within relationships.' The adaptation process was categorized into three sequential stages: 'nesting,' 'hanging around the nest,' and 'settling into the nest.' Various action/interaction strategies were employed in these stages. The adaptation results from using these strategies were 'psychiatric nursing within life' and 'a long way to go.' The results of this study are significant as they aid in understanding the psychosocial adaptation processes of PCMHNP working in community mental health centers, and indicate areas to be addressed in the future in order for PCMHNP to fulfill their professional role in the local community.

  10. The process of adapting a universal dating abuse prevention program to adolescents exposed to domestic violence.

    Science.gov (United States)

    Foshee, Vangie A; Dixon, Kimberly S; Ennett, Susan T; Moracco, Kathryn E; Bowling, J Michael; Chang, Ling-Yin; Moss, Jennifer L

    2015-07-01

    Adolescents exposed to domestic violence are at increased risk of dating abuse, yet no evaluated dating abuse prevention programs have been designed specifically for this high-risk population. This article describes the process of adapting Families for Safe Dates (FSD), an evidenced-based universal dating abuse prevention program, to this high-risk population, including conducting 12 focus groups and 107 interviews with the target audience. FSD includes six booklets of dating abuse prevention information, and activities for parents and adolescents to do together at home. We adapted FSD for mothers who were victims of domestic violence, but who no longer lived with the abuser, to do with their adolescents who had been exposed to the violence. Through the adaptation process, we learned that families liked the program structure and valued being offered the program and that some of our initial assumptions about this population were incorrect. We identified practices and beliefs of mother victims and attributes of these adolescents that might increase their risk of dating abuse that we had not previously considered. In addition, we learned that some of the content of the original program generated negative family interactions for some. The findings demonstrate the utility of using a careful process to adapt evidence-based interventions (EBIs) to cultural sub-groups, particularly the importance of obtaining feedback on the program from the target audience. Others can follow this process to adapt EBIs to groups other than the ones for which the original EBI was designed. © The Author(s) 2014.

  11. Enterprise System Adaptation: a Combination of Institutional Structures and Sensemaking Processes

    DEFF Research Database (Denmark)

    Svejvig, Per; Jensen, Tina Blegind

    2009-01-01

    In this paper we set out to investigate how an Enterprise System (ES) adaptation in a Scandinavian high-tech organization, SCANDI, can be understood using a combination of institutional and sensemaking theory. Institutional theory is useful in providing an account for the role that the social... and historical structures play in ES adaptations, and sensemaking can help us investigate how organizational members make sense of and enact ES in their local context. Based on an analytical framework, where we combine institutional theory and sensemaking theory to provide rich insights into ES adaptation, we... show: 1) how changing institutional structures provide a shifting context for the way users make sense of and enact ES, 2) how users' sensemaking processes of the ES are played out in practice, and 3) how sensemaking reinforces institutional structures...

  12. Double-sided anodic titania nanotube arrays: a lopsided growth process.

    Science.gov (United States)

    Sun, Lidong; Zhang, Sam; Sun, Xiao Wei; Wang, Xiaoyan; Cai, Yanli

    2010-12-07

    In the past decade, the pore diameter of anodic titania nanotubes was reported to be influenced by a number of factors in organic electrolyte, for example, applied potential, working distance, water content, and temperature. All of these were closely related to the potential drop in the organic electrolyte. In this work, the essential role of the electric field originating from the potential drop was directly revealed for the first time using a simple two-electrode anodizing method. Anodic titania nanotube arrays were grown simultaneously at both sides of a titanium foil, with the tubes being longer at the front side than at the back side. This lopsided growth was attributed to the higher ionic flux induced by the electric field at the front side. Accordingly, the nanotube length was further tailored to be comparable at both sides by modulating the electric field. These results are promising for use in parallel-configuration dye-sensitized solar cells, water splitting, and gas sensors, as a result of the high surface area produced by the double-sided architecture.

  13. Single event upset susceptibilities of latchup immune CMOS process programmable gate arrays

    Science.gov (United States)

    Koga, R.; Crain, W. R.; Crawford, K. B.; Hansel, S. J.; Lau, D. D.; Tsubota, T. K.

    Single event upsets (SEU) and latchup susceptibilities of complementary metal oxide semiconductor process programmable gate arrays (CMOS PPGAs) were measured at the Lawrence Berkeley Laboratory 88-in. cyclotron facility with Xe (603 MeV), Cu (290 MeV), and Ar (180 MeV) ion beams. The PPGA devices tested were those which may be used in space. Most of the SEU measurements were taken with a newly constructed tester called the Bus Access Storage and Comparison System (BASACS) operating via a Macintosh II computer. When BASACS finds that an output does not match a prerecorded pattern, the state of all outputs, position in the test cycle, and other necessary information is transmitted and stored in the Macintosh. The upset rate was kept between 1 and 3 per second. After a sufficient number of errors are stored, the test is stopped and the total fluence of particles and total errors are recorded. The device power supply current was closely monitored to check for occurrence of latchup. Results of the tests are presented, indicating that some of the PPGAs are good candidates for selected space applications.

  14. High-resolution focal plane array IR detection modules and digital signal processing technologies at AIM

    Science.gov (United States)

    Cabanski, Wolfgang A.; Breiter, Rainer; Koch, R.; Mauk, Karl-Heinz; Rode, Werner; Ziegler, Johann; Eberhardt, Kurt; Oelmaier, Reinhard; Schneider, Harald; Walther, Martin

    2000-07-01

    Full video format focal plane array (FPA) modules with up to 640 × 512 pixels have been developed for high resolution imaging applications in either mercury cadmium telluride (MCT) mid wave infrared (MWIR) or platinum silicide (PtSi) and quantum well infrared photodetector (QWIP) technology, as low cost alternatives to MCT for high performance IR imaging in the MWIR or long wave spectral band (LWIR). For the QWIPs, a new photovoltaic technology was introduced for improved NETD performance and higher dynamic range. MCT units provide fast frame rates > 100 Hz together with state-of-the-art thermal resolution (NETD). Hardware platforms and software for image visualization and nonuniformity correction, including scene-based self-learning algorithms, had to be developed to cope with the high data rates of up to 18 M pixels/s with 14-bit deep data, allowing nonlinear effects to be taken into account to access the full NETD by accurate reduction of residual fixed pattern noise. The main features of these modules are summarized together with measured performance data for long range detection systems with moderately fast to slow F-numbers like F/2.0 - F/3.5. An outlook shows most recent activities at AIM, heading for multicolor and faster frame rate detector modules based on MCT devices.

  15. Optical technology for microwave applications VI and optoelectronic signal processing for phased-array antennas III; Proceedings of the Meeting, Orlando, FL, Apr. 20-23, 1992

    Science.gov (United States)

    Yao, Shi-Kay; Hendrickson, Brian M.

    The following topics related to optical technology for microwave applications are discussed: advanced acoustooptic devices, signal processing device technologies, optical signal processor technologies, microwave and optomicrowave devices, advanced lasers and sources, wideband electrooptic modulators, and wideband optical communications. The topics considered in the discussion of optoelectronic signal processing for phased-array antennas include devices, signal processing, and antenna systems.

  16. Effects of practice schedule and task specificity on the adaptive process of motor learning.

    Science.gov (United States)

    Barros, João Augusto de Camargo; Tani, Go; Corrêa, Umberto Cesar

    2017-10-01

    This study investigated the effects of practice schedule and task specificity based on the perspective of adaptive process of motor learning. For this purpose, tasks with temporal and force control learning requirements were manipulated in experiments 1 and 2, respectively. Specifically, the task consisted of touching with the dominant hand the three sequential targets with specific movement time or force for each touch. Participants were children (N=120), both boys and girls, with an average age of 11.2years (SD=1.0). The design in both experiments involved four practice groups (constant, random, constant-random, and random-constant) and two phases (stabilisation and adaptation). The dependent variables included measures related to the task goal (accuracy and variability of error of the overall movement and force patterns) and movement pattern (macro- and microstructures). Results revealed a similar error of the overall patterns for all groups in both experiments and that they adapted themselves differently in terms of the macro- and microstructures of movement patterns. The study concludes that the effects of practice schedules on the adaptive process of motor learning were both general and specific to the task. That is, they were general to the task goal performance and specific regarding the movement pattern. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. When noise is beneficial for sensory encoding: Noise adaptation can improve face processing.

    Science.gov (United States)

    Menzel, Claudia; Hayn-Leichsenring, Gregor U; Redies, Christoph; Németh, Kornél; Kovács, Gyula

    2017-10-01

    The presence of noise usually impairs the processing of a stimulus. Here, we studied the effects of noise on face processing and show, for the first time, that adaptation to noise patterns has beneficial effects on face perception. We used noiseless faces that were either surrounded by random noise or presented on a uniform background as stimuli. In addition, the faces were either preceded by noise adaptors or not. Moreover, we varied the statistics of the noise so that its spectral slope either matched that of the faces or it was steeper or shallower. Results of parallel ERP recordings showed that the background noise reduces the amplitude of the face-evoked N170, indicating less intensive face processing. Adaptation to a noise pattern, however, led to reduced P1 and enhanced N170 amplitudes as well as to a better behavioral performance in two of the three noise conditions. This effect was also augmented by the presence of background noise around the target stimuli. Additionally, the spectral slope of the noise pattern affected the size of the P1, N170 and P2 amplitudes. We reason that the observed effects are due to the selective adaptation of noise-sensitive neurons present in the face-processing cortical areas, which may enhance the signal-to-noise-ratio. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. A biological inspired fuzzy adaptive window median filter (FAWMF) for enhancing DNA signal processing.

    Science.gov (United States)

    Ahmad, Muneer; Jung, Low Tan; Bhuiyan, Al-Amin

    2017-10-01

    Digital signal processing techniques commonly employ fixed length window filters to process the signal contents. DNA signals differ in characteristics from common digital signals since they carry nucleotides as contents. The nucleotides own genetic code context and fuzzy behaviors due to their special structure and order in the DNA strand. Employing conventional fixed length window filters for DNA signal processing produces spectral leakage and hence results in signal noise. A biological context aware adaptive window filter is required to process the DNA signals. This paper introduces a biologically inspired fuzzy adaptive window median filter (FAWMF) which computes the fuzzy membership strength of nucleotides in each slide of the window and filters nucleotides based on median filtering with a combination of s-shaped and z-shaped filters. Since coding regions cause 3-base periodicity by an unbalanced nucleotide distribution producing a relatively high bias for nucleotide usage, this fundamental characteristic of nucleotides has been exploited in FAWMF to suppress the signal noise. Along with the adaptive response of FAWMF, a strong correlation between median nucleotides and the Π-shaped filter was observed, which produced enhanced discrimination between coding and non-coding regions contrary to fixed length conventional window filters. The proposed FAWMF attains a significant enhancement in coding region identification, i.e. 40% to 125%, as compared to other conventional window filters tested over more than 250 benchmarked and randomly taken DNA datasets of different organisms. This study proves that conventional fixed length window filters applied to DNA signals do not achieve significant results since the nucleotides carry genetic code context. The proposed FAWMF algorithm is adaptive and significantly outperforms them in processing DNA signal contents. The algorithm applied to a variety of DNA datasets produced noteworthy discrimination between coding and non-coding regions contrary
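    The core mechanism, a median filter whose window width is driven by a fuzzy membership value, can be sketched as follows. Here the fuzzy degree is a simple bounded deviation from the local median rather than the paper's s-shaped/z-shaped membership functions over nucleotide codes, and a plain numeric signal stands in for a mapped DNA sequence; both are illustrative assumptions.

```python
from statistics import median

def fuzzy_adaptive_median(signal, w_min=3, w_max=9):
    """Adaptive-window median filter sketch: the window size at each
    sample is chosen from a crude fuzzy 'noisiness' degree in [0, 1]
    (local deviation from the small-window median)."""
    out = []
    n = len(signal)
    for i in range(n):
        # fuzzy degree: how far this sample sits from its local median
        lo, hi = max(0, i - w_min // 2), min(n, i + w_min // 2 + 1)
        local_med = median(signal[lo:hi])
        noisiness = min(1.0, abs(signal[i] - local_med))
        # noisier samples get a wider median window
        w = w_min + int(round(noisiness * (w_max - w_min)))
        half = w // 2
        lo, hi = max(0, i - half), min(n, i + half + 1)
        out.append(median(signal[lo:hi]))
    return out
```

    For DNA, the sequence would first be mapped to numeric indicator signals (e.g., one binary signal per base) before filtering.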

  19. Hydrogen Detection With a Gas Sensor ArrayProcessing and Recognition of Dynamic Responses Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Gwiżdż Patryk

    2015-03-01

    An array consisting of four commercial gas sensors with target specifications for hydrocarbons, ammonia, alcohol, explosive gases has been constructed and tested. The sensors in the array operate in the dynamic mode upon the temperature modulation from 350°C to 500°C. Changes in the sensor operating temperature lead to distinct resistance responses affected by the gas type, its concentration and the humidity level. The measurements are performed upon various hydrogen (17-3000 ppm), methane (167-3000 ppm) and propane (167-3000 ppm) concentrations at relative humidity levels of 0-75% RH. The measured dynamic response signals are further processed with the Discrete Fourier Transform. Absolute values of the dc component and the first five harmonics of each sensor are analysed by a feed-forward back-propagation neural network. The ultimate aim of this research is to achieve a reliable hydrogen detection despite an interference of the humidity and residual gases.
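    The feature-extraction step described above (the dc component plus the first five harmonics of each sensor's temperature-modulated response) follows directly from the DFT definition. The sketch below normalises by N and checks itself on a toy single-tone response; the paper then feeds such magnitude vectors into a feed-forward back-propagation network.

```python
import cmath
import math

def harmonic_features(response, n_harmonics=5):
    """Return |X_k| / N for k = 0..n_harmonics: the dc component and the
    magnitudes of the first harmonics of one modulation cycle."""
    N = len(response)
    feats = []
    for k in range(n_harmonics + 1):      # k = 0 is the dc component
        X_k = sum(response[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                  for n in range(N))
        feats.append(abs(X_k) / N)
    return feats

# sanity check: a pure first-harmonic response concentrates in feats[1]
cycle = [math.cos(2 * math.pi * n / 8) for n in range(8)]
feats = harmonic_features(cycle)
```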

  20. Low cost solar array project production process and equipment task. A Module Experimental Process System Development Unit (MEPSDU)

    Science.gov (United States)

    1981-01-01

    Technical readiness for the production of photovoltaic modules using single crystal silicon dendritic web sheet material is demonstrated by: (1) selection, design and implementation of solar cell and photovoltaic module process sequence in a Module Experimental Process System Development Unit; (2) demonstration runs; (3) passing of acceptance and qualification tests; and (4) achievement of a cost effective module.

  1. Physical features of the wire-array Z-pinch plasmas imploding process

    International Nuclear Information System (INIS)

    Gao Chunming; Feng Kaiming

    2001-01-01

    In the course of research on controlled fusion reactors, scientists found that the Z-pinch plasma can produce very strong X-rays compared with other X-ray sources. In studying the imploding process, the snowplow model and the Haines model are introduced and verified. Concerning the accumulation of X-rays, several X-ray emission mechanisms are carefully analyzed and the relevant theories are verified. For the simulations, a one-dimensional model is used in writing the codes, the matching relationships are calculated and the imploding process is simulated. Some useful and reasonable results are obtained.
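    For reference, the snowplow model mentioned above treats the imploding plasma as a thin cylindrical shell of radius r(t) that sweeps up the mass it overruns while being driven inward by the magnetic pressure of the current I(t). In its standard textbook form, written per unit axial length for an initially uniform fill of density ρ₀ and radius r₀ (the generic model, not necessarily the exact variant used in this paper):

```latex
\frac{\mathrm{d}}{\mathrm{d}t}\left[ m(r)\,\frac{\mathrm{d}r}{\mathrm{d}t} \right]
  = -\,\frac{\mu_0\, I(t)^2}{4\pi r},
\qquad
m(r) = \pi \rho_0 \left( r_0^2 - r^2 \right).
```

    The right-hand side is the inward magnetic force per unit length on the shell, and m(r) is the swept-up mass per unit length.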

  2. A Module Experimental Process System Development Unit (MEPSDU). [flat plate solar arrays

    Science.gov (United States)

    1981-01-01

    The development of a cost effective process sequence that has the potential for the production of flat plate photovoltaic modules which meet the price goal in 1986 of 70 cents or less per Watt peak is described. The major accomplishments include (1) an improved AR coating technique; (2) the use of sand blast back clean-up to reduce clean up costs and to allow much of the Al paste to serve as a back conductor; and (3) the development of wave soldering for use with solar cells. Cells were processed to evaluate different process steps, a cell and minimodule test plan was prepared and data were collected for preliminary Samics cost analysis.

  3. ADAPTATION OF THE TEACHING PROCESS BASED ON A STUDENT'S INDIVIDUAL LEARNING NEEDS

    Directory of Open Access Journals (Sweden)

    TAKÁCS, Ondřej

    2011-03-01

    Development of current society requires the integration of information technology into every sector, including education. The idea of adaptive teaching in an e-learning environment is based on paying attention and giving support to various learning styles. More effective, user-friendly and thus better quality education can be achieved through such an environment. Learning can be influenced by many factors. In the paper we deal with such factors as the student’s personality and qualities – particularly learning style and motivation. In addition we want to prepare study materials and a study environment which respect students’ differences. Adaptive e-learning means an automated way of teaching which adapts to the different qualities of students that are characteristic of their learning styles. In the last few years we can see a gradual individualization of study, not only in distance forms of study but also with full-time students. Instructional supports, namely those of e-learning, should take this trend into account and adapt the educational processes to individual students’ qualities. Present learning management systems (LMS) offer this possibility only to a very limited extent. This paper deals with a design of intelligent virtual tutor behavior, which would adapt its teaching to both static and dynamically changing student qualities. The virtual tutor, in order to manage all that, has to have a sufficiently rich supply of different styles and forms of teaching, with enough information about styles of learning, kinds of memory and other student qualities. This paper describes a draft adaptive education model and the results of the first part of the solution – definition of learning styles, pilot testing on students and an outline of further research.

  4. Adaptation of instruments developed to study the effectiveness of psychotherapeutic processes

    Directory of Open Access Journals (Sweden)

    Shushanikova, Anastasia A.

    2016-06-01

    The objective of the research was to adapt for use in Russian-language contexts a set of instruments that assess the effectiveness of psychotherapeutic practices. The instruments explore the effectiveness of different types of therapy, without evaluating the abstract, idealized characteristics or specifics of each approach, specialist, or therapeutic case. The adapted instruments are based on reflective data about the significance of therapeutic events, from the point of view of both the client and the therapist. We translated, edited, and adapted forms developed by John McLeod and Mick Cooper — a “Goals Form”, a “Goal Assessment Form”, a “Post-Session Form”, and a “Therapy Personalization Form”. The adaptation was intended to cohere with the stylistic and cultural aspects of the Russian language. The research showed that the instruments and the methods have great potential for practical and theoretical application in qualitative studies to formulate hypotheses and to verify them in quantitative studies. The phenomenological analysis reveals the reliability, appropriateness, and validity of the adapted instruments for identifying specific meanings of the psychotherapeutic cases considered. The instruments can be used in studies exploring helpful aspects and effectiveness in different types of therapy (cognitive, existential, outdoor therapy, online counseling, etc.) with different groups of clients. It is reasonable to continue the use of the Russian-language version of the instruments in further studies exploring the effectiveness of psychological practices. The adapted instruments facilitate comparison and cross-cultural studies, and formulation of meaningful hypotheses about the effectiveness and quality of the psychotherapeutic process.

  5. Spectroscopic analyses of chemical adaptation processes within microalgal biomass in response to changing environments

    Energy Technology Data Exchange (ETDEWEB)

    Vogt, Frank, E-mail: fvogt@utk.edu; White, Lauren

    2015-03-31

    Highlights: • Microalgae transform large quantities of inorganics into biomass. • Microalgae interact with their growing environment and adapt their chemical composition. • Sequestration capabilities are dependent on cells’ chemical environments. • We develop a chemometric hard-modeling to describe these chemical adaptation dynamics. • This methodology will enable studies of microalgal compound sequestration. - Abstract: Via photosynthesis, marine phytoplankton transforms large quantities of inorganic compounds into biomass. This has considerable environmental impacts as microalgae contribute for instance to counter-balancing anthropogenic releases of the greenhouse gas CO₂. On the other hand, high concentrations of nitrogen compounds in an ecosystem can lead to harmful algae blooms. In previous investigations it was found that the chemical composition of microalgal biomass is strongly dependent on the nutrient availability. Therefore, it is expected that algae’s sequestration capabilities and productivity are also determined by the cells’ chemical environments. For investigating this hypothesis, novel analytical methodologies are required which are capable of monitoring live cells exposed to chemically shifting environments followed by chemometric modeling of their chemical adaptation dynamics. FTIR-ATR experiments have been developed for acquiring spectroscopic time series of live Dunaliella parva cultures adapting to different nutrient situations. Comparing experimental data from acclimated cultures to those exposed to a chemically shifted nutrient situation reveals insights in which analyte groups participate in modifications of microalgal biomass and on what time scales. For a chemometric description of these processes, a data model has been deduced which explains the chemical adaptation dynamics explicitly rather than empirically. First results show that this approach is feasible and derives information about the chemical biomass

  6. Spectroscopic analyses of chemical adaptation processes within microalgal biomass in response to changing environments

    International Nuclear Information System (INIS)

    Vogt, Frank; White, Lauren

    2015-01-01

    Highlights: • Microalgae transform large quantities of inorganics into biomass. • Microalgae interact with their growing environment and adapt their chemical composition. • Sequestration capabilities are dependent on cells’ chemical environments. • We develop a chemometric hard-modeling to describe these chemical adaptation dynamics. • This methodology will enable studies of microalgal compound sequestration. - Abstract: Via photosynthesis, marine phytoplankton transforms large quantities of inorganic compounds into biomass. This has considerable environmental impacts as microalgae contribute for instance to counter-balancing anthropogenic releases of the greenhouse gas CO₂. On the other hand, high concentrations of nitrogen compounds in an ecosystem can lead to harmful algae blooms. In previous investigations it was found that the chemical composition of microalgal biomass is strongly dependent on the nutrient availability. Therefore, it is expected that algae’s sequestration capabilities and productivity are also determined by the cells’ chemical environments. For investigating this hypothesis, novel analytical methodologies are required which are capable of monitoring live cells exposed to chemically shifting environments followed by chemometric modeling of their chemical adaptation dynamics. FTIR-ATR experiments have been developed for acquiring spectroscopic time series of live Dunaliella parva cultures adapting to different nutrient situations. Comparing experimental data from acclimated cultures to those exposed to a chemically shifted nutrient situation reveals insights in which analyte groups participate in modifications of microalgal biomass and on what time scales. For a chemometric description of these processes, a data model has been deduced which explains the chemical adaptation dynamics explicitly rather than empirically. First results show that this approach is feasible and derives information about the chemical biomass adaptations

  7. A conceptual model for the development process of confirmatory adaptive clinical trials within an emergency research network.

    Science.gov (United States)

    Mawocha, Samkeliso C; Fetters, Michael D; Legocki, Laurie J; Guetterman, Timothy C; Frederiksen, Shirley; Barsan, William G; Lewis, Roger J; Berry, Donald A; Meurer, William J

    2017-06-01

    Adaptive clinical trials use accumulating data from enrolled subjects to alter trial conduct in pre-specified ways based on quantitative decision rules. In this research, we sought to characterize the perspectives of key stakeholders during the development process of confirmatory-phase adaptive clinical trials within an emergency clinical trials network and to build a model to guide future development of adaptive clinical trials. We used an ethnographic, qualitative approach to evaluate key stakeholders' views about the adaptive clinical trial development process. Stakeholders participated in a series of multidisciplinary meetings during the development of five adaptive clinical trials and completed a Strengths-Weaknesses-Opportunities-Threats questionnaire. In the analysis, we elucidated overarching themes across the stakeholders' responses to develop a conceptual model. Four major overarching themes emerged during the analysis of stakeholders' responses to questioning: the perceived statistical complexity of adaptive clinical trials and the roles of collaboration, communication, and time during the development process. Frequent and open communication and collaboration were viewed by stakeholders as critical during the development process, as were the careful management of time and logistical issues related to the complexity of planning adaptive clinical trials. The Adaptive Design Development Model illustrates how statistical complexity, time, communication, and collaboration are moderating factors in the adaptive design development process. The intensity and iterative nature of this process underscores the need for funding mechanisms for the development of novel trial proposals in academic settings.

  8. Cognitive and social processes predicting partner psychological adaptation to early stage breast cancer.

    Science.gov (United States)

    Manne, Sharon; Ostroff, Jamie; Fox, Kevin; Grana, Generosa; Winkel, Gary

    2009-02-01

    The diagnosis and subsequent treatment for early stage breast cancer is stressful for partners. Little is known about the role of cognitive and social processes predicting the longitudinal course of partners' psychosocial adaptation. This study evaluated the role of cognitive and social processing in partner psychological adaptation to early stage breast cancer, evaluating both main and moderator effect models. Moderating effects for meaning making, acceptance, and positive reappraisal on the predictive association of searching for meaning, emotional processing, and emotional expression on partner psychological distress were examined. Partners of women diagnosed with early stage breast cancer were evaluated shortly after the ill partner's diagnosis (N=253), 9 (N=167), and 18 months (N=149) later. Partners completed measures of emotional expression, emotional processing, acceptance, meaning making, and general and cancer-specific distress at all time points. Lower satisfaction with partner support predicted greater global distress, and greater use of positive reappraisal was associated with greater distress. The predicted moderator effect of found meaning on the association between the search for meaning and cancer-specific distress was confirmed, as were similar moderating effects of positive reappraisal on the association between emotional expression and global distress and of acceptance on the association between emotional processing and cancer-specific distress. Results indicate that several cognitive-social processes directly predict partner distress. However, moderator-effect models, in which the effects of partners' processing depend upon whether these efforts result in changes in perceptions of the cancer experience, may add to the understanding of partners' adaptation to cancer.

  9. Nonparametric adaptive estimation of linear functionals for low frequency observed Lévy processes

    OpenAIRE

    Kappus, Johanna

    2012-01-01

    For a Lévy process X having finite variation on compact sets and finite first moments, µ(dx) = x ν(dx) is a finite signed measure which completely describes the jump dynamics. We construct kernel estimators for linear functionals of µ and provide rates of convergence under regularity assumptions. Moreover, we consider adaptive estimation via model selection and propose a new strategy for the data-driven choice of the smoothing parameter.
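The data-driven choice of a smoothing parameter can be illustrated in a far simpler setting than the paper's Lévy framework. The sketch below is an illustrative toy, not the authors' estimator: it selects a Gaussian-kernel bandwidth for an ordinary density estimate by minimizing the least-squares cross-validation criterion over a grid.

```python
import numpy as np

def gauss(u, h):
    """Gaussian kernel with bandwidth h."""
    return np.exp(-0.5 * (u / h) ** 2) / (h * np.sqrt(2 * np.pi))

def lscv_score(x, h):
    """Least-squares cross-validation criterion for a Gaussian-kernel
    density estimate: integral(f_h^2) minus twice the mean leave-one-out fit."""
    n = len(x)
    d = x[:, None] - x[None, :]
    # integral of f_h^2 has a closed form: average of N(0, 2h^2) kernels
    int_f2 = gauss(d, h * np.sqrt(2)).sum() / n**2
    # leave-one-out term: drop the n diagonal (i == j) kernel evaluations
    loo = (gauss(d, h).sum() - n * gauss(0.0, h)) / (n * (n - 1))
    return int_f2 - 2 * loo

rng = np.random.default_rng(0)
x = rng.normal(size=400)
grid = np.linspace(0.05, 1.5, 60)
h_star = grid[np.argmin([lscv_score(x, h) for h in grid])]
print(f"data-driven bandwidth: {h_star:.2f}")
```

Minimizing the criterion over a grid is the crudest form of model selection; the paper's penalized strategy plays the analogous role for functionals of µ.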

  10. Adaptive control of biomass and substrate concentration in a continuous-flow fermentation process

    Energy Technology Data Exchange (ETDEWEB)

    Chamilothoris, G; Sevely, Y

    1988-01-01

    This paper presents a simple adaptive control scheme for the simultaneous regulation of biomass and substrate concentration in a continuous fermentation process. The proposed algorithm includes the on-line estimation of a time-varying parameter (namely, the specific growth rate) and two cascaded self-tuning regulators. Convergence of the control algorithm, in the BIBO sense, is established theoretically, and its effectiveness is illustrated by simulation examples.
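The two ingredients named in the abstract, on-line estimation of the specific growth rate and a regulator tuned from that estimate, can be sketched for a bare-bones chemostat model dx/dt = (µ − D)x. Everything below (the gains, the setpoint, the sinusoidal µ) is an illustrative assumption, not the authors' algorithm:

```python
import numpy as np

# Regulate biomass x toward a setpoint by estimating the unknown, slowly
# varying specific growth rate mu on-line and tuning the dilution rate D.
dt, x_ref, k_p = 0.01, 2.0, 2.0
x, mu_hat, D = 0.5, 0.1, 0.1
lam = 0.05                       # estimator smoothing gain
for step in range(20000):
    mu = 0.30 + 0.05 * np.sin(2 * np.pi * step * dt / 50)  # true, unknown to controller
    x_new = x + dt * (mu - D) * x                          # plant (Euler step)
    mu_obs = (x_new - x) / (dt * x) + D                    # growth rate implied by data
    mu_hat += lam * (mu_obs - mu_hat)                      # recursive estimate
    D = mu_hat + k_p * (x_new - x_ref) / x_ref             # self-tuning control law
    x = x_new
print(f"biomass: {x:.3f} (setpoint {x_ref}), mu_hat: {mu_hat:.3f}")
```

With the estimate substituted into the control law, the closed loop behaves like dx/dt ≈ −k_p·x·(x − x_ref)/x_ref, so the biomass settles at the setpoint despite the drifting growth rate.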

  11. ADAPTIVE PARAMETER ESTIMATION OF PERSON RECOGNITION MODEL IN A STOCHASTIC HUMAN TRACKING PROCESS

    OpenAIRE

    W. Nakanishi; T. Fuse; T. Ishikawa

    2015-01-01

    This paper aims at the estimation of the parameters of person recognition models using a sequential Bayesian filtering method. In many human tracking methods, the parameters of the models used to recognize the same person in successive frames are set before the tracking process begins. In real situations, these parameters may change with the observation conditions and with the difficulty of predicting a person's position. Thus, in this paper we formulate an adaptive parameter estimation ...
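As a generic illustration of sequential Bayesian parameter estimation (the person-recognition model itself is not described in the abstract), the sketch below maintains a Gaussian posterior over one unknown scalar parameter and refines it with each new frame's observation:

```python
import numpy as np

# Sequential (conjugate Gaussian) Bayesian update of a scalar parameter:
# each "frame" contributes one noisy measurement, and the posterior mean
# and variance are updated recursively instead of being fixed in advance.
rng = np.random.default_rng(1)
theta_true, obs_var = 1.3, 0.5**2
mean, var = 0.0, 10.0            # diffuse prior
for _ in range(200):             # one noisy measurement per frame
    y = theta_true + rng.normal(scale=np.sqrt(obs_var))
    gain = var / (var + obs_var)           # Kalman-style gain
    mean += gain * (y - mean)              # posterior mean update
    var = (1 - gain) * var                 # posterior variance shrinks
print(f"estimate {mean:.2f} +/- {np.sqrt(var):.3f}")
```

The shrinking posterior variance is what lets such a filter adapt: early frames move the estimate a lot, later frames only fine-tune it.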

  12. Cosmic Infrared Background Fluctuations in Deep Spitzer Infrared Array Camera Images: Data Processing and Analysis

    Science.gov (United States)

    Arendt, Richard; Kashlinsky, A.; Moseley, S.; Mather, J.

    2010-01-01

    This paper provides a detailed description of the data reduction and analysis procedures that have been employed in our previous studies of spatial fluctuation of the cosmic infrared background (CIB) using deep Spitzer Infrared Array Camera observations. The self-calibration we apply removes a strong instrumental signal from the fluctuations that would otherwise corrupt the results. The procedures and results for masking bright sources and modeling faint sources down to levels set by the instrumental noise are presented. Various tests are performed to demonstrate that the resulting power spectra of these fields are not dominated by instrumental or procedural effects. These tests indicate that the large-scale (≳30') fluctuations that remain in the deepest fields are not directly related to the galaxies that are bright enough to be individually detected. We provide the parameterization of these power spectra in terms of separate instrument noise, shot noise, and power-law components. We discuss the relationship between fluctuations measured at different wavelengths and depths, and the relations between constraints on the mean intensity of the CIB and its fluctuation spectrum. Consistent with growing evidence that the ∼1-5 μm mean intensity of the CIB may not be as far above the integrated emission of resolved galaxies as has been reported in some analyses of DIRBE and IRTS observations, our measurements of spatial fluctuations of the CIB intensity indicate the mean emission from the objects producing the fluctuations is quite low (≳1 nW m⁻² sr⁻¹ at 3-5 μm), and thus consistent with current γ-ray absorption constraints. The source of the fluctuations may be high-z Population III objects, or a more local component of very low luminosity objects with clustering properties that differ from the resolved galaxies. Finally, we discuss the prospects of the upcoming space-based surveys to directly measure the epochs

  13. COSMIC INFRARED BACKGROUND FLUCTUATIONS IN DEEP SPITZER INFRARED ARRAY CAMERA IMAGES: DATA PROCESSING AND ANALYSIS

    International Nuclear Information System (INIS)

    Arendt, Richard G.; Kashlinsky, A.; Moseley, S. H.; Mather, J.

    2010-01-01

    This paper provides a detailed description of the data reduction and analysis procedures that have been employed in our previous studies of spatial fluctuation of the cosmic infrared background (CIB) using deep Spitzer Infrared Array Camera observations. The self-calibration we apply removes a strong instrumental signal from the fluctuations that would otherwise corrupt the results. The procedures and results for masking bright sources and modeling faint sources down to levels set by the instrumental noise are presented. Various tests are performed to demonstrate that the resulting power spectra of these fields are not dominated by instrumental or procedural effects. These tests indicate that the large-scale (≳30') fluctuations that remain in the deepest fields are not directly related to the galaxies that are bright enough to be individually detected. We provide the parameterization of these power spectra in terms of separate instrument noise, shot noise, and power-law components. We discuss the relationship between fluctuations measured at different wavelengths and depths, and the relations between constraints on the mean intensity of the CIB and its fluctuation spectrum. Consistent with growing evidence that the ∼1-5 μm mean intensity of the CIB may not be as far above the integrated emission of resolved galaxies as has been reported in some analyses of DIRBE and IRTS observations, our measurements of spatial fluctuations of the CIB intensity indicate the mean emission from the objects producing the fluctuations is quite low (≳1 nW m⁻² sr⁻¹ at 3-5 μm), and thus consistent with current γ-ray absorption constraints. The source of the fluctuations may be high-z Population III objects, or a more local component of very low luminosity objects with clustering properties that differ from the resolved galaxies. Finally, we discuss the prospects of the upcoming space-based surveys to directly measure the epochs inhabited by the populations producing these
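The decomposition of a measured power spectrum into a white shot-noise term plus a power-law clustering term can be illustrated with a linear least-squares fit once the power-law index is fixed. All values below (the index, amplitudes, and noise level) are hypothetical placeholders, not numbers from the paper:

```python
import numpy as np

# Fit P(q) = C + A * q**-n with a fixed index n: linear in (C, A),
# so ordinary least squares recovers both components.
rng = np.random.default_rng(2)
q = np.linspace(0.1, 10.0, 200)              # spatial frequency (arbitrary units)
C_true, A_true, n = 2.0, 5.0, 1.0
P = C_true + A_true * q**-n
P *= 1 + 0.02 * rng.normal(size=q.size)      # mild measurement scatter
design = np.column_stack([np.ones_like(q), q**-n])
(C_fit, A_fit), *_ = np.linalg.lstsq(design, P, rcond=None)
print(f"shot noise ~ {C_fit:.2f}, power-law amplitude ~ {A_fit:.2f}")
```

The shot-noise component dominates at high q and the power-law component at low q, which is why the fit can separate them cleanly.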

  14. The adaptation process of mothers raising a child with complex congenital heart disease.

    Science.gov (United States)

    Ahn, Jeong-Ah; Lee, Sunhee

    2018-01-01

    Mothers of children with congenital heart disease (CHD) tend to be concerned about their child's normal life. The majority of these mothers tend to experience negative psychological problems. In this study, the adaptation process of mothers raising a child with complex CHD was investigated based on the sociocultural context of Korea. The data collection was conducted by in-depth interviews and theoretical sampling was performed until the data were saturated. The collected data were analyzed using continuous theoretical comparisons. The results of the present study showed that the core category in the mothers' adaptation process was 'anxiety regarding the future', and the mothers' adaptation process consisted of the impact phase, standing against phase, and accepting phase. In the impact phase, the participants emotionally fluctuated between 'feelings of abandonment' and 'entertaining hope'. In the standing against phase, participants tended to dedicate everything to child-rearing while being affected by 'being encouraged by support' and 'being frustrated by tasks beyond their limits'. In the accepting phase, the subjects attempted to 'accept the child as is', 'resist hard feelings', and 'share hope'. Health-care providers need to develop programs that include information regarding CHD, how to care for a child with CHD, and effective child-rearing behaviors.

  15. Image processing system design for microcantilever-based optical readout infrared arrays

    Science.gov (United States)

    Tong, Qiang; Dong, Liquan; Zhao, Yuejin; Gong, Cheng; Liu, Xiaohua; Yu, Xiaomei; Yang, Lei; Liu, Weiyu

    2012-12-01

    Compared with traditional infrared imaging technology, the new optical-readout uncooled infrared imaging technology based on MEMS has many advantages, such as low cost, small size, and simple fabrication. In addition, theory predicts a high thermal detection sensitivity for this technology, so it has very broad application prospects in the field of high-performance infrared detection. This paper focuses on an image capturing and processing system for this MEMS-based optical-readout uncooled infrared imaging technology. The system consists of software and hardware. We build the core image-processing hardware platform around TI's high-performance TMS320DM642 DSP, and design the image capturing board around the MT9P031, Micron's high-frame-rate, low-power CMOS image sensor. Finally, we design the network output board around Intel's LXT971A network transceiver. The software is built on the real-time operating system DSP/BIOS. We develop the video capture driver based on TI's class/mini-driver model and the network output program based on the NDK toolkit, for image capturing, processing, and transmission. Experiments show that the system offers high capture resolution and fast processing speed, with network transmission speeds of up to 100 Mbps.

  16. A Sparsity-Based Approach to 3D Binaural Sound Synthesis Using Time-Frequency Array Processing

    Science.gov (United States)

    Cobos, Maximo; Lopez, Jose J.; Spors, Sascha

    2010-12-01

    Localization of sounds in physical space plays a very important role in multiple audio-related disciplines, such as music, telecommunications, and audiovisual productions. Binaural recording is the most commonly used method to provide an immersive sound experience by means of headphone reproduction. However, it requires a very specific recording setup using high-fidelity microphones mounted in a dummy head. In this paper, we present a novel processing framework for binaural sound recording and reproduction that avoids the use of dummy heads, which is specially suitable for immersive teleconferencing applications. The method is based on a time-frequency analysis of the spatial properties of the sound picked up by a simple tetrahedral microphone array, assuming source sparseness. The experiments carried out using simulations and a real-time prototype confirm the validity of the proposed approach.
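The source-sparseness assumption underlying the method, namely that each time-frequency bin of a mixture is dominated by a single source (approximate W-disjoint orthogonality), can be checked numerically on toy signals. The signals and the 10:1 dominance threshold below are illustrative choices, not values from the paper:

```python
import numpy as np

# Measure what fraction of total energy lies in STFT bins that one of two
# toy sources dominates by at least 10:1 in magnitude.
fs, n = 8000, 8000
t = np.arange(n) / fs
s1 = np.sin(2 * np.pi * 440 * t)                                      # steady 440 Hz tone
s2 = np.sin(2 * np.pi * 1200 * t) * (np.sin(2 * np.pi * 2 * t) > 0)   # gated 1200 Hz tone
win, hop = 512, 256
w = np.hanning(win)
dom = []
for i in range(0, n - win, hop):
    S1 = np.abs(np.fft.rfft(s1[i:i + win] * w))
    S2 = np.abs(np.fft.rfft(s2[i:i + win] * w))
    mask = (S1 > 10 * S2) | (S2 > 10 * S1)   # bins one source dominates 10:1
    energy = S1**2 + S2**2
    dom.append(energy[mask].sum() / energy.sum())
sparse_frac = float(np.mean(dom))
print(f"energy in single-source-dominated bins: {sparse_frac:.0%}")
```

Because almost all of the energy sits in bins owned by a single source, per-bin spatial analysis of an array recording can attribute each bin to one direction, which is the premise the paper builds on.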

  17. Transgender women and the Gender Reassignment Process: subjection experiences, suffering and pleasure in body adaptation

    Directory of Open Access Journals (Sweden)

    Analídia Rodolpho Petry

    Full Text Available OBJECTIVE: This article seeks to understand the experiences of transgender women in relation to the hormone therapy and sex reassignment surgery that make up the Gender Reassignment Process. METHOD: It is a qualitative study within the field of cultural and gender studies. Data collection used narrative interviews, conducted in 2010 and 2011, with seven transsexual women who had been undergoing the Gender Reassignment Process for at least two years. The data were submitted to thematic analysis. RESULTS: The results show that the transformation processes for construction of the female body include behavior adaptation, posture modification, voice modulation, hormone use, vaginal canal dilation, and surgical complications. Such processes subject the body to being built according to an idealized image that fits the gender identity, entailing both pleasures and afflictions. CONCLUSION: We conclude that the discussion involving the Gender Reassignment Process offers contributions to nursing regarding the body changes experienced by transgender women.

  18. Numerical microstructural analysis of automotive-grade steels when joined with an array of welding processes

    International Nuclear Information System (INIS)

    Gould, J.E.; Khurana, S.P.; Li, T.

    2004-01-01

    Weld strength, formability, and impact resistance for joints on automotive steels are dependent on the underlying microstructure. A martensitic weld area is often a precursor to reduced mechanical performance. In this paper, efforts are made to predict underlying joint microstructures for a range of processing approaches, steel types, and gauges. This was done first by calculating cooling rates for some typical automotive welding processes [resistance spot welding (RSW), resistance mash seam welding (RMSEW), laser beam welding (LBW), and gas metal arc welding (GMAW)]. Then, critical cooling rates for martensite formation were calculated for a range of automotive steels using an available thermodynamically based phase transformation model. These were then used to define combinations of process type, steel type, and gauge for which welds could be formed while avoiding martensite in the weld-area microstructure.

  19. Flat-plate solar array project process development area process research of non-CZ silicon material

    Science.gov (United States)

    1985-01-01

    Three sets of samples were laser processed and then cell processed. The laser processing was carried out on P-type and N-type web at laser power levels from 0.5 joule/sq cm to 2.5 joule/sq cm. Six different liquid dopants were tested (3 phosphorus dopants, 2 boron dopants, 1 aluminum dopant). The laser processed web strips were fabricated into solar cells immediately after laser processing and after various annealing cycles. Spreading resistance measurements made on a number of these samples indicate that the N(+)P (phosphorus-doped) junction is approx. 0.2 micrometers deep and suitable for solar cells. However, the P(+)N (or P(+)P) junction is very shallow (<0.1 micrometers) with a low surface concentration and a resulting high resistance. Due to this effect, the fabricated cells are of low efficiency. The maximum efficiency attained was 9.6% on P-type web after a 700 C anneal. The main reason for the low efficiency was a high series resistance in the cell due to a high-resistance back contact.

  20. The Contextualized Technology Adaptation Process (CTAP): Optimizing Health Information Technology to Improve Mental Health Systems.

    Science.gov (United States)

    Lyon, Aaron R; Wasse, Jessica Knaster; Ludwig, Kristy; Zachry, Mark; Bruns, Eric J; Unützer, Jürgen; McCauley, Elizabeth

    2016-05-01

    Health information technologies have become a central fixture in the mental healthcare landscape, but few frameworks exist to guide their adaptation to novel settings. This paper introduces the contextualized technology adaptation process (CTAP) and presents data collected during Phase 1 of its application to measurement feedback system development in school mental health. The CTAP is built on models of human-centered design and implementation science and incorporates repeated mixed methods assessments to guide the design of technologies to ensure high compatibility with a destination setting. CTAP phases include: (1) Contextual evaluation, (2) Evaluation of the unadapted technology, (3) Trialing and evaluation of the adapted technology, (4) Refinement and larger-scale implementation, and (5) Sustainment through ongoing evaluation and system revision. Qualitative findings from school-based practitioner focus groups are presented, which provided information for CTAP Phase 1, contextual evaluation, surrounding education sector clinicians' workflows, types of technologies currently available, and influences on technology use. Discussion focuses on how findings will inform subsequent CTAP phases, as well as their implications for future technology adaptation across content domains and service sectors.

  1. The Contextualized Technology Adaptation Process (CTAP): Optimizing Health Information Technology to Improve Mental Health Systems

    Science.gov (United States)

    Lyon, Aaron R.; Wasse, Jessica Knaster; Ludwig, Kristy; Zachry, Mark; Bruns, Eric J.; Unützer, Jürgen; McCauley, Elizabeth

    2015-01-01

    Health information technologies have become a central fixture in the mental healthcare landscape, but few frameworks exist to guide their adaptation to novel settings. This paper introduces the Contextualized Technology Adaptation Process (CTAP) and presents data collected during Phase 1 of its application to measurement feedback system development in school mental health. The CTAP is built on models of human-centered design and implementation science and incorporates repeated mixed methods assessments to guide the design of technologies to ensure high compatibility with a destination setting. CTAP phases include: (1) Contextual evaluation, (2) Evaluation of the unadapted technology, (3) Trialing and evaluation of the adapted technology, (4) Refinement and larger-scale implementation, and (5) Sustainment through ongoing evaluation and system revision. Qualitative findings from school-based practitioner focus groups are presented, which provided information for CTAP Phase 1, contextual evaluation, surrounding education sector clinicians’ workflows, types of technologies currently available, and influences on technology use. Discussion focuses on how findings will inform subsequent CTAP phases, as well as their implications for future technology adaptation across content domains and service sectors. PMID:25677251

  2. Optimizing laser beam profiles using micro-lens arrays for efficient material processing: applications to solar cells

    Science.gov (United States)

    Hauschild, Dirk; Homburg, Oliver; Mitra, Thomas; Ivanenko, Mikhail; Jarczynski, Manfred; Meinschien, Jens; Bayer, Andreas; Lissotschenko, Vitalij

    2009-02-01

    High-power laser sources are used in various production tools for microelectronic products and solar cells, in applications including annealing, lithography, edge isolation, dicing, and patterning. Besides the right choice of laser source, suitable high-performance optics for generating the appropriate beam profile and intensity distribution are of great importance for the right processing speed, quality, and yield. Equally important for industrial applications is an adequate understanding of the physics of the light-matter interaction behind the process. Simulating tool performance in advance can minimize technical and financial risk as well as lead times for prototyping and introduction into series production. LIMO has developed its own simulation software, founded on the Maxwell equations, that takes into account all important physical aspects of the laser-based process: the light source, the beam-shaping optical system, and the light-matter interaction. Based on this knowledge, together with a unique free-form micro-lens array production technology and patented micro-optic beam-shaping designs, a number of novel solar cell production tool sub-systems have been built. The basic functionalities, design principles, and performance results are presented, with special emphasis on resilience, cost reduction, and process reliability.

  3. ZnO nanorods arrays with Ag nanoparticles on the (002) plane derived by liquid epitaxy growth and electrodeposition process

    International Nuclear Information System (INIS)

    Yin Xingtian; Que Wenxiu; Shen Fengyu

    2011-01-01

    Well-aligned ZnO nanorod (NR) arrays with Ag nanoparticles (NPs) on the (002) plane are obtained by combining a liquid epitaxy technique with an electrodeposition process. A cyclic voltammetry study is employed to understand the electrochemical behavior of the electrodeposition system, and a potentiostatic method is employed to deposit silver NPs on the ZnO NRs in an electrolyte with an Ag⁺ concentration of 1 mM. X-ray diffraction analysis is used to study the crystalline properties of the as-prepared samples, and energy-dispersive X-ray spectroscopy is adopted to confirm the composition at the surface of the deposited samples. Results indicate that only a small quantity of silver can be deposited on the surface of the samples. The effects of the deposition potential and time on the morphological properties of the resultant Ag NPs/ZnO NRs are investigated in detail. Scanning electron microscopy and transmission electron microscopy images indicate that Ag NPs with a large dispersion in diameter can be deposited on the (002) plane of the ZnO NRs by a single potentiostatic deposition process, while dense Ag NPs with a much smaller diameter dispersion, most of which locate on the conical tips of the ZnO NRs, can be obtained by a two-step potentiostatic deposition process. A mechanism for this deposition process is also suggested.

  4. Processing-Efficient Distributed Adaptive RLS Filtering for Computationally Constrained Platforms

    Directory of Open Access Journals (Sweden)

    Noor M. Khan

    2017-01-01

    Full Text Available In this paper, a novel processing-efficient architecture for a group of inexpensive and computationally limited small platforms is proposed for a parallely distributed adaptive signal processing (PDASP) operation. The proposed architecture runs computationally expensive procedures, such as the complex adaptive recursive least squares (RLS) algorithm, cooperatively. The proposed PDASP architecture operates properly even if perfect time alignment among the participating platforms is not available. An RLS algorithm with application to MIMO channel estimation is deployed on the proposed architecture. The complexity and processing time of the PDASP scheme with the MIMO RLS algorithm are compared with those of the sequentially operated MIMO RLS algorithm and the linear Kalman filter. It is observed that the PDASP scheme exhibits much lower computational complexity per platform than the sequential MIMO RLS algorithm as well as the Kalman filter. Moreover, for a low Doppler rate, the proposed architecture reduces processing time by 95.83% and 82.29% compared to the sequentially operated Kalman filter and MIMO RLS algorithm, respectively. Likewise, for a high Doppler rate, the proposed architecture reduces processing time by 94.12% and 77.28% compared to the Kalman and RLS algorithms, respectively.
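For reference, the exponentially weighted RLS recursion that the PDASP architecture distributes across platforms can be sketched in its ordinary single-node form. This is a minimal sketch with assumed channel taps, not the distributed scheme itself:

```python
import numpy as np

# Single-node exponentially weighted RLS identifying a 4-tap FIR channel.
rng = np.random.default_rng(3)
w_true = np.array([0.8, -0.4, 0.2, 0.1])  # assumed channel (illustrative)
lam, delta, M = 0.99, 100.0, 4
w = np.zeros(M)
P = delta * np.eye(M)                     # inverse-correlation estimate
x = np.zeros(M)                           # regressor (tapped delay line)
for _ in range(2000):
    x = np.roll(x, 1)
    x[0] = rng.normal()
    d = w_true @ x + 1e-3 * rng.normal()  # desired signal: channel output + noise
    k = P @ x / (lam + x @ P @ x)         # gain vector
    e = d - w @ x                         # a priori error
    w += k * e                            # weight update
    P = (P - np.outer(k, x @ P)) / lam    # inverse-correlation (Riccati) update
print("estimated taps:", np.round(w, 3))
```

The O(M²) update of P per sample is the expensive part that motivates spreading the computation over several cheap platforms.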

  5. Processing Optimization of Typed Resources with Synchronized Storage and Computation Adaptation in Fog Computing

    Directory of Open Access Journals (Sweden)

    Zhengyang Song

    2018-01-01

    Full Text Available The wide application of Internet of Things (IoT) systems has created increasing demand for hardware facilities to process various resources, including data, information, and knowledge. With the rapid growth in the quantity of generated resources, it is difficult to adapt to this situation using traditional cloud computing models. Fog computing enables storage and computing services to be performed at the edge of the network to extend cloud computing. However, Fog computing applications face problems such as restricted computation, limited storage, and expensive network bandwidth, and it is a challenge to balance the distribution of network resources. We propose a processing optimization mechanism for typed resources with synchronized storage and computation adaptation in Fog computing. In this mechanism, we process typed resources in a wireless-network-based three-tier architecture consisting of a Data Graph, an Information Graph, and a Knowledge Graph. The proposed mechanism aims to minimize the processing cost over network, computation, and storage while maximizing processing performance in a business-value-driven manner. Simulation results show that the proposed approach improves the ratio of performance over user investment. Meanwhile, conversions between resource types deliver support for dynamically allocating network resources.

  6. Investigation of cold extrusion process using coupled thermo-mechanical FEM analysis and adaptive friction modeling

    Science.gov (United States)

    Görtan, Mehmet Okan

    2017-10-01

    Cold extrusion processes are known for their excellent material usage as well as high efficiency in the production of large batches. Although the process starts at room temperature, workpiece temperatures may rise above 200°C. Moreover, contact normal stresses can exceed 2500 MPa, whereas surface enlargement values can reach up to 30. These changes affect the friction coefficients in cold extrusion processes. In the current study, friction coefficients between a plain carbon steel C4C (1.0303) and a tool steel (1.2379) are determined as functions of temperature and contact pressure using the sliding compression test (SCT). In order to represent contact normal stress and temperature effects on friction coefficients, an empirical adaptive friction model has been proposed. The validity of the model has been tested with experiments and finite element simulations of a cold forward extrusion process. By using the proposed adaptive friction model together with thermo-mechanical analysis, the deviation in process loads between numerical simulations and model experiments could be reduced from 18.6% to 3.3%.
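    An adaptive friction model of the kind described makes the coefficient a function of contact pressure and temperature rather than a constant. The following is a minimal hedged sketch; the functional form (power-law pressure decay plus a linear temperature term) and all constants are hypothetical illustrations, not taken from the paper.

    ```python
    def friction_coefficient(p_mpa, t_celsius,
                             mu0=0.12, p_ref=500.0, a=0.3, b=0.001, t_ref=20.0):
        """Illustrative adaptive friction law: the coefficient falls with contact
        pressure (power law) and rises mildly with temperature (linear term).
        All parameter values here are invented for demonstration."""
        return mu0 * (p_ref / p_mpa) ** a * (1.0 + b * (t_celsius - t_ref))

    # At extrusion-like conditions (2500 MPa, 200 degrees C) this illustrative law
    # predicts a lower coefficient than at the reference state (500 MPa, 20 degrees C).
    mu_extrusion = friction_coefficient(2500.0, 200.0)
    mu_reference = friction_coefficient(500.0, 20.0)
    ```

    In an FE simulation such a function would simply be evaluated per contact element from the local pressure and temperature fields instead of assuming a single global coefficient.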

  7. A review of culturally adapted versions of the Oswestry Disability Index: the adaptation process, construct validity, test-retest reliability and internal consistency.

    Science.gov (United States)

    Sheahan, Peter J; Nelson-Wong, Erika J; Fischer, Steven L

    2015-01-01

    The Oswestry Disability Index (ODI) is a self-report-based outcome measure used to quantify the extent of disability related to low back pain (LBP), a substantial contributor to workplace absenteeism. The ODI tool has been adapted for use by patients in several non-English speaking nations. It is unclear, however, if these adapted versions of the ODI are as credible as the original ODI developed for English-speaking nations. The objective of this study was to conduct a review of the literature to identify culturally adapted versions of the ODI and to report on the adaptation process, construct validity, test-retest reliability and internal consistency of these ODIs. Following a pragmatic review process, data were extracted from each study with regard to these four outcomes. While most studies applied adaptation processes in accordance with best-practice guidelines, there were some deviations. However, all studies reported high-quality psychometric properties: group mean construct validity was 0.734 ± 0.094 (indicated via a correlation coefficient), test-retest reliability was 0.937 ± 0.032 (indicated via an intraclass correlation coefficient) and internal consistency was 0.876 ± 0.047 (indicated via Cronbach's alpha). Researchers can be confident when using any of these culturally adapted ODIs, or when comparing and contrasting results between cultures where these versions were employed. Implications for Rehabilitation Low back pain is the second leading cause of disability in the world, behind only cancer. The Oswestry Disability Index (ODI) has been developed as a self-report outcome measure of low back pain for administration to patients. An understanding of the various cross-cultural adaptations of the ODI is important for more concerted multi-national research efforts. This review examines 16 cross-cultural adaptations of the ODI and should inform the work of health care and rehabilitation professionals.
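    The internal-consistency figure reported above is Cronbach's alpha. A minimal sketch of the standard formula follows, on invented scores; real psychometric work would use dedicated statistics packages.

    ```python
    from statistics import pvariance

    def cronbach_alpha(item_scores):
        """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
        item_scores is a list of per-item score lists over the same respondents."""
        k = len(item_scores)
        totals = [sum(scores) for scores in zip(*item_scores)]
        item_var = sum(pvariance(item) for item in item_scores)
        return k / (k - 1) * (1 - item_var / pvariance(totals))

    # Invented scores for 5 respondents on 3 perfectly consistent items.
    items = [[1, 2, 3, 4, 5], [2, 3, 4, 5, 6], [1, 2, 3, 4, 5]]
    alpha = cronbach_alpha(items)  # perfectly consistent items -> 1.0
    ```

    With real questionnaire data the items are not perfectly correlated, so alpha lands below 1; the review's pooled value of about 0.88 indicates strong but not redundant item agreement.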

  8. Assessing the Depth of Cognitive Processing as the Basis for Potential User-State Adaptation

    Science.gov (United States)

    Nicolae, Irina-Emilia; Acqualagna, Laura; Blankertz, Benjamin

    2017-01-01

    Objective: Decoding neurocognitive processes on a single-trial basis with Brain-Computer Interface (BCI) techniques can reveal the user's internal interpretation of the current situation. Such information can potentially be exploited to make devices and interfaces more user aware. In this line of research, we took a further step by studying neural correlates of different levels of cognitive processes and developing a method that allows one to quantify how deeply presented information is processed in the brain. Methods/Approach: Seventeen participants took part in an EEG study in which we evaluated different levels of cognitive processing (no processing, shallow, and deep processing) within three distinct domains (memory, language, and visual imagination). Our investigations showed gradual differences in the amplitudes of event-related potentials (ERPs) and in the extent and duration of event-related desynchronization (ERD), which both correlate with task difficulty. We performed multi-modal classification to map the measured correlates of neurocognitive processing to the corresponding level of processing. Results: Successful classification of the neural components was achieved, which reflects the level of cognitive processing performed by the participants. The results show performances above chance level for each participant and a mean performance of 70–90% for all conditions and classification pairs. Significance: The successful estimation of the level of cognition on a single-trial basis supports the feasibility of user-state adaptation based on ongoing neural activity. There is a variety of potential use cases, such as a user-friendly adaptive design of an interface or the development of assistance systems in safety-critical workplaces. PMID:29046625

  9. Assessing the Depth of Cognitive Processing as the Basis for Potential User-State Adaptation

    Directory of Open Access Journals (Sweden)

    Irina-Emilia Nicolae

    2017-10-01

    Full Text Available Objective: Decoding neurocognitive processes on a single-trial basis with Brain-Computer Interface (BCI) techniques can reveal the user's internal interpretation of the current situation. Such information can potentially be exploited to make devices and interfaces more user aware. In this line of research, we took a further step by studying neural correlates of different levels of cognitive processes and developing a method that allows one to quantify how deeply presented information is processed in the brain. Methods/Approach: Seventeen participants took part in an EEG study in which we evaluated different levels of cognitive processing (no processing, shallow, and deep processing) within three distinct domains (memory, language, and visual imagination). Our investigations showed gradual differences in the amplitudes of event-related potentials (ERPs) and in the extent and duration of event-related desynchronization (ERD), which both correlate with task difficulty. We performed multi-modal classification to map the measured correlates of neurocognitive processing to the corresponding level of processing. Results: Successful classification of the neural components was achieved, which reflects the level of cognitive processing performed by the participants. The results show performances above chance level for each participant and a mean performance of 70–90% for all conditions and classification pairs. Significance: The successful estimation of the level of cognition on a single-trial basis supports the feasibility of user-state adaptation based on ongoing neural activity. There is a variety of potential use cases, such as a user-friendly adaptive design of an interface or the development of assistance systems in safety-critical workplaces.

  10. Assessing the Depth of Cognitive Processing as the Basis for Potential User-State Adaptation.

    Science.gov (United States)

    Nicolae, Irina-Emilia; Acqualagna, Laura; Blankertz, Benjamin

    2017-01-01

    Objective: Decoding neurocognitive processes on a single-trial basis with Brain-Computer Interface (BCI) techniques can reveal the user's internal interpretation of the current situation. Such information can potentially be exploited to make devices and interfaces more user aware. In this line of research, we took a further step by studying neural correlates of different levels of cognitive processes and developing a method that allows one to quantify how deeply presented information is processed in the brain. Methods/Approach: Seventeen participants took part in an EEG study in which we evaluated different levels of cognitive processing (no processing, shallow, and deep processing) within three distinct domains (memory, language, and visual imagination). Our investigations showed gradual differences in the amplitudes of event-related potentials (ERPs) and in the extent and duration of event-related desynchronization (ERD), which both correlate with task difficulty. We performed multi-modal classification to map the measured correlates of neurocognitive processing to the corresponding level of processing. Results: Successful classification of the neural components was achieved, which reflects the level of cognitive processing performed by the participants. The results show performances above chance level for each participant and a mean performance of 70-90% for all conditions and classification pairs. Significance: The successful estimation of the level of cognition on a single-trial basis supports the feasibility of user-state adaptation based on ongoing neural activity. There is a variety of potential use cases, such as a user-friendly adaptive design of an interface or the development of assistance systems in safety-critical workplaces.
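    The classification step in studies like this can be sketched with a simple nearest-centroid rule on a one-dimensional feature (e.g., mean ERP amplitude in a time window). The data below are simulated, not the study's EEG recordings, and the real pipeline combines many channels and features.

    ```python
    import random

    def class_centroids(xs, ys):
        """Per-class mean of a 1-D feature, e.g. mean ERP amplitude in a window."""
        out = {}
        for c in set(ys):
            vals = [x for x, y in zip(xs, ys) if y == c]
            out[c] = sum(vals) / len(vals)
        return out

    def predict(model, x):
        """Assign the class whose centroid is nearest to the feature value."""
        return min(model, key=lambda c: abs(x - model[c]))

    random.seed(1)
    # Simulated single-trial features: "deep" processing shifted by 2 units.
    train_x = [random.gauss(0, 1) for _ in range(100)] + [random.gauss(2, 1) for _ in range(100)]
    train_y = ["shallow"] * 100 + ["deep"] * 100
    test_x = [random.gauss(0, 1) for _ in range(100)] + [random.gauss(2, 1) for _ in range(100)]
    test_y = ["shallow"] * 100 + ["deep"] * 100

    model = class_centroids(train_x, train_y)
    acc = sum(predict(model, x) == y for x, y in zip(test_x, test_y)) / len(test_y)
    ```

    Even this toy setup lands well above the 50% chance level when the class means are separated, which is the same above-chance criterion the study applies per participant.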

  11. Defense Profiles in Adaptation Process to Sport Competition and Their Relationships with Coping, Stress and Control

    Directory of Open Access Journals (Sweden)

    Michel Nicolas

    2017-12-01

    Full Text Available The purpose of this study was to identify the potentially distinct defense profiles of athletes in order to provide insight into the complex associations that can exist between defenses and other important variables tied to performance in sports (e.g., coping, perceived stress and control) and to further our understanding of the complexity of the adaptation process in sports. Two hundred and ninety-six (N = 296) athletes participated in a naturalistic study that involved a highly stressful situation: a sports competition. Participants were assessed before and after the competition. Hierarchical cluster analysis and a series of MANOVAs with post hoc comparisons indicated two stable defense profiles (high and low defense profiles) of athletes both before and during sport competition. These profiles differed with regards to coping, stress and control. Athletes with high defense profiles reported higher levels of coping strategies, perceived stress and control than athletes with low defense profiles. This study confirmed that defenses are involved in the psychological adaptation process and that research and intervention should not be based only on coping, but rather must include defense mechanisms in order to improve our understanding of psychological adaptation in competitive sports.

  12. Compensation for the signal processing characteristics of ultrasound B-mode scanners in adaptive speckle reduction.

    Science.gov (United States)

    Crawford, D C; Bell, D S; Bamber, J C

    1993-01-01

    A systematic method to compensate for nonlinear amplification of individual ultrasound B-scanners has been investigated in order to optimise performance of an adaptive speckle reduction (ASR) filter for a wide range of clinical ultrasonic imaging equipment. Three potential methods have been investigated: (1) a method involving an appropriate selection of the speckle recognition feature was successful when the scanner signal processing executes simple logarithmic compressions; (2) an inverse transform (decompression) of the B-mode image was effective in correcting for the measured characteristics of image data compression when the algorithm was implemented in full floating point arithmetic; (3) characterising the behaviour of the statistical speckle recognition feature under conditions of speckle noise was found to be the method of choice for implementation of the adaptive speckle reduction algorithm in limited precision integer arithmetic. In this example, the statistical features of variance and mean were investigated. The third method may be implemented on commercially available fast image processing hardware and is also better suited for transfer into dedicated hardware to facilitate real-time adaptive speckle reduction. A systematic method is described for obtaining ASR calibration data from B-mode images of a speckle producing phantom.
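    Adaptive speckle reduction of this family typically scales smoothing by comparing the local variance against an assumed noise variance: where the local statistics look like pure speckle the output falls back to the local mean, and where variance is high (real structure) the original sample is preserved. A one-dimensional Lee-type sketch under that assumption (not the paper's scanner-compensated implementation):

    ```python
    import random

    def lee_filter_1d(signal, window=5, noise_var=0.01):
        """Adaptive (Lee-type) speckle filter on a 1-D signal.
        noise_var is the assumed variance of pure speckle noise."""
        half = window // 2
        out = []
        for i in range(len(signal)):
            seg = signal[max(0, i - half):i + half + 1]
            mean = sum(seg) / len(seg)
            var = sum((s - mean) ** 2 for s in seg) / len(seg)
            # Gain k -> 0 for pure noise (smooth), k -> 1 for strong structure (keep).
            k = max(0.0, var - noise_var) / var if var > 0 else 0.0
            out.append(mean + k * (signal[i] - mean))
        return out

    random.seed(2)
    noisy = [1.0 + random.gauss(0.0, 0.1) for _ in range(200)]   # pure speckle around 1.0
    smooth = lee_filter_1d(noisy, window=5, noise_var=0.01)
    var_in = sum((x - 1.0) ** 2 for x in noisy) / len(noisy)
    var_out = sum((x - 1.0) ** 2 for x in smooth) / len(smooth)
    ```

    The paper's contribution is essentially about making the `noise_var`-style assumption valid after each scanner's nonlinear amplification, either by decompressing the image or by recalibrating the speckle statistics.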

  13. Adaptive constructive processes and memory accuracy: Consequences of counterfactual simulations in young and older adults

    Science.gov (United States)

    Gerlach, Kathy D.; Dornblaser, David W.; Schacter, Daniel L.

    2013-01-01

    People frequently engage in counterfactual thinking: mental simulations of alternative outcomes to past events. Like simulations of future events, counterfactual simulations serve adaptive functions. However, future simulation can also result in various kinds of distortions and has thus been characterized as an adaptive constructive process. Here we approach counterfactual thinking as such and examine whether it can distort memory for actual events. In Experiments 1a/b, young and older adults imagined themselves experiencing different scenarios. Participants then imagined the same scenario again, engaged in no further simulation of a scenario, or imagined a counterfactual outcome. On a subsequent recognition test, participants were more likely to make false alarms to counterfactual lures than novel scenarios. Older adults were more prone to these memory errors than younger adults. In Experiment 2, younger and older participants selected and performed different actions, then recalled performing some of those actions, imagined performing alternative actions to some of the selected actions, and did not imagine others. Participants, especially older adults, were more likely to falsely remember counterfactual actions than novel actions as previously performed. The findings suggest that counterfactual thinking can cause source confusion based on internally generated misinformation, consistent with its characterization as an adaptive constructive process. PMID:23560477

  14. Adaptive constructive processes and memory accuracy: consequences of counterfactual simulations in young and older adults.

    Science.gov (United States)

    Gerlach, Kathy D; Dornblaser, David W; Schacter, Daniel L

    2014-01-01

    People frequently engage in counterfactual thinking: mental simulations of alternative outcomes to past events. Like simulations of future events, counterfactual simulations serve adaptive functions. However, future simulation can also result in various kinds of distortions and has thus been characterised as an adaptive constructive process. Here we approach counterfactual thinking as such and examine whether it can distort memory for actual events. In Experiments 1a/b young and older adults imagined themselves experiencing different scenarios. Participants then imagined the same scenario again, engaged in no further simulation of a scenario, or imagined a counterfactual outcome. On a subsequent recognition test participants were more likely to make false alarms to counterfactual lures than novel scenarios. Older adults were more prone to these memory errors than younger adults. In Experiment 2 younger and older participants selected and performed different actions, then recalled performing some of those actions, imagined performing alternative actions to some of the selected actions, and did not imagine others. Participants, especially older adults, were more likely to falsely remember counterfactual actions than novel actions as previously performed. The findings suggest that counterfactual thinking can cause source confusion based on internally generated misinformation, consistent with its characterisation as an adaptive constructive process.

  15. Defense Profiles in Adaptation Process to Sport Competition and Their Relationships with Coping, Stress and Control.

    Science.gov (United States)

    Nicolas, Michel; Martinent, Guillaume; Drapeau, Martin; Chahraoui, Khadija; Vacher, Philippe; de Roten, Yves

    2017-01-01

    The purpose of this study was to identify the potentially distinct defense profiles of athletes in order to provide insight into the complex associations that can exist between defenses and other important variables tied to performance in sports (e.g., coping, perceived stress and control) and to further our understanding of the complexity of the adaptation process in sports. Two hundred and ninety-six ( N = 296) athletes participated in a naturalistic study that involved a highly stressful situation: a sports competition. Participants were assessed before and after the competition. Hierarchical cluster analysis and a series of MANOVAs with post hoc comparisons indicated two stable defense profiles (high and low defense profiles) of athletes both before and during sport competition. These profiles differed with regards to coping, stress and control. Athletes with high defense profiles reported higher levels of coping strategies, perceived stress and control than athletes with low defense profiles. This study confirmed that defenses are involved in the psychological adaptation process and that research and intervention should not be based only on coping, but rather must include defense mechanisms in order to improve our understanding of psychological adaptation in competitive sports.
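    The profile-extraction step, hierarchical clustering into two groups, can be sketched with a naive centroid-linkage agglomeration. The one-dimensional "defense scores" below are invented; the study clustered athletes on multivariate defense questionnaire scores.

    ```python
    def agglomerate(points, n_clusters=2):
        """Naive agglomerative clustering (centroid linkage) on 1-D scores:
        repeatedly merge the two clusters whose centroids are closest."""
        clusters = [[p] for p in points]
        while len(clusters) > n_clusters:
            best = None
            for i in range(len(clusters)):
                for j in range(i + 1, len(clusters)):
                    ci = sum(clusters[i]) / len(clusters[i])
                    cj = sum(clusters[j]) / len(clusters[j])
                    d = abs(ci - cj)
                    if best is None or d < best[0]:
                        best = (d, i, j)
            _, i, j = best
            clusters[i] = clusters[i] + clusters[j]   # merge the closest pair
            del clusters[j]
        return clusters

    scores = [0.2, 0.3, 0.25, 0.8, 0.9, 0.85]  # hypothetical defense-use scores
    low, high = sorted(agglomerate(scores), key=lambda c: sum(c) / len(c))
    ```

    The two resulting groups play the role of the "low" and "high" defense profiles that the MANOVAs then compare on coping, stress, and control.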

  16. Basal ganglia-dependent processes in recalling learned visual-motor adaptations.

    Science.gov (United States)

    Bédard, Patrick; Sanes, Jerome N

    2011-03-01

    Humans learn and remember motor skills to permit adaptation to a changing environment. During adaptation, the brain develops new sensory-motor relationships that become stored in an internal model (IM) that may be retained for extended periods. How the brain learns new IMs and transforms them into long-term memory remains incompletely understood since prior work has mostly focused on the learning process. A current model suggests that basal ganglia, cerebellum, and their neocortical targets actively participate in forming new IMs but that a cerebellar cortical network would mediate automatization. However, a recent study (Marinelli et al. 2009) reported that patients with Parkinson's disease (PD), who have basal ganglia dysfunction, had similar adaptation rates as controls but demonstrated no savings at recall tests (24 and 48 h). Here, we assessed whether a longer training session, a feature known to increase long-term retention of IM in healthy individuals, could allow PD patients to demonstrate savings. We recruited PD patients and age-matched healthy adults and used a visual-motor adaptation paradigm similar to the study by Marinelli et al. (2009), doubling the number of training trials and assessed recall after a short and a 24-h delay. We hypothesized that a longer training session would allow PD patients to develop an enhanced representation of the IM as demonstrated by savings at the recall tests. Our results showed that PD patients had similar adaptation rates as controls but did not demonstrate savings at both recall tests. We interpret these results as evidence that fronto-striatal networks have involvement in the early to late phase of motor memory formation, but not during initial learning.

  17. ADAPT: building conceptual models of the physical and biological processes across permafrost landscapes

    Science.gov (United States)

    Allard, M.; Vincent, W. F.; Lemay, M.

    2012-12-01

    Fundamental and applied permafrost research is called upon in Canada in support of environmental protection, economic development and for contributing to the international efforts in understanding climatic and ecological feedbacks of permafrost thawing under a warming climate. The five year "Arctic Development and Adaptation to Permafrost in Transition" program (ADAPT) funded by NSERC brings together 14 scientists from 10 Canadian universities and involves numerous collaborators from academia, territorial and provincial governments, Inuit communities and industry. The geographical coverage of the program encompasses all of the permafrost regions of Canada. Field research at a series of sites across the country is being coordinated. A common protocol for measuring ground thermal and moisture regime, characterizing terrain conditions (vegetation, topography, surface water regime and soil organic matter contents) is being applied in order to provide inputs for designing a general model to provide an understanding of transfers of energy and matter in permafrost terrain, and the implications for biological and human systems. The ADAPT mission is to produce an 'Integrated Permafrost Systems Science' framework that will be used to help generate sustainable development and adaptation strategies for the North in the context of rapid socio-economic and climate change. ADAPT has three major objectives: to examine how changing precipitation and warming temperatures affect permafrost geosystems and ecosystems, specifically by testing hypotheses concerning the influence of the snowpack, the effects of water as a conveyor of heat, sediments, and carbon in warming permafrost terrain and the processes of permafrost decay; to interact directly with Inuit communities, the public sector and the private sector for development and adaptation to changes in permafrost environments; and to train the new generation of experts and scientists in this critical domain of research in Canada

  18. The influence of creativity on the process of adaptation in the period of teenagers’ crisis

    Directory of Open Access Journals (Sweden)

    Chernaya Yu.S.

    2017-05-01

    Full Text Available this paper studies the influence of regular pictorial creativity class and the environment of creative groups on overcoming the adolescent crisis. Each of 60 students was given a battery of tests. Psychological adaptation, self-esteem and level of aspiration, identity, the subjective sense of loneliness and school anxiety have been studied. The data of descriptive statistics, Mann-Whitney U criterion for nonparametric tests for two independent samples has been processed. It is concluded that adolescents in non-permanent creative groups have a reduced level of neuropsychic adaptation and self-esteem and also high levels of subjective loneliness and frustration in achieving success, compared with adolescents from the constant creative and uncreative groups.

  19. Development of a Process for a High Capacity Arc Heater Production of Silicon for Solar Arrays

    Science.gov (United States)

    Reed, W. H.

    1979-01-01

    A program was established to develop a high temperature silicon production process using existing electric arc heater technology. Silicon tetrachloride and a reductant (sodium) are injected into an arc heated mixture of hydrogen and argon. Under these high temperature conditions, a very rapid reaction is expected to occur and proceed essentially to completion, yielding silicon and gaseous sodium chloride. Techniques for high temperature separation and collection were developed. Included in this report are: test system preparation; testing; injection techniques; kinetics; reaction demonstration; conclusions; and the project status.

  20. Low cost silicon solar array project large area silicon sheet task: Silicon web process development

    Science.gov (United States)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Blais, P. D.; Davis, J. R., Jr.

    1977-01-01

    Growth configurations were developed which produced crystals having low residual stress levels. The properties of a 106 mm diameter round crucible were evaluated and it was found that this design had greatly enhanced temperature fluctuations arising from convection in the melt. Thermal modeling efforts were directed to developing finite element models of the 106 mm round crucible and an elongated susceptor/crucible configuration. Also, the thermal model for the heat loss modes from the dendritic web was examined for guidance in reducing the thermal stress in the web. An economic analysis was prepared to evaluate the silicon web process in relation to price goals.

  1. Flexible Description and Adaptive Processing of Earth Observation Data through the BigEarth Platform

    Science.gov (United States)

    Gorgan, Dorian; Bacu, Victor; Stefanut, Teodor; Nandra, Cosmin; Mihon, Danut

    2016-04-01

    The Earth Observation data repositories extending periodically by several terabytes become a critical issue for organizations. The management of the storage capacity of such big datasets, accessing policy, data protection, searching, and complex processing require high costs that impose efficient solutions to balance the cost and value of data. Data can create value only when it is used, and the data protection has to be oriented toward allowing innovation that sometimes depends on creative people, which achieve unexpected valuable results through a flexible and adaptive manner. The users need to describe and experiment themselves different complex algorithms through analytics in order to valorize data. The analytics uses descriptive and predictive models to gain valuable knowledge and information from data analysis. Possible solutions for advanced processing of big Earth Observation data are given by the HPC platforms such as cloud. With platforms becoming more complex and heterogeneous, the developing of applications is even harder and the efficient mapping of these applications to a suitable and optimum platform, working on huge distributed data repositories, is challenging and complex as well, even by using specialized software services. From the user point of view, an optimum environment gives acceptable execution times, offers a high level of usability by hiding the complexity of computing infrastructure, and supports an open accessibility and control to application entities and functionality. The BigEarth platform [1] supports the entire flow of flexible description of processing by basic operators and adaptive execution over cloud infrastructure [2]. The basic modules of the pipeline such as the KEOPS [3] set of basic operators, the WorDeL language [4], the Planner for sequential and parallel processing, and the Executor through virtual machines, are detailed as the main components of the BigEarth platform [5]. 
The presentation exemplifies the development

  2. Low cost solar array project production process and equipment task: A Module Experimental Process System Development Unit (MEPSDU)

    Science.gov (United States)

    1981-01-01

    Several major modifications were made to the design presented at the PDR. The frame was deleted in favor of a "frameless" design which will provide a substantially improved cell packing factor. Potential shaded-cell damage resulting from operation into a short circuit can be eliminated by a change in the cell series/parallel electrical interconnect configuration. The baseline process sequence defined for the MEPSDU was refined and equipment design and specification work was completed. SAMICS cost analysis work accelerated, Format A's were prepared and computer simulations completed. Design work on the automated cell interconnect station was focused on bond technique selection experiments.

  3. Co-Prime Frequency and Aperture Design for HF Surveillance, Wideband Radar Imaging, and Nonstationary Array Processing

    Science.gov (United States)

    2018-03-01

    to develop novel co-prime sampling and array design strategies that achieve high-resolution estimation of spectral power distributions and signal...by the array geometry and the frequency offset. We overcome this limitation by introducing a novel sparsity-based multi-target localization approach...estimation using a sparse uniform linear array with two CW signals of co-prime frequencies,” IEEE International Workshop on Computational Advances

  4. Adaptive Fault Detection for Complex Dynamic Processes Based on JIT Updated Data Set

    Directory of Open Access Journals (Sweden)

    Jinna Li

    2012-01-01

    Full Text Available A novel fault detection technique is proposed to explicitly account for the nonlinear, dynamic, and multimodal problems existing in practical and complex dynamic processes. The just-in-time (JIT) detection method and the k-nearest neighbor (KNN) rule-based statistical process control (SPC) approach are integrated to construct a flexible and adaptive detection scheme for control processes with nonlinear, dynamic, and multimodal behavior. The Mahalanobis distance, representing the correlation among samples, is used to simplify and update the raw data set, which is the first merit of this paper. Based on it, the control limit is computed in terms of both the KNN rule and the SPC method, so that we can identify online whether the current data are normal or not. Note that the control limit changes as the database is updated, yielding an adaptive fault detection technique that can effectively eliminate the impact of data drift and shift on detection performance, which is the second merit of this paper. The efficiency of the developed method is demonstrated by numerical examples and an industrial case.
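    The KNN-distance monitoring idea can be sketched as follows: a control limit is taken from an empirical quantile of leave-one-out KNN distances of normal data, and a new sample is flagged when its KNN distance exceeds that limit. For brevity the sketch uses one-dimensional Euclidean distance in place of the paper's Mahalanobis-based variant, and it omits the JIT database updating.

    ```python
    import random

    def knn_distance(train, x, k=3):
        """Mean distance from x to its k nearest neighbours in the normal data."""
        return sum(sorted(abs(x - t) for t in train)[:k]) / k

    def control_limit(train, k=3, quantile=0.99):
        """Empirical limit: a high quantile of leave-one-out KNN distances,
        so ~1% of genuinely normal samples would be flagged."""
        loo = sorted(
            knn_distance(train[:i] + train[i + 1:], train[i], k)
            for i in range(len(train))
        )
        return loo[int(quantile * (len(loo) - 1))]

    random.seed(3)
    normal = [random.gauss(0.0, 1.0) for _ in range(200)]   # normal operating data
    limit = control_limit(normal)

    def is_fault(x):
        return knn_distance(normal, x) > limit

    in_control = is_fault(0.0)    # sample inside the normal cloud
    out_of_control = is_fault(8.0)  # sample far from any training data
    ```

    The adaptive element of the paper corresponds to recomputing `limit` whenever the reference database is pruned or extended, so the threshold follows slow process drift.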

  5. A qualitative approach of psychosocial adaptation process in patients undergoing long-term hemodialysis.

    Science.gov (United States)

    Lin, Chun-Chih; Han, Chin-Yen; Pan, I-Ju

    2015-03-01

    Professional hemodialysis (HD) nursing tends to be task-oriented and to lack consideration of the client's viewpoint. This study aims to interpret the process of psychosocial adaptation to HD in people with end-stage renal disease (ESRD). A grounded theory approach guided the study. Theoretical sampling included 15 people receiving HD at the HD center of a hospital from July to November 2010. Participants received a written information sheet, a verbal invitation, and informed consent forms before interviews were conducted. Constant comparative data analysis was performed using open, axial and selective coding, with the computer software ATLAS.ti assisting data management. Credibility, transferability, dependability, and confirmability ensured the rigor of the study process. This study identified "adopting life with hemodialysis", which captures the process of psychosocial adaptation in people with ESRD as one of transformation. The four categories that evolved from "adopting HD life" are (a) slipping into, (b) restricted to a renal world, (c) losing self-control, and (d) stuck in an endless process. The findings indicate the multidimensional requirements of people receiving maintenance dialysis, with an emphasis on the deficiency in psychosocial and emotional care. The study's findings contribute to clinical practice by increasing the understanding of the experience of chronic HD treatment from the recipient's viewpoint. The better our understanding, the better the care provided will meet the needs of people receiving HD. Copyright © 2015. Published by Elsevier B.V.

  6. Development of a process for high capacity arc heater production of silicon for solar arrays

    Science.gov (United States)

    Meyer, T. N.

    1980-01-01

    A high temperature silicon production process using existing electric arc heater technology is discussed. Silicon tetrachloride and a reductant, liquid sodium, were injected into an arc heated mixture of hydrogen and argon. Under these high temperature conditions, a very rapid reaction occurred, yielding silicon and gaseous sodium chloride. Techniques for high temperature separation and collection of the molten silicon were developed. The desired degree of separation was not achieved. The electrical, control and instrumentation, cooling water, gas, SiCl4, and sodium systems are discussed. The plasma reactor, silicon collection, effluent disposal, the gas burnoff stack, and decontamination and safety are also discussed. Procedure manuals, shakedown testing, data acquisition and analysis, product characterization, disassembly and decontamination, and component evaluation are reviewed.

  7. Site Effect Assessment of Earthquake Ground Motion Based on Advanced Data Processing of Microtremor Array Measurements

    Science.gov (United States)

    Liu, L.; He, K.; Mehl, R.; Wang, W.; Chen, Q.

    2008-12-01

    High-resolution near-surface geologic information is essential for earthquake ground motion prediction. The near-surface geology forms the critical constituent influencing seismic wave propagation, which is known as the local site effect. We have collected microtremor data at over 1000 sites in the Beijing area for extracting the much needed earthquake engineering parameters (primarily sediment thickness, with shear wave velocity profiling at a few important control points) in this heavily populated urban area. Advanced data processing algorithms are employed at various stages in assessing the local site effect on earthquake ground motion. First, we used the empirical mode decomposition (EMD), also known as the Hilbert-Huang transform (HHT), to enhance the microtremor data analysis by excluding local transients and continuous monochromatic industrial noise. With this enhancement we have significantly increased the number of data points useful in delineating sediment thickness in this area. Second, we have used the cross-correlation of microtremor data acquired at pairs of adjacent sites to generate a 'pseudo-reflection' record, which can be treated as the Green's function of the 1D layered earth model at the site. The sediment thickness information obtained this way is also consistent with the results obtained by the horizontal to vertical spectral ratio method (HVSR). For most sites in this area, we can achieve 'self consistent' results among the different processing schemes regarding the sediment thickness - the fundamental information to be used in assessing the local site effect. Finally, the pseudo-spectral time domain method was used to simulate the seismic wave propagation caused by a scenario earthquake in this area - the 1679 M8 Sanhe-Pinggu earthquake. The characteristics of the simulated earthquake ground motion show a general correlation with the thickness of the sediments in this area. 
And more importantly, it is also in agreement
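    The HVSR method used above as a cross-check divides the horizontal by the vertical amplitude spectrum of a three-component microtremor record. A bare-bones sketch (the function name, the geometric-mean combination of horizontals, and all parameters are illustrative choices, not taken from the study; a production HVSR code would add windowing, smoothing, and segment averaging):

```python
import numpy as np

def hvsr(north, east, vertical, fs, nfft=1024):
    """Horizontal-to-vertical spectral ratio for one microtremor record.
    Minimal sketch: no windowing, smoothing, or averaging over segments."""
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    n_amp = np.abs(np.fft.rfft(north, nfft))     # amplitude spectra
    e_amp = np.abs(np.fft.rfft(east, nfft))
    v_amp = np.abs(np.fft.rfft(vertical, nfft))
    h_amp = np.sqrt(n_amp * e_amp)               # combine the horizontals
    return freqs[1:], h_amp[1:] / v_amp[1:]      # drop the DC bin

# Usage: synthetic three-component ambient noise sampled at 100 Hz
rng = np.random.default_rng(0)
n, e, v = rng.standard_normal((3, 4096))
freqs, ratio = hvsr(n, e, v, fs=100.0)
```

    The peak of the smoothed ratio is conventionally read as the site's fundamental resonance frequency, from which sediment thickness is inferred.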

  8. Flat-plate solar array project process development area: Process research of non-CZ silicon material

    Science.gov (United States)

    Campbell, R. B.

    1986-01-01

    Several different techniques to simultaneously diffuse the front and back junctions in dendritic web silicon were investigated. A successful simultaneous diffusion reduces the cost of the solar cell by reducing the number of processing steps, the amount of capital equipment, and the labor cost. The three techniques studied were: (1) simultaneous diffusion at standard temperatures and times using a tube type diffusion furnace or a belt furnace; (2) diffusion using excimer laser drive-in; and (3) simultaneous diffusion at high temperature and short times using a pulse of high intensity light as the heat source. The use of an excimer laser and high temperature short time diffusion experiment were both more successful than the diffusion at standard temperature and times. The three techniques are described in detail and a cost analysis of the more successful techniques is provided.

  9. A dual-directional light-control film with a high-sag and high-asymmetrical-shape microlens array fabricated by a UV imprinting process

    International Nuclear Information System (INIS)

    Lin, Ta-Wei; Liao, Yunn-Shiuan; Chen, Chi-Feng; Yang, Jauh-Jung

    2008-01-01

    A dual-directional light-control film with a high-sag and high-asymmetric-shape long gapless hexagonal microlens array fabricated by an ultraviolet (UV) imprinting process is presented. Such a lens array is designed by ray-tracing simulation and fabricated by a micro-replication process including gray-scale lithography, an electroplating process, and UV curing. The shape of the designed lens array is similar to that of a near half-cylindrical lens array with a periodic ripple. The measurement results of a prototype show that incident light from a collimated LED with a dispersion-angle FWHM of 12° is spread differently along the short and long axes. The numerical and experimental results show that the FWHMs of the viewing angle for angular brightness in the long- and short-axis directions through the long hexagonal lens are about 34.3° and 18.1°, and 31° and 13°, respectively. Compared with the simulation result, the errors in the long and short axes are about 5% and 16%, respectively. The asymmetric gapless microlens array thus achieves the aim of controlled asymmetric angular brightness. Such a light-control film can be used as a power-saving screen, compared with a conventional diffusing film, for rear-projection display applications.

  10. Process Research On Polycrystalline Silicon Material (PROPSM). [flat plate solar array project

    Science.gov (United States)

    Culik, J. S.

    1983-01-01

    The performance-limiting mechanisms in large-grain (greater than 1 to 2 mm in diameter) polycrystalline silicon solar cells were investigated by fabricating a matrix of 4 sq cm solar cells of various thickness from 10 cm x 10 cm polycrystalline silicon wafers of several bulk resistivities. Analysis of the illuminated I-V characteristics of these cells suggests that bulk recombination is the dominant factor limiting the short-circuit current. The average open-circuit voltage of the polycrystalline solar cells is 30 to 70 mV lower than that of co-processed single-crystal cells; the fill-factor is comparable. Both open-circuit voltage and fill-factor of the polycrystalline cells have substantial scatter that is not related to either thickness or resistivity. This implies that these characteristics are sensitive to an additional mechanism that is probably spatial in nature. A damage-gettering heat-treatment improved the minority-carrier diffusion length in low lifetime polycrystalline silicon, however, extended high temperature heat-treatment degraded the lifetime.

  11. Adaptation of the IBM ECR [electric cantilever robot] robot to plutonium processing applications

    International Nuclear Information System (INIS)

    Armantrout, G.A.; Pedrotti, L.R.; Halter, E.A.; Crossfield, M.

    1990-12-01

    The changing regulatory climate in the US is adding increasing incentive to reduce operator dose and TRU waste for DOE plutonium processing operations. To help achieve that goal the authors have begun adapting a small commercial overhead gantry robot, the IBM electric cantilever robot (ECR), to plutonium processing applications. Steps are being taken to harden this robot to withstand the dry, often abrasive, environment within a plutonium glove box and to protect the electronic components against alpha radiation. A mock-up processing system for the reduction of the oxide to a metal was prepared and successfully demonstrated. Design of a working prototype is now underway using the results of this mock-up study. 7 figs., 4 tabs

  12. Case for a field-programmable gate array multicore hybrid machine for an image-processing application

    Science.gov (United States)

    Rakvic, Ryan N.; Ives, Robert W.; Lira, Javier; Molina, Carlos

    2011-01-01

    General purpose computer designers have recently begun adding cores to their processors in order to increase performance. For example, Intel has adopted a homogeneous quad-core processor as a base for general purpose computing. PlayStation3 (PS3) game consoles contain a multicore heterogeneous processor known as the Cell, which is designed to perform complex image processing algorithms at a high level. Can modern image-processing algorithms utilize these additional cores? On the other hand, modern advancements in configurable hardware, most notably field-programmable gate arrays (FPGAs) have created an interesting question for general purpose computer designers. Is there a reason to combine FPGAs with multicore processors to create an FPGA multicore hybrid general purpose computer? Iris matching, a repeatedly executed portion of a modern iris-recognition algorithm, is parallelized on an Intel-based homogeneous multicore Xeon system, a heterogeneous multicore Cell system, and an FPGA multicore hybrid system. Surprisingly, the cheaper PS3 slightly outperforms the Intel-based multicore on a core-for-core basis. However, both multicore systems are beaten by the FPGA multicore hybrid system by >50%.

  13. Demonstration of array eddy current technology for real-time monitoring of laser powder bed fusion additive manufacturing process

    Science.gov (United States)

    Todorov, Evgueni; Boulware, Paul; Gaah, Kingsley

    2018-03-01

    Nondestructive evaluation (NDE) at various fabrication stages is required to assure quality of feedstock and solid builds. Industry efforts are shifting towards solutions that can provide real-time monitoring of the additive manufacturing (AM) fabrication process layer by layer while the component is being built, to reduce or eliminate dependence on post-process inspection. An array eddy current (AEC) electromagnetic NDE technique was developed and implemented to directly scan the component without physical contact with the powder and fused layer surfaces at elevated temperatures inside an L-PBF chamber. The technique can detect discontinuities, surface irregularities, and undesirable metallurgical phase transformations in magnetic and nonmagnetic conductive materials used for laser fusion. The AEC hardware and software were integrated with the L-PBF test bed. Two layer-by-layer tests of Inconel 625 coupons with AM-built discontinuities and lack-of-fusion defects were conducted inside the L-PBF chamber. The AEC technology demonstrated excellent sensitivity to seeded, natural surface, and near-surface-embedded discontinuities, while also detecting surface topography. The data were acquired and imaged in a layer-by-layer sequence, demonstrating the real-time monitoring capabilities of this new technology.

  14. OFDM Radar Space-Time Adaptive Processing by Exploiting Spatio-Temporal Sparsity

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Satyabrata [ORNL

    2013-01-01

    We propose a sparsity-based space-time adaptive processing (STAP) algorithm to detect a slowly-moving target using an orthogonal frequency division multiplexing (OFDM) radar. We observe that the target and interference spectra are inherently sparse in the spatio-temporal domain. Hence, we exploit that sparsity to develop an efficient STAP technique that utilizes a considerably smaller amount of secondary data yet matches the performance of existing STAP techniques. In addition, the use of an OFDM signal increases the frequency diversity of our system, as different scattering centers of a target resonate at different frequencies, and thus improves the target detectability. First, we formulate a realistic sparse-measurement model for an OFDM radar considering both the clutter and jammer as the interfering sources. Then, we apply a residual sparse-recovery technique based on the LASSO estimator to estimate the target and interference covariance matrices, and subsequently compute the optimal STAP-filter weights. Our numerical results demonstrate a comparative performance analysis of the proposed sparse-STAP algorithm with four other existing STAP methods. Furthermore, we discover that the OFDM-STAP filter weights are adaptable to the frequency variabilities of the target and interference responses, in addition to the spatio-temporal variabilities. Hence, by better utilizing the frequency variabilities, we propose an adaptive OFDM-waveform design technique, and consequently gain a significant STAP-performance improvement.

  15. Developing Quality Assurance Processes for Image-Guided Adaptive Radiation Therapy

    International Nuclear Information System (INIS)

    Yan Di

    2008-01-01

    Quality assurance has long been implemented in radiation treatment as systematic actions necessary to provide adequate confidence that the radiation oncology service will satisfy the given requirements for quality care. The existing reports from the American Association of Physicists in Medicine Task Groups 40 and 53 have provided highly detailed QA guidelines for conventional radiotherapy and treatment planning. However, advanced treatment processes recently developed with emerging high technology have introduced new QA requirements that have not been addressed previously in the conventional QA program. Therefore, it is necessary to expand the existing QA guidelines to also include new considerations. Image-guided adaptive radiation therapy (IGART) is a closed-loop treatment process that is designed to include the individual treatment information, such as patient-specific anatomic variation and delivered dose assessed during the therapy course in treatment evaluation and planning optimization. Clinical implementation of IGART requires high levels of automation in image acquisition, registration, segmentation, treatment dose construction, and adaptive planning optimization, which brings new challenges to the conventional QA program. In this article, clinical QA procedures for IGART are outlined. The discussion focuses on the dynamic or four-dimensional aspects of the IGART process, avoiding overlap with conventional QA guidelines

  16. Stakeholder participation and sustainable fisheries: an integrative framework for assessing adaptive comanagement processes

    Directory of Open Access Journals (Sweden)

    Christian Stöhr

    2014-09-01

    Full Text Available Adaptive comanagement (ACM has been suggested as the way to successfully achieve sustainable environmental governance. Despite excellent research, the field still suffers from underdeveloped frameworks of causality. To address this issue, we suggest a framework that integrates the structural frame of Plummer and Fitzgibbons' "adaptive comanagement" with the specific process characteristics of Senecah's "Trinity of Voice." The resulting conceptual hybrid is used to guide the comparison of two cases of stakeholder participation in fisheries management - the Swedish Co-management Initiative and the Polish Fisheries Roundtable. We examine how different components of preconditions and the process led to the observed outcomes. The analysis shows that despite the different cultural and ecological contexts, the cases developed similar results. Triggered by a crisis, the participating stakeholders were successful in developing trust and better communication and enhanced learning. This can be traced back to a combination of respected leadership, skilled mediation, and a strong focus on deliberative approaches and the creation of respectful dialogue. We also discuss the difficulties of integrating outcomes of the work of such initiatives into the actual decision-making process. Finally, we specify the lessons learned for the cases and the benefits of applying our integrated framework.

  17. Global References, Local Translation: Adaptation of the Bologna Process Degree Structure and Credit System at Universities in Cameroon

    Science.gov (United States)

    Eta, Elizabeth Agbor; Vubo, Emmanuel Yenshu

    2016-01-01

    This article uses temporal comparison and thematic analytical approaches to analyse text documents and interviews, examining the adaptation of the Bologna Process degree structure and credit system in two sub-systems of education in Cameroon: the Anglo-Saxon and the French systems. The central aim is to verify whether such adaptation has replaced,…

  18. Adaptive neural network controller for the molten steel level control of strip casting processes

    International Nuclear Information System (INIS)

    Chen, Hung Yi; Huang, Shiuh Jer

    2010-01-01

    The twin-roll strip casting process is a steel-strip production method which combines continuous casting and hot rolling processes. The production line from molten liquid steel to the final steel-strip is shortened and the production cost is reduced significantly as compared to conventional continuous casting. The quality of the strip casting process depends on many process parameters, such as the molten steel level in the pool, the solidification position, and the roll gap. Their relationships are complex and the strip casting process has the properties of nonlinear uncertainty and time-varying characteristics. It is difficult to establish an accurate process model for designing a model-based controller to monitor the strip quality. In this paper, a model-free adaptive neural network controller is developed to overcome this problem. The proposed control strategy is based on a neural network structure combined with a sliding-mode control scheme. An adaptive rule is employed to adjust the weights of the radial basis functions online by using the reaching condition of a specified sliding surface. This surface has the online learning ability to respond to the system's nonlinear and time-varying behaviors. Since this model-free controller has a simple control structure and a small number of control parameters, it is easy to implement. Simulation results, based on a semi-experimental system dynamic model and parameters, are presented to show the control performance of the proposed intelligent controller. In addition, the control performance is compared with that of a traditional PID controller.
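    The control structure described, an RBF network whose weights are adjusted online from the reaching condition of a sliding surface, can be sketched on a toy problem. Everything below (the first-order plant x' = sin(x) + u, the gains, and the basis centers) is illustrative; the actual strip-casting dynamics are far more involved:

```python
import numpy as np

# RBF network learns the unknown plant nonlinearity online; the weight
# update is driven by the sliding surface s (here simply the tracking
# error, so the sliding-mode structure reduces to its simplest form).
centers = np.linspace(-2.0, 2.0, 9)
phi = lambda x: np.exp(-(x - centers) ** 2 / 0.5)  # radial basis functions

w = np.zeros_like(centers)       # network weights (estimate of f(x))
k, eta, dt = 5.0, 20.0, 1e-3     # feedback gain, learning rate, time step
x, errs = 0.0, []
for step in range(5000):
    t = step * dt
    ref, ref_dot = np.sin(t), np.cos(t)   # reference trajectory
    s = x - ref                           # sliding surface / tracking error
    f_hat = w @ phi(x)                    # network estimate of the plant term
    u = ref_dot - f_hat - k * s           # certainty-equivalence control law
    w += eta * s * phi(x) * dt            # adaptive law from the reaching condition
    x += (np.sin(x) + u) * dt             # true (unknown) plant, Euler step
    errs.append(abs(s))
# the tracking error shrinks as the network absorbs the nonlinearity
```

    The adaptive law follows from a Lyapunov argument: with V = s²/2 + |w̃|²/(2η), choosing ẇ = ηsφ(x) cancels the cross term and leaves V̇ = -ks² ≤ 0.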

  19. Adapting high-level language programs for parallel processing using data flow

    Science.gov (United States)

    Standley, Hilda M.

    1988-01-01

    EASY-FLOW, a very high-level data flow language, is introduced for the purpose of adapting programs written in a conventional high-level language to a parallel environment. The level of parallelism provided is of the large-grained variety in which parallel activities take place between subprograms or processes. A program written in EASY-FLOW is a set of subprogram calls as units, structured by iteration, branching, and distribution constructs. A data flow graph may be deduced from an EASY-FLOW program.

  20. Processes of Metastudy: A Study of Psychosocial Adaptation to Childhood Chronic Health Conditions

    Directory of Open Access Journals (Sweden)

    David B. Nichola

    2006-03-01

    Full Text Available Metastudy introduces a systematically aggregated interpretive portrayal of a body of literature, based on saturation and the synthesis of findings. In this metastudy, the authors examined qualitative studies addressing psychosocial adaptation to childhood chronic health conditions, published over a 30-year period (1970–2000. They describe metastudy processes, including study identification, strategies for study search and retrieval, adjudication of difference in study design and rigor, and analysis of findings. They also illustrate metastudy components through examples drawn from this project and discuss implications for practice and recommendations.

  1. Implementation of RLS-based Adaptive Filters on nVIDIA GeForce Graphics Processing Unit

    OpenAIRE

    Hirano, Akihiro; Nakayama, Kenji

    2011-01-01

    This paper presents efficient implementation of RLS-based adaptive filters with a large number of taps on the nVIDIA GeForce graphics processing unit (GPU) and the CUDA software development environment. Modification of the order and the combination of calculations reduces the number of accesses to slow off-chip memory. Assigning tasks to multiple threads also takes memory access order into account. For a 4096-tap case, the GPU program is almost three times faster than a CPU program.
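    For context, the recursion being accelerated is the standard exponentially weighted RLS algorithm. A plain-CPU numpy reference with a small tap count (the GPU memory-access optimizations that are the paper's contribution are omitted, and the 4-tap identification problem is illustrative):

```python
import numpy as np

def rls(x, d, num_taps, lam=0.99, delta=100.0):
    """Exponentially weighted RLS adaptive FIR filter: track weights w
    so that w @ [x[n], x[n-1], ...] follows the desired signal d[n]."""
    w = np.zeros(num_taps)
    p = np.eye(num_taps) * delta               # inverse correlation matrix
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]    # tap-delay-line input
        k = p @ u / (lam + u @ p @ u)          # gain vector
        e = d[n] - w @ u                       # a priori error
        w += k * e                             # weight update
        p = (p - np.outer(k, u @ p)) / lam     # Riccati-style P update
    return w

# Usage: identify an unknown 4-tap FIR system from noisy observations
rng = np.random.default_rng(1)
h = np.array([0.5, -0.3, 0.2, 0.1])            # "unknown" system (illustrative)
x = rng.standard_normal(2000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w = rls(x, d, num_taps=4)                      # converges close to h
```

    The per-sample cost is O(N²) in the tap count N, which is what makes large-N cases worth moving to the GPU.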

  2. Experimental verification of preset time count rate meters based on adaptive digital signal processing algorithms

    Directory of Open Access Journals (Sweden)

    Žigić Aleksandar D.

    2005-01-01

    Full Text Available Experimental verifications of two optimized adaptive digital signal processing algorithms implemented in two preset time count rate meters were performed according to appropriate standards. A random pulse generator, realized using a personal computer, was used as an artificial radiation source for preliminary system tests and performance evaluations of the proposed algorithms. Then measurement results for background radiation levels were obtained. Finally, measurements with a natural radiation source, the radioisotope 90Sr-90Y, were carried out. Measurements conducted without and with the radioisotope, for the specified errors of 10% and 5%, agreed well with theoretical predictions.
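    The specified errors of 10% and 5% follow directly from Poisson counting statistics, where the relative standard deviation of N accumulated counts is 1/√N. A hypothetical numerical check (the source rate and sample count are illustrative, and the paper's adaptive algorithms are not reproduced here):

```python
import numpy as np

# For Poisson counting, rel. std of N counts is 1/sqrt(N), so 10% and 5%
# specified errors translate into presets of at least 100 and 400 counts.
def counts_needed(rel_err):
    return int(np.ceil(1.0 / rel_err ** 2))

# Monte-Carlo check with a simulated 50 count/s source and a 5% target
rng = np.random.default_rng(0)
rate, rel_err = 50.0, 0.05
t = counts_needed(rel_err) / rate              # preset counting time, s
estimates = rng.poisson(rate * t, size=20000) / t
observed = estimates.std() / estimates.mean()  # close to the 5% target
```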

  3. Quantitative Analysis of Rat Dorsal Root Ganglion Neurons Cultured on Microelectrode Arrays Based on Fluorescence Microscopy Image Processing.

    Science.gov (United States)

    Mari, João Fernando; Saito, José Hiroki; Neves, Amanda Ferreira; Lotufo, Celina Monteiro da Cruz; Destro-Filho, João-Batista; Nicoletti, Maria do Carmo

    2015-12-01

    Microelectrode Arrays (MEA) are devices for long-term electrophysiological recording of extracellular spontaneous or evoked activities of in vitro neuron cultures. This work proposes and develops a framework for quantitative and morphological analysis of neuron cultures on MEAs, by processing their corresponding images, acquired by fluorescence microscopy. The neurons are segmented from the fluorescence channel images using a combination of segmentation by thresholding, the watershed transform, and object classification. The positioning of microelectrodes is obtained from the transmitted light channel images using the circular Hough transform. The proposed method was applied to images of dissociated cultures of rat dorsal root ganglion (DRG) neuronal cells. The morphological and topological quantitative analysis carried out produced information regarding the state of the culture, such as population count, neuron-to-neuron and neuron-to-microelectrode distances, soma morphologies, neuron sizes, and neuron and microelectrode spatial distributions. Most analyses of microscopy images taken from neuronal cultures on MEAs consider only simple qualitative aspects. The proposed framework also aims to standardize the image processing and to compute quantitatively useful measures for integrated image-signal studies and further computational simulations. As the results show, the implemented microelectrode identification method is robust, and so are the implemented neuron segmentation and classification methods (with a correct segmentation rate of up to 84%). The quantitative information retrieved by the method is highly relevant to assist the integrated signal-image study of recorded electrophysiological signals as well as the physical aspects of the neuron culture on the MEA. Although the experiments deal with DRG cell images, cortical and hippocampal cell images could also be processed with small adjustments in the image processing parameter estimation.
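    The circular Hough transform used to locate the microelectrodes can be sketched compactly for a single known radius: every edge pixel votes for all candidate centers lying one radius away from it, and circle centers accumulate the most votes. The image size, radius, and angular discretization below are illustrative:

```python
import numpy as np

def hough_circle_centers(edges, radius):
    """Minimal circular Hough transform for one known radius: each edge
    pixel casts votes on the circle of candidate centers around it."""
    acc = np.zeros(edges.shape, dtype=float)
    ys, xs = np.nonzero(edges)
    for theta in np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False):
        cy = np.round(ys - radius * np.sin(theta)).astype(int)
        cx = np.round(xs - radius * np.cos(theta)).astype(int)
        ok = (cy >= 0) & (cy < acc.shape[0]) & (cx >= 0) & (cx < acc.shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1.0)
    return acc

# Usage: recover the center of a synthetic circle of radius 10 at (40, 60)
img = np.zeros((100, 120), dtype=bool)
t = np.linspace(0, 2 * np.pi, 200)
img[np.round(40 + 10 * np.sin(t)).astype(int),
    np.round(60 + 10 * np.cos(t)).astype(int)] = True
acc = hough_circle_centers(img, radius=10)
center = np.unravel_index(acc.argmax(), acc.shape)   # near (40, 60)
```

    In practice one accumulates over a range of radii and takes local maxima, which is how a regular electrode grid yields one peak per electrode.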

  4. A DETAILED GRAVITATIONAL LENS MODEL BASED ON SUBMILLIMETER ARRAY AND KECK ADAPTIVE OPTICS IMAGING OF A HERSCHEL-ATLAS SUBMILLIMETER GALAXY AT z = 4.243

    Energy Technology Data Exchange (ETDEWEB)

    Bussmann, R. S.; Gurwell, M. A. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Fu Hai; Cooray, A. [Department of Physics and Astronomy, University of California, Irvine, CA 92697 (United States); Smith, D. J. B.; Bonfield, D.; Dunne, L. [Centre for Astrophysics, Science and Technology Research Institute, University of Hertfordshire, Hatfield, Herts AL10 9AB (United Kingdom); Dye, S.; Eales, S. [School of Physics and Astronomy, University of Nottingham, University Park, Nottingham NG7 2RD (United Kingdom); Auld, R. [Cardiff University, School of Physics and Astronomy, Queens Buildings, The Parade, Cardiff CF24 3AA (United Kingdom); Baes, M.; Fritz, J. [Sterrenkundig Observatorium, Universiteit Gent, Krijgslaan 281 S9, B-9000 Gent (Belgium); Baker, A. J. [Department of Physics and Astronomy, Rutgers, the State University of New Jersey, 136 Frelinghuysen Road, Piscataway, NJ 08854-8019 (United States); Cava, A. [Departamento de Astrofisica, Facultad de CC. Fisicas, Universidad Complutense de Madrid, E-28040 Madrid (Spain); Clements, D. L.; Dariush, A. [Imperial College London, Blackett Laboratory, Prince Consort Road, London SW7 2AZ (United Kingdom); Coppin, K. [Department of Physics, McGill University, Ernest Rutherford Building, 3600 Rue University, Montreal, Quebec, H3A 2T8 (Canada); Dannerbauer, H. [Universitaet Wien, Institut fuer Astronomie, Tuerkenschanzstrasse 17, 1180 Wien, Oesterreich (Austria); De Zotti, G. [Universita di Padova, Dipto di Astronomia, Vicolo dell' Osservatorio 2, IT 35122, Padova (Italy); Hopwood, R., E-mail: rbussmann@cfa.harvard.edu [Department of Physics and Astronomy, Open University, Walton Hall, Milton Keynes, MK7 6AA (United Kingdom); and others

    2012-09-10

    We present high-spatial-resolution imaging obtained with the Submillimeter Array (SMA) at 880 μm and the Keck adaptive optics (AO) system in the K_s band of a gravitationally lensed submillimeter galaxy (SMG) at z = 4.243 discovered in the Herschel Astrophysical Terahertz Large Area Survey. The SMA data (angular resolution ≈ 0.6″) resolve the dust emission into multiple lensed images, while the Keck AO K_s-band data (angular resolution ≈ 0.1″) resolve the lens into a pair of galaxies separated by 0.3″. We present an optical spectrum of the foreground lens obtained with the Gemini-South telescope that provides a lens redshift of z_lens = 0.595 ± 0.005. We develop and apply a new lens modeling technique in the visibility plane that shows that the SMG is magnified by a factor of μ = 4.1 ± 0.2 and has an intrinsic infrared (IR) luminosity of L_IR = (2.1 ± 0.2) × 10^13 L_⊙. We measure a half-light radius of the background source of r_s = 4.4 ± 0.5 kpc, which implies an IR luminosity surface density of Σ_IR = (3.4 ± 0.9) × 10^11 L_⊙ kpc^-2, a value that is typical of z > 2 SMGs but significantly lower than IR-luminous galaxies at z ≈ 0. The two lens galaxies are compact (r_lens ≈ 0.9 kpc) early-types with Einstein radii of θ_E1 = 0.57″ ± 0.01″ and θ_E2 = 0.40″ ± 0.01″ that imply masses of M_lens1 = (7.4 ± 0.5) × 10^10 M_⊙ and M_lens2 = (3.7 ± 0.3) × 10^10 M_⊙. The two lensing galaxies are likely about to undergo a dissipationless merger, and the mass and size of the resultant system should be similar to other early-type galaxies at z ≈ 0.6. This work highlights the importance of high-spatial-resolution imaging in developing models of strongly lensed galaxies

  5. On-Line Testing and Reconfiguration of Field Programmable Gate Arrays (FPGAs) for Fault-Tolerant (FT) Applications in Adaptive Computing Systems (ACS)

    National Research Council Canada - National Science Library

    Abramovici, Miron

    2002-01-01

    Adaptive computing systems (ACS) rely on reconfigurable hardware to adapt the system operation to changes in the external environment, and to extend mission capability by implementing new functions on the same hardware platform...

  6. Development and testing of methods for adaptive image processing in odontology and medicine

    Energy Technology Data Exchange (ETDEWEB)

    Sund, Torbjoern

    2005-07-01

    Medical diagnostic imaging has undergone radical changes during the last ten years. In the early 1990s, the medical imaging department was almost exclusively film-based. Today, all major hospitals have converted to digital acquisition and handling of their diagnostic imaging, or are in the process of conversion. It is therefore important to investigate whether diagnostic reading of digitally acquired images on computer display screens can match or even surpass film recording and viewing. At the same time, the digitalisation opens new possibilities for image processing, which may challenge the traditional way of studying medical images. The current work explores some of the possibilities of digital processing techniques, and evaluates the results both by quantitative methods (ROC analysis) and by subjective qualification by real users. Summary of papers: Paper I: Locally adaptive image binarization with a sliding window threshold was used for the detection of bone ridges in radiotherapy portal images. A new thresholding criterion suitable for incremental update within the sliding window was developed, and it was shown that the algorithm gave better results on difficult portal images than various publicly available adaptive thresholding routines. For small windows the routine was also faster than an adaptive implementation of the Otsu algorithm that uses interpolation between fixed tiles, and the resulting images had equal quality. Paper II: It was investigated whether contrast enhancement by non-interactive, sliding window adaptive histogram equalization could enhance the diagnostic quality of intra-oral radiographs in the dental clinic. Three dentists read 22 periapical and 12 bitewing storage phosphor (SP) radiographs. For the periapical readings they graded the quality of the examination with regard to visually locating the root apex. For the bitewing readings they registered all occurrences of approximal caries on a confidence scale. Each reading was
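    The locally adaptive binarization of Paper I can be illustrated with a simple sliding-window scheme. In the sketch below the threshold criterion is a plain window mean plus a fixed offset, a stand-in for the paper's incrementally updatable criterion, and the window size, offset, and test image are all illustrative:

```python
import numpy as np

def adaptive_binarize(img, window=15, offset=0.02):
    """Locally adaptive binarization with a sliding-window threshold.
    Each pixel is compared against the mean of the window around it,
    computed in O(1) per pixel via an integral image."""
    pad = window // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    ii = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))          # zero row/col for box sums
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    sums = (ii[y + window, x + window] - ii[y, x + window]
            - ii[y + window, x] + ii[y, x])
    means = sums / (window * window)
    return img > means + offset

# Usage: two bright specks on a shaded background are isolated even
# though no single global threshold would separate them cleanly
img = np.linspace(0.0, 0.5, 50)[None, :] * np.ones((50, 1))
img[[10, 30], [10, 35]] += 1.0
b = adaptive_binarize(img)                     # flags the two specks
```

    A local threshold follows the background trend, which is why it copes with the uneven exposure typical of portal images where any global threshold fails.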

  7. Development and testing of methods for adaptive image processing in odontology and medicine

    International Nuclear Information System (INIS)

    Sund, Torbjoern

    2005-01-01

    Medical diagnostic imaging has undergone radical changes during the last ten years. In the early 1990s, the medical imaging department was almost exclusively film-based. Today, all major hospitals have converted to digital acquisition and handling of their diagnostic imaging, or are in the process of conversion. It is therefore important to investigate whether diagnostic reading of digitally acquired images on computer display screens can match or even surpass film recording and viewing. At the same time, the digitalisation opens new possibilities for image processing, which may challenge the traditional way of studying medical images. The current work explores some of the possibilities of digital processing techniques, and evaluates the results both by quantitative methods (ROC analysis) and by subjective qualification by real users. Summary of papers: Paper I: Locally adaptive image binarization with a sliding window threshold was used for the detection of bone ridges in radiotherapy portal images. A new thresholding criterion suitable for incremental update within the sliding window was developed, and it was shown that the algorithm gave better results on difficult portal images than various publicly available adaptive thresholding routines. For small windows the routine was also faster than an adaptive implementation of the Otsu algorithm that uses interpolation between fixed tiles, and the resulting images had equal quality. Paper II: It was investigated whether contrast enhancement by non-interactive, sliding window adaptive histogram equalization could enhance the diagnostic quality of intra-oral radiographs in the dental clinic. Three dentists read 22 periapical and 12 bitewing storage phosphor (SP) radiographs. For the periapical readings they graded the quality of the examination with regard to visually locating the root apex. For the bitewing readings they registered all occurrences of approximal caries on a confidence scale. Each reading was first

  8. Self-adaptive Green-Ampt infiltration parameters obtained from measured moisture processes

    Directory of Open Access Journals (Sweden)

    Long Xiang

    2016-07-01

    Full Text Available The Green-Ampt (G-A) infiltration model is often used to characterize the infiltration process in hydrology. The parameters of the G-A model are critical in applications for the prediction of infiltration and associated rainfall-runoff processes. Previous approaches to determining the G-A parameters have depended on pedotransfer functions (PTFs) or estimates from experimental results, usually without providing optimum values. In this study, rainfall simulators with soil moisture measurements were used to generate rainfall in various experimental plots. Observed runoff data and soil moisture dynamic data were jointly used to yield the infiltration processes, and an improved self-adaptive method was used to optimize the G-A parameters for various types of soil under different rainfall conditions. The two G-A parameters, i.e., the effective hydraulic conductivity and the effective capillary drive at the wetting front, were determined simultaneously to describe the relationships between rainfall, runoff, and infiltration processes. Through a designed experiment, the method for determining the G-A parameters was shown to be reliable in reflecting the effects of pedologic background in G-A type infiltration cases and deriving the optimum G-A parameters. Unlike PTF methods, this approach estimates the G-A parameters directly from infiltration curves obtained from rainfall simulation experiments so that it can be used to determine site-specific parameters. This study provides a self-adaptive method of optimizing the G-A parameters through designed field experiments. The parameters derived from field-measured rainfall-infiltration processes are more reliable and applicable to hydrological models.
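    The two optimized parameters enter the classical Green-Ampt relations, which give cumulative infiltration implicitly and are easily evaluated by fixed-point iteration. A sketch of the textbook forward model only (parameter values are illustrative; the paper's self-adaptive optimization against measured runoff is not reproduced):

```python
import numpy as np

def green_ampt(t, k_s, psi, d_theta, tol=1e-10):
    """Cumulative Green-Ampt infiltration F(t) under ponded conditions,
    from the implicit relation F = K*t + m*ln(1 + F/m) with
    m = psi*d_theta, solved by fixed-point iteration; the infiltration
    rate is then f = K*(1 + m/F)."""
    m = psi * d_theta                 # capillary drive times moisture deficit
    f_cum = k_s * t                   # starting guess
    for _ in range(200):
        f_new = k_s * t + m * np.log(1.0 + f_cum / m)
        if abs(f_new - f_cum) < tol:
            f_cum = f_new
            break
        f_cum = f_new
    rate = k_s * (1.0 + m / f_cum)
    return f_cum, rate

# Usage with illustrative silt-loam-like values:
# K = 0.65 cm/h, psi = 16.7 cm, d_theta = 0.3
F1, f1 = green_ampt(1.0, 0.65, 16.7, 0.3)
F2, f2 = green_ampt(2.0, 0.65, 16.7, 0.3)
# F grows and the rate f decays toward K as the wetting front advances
```

    The iteration converges because the right-hand side is a contraction in F, its derivative being 1/(1 + F/m) < 1.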

  9. Adaptability and specificity of inhibition processes in distractor-induced blindness.

    Science.gov (United States)

    Winther, Gesche N; Niedeggen, Michael

    2017-12-01

    In a rapid serial visual presentation task, inhibition processes cumulatively impair processing of a target possessing distractor properties. This phenomenon, known as distractor-induced blindness, has thus far only been elicited using dynamic visual features, such as motion and orientation changes. In three ERP experiments, we used a visual object feature, color, to test for the adaptability and specificity of the effect. In Experiment I, participants responded to a color change (target) in the periphery whose onset was signaled by a central cue. Presentation of irrelevant color changes prior to the cue (distractors) led to reduced target detection, accompanied by a frontal ERP negativity that increased with increasing number of distractors, similar to the effects previously found for dynamic targets. This suggests that distractor-induced blindness is adaptable to color features. In Experiment II, the target consisted of coherent motion contrasting the color distractors. Correlates of distractor-induced blindness were found neither in the behavioral nor in the ERP data, indicating a feature specificity of the process. Experiment III confirmed the strict distinction between congruent and incongruent distractors: A single color distractor was embedded in a stream of motion distractors with the target consisting of a coherent motion. While behavioral performance was affected by the distractors, the color distractor did not elicit a frontal negativity. The experiments show that distractor-induced blindness is also triggered by visual stimuli predominantly processed in the ventral stream. The strict specificity of the central inhibition process also applies to these stimulus features. © 2017 Society for Psychophysiological Research.

  10. Genes involved in complex adaptive processes tend to have highly conserved upstream regions in mammalian genomes

    Directory of Open Access Journals (Sweden)

    Kohane Isaac

    2005-11-01

    Full Text Available Background: Recent advances in genome sequencing suggest a remarkable conservation in gene content of mammalian organisms. The similarity in gene repertoire present in different organisms has increased interest in studying regulatory mechanisms of gene expression aimed at elucidating the differences in phenotypes. In particular, a proximal promoter region contains a large number of regulatory elements that control the expression of its downstream gene. Although many studies have focused on identification of these elements, a broader picture on the complexity of transcriptional regulation of different biological processes has not been addressed in mammals. The regulatory complexity may strongly correlate with gene function, as different evolutionary forces must act on the regulatory systems under different biological conditions. We investigate this hypothesis by comparing the conservation of promoters upstream of genes classified in different functional categories. Results: By conducting a rank correlation analysis between functional annotation and upstream sequence alignment scores obtained by human-mouse and human-dog comparison, we found a significantly greater conservation of the upstream sequence of genes involved in development, cell communication, neural functions and signaling processes than those involved in more basic processes shared with unicellular organisms such as metabolism and ribosomal function. This observation persists after controlling for G+C content. Considering conservation as a functional signature, we hypothesize a higher density of cis-regulatory elements upstream of genes participating in complex and adaptive processes. Conclusion: We identified a class of functions that are associated with either high or low promoter conservation in mammals. 
We detected a significant tendency for complex and adaptive processes to be associated with higher promoter conservation, despite the fact that they have emerged

  11. Chemometrics-based process analytical technology (PAT) tools: applications and adaptation in pharmaceutical and biopharmaceutical industries.

    Science.gov (United States)

    Challa, Shruthi; Potumarthi, Ravichandra

    2013-01-01

    Process analytical technology (PAT) is used to monitor and control critical process parameters in raw materials and in-process products to maintain the critical quality attributes and build quality into the product. Process analytical technology can be successfully implemented in pharmaceutical and biopharmaceutical industries not only to impart quality into the products but also to prevent out-of-specification results and improve productivity. PAT implementation eliminates the drawbacks of traditional methods, which involve excessive sampling, and facilitates rapid testing through direct sampling without any destruction of the sample. However, to successfully adapt PAT tools into pharmaceutical and biopharmaceutical environments, thorough understanding of the process is needed, along with mathematical and statistical tools to analyze the large multidimensional spectral data generated by PAT tools. Chemometrics is a chemical discipline which incorporates both statistical and mathematical methods to obtain and analyze relevant information from PAT spectral tools. Applications of commonly used PAT tools in combination with appropriate chemometric methods, along with their advantages and working principles, are discussed. Finally, systematic application of PAT tools in the biopharmaceutical environment to control critical process parameters for achieving product quality is diagrammatically represented.
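    As a minimal sketch of the chemometric side, a principal-component projection, one of the standard multivariate tools used to compress high-dimensional PAT spectral data, can be written in a few lines of numpy; the data shapes and component count here are illustrative assumptions.

```python
import numpy as np

def pca_scores(spectra, n_components=2):
    """Project mean-centered spectra (n_samples x n_wavelengths)
    onto their leading principal components via SVD, a basic
    chemometric step for compressing spectral data before
    regression or monitoring."""
    Xc = spectra - spectra.mean(0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T   # scores on the first PCs
```

    On spectra dominated by a single varying component, nearly all of the score variance lands on the first principal component, which is the dimensionality reduction chemometric models exploit.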

  12. [Problems in the process of adapting to change among the family caregivers of elderly people with dementia].

    Science.gov (United States)

    Moreno-Cámara, Sara; Palomino-Moral, Pedro Ángel; Moral-Fernández, Lourdes; Frías-Osuna, Antonio; Del-Pino-Casado, Rafael

    2016-01-01

    To identify and analyse problems in adapting to change among the family caregivers of relatives with dementia. Qualitative study based on the methodology of Charmaz's Constructivist Grounded Theory. Seven focus groups were conducted in different primary health care centres in the province of Jaen (Spain). Eighty-two primary family caregivers of relatives with dementia participated by purposeful maximum variation sampling and theoretical sampling. Triangulation analysis was carried out to increase internal validity. We obtained three main categories: 'Changing Care', 'Problems in the process of adapting to change' and 'Facilitators of the process of adapting to change'. Family caregivers perform their role in a context characterized by personal change, both in the person receiving the care and in the social and cultural context. The challenge of adaptation lies in the balance between the problems that hamper adaptation of the caregiver to new situations of care and the factors that facilitate the caregiver role. The adaptation of family caregivers to caring for a person with dementia is hindered by the lack of formal support and under-diagnosis of dementia. The adaptation process could be improved by strengthening formal support in the early stages of care to reduce the stress of family caregivers who must teach themselves about their task and by interventions adapted to each phase in the development of the caregiver role. Copyright © 2016 SESPAS. Published by Elsevier Espana. All rights reserved.

  13. Cultural differences and process adaptation in international R&D project management

    DEFF Research Database (Denmark)

    Li, Xing; Li, J. Z.

    2009-01-01

    In the era of globalization, Western companies have started to explore China as a source of technology. Yet, Western R&D project management processes in China are frequently facing many problems. Some of the problems can be conceptualized by analyzing a number of known cultural contrasts between ... project success. At the same time, lessons and recommendations on the adaptability to Chinese style business and management interactions will be drawn from the case study for international companies that locate R&D projects in China.

  14. Rethinking infant knowledge: toward an adaptive process account of successes and failures in object permanence tasks.

    Science.gov (United States)

    Munakata, Y; McClelland, J L; Johnson, M H; Siegler, R S

    1997-10-01

    Infants seem sensitive to hidden objects in habituation tasks at 3.5 months but fail to retrieve hidden objects until 8 months. The authors first consider principle-based accounts of these successes and failures, in which early successes imply knowledge of principles and failures are attributed to ancillary deficits. One account is that infants younger than 8 months have the object permanence principle but lack means-ends abilities. To test this, 7-month-olds were trained on means-ends behaviors and were tested on retrieval of visible and occluded toys. Means-ends demands were the same, yet infants made more toy-guided retrievals in the visible case. The authors offer an adaptive process account in which knowledge is graded and embedded in specific behavioral processes. Simulation models that learn gradually to represent occluded objects show how this approach can account for success and failure in object permanence tasks without assuming principles and ancillary deficits.

  15. Decorative design of ceramic tiles adapted to inkjet printing employing digital image processing

    International Nuclear Information System (INIS)

    Defez, B.; Santiago-Praderas, V.; Lluna, E.; Peris-Fajarnes, G.; Dunai, E.

    2013-01-01

    The ceramic tile sector is a very competitive industry. The designer's ability to offer new models of the decorated surface, adapted to the production means, plays a very important role in the competitiveness. In the present work, we analyze the evolution of the design process in the ceramic sector, as well as the changes experienced in parallel by the printing equipment. Afterwards, we present a new concept of ceramic design, based on digital image processing. This technique allows the generation of homogeneous and non-repetitive designs for large surfaces, especially suited for inkjet printing. With the programmed algorithms, we have built prototype software to assist ceramic design. This tool allows continuous designs for large surfaces to be created, saving development time. (Author)

  16. Modeling Stochastic Complexity in Complex Adaptive Systems: Non-Kolmogorov Probability and the Process Algebra Approach.

    Science.gov (United States)

    Sulis, William H

    2017-10-01

    Walter Freeman III pioneered the application of nonlinear dynamical systems theories and methodologies in his work on mesoscopic brain dynamics. Sadly, mainstream psychology and psychiatry still cling to linear correlation-based data analysis techniques, which threaten to subvert the process of experimentation and theory building. In order to progress, it is necessary to develop tools capable of managing the stochastic complexity of complex biopsychosocial systems, which includes multilevel feedback relationships, nonlinear interactions, chaotic dynamics and adaptability. In addition, however, these systems exhibit intrinsic randomness, non-Gaussian probability distributions, non-stationarity, contextuality, and non-Kolmogorov probabilities, as well as the absence of mean and/or variance and conditional probabilities. These properties and their implications for statistical analysis are discussed. An alternative approach, the Process Algebra approach, is described. It is a generative model, capable of generating non-Kolmogorov probabilities. It has proven useful in addressing fundamental problems in quantum mechanics and in the modeling of developing psychosocial systems.

  17. Processing of pulse oximeter signals using adaptive filtering and autocorrelation to isolate perfusion and oxygenation components

    Science.gov (United States)

    Ibey, Bennett; Subramanian, Hariharan; Ericson, Nance; Xu, Weijian; Wilson, Mark; Cote, Gerard L.

    2005-03-01

    A blood perfusion and oxygenation sensor has been developed for in situ monitoring of transplanted organs. In processing in situ data, motion artifacts due to increased perfusion can create invalid oxygen saturation values. In order to remove the unwanted artifacts from the pulsatile signal, adaptive filtering was employed using a third wavelength source centered at 810 nm as a reference signal. The 810 nm source resides approximately at the isosbestic point in the hemoglobin absorption curve, where the absorbance of light is nearly equal for oxygenated and deoxygenated hemoglobin. Using an autocorrelation-based algorithm, oxygen saturation values can be obtained without the need for large sampling data sets, allowing for near real-time processing. This technique has been shown to be more reliable than traditional techniques and proven to adequately improve the measurement of oxygenation values in varying perfusion states.
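    Adaptive noise cancellation with a reference channel, as described above, is conventionally realized with an LMS filter; the sketch below assumes that form, and the tap count and step size are illustrative values, not the authors' settings.

```python
import numpy as np

def lms_cancel(primary, reference, taps=8, mu=0.01):
    """Adaptive noise cancellation: an LMS filter learns to predict
    the motion artifact in `primary` from the `reference` channel
    (e.g. the isosbestic 810 nm signal); the residual `e` is the
    cleaned pulsatile signal."""
    w = np.zeros(taps)
    e = np.zeros(len(primary))
    for n in range(taps - 1, len(primary)):
        x = reference[n - taps + 1:n + 1][::-1]  # recent reference samples
        y = w @ x                                 # artifact estimate
        e[n] = primary[n] - y
        w += 2 * mu * e[n] * x                    # LMS weight update
    return e
```

    On a synthetic signal where a small pulsatile waveform is buried under a reference-correlated artifact, the residual after convergence tracks the clean waveform far more closely than the raw primary channel does.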

  18. A novel joint-processing adaptive nonlinear equalizer using a modular recurrent neural network for chaotic communication systems.

    Science.gov (United States)

    Zhao, Haiquan; Zeng, Xiangping; Zhang, Jiashu; Liu, Yangguang; Wang, Xiaomin; Li, Tianrui

    2011-01-01

    To eliminate nonlinear channel distortion in chaotic communication systems, a novel joint-processing adaptive nonlinear equalizer based on a pipelined recurrent neural network (JPRNN) is proposed, using a modified real-time recurrent learning (RTRL) algorithm. Furthermore, an adaptive amplitude RTRL algorithm is adopted to overcome the deteriorating effect introduced by the nesting process. Computer simulations illustrate that the proposed equalizer outperforms the pipelined recurrent neural network (PRNN) and recurrent neural network (RNN) equalizers. Copyright © 2010 Elsevier Ltd. All rights reserved.

  19. Soil mapping and processes models to support climate change mitigation and adaptation strategies: a review

    Science.gov (United States)

    Muñoz-Rojas, Miriam; Pereira, Paulo; Brevik, Eric; Cerda, Artemi; Jordan, Antonio

    2017-04-01

    As agreed in Paris in December 2015, global average temperature is to be limited to "well below 2 °C above pre-industrial levels" and efforts will be made to "limit the temperature increase to 1.5 °C above pre-industrial levels". Thus, reducing greenhouse gas (GHG) emissions in all sectors becomes critical, and appropriate sustainable land management practices need to be taken (Pereira et al., 2017). Mitigation strategies focus on reducing the rate and magnitude of climate change by reducing its causes. Complementary to mitigation, adaptation strategies aim to minimize impacts and maximize the benefits of new opportunities. The adoption of both practices will require developing system models to integrate and extrapolate anticipated climate changes, such as global climate models (GCMs) and regional climate models (RCMs). Furthermore, integrating climate models driven by socio-economic scenarios in soil process models has allowed the investigation of potential changes and threats in soil characteristics and functions under future climate scenarios. One of the options with the largest potential for climate change mitigation is sequestering carbon in soils. The development of new methods and the use of existing tools for soil carbon monitoring and accounting have therefore become critical in a global change context. For example, soil C maps can help identify potential areas where management practices that promote C sequestration will be productive and guide the formulation of policies for climate change mitigation and adaptation strategies. Despite extensive efforts to compile soil information and map soil C, many uncertainties remain in the determination of soil C stocks, and the reliability of these estimates depends upon the quality and resolution of the spatial datasets used for their calculation. Thus, better estimates of soil C pools and dynamics are needed to advance understanding of the C balance and the potential of soils for climate change mitigation. Here

  20. Development of a Post-Processing Algorithm for Accurate Human Skull Profile Extraction via Ultrasonic Phased Arrays

    Science.gov (United States)

    Al-Ansary, Mariam Luay Y.

    Ultrasound Imaging has been favored by clinicians for its safety, affordability, accessibility, and speed compared to other imaging modalities. However, the trade-offs to these benefits are a relatively lower image quality and interpretability, which can be addressed by, for example, post-processing methods. One particularly difficult imaging case is associated with the presence of a barrier, such as a human skull, with significantly different acoustical properties than the brain tissue as the target medium. Some methods were proposed in the literature to account for this structure if the skull's geometry is known. Measuring the skull's geometry is therefore an important task that requires attention. In this work, a new edge detection method for accurate human skull profile extraction via post-processing of ultrasonic A-Scans is introduced. This method, referred to as the Selective Echo Extraction algorithm, SEE, processes each A-Scan separately and determines the outermost and innermost boundaries of the skull by means of adaptive filtering. The method can also be used to determine the average attenuation coefficient of the skull. When applied to simulated B-Mode images of the skull profile, promising results were obtained. The profiles obtained from the proposed process in simulations were found to be within 0.15λ ± 0.11λ or 0.09 ± 0.07 mm from the actual profiles. Experiments were also performed to test SEE on skull mimicking phantoms with major acoustical properties similar to those of the actual human skull. With experimental data, the profiles obtained with the proposed process were within 0.32λ ± 0.25λ or 0.19 ± 0.15 mm from the actual profile.

  1. Intelligent Modeling Combining Adaptive Neuro Fuzzy Inference System and Genetic Algorithm for Optimizing Welding Process Parameters

    Science.gov (United States)

    Gowtham, K. N.; Vasudevan, M.; Maduraimuthu, V.; Jayakumar, T.

    2011-04-01

    Modified 9Cr-1Mo ferritic steel is used as a structural material for steam generator components of power plants. Generally, tungsten inert gas (TIG) welding is preferred for welding of these steels, in which the depth of penetration achievable during autogenous welding is limited. Therefore, activated flux TIG (A-TIG) welding, a novel welding technique, has been developed in-house to increase the depth of penetration. In modified 9Cr-1Mo steel joints produced by the A-TIG welding process, weld bead width, depth of penetration, and heat-affected zone (HAZ) width play an important role in determining the mechanical properties as well as the performance of the weld joints during service. To obtain the desired weld bead geometry and HAZ width, it becomes important to set the welding process parameters. In this work, an adaptive neuro-fuzzy inference system is used to develop independent models correlating the welding process parameters like current, voltage, and torch speed with weld bead shape parameters like depth of penetration, bead width, and HAZ width. Then a genetic algorithm is employed to determine the optimum A-TIG welding process parameters to obtain the desired weld bead shape parameters and HAZ width.

  2. An adaptive Gaussian process-based iterative ensemble smoother for data assimilation

    Science.gov (United States)

    Ju, Lei; Zhang, Jiangjiang; Meng, Long; Wu, Laosheng; Zeng, Lingzao

    2018-05-01

    Accurate characterization of subsurface hydraulic conductivity is vital for modeling of subsurface flow and transport. The iterative ensemble smoother (IES) has been proposed to estimate the heterogeneous parameter field. As a Monte Carlo-based method, IES requires a relatively large ensemble size to guarantee its performance. To improve the computational efficiency, we propose an adaptive Gaussian process (GP)-based iterative ensemble smoother (GPIES) in this study. At each iteration, the GP surrogate is adaptively refined by adding a few new base points chosen from the updated parameter realizations. Then the sensitivity information between model parameters and measurements is calculated from a large number of realizations generated by the GP surrogate with virtually no computational cost. Since the original model evaluations are only required for base points, whose number is much smaller than the ensemble size, the computational cost is significantly reduced. The applicability of GPIES in estimating heterogeneous conductivity is evaluated by the saturated and unsaturated flow problems, respectively. Without sacrificing estimation accuracy, GPIES achieves about an order of magnitude of speed-up compared with the standard IES. Although subsurface flow problems are considered in this study, the proposed method can be equally applied to other hydrological models.
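    The proposed GPIES is not reproduced here, but the underlying ensemble-smoother analysis step that the GP surrogate accelerates can be sketched generically. This is a stochastic, Kalman-type update; the ensemble shapes and the linear test model in the usage example are assumptions for illustration.

```python
import numpy as np

def es_update(X, D, d_obs, obs_err_var):
    """One ensemble-smoother analysis step: update the parameter
    ensemble X (n_par x n_ens) using simulated observations
    D (n_obs x n_ens) and measured data d_obs."""
    n_ens = X.shape[1]
    Xp = X - X.mean(1, keepdims=True)
    Dp = D - D.mean(1, keepdims=True)
    C_xd = Xp @ Dp.T / (n_ens - 1)           # parameter-data cross-covariance
    C_dd = Dp @ Dp.T / (n_ens - 1)           # simulated-data covariance
    R = obs_err_var * np.eye(len(d_obs))     # observation error covariance
    # Perturb observations per member (stochastic EnKF-style update).
    rng = np.random.default_rng(0)
    D_obs = d_obs[:, None] + rng.normal(0.0, np.sqrt(obs_err_var), D.shape)
    K = C_xd @ np.linalg.inv(C_dd + R)       # Kalman-type gain
    return X + K @ (D_obs - D)
```

    With a toy linear forward model d = 2x and an observation of 6, a prior ensemble centered at zero is pulled toward the true value x = 3, and its spread shrinks, which is the qualitative behavior the iterative scheme repeats with surrogate-derived sensitivities.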

  3. GAMER: A GRAPHIC PROCESSING UNIT ACCELERATED ADAPTIVE-MESH-REFINEMENT CODE FOR ASTROPHYSICS

    International Nuclear Information System (INIS)

    Schive, H.-Y.; Tsai, Y.-C.; Chiueh Tzihong

    2010-01-01

    We present the newly developed code, GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach in improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor with the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented in GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with the data transfer between the CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in GPU, and the computing time of the ghost-zone values for each patch is diminished by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run in a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations in different hardware implementations, in which detailed timing analyses provide comparison between the computations with and without GPU(s) acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096³ effective resolution and 16 GPUs with 8192³ effective resolution, respectively.

  4. Usability of clinical decision support system as a facilitator for learning the assistive technology adaptation process.

    Science.gov (United States)

    Danial-Saad, Alexandra; Kuflik, Tsvi; Weiss, Patrice L Tamar; Schreuer, Naomi

    2016-01-01

    The aim of this study was to evaluate the usability of the Ontology Supported Computerized Assistive Technology Recommender (OSCAR), a Clinical Decision Support System (CDSS) for the assistive technology adaptation process, its impact on learning the matching process, and to determine the relationship between its usability and learnability. Two groups of expert and novice clinicians (total, n = 26) took part in this study. Each group filled out the system usability scale (SUS) to evaluate OSCAR's usability. The novice group completed a learning questionnaire to assess OSCAR's effect on their ability to learn the matching process. Both groups rated OSCAR's usability as "very good": M [SUS] = 80.7 (SD = 11.6, median = 83.7) for the novices, and M [SUS] = 81.2 (SD = 6.8, median = 81.2) for the experts. The Mann-Whitney results indicated that no significant differences were found between the expert and novice groups in terms of OSCAR's usability. A significant positive correlation existed between the usability of OSCAR and the ability to learn the adaptation process (rs = 0.46, p = 0.04). Usability is an important factor in the acceptance of a system. The successful application of user-centered design principles during the development of OSCAR may serve as a case study that models the significant elements to be considered, theoretically and practically, in developing other systems. Implications for Rehabilitation: Creating a CDSS with a focus on its usability is an important factor for its acceptance by its users. Successful usability outcomes can impact the learning process of the subject matter in general, and the AT prescription process in particular. The successful application of User-Centered Design principles during the development of OSCAR may serve as a case study that models the significant elements to be considered, theoretically and practically. The study emphasizes the importance of close collaboration between the developers and

  5. Modeling of Activated Sludge Process Using Sequential Adaptive Neuro-fuzzy Inference System

    Directory of Open Access Journals (Sweden)

    Mahsa Vajedi

    2014-10-01

    Full Text Available In this study, an adaptive neuro-fuzzy inference system (ANFIS) has been applied to model the activated sludge wastewater treatment process of the Mobin petrochemical company. The correlation coefficients between the input variables and the output variable were calculated to determine the input with the highest influence on the output (the quality of the outlet flow) in order to compare three neuro-fuzzy structures with different numbers of parameters. The predictions of the neuro-fuzzy models were compared with those of multilayer artificial neural network models with similar structure. The comparison indicated that both methods resulted in flexible, robust and effective models for the activated sludge system. Moreover, the root-mean-square errors of the neuro-fuzzy and neural network models were 5.14 and 6.59, respectively, indicating that the former is the superior method.

  6. PSYCHOSOCIAL INTERVENTION EFFECTS ON ADAPTATION, DISEASE COURSE AND BIOBEHAVIORAL PROCESSES IN CANCER

    OpenAIRE

    Antoni, Michael H.

    2012-01-01

    A diagnosis of cancer and subsequent treatments place demands on psychological adaptation. Behavioral research suggests the importance of cognitive, behavioral, and social factors in facilitating adaptation during active treatment and throughout cancer survivorship, which forms the rationale for the use of many psychosocial interventions in cancer patients. This cancer experience may also affect physiological adaptation systems (e.g., neuroendocrine) in parallel with psychological adaptation ...

  7. Improving performance of natural language processing part-of-speech tagging on clinical narratives through domain adaptation.

    Science.gov (United States)

    Ferraro, Jeffrey P; Daumé, Hal; Duvall, Scott L; Chapman, Wendy W; Harkema, Henk; Haug, Peter J

    2013-01-01

    Natural language processing (NLP) tasks are commonly decomposed into subtasks, chained together to form processing pipelines. The residual error produced in these subtasks propagates, adversely affecting the end objectives. Limited availability of annotated clinical data remains a barrier to reaching state-of-the-art operating characteristics using statistically based NLP tools in the clinical domain. Here we explore the unique linguistic constructions of clinical texts and demonstrate the loss in operating characteristics when out-of-the-box part-of-speech (POS) tagging tools are applied to the clinical domain. We test a domain adaptation approach integrating a novel lexical-generation probability rule used in a transformation-based learner to boost POS performance on clinical narratives. Two target corpora from independent healthcare institutions were constructed from high frequency clinical narratives. Four leading POS taggers with their out-of-the-box models trained from general English and biomedical abstracts were evaluated against these clinical corpora. A high performing domain adaptation method, Easy Adapt, was compared to our newly proposed method ClinAdapt. The evaluated POS taggers drop in accuracy by 8.5-15% when tested on clinical narratives. The highest performing tagger reports an accuracy of 88.6%. Domain adaptation with Easy Adapt reports accuracies of 88.3-91.0% on clinical texts. ClinAdapt reports 93.2-93.9%. ClinAdapt successfully boosts POS tagging performance through domain adaptation requiring a modest amount of annotated clinical data. Improving the performance of critical NLP subtasks is expected to reduce pipeline error propagation leading to better overall results on complex processing tasks.
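    The Easy Adapt baseline mentioned above (Daumé's "frustratingly easy" feature-augmentation method) is simple enough to sketch directly; the feature names in the example are hypothetical.

```python
def easy_adapt(features, domain):
    """Frustratingly-easy domain adaptation (Daume, 2007): each
    feature is duplicated into a shared ("general") copy and a
    domain-specific copy, so a single linear model trained on the
    augmented space can learn which feature weights transfer
    across domains and which are domain-specific."""
    augmented = {}
    for name, value in features.items():
        augmented["general:" + name] = value
        augmented[domain + ":" + name] = value
    return augmented
```

    A tagger's feature extractor would apply this per token, marking training examples from general English and from clinical text with their respective domains; ClinAdapt's lexical-generation probability rule is a separate mechanism not shown here.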

  8. Design Process of Flight Vehicle Structures for a Common Bulkhead and an MPCV Spacecraft Adapter

    Science.gov (United States)

    Aggarwal, Pravin; Hull, Patrick V.

    2015-01-01

    Designing and manufacturing space flight vehicle structures is a skillset that has grown considerably at NASA during the last several years. Beginning with the Ares program and continuing with the Space Launch System (SLS), in-house designs were produced for both the Upper Stage and the SLS Multi-Purpose Crew Vehicle (MPCV) spacecraft adapter. Specifically, critical design review (CDR) level analysis and flight production drawings were produced for the above-mentioned hardware. In particular, the experience of this in-house design work led to increased manufacturing infrastructure at both Marshall Space Flight Center (MSFC) and the Michoud Assembly Facility (MAF), improved skillsets in both analysis and design, and hands-on experience in building and testing full-scale (MSA) hardware. The hardware design and development processes, from initiation to CDR and finally flight, resulted in many challenges and experiences that produced valuable lessons. This paper builds on NASA's recent experience in designing and fabricating flight hardware and examines the design/development processes used, as well as the challenges and lessons learned, i.e., from the initial design, loads estimation, and mass constraints to structural optimization/affordability to release of production drawings to hardware manufacturing. While there are many documented design processes which a design engineer can follow, these unique experiences can offer insight into designing hardware in current program environments and present solutions to many of the challenges experienced by the engineering team.

  9. An adaptive process-based cloud infrastructure for space situational awareness applications

    Science.gov (United States)

    Liu, Bingwei; Chen, Yu; Shen, Dan; Chen, Genshe; Pham, Khanh; Blasch, Erik; Rubin, Bruce

    2014-06-01

    Space situational awareness (SSA) and defense space control capabilities are top priorities for groups that own or operate man-made spacecraft. Also, with the growing amount of space debris, there is an increased demand for contextual understanding that necessitates the capability of collecting and processing a vast amount of sensor data. Cloud computing, which features scalable and flexible storage and computing services, has been recognized as an ideal candidate that can meet the large-data contextual challenges of SSA. Cloud computing consists of physical service providers and middleware virtual machines together with infrastructure, platform, and software as a service (IaaS, PaaS, SaaS) models. However, the typical Virtual Machine (VM) abstraction is on a per-operating-system basis, which is too low-level and limits the flexibility of a mission application architecture. In response to this technical challenge, a novel adaptive process-based cloud infrastructure for SSA applications is proposed in this paper. In addition, the details of the design rationale and a prototype are further examined. The SSA Cloud (SSAC) conceptual capability will potentially support space situation monitoring and tracking, object identification, and threat assessment. Lastly, the benefits of a more granular and flexible allocation of cloud computing resources are illustrated for data processing and implementation considerations within a representative SSA system environment. We show that container-based virtualization performs better than hypervisor-based virtualization technology in an SSA scenario.

  10. Adaptive memory: the survival-processing memory advantage is not due to negativity or mortality salience.

    Science.gov (United States)

    Bell, Raoul; Röer, Jan P; Buchner, Axel

    2013-05-01

    Recent research has highlighted the adaptive function of memory by showing that imagining being stranded in the grasslands without any survival material and rating words according to their survival value in this situation leads to exceptionally good memory for these words. Studies examining the role of emotions in causing the survival-processing memory advantage have been inconclusive, but some studies have suggested that the effect might be due to negativity or mortality salience. In Experiments 1 and 2, we compared the survival scenario to a control scenario that implied imagining a hopeless situation (floating in outer space with dwindling oxygen supplies) in which only suicide can avoid the agony of choking to death. Although this scenario was perceived as being more negative than the survival scenario, the survival-processing memory advantage persisted. In Experiment 3, thinking about the relevance of words for survival led to better memory for these words than did thinking about the relevance of words for death. This survival advantage was found for concrete, but not for abstract, words. The latter finding is consistent with the assumption that the survival instructions encourage participants to think about many different potential uses of items to aid survival, which may be a particularly efficient form of elaborate encoding. Together, the results suggest that thinking about death is much less effective in promoting recall than is thinking about survival. Therefore, the survival-processing memory advantage cannot be satisfactorily explained by negativity or mortality salience.

  11. ALTERNATIVE METHODOLOGIES FOR THE ESTIMATION OF LOCAL POINT DENSITY INDEX: MOVING TOWARDS ADAPTIVE LIDAR DATA PROCESSING

    Directory of Open Access Journals (Sweden)

    Z. Lari

    2012-07-01

    Full Text Available Over the past few years, LiDAR systems have been established as a leading technology for the acquisition of high density point clouds over physical surfaces. These point clouds will be processed for the extraction of geo-spatial information. Local point density is one of the most important properties of the point cloud that highly affects the performance of data processing techniques and the quality of extracted information from these data. Therefore, it is necessary to define a standard methodology for the estimation of local point density indices to be considered for the precise processing of LiDAR data. Current definitions of local point density indices, which only consider the 2D neighbourhood of individual points, are not appropriate for 3D LiDAR data and cannot be applied for laser scans from different platforms. In order to resolve the drawbacks of these methods, this paper proposes several approaches for the estimation of the local point density index which take the 3D relationship among the points and the physical properties of the surfaces they belong to into account. In the simplest approach, an approximate value of the local point density for each point is defined while considering the 3D relationship among the points. In the other approaches, the local point density is estimated by considering the 3D neighbourhood of the point in question and the physical properties of the surface which encloses this point. The physical properties of the surfaces enclosing the LiDAR points are assessed through eigen-value analysis of the 3D neighbourhood of individual points and adaptive cylinder methods. This paper will discuss these approaches and highlight their impact on various LiDAR data processing activities (i.e., neighbourhood definition, region growing, segmentation, boundary detection, and classification). Experimental results from airborne and terrestrial LiDAR data verify the efficacy of considering local point density variation for
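
    The simplest index described above — a neighbour count inside a 3-D search sphere, normalized by the sphere's volume — can be sketched in a few lines. This is an illustrative reading, not the authors' code: the function name and the brute-force neighbour search are our assumptions, and the eigen-value and adaptive-cylinder variants are not shown.

```python
import numpy as np

def local_point_density(points, radius=1.0):
    """Simplest 3-D local point density index: for each point, count the
    neighbours inside a sphere of the given radius and divide by the
    sphere's volume (brute force; fine for small clouds)."""
    points = np.asarray(points, dtype=float)
    # Pairwise squared distances (n x n).
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    counts = (d2 <= radius ** 2).sum(axis=1) - 1   # exclude the point itself
    volume = 4.0 / 3.0 * np.pi * radius ** 3
    return counts / volume
```

    For large clouds a spatial index (e.g. a k-d tree) would replace the brute-force distance matrix.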

  12. Dual Rate Adaptive Control for an Industrial Heat Supply Process Using Signal Compensation Approach

    Energy Technology Data Exchange (ETDEWEB)

    Chai, Tianyou; Jia, Yao; Wang, Hong; Su, Chun-Yi

    2017-07-09

    The industrial heat supply process (HSP) is a highly nonlinear cascaded process which uses a steam valve opening as its control input, the steam flow-rate as its inner-loop output and the supply water temperature as its outer-loop output. The relationships between the heat exchange rate and the model parameters, such as steam density, entropy, fouling correction factor and heat exchange efficiency, are unknown and nonlinear. Moreover, these model parameters vary in line with steam pressure, ambient temperature and the residuals caused by the quality variations of the circulation water. When the steam pressure and the ambient temperature are high and are subjected to frequent external random disturbances, the supply water temperature and the steam flow-rate interact with each other and fluctuate significantly. This is also true when the process exhibits unknown variations of its dynamics caused by unexpected changes of the heat exchange residuals. As a result, it is difficult to keep the supply water temperature and the rate of change of the steam flow-rate well inside their targeted ranges. In this paper, a novel compensation-signal-based dual rate adaptive controller is developed by representing the unknown variations of dynamics as unmodeled dynamics. In the proposed controller design, a compensation signal is constructed and added onto the control signal obtained from the linear-deterministic-model-based feedback control design. This compensation signal aims at eliminating the unmodeled dynamics and the rate of change of the currently sampled unmodeled dynamics. A successful industrial application is carried out, where it is shown that both the supply water temperature and the rate of change of the steam flow-rate can be controlled well inside their targeted ranges when the process is subjected to unknown variations of its dynamics.
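
    The core idea of the compensation signal — estimating the unmodeled dynamics from the one-step-old mismatch between the measured plant output and the nominal linear model's prediction, then subtracting that estimate from the control — can be illustrated on a toy single loop. This is a generic sketch under our own assumptions (a first-order plant with made-up coefficients), not the paper's dual-rate multivariable design.

```python
import math

def simulate(compensate, a=0.8, b=0.5, r=1.0, steps=60):
    """Track setpoint r on the plant y[k+1] = a*y[k] + b*u[k] + v(y[k]),
    where v is unknown to the controller.  The compensation signal is the
    one-step-old mismatch between the measured output and the nominal
    linear model's prediction, i.e. an estimate of v."""
    def unmodeled(y):                     # unknown nonlinearity (made up)
        return 0.3 + 0.2 * math.sin(y)
    y = y_prev = u_prev = v_hat = 0.0
    for k in range(steps):
        if k > 0 and compensate:
            # what we measured minus what the nominal model predicted
            v_hat = y - (a * y_prev + b * u_prev)
        u = (r - a * y - v_hat) / b       # model inversion + compensation
        y_prev, u_prev = y, u
        y = a * y + b * u + unmodeled(y)
    return abs(y - r)                     # final tracking error
```

    Without the compensation term the loop settles with a steady offset equal to the unmodeled dynamics; with it, the offset shrinks toward zero.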

  13. The creativity exploration, through the use of brainstorming technique, adapted to the process of creation in fashion

    OpenAIRE

    Broega, A. C.; Mazzotti, Karla; Gomes, Luiz Vidal Negreiros

    2012-01-01

    This article describes a practical classroom work experience dealing with techniques that facilitate the development of creativity in a process of fashion creation. The method used was adapted to fashion design through the use of the concept of "brainstorming" and its approach to generating multiple ideas. The aim of this study is to analyze the creative performance of the students, and the creative possibilities resulting from the use and adaptation of this creati...

  14. Adaptation in constitutional dynamic libraries and networks, switching between orthogonal metalloselection and photoselection processes.

    Science.gov (United States)

    Vantomme, Ghislaine; Jiang, Shimei; Lehn, Jean-Marie

    2014-07-02

    Constitutional dynamic libraries of hydrazones (a)A(b)B and acylhydrazones (a)A(c)C undergo reorganization and adaptation in response to a chemical effector (metal cations) or a physical stimulus (light). The set of hydrazones [(1)A(1)B, (1)A(2)B, (2)A(1)B, (2)A(2)B] undergoes metalloselection on addition of zinc cations which drive the amplification of Zn((1)A(2)B)2 by selection of the fittest component (1)A(2)B. The set of acylhydrazones [E-(1)A(1)C, (1)A(2)C, (2)A(1)C, (2)A(2)C] undergoes photoselection by irradiation of the system, which causes photoisomerization of E-(1)A(1)C into Z-(1)A(1)C with amplification of the latter. The set of acyl hydrazones [E-(1)A(1)C, (1)A(3)C, (2)A(1)C, (2)A(3)C] undergoes a dual adaptation via component exchange and selection in response to two orthogonal external agents: a chemical effector, metal cations, and a physical stimulus, light irradiation. Metalloselection takes place on addition of zinc cations which drive the amplification of Zn((1)A(3)C)2 by selection of the fittest constituent (1)A(3)C. Photoselection is obtained on irradiation of the acylhydrazones that leads to photoisomerization from E-(1)A(1)C to Z-(1)A(1)C configuration with amplification of the latter. These changes may be represented by square constitutional dynamic networks that display up-regulation of the pairs of agonists ((1)A(2)B, (2)A(1)B), (Z-(1)A(1)C, (2)A(2)C), ((1)A(3)C, (2)A(1)C), (Z-(1)A(1)C, (2)A(3)C) and the simultaneous down-regulation of the pairs of antagonists ((1)A(1)B, (2)A(2)B), ((1)A(2)C, (2)A(1)C), (E-(1)A(1)C, (2)A(3)C), ((1)A(3)C, (2)A(1)C). The orthogonal dual adaptation undergone by the set of acylhydrazones amounts to a network switching process.

  15. Adaptive security systems -- Combining expert systems with adaptive technologies

    International Nuclear Information System (INIS)

    Argo, P.; Loveland, R.; Anderson, K.

    1997-01-01

    The Adaptive Multisensor Integrated Security System (AMISS) uses a variety of computational intelligence techniques to reason from raw sensor data, through an array of processing layers, to an assessment of alarm/alert conditions based on human behavior within a secure facility. In this paper, the authors give an overview of the system and briefly describe some of its major components. This system is currently under development and testing in a realistic facility setting.

  16. Adapting Rational Unified Process (RUP) approach in designing a secure e-Tendering model

    Science.gov (United States)

    Mohd, Haslina; Robie, Muhammad Afdhal Muhammad; Baharom, Fauziah; Darus, Norida Muhd; Saip, Mohamed Ali; Yasin, Azman

    2016-08-01

    e-Tendering is the electronic processing of tender documents via the internet, allowing tenderers to publish, communicate, access, receive and submit all tender-related information and documentation online. This study aims to design an e-Tendering system using the Rational Unified Process (RUP) approach. RUP provides a disciplined approach to assigning tasks and responsibilities within the software development process. RUP has four phases that help researchers adjust the requirements of projects with different scopes, problems and sizes. RUP is characterized as a use-case driven, architecture-centered, iterative and incremental process model. However, the scope of this study focuses only on the Inception and Elaboration phases as steps to develop the model, and performs only three of the nine workflows (business modeling, requirements, and analysis and design). RUP has a strong focus on documents, and the activities in the inception and elaboration phases mainly concern the creation of diagrams and the writing of textual descriptions. The UML notation and the software program Star UML are used to support the design of e-Tendering. The e-Tendering design based on the RUP approach can contribute to e-Tendering developers and researchers in the e-Tendering domain. In addition, this study also shows that RUP is one of the best system development methodologies and can be used as a research methodology in the Software Engineering domain for the secure design of any observed application. This methodology has been tested in various studies in certain domains, such as Simulation-based Decision Support, Security Requirement Engineering, Business Modeling and Secure System Requirement, and so forth. In conclusion, these studies showed that RUP is a good research methodology that can be adapted in any Software Engineering (SE) research domain that requires a few artifacts to be generated, such as use case modeling, misuse case modeling, activity

  17. Pre-processing, registration and selection of adaptive optics corrected retinal images.

    Science.gov (United States)

    Ramaswamy, Gomathy; Devaney, Nicholas

    2013-07-01

    In this paper, the aim is to demonstrate enhanced processing of sequences of fundus images obtained using a commercial AO flood illumination system. The purpose of the work is to (1) correct for uneven illumination at the retina, (2) automatically select the best-quality images, and (3) precisely register the best images. Adaptive optics corrected retinal images are pre-processed to correct uneven illumination using different methods: subtracting or dividing by the average filtered image, homomorphic filtering, and a wavelet-based approach. These images are evaluated to measure the image quality using various parameters, including sharpness, variance, power spectrum kurtosis and contrast. We have carried out the registration in two stages: a coarse stage using cross-correlation, followed by fine registration using two approaches, parabolic interpolation on the peak of the cross-correlation and maximum-likelihood estimation. The angle of rotation of the images is measured using a combination of peak tracking and Procrustes transformation. We have found that a wavelet approach (Daubechies 4 wavelet at 6th level decomposition) provides good illumination correction with clear improvement in image sharpness and contrast. The assessment of image quality using a 'Designer metric' works well when compared to visual evaluation, although it is highly correlated with other metrics. In image registration, sub-pixel translation measured using parabolic interpolation on the peak of the cross-correlation function and maximum-likelihood estimation are found to give very similar results (RMS difference 0.047 pixels). We have confirmed that correcting rotation of the images provides a significant improvement, especially at the edges of the image. We observed that selecting the better-quality frames (e.g. the best 75% of images) for image registration gives improved resolution, at the expense of poorer signal-to-noise. The sharpness map of the registered and de-rotated images shows increased
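
    The two-stage registration described here — an integer peak of the cross-correlation (coarse), refined by parabolic interpolation on the peak (fine) — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the FFT-based correlation, the sign conventions and the function name are our assumptions.

```python
import numpy as np

def subpixel_shift(ref, img):
    """Two-stage shift estimate: integer peak of the FFT cross-correlation
    (coarse), then a parabola fitted through the peak and its two
    neighbours along each axis (fine, sub-pixel)."""
    F = np.fft.fft2
    corr = np.fft.fftshift(np.fft.ifft2(np.conj(F(ref)) * F(img)).real)
    py, px = np.unravel_index(np.argmax(corr), corr.shape)

    def refine(c_minus, c_0, c_plus):
        # vertex of the parabola through three equally spaced samples
        denom = c_minus - 2.0 * c_0 + c_plus
        return 0.0 if denom == 0 else 0.5 * (c_minus - c_plus) / denom

    dy = refine(corr[py - 1, px], corr[py, px], corr[py + 1, px])
    dx = refine(corr[py, px - 1], corr[py, px], corr[py, px + 1])
    return py - ref.shape[0] // 2 + dy, px - ref.shape[1] // 2 + dx
```

    Applied to an image and a shifted copy of itself, the estimate recovers the shift to well under a pixel.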

  18. Using adaptive processes and adverse outcome pathways to develop meaningful, robust, and actionable environmental monitoring programs.

    Science.gov (United States)

    Arciszewski, Tim J; Munkittrick, Kelly R; Scrimgeour, Garry J; Dubé, Monique G; Wrona, Fred J; Hazewinkel, Rod R

    2017-09-01

    The primary goals of environmental monitoring are to indicate whether unexpected changes related to development are occurring in the physical, chemical, and biological attributes of ecosystems and to inform meaningful management intervention. Although achieving these objectives is conceptually simple, varying scientific and social challenges often result in their breakdown. Conceptualizing, designing, and operating programs that better delineate monitoring, management, and risk assessment processes supported by hypothesis-driven approaches, strong inference, and adverse outcome pathways can overcome many of the challenges. Generally, a robust monitoring program is characterized by hypothesis-driven questions associated with potential adverse outcomes and feedback loops informed by data. Specifically, key and basic features are predictions of future observations (triggers) and mechanisms to respond to success or failure of those predictions (tiers). The adaptive processes accelerate or decelerate the effort to highlight and overcome ignorance while preventing the potentially unnecessary escalation of unguided monitoring and management. The deployment of the mutually reinforcing components can allow for more meaningful and actionable monitoring programs that better associate activities with consequences. Integr Environ Assess Manag 2017;13:877-891. © 2017 The Authors. Integrated Environmental Assessment and Management Published by Wiley Periodicals, Inc. on behalf of Society of Environmental Toxicology & Chemistry (SETAC).

  19. Adaptive Signal Processing Testbed: VME-based DSP board market survey

    Science.gov (United States)

    Ingram, Rick E.

    1992-04-01

    The Adaptive Signal Processing Testbed (ASPT) is a real-time multiprocessor system utilizing digital signal processor technology on VMEbus-based printed circuit boards installed in a Sun workstation. The ASPT has specific requirements, particularly for the signal excision application, with respect to interfacing with current and planned data generation equipment, processing of the data, storage to disk of final and intermediate results, and the development tools for application development and integration into the overall EW/COM computing environment. A prototype ASPT was implemented using three VME-C-30 boards from Applied Silicon. Experience gained during the prototype development led to the conclusion that interprocessor communications capability is the most significant contributor to overall ASPT performance, and that host involvement should be minimized. Boards using different processors were evaluated with respect to the ASPT system requirements, pricing, and availability. Specific recommendations based on various priorities are made, as well as recommendations concerning the integration and interaction of various tools developed during the prototype implementation.

  20. Realizing drug repositioning by adapting a recommendation system to handle the process.

    Science.gov (United States)

    Ozsoy, Makbule Guclin; Özyer, Tansel; Polat, Faruk; Alhajj, Reda

    2018-04-12

    Drug repositioning is the process of identifying new targets for known drugs. It can be used to overcome problems associated with traditional drug discovery by adapting existing drugs to treat newly discovered diseases; thus, it may reduce the risk, cost and time required to identify and verify new drugs. Drug repositioning has recently received more attention from industry and academia. To tackle this problem, researchers have applied many different computational methods and have used various features of drugs and diseases. In this study, we contribute to the ongoing research efforts by combining multiple features, namely chemical structures, protein interactions and side-effects, to predict new indications of target drugs. To achieve our target, we treat drug repositioning as a recommendation process, which leads to a new perspective in tackling the problem. The utilized recommendation method is based on Pareto dominance and collaborative filtering, and can also integrate multiple data sources and multiple features. For the computational part, we applied several settings and compared their performance. Evaluation results show that the proposed method can achieve more concentrated predictions with high precision, where nearly half of the predictions are true. Compared to other state-of-the-art methods described in the literature, the proposed method is better at making right predictions by having higher precision. The reported results demonstrate the applicability and effectiveness of recommendation methods for drug repositioning.
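
    The Pareto-dominance step of such a recommender — keeping only candidates that are not dominated across all similarity criteria (e.g. chemical, protein-interaction and side-effect similarity) — can be sketched as follows. The function names and toy scores are ours, not the paper's implementation.

```python
def dominates(a, b):
    """a dominates b: at least as good on every criterion (higher is
    better) and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep the non-dominated candidates, i.e. the Pareto front of the
    multi-feature similarity scores.  `candidates` maps a drug name to a
    tuple of scores, one per feature."""
    return {name: s for name, s in candidates.items()
            if not any(dominates(o, s)
                       for other, o in candidates.items() if other != name)}
```

    The surviving front would then feed a collaborative-filtering step that scores new drug–disease pairs from these neighbours.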

  1. Informative gene selection using Adaptive Analytic Hierarchy Process (A2HP

    Directory of Open Access Journals (Sweden)

    Abhishek Bhola

    2017-12-01

    Full Text Available Gene expression datasets derived from microarray experiments are marked by a large number of genes, containing gene expression values at different sample conditions/time-points. Selection of informative genes from these large datasets is an issue of major concern for many researchers and biologists. In this study, we propose a gene selection and dimensionality reduction method called Adaptive Analytic Hierarchy Process (A2HP). The traditional analytic hierarchy process is a multiple-criteria decision analysis method whose result depends upon expert knowledge or decision makers; it is mainly used to solve decision problems in different fields. A2HP, on the other hand, is a fused method that combines the outcomes of five individual gene-ranking methods: the t-test, chi-square variance test, z-test, Wilcoxon test and signal-to-noise ratio (SNR). First, the gene expression dataset is preprocessed, and the reduced set of genes obtained is fed as input to A2HP. A2HP utilizes both quantitative and qualitative factors to select informative genes. Results demonstrate that A2HP selects an efficient number of genes compared to the individual gene selection methods. The percentage reduction in the number of genes and the time complexity are taken as performance measures for the proposed method, and it is shown that A2HP outperforms the individual gene selection methods.
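
    The fusion idea — ranking genes under each individual criterion and then combining the ranks — can be sketched with two of the five criteria (SNR and a t statistic). The AHP-derived weighting is not reproduced here; the equal-weight rank averaging and all names below are our assumptions.

```python
import numpy as np

def snr_score(x_pos, x_neg):
    """Signal-to-noise ratio per gene: |mean difference| / (sd1 + sd2)."""
    return np.abs(x_pos.mean(1) - x_neg.mean(1)) / (x_pos.std(1) + x_neg.std(1))

def t_score(x_pos, x_neg):
    """Magnitude of a Welch-style t statistic per gene."""
    se = np.sqrt(x_pos.var(1) / x_pos.shape[1] + x_neg.var(1) / x_neg.shape[1])
    return np.abs(x_pos.mean(1) - x_neg.mean(1)) / se

def fused_ranking(x_pos, x_neg, top_k=2):
    """Rank the genes under each criterion separately (rank 0 = best),
    average the ranks with equal weights, and keep the top_k genes."""
    scores = [snr_score(x_pos, x_neg), t_score(x_pos, x_neg)]
    ranks = [np.argsort(np.argsort(-s)) for s in scores]
    return np.argsort(np.mean(ranks, axis=0))[:top_k]
```

    Each row of `x_pos`/`x_neg` holds one gene's expression values across the samples of the two conditions.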

  2. Applying the sequential neural-network approximation and orthogonal array algorithm to optimize the axial-flow cooling system for rapid thermal processes

    International Nuclear Information System (INIS)

    Hung, Shih-Yu; Shen, Ming-Ho; Chang, Ying-Pin

    2009-01-01

    The sequential neural-network approximation and orthogonal array (SNAOA) method was used to shorten the cooling time for the rapid cooling process such that the normalized maximum resolved stress in the silicon wafer was always below one in this study. An orthogonal array was first conducted to obtain the initial solution set, which was treated as the initial training sample. Next, a back-propagation sequential neural network was trained to simulate the feasible domain and obtain the optimal parameter setting. The size of the training sample was greatly reduced due to the use of the orthogonal array. In addition, a restart strategy was also incorporated into the SNAOA so that the searching process has a better opportunity to reach a near-global optimum. In this work, we considered three different cooling control schemes during the rapid thermal process: (1) downward axial gas flow cooling; (2) upward axial gas flow cooling; (3) dual axial gas flow cooling. Based on the maximum shear stress failure criterion, the other control factors, such as flow rate, inlet diameter, outlet width, chamber height and chamber diameter, were also examined with respect to cooling time. The results showed that the cooling time could be significantly reduced using the SNAOA approach.
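
    A heavily simplified version of one SNAOA cycle: evaluate the runs of a small orthogonal array, fit a surrogate to those few samples, and return the factor setting the surrogate predicts as best. For brevity the surrogate below is linear least squares rather than the paper's back-propagation neural network, and the L4 array and all names are standard textbook choices, not taken from the paper.

```python
import numpy as np

# L4(2^3) orthogonal array: 4 balanced runs over 3 two-level factors.
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

def snaoa_step(objective, levels):
    """One simplified cycle: evaluate the orthogonal-array runs, fit a
    surrogate to the 4 samples (linear least squares here, standing in
    for the neural network), and return the setting on the full 2^3 grid
    that the surrogate predicts to minimize the objective."""
    X = np.array([[levels[j][L4[i, j]] for j in range(3)]
                  for i in range(4)], float)
    y = np.array([objective(x) for x in X])
    coef, *_ = np.linalg.lstsq(np.c_[np.ones(4), X], y, rcond=None)
    grid = np.array([[levels[j][(b >> j) & 1] for j in range(3)]
                     for b in range(8)], float)
    pred = np.c_[np.ones(8), grid] @ coef
    return grid[np.argmin(pred)]
```

    The pay-off of the orthogonal array is that only 4 of the 8 possible settings are ever evaluated on the real objective.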

  3. Inside the adaptation process of Lactobacillus delbrueckii subsp. lactis to bile

    OpenAIRE

    Burns, Patricia; Sánchez García, Borja; Vinderola, Gabriel; Ruas-Madiedo, Patricia; Ruíz García, Lorena; Margolles Barros, Abelardo; Reinheimer, Jorge A.; González de los Reyes-Gavilán, Clara

    2010-01-01

    Progressive adaptation to bile might render some lactobacilli able to withstand physiological bile salt concentrations. In this work, adaptation to bile was evaluated on previously isolated dairy strains of Lactobacillus delbrueckii subsp. lactis 200 and L. delbrueckii subsp. lactis 200+, a strain derived thereof with a stable bile-resistant phenotype. Insight into the adaptation to bile was obtained by comparing the cytosolic proteomes of both strains grown in the presence or absence of bile. Proteomics we...

  4. Process Network Approach to Understanding How Forest Ecosystems Adapt to Changes

    Science.gov (United States)

    Kim, J.; Yun, J.; Hong, J.; Kwon, H.; Chun, J.

    2011-12-01

    Sustainability challenges are transforming science and its role in society. Complex systems science has emerged as an inevitable field of education and research, which transcends disciplinary boundaries and focuses on understanding of the dynamics of complex social-ecological systems (SES). SES is a combined system of social and ecological components and drivers that interact and give rise to results, which could not be understood on the basis of sociological or ecological considerations alone. However, both systems may be viewed as a network of processes, and such a network hierarchy may serve as a hinge to bridge social and ecological systems. As a first step toward such effort, we attempted to delineate and interpret such process networks in forest ecosystems, which play a critical role in the cycles of carbon and water from local to global scales. These cycles and their variability, in turn, play an important role in the emergent and self-organizing interactions between forest ecosystems and their environment. Ruddell and Kumar (2009) define a process network as a network of feedback loops and the related time scales, which describe the magnitude and direction of the flow of energy, matter, and information between the different variables in a complex system. Observational evidence, based on micrometeorological eddy covariance measurements, suggests that heterogeneity and disturbances in forest ecosystems in monsoon East Asia may facilitate to build resilience for adaptation to change. Yet, the principles that characterize the role of variability in these interactions remain elusive. In this presentation, we report results from the analysis of multivariate ecohydrologic and biogeochemical time series data obtained from temperate forest ecosystems in East Asia based on information flow statistics.
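
    Process networks of this kind are typically built from pairwise information-flow statistics such as transfer entropy (the approach of Ruddell and Kumar cited above). A toy plug-in estimator with one-sample history and coarse bins — our own simplification, not their implementation — looks like this:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=2):
    """Plug-in estimate of T(X -> Y) = I(Y_{t+1}; X_t | Y_t): how much
    knowing X now reduces uncertainty about Y's next step beyond what
    Y's own history already tells us."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    # Coarse equal-width binning of both series.
    xb = np.digitize(x, np.linspace(x.min(), x.max(), bins + 1)[1:-1])
    yb = np.digitize(y, np.linspace(y.min(), y.max(), bins + 1)[1:-1])
    triples = list(zip(yb[1:], yb[:-1], xb[:-1]))   # (y_next, y_now, x_now)
    n = len(triples)
    p3 = Counter(triples)
    p_yy = Counter((a, b) for a, b, _ in triples)   # (y_next, y_now)
    p_yx = Counter((b, c) for _, b, c in triples)   # (y_now, x_now)
    p_y = Counter(b for _, b, _ in triples)         # (y_now,)
    te = 0.0
    for (a, b, c), k in p3.items():
        # p(y'|y,x) / p(y'|y), expressed with raw counts
        te += (k / n) * np.log2(k * p_y[b] / (p_yy[a, b] * p_yx[b, c]))
    return te
```

    In a process network, an edge X → Y is drawn when this statistic (suitably significance-tested) exceeds a threshold at some time scale.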

  5. HIV-1 Adaptation to Antigen Processing Results in Population-Level Immune Evasion and Affects Subtype Diversification

    DEFF Research Database (Denmark)

    Tenzer, Stefan; Crawford, Hayley; Pymm, Phillip

    2014-01-01

    these regions encode epitopes presented by ~30 more common HLA variants. By combining epitope processing and computational analyses of the two HIV subtypes responsible for ~60% of worldwide infections, we identified a hitherto unrecognized adaptation to the antigen-processing machinery through substitutions...... of intrapatient adaptations, is predictable, facilitates viral subtype diversification, and increases global HIV diversity. Because low epitope abundance is associated with infrequent and weak T cell responses, this most likely results in both population-level immune evasion and inadequate responses in most...

  6. Adaptation in CRISPR-Cas Systems.

    Science.gov (United States)

    Sternberg, Samuel H; Richter, Hagen; Charpentier, Emmanuelle; Qimron, Udi

    2016-03-17

    Clustered regularly interspaced short palindromic repeats (CRISPR) and CRISPR-associated (Cas) proteins constitute an adaptive immune system in prokaryotes. The system preserves memories of prior infections by integrating short segments of foreign DNA, termed spacers, into the CRISPR array in a process termed adaptation. During the past 3 years, significant progress has been made on the genetic requirements and molecular mechanisms of adaptation. Here we review these recent advances, with a focus on the experimental approaches that have been developed, the insights they generated, and a proposed mechanism for self- versus non-self-discrimination during the process of spacer selection. We further describe the regulation of adaptation and the protein players involved in this fascinating process that allows bacteria and archaea to harbor adaptive immunity. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. The desire to survive: the adaptation process of adult cancer patients undergoing radiotherapy.

    Science.gov (United States)

    Chao, Yu Huan; Wang, Shou-Yu; Hsu, Tsui Hua; Wang, Kai Wei K

    2015-01-01

    Radiotherapy is one of the primary treatment strategies for cancer. However, patients not only deal with the side-effects of radiotherapy, but they must also endure the psychological distress caused by cancer. This study explores how cancer patients adapt to the treatment process when receiving radiotherapy. This study used a grounded theory approach, and eight in-depth interviews were conducted with newly diagnosed cancer patients who received radiotherapy as a primary treatment. The core category that emerged from this study was "the desire to survive". The categories and subcategories that emerged from the data include facing unknown situations (e.g. searching for relevant information and decision-making considerations, and listening to healthcare professionals' suggestions), experiencing the pain of treatment (e.g. tolerating side-effects, tolerating inconvenience during the treatment, accepting support during the treatment, and adjusting lifestyles), and chances to extend life (e.g. accepting fate, determination to undergo the treatment, and adjusting negative emotions). The study results provide a better understanding of the experiences of cancer patients undergoing radiotherapy. Healthcare professionals should provide effective medical management for side-effects and psychological support to cancer patients during the journey of radiotherapy. © 2014 The Authors. Japan Journal of Nursing Science © 2014 Japan Academy of Nursing Science.

  8. Reformed and reforming: Adapting the licensing process to meet new challenges

    International Nuclear Information System (INIS)

    Burns, Stephen G.

    2017-01-01

    The NRC has engaged in a steady, albeit modest, examination of its preparedness for advanced designs over the past few years. These efforts have included examination of its own guidance and processes as well as co-operation with the US Department of Energy (US DOE) in identifying key issues and potential strategies. But the NRC is constrained in some respects from devoting substantial resources to the development of new or revised regulatory approaches due to statutory requirements that the NRC recover most of its appropriated funds through user fees imposed on the industry. Unless designers are prepared to put up the funds necessary to cover the fees for review of the new designs, the NRC is not able to review them, and licensees of operating facilities paying annual fees may not all be supportive of the NRC expending resources to develop infrastructure for the review of advanced reactor designs. Given the current context, this article will attempt to reflect on the NRC's current framework for licensing, the lessons from NRC's regulations in 10 CFR Part 52 and strategies for adapting to the new demands that may be made on the agency

  9. Adaptive neural reward processing during anticipation and receipt of monetary rewards in mindfulness meditators.

    Science.gov (United States)

    Kirk, Ulrich; Brown, Kirk Warren; Downar, Jonathan

    2015-05-01

    Reward seeking is ubiquitous and adaptive in humans. But excessive reward seeking behavior, such as chasing monetary rewards, may lead to diminished subjective well-being. This study examined whether individuals trained in mindfulness meditation show neural evidence of lower susceptibility to monetary rewards. Seventy-eight participants (34 meditators, 44 matched controls) completed the monetary incentive delay task while undergoing functional magnetic resonance imaging. The groups performed equally on the task, but meditators showed lower neural activations in the caudate nucleus during reward anticipation, and elevated bilateral posterior insula activation during reward anticipation. Meditators also evidenced reduced activations in the ventromedial prefrontal cortex during reward receipt compared with controls. Connectivity parameters between the right caudate and bilateral anterior insula were attenuated in meditators during incentive anticipation. In summary, brain regions involved in reward processing-both during reward anticipation and receipt of reward-responded differently in mindfulness meditators than in nonmeditators, indicating that the former are less susceptible to monetary incentives. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  10. Differential effect of ultraviolet-B radiation on certain metabolic processes in a chromatically adapting Nostoc

    Energy Technology Data Exchange (ETDEWEB)

    Tyagi, R.; Srinivas, G.; Vyas, D.; Kumar, A.; Kumar, H.D. (Banaras Hindu Univ., Varanasi (India))

    1992-03-01

    The impact of UV-B radiation on growth, pigmentation and certain physiological processes was studied in a N{sub 2}-fixing chromatically adapting cyanobacterium, Nostoc spongiaeforme. A brownish form (phycoerythrin rich) was found to be more tolerant to UV-B than the blue-green (phycocyanin rich) form of N. spongiaeforme. Continuous exposure to UV-B (5.5 W m{sup -2}) for 90 min caused complete killing of the blue-green strain whereas the brown strain showed complete loss of survival after 180 min. Pigment content was more strongly inhibited in the blue-green strain than in the brown. Nitrogenase activity was completely abolished in both strains within 35 min of UV-B treatment. Restoration of nitrogenase occurred upon transfer to fluorescent or incandescent light after a lag of 5-6 h, suggesting fresh synthesis of nitrogenase. In vivo nitrate reductase activity was stimulated by UV-B treatment, the degree of enhancement being significantly higher in the blue-green strain. {sup 14}CO{sub 2} uptake was also completely abolished by UV-B treatment in both strains. (author).

  11. Differential effect of ultraviolet-B radiation on certain metabolic processes in a chromatically adapting Nostoc

    International Nuclear Information System (INIS)

    Tyagi, R.; Srinivas, G.; Vyas, D.; Kumar, A.; Kumar, H.D.

    1992-01-01

    The impact of UV-B radiation on growth, pigmentation and certain physiological processes was studied in a N₂-fixing chromatically adapting cyanobacterium, Nostoc spongiaeforme. A brownish form (phycoerythrin rich) was found to be more tolerant to UV-B than the blue-green (phycocyanin rich) form of N. spongiaeforme. Continuous exposure to UV-B (5.5 W m⁻²) for 90 min caused complete killing of the blue-green strain whereas the brown strain showed complete loss of survival after 180 min. Pigment content was more strongly inhibited in the blue-green strain than in the brown. Nitrogenase activity was completely abolished in both strains within 35 min of UV-B treatment. Restoration of nitrogenase occurred upon transfer to fluorescent or incandescent light after a lag of 5-6 h, suggesting fresh synthesis of nitrogenase. In vivo nitrate reductase activity was stimulated by UV-B treatment, the degree of enhancement being significantly higher in the blue-green strain. ¹⁴CO₂ uptake was also completely abolished by UV-B treatment in both strains. (author)

  12. Separate channels for processing form, texture, and color: evidence from FMRI adaptation and visual object agnosia.

    Science.gov (United States)

    Cavina-Pratesi, C; Kentridge, R W; Heywood, C A; Milner, A D

    2010-10-01

    Previous neuroimaging research suggests that although object shape is analyzed in the lateral occipital cortex, surface properties of objects, such as color and texture, are dealt with in more medial areas, close to the collateral sulcus (CoS). The present study sought to determine whether there is a single medial region concerned with surface properties in general or whether instead there are multiple foci independently extracting different surface properties. We used stimuli varying in their shape, texture, or color, and tested healthy participants and 2 object-agnosic patients, in both a discrimination task and a functional MR adaptation paradigm. We found a double dissociation between medial and lateral occipitotemporal cortices in processing surface (texture or color) versus geometric (shape) properties, respectively. In Experiment 2, we found that the medial occipitotemporal cortex houses separate foci for color (within anterior CoS and lingual gyrus) and texture (caudally within posterior CoS). In addition, we found that areas selective for shape, texture, and color individually were quite distinct from those that respond to all of these features together (shape and texture and color). These latter areas appear to correspond to those associated with the perception of complex stimuli such as faces and places.

  13. STUDY OF THE ADAPTATION PROCESS IN COMMON CARP (CYPRINUS CARPIO L.) AFTER HARVESTING

    Directory of Open Access Journals (Sweden)

    Milena Bušová

    2013-02-01

    Full Text Available Fish are sensitive to exogenous and endogenous ammonia. Ammonia formed in fish as a product of protein metabolism may, under certain circumstances, be life-threatening. Ammonia autointoxication is a serious problem and can cause mass mortalities in fish farms. This study focused on the common carp Cyprinus carpio L. in large-capacity breeding farms, monitoring blood ammonia levels in the period of metabolic attenuation and the influence of harvesting and handling on the fish's ability to withstand such changes. The study results confirmed the effect of sudden changes in water temperature on ammonia values in the blood of fish. On the contrary, there were neither dramatically increased concentrations of ammonia in the blood of fish nor symptoms of autointoxication. The measured ammonia concentrations ranged between 98.3 ± 56 µmol/L and 141.4 ± 31 µmol/L in the monitored period, which corresponds with the study results of other authors. This study confirmed good technological conditions in the market production of carp after harvesting and a good level of adaptation of the common carp Cyprinus carpio L. to these changes.

  14. Adaptive Parameter Estimation of Person Recognition Model in a Stochastic Human Tracking Process

    Science.gov (United States)

    Nakanishi, W.; Fuse, T.; Ishikawa, T.

    2015-05-01

    This paper aims at estimating the parameters of person recognition models using a sequential Bayesian filtering method. In many human tracking methods, the parameters of the models used to recognize the same person in successive frames are set in advance of the tracking process. In real situations these parameters may change according to the observation conditions and the difficulty of predicting a person's position. Thus, in this paper we formulate an adaptive parameter estimation using a general state space model. First we explain how to formulate human tracking in a general state space model and describe its components. Then, referring to previous research, we use the Bhattacharyya coefficient to formulate the observation model of the general state space model, which corresponds to the person recognition model. The observation model in this paper is a function of the Bhattacharyya coefficient with one unknown parameter. Finally, we sequentially estimate this parameter on a real dataset under several settings. Results showed that the sequential parameter estimation succeeded and that the estimates were consistent with observation conditions such as occlusions.
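
Since the abstract's observation model is a function of the Bhattacharyya coefficient between appearance histograms, a minimal sketch may help. The histogram data, the exponential likelihood form, and the parameter `sigma` below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalized histograms."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))

# Toy appearance histograms of a tracked person in two successive frames.
h1 = [10, 30, 60]
h2 = [12, 28, 60]
rho = bhattacharyya(h1, h2)  # close to 1.0 for similar appearances

# A simple observation likelihood with one unknown parameter `sigma`
# (hypothetical functional form, not the paper's exact model).
sigma = 0.1
likelihood = np.exp(-(1.0 - rho) / sigma**2)
```

For identical histograms the coefficient is exactly 1 and it decreases toward 0 as the appearance distributions diverge, which is what makes it usable as a recognition score.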

  15. Sparsity-Based Space-Time Adaptive Processing Using OFDM Radar

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Satyabrata [ORNL]

    2012-01-01

    We propose a sparsity-based space-time adaptive processing (STAP) algorithm to detect a slowly-moving target using an orthogonal frequency division multiplexing (OFDM) radar. We observe that the target and interference spectra are inherently sparse in the spatio-temporal domain, and hence we exploit that sparsity to develop an efficient STAP technique. In addition, the use of an OFDM signal increases the frequency diversity of our system, as different scattering centers of a target resonate at different frequencies, and thus improves the target detectability. First, we formulate a realistic sparse-measurement model for an OFDM radar considering both the clutter and jammer as the interfering sources. Then, we show that the optimal STAP-filter weight-vector is equal to the generalized eigenvector corresponding to the minimum generalized eigenvalue of the interference and target covariance matrices. To estimate the target and interference covariance matrices, we apply a residual sparse-recovery technique that enables us to incorporate the partially known support of the sparse vector. Our numerical results demonstrate that the sparsity-based STAP algorithm, with a considerably smaller number of secondary data, produces performance equivalent to the other existing STAP techniques.
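
The claim that the optimal STAP filter weight vector is the generalized eigenvector for the minimum generalized eigenvalue of the interference and target covariance matrices can be sketched numerically. The toy covariances below are random stand-ins, not the paper's OFDM measurement model, and NumPy's lack of a generalized Hermitian eigensolver is worked around with a Cholesky whitening step:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4  # spatio-temporal degrees of freedom (toy size)

def random_spd(n):
    """Random Hermitian positive-definite matrix (a stand-in covariance)."""
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return A @ A.conj().T + n * np.eye(n)

R_i = random_spd(N)  # interference (clutter + jammer) covariance
R_t = random_spd(N)  # target covariance

# Generalized eigenproblem R_i w = lambda R_t w, solved by whitening
# with the Cholesky factor of R_t.
L = np.linalg.cholesky(R_t)
Linv = np.linalg.inv(L)
M = Linv @ R_i @ Linv.conj().T
vals, vecs = np.linalg.eigh(M)
w = Linv.conj().T @ vecs[:, 0]  # eigenvector of the minimum eigenvalue

def sinr(v):
    """Output ratio (v^H R_t v) / (v^H R_i v), maximized by w."""
    return np.real(v.conj() @ R_t @ v) / np.real(v.conj() @ R_i @ v)
```

Picking the minimum generalized eigenvalue maximizes the output ratio (w^H R_t w)/(w^H R_i w), i.e., the SINR-like objective that the STAP filter optimizes.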

  16. A novel heterogeneous training sample selection method on space-time adaptive processing

    Science.gov (United States)

    Wang, Qiang; Zhang, Yongshun; Guo, Yiduo

    2018-04-01

    The ground target detection performance of space-time adaptive processing (STAP) decreases when the clutter power becomes non-homogeneous because training samples are contaminated by target-like signals. In order to solve this problem, a novel non-homogeneous training sample selection method based on sample similarity is proposed, which converts training sample selection into a convex optimization problem. Firstly, the existing deficiencies of sample selection using the generalized inner product (GIP) are analyzed. Secondly, the similarities of different training samples are obtained by calculating the mean-Hausdorff distance so as to reject the contaminated training samples. Thirdly, the cell under test (CUT) and the residual training samples are projected into the orthogonal subspace of the target in the CUT, and mean-Hausdorff distances between the projected CUT and training samples are calculated. Fourthly, the distances are sorted by value and the training samples with the largest values are preferentially selected to realize the dimension reduction. Finally, simulation results with Mountain-Top data verify the effectiveness of the proposed method.
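
For context, the generalized inner product (GIP) baseline whose deficiencies the abstract analyzes scores each training sample x as x^H R^{-1} x against the sample covariance R; contaminated samples stand out with anomalously large values. A minimal sketch (the toy data and the threshold rule are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 6, 40  # degrees of freedom and number of training samples

# Homogeneous clutter samples plus a few contaminated ones.
X = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))
target_like = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) * 4
X[:3] += target_like  # first 3 samples carry a target-like signal

# Sample covariance R[n, m] = (1/K) sum_k X[k, n] * conj(X[k, m]).
R = (X.T @ X.conj()) / K
Rinv = np.linalg.inv(R)

# GIP of each sample: x^H R^{-1} x; outliers have large values.
gip = np.real(np.einsum('kn,nm,km->k', X.conj(), Rinv, X))
threshold = np.median(gip) * 2.0  # simple heuristic cutoff
keep = gip < threshold            # samples retained for STAP training
```

The median-based cutoff is only a placeholder; the abstract's point is precisely that GIP screening of this kind can be improved by similarity-based (mean-Hausdorff) selection.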

  17. Correlation-based motion vector processing with adaptive interpolation scheme for motion-compensated frame interpolation.

    Science.gov (United States)

    Huang, Ai-Mei; Nguyen, Truong

    2009-04-01

    In this paper, we address the problem of unreliable motion vectors that cause visual artifacts but cannot be detected by high residual energy or bidirectional prediction difference in motion-compensated frame interpolation. A correlation-based motion vector processing method is proposed to detect and correct those unreliable motion vectors by explicitly considering motion vector correlation in the motion vector reliability classification, motion vector correction, and frame interpolation stages. Since our method gradually corrects unreliable motion vectors based on their reliability, we can effectively discover the areas, such as occlusions and deformed structures, where no motion vector is reliable enough to be used. We also propose an adaptive frame interpolation scheme for the occlusion areas based on an analysis of their surrounding motion distribution. As a result, the interpolated frames using the proposed scheme have clearer structure edges and ghost artifacts are greatly reduced. Experimental results show that our interpolated results have better visual quality than other methods. In addition, the proposed scheme is robust even for video sequences that contain multiple and fast motions.

  18. Mass Detection in Mammographic Images Using Wavelet Processing and Adaptive Threshold Technique.

    Science.gov (United States)

    Vikhe, P S; Thool, V R

    2016-04-01

    Detection of masses in mammograms for early diagnosis of breast cancer is a significant assignment in the reduction of the mortality rate. However, in some cases, screening for masses is a difficult task for the radiologist, due to variation in contrast, fuzzy edges and noisy mammograms. Masses and micro-calcifications are the distinctive signs for diagnosis of breast cancer. This paper presents a method for mass enhancement using a piecewise linear operator in combination with wavelet processing of mammographic images. The method includes artifact suppression and pectoral muscle removal based on morphological operations. Finally, mass segmentation for detection using an adaptive threshold technique is carried out to separate the mass from the background. The proposed method has been tested on 130 (45 + 85) images with 90.9 and 91 % True Positive Fraction (TPF) at 2.35 and 2.1 average False Positives Per Image (FP/I) from two different databases, namely the Mammographic Image Analysis Society (MIAS) and the Digital Database for Screening Mammography (DDSM). The obtained results show that the proposed technique gives improved diagnosis in early breast cancer detection.
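
As a rough illustration of statistics-based adaptive thresholding of the kind used in the final segmentation stage (the synthetic image, the global mean-plus-k-sigma rule, and the factor k are assumptions, not the paper's exact pipeline):

```python
import numpy as np

# Toy "mammogram": dim noisy background with a brighter circular mass.
rng = np.random.default_rng(2)
img = rng.normal(0.2, 0.05, size=(64, 64))
yy, xx = np.mgrid[:64, :64]
mass = (yy - 32) ** 2 + (xx - 32) ** 2 < 8 ** 2
img[mass] += 0.4

# Adaptive threshold derived from image statistics: mean + k * std.
# The value of k is an illustrative choice, not the paper's setting.
k = 2.0
t = img.mean() + k * img.std()
seg = img > t  # binary segmentation separating mass from background
```

On this toy image the data-driven threshold lands between the background and mass intensity modes, so the thresholded region recovers roughly the circular mass.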

  19. Three-State Locally Adaptive Texture Preserving Filter for Radar and Optical Image Processing

    Directory of Open Access Journals (Sweden)

    Jaakko T. Astola

    2005-05-01

    Full Text Available Textural features are one of the most important types of useful information contained in images. In practice, these features are commonly masked by noise. Relatively little attention has been paid to the texture preserving properties of noise attenuation methods. This stimulates solving the following tasks: (1) to analyze the texture preservation properties of various filters; and (2) to design image processing methods capable of preserving texture features well while effectively reducing noise. This paper deals with examining the texture feature preserving properties of different filters. The study is performed for a set of texture samples and different noise variances. Locally adaptive three-state schemes are proposed for which texture is considered as a particular class. For “detection” of texture regions, several classifiers are proposed and analyzed. As shown, an appropriate trade-off of the designed filter properties is provided. This is demonstrated quantitatively for artificial test images and is confirmed visually for real-life images.

  20. Optimization of processing parameters on the controlled growth of c-axis oriented ZnO nanorod arrays

    Energy Technology Data Exchange (ETDEWEB)

    Malek, M. F., E-mail: mfmalek07@gmail.com; Rusop, M., E-mail: rusop@salam.uitm.my [NANO-ElecTronic Centre (NET), Faculty of Electrical Engineering, Universiti Teknologi MARA (UiTM), 40450 Shah Alam, Selangor (Malaysia); NANO-SciTech Centre (NST), Institute of Science (IOS), Universiti Teknologi MARA - UiTM, 40450 Shah Alam, Selangor (Malaysia); Mamat, M. H., E-mail: hafiz-030@yahoo.com [NANO-ElecTronic Centre (NET), Faculty of Electrical Engineering, Universiti Teknologi MARA (UiTM), 40450 Shah Alam, Selangor (Malaysia); Musa, M. Z., E-mail: musa948@gmail.com [NANO-ElecTronic Centre (NET), Faculty of Electrical Engineering, Universiti Teknologi MARA (UiTM), 40450 Shah Alam, Selangor (Malaysia); Faculty of Electrical Engineering, Universiti Teknologi MARA (UiTM) Pulau Pinang, Jalan Permatang Pauh, 13500 Permatang Pauh, Pulau Pinang (Malaysia); Saurdi, I., E-mail: saurdy788@gmail.com; Ishak, A., E-mail: ishak@sarawak.uitm.edu.my [NANO-ElecTronic Centre (NET), Faculty of Electrical Engineering, Universiti Teknologi MARA (UiTM), 40450 Shah Alam, Selangor (Malaysia); Faculty of Electrical Engineering, Universiti Teknologi MARA (UiTM) Sarawak, Kampus Kota Samarahan, Jalan Meranek, 94300 Kota Samarahan, Sarawak (Malaysia); Alrokayan, Salman A. H., E-mail: dr.salman@alrokayan.com; Khan, Haseeb A., E-mail: khan-haseeb@yahoo.com [Chair of Targeting and Treatment of Cancer Using Nanoparticles, Deanship of Scientific Research, King Saud University (KSU), Riyadh 11451 (Saudi Arabia)

    2016-07-06

    Optimization of the growth time parameter was conducted to synthesize high-quality c-axis ZnO nanorod arrays. The effects of the parameter on the crystal growth and properties were systematically investigated. Our studies confirmed that the growth time influences the properties of ZnO nanorods, with the crystallite size of the structures increasing at higher deposition times. Field emission scanning electron microscope analysis confirmed the morphology of the ZnO nanorods. The ZnO nanostructures prepared under the optimized growth conditions showed an intense XRD peak which reveals highly c-axis oriented ZnO nanorod arrays, thus demonstrating the formation of a defect-free structure.

  1. Biological Inspired Stochastic Optimization Technique (PSO) for DOA and Amplitude Estimation of Antenna Arrays Signal Processing in RADAR Communication System

    Directory of Open Access Journals (Sweden)

    Khurram Hammed

    2016-01-01

    Full Text Available This paper presents a stochastic global optimization technique known as Particle Swarm Optimization (PSO) for joint estimation of the amplitude and direction of arrival of targets in a RADAR communication system. The proposed scheme is an excellent optimization methodology and a promising approach for solving DOA problems in communication systems. Moreover, PSO is quite suitable for real-time scenarios and easy to implement in hardware. In this study, a uniform linear array is used and targets are assumed to be in the far field of the array. Formulation of the fitness function is based on mean square error, and this function requires a single snapshot to obtain the best possible solution. To check the accuracy of the algorithm, all of the results are taken by varying the number of antenna elements and targets. Finally, these results are compared with existing heuristic techniques to show the accuracy of PSO.
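
A minimal single-source version of this idea can be sketched as follows: given one snapshot from a uniform linear array with half-wavelength spacing, PSO searches over the DOA while the amplitude is recovered by least squares inside the mean-square-error fitness. All numbers here (array size, noise level, swarm parameters) are illustrative assumptions rather than the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(3)

M = 8                        # ULA elements, half-wavelength spacing
theta_true = np.deg2rad(20)  # true direction of arrival
amp_true = 1.5               # true target amplitude

def steer(theta):
    """Steering vector of a half-wavelength-spaced ULA."""
    n = np.arange(M)
    return np.exp(1j * np.pi * n * np.sin(theta))

# Single noisy snapshot; the fitness needs only one snapshot.
x = amp_true * steer(theta_true)
x = x + 0.01 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

def fitness(theta):
    a = steer(theta)
    amp = (a.conj() @ x) / (a.conj() @ a)     # least-squares amplitude
    return np.mean(np.abs(x - amp * a) ** 2)  # mean square error

# Minimal PSO over the scalar DOA (illustrative parameter choices).
P, iters, w_in, c1, c2 = 20, 60, 0.7, 1.5, 1.5
pos = rng.uniform(-np.pi / 2, np.pi / 2, P)
vel = np.zeros(P)
pbest = pos.copy()
pbest_f = np.array([fitness(p) for p in pos])
init_best = pbest_f.min()
gbest = pbest[pbest_f.argmin()]
for _ in range(iters):
    r1, r2 = rng.random(P), rng.random(P)
    vel = w_in * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -np.pi / 2, np.pi / 2)
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()]

doa_deg = np.rad2deg(gbest)  # PSO estimate of the DOA, in degrees
```

With enough particles the global best typically lands near the true angle; a sparse swarm can settle in a sidelobe of the MSE landscape, which is one reason larger swarms or restarts are common in practice.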

  2. Imaging RF Phased Array Receivers using Optically-Coherent Up-conversion for High Beam-Bandwidth Processing

    Science.gov (United States)

    2017-03-01

    It does so by using an optical lens to perform an inverse spatial Fourier Transform on the up-converted RF signals, thereby rendering a real-time... simultaneous beams or other engineered beam patterns. There are two general approaches to array-based beam forming: digital and analog. In digital beam...of significantly limiting the number of beams that can be formed simultaneously and narrowing the operational bandwidth. An alternate approach that

  3. Array capabilities and future arrays

    International Nuclear Information System (INIS)

    Radford, D.

    1993-01-01

    Early results from the new third-generation instruments GAMMASPHERE and EUROGAM are confirming the expectation that such arrays will have a revolutionary effect on the field of high-spin nuclear structure. When completed, GAMMASPHERE will have a resolving power an order of magnitude greater than that of the best second-generation arrays. When combined with other instruments such as particle-detector arrays and fragment mass analysers, the capabilities of the arrays for the study of more exotic nuclei will be further enhanced. In order to better understand the limitations of these instruments, and to design improved future detector systems, it is important to have some intelligible and reliable calculation of the relative resolving power of different instrument designs. The derivation of such a figure of merit will be briefly presented, and the relative sensitivities of arrays currently proposed or under construction presented. The design of TRIGAM, a new third-generation array proposed for Chalk River, will also be discussed. It is instructive to consider how far arrays of Compton-suppressed Ge detectors could be taken. For example, it will be shown that an idealised “perfect” third-generation array of 1000 detectors has a sensitivity an order of magnitude higher again than that of GAMMASPHERE. Less conventional options for new arrays will also be explored.

  4. Adaptive Lévy processes and area-restricted search in human foraging.

    Directory of Open Access Journals (Sweden)

    Thomas T Hills

    Full Text Available A considerable amount of research has claimed that animals' foraging behaviors display movement lengths with power-law distributed tails, characteristic of Lévy flights and Lévy walks. Though these claims have recently come into question, the proposal that many animals forage using Lévy processes nonetheless remains. A Lévy process does not consider when or where resources are encountered, and samples movement lengths independently of past experience. However, Lévy processes too have come into question based on the observation that in patchy resource environments resource-sensitive foraging strategies, like area-restricted search, perform better than Lévy flights yet can still generate heavy-tailed distributions of movement lengths. To investigate these questions further, we tracked humans as they searched for hidden resources in an open-field virtual environment, with either patchy or dispersed resource distributions. Supporting previous research, for both conditions logarithmic binning methods were consistent with Lévy flights, and rank-frequency methods (comparing alternative distributions using maximum likelihood methods) showed the strongest support for bounded power-law distributions (truncated Lévy flights). However, goodness-of-fit tests found that even bounded power-law distributions only accurately characterized movement behavior for 4 (out of 32) participants. Moreover, paths in the patchy environment (but not the dispersed environment) showed a transition to intensive search following resource encounters, characteristic of area-restricted search. Transferring paths between environments revealed that paths generated in the patchy environment were adapted to that environment. Our results suggest that though power-law distributions do not accurately reflect human search, Lévy processes may still describe movement in dispersed environments, but not in patchy environments, where search was area-restricted. Furthermore, our results
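
The maximum-likelihood machinery mentioned above, for fitting power-law distributed movement lengths, can be illustrated in a few lines. The synthetic Pareto data and the exponent value are assumptions for the demonstration; the estimator is the standard continuous power-law MLE:

```python
import numpy as np

rng = np.random.default_rng(4)

# Draw power-law (Pareto) distributed movement lengths with exponent mu,
# via inverse-transform sampling: x = xmin * (1 - u)^(-1/(mu - 1)).
xmin, mu, n = 1.0, 2.0, 50_000
u = rng.random(n)
steps = xmin * (1 - u) ** (-1.0 / (mu - 1.0))

# Maximum-likelihood estimate of the exponent (Hill/Clauset estimator):
# mu_hat = 1 + n / sum(ln(x_i / xmin)).
mu_hat = 1.0 + n / np.log(steps / xmin).sum()
```

With tens of thousands of samples the estimate is tight around the true exponent; goodness-of-fit testing (as in the study) is a separate step that this sketch does not cover.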

  5. Prism Adaptation Alters Electrophysiological Markers of Attentional Processes in the Healthy Brain.

    Science.gov (United States)

    Martín-Arévalo, Elisa; Laube, Inga; Koun, Eric; Farnè, Alessandro; Reilly, Karen T; Pisella, Laure

    2016-01-20

    Neglect patients typically show a rightward attentional orienting bias and a strong disengagement deficit, such that they are especially slow in responding to left-sided targets after right-sided cues (Posner et al., 1984). Prism adaptation (PA) can reduce diverse debilitating neglect symptoms and it has been hypothesized that PA's effects are so generalized that they might be mediated by attentional mechanisms (Pisella et al., 2006; Redding and Wallace, 2006). In neglect patients, performance on spatial attention tasks improves after rightward-deviating PA (Jacquin-Courtois et al., 2013). In contrast, in healthy subjects, although there is evidence that leftward-deviating PA induces neglect-like performance on some visuospatial tasks, behavioral studies of spatial attention tasks have mostly yielded negative results (Morris et al., 2004; Bultitude et al., 2013). We hypothesized that these negative behavioral findings might reflect the limitations of behavioral measures in healthy subjects. Here we exploited the sensitivity of event-related potentials to test the hypothesis that electrophysiological markers of attentional processes in the healthy human brain are affected by PA. Leftward-deviating PA generated asymmetries in attentional orienting (reflected in the cue-locked N1) and in attentional disengagement for invalidly cued left targets (reflected in the target-locked P1). This is the first electrophysiological demonstration that leftward-deviating PA in healthy subjects mimics attentional patterns typically seen in neglect patients. Significance statement: Prism adaptation (PA) is a promising tool for ameliorating many deficits in neglect patients and inducing neglect-like behavior in healthy subjects. The mechanisms underlying PA's effects are poorly understood but one hypothesis suggests that it acts by modulating attention. To date, however, there has been no successful demonstration of attentional modulation in healthy subjects. We provide the first

  6. Integration of Fiber-Optic Sensor Arrays into a Multi-Modal Tactile Sensor Processing System for Robotic End-Effectors

    Directory of Open Access Journals (Sweden)

    Peter Kampmann

    2014-04-01

    Full Text Available With the increasing complexity of robotic missions and the development towards long-term autonomous systems, the need for multi-modal sensing of the environment increases. Until now, the use of tactile sensor systems has been mostly based on sensing one modality of forces in the robotic end-effector. We motivate the use of a multi-modal tactile sensory system that combines static and dynamic force sensor arrays together with an absolute force measurement system. This publication focuses on the development of a compact sensor interface for a fiber-optic sensor array, as optical measurement principles tend to have a bulky interface. Mechanical, electrical and software approaches are combined to realize an integrated structure that provides decentralized data pre-processing of the tactile measurements. Local behaviors are implemented using this setup to show the effectiveness of the approach.

  7. Ecological opportunity and predator-prey interactions: linking eco-evolutionary processes and diversification in adaptive radiations.

    Science.gov (United States)

    Pontarp, Mikael; Petchey, Owen L

    2018-03-14

    Much of life's diversity has arisen through ecological opportunity and adaptive radiations, but the mechanistic underpinning of such diversification is not fully understood. Competition and predation can affect adaptive radiations, but contrasting theoretical and empirical results show that they can both promote and interrupt diversification. A mechanistic understanding of the link between microevolutionary processes and macroevolutionary patterns is thus needed, especially in trophic communities. Here, we use a trait-based eco-evolutionary model to investigate the mechanisms linking competition, predation and adaptive radiations. By combining available micro-evolutionary theory and simulations of adaptive radiations we show that intraspecific competition is crucial for diversification as it induces disruptive selection, in particular in early phases of radiation. The diversification rate is however decreased in later phases owing to interspecific competition as niche availability, and population sizes are decreased. We provide new insight into how predation tends to have a negative effect on prey diversification through decreased population sizes, decreased disruptive selection and through the exclusion of prey from parts of niche space. The seemingly disparate effects of competition and predation on adaptive radiations, listed in the literature, may thus be acting and interacting in the same adaptive radiation at different relative strength as the radiation progresses. © 2018 The Authors.

  8. Designing Training for Temporal and Adaptive Transfer: A Comparative Evaluation of Three Training Methods for Process Control Tasks

    Science.gov (United States)

    Kluge, Annette; Sauer, Juergen; Burkolter, Dina; Ritzmann, Sandrina

    2010-01-01

    Training in process control environments requires operators to be prepared for temporal and adaptive transfer of skill. Three training methods were compared with regard to their effectiveness in supporting transfer: Drill & Practice (D&P), Error Training (ET), and procedure-based and error heuristics training (PHT). Communication…

  9. A universal electronical adaptation of automats for biochemical analysis to a central processing computer by applying CAMAC-signals

    International Nuclear Information System (INIS)

    Schaefer, R.

    1975-01-01

    A universal expansion of a CAMAC-subsystem - BORER 3000 - for adapting analysis instruments in biochemistry to a processing computer is described. The possibility of standardizing input interfaces for lab instruments with such circuits is discussed and the advantages achieved by applying the CAMAC-specifications are described

  10. Person Fit Based on Statistical Process Control in an Adaptive Testing Environment. Research Report 98-13.

    Science.gov (United States)

    van Krimpen-Stoop, Edith M. L. A.; Meijer, Rob R.

    Person-fit research in the context of paper-and-pencil tests is reviewed, and some specific problems regarding person fit in the context of computerized adaptive testing (CAT) are discussed. Some new methods are proposed to investigate person fit in a CAT environment. These statistics are based on Statistical Process Control (SPC) theory. A…

  11. Application of Non-Kolmogorovian Probability and Quantum Adaptive Dynamics to Unconscious Inference in Visual Perception Process

    Science.gov (United States)

    Accardi, Luigi; Khrennikov, Andrei; Ohya, Masanori; Tanaka, Yoshiharu; Yamato, Ichiro

    2016-07-01

    Recently a novel quantum information formalism — quantum adaptive dynamics — was developed and applied to the modelling of information processing by bio-systems, including cognitive phenomena: from molecular biology (glucose-lactose metabolism for E. coli bacteria, epigenetic evolution) to cognition and psychology. From the foundational point of view, quantum adaptive dynamics describes the mutual adapting of the information states of two interacting systems (physical or biological) as well as the adapting of co-observations performed by the systems. In this paper we apply this formalism to model unconscious inference: the process of transition from sensation to perception. The paper combines theory and experiment. Statistical data collected in an experimental study on recognition of a particular ambiguous figure, the Schröder stairs, support the viability of the quantum(-like) model of unconscious inference, including modelling of biases generated by rotation-contexts. From the probabilistic point of view, we study (for concrete experimental data) the problem of contextuality of probability, its dependence on experimental contexts. Mathematically, contextuality leads to non-Kolmogorovness: probability distributions generated by various rotation contexts cannot be treated in the Kolmogorovian framework. At the same time they can be embedded in a “big Kolmogorov space” as conditional probabilities. However, such a Kolmogorov space has too complex a structure, and the operational quantum formalism in the form of quantum adaptive dynamics simplifies the modelling essentially.

  12. Locally-adaptive Myriad Filters for Processing ECG Signals in Real Time

    Directory of Open Access Journals (Sweden)

    Nataliya Tulyakova

    2017-03-01

    Full Text Available Locally adaptive myriad filters to suppress noise in electrocardiographic (ECG) signals in almost real time are proposed. Statistical estimates of efficiency according to integral values of such criteria as mean square error (MSE) and signal-to-noise ratio (SNR) for test ECG signals sampled at 400 Hz embedded in additive Gaussian noise with different values of variance are obtained. A comparative analysis of the adaptive filters is carried out. High efficiency of ECG filtering and high quality of signal preservation are demonstrated. It is shown that locally adaptive myriad filters provide a higher degree of suppression of additive Gaussian noise with the possibility of real-time implementation.
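
For reference, the sample myriad behind such filters is the location estimate minimizing sum_i log(K^2 + (x_i - theta)^2): a small linearity parameter K gives strong impulse rejection, while a large K approaches the sample mean. A sketch with a fixed K follows (the locally adaptive variant described in the paper would instead tune K from local signal statistics; the window length and K value here are illustrative assumptions):

```python
import numpy as np

def myriad(x, K, grid=None):
    """Sample myriad: argmin over theta of sum(log(K^2 + (x - theta)^2))."""
    x = np.asarray(x, dtype=float)
    if grid is None:
        grid = np.linspace(x.min(), x.max(), 2001)
    # Evaluate the myriad cost on a dense grid and take the minimizer.
    cost = np.log(K**2 + (x[None, :] - grid[:, None]) ** 2).sum(axis=1)
    return grid[cost.argmin()]

def myriad_filter(sig, K=0.5, win=5):
    """Sliding-window myriad filtering with edge padding."""
    half = win // 2
    pad = np.pad(np.asarray(sig, dtype=float), half, mode='edge')
    return np.array([myriad(pad[i:i + win], K) for i in range(len(sig))])

noisy = np.array([0.0, 0.1, 5.0, 0.05, -0.1, 0.0, 0.1])  # one impulse
clean = myriad_filter(noisy)  # impulse at index 2 is suppressed
```

The grid search keeps the sketch simple; production implementations use fixed-point iterations or polynomial root finding for speed, which matters for the real-time operation the abstract targets.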

  13. SNP Arrays

    Directory of Open Access Journals (Sweden)

    Jari Louhelainen

    2016-10-01

    Full Text Available The papers published in this Special Issue “SNP arrays” (Single Nucleotide Polymorphism Arrays) focus on several perspectives associated with arrays of this type. The range of papers varies from a case report to reviews, thereby targeting wider audiences working in this field. The research focus of SNP arrays is often human cancers, but this Issue expands that focus to include areas such as rare conditions, animal breeding and bioinformatics tools. Given the limited scope, the spectrum of papers is nothing short of remarkable, and even from a technical point of view these papers will contribute to the field at a general level. Three of the papers published in this Special Issue focus on the use of various SNP array approaches in the analysis of three different cancer types. Two of the papers concentrate on two very different rare conditions, applying the SNP arrays slightly differently. Finally, two other papers evaluate the use of SNP arrays in the context of genetic analysis of livestock. The findings reported in these papers help to close gaps in the current literature and also give guidelines for future applications of SNP arrays.

  14. Reconfigurable signal processor designs for advanced digital array radar systems

    Science.gov (United States)

    Suarez, Hernan; Zhang, Yan (Rockee); Yu, Xining

    2017-05-01

    The new challenges originating from Digital Array Radar (DAR) demand a new generation of reconfigurable backend processors. New FPGA devices can support much higher speeds, more bandwidth and greater processing capability to meet the needs of the digital Line Replaceable Unit (LRU). This study focuses on using the latest Altera and Xilinx devices in an adaptive beamforming processor. Field-reprogrammable RF devices from Analog Devices are used as analog front-end transceivers. Unlike other existing Software-Defined Radio transceivers on the market, this processor is designed for distributed adaptive beamforming in a networked environment. The following aspects of the novel radar processor are presented: (1) a new system-on-chip architecture based on Altera devices and an adaptive processing module, especially for adaptive beamforming and pulse compression; (2) a successful implementation of Generation 2 Serial RapidIO data links on FPGA, which supports the VITA-49 radio packet format for large distributed DAR processing; (3) a demonstration of the feasibility and capabilities of the processor on a MicroTCA-based SRIO switching backplane supporting multichannel beamforming in real time; and (4) applications of this processor in ongoing radar system development projects, including OU's dual-polarized digital array radar, the planned new cylindrical array radars, and future airborne radars.

  15. THE FORMATION OF SUBJECTIVITY AND NORMS IN THE PROCESS OF ADAPTATION OF YOUNG EMPLOYEES AT THE ENTERPRISE

    Directory of Open Access Journals (Sweden)

    Natalia V. Popova

    2016-01-01

    Full Text Available The aim of the publication is to determine how the formation of subjective qualities and of norms are interrelated in the process of adaptation of young employees at the enterprise. Methods. The research methodology combines theoretical analysis with the results of applied research at enterprises of the Sverdlovsk region. The dialectical method and comparative analysis are used. Results and theoretical novelty. The questions of adaptation of young employees at the enterprise are considered. The concepts of «subjectivity» and «norms» in philosophy are analyzed. Subjectivity is presented as the personal basis of the young worker's social activity at the enterprise; norms, as a means by which the individual adapts to the community in which he or she finds himself or herself. On the basis of socio-philosophical analysis, the characteristics of youth working at industrial enterprises are described, along with youth policy at industrial enterprises and the formation of values and norms of young workers during adaptation. It is shown that the relevance of forming subjective qualities and norms in young people stems not only from the need to develop the personality of young workers, but also from the economic security of the industrial enterprises where their working careers begin. Practical significance. The work provides a socio-philosophical substantiation of the interrelation between the formation of subjective qualities and norms in the adaptation of young employees, supplies the main provisions for developing adaptation programs for young employees at the enterprise, and supports the teaching of social and humanitarian disciplines for bachelor's and master's students majoring in «Organization of Work with Youth».

  16. Minimizing the effect of process mismatch in a neuromorphic system using spike-timing-dependent adaptation.

    Science.gov (United States)

    Cameron, Katherine; Murray, Alan

    2008-05-01

    This paper investigates whether spike-timing-dependent plasticity (STDP) can minimize the effect of mismatch within the context of a depth-from-motion algorithm. To improve noise rejection, this algorithm contains a spike prediction element, whose performance is degraded by analog very large scale integration (VLSI) mismatch. The error between the actual spike arrival time and the prediction is used as the input to an STDP circuit, to improve future predictions. Before STDP adaptation, the error reflects the degree of mismatch within the prediction circuitry. After STDP adaptation, the error indicates to what extent the adaptive circuitry can minimize the effect of transistor mismatch. The circuitry is tested with static and varying prediction times and chip results are presented. The effect of noisy spikes is also investigated. Under all conditions the STDP adaptation is shown to improve performance.
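A minimal sketch of the error-driven timing adaptation described above: the prediction is nudged by an STDP-like, exponentially weighted update of the timing error until it converges on the true arrival time. The update rule and its constants are generic STDP-style assumptions, not the paper's VLSI circuit.

```python
import math

A_PLUS, A_MINUS, TAU = 0.5, 0.5, 20.0   # assumed learning rates / time constant (ms)

def stdp_update(pred, actual):
    # Positive dt: the spike arrived later than predicted, so move the
    # prediction later; negative dt: move it earlier. The exponential
    # window weights small timing errors most strongly, as in STDP.
    dt = actual - pred
    if dt >= 0:
        return pred + A_PLUS * dt * math.exp(-dt / TAU)
    return pred - A_MINUS * (-dt) * math.exp(dt / TAU)

# Initial mismatch between prediction and actual arrival mimics the
# offset that transistor mismatch would introduce.
pred, actual = 10.0, 25.0               # ms
for _ in range(50):
    pred = stdp_update(pred, actual)
error = abs(actual - pred)              # residual error after adaptation
```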

  17. Socio-Emotional Adaptation Theory: Charting the Emotional Process of Alzheimer's Disease.

    Science.gov (United States)

    Halpin, Sean N; Dillard, Rebecca L; Puentes, William J

    2017-08-01

    The emotional reactions to the progression of Mild Cognitive Impairment and Alzheimer's disease (MCI/AD) oftentimes present as cognitive or behavioral changes, leading to misguided interventions by Formal Support (paid health care providers). Despite a rich body of literature identifying cognitive and behavioral staging of MCI/AD, the emotional changes that accompany these diagnoses have been largely ignored. The objective of this study was to develop a model of the emotional aspects of MCI/AD. One-hour, semistructured interviews were conducted concurrently with 14 patient-Informal Support Partner dyads (N = 28); patients were in various stages of MCI/AD. An interdisciplinary team employed a grounded theory coding process to detect emotional characteristics of the participants with MCI/AD. Emotional reactions were categorized into depression/sadness, apathy, concern/fear, anger/frustration, and acceptance. The emotions did not present linearly along the course of the disease and were instead entwined within a set of complex (positive/negative) interactions including: relationship with the Informal Support Partner (i.e., teamwork vs infantilization), relationship with the Formal Support (i.e., patient vs disengaged), coping (i.e., adaptive vs nonadaptive), and perceived control (i.e., internal vs external locus-of-control). For example, a person with poor formal and informal support and an external locus-of-control may become depressed, a condition that is known to negatively affect cognitive status. Understanding the emotional reactions of individuals diagnosed with MCI/AD will provide clinicians with information needed to develop treatments suited to the current needs of the patient, and provide Informal Support Partners insight into the cognitive and physical changes associated with MCI/AD. © The Author 2016. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved.

  18. Process-based quality management for clinical implementation of adaptive radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Noel, Camille E.; Santanam, Lakshmi; Parikh, Parag J.; Mutic, Sasa, E-mail: smutic@radonc.wustl.edu [Department of Radiation Oncology, Washington University School of Medicine, St. Louis, Missouri 63110 (United States)

    2014-08-15

    Purpose: Intensity-modulated adaptive radiotherapy (ART) has been the focus of considerable research and developmental work due to its potential therapeutic benefits. However, in light of its unique quality assurance (QA) challenges, no one has described a robust framework for its clinical implementation. In fact, recent position papers by ASTRO and AAPM have firmly endorsed pretreatment patient-specific IMRT QA, which limits the feasibility of online ART. The authors aim to address these obstacles by applying failure mode and effects analysis (FMEA) to identify high-priority errors and appropriate risk-mitigation strategies for clinical implementation of intensity-modulated ART. Methods: An experienced team of two clinical medical physicists, one clinical engineer, and one radiation oncologist was assembled to perform a standard FMEA for intensity-modulated ART. A set of 216 potential radiotherapy failures composed by the forthcoming AAPM task group 100 (TG-100) was used as the basis. Of the 216 failures, 127 were identified as most relevant to an ART scheme. Using the associated TG-100 FMEA values as a baseline, the team considered how the likeliness of occurrence (O), outcome severity (S), and likeliness of failure being undetected (D) would change for ART. New risk priority numbers (RPN) were calculated. Failures characterized by RPN ≥ 200 were identified as potentially critical. Results: FMEA revealed that ART RPN increased for 38% (n = 48/127) of potential failures, with 75% (n = 36/48) attributed to failures in the segmentation and treatment planning processes. Forty-three of 127 failures were identified as potentially critical. Risk-mitigation strategies include implementing a suite of quality control and decision support software, specialty QA software/hardware tools, and an increase in specially trained personnel. Conclusions: Results of the FMEA-based risk assessment demonstrate that intensity-modulated ART introduces different (but not necessarily
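The RPN screening used in this analysis reduces to a simple computation: RPN = O × S × D, with failures at RPN ≥ 200 flagged as potentially critical. The failure modes and O/S/D scores below are illustrative stand-ins, not values from the TG-100 list.

```python
# Each failure mode carries likeliness of occurrence (O), outcome
# severity (S), and likeliness of going undetected (D), each on a
# 1-10 scale as in a standard FMEA.
failure_modes = [
    {"name": "wrong contour propagated",      "O": 6, "S": 8, "D": 5},
    {"name": "stale image registration",      "O": 4, "S": 7, "D": 4},
    {"name": "plan exported to wrong machine", "O": 2, "S": 9, "D": 3},
]

def rpn(mode):
    # Risk priority number: product of the three FMEA scores.
    return mode["O"] * mode["S"] * mode["D"]

# Flag potentially critical failures using the RPN >= 200 threshold.
critical = [m["name"] for m in failure_modes if rpn(m) >= 200]
```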

  19. Process-based quality management for clinical implementation of adaptive radiotherapy

    International Nuclear Information System (INIS)

    Noel, Camille E.; Santanam, Lakshmi; Parikh, Parag J.; Mutic, Sasa

    2014-01-01

    Purpose: Intensity-modulated adaptive radiotherapy (ART) has been the focus of considerable research and developmental work due to its potential therapeutic benefits. However, in light of its unique quality assurance (QA) challenges, no one has described a robust framework for its clinical implementation. In fact, recent position papers by ASTRO and AAPM have firmly endorsed pretreatment patient-specific IMRT QA, which limits the feasibility of online ART. The authors aim to address these obstacles by applying failure mode and effects analysis (FMEA) to identify high-priority errors and appropriate risk-mitigation strategies for clinical implementation of intensity-modulated ART. Methods: An experienced team of two clinical medical physicists, one clinical engineer, and one radiation oncologist was assembled to perform a standard FMEA for intensity-modulated ART. A set of 216 potential radiotherapy failures composed by the forthcoming AAPM task group 100 (TG-100) was used as the basis. Of the 216 failures, 127 were identified as most relevant to an ART scheme. Using the associated TG-100 FMEA values as a baseline, the team considered how the likeliness of occurrence (O), outcome severity (S), and likeliness of failure being undetected (D) would change for ART. New risk priority numbers (RPN) were calculated. Failures characterized by RPN ≥ 200 were identified as potentially critical. Results: FMEA revealed that ART RPN increased for 38% (n = 48/127) of potential failures, with 75% (n = 36/48) attributed to failures in the segmentation and treatment planning processes. Forty-three of 127 failures were identified as potentially critical. Risk-mitigation strategies include implementing a suite of quality control and decision support software, specialty QA software/hardware tools, and an increase in specially trained personnel. Conclusions: Results of the FMEA-based risk assessment demonstrate that intensity-modulated ART introduces different (but not necessarily

  20. A dutch adaptation of the child-rearing styles inventory and a validation of krohne's two-process model.

    Science.gov (United States)

    Depreeuw, E; Lens, W; Horebeek, W

    1995-01-01

    A Questionnaire for the Parent-Child Interaction (VOKI) has been developed by adapting Krohne's German ESI for the Flemish high-school population. The psychometric characteristics of the adaptation are satisfactory. The ESI factor structure has been replicated and the VOKI scales are perfectly comparable to the original German scales. Further research on the VOKI and two questionnaires assessing achievement-related concepts such as test anxiety, procrastination and achievement motivation yielded correlational patterns partly predicted by Krohne's Two-Process Model. The relations between parental child-rearing styles and competence and consequence expectancies are in line with this model, whereas test anxiety and procrastination seem to be determined in a more complex way.

  1. The adaptation process following acute onset disability: an interactive two-dimensional approach applied to acquired brain injury.

    Science.gov (United States)

    Brands, Ingrid M H; Wade, Derick T; Stapert, Sven Z; van Heugten, Caroline M

    2012-09-01

    To describe a new model of the adaptation process following acquired brain injury, based on the patient's goals, the patient's abilities and the emotional response to the changes and the possible discrepancy between goals and achievements. The process of adaptation after acquired brain injury is characterized by a continuous interaction of two processes: achieving maximal restoration of function and adjusting to the alterations and losses that occur in the various domains of functioning. Consequently, adaptation requires a balanced mix of restoration-oriented coping and loss-oriented coping. The commonly used framework to explain adaptation and coping, 'The Theory of Stress and Coping' of Lazarus and Folkman, does not capture this interactive duality. This model additionally considers theories concerned with self-regulation of behaviour, self-awareness and self-efficacy, and with the setting and achievement of goals. THE TWO-DIMENSIONAL MODEL: Our model proposes the simultaneous and continuous interaction of two pathways; goal pursuit (short term and long term) or revision as a result of success and failure in reducing distance between current state and expected future state and an affective response that is generated by the experienced goal-performance discrepancies. This affective response, in turn, influences the goals set. This two-dimensional representation covers the processes mentioned above: restoration of function and consideration of long-term limitations. We propose that adaptation centres on readjustment of long-term goals to new achievable but desired and important goals, and that this adjustment underlies re-establishing emotional stability. We discuss how the proposed model is related to actual rehabilitation practice.

  2. electrode array

    African Journals Online (AJOL)

    PROF EKWUEME

    A geoelectric investigation employing vertical electrical soundings (VES) using the Ajayi - Makinde Two-Electrode array and the ... arrangements used in electrical D.C. resistivity survey. These include ..... Refraction Tomography to Study the.

  3. Narrowband direction of arrival estimation for antenna arrays

    CERN Document Server

    Foutz, Jeffrey

    2008-01-01

    This book provides an introduction to narrowband array signal processing, classical and subspace-based direction of arrival (DOA) estimation with an extensive discussion on adaptive direction of arrival algorithms. The book begins with a presentation of the basic theory, equations, and data models of narrowband arrays. It then discusses basic beamforming methods and describes how they relate to DOA estimation. Several of the most common classical and subspace-based direction of arrival methods are discussed. The book concludes with an introduction to subspace tracking and shows how subspace tr
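One of the subspace-based DOA methods such a book covers, MUSIC, can be sketched for a uniform linear array at half-wavelength spacing. The element count, snapshot count, noise level, and source angles below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, d = 8, 200, 0.5                         # elements, snapshots, spacing (wavelengths)
true_doas = np.deg2rad([-20.0, 30.0])
K = len(true_doas)                            # number of sources, assumed known

def steering(theta):
    # ULA steering vector for arrival angle theta.
    return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))

# Simulate snapshots: two uncorrelated sources plus white noise.
A = np.stack([steering(th) for th in true_doas], axis=1)
S = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = A @ S + noise
R = X @ X.conj().T / N                        # sample covariance

eigvecs = np.linalg.eigh(R)[1]                # eigenvalues in ascending order
En = eigvecs[:, : M - K]                      # noise subspace

# MUSIC pseudospectrum: steering vectors orthogonal to the noise
# subspace produce sharp peaks.
grid = np.deg2rad(np.arange(-90.0, 90.0, 0.25))
p = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(th)) ** 2 for th in grid])

# Pick the K largest local maxima of the pseudospectrum.
locmax = [i for i in range(1, len(p) - 1) if p[i - 1] < p[i] > p[i + 1]]
top = sorted(sorted(locmax, key=lambda i: p[i])[-K:])
est = np.rad2deg(grid[top])                   # estimated DOAs, ascending
```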

  4. Low-cost Solar Array Project. Feasibility of the Silane Process for Producing Semiconductor-grade Silicon

    Science.gov (United States)

    1979-01-01

    The feasibility of Union Carbide's silane process for commercial application was established. An integrated process design for an experimental process system development unit and a commercial facility were developed. The corresponding commercial plant economic performance was then estimated.

  5. Turning risk assessment and adaptation policy priorities into meaningful interventions and governance processes

    Science.gov (United States)

    Brown, Kathryn; DiMauro, Manuela; Johns, Daniel; Holmes, Gemma; Thompson, David; Russell, Andrew; Style, David

    2018-06-01

    The UK is one of the first countries in the world to have set up a statutory system of national climate risk assessments followed by a national adaptation programme. Having this legal framework has been essential for enabling adaptation at the government level in a challenging political environment. However, using this framework to create an improvement in resilience to climate change across the country requires more than publishing a set of documents; it requires careful thought about what interventions work, how they can be enabled and what level of risk acceptability individuals, organizations and the country should be aiming for. This article is part of the theme issue `Advances in risk assessment for climate change adaptation policy'.

  6. Adaptive variational mode decomposition method for signal processing based on mode characteristic

    Science.gov (United States)

    Lian, Jijian; Liu, Zhuo; Wang, Haijun; Dong, Xiaofeng

    2018-07-01

    Variational mode decomposition is a completely non-recursive decomposition model in which all modes are extracted concurrently. However, the model requires a preset mode number, which limits its adaptability, since a large deviation in the preset number of modes causes modes to be discarded or mixed. Hence, a method called Adaptive Variational Mode Decomposition (AVMD) is proposed to determine the mode number automatically, based on the characteristics of the intrinsic mode functions. The method was used to analyze simulated signals and measured signals from a hydropower plant. Comparisons with VMD, EMD and EWT were also conducted to evaluate its performance. The results indicate that the proposed method is strongly adaptive and robust to noise, and that it determines the mode number appropriately, without mode mixing, even when the signal frequencies are relatively close.
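The adaptive idea above, growing the mode number until two recovered center frequencies nearly coincide, can be sketched with a deliberately miniature Fourier-domain VMD. All parameters (alpha, the initialization, the 0.01 closeness threshold) are illustrative assumptions, not the paper's criterion.

```python
import numpy as np

def vmd_centers(f, K, alpha=2000.0, n_iter=150):
    # Miniature VMD: alternate Wiener-like filtering of each mode around
    # its center frequency with a power-weighted center-frequency update.
    N = f.size
    fhat = np.fft.fft(f)
    freqs = np.fft.fftfreq(N)                    # normalized frequencies
    u = np.zeros((K, N), dtype=complex)
    omega = np.linspace(0.02, 0.45, K)           # initial center guesses
    for _ in range(n_iter):
        for k in range(K):
            others = u.sum(axis=0) - u[k]
            u[k] = (fhat - others) / (1 + 2 * alpha * (np.abs(freqs) - omega[k]) ** 2)
            pos = freqs > 0
            w = np.abs(u[k, pos]) ** 2
            omega[k] = float(np.sum(freqs[pos] * w) / (np.sum(w) + 1e-12))
    return np.sort(omega)

def adaptive_mode_number(f, max_K=6, min_gap=0.01):
    # Grow K until two recovered centers collapse onto each other,
    # then keep the previous K.
    for K in range(2, max_K + 1):
        centers = vmd_centers(f, K)
        if np.min(np.diff(centers)) < min_gap:
            return K - 1
    return max_K

# Two-tone test signal at normalized frequencies 0.05 and 0.20.
n = np.arange(1000)
sig = np.sin(2 * np.pi * 0.05 * n) + 0.5 * np.sin(2 * np.pi * 0.20 * n)
best_K = adaptive_mode_number(sig)
centers = vmd_centers(sig, best_K)
```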

  7. Internet-usage patterns of immigrants in the process of intercultural adaptation.

    Science.gov (United States)

    Chen, Wenli

    2010-08-01

    This paper investigates Internet-usage patterns of immigrants, and seeks to identify the correlation between Internet use and intercultural adaptation. The study focuses on mainland Chinese immigrants in Singapore, and was conducted via a nationwide telephone survey. The results show that immigrants tend to change their preferences on Internet use to reflect their residence in the host country. In particular, the longer an immigrant resides in the host country, the less likely they would be to surf their original country's websites and the more likely they would be to communicate with local people via the Internet. More importantly, differences in Internet usage are found to have a significant impact on immigrants' intercultural adaptation. In an online environment, the social communication in the host country is a critical component that can facilitate or impede immigrants' successful adaptation to the host country, whereas ethnic social communication also plays a role at the initial stage of transition.

  8. Multi-objective optimization of p-xylene oxidation process using an improved self-adaptive differential evolution algorithm

    Institute of Scientific and Technical Information of China (English)

    Lili Tao; Bin Xu; Zhihua Hu; Weimin Zhong

    2017-01-01

    The rise in global polyester fiber use has contributed to strong demand for terephthalic acid (TPA). The liquid-phase catalytic oxidation of p-xylene (PX) to TPA is regarded as a critical and efficient chemical process in industry [1]. PX oxidation involves many complex side reactions, among which acetic acid combustion and PX combustion are the most important. As the target product of this oxidation process, the quality and yield of TPA are of great concern. However, improving the qualified product yield can bring about high energy consumption, which means that the economic objectives of this process cannot be achieved simultaneously, because the two objectives conflict with each other. In this paper, an improved self-adaptive multi-objective differential evolution algorithm is proposed to handle such multi-objective optimization problems. The immune concept is introduced into the self-adaptive multi-objective differential evolution algorithm (SADE) to strengthen its local search ability and optimization accuracy. The proposed algorithm is successfully tested on several benchmark problems, and performance measures such as convergence and divergence metrics are calculated. Subsequently, the multi-objective optimization of an industrial PX oxidation process is carried out using the proposed immune self-adaptive multi-objective differential evolution algorithm (ISADE). Optimization results indicate that applying ISADE can greatly improve the yield of TPA with low combustion loss and without degrading TPA quality.
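The class of algorithm described above can be sketched with a minimal single-objective self-adaptive differential evolution (jDE-style F/CR self-adaptation) on a toy objective; the immune operator, the multi-objective machinery, and the PX-oxidation objectives of the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

def sphere(x):
    # Toy objective: global minimum 0 at the origin.
    return float(np.sum(x ** 2))

dim, npop, gens = 5, 30, 200
pop = rng.uniform(-5, 5, (npop, dim))
F = np.full(npop, 0.5)                     # per-individual mutation factor
CR = np.full(npop, 0.9)                    # per-individual crossover rate
fit = np.array([sphere(x) for x in pop])

for _ in range(gens):
    for i in range(npop):
        # Self-adaptation: occasionally resample this individual's F and CR.
        Fi = rng.uniform(0.1, 1.0) if rng.random() < 0.1 else F[i]
        CRi = rng.random() if rng.random() < 0.1 else CR[i]
        # DE/rand/1 mutation with three distinct partners.
        a, b, c = rng.choice([j for j in range(npop) if j != i], 3, replace=False)
        mutant = pop[a] + Fi * (pop[b] - pop[c])
        # Binomial crossover, forcing at least one mutant component.
        cross = rng.random(dim) < CRi
        cross[rng.integers(dim)] = True
        trial = np.where(cross, mutant, pop[i])
        f_trial = sphere(trial)
        if f_trial <= fit[i]:              # greedy selection keeps the F/CR that worked
            pop[i], fit[i], F[i], CR[i] = trial, f_trial, Fi, CRi

best = fit.min()
```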

  9. Process development for automated solar cell and module production. Task 4. Automated array assembly. Quarterly report No. 1

    Energy Technology Data Exchange (ETDEWEB)

    Hagerty, J. J.

    1980-10-15

    Work has been divided into five phases. The first phase is to modify existing hardware and controlling computer software to: (1) improve cell-to-cell placement accuracy, (2) improve the solder joint while reducing the amount of solder and flux smear on the cell's surface, and (3) reduce the system cycle time to 10 seconds. The second phase involves expanding the existing system's capabilities to be able to reject broken cells and make post-solder electrical tests. Phase 3 involves developing new hardware to allow for the automated encapsulation of solar modules. This involves three discrete pieces of hardware: (1) a vacuum platen end effector for the robot which allows it to pick up the 1' x 4' array of 35 inter-connected cells. With this, it can also pick up the cover glass and completed module, (2) a lamination preparation station which cuts the various encapsulation components from roll storage and positions them for encapsulation, and (3) an automated encapsulation chamber which interfaces with the above two and applies the heat and vacuum to cure the encapsulants. Phase 4 involves the final assembly of the encapsulated array into a framed, edge-sealed module completed for installation. For this we are using MBA's Glass Reinforced Concrete (GRC) in panels such as those developed by MBA for JPL under contract No. 955281. The GRC panel plays the multiple role of edge frame, substrate and mounting structure. An automated method of applying the edge seal will also be developed. The final phase (5) is the fabrication of six 1' x 4' electrically active solar modules using the above developed equipment. Progress is reported. (WHK)

  10. Determination of Rayleigh wave ellipticity using single-station and array-based processing of ambient seismic noise

    Science.gov (United States)

    Workman, Eli Joseph

    We present a single-station method for the determination of Rayleigh wave ellipticity, or Rayleigh wave horizontal-to-vertical amplitude ratio (H/V), using Frequency Dependent Polarization Analysis (FDPA). This procedure uses singular value decomposition of 3-by-3 spectral covariance matrices over 1-hr time windows to determine properties of the ambient seismic noise field, such as particle motion and dominant wave type. In FDPA, if the noise is mostly dominated by a primary singular value and the phase difference is roughly 90° between the major horizontal axis and the vertical axis of the corresponding singular vector, we infer that Rayleigh waves are dominant and measure an H/V ratio for that hour and frequency bin. We perform this analysis for all available data from the EarthScope Transportable Array between 2004 and 2014. We compare the observed Rayleigh wave H/V ratios with those previously measured by multicomponent, multistation noise cross-correlation (NCC), as well as classical noise spectrum H/V ratio analysis (NSHV). At 8 s the results from all three methods agree, suggesting that the ambient seismic noise field is Rayleigh wave dominated. Between 10 and 30 s, while the general pattern agrees well, the results from FDPA and NSHV are persistently slightly higher (~2%) and significantly higher (>20%), respectively, than results from the array-based NCC. This is likely caused by contamination from other wave types (i.e., Love waves, body waves, and tilt noise) in the single-station methods, but it could also reflect a small, persistent error in NCC. Additionally, we find that the single-station method has difficulty retrieving robust Rayleigh wave H/V ratios within major sedimentary basins, such as the Williston Basin and Mississippi Embayment, where the noise field is likely dominated by reverberating Love waves.
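The single-station procedure described above can be sketched for one frequency bin: build 3-by-3 spectral covariance matrices from windowed Fourier coefficients, take the leading singular vector, and read off H/V and the horizontal-vertical phase lag. The synthetic three-component signal, window length, and noise level below are illustrative assumptions, not the Transportable Array data.

```python
import numpy as np

rng = np.random.default_rng(7)
fs, f0, hv_true = 100.0, 5.0, 0.7         # sample rate (Hz), test frequency, true H/V
t = np.arange(0, 600.0, 1 / fs)

# Synthetic Rayleigh-like motion: vertical and radial 90 degrees out
# of phase (elliptical particle motion), transverse mostly noise.
z = np.cos(2 * np.pi * f0 * t)
r = hv_true * np.sin(2 * np.pi * f0 * t)
tr = 0.05 * rng.standard_normal(t.size)
data = np.stack([z, r, tr]) + 0.05 * rng.standard_normal((3, t.size))

win = int(fs * 10)                        # 10 s windows
nwin = t.size // win
k = int(round(f0 * win / fs))             # FFT bin at f0
C = np.zeros((3, 3), dtype=complex)
for w in range(nwin):
    seg = data[:, w * win:(w + 1) * win]
    s = np.fft.rfft(seg, axis=1)[:, k]    # 3 complex Fourier coefficients
    C += np.outer(s, s.conj())            # accumulate spectral covariance
C /= nwin

u = np.linalg.svd(C)[0][:, 0]             # leading singular vector
hv = abs(u[1]) / abs(u[0])                # horizontal / vertical amplitude ratio
phase = np.angle(u[1] / u[0])             # near +/- 90 deg for Rayleigh motion
```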

  11. Can Survival Processing Enhance Story Memory? Testing the Generalizability of the Adaptive Memory Framework

    Science.gov (United States)

    Seamon, John G.; Bohn, Justin M.; Coddington, Inslee E.; Ebling, Maritza C.; Grund, Ethan M.; Haring, Catherine T.; Jang, Sue-Jung; Kim, Daniel; Liong, Christopher; Paley, Frances M.; Pang, Luke K.; Siddique, Ashik H.

    2012-01-01

    Research from the adaptive memory framework shows that thinking about words in terms of their survival value in an incidental learning task enhances their free recall relative to other semantic encoding strategies and intentional learning (Nairne, Pandeirada, & Thompson, 2008). We found similar results. When participants used incidental…

  12. Translation and adaptation of a questionnaire to assess the group processes of rehabilitation team conferences

    NARCIS (Netherlands)

    Roelofsen, E.E.; Lankhorst, G.J.; Bouter, L.M.

    2001-01-01

    Objective: To investigate the internal consistency, the domain structure and the influence of social desirability with regard to a questionnaire translated and adapted to assess the quality of rehabilitation team conferences in the Netherlands. Study design: A questionnaire to determine group

  13. Adaptive control of anaerobic digestion processes-a pilot-scale application.

    Science.gov (United States)

    Renard, P; Dochain, D; Bastin, G; Naveau, H; Nyns, E J

    1988-03-01

    A simple adaptive control algorithm, for which theoretical stability and convergence properties had been previously demonstrated, has been successfully implemented on a biomethanation pilot reactor. The methane digester, operated in CSTR mode, was subjected to a shock load and successfully computer-controlled during the subsequent transitory state.

  14. Rational Adaptation under Task and Processing Constraints: Implications for Testing Theories of Cognition and Action

    Science.gov (United States)

    Howes, Andrew; Lewis, Richard L.; Vera, Alonso

    2009-01-01

    The authors assume that individuals adapt rationally to a utility function given constraints imposed by their cognitive architecture and the local task environment. This assumption underlies a new approach to modeling and understanding cognition--cognitively bounded rational analysis--that sharpens the predictive acuity of general, integrated…

  15. Adaptive and Decentralized Operator Placement for In-Network Query Processing

    DEFF Research Database (Denmark)

    Bonfils, B; Bonnet, Philippe

    2003-01-01

    In this paper, we show that this problem is a variant of the task assignment problem, for which polynomial algorithms have been developed. These algorithms are however centralized and cannot be used in a sensor network. We describe an adaptive and decentralized algorithm that progressively refines the placement

  16. Adaptability in diversification processes of cyanobacteria; the example of Synechococcus bigranulatus

    Czech Academy of Sciences Publication Activity Database

    Komárek, Jiří; Kaštovský, J.

    2003-01-01

    Roč. 148, č. 109 (2003), s. 299-304 ISSN 0342-1120. [Symposium of the International Association for Cyanophyte Research /15./. Barcelona, 03.09.2001-07.09.2001] R&D Projects: GA AV ČR KSK6005114 Keywords : cyanobacteria * adaptation * ecophysiology Subject RIV: EF - Botanics

  17. Development of an Advanced, Automatic, Ultrasonic NDE Imaging System via Adaptive Learning Network Signal Processing Techniques

    Science.gov (United States)

    1981-03-13

    ABSTRACT (continued): …in concert with a sophisticated detector has… and New York, 1969. Whalen, M.F., L.J. O'Brien, and A.N. Mucciardi, "Application of Adaptive Learning Networks for the Characterization of Two…

  18. Photoreceptor processing speed and input resistance changes during light adaptation correlate with spectral class in the bumblebee, Bombus impatiens.

    Directory of Open Access Journals (Sweden)

    Peter Skorupski

    Full Text Available Colour vision depends on the comparison of signals from photoreceptors with different spectral sensitivities. However, response properties of photoreceptor cells may differ in ways other than spectral tuning. In insects, for example, broadband photoreceptors with a major sensitivity peak in the green region of the spectrum (>500 nm) drive fast visual processes, which are largely blind to chromatic signals from more narrowly tuned photoreceptors with peak sensitivities in the blue and UV regions of the spectrum. In addition, electrophysiological properties of the photoreceptor membrane may result in differences in response dynamics of photoreceptors of similar spectral class between species, and of different spectral classes within a species. We used intracellular electrophysiological techniques to investigate the response dynamics of the three spectral classes of photoreceptor underlying trichromatic colour vision in the bumblebee, Bombus impatiens, and we compare these with previously published data from a related species, Bombus terrestris. In both species, we found significantly faster responses in green-sensitive photoreceptors compared with blue- or UV-sensitive photoreceptors, although all three photoreceptor types are slower in B. impatiens than in B. terrestris. Integration times for light-adapted B. impatiens photoreceptors (estimated from impulse-response half-width) were 11.3 ± 1.6 ms for green photoreceptors, compared with 18.6 ± 4.4 ms and 15.6 ± 4.4 ms for blue and UV, respectively. We also measured photoreceptor input resistance in dark- and light-adapted conditions. All photoreceptors showed a decrease in input resistance during light adaptation, but this decrease was considerably larger in green photoreceptors (declining to about 22% of the dark value) than in blue and UV photoreceptors (41% and 49%, respectively). Our results suggest that the conductances associated with light adaptation are largest in green photoreceptors, contributing to their greater temporal processing speed.
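The half-width estimate of integration time mentioned above can be illustrated on a toy impulse response; the response shape and time constant here are assumptions for illustration, not the bumblebee data.

```python
import numpy as np

t = np.arange(0.0, 100.0, 0.01)           # time axis, ms
tau = 8.0                                  # assumed time-to-peak, ms
resp = (t / tau) * np.exp(1 - t / tau)     # toy impulse response, peak 1.0 at t = tau

# Integration time as full width at half maximum of the impulse response.
half = resp >= 0.5 * resp.max()
fwhm = t[half][-1] - t[half][0]            # ms
```

A faster photoreceptor has a narrower impulse response and hence a smaller FWHM, which is why green cells with ~11 ms integration times support faster processing than blue or UV cells.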

  19. Fabrication of micro-dot arrays and micro-walls of acrylic acid/melamine resin on aluminum by AFM probe processing and electrophoretic coating

    International Nuclear Information System (INIS)

    Kurokawa, S.; Kikuchi, T.; Sakairi, M.; Takahashi, H.

    2008-01-01

    Micro-dot arrays and micro-walls of acrylic acid/melamine resin were fabricated on aluminum by anodizing, atomic force microscope (AFM) probe processing, and electrophoretic deposition. Barrier type anodic oxide films of 15 nm thickness were formed on aluminum and then the specimen was scratched with an AFM probe in a solution containing acrylic acid/melamine resin nano-particles to remove the anodic oxide film locally. After scratching, the specimen was anodically polarized to deposit acrylic acid/melamine resin electrophoretically at the film-removed area. The resin deposited on the specimen was finally cured by heating. It was found that scratching with the AFM probe on open circuit leads to the contamination of the probe with resin, due to positive shifts in the potential during scratching. Scratching of the specimen under potentiostatic conditions at -1.0 V, however, resulted in successful resin deposition at the film-removed area without probe contamination. The rate of resin deposition increased as the specimen potential becomes more positive during electrophoretic deposition. Arrays of resin dots with a few to several tens μm diameter and 100-1000 nm height, and resin walls with 100-1000 nm height and 1 μm width were obtained on specimens by successive anodizing, probe processing, and electrophoretic deposition

  20. Fabrication of micro-dot arrays and micro-walls of acrylic acid/melamine resin on aluminum by AFM probe processing and electrophoretic coating

    Energy Technology Data Exchange (ETDEWEB)

    Kurokawa, S.; Kikuchi, T.; Sakairi, M. [Graduate School of Engineering, Hokkaido University, N-13, W-8, Kita-Ku, Sapporo 060-8628 (Japan); Takahashi, H. [Graduate School of Engineering, Hokkaido University, N-13, W-8, Kita-Ku, Sapporo 060-8628 (Japan)], E-mail: takahasi@elechem1-mc.eng.hokudai.ac.jp

    2008-11-30

    Micro-dot arrays and micro-walls of acrylic acid/melamine resin were fabricated on aluminum by anodizing, atomic force microscope (AFM) probe processing, and electrophoretic deposition. Barrier type anodic oxide films of 15 nm thickness were formed on aluminum and then the specimen was scratched with an AFM probe in a solution containing acrylic acid/melamine resin nano-particles to remove the anodic oxide film locally. After scratching, the specimen was anodically polarized to deposit acrylic acid/melamine resin electrophoretically at the film-removed area. The resin deposited on the specimen was finally cured by heating. It was found that scratching with the AFM probe on open circuit leads to the contamination of the probe with resin, due to positive shifts in the potential during scratching. Scratching of the specimen under potentiostatic conditions at -1.0 V, however, resulted in successful resin deposition at the film-removed area without probe contamination. The rate of resin deposition increased as the specimen potential becomes more positive during electrophoretic deposition. Arrays of resin dots with a few to several tens μm diameter and 100-1000 nm height, and resin walls with 100-1000 nm height and 1 μm width were obtained on specimens by successive anodizing, probe processing, and electrophoretic deposition.