The book briefly introduces the diverse literature in the field of fractional order signal processing, an emerging topic among an interdisciplinary community of researchers. This book is aimed at postgraduate and beginning-level research scholars who would like to work in the field of Fractional Order Signal Processing (FOSP). Readers should have preliminary knowledge of basic signal processing techniques. Prerequisite knowledge of fractional calculus is not essential; it is expounded at relevant places in connection with the appropriate signal processing topics. Basic signal processing techniques such as filtering, estimation, and system identification are presented in the light of fractional order calculus, along with relevant application areas. Readers can easily extend these concepts to varied disciplines such as image or speech processing, pattern recognition, time series forecasting, financial data analysis and modeling, traffic modeling in communication channels, optics, b...
Signal processing techniques, now used extensively to maximize the performance of audio and video equipment, have been a key part of the design of hardware and software for high-energy physics detectors since pioneering applications in the UA1 experiment at CERN in 1979.
Alessio, Silvia Maria
This book covers the basics of processing and spectral analysis of monovariate discrete-time signals. The approach is practical, the aim being to acquaint the reader with the indications for and drawbacks of the various methods and to highlight possible misuses. The book is rich in original ideas, visualized in new and illuminating ways, and is structured so that parts can be skipped without loss of continuity. Many examples are included, based on synthetic data and real measurements from the fields of physics, biology, medicine, macroeconomics etc., and a complete set of MATLAB exercises requiring no previous experience of programming is provided. Prior advanced mathematical skills are not needed in order to understand the contents: a good command of basic mathematical analysis is sufficient. Where more advanced mathematical tools are necessary, they are included in an Appendix and presented in an easy-to-follow way. With this book, digital signal processing leaves the domain of engineering to address the ne...
Clausen, Anders; Palushani, Evarist; Mulvad, Hans Christian Hansen
This survey paper presents some of the applications in which the versatile time-lens concept can successfully be applied to ultra-high-speed serial systems, offering functionalities expected to be needed in future optical communication networks....
Implementations of sophisticated mathematical techniques in advanced digital signal processors can significantly improve performance. Future VLSI and VHSI circuit designs must include the practical realization of these algorithms. A structured design approach is described and illustrated with examples from a RNS FIR filter processor development project. The CAE hardware and software required to support tasks of this complexity are also discussed. An EWS is recommended for controlling essential functions such as logic optimization, simulation and verification. The total IC design system is illustrated with the implementation of a new high performance algorithm for computing complex magnitude.
Representative Operational System; MMSE – Minimum Mean Squared Error; MNS – MIST National Systems; MOE – Margin of Error; MOPS – Measures of Performance ... assuming pulse-to-pulse invariance of the ground clutter. Additionally, Doppler processing is employed to further suppress clutter returns and ... the covariance matrix via the minimum mean squared-error (MMSE) estimate [F-1 - F-3], where G indicates the use of guard cells. The Xi in ...
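The guard-cell covariance estimate mentioned in the excerpt can be sketched as follows. This is a minimal illustration, not the report's actual formulation; the function name, data layout, and guard-cell count are assumptions:

```python
import numpy as np

def estimate_covariance(snapshots, cut, n_guard=2):
    """Sample covariance from training cells, excluding the cell
    under test (CUT) and n_guard guard cells on each side of it."""
    n_cells, _ = snapshots.shape
    guard = set(range(max(0, cut - n_guard), min(n_cells, cut + n_guard + 1)))
    train = [k for k in range(n_cells) if k not in guard]
    X = snapshots[train]                   # training data only
    return (X.conj().T @ X) / len(train)   # sample covariance estimate

rng = np.random.default_rng(0)
data = rng.standard_normal((64, 4)) + 1j * rng.standard_normal((64, 4))
R = estimate_covariance(data, cut=32)
print(R.shape)
```

Excluding the guard cells keeps energy from the cell under test from leaking into the clutter estimate, which is the role the G notation plays in the excerpt.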
Lee, Chae Uk
This book deals with the newest digital signal processing. It contains an introduction to the concept of digital signal processing and its constitution and purpose; signals and systems, including continuous signals, discrete signals, and discrete systems; I/O expression via the impulse response; convolution; the mutual connection of systems and frequency characteristics; the z-transform, covering its definition, region of convergence, applications, and relationship with the Laplace transform; the discrete Fourier transform and the fast Fourier transform, including the IDFT algorithm and FFT applications; the foundations of digital filters, covering notation, expression, types, the frequency characteristics of digital filters, and the filter design procedure; the design of FIR digital filters; the design of IIR digital filters; adaptive signal processing; audio signal processing; video signal processing; and applications of digital signal processing.
This book is the first in a set of forthcoming books focused on state-of-the-art developments in the VLSI Signal Processing area. It is a response to the tremendous research activity taking place in that field. This activity has been driven by two factors: the dramatic increase in demand for high-speed signal processing, especially in consumer electronics, and evolving microelectronic technologies. The available technology has always been one of the main factors determining the algorithms, architectures, and design strategies to be followed. With every new technology, signal processing systems go through many changes in concepts, design methods, and implementation. The goal of this book is to introduce the reader to the main features of VLSI Signal Processing and the ongoing developments in this area. The focus of this book is on: • Current developments in Digital Signal Processing (DSP) processors and architectures - several examples and case studies of existing DSP chips are discussed in...
This book provides a clear understanding of the principles of signal processing of radiation detectors. It puts great emphasis on the characteristics of pulses from various types of detectors and offers a full overview on the basic concepts required to understand detector signal processing systems and pulse processing techniques. Signal Processing for Radiation Detectors covers all of the important aspects of signal processing, including energy spectroscopy, timing measurements, position-sensing, pulse-shape discrimination, and radiation intensity measurement. The book encompasses a wide range of applications so that readers from different disciplines can benefit from all of the information. In addition, this resource: * Describes both analog and digital techniques of signal processing * Presents a complete compilation of digital pulse processing algorithms * Extrapolates content from more than 700 references covering classic papers as well as those of today * Demonstrates concepts with more than 340 origin...
Lockhart, Gordon B
Basic Digital Signal Processing describes the principles of digital signal processing and experiments with BASIC programs involving the fast Fourier transform (FFT). The book reviews the fundamentals of the BASIC program and continuous and discrete-time signals, including analog signals, Fourier analysis, the discrete Fourier transform, signal energy, and power. The text also explains digital signal processing involving digital filters, linear time-invariant systems, the discrete-time unit impulse, discrete-time convolution, and the alternative structure for second-order infinite impulse response (IIR) sections.
Biemond, J.; Slump, C.H.; Lagendijk, R.L.; Tolhuizen, L.M.G.M.; de With, P.H.N.
Digital Signal Processing (DSP) concerns the theoretical and practical aspects of representing information-bearing signals in digital form and the use of processors or special purpose hardware to extract that information or to transform the signals in useful ways. Areas where digital signal
Sophisticated techniques for signal processing are now available to the biomedical specialist! Written in an easy-to-read, straightforward style, Biomedical Signal Processing presents techniques to eliminate background noise, enhance signal detection, and analyze computer data, making results easy to comprehend and apply. In addition to examining techniques for electrical signal analysis, filtering, and transforms, the author supplies an extensive appendix with several computer programs that demonstrate techniques presented in the text.
Alvarez, A.; Premkumar, A. B.
An economical and efficient VLSI implementation of a mixed signal processing system (MSP) is presented in this paper. The MSP concept is investigated and the functional blocks of the proposed MSP are described. The requirements of each of the blocks are discussed in detail. A sample application using active acoustic cancellation technique is described to demonstrate the power of the MSP approach.
O'Shea, Peter; Hussain, Zahir M
In three parts, this book contributes to the advancement of engineering education and serves as a general reference on digital signal processing. Part I presents the basics of analog and digital signals and systems in the time and frequency domains. It covers the core topics: convolution, transforms, filters, and random signal analysis. It also treats important applications including signal detection in noise, radar range estimation for airborne targets, binary communication systems, channel estimation, banking and financial applications, and audio effects production. Part II considers sel
Oxenløwe, Leif Katsuo
Optical time lenses have proven to be very versatile for advanced optical signal processing. Based on a controlled interplay between dispersion and phase modulation by e.g. four-wave mixing, the processing is phase-preserving, and hence useful for all types of data signals, including coherent multi-level modulation formats. This has enabled processing of phase-modulated spectrally efficient data signals, such as orthogonal frequency division multiplexed (OFDM) signals. In that case, a spectral telescope system was used, using two time lenses with different focal lengths (chirp rates), yielding a spectral ... regeneration. These operations require a broad-bandwidth nonlinear platform, and novel photonic integrated nonlinear platforms like aluminum gallium arsenide nano-waveguides used for 1.28 Tbaud optical signal processing will be described....
Huang, Yiteng; Chen, Jingdong
A timely and important book addressing a variety of acoustic signal processing problems under multiple-input multiple-output (MIMO) scenarios. It uniquely investigates these problems within a unified framework offering a novel and penetrating analysis.
Signal processing is the discipline of extracting information from collections of measurements. To be effective, the measurements must be organized and then filtered, detected, or transformed to expose the desired information. Distortions caused by uncertainty, noise, and clutter degrade the performance of practical signal processing systems. In aggressively uncertain situations, the full truth about an underlying signal cannot be known. This book develops the theory and practice of signal processing systems for these situations that extract useful, qualitative information using the mathematics of topology -- the study of spaces under continuous transformations. Since the collection of continuous transformations is large and varied, tools which are topologically-motivated are automatically insensitive to substantial distortion. The target audience comprises practitioners as well as researchers, but the book may also be beneficial for graduate students.
Vetterli, Martin; Goyal, Vivek K
This comprehensive and engaging textbook introduces the basic principles and techniques of signal processing, from the fundamental ideas of signals and systems theory to real-world applications. Students are introduced to the powerful foundations of modern signal processing, including the basic geometry of Hilbert space, the mathematics of Fourier transforms, and essentials of sampling, interpolation, approximation and compression. The authors discuss real-world issues and hurdles to using these tools, and ways of adapting them to overcome problems of finiteness and localisation, the limitations of uncertainty and computational costs. Standard engineering notation is used throughout, making mathematical examples easy for students to follow, understand and apply. It includes over 150 homework problems and over 180 worked examples, specifically designed to test and expand students' understanding of the fundamentals of signal processing, and is accompanied by extensive online materials designed to aid learning, ...
Cerutti, Sergio; Baselli, Giuseppe; Bianchi, Anna; Caiani, Enrico; Contini, Davide; Cubeddu, Rinaldo; Dercole, Fabio; Rienzo, Luca; Liberati, Diego; Mainardi, Luca; Ravazzani, Paolo; Rinaldi, Sergio; Signorini, Maria; Torricelli, Alessandro
Generally, physiological modeling and biomedical signal processing constitute two important paradigms of biomedical engineering (BME): their fundamental concepts are taught starting from undergraduate studies and are dealt with more completely in the last years of graduate curricula, as well as in Ph.D. courses. Traditionally, these two cultural aspects were separated, with the first more oriented to physiological issues and how to model them, and the second more dedicated to the development of processing tools or algorithms to enhance useful information from clinical data. A practical consequence was that those who did models did not do signal processing, and vice versa. In recent years, however, the need for closer integration between signal processing and modeling of the relevant biological systems has emerged very clearly. This is true not only for training purposes (i.e., to properly prepare the new professional members of BME) but also for the development of newly conceived research projects in which the integration between biomedical signal and image processing (BSIP) and modeling plays a crucial role. To give simple examples, topics such as brain–computer interfaces, neuroengineering, nonlinear dynamical analysis of the cardiovascular (CV) system, integration of sensory-motor characteristics aimed at the building of advanced prostheses and rehabilitation tools, and wearable devices for vital sign monitoring require an intelligent fusion of modeling and signal processing competencies that are certainly peculiar to our discipline of BME.
Genomic signal processing (GSP) can be defined as the analysis, processing, and use of genomic signals to gain biological knowledge, and the translation of that knowledge into systems-based applications that can be used to diagnose and treat genetic diseases. Situated at the crossroads of engineering, biology, mathematics, statistics, and computer science, GSP requires the development of both nonlinear dynamical models that adequately represent genomic regulation, and diagnostic and therapeutic tools based on these models. This book facilitates these developments by providing rigorous mathema
Since their appearance in mid-1980s, wavelets and, more generally, multiscale methods have become powerful tools in mathematical analysis and in applications to numerical analysis and signal processing. This book is based on "Ondelettes et Traitement Numerique du Signal" by Albert Cohen. It has been translated from French by Robert D. Ryan and extensively updated by both Cohen and Ryan. It studies the existing relations between filter banks and wavelet decompositions and shows how these relations can be exploited in the context of digital signal processing. Throughout, the book concentrates on the fundamentals. It begins with a chapter on the concept of multiresolution analysis, which contains complete proofs of the basic results. The description of filter banks that are related to wavelet bases is elaborated in both the orthogonal case (Chapter 2), and in the biorthogonal case (Chapter 4). The regularity of wavelets, how this is related to the properties of the filters and the importance of regularity for t...
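The relation between filter banks and wavelet decompositions that the book elaborates can be illustrated in the simplest orthogonal case, the Haar pair. This is a minimal sketch for illustration only, not material from the book:

```python
import numpy as np

h = np.array([1.0, 1.0]) / np.sqrt(2)   # lowpass (scaling) filter
g = np.array([1.0, -1.0]) / np.sqrt(2)  # highpass (wavelet) filter

def haar_step(x):
    """One level of the analysis filter bank: filter and downsample."""
    x = np.asarray(x, dtype=float)
    approx = h[0] * x[0::2] + h[1] * x[1::2]  # scaling coefficients
    detail = g[0] * x[0::2] + g[1] * x[1::2]  # wavelet coefficients
    return approx, detail

x = np.array([4.0, 6.0, 10.0, 12.0])
a, d = haar_step(x)

# The synthesis bank inverts the step exactly (perfect reconstruction).
rec = np.empty_like(x)
rec[0::2] = h[0] * a + g[0] * d
rec[1::2] = h[1] * a + g[1] * d
print(np.allclose(rec, x))
```

Iterating `haar_step` on the approximation coefficients yields the multiresolution (wavelet) decomposition; orthogonality of the filter pair is what makes the reconstruction exact.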
Rojo-Alvarez, José Luis; Muñoz-Marí, Jordi; Camps-Valls, Gustavo
A realistic and comprehensive review of joint approaches to machine learning and signal processing algorithms, with application to communications, multimedia, and biomedical engineering systems Digital Signal Processing with Kernel Methods reviews the milestones in the mixing of classical digital signal processing models and advanced kernel machines statistical learning tools. It explains the fundamental concepts from both fields of machine learning and signal processing so that readers can quickly get up to speed in order to begin developing the concepts and application software in their own research. Digital Signal Processing with Kernel Methods provides a comprehensive overview of kernel methods in signal processing, without restriction to any application field. It also offers example applications and detailed benchmarking experiments with real and synthetic datasets throughout. Readers can find further worked examples with Matlab source code on a website developed by the authors. * Presents the necess...
Taking a novel, less classical approach to the subject, the authors have written this book with the conviction that signal processing should be fun. Their treatment is less focused on the mathematics and more on the conceptual aspects, allowing students to think about the subject at a higher conceptual level, thus building the foundations for more advanced topics and helping students solve real-world problems. The last chapter pulls together the individual topics into an in-depth look at the development of an end-to-end communication system. Richly illustrated with examples and exercises in ea
Abbas, Abbas K
The auscultation method is an important diagnostic indicator for hemodynamic anomalies, and heart sound classification and analysis play an important role in auscultative diagnosis. The term phonocardiography refers to the tracing technique for heart sounds and the recording of cardiac acoustic vibrations by means of a microphone transducer. Understanding the nature and source of this signal is therefore important for developing a competent tool for further analysis and processing, in order to enhance and optimize the cardiac clinical diagnostic approach. This book gives the
Sahr, John D.; Mir, Hasan; Morabito, Andrew; Grossman, Matthew
Our role in this project was to participate in the design of the signal processing suite to analyze plasma density measurements on board a small constellation (3 or 4) of satellites in Low Earth Orbit. As we were new to spacecraft experiments, one of the challenges was simply to gain an understanding of the quantity of data that would flow from the satellites, and possibly to interact with the design teams in generating optimal sampling patterns. For example, as the fleet of satellites was intended to fly through the same volume of space (displaced slightly in time and space), the bulk plasma structure should be common among the spacecraft; an optimal, limited-bandwidth data downlink would therefore take advantage of this commonality. Also, motivated by techniques in ionospheric radar, we hoped to investigate the possibility of employing aperiodic sampling in order to gain access to a wider spatial spectrum without suffering aliasing in k-space.
Wallner, Eva-Sophie; López-Salmerón, Vadir; Greb, Thomas
In this review, we compare knowledge about the recently discovered strigolactone signaling pathway with the well-established gibberellin signaling pathway to identify gaps in knowledge and putative research directions in strigolactone biology. Communication between and inside cells is integral to the vitality of living organisms. Hormonal signaling cascades form a large part of this communication, and an understanding of both their complexity and interactive nature is only beginning to emerge. In plants, the strigolactone (SL) signaling pathway is the most recent addition to the classically acting group of hormones and, although fundamental insights have been made, knowledge about the nature and impact of SL signaling is still cursory. This narrow understanding is in spite of the fact that SLs influence a specific spectrum of processes, which includes shoot branching and root system architecture in response, partly, to environmental stimuli. This makes these hormones ideal tools for understanding the coordination of plant growth processes, mechanisms of long-distance communication, and developmental plasticity. Here, we summarize current knowledge about SL signaling and employ the well-characterized gibberellin (GA) signaling pathway as a scaffold to highlight emerging features as well as gaps in our knowledge in this context. GA signaling is particularly suitable for this comparison because both signaling cascades share key features of hormone perception and of immediate downstream events. Therefore, our comparative view demonstrates the possible level of complexity and regulatory interfaces of SL signaling.
INTRODUCTION TO DIGITAL SIGNAL AND IMAGE PROCESSING. Signals and Biomedical Signal Processing: Introduction and Overview; What is a "Signal"?; Analog, Discrete, and Digital Signals; Processing and Transformation of Signals; Signal Processing for Feature Extraction; Some Characteristics of Digital Images; Summary; Problems. Fourier Transform: Introduction and Overview; One-Dimensional Continuous Fourier Transform; Sampling and Nyquist Rate; One-Dimensional Discrete Fourier Transform; Two-Dimensional Discrete Fourier Transform; Filter Design; Summary; Problems. Image Filtering, Enhancement, and Restoration: Introduction and Overview
Vatsa, Mayank; Majumdar, Angshul; Kumar, Ajay
This book comprises chapters on key problems in the machine learning and signal processing arenas. The contents of the book are the result of a 2014 Workshop on Machine Intelligence and Signal Processing held at the Indraprastha Institute of Information Technology. Traditionally, signal processing and machine learning were considered separate areas of research; in recent times, however, the two communities have been getting closer. In a very abstract fashion, signal processing is the study of operator design: the contribution of signal processing has been to devise operators for restoration, compression, etc., while applied mathematicians were more interested in operator analysis. Nowadays signal processing research is gravitating towards operator learning – instead of designing operators based on heuristics (for example, wavelets), the trend is to learn these operators (for example, dictionary learning). Thus, the gap between signal processing and machine learning is fast closing. The 2014 Workshop on Machine Intel...
Schilling, Robert L
Focus on the development, implementation, and application of modern DSP techniques with DIGITAL SIGNAL PROCESSING USING MATLAB(R), 3E. Written in an engaging, informal style, this edition immediately captures your attention and encourages you to explore each critical topic. Every chapter starts with a motivational section that highlights practical examples and challenges that you can solve using techniques covered in the chapter. Each chapter concludes with a detailed case study example, a chapter summary with learning outcomes, and practical homework problems cross-referenced to specific chapter sections for your convenience. DSP Companion software accompanies each book to enable further investigation. The DSP Companion software operates with MATLAB(R) and provides intriguing demonstrations as well as interactive explorations of analysis and design concepts.
This is a very new concept for learning signal processing, not only from the physically based scientific fundamentals but also from a didactic perspective based on modern results of brain research. The textbook, together with the DVD, forms a learning system that provides investigative studies and enables the reader to interactively visualize even complex processes. The unique didactic concept is built on visualizing signals and processes on the one hand, and on graphical programming of signal processing systems on the other. The concept has been designed especially for microelectronics, computer technology, and communication. The book allows the reader to develop, modify, and optimize useful applications using DasyLab - a professional and globally supported software package for metrology and control engineering. With the 3rd edition, the software is also suitable for 64-bit systems running Windows 7. Real signals can be acquired, processed, and played on the sound card of your computer. The book provides more than 200 pre-pr...
Kay, Steven M
A unified presentation of parameter estimation for those involved in the design and implementation of statistical signal processing algorithms. Covers important approaches to obtaining an optimal estimator and analyzing its performance, and includes numerous examples as well as applications to real-world problems. MARKETS: For practicing engineers and scientists who design and analyze signal processing systems, i.e., who extract information from noisy signals — radar engineers, sonar engineers, geophysicists, oceanographers, biomedical engineers, communications engineers, economists, statisticians, physicists, etc.
Rechester, A.B.; White, R.B.
Complex dynamic processes exhibit many complicated patterns of evolution. How can all these patterns be recognized using only output (observational, experimental) data, without prior knowledge of the equations of motion? A powerful method for doing this is based on symbolic dynamics: (1) present the output data in symbolic form (a trial language); (2) construct topological and metric entropies; (3) develop algorithms for computer optimization of the entropies; (4) by maximizing the entropies, find the most appropriate symbolic language for the purpose of pattern recognition; (5) test the method using a variety of dynamical models from nonlinear science. The authors are in the process of applying this method to the analysis of MHD fluctuations in tokamaks.
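Steps (1)-(3) of the procedure can be sketched in a few lines. The binary partition and word length below are illustrative assumptions, not the authors' choices:

```python
import numpy as np
from collections import Counter

def symbolize(x, threshold=None):
    """Step (1): map a real-valued series to a binary symbol sequence
    via a two-cell partition (the 'trial language' to be optimized)."""
    t = np.median(x) if threshold is None else threshold
    return (np.asarray(x) > t).astype(int)

def block_entropies(symbols, n):
    """Steps (2)-(3): estimate topological and metric entropy rates
    from the distribution of words of length n."""
    words = [tuple(symbols[i:i + n]) for i in range(len(symbols) - n + 1)]
    counts = Counter(words)
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    h_top = np.log(len(counts)) / n      # log of observed word count
    h_met = -(p * np.log(p)).sum() / n   # Shannon entropy of word stats
    return h_top, h_met

x = np.sin(0.3 * np.arange(2000)) + 0.1 * np.random.default_rng(1).standard_normal(2000)
s = symbolize(x)
h_top, h_met = block_entropies(s, n=4)
print(h_top, h_met)
```

Steps (4)-(5) would then vary the partition (and word length) to maximize these entropies, selecting the symbolic language that best resolves the dynamics.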
Alberto, Diego; Musa, L
It is well known that many of the scientific and technological discoveries of the twenty-first century will depend on the capability of processing and understanding a huge quantity of data. With the advent of the digital era, a fully digital and automated treatment can be designed and performed. From data mining to data compression, from signal elaboration to noise reduction, processing is essential to manage and enhance features of interest after every data acquisition (DAQ) session. In the near future, science will move towards interdisciplinary research. This work gives examples of the application of signal processing to different fields of physics, from nuclear particle detectors to biomedical examinations. In Chapter 1, a brief description of the collaborations that allowed this thesis is given, together with a list of the publications co-produced by the author in these three years. The most important notations, definitions, and acronyms used in the work are also provided. In Chapter 2, the last r...
Naidu, Prabhakar S
Chapter One: An Overview of Wavefields. 1.1 Types of Wavefields and the Governing Equations; 1.2 Wavefield in Open Space; 1.3 Wavefield in Bounded Space; 1.4 Stochastic Wavefield; 1.5 Multipath Propagation; 1.6 Propagation through a Random Medium; 1.7 Exercises.
Chapter Two: Sensor Array Systems. 2.1 Uniform Linear Array (ULA); 2.2 Planar Array; 2.3 Distributed Sensor Array; 2.4 Broadband Sensor Array; 2.5 Source and Sensor Arrays; 2.6 Multi-component Sensor Array; 2.7 Exercises.
Chapter Three: Frequency Wavenumber Processing. 3.1 Digital Filters in the w-k Domain; 3.2 Mapping of 1D into 2D Filters; 3.3 Multichannel Wiener Filters; 3.4 Wiener Filters for ULA and UCA; 3.5 Predictive Noise Cancellation; 3.6 Exercises.
Chapter Four: Source Localization: Frequency Wavenumber Spectrum. 4.1 Frequency Wavenumber Spectrum; 4.2 Beamformation; 4.3 Capon's w-k Spectrum; 4.4 Maximum Entropy w-k Spectrum; 4.5 Doppler-Azimuth Processing; 4.6 Exercises.
Chapter Five: Source Localization: Subspace Methods. 5.1 Subspace Methods (Narrowband); 5.2 Subspace Methods (B...
This book covers the fundamental concepts in signal processing, illustrated with Python code and made available via IPython Notebooks, which are live, interactive, browser-based documents that allow one to change parameters, redraw plots, and tinker with the ideas presented in the text. Everything in the text is computable in this format and thereby invites readers to "experiment and learn" as they read. The book focuses on the core, fundamental principles of signal processing. The code corresponding to this book uses the core functionality of the scientific Python toolchain that should remai
Nuclear engineering has matured during the last decade. In research and design, control, supervision, maintenance, and production, mathematical models and theories are used extensively, and in all such applications signal processing is embedded in the process. Artificial Neural Networks (ANNs), because of their nonlinear, adaptive nature, are well suited to applications where the classical assumptions of linearity and second-order Gaussian noise statistics cannot be made. ANNs can be treated as nonparametric techniques that can model an underlying process from example data, and they can adapt their model parameters to statistical changes with time. Algorithms in the framework of neural networks in signal processing have found new application potential in the field of nuclear engineering. This paper reviews the fundamentals of neural networks in signal processing and their applications in tasks such as recognition/identification and control. The topics covered include dynamic modeling, model-based ANNs, statistical learning, eigenstructure-based processing, and generalization structures. (orig.)
Frankenstein, B.; Froehlich, K.-J.; Hentschel, D.; Reppe, G.
Acoustic monitoring of technological processes requires methods that eliminate noise as much as possible. Sensor-near signal evaluation can contribute substantially, and frequently there is a further need to integrate the measuring technique into the monitored structure. The solution described contains components for analog preprocessing of acoustic signals, their digitization, algorithms for data reduction, and digital communication. The core component is a digital signal processor (DSP). Digital signal processors perform the algorithms necessary for filtering, down-sampling, FFT computation, and correlation of spectral components particularly effectively. A compact, sensor-near signal processing structure was realized. It meets the Match-X standard, which was specified by the German Association for Mechanical and Plant Engineering (VDMA) for the development of micro-technical modules that can be combined into application-specific systems. The solution is based on Al2O3 ceramic components comprising different signal processing modules such as an ADC, as well as memory and power supply. An arbitrary waveform generator has been developed and combined with a power amplifier for piezoelectric transducers in a special module. A further module interfaces to these transducers; it contains a multi-channel preamplifier, high-pass filters for analog signal processing, and an ADC driver. A Bluetooth communication chip for wireless data transmission and a DiscOnChip module are under construction. As a first application, the combustion behavior of safety-relevant contacts is monitored. A special waveform up to 5 MHz is produced and sent to the monitored object. The resulting signal is evaluated with special algorithms that extract significant parameters of the signal, which are transmitted via CAN bus.
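The sensor-near processing chain described above (filtering, down-sampling, FFT-based spectral analysis) can be sketched in NumPy. The sample rate, tone frequency, and filter here are illustrative assumptions, not the module's actual configuration:

```python
import numpy as np

fs = 10_000_000                       # assumed 10 MHz sampling rate
n = 4096
t = np.arange(n) / fs
# Synthetic sensor signal: a 1 MHz tone buried in noise.
x = np.sin(2 * np.pi * 1.0e6 * t) + 0.1 * np.random.default_rng(0).standard_normal(n)

# Simple moving-average lowpass as an anti-alias filter before decimation.
taps = np.ones(8) / 8
y = np.convolve(x, taps, mode="same")

y2 = y[::2]                           # down-sample by 2 -> fs/2
win = np.hanning(y2.size)
spectrum = np.abs(np.fft.rfft(y2 * win))
freqs = np.fft.rfftfreq(y2.size, d=2 / fs)
peak_hz = freqs[np.argmax(spectrum)]  # spectral component of interest
print(round(peak_hz))                 # near the 1 MHz tone
```

In the module described, these steps run on the DSP close to the sensor, so only the few extracted parameters (such as the peak location) need to travel over the CAN bus.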
Jayaweera, Sudharman K
This book covers power electronics, in depth, by presenting the basic principles and application details, and it can be used both as a textbook and reference book. Introduces the specific type of CR that has gained the most research attention in recent years: the CR for Dynamic Spectrum Access (DSA). Provides signal processing solutions to each task by relating the tasks to materials covered in Part II. Specialized chapters then discuss specific signal processing algorithms required for DSA and DSS cognitive radios
PSpice for Digital Signal Processing is the last in a series of five books using Cadence OrCAD PSpice version 10.5 and introduces a very novel approach to learning digital signal processing (DSP). DSP is traditionally taught using Matlab/Simulink software, but this approach has some inherent weaknesses for students, particularly at the introductory level. The 'plug in variables and play' nature of these software packages can lure the student into thinking they possess an understanding they don't actually have, because these systems produce results quickly without revealing what is going on. However, it must be
Written by leaders in the field, Signal Processing for Remote Sensing explores the data acquisition segment of remote sensing. Each chapter presents a major research result or the most up-to-date development of a topic. The book includes a chapter by Dr. Norden Huang, inventor of the Hilbert-Huang transform, who, along with Dr. Steven Long, discusses the application of the transform to remote sensing problems. It also contains a chapter by Dr. Enders A. Robinson, who has made major contributions to seismic signal processing for over half a century, on the basic problem of constructing seism
NDT begins to adapt and use the most recent developments of digital signal and image processing. We briefly sum up the main characteristics of NDT situations (particularly noise and inverse problem formulation) and comment on techniques already used or just emerging (SAFT, split spectrum, adaptive learning network, noise reference filtering, stochastic models, neural networks). This survey is focused on ultrasonics, eddy currents and X-ray radiography. The final objective of end users (availability of automatic diagnosis systems) cannot be achieved only by signal processing algorithms. A close cooperation with other techniques such as artificial intelligence has therefore to be implemented. (author). 20 refs
Bhattacharyya, Shuvra S; Leupers, Rainer; Takala, Jarmo
The Handbook is organized in four parts. The first part motivates representative applications that drive and apply state-of-the art methods for design and implementation of signal processing systems; the second part discusses architectures for implementing these applications; the third part focuses on compilers and simulation tools; and the fourth part describes models of computation and their associated design tools and methodologies.
Huxtable, Barton D.; Jackson, Christopher R.; Skaron, Steve A.
Synthetic aperture radar (SAR) is uniquely suited to help solve the Search and Rescue problem since it can be utilized either day or night and through both dense fog and thick cloud cover. Other papers in this session, and in this session in 1997, describe the various SAR image processing algorithms that are being developed and evaluated within the Search and Rescue Program. All of these approaches to using SAR data require substantial amounts of digital signal processing: for the SAR image formation, and possibly for the subsequent image processing. In recognition of the demanding processing that will be required for an operational Search and Rescue Data Processing System (SARDPS), NASA/Goddard Space Flight Center and NASA/Stennis Space Center are conducting a technology demonstration utilizing SHARC multi-chip modules from Boeing to perform SAR image formation processing.
Ell, Todd A; Sangwine, Stephen J
Based on updates to signal and image processing technology made in the last two decades, this text examines the most recent research results pertaining to Quaternion Fourier Transforms. QFT is a central component of processing color images and complex valued signals. The book's attention to mathematical concepts, imaging applications, and Matlab compatibility render it an irreplaceable resource for students, scientists, researchers, and engineers.
This thesis introduces a novel approach to programmable and low power platform design for audio signal processing, in particular hearing aids. The proposed programmable platform is a heterogeneous multiprocessor architecture consisting of small and simple instruction set processors called mini-cores as well as standard DSP/CPU-cores that communicate using message passing. The work has been based on a study of the algorithm suite covering the application domain. The observation of dominant tasks for certain algorithms (FIR, IIR, correlation, etc.) that require custom computational units and special data addressing capabilities led to the design of low power mini-cores. The algorithm suite also consisted of less demanding and/or irregular algorithms (LMS, compression) that required subsample rate signal processing, justifying the use of a DSP/CPU-core. The thesis also contributes to the recent...
A transparent signal-processing payload architecture applicable to mobile communication satellites is introduced, and its features and implementation issues are discussed. In its basic form it is characterized by the formation of a large number of narrowband beams directed at the individual users on ground, and is demonstrated to offer improved transmit power efficiency, frequency-reuse capability and traffic-routing flexibility. The processor implementation is envisaged to make extensive use of digital processing functions and ASIC technology combined with advanced SAW techniques. In addition to its inherent attractive features, this architecture provides many of the benefits of full onboard regeneration and processing while preserving most of the flexibility of conventional analog transponders. Simplified derivatives of the basic configuration that offer reduced processing complexity while preserving the essential advantages gained are also presented. Although initially conceived for FDMA/SCPC-type traffic, the concept can also be adapted to other transmission formats.
Padgett, Wayne T
This book is intended to fill the gap between the "ideal precision" digital signal processing (DSP) that is widely taught and the limited-precision implementation skills that are commonly required in fixed-point processors and field programmable gate arrays (FPGAs). These skills are often neglected at the university level, particularly for undergraduates. We have attempted to create a resource both for a DSP elective course and for the practicing engineer with a need to understand fixed-point implementation. Although we assume a background in DSP, Chapter 2 contains a review of basic theory
Oxenløwe, Leif Katsuo; Galili, Michael; Mulvad, Hans Christian Hansen
This paper describes the use of optical time lenses for optical signal processing of advanced optical data signals. Examples given include 1.28 Tbaud Nyquist channel serial-to-parallel conversion and spectral magnification of OFDM signals.
Vaseghi, Saeed V
Digital signal processing plays a central role in the development of modern communication and information processing systems. The theory and application of signal processing is concerned with the identification, modelling and utilisation of patterns and structures in a signal process. The observation signals are often distorted, incomplete and noisy and therefore noise reduction, the removal of channel distortion, and replacement of lost samples are important parts of a signal processing system. The fourth edition of Advanced Digital Signal Processing and Noise Reduction updates an
Anderson, James A.; Korrapati, Raghu; Swain, Nikunja K.
The area of test and measurement is changing rapidly because of recent developments in software and hardware. Test and measurement systems are increasingly becoming PC based. Most of these PC-based systems use a graphical programming language to design test and measurement modules called virtual instruments (VIs). These VIs provide visual representations of data or models and make understanding of abstract concepts and algorithms easier, allowing users to express their ideas in a concise manner. One such virtual instrument package is LabVIEW from National Instruments Corporation of Austin, Texas. This software package is one of the first graphical programming products and is currently used in a number of academic institutions, industries, the Department of Defense, the Department of Energy, and the National Aeronautics and Space Administration for various test, measurement, and control applications. LabVIEW has an extensive built-in VI library that can be used to design and develop solutions for different applications. Besides using the built-in VI modules in LabVIEW, the user can design new VI modules easily. This paper discusses the use of LabVIEW to design and develop digital signal processing VI modules such as Fourier analysis and windowing. Instructors can use these modules to teach some of the signal processing concepts effectively.
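The Fourier analysis and windowing concepts that such VI modules implement can also be illustrated in text form. The sketch below is not LabVIEW code; it is a minimal Python analogue with a made-up sampling rate and test tone:

```python
import numpy as np

def windowed_spectrum(x, fs):
    """Magnitude spectrum after a Hann window (reduces spectral leakage)."""
    w = np.hanning(len(x))
    return np.fft.rfftfreq(len(x), d=1.0 / fs), np.abs(np.fft.rfft(x * w))

fs = 1000.0
t = np.arange(1000) / fs
x = np.sin(2 * np.pi * 123.0 * t)          # 123 Hz test tone
freqs, mag = windowed_spectrum(x, fs)
peak_hz = freqs[np.argmax(mag)]
print(peak_hz)  # 123.0
```

With one second of data the bin spacing is 1 Hz, so the 123 Hz tone lands exactly on a bin; the window matters once tones fall between bins.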
This book covers basic practice of CEM Tool; experiments and practice with discrete-time signals and systems; experiments and practice with discrete-time signal sampling; practice of frequency analysis; experiments in digital filter design; applications of digital signal processing; a voice-related project; basic principles of signal processing; basic image signal processing techniques; applications of image signal processing techniques to biology, astronomy and robot soccer; video signal control; and an image-related project. An introduction to CEM Linker I/O is given at the end.
This book is an accessible guide to adaptive signal processing methods that equips the reader with advanced theoretical and practical tools for the study and development of circuit structures and provides robust algorithms relevant to a wide variety of application scenarios. Examples include multimodal and multimedia communications, the biological and biomedical fields, economic models, environmental sciences, acoustics, telecommunications, remote sensing, monitoring, and, in general, the modeling and prediction of complex physical phenomena. The reader will learn not only how to design and implement the algorithms but also how to evaluate their performance for specific applications utilizing the tools provided. While using a simple mathematical language, the employed approach is very rigorous. The text will be of value both for research purposes and for courses of study.
Some basic concepts of political decision making - a process within a given system - are described and analysed starting from false and going on to more adequate concepts. A practical example (breeder technology) is used to show how the author's arguments can be applied to the analysis of practical problems in social reality. (DG) [de
Purpose: To carry out signal processing work in a nuclear power plant automatically. Constitution: The neutron flux signal from the reactor instruments is input into a reactivity meter and processed in accordance with the reactor characteristic equations and, as a result, output as a reactivity signal ρ. The signal ρ, as well as the neutron flux signal and temperature signal, are input together to a pen recorder and recorded on a record chart. These signals are also input into a signal processor, together with a range-switching signal from the reactivity meter and a control-rod-position signal n from another instrument. In the signal processor, differentiation and integration values of the control element, such as Δρ/(n₁ − n₂) and ΣΔρ, are computed automatically. The results are indicated on a graphic display. (J.P.N.)
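The differentiation and integration quantities Δρ/(n₁ − n₂) and ΣΔρ can be illustrated numerically. The recorded values below are purely hypothetical sample data, not plant measurements:

```python
# hypothetical recorded values of the reactivity signal rho and the
# control-rod-position signal n (units arbitrary)
rho = [0.00, 0.05, 0.12, 0.20]
n = [10.0, 12.0, 15.0, 19.0]

# differential values, Delta-rho / (n1 - n2), for each recorded step
diff_worth = [(rho[i + 1] - rho[i]) / (n[i + 1] - n[i])
              for i in range(len(rho) - 1)]
# integrated value, Sigma Delta-rho (telescopes to rho[-1] - rho[0])
total_rho = sum(rho[i + 1] - rho[i] for i in range(len(rho) - 1))
print([round(d, 4) for d in diff_worth], round(total_rho, 2))
# [0.025, 0.0233, 0.02] 0.2
```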
Garcia, Thomas R.
A digital communications signal is a sinusoidal waveform that is modified by a binary (digital) information signal. The sinusoidal waveform is called the carrier. The carrier may be modified in amplitude, frequency, phase, or a combination of these. In this project a binary phase shift keyed (BPSK) signal is the communication signal. In a BPSK signal the phase of the carrier is set to one of two states, 180 degrees apart, by a binary (i.e., 1 or 0) information signal. A digital signal is a sampled version of a "real world" time-continuous signal. The digital signal is generated by sampling the continuous signal at discrete points in time. The rate at which the signal is sampled is called the sampling rate (f(s)). The device that performs this operation is called an analog-to-digital (A/D) converter or a digitizer. The digital signal is composed of the sequence of individual values of the sampled BPSK signal. Digital signal processing (DSP) is the modification of the digital signal by mathematical operations. A device that performs this processing is called a digital signal processor. After processing, the digital signal may then be converted back to an analog signal using a digital-to-analog (D/A) converter. The goal of this project is to develop a system that will recover the digital information from a BPSK signal using DSP techniques. The project is broken down into the following steps: (1) Development of the algorithms required to demodulate the BPSK signal; (2) Simulation of the system; and (3) Implementation of a BPSK receiver using digital signal processing hardware.
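As a sketch of step (2), a simulated BPSK modulator and coherent demodulator might look as follows. All rates are assumed for illustration and are not taken from the project:

```python
import numpy as np

fs, fc, baud = 8000, 1000, 250            # assumed sample, carrier, symbol rates
sps = fs // baud                          # samples per symbol
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])

# modulator: '1' -> carrier phase 0, '0' -> carrier phase 180 degrees
symbols = np.repeat(2 * bits - 1, sps)    # +/-1 baseband levels
t = np.arange(symbols.size) / fs
tx = symbols * np.cos(2 * np.pi * fc * t) # BPSK waveform

# coherent demodulator: mix with a local carrier replica, then
# integrate (sum) over each symbol period and slice the sign
mixed = tx * np.cos(2 * np.pi * fc * t)
decisions = mixed.reshape(-1, sps).sum(axis=1)
rx_bits = (decisions > 0).astype(int)
print(rx_bits.tolist())  # [1, 0, 1, 1, 0, 0, 1, 0]
```

A real receiver would also need carrier and symbol-timing recovery; this sketch assumes both are perfect.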
Mahafza, Bassem R
Offering radar-related software for the analysis and design of radar waveform and signal processing, this book provides comprehensive coverage of radar signals and signal processing techniques and algorithms. It contains numerous graphical plots, common radar-related functions, table format outputs, and end-of-chapter problems. The complete set of MATLAB® functions and routines are available for download online.
Christensen, Lars P.B.
This thesis is concerned with signal processing for improving the performance of wireless communication receivers for well-established cellular networks such as the GSM/EDGE and WCDMA/HSPA systems. The goal of doing so is to improve the end-user experience and/or provide a higher system capacity by allowing an increased reuse of network resources. To achieve this goal, one must first understand the nature of the problem and an introduction is therefore provided. In addition, the concept of graph-based models and approximations for wireless communications is introduced along with various Belief Propagation (BP) methods for detecting the transmitted information, including the Turbo principle. Having established a framework for the research, various approximate detection schemes are discussed. First, the general form of linear detection is presented and it is argued that this may be preferable...
A readable, understandable introduction to DSP for professionals and students alike . . . This practical guide is a welcome alternative to more complicated introductions to DSP. It assumes no prior DSP experience and takes the reader step-by-step through the most basic signal processing concepts to more complex functions and devices, including sampling, filtering, frequency transforms, data compression, and even DSP design decisions. The guide provides clear, concise explanations and examples, while keeping mathematics to a minimum, to help develop a fundamental understanding of DSP. Other features include: * An extensive resource bibliography of more advanced DSP books. * An example of a typical DSP system development cycle, including tool descriptions. * A complete glossary of DSP-related acronyms Whether you're a working engineer looking into DSP for the first time or an undergraduate struggling to comprehend the subject, this engaging introduction provides easy access to the basic knowledge that will l...
Singh, Avtar; Hines, John; Somps, Chris
This is an attempt to develop a biotelemetry receiver using digital signal processing technology and techniques. The receiver developed in this work is based on recovering signals that have been encoded using either the Pulse Position Modulation (PPM) or Pulse Code Modulation (PCM) technique. A prototype has been developed using state-of-the-art digital signal processing technology. A Printed Circuit Board (PCB) is being developed based on the technique and technology described here. This board is intended to be used in the UCSF Fetal Monitoring system developed at NASA. The board is capable of handling a variety of PPM and PCM signals encoding data such as ECG, temperature, and pressure. A signal processing program has also been developed to analyze the received ECG signal to determine heart rate. This system provides a base for using digital signal processing in biotelemetry receivers and other similar applications.
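A heart-rate computation of the kind performed by such a signal processing program might, in simplified form, detect R-peak threshold crossings and average the R-R intervals. The sampling rate and waveform below are synthetic, not from the UCSF system:

```python
import numpy as np

def heart_rate_bpm(ecg, fs, threshold):
    """Estimate heart rate from rising R-peak threshold crossings (simplified)."""
    above = ecg > threshold
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1  # rising crossings
    if onsets.size < 2:
        return 0.0
    rr = np.diff(onsets) / fs               # R-R intervals in seconds
    return 60.0 / rr.mean()

fs = 250.0                                  # assumed telemetry sampling rate
beat = np.zeros(int(fs * 0.8))              # one beat every 0.8 s -> 75 bpm
beat[10] = 1.0                              # idealized R peak
ecg = np.tile(beat, 10)
print(heart_rate_bpm(ecg, fs, threshold=0.5))  # 75.0
```

Real ECG requires band-pass filtering and adaptive thresholding before the crossing detector; the fixed threshold here works only for this clean synthetic trace.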
The many requirements made on the storage, administration and handling of stationary random signal patterns and multi-channel transient signal patterns are discussed. For the field of loose part detection, the concept of a data base is presented for structure-borne noise burst patterns. Chapter 2 describes the measuring principle of structure-borne noise monitoring. Chapter 3 gives a summary of the requirements made on the automation of measured data recording; chapter 4 compiles requirements made on the data base. Chapter 5 describes the concept developed for a burst pattern data base; chapter 6 shows the organization of the data base, and chapter 7 reflects the present state of application. (orig./HP) [de
Dandavate, A. L.; Sarje, S. H.
In this paper, an attempt has been made to develop concepts for patient transferring tasks. The concept generation process for a patient transferring device (PTD), which includes interviewing customers, interpreting their needs, organizing the needs into a hierarchy, establishing the relative importance of the needs, establishing target specifications, and conceptualization, is discussed. The authors interviewed customers at the Mobilink NGO and St. John's Hospital, Bangalore, in order to identify the needs and wants for the PTD. The AHP technique was used for establishing and evaluating the relative importance of needs, and based on the importance of the customer needs, concepts were developed through brainstorming.
Howard, Roy M
A fresh introduction to random processes utilizing signal theory By incorporating a signal theory basis, A Signal Theoretic Introduction to Random Processes presents a unique introduction to random processes with an emphasis on the important random phenomena encountered in the electronic and communications engineering field. The strong mathematical and signal theory basis provides clarity and precision in the statement of results. The book also features: A coherent account of the mathematical fundamentals and signal theory that underpin the presented material Unique, in-depth coverage of
Kalfod Nielsen, H.
A circuit for processing signals from a detector occurring at random time intervals has a pulse-shaper, a delay and a processing circuit. The signal path is divided over part of its extent into parallel part-signal paths, each including an electronic switch and signal-modifying circuits; a discriminator to detect a signal in the path and a control circuit for the switches, controlled by the discriminator, are connected to the path ahead of the delay. The parallel paths are identical and the switch in each is ahead of the modifying circuits. When the discriminator detects a signal in the path, the switch in one part path is made to conduct for at least as long as the duration of the signal as detected by the discriminator. The switches are preferably made to conduct cyclically. The circuit processes an increased number of signals, with the quality of results not dependent on pulse rate and the risk of errors substantially reduced. (au)
AFRL-RY-WP-TR-2017-0172, Signal Processing Utilizing Radio Frequency Photonics, Preetpaul S. Devgan, RF/EO Subsystems Branch, Aerospace Components... RF photonics can be used for multiple signal processing applications. Down conversion, oscillators, analog-to-digital conversion and waveform generation are...
We describe recent progress in integrated microwave photonics in wideband signal processing applications with a focus on the key signal processing building blocks, the realization of monolithic integration, and cascaded photonic signal processing for analog radio frequency (RF) photonic links. New developments in integration-based microwave photonic techniques that have high potential to be used in a variety of sensing applications for enhanced resolution and speed are also presented.
Chen, Ya-Chin; Juang, Jer-Nan
Adaptive linear predictors have been used extensively in practice in a wide variety of forms. In the main, their theoretical development is based upon the assumption of stationarity of the signals involved, particularly with respect to the second order statistics. On this basis, the well-known normal equations can be formulated. If high- order statistical stationarity is assumed, then the equivalent normal equations involve high-order signal moments. In either case, the cross moments (second or higher) are needed. This renders the adaptive prediction procedure non-blind. A novel procedure for blind adaptive prediction has been proposed and considerable implementation has been made in our contributions in the past year. The approach is based upon a suitable interpretation of blind equalization methods that satisfy the constant modulus property and offers significant deviations from the standard prediction methods. These blind adaptive algorithms are derived by formulating Lagrange equivalents from mechanisms of constrained optimization. In this report, other new update algorithms are derived from the fundamental concepts of advanced system identification to carry out the proposed blind adaptive prediction. The results of the work can be extended to a number of control-related problems, such as disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. The applications implemented are speech processing, such as coding and synthesis. Simulations are included to verify the novel modelling method.
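A minimal sketch of one such constant-modulus blind update (the classic CMA stochastic-gradient form, not necessarily the exact Lagrange-derived algorithm of the report) applied to an assumed two-tap channel is:

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.choice([-1.0, 1.0], size=20_000)          # constant-modulus source
x = np.convolve(s, [1.0, 0.4])[: s.size]          # unknown dispersive channel

ntaps, mu, R2 = 5, 0.002, 1.0                     # illustrative CMA parameters
w = np.zeros(ntaps)
w[0] = 1.0                                        # center-spike initialization
outputs = []
for k in range(ntaps, x.size):
    u = x[k - ntaps:k][::-1]                      # regressor, most recent first
    y = w @ u
    w -= mu * y * (y * y - R2) * u                # constant-modulus update
    outputs.append(y)

# dispersion E[(y^2 - R2)^2] over the last samples; the unequalized
# channel output has dispersion of about 0.67 for this channel
dispersion = np.mean((np.array(outputs[-2000:]) ** 2 - R2) ** 2)
print(round(float(dispersion), 4))                # small after adaptation
```

Note the blind property: the update uses only the equalizer output and the constant-modulus target R2, never the source symbols or their cross moments.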
This book grew out of the IEEE-EMBS Summer Schools on Biomedical Signal Processing, which have been held annually since 2002 to provide the participants state-of-the-art knowledge on emerging areas in biomedical engineering. Prominent experts in the areas of biomedical signal processing, biomedical data treatment, medicine, signal processing, system biology, and applied physiology introduce novel techniques and algorithms as well as their clinical or physiological applications. The book provides an overview of a compelling group of advanced biomedical signal processing techniques, such as mult
Hummel, Robert; Stoica, Petre; Zelnio, Edmund
Radar Signal Processing and Its Applications brings together in one place important contributions and up-to-date research results in this fast-moving area. In twelve selected chapters, it describes the latest advances in architectures, design methods, and applications of radar signal processing. The contributors to this work were selected from the leading researchers and practitioners in the field. This work, originally published as Volume 14, Numbers 1-3 of the journal, Multidimensional Systems and Signal Processing, will be valuable to anyone working or researching in the field of radar signal processing. It serves as an excellent reference, providing insight into some of the most challenging issues being examined today.
Oxenløwe, Leif Katsuo; Hu, Hao; Kjøller, Niels-Kristian
Optical signal processing may aid in reducing the number of active components in communication systems with many parallel channels, by e.g. using telescopic time lens arrangements to perform format conversion and allow for WDM regeneration.
Van Nee, D.J.R.
Abstract of CA 2109759 (A1) A method and device for processing a signal are described, wherein an estimate of a multipath-induced contribution to a demodulated navigation signal is calculated and subtracted from said demodulated navigation signal to obtain an estimated line of sight contribution to
Da Ros, Francesco
Signal processing, including wavelength conversion, optical phase conjugation (OPC), and signal regeneration... This project focuses precisely on the applications of OPAs for all-optical signal processing with a two-fold focus: on the one hand, processing the advanced modulation formats required... waveguides are investigated. The limits of parametric amplification for 16-quadrature amplitude modulation (QAM) signals are first characterized. The acquired knowledge is then applied to the design of a black-box OPC device used to provide Kerr nonlinearity compensation for a 5-channel polarization-division multiplexing (PDM) 16-QAM signal at 1.12 Tbps with significant improvements in received signal quality. Furthermore, the first demonstration of phase regeneration for binary phase-shift keying (BPSK) signals using the silicon platform is presented. The silicon-based OPA relies on a novel design where a reverse...
The optimal control and safe operation of a nuclear power plant requires reliable information concerning the state of the process. Signal validation is the detection, isolation, and characterization of faulty signals. Properly validated process signals are beneficial from the standpoint of increased plant availability and reliability of operator actions. This paper reports on a signal validation technique utilizing a process hypercube comparison (PHC) originated during this research. The hypercube is merely a multidimensional joint histogram of the process conditions. The hypercube is created off-line during a learning phase using operational plant data. In the event that a newly observed plant state does not match those in the learned hypercube, the PHC algorithm performs signal validation by progressively hypothesizing that one or more signals is in error. This assumption is then either substantiated or denied. In the case where many signals are found to be in error, a conclusion that the process conditions are abnormal is reached. The global data base contained within the hypercube provides a best estimate of the process conditions in the event a signal is deemed failed. The hypercube signal validation methodology was tested using operational data from a commercial pressurized water reactor (PWR) and the Experimental Breeder Reactor II (EBR-II). This research was part of a larger project aimed at the development of a comprehensive signal validation software system for application to nuclear power plants.
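A toy version of the learning and validation phases, with a two-dimensional "hypercube" (joint histogram) and entirely made-up process variables, might look like this:

```python
import numpy as np

rng = np.random.default_rng(1)
# learning phase: build the hypercube (a joint histogram) from normal
# operating data; the two plant variables here are purely hypothetical
train = rng.normal(loc=[300.0, 150.0], scale=[2.0, 3.0], size=(10_000, 2))
edges = [np.linspace(290, 310, 21), np.linspace(135, 165, 21)]
hist, _ = np.histogramdd(train, bins=edges)

def is_consistent(state):
    """True if the observed state lands in a populated hypercube cell."""
    idx = [int(np.searchsorted(e, v) - 1) for e, v in zip(edges, state)]
    if any(not (0 <= i < n) for i, n in zip(idx, hist.shape)):
        return False          # outside the learned operating envelope
    return bool(hist[tuple(idx)] > 0)

print(is_consistent([300.0, 150.0]))   # normal state: True
print(is_consistent([300.0, 40.0]))    # grossly faulted sensor: False
```

The actual PHC method goes further: when a state falls in an empty cell it hypothesizes single-signal failures and re-checks the remaining dimensions; this sketch only shows the histogram membership test.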
Wahl, Daniel E.; Yocky, David A.
This report describes the significant processing steps that were used to take the raw recorded digitized signals from the bistatic synthetic aperture radar (SAR) hardware built for the NCNS Bistatic SAR project to a final bistatic SAR image. In general, the process steps herein are applicable to bistatic SAR signals that include the direct-path signal and the reflected signal. The steps include preprocessing, data extraction to form a phase history, and finally, image formation. Various plots and values are shown at most steps to illustrate the processing for a bistatic COSMO-SkyMed collection gathered on June 10, 2013 on Kirtland Air Force Base, New Mexico.
Digital Signal Processing is a mathematically rigorous but accessible treatment of digital signal processing that intertwines basic theoretical techniques with hands-on laboratory instruction. Divided into three parts, the book covers various aspects of the digital signal processing (DSP) "problem." It begins with the analysis of discrete-time signals and explains sampling and the use of the discrete and fast Fourier transforms. The second part of the book, covering digital to analog and analog to digital conversion, provides a practical interlude in the mathematical content before Part II
Helms, Niels Henrik; Tellerup, Susanne
This paper introduces the Quadrant model for innovation. The model should be seen as a generative model for structuring processes in innovation with complex partnerships. The paper discusses the model and especially emphasises the need for, and the different concepts of, materiality in innovation.
Cavallaro, A.; Lagendijk, R. (Inald) L.; Erkin, Zekeriya; Erkin, Zekeriya; Kwasinski, A.; Barni, Mauro
In recent years, signal processing applications that deal with user-related data have aroused privacy concerns. For instance, face recognition and personalized recommendations rely on privacy-sensitive information that can be abused if the signal processing is executed on remote servers or in the
Cartledge, John C.; Guiomar, Fernando P.; Kschischang, Frank R.
This paper reviews digital signal processing techniques that compensate, mitigate, and exploit fiber nonlinearities in coherent optical fiber transmission systems.
A fast-digitizer data acquisition system recently installed at the neutron time-of-flight experiment nELBE, which is located at the superconducting electron accelerator ELBE of Forschungszentrum Dresden-Rossendorf, is tested with two different detector types. Preamplifier signals from a high-purity germanium detector are digitized, stored and finally processed. For a precise determination of the energy of the detected radiation, the moving-window deconvolution algorithm is used to compensate the ballistic deficit and different shaping algorithms are applied. The energy resolution is determined in an experiment with γ-rays from a ²²Na source and is compared to the energy resolution achieved with analogously processed signals. On the other hand, signals from the photomultipliers of barium fluoride and plastic scintillation detectors are digitized. These signals have risetimes of a few nanoseconds only. The moment of interaction of the radiation with the detector is determined by methods of digital signal processing. Therefore, different timing algorithms are implemented and tested with data from an experiment at nELBE. The time resolutions achieved with these algorithms are compared to each other as well as to reference values coming from analog signal processing. In addition to these experiments, some properties of the digitizing hardware are measured and a program for the analysis of stored, digitized data is developed. The analysis of the signals shows that the energy resolution achieved with the 10-bit digitizer system used here is not competitive with a 14-bit peak-sensing ADC, although the ballistic deficit can be fully corrected. However, digital methods give better results in sub-ns timing than analog signal processing. (orig.)
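As a sketch of digital timing on sampled detector pulses, a simple interpolated threshold-crossing estimate (a stand-in for, not a reproduction of, the timing algorithms tested in the experiment) can be written as:

```python
import numpy as np

def crossing_time(samples, dt, frac=0.5):
    """Interpolated time at which a pulse first crosses frac * amplitude."""
    level = frac * samples.max()
    k = int(np.argmax(samples >= level))     # first sample at or above level
    # linear interpolation between samples k-1 and k for sub-sample timing
    return (k - 1 + (level - samples[k - 1]) / (samples[k] - samples[k - 1])) * dt

dt = 1.0                                     # assumed 1 ns sampling interval
t_axis = np.arange(40.0)
pulse = np.exp(-0.5 * ((t_axis - 20.0) / 2.0) ** 2)  # synthetic detector pulse
print(round(crossing_time(pulse, dt), 3))    # near the true 17.645 ns crossing
```

Interpolating between samples is what allows a digitizer to reach timing resolution well below its sampling interval, which is the effect the record reports for sub-ns timing.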
Havelock, David; Vorländer, Michael
The Handbook of Signal Processing in Acoustics presents signal processing as it is practiced in the field of acoustics. The Handbook is organized by areas of acoustics, with recognized leaders coordinating the self-contained chapters of each section. It brings together a wide range of perspectives from over 100 authors to reveal the interdisciplinary nature of signal processing in acoustics. Success in acoustic applications often requires juggling both the acoustic and the signal processing parameters of the problem. This handbook brings the key issues from both into perspective and is complementary to other reference material on the two subjects. It is a unique resource for experts and practitioners alike to find new ideas and techniques within the diversity of signal processing in acoustics.
This report describes the software that was developed to process test waveforms that were recorded by crash test data acquisition systems. The test waveforms are generated by an electronic waveform generator developed by MGA Research Corporation unde...
Chatterjee, Amitava; Siarry, Patrick
There have been significant developments in the design and application of algorithms for both one-dimensional signal processing and multidimensional signal processing, namely image and video processing, with the recent focus changing from a step-by-step procedure of designing the algorithm first and following up with in-depth analysis and performance improvement to instead applying heuristic-based methods to solve signal-processing problems. In this book the contributing authors demonstrate both general-purpose algorithms and those aimed at solving specialized application problems, with a spec
Communicating with a machine in a natural mode such as speech brings out not only several technological challenges, but also limitations in our understanding of how people communicate so effortlessly. The key is to understand the distinction between speech processing (as is done in human communication) and speech ...
Lewis, Daniel D; Chavez, Michael; Chiu, Kwan Lun; Tan, Cheemeng
Living cells are known for their capacity for versatile signal processing, particularly the ability to respond differently to the same stimuli using biochemical networks that integrate environmental signals and reconfigure their dynamic responses. However, the complexity of natural biological networks confounds the discovery of fundamental mechanisms behind versatile signaling. Here, we study one specific aspect of reconfigurable signal processing in which a minimal biological network integrates two signals, using one to reconfigure the network's transfer function with respect to the other, producing an emergent switch between induction and repression. In contrast to known mechanisms, the new mechanism reconfigures transfer functions through genetic networks without extensive protein-protein interactions. These results provide a novel explanation for the versatility of genetic programs, and suggest a new mechanism of signal integration that may govern flexibility and plasticity of gene expression.
K. A. Zimenko
A method of electromyogram (EMG) signal processing and identification for use in rehabilitation device control is presented. The method is based on filtering out high-frequency components, which improves the signal-to-noise ratio, on wavelet analysis for signal preprocessing, and on motion-type classification by a trained artificial neural network. The achieved motion-type classification accuracy is 94%.
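A minimal sketch of such a pipeline follows. Everything here is a stand-in chosen for illustration: a crude moving-average high-pass, hand-rolled Haar wavelet energy features, synthetic signals instead of real EMG, and a nearest-centroid rule in place of the trained neural network.

```python
import numpy as np

def highpass(x, win=9):
    """Crude high-pass: subtract a moving-average baseline
    (hypothetical stand-in for the paper's filtering stage)."""
    kernel = np.ones(win) / win
    baseline = np.convolve(x, kernel, mode="same")
    return x - baseline

def haar_features(x, levels=3):
    """Energy of Haar wavelet detail coefficients per level --
    a minimal wavelet feature vector for motion-type classification."""
    feats, approx = [], np.asarray(x, float)
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        m = min(len(even), len(odd))
        detail = (even[:m] - odd[:m]) / np.sqrt(2)
        approx = (even[:m] + odd[:m]) / np.sqrt(2)
        feats.append(np.sum(detail ** 2))
    return np.array(feats)

rng = np.random.default_rng(0)
t = np.arange(256)
slow = np.sin(2 * np.pi * 4 * t / 256)    # stand-in for motion A
fast = np.sin(2 * np.pi * 40 * t / 256)   # stand-in for motion B
fa = haar_features(highpass(slow))
fb = haar_features(highpass(fast))
# classify a noisy unseen trace by nearest feature centroid
test = haar_features(highpass(fast + 0.1 * rng.standard_normal(256)))
label = "A" if np.linalg.norm(test - fa) < np.linalg.norm(test - fb) else "B"
print(label)  # "B"
```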
In recent years, pseudo random signal processing has proven to be a critical enabler of modern communication, information, security and measurement systems. The signal's pseudo random, noise-like properties make it vitally important as a tool for protecting against interference, alleviating multipath propagation and allowing the potential of sharing bandwidth with other users. Taking a practical approach to the topic, this text provides a comprehensive and systematic guide to understanding and using pseudo random signals. Covering theoretical principles, design methodologies and applications
Kumar, Amit; Rahim, B Abdul; Kumar, D Sravan
This book highlights recent findings on and analyses conducted on signals and images in the area of medicine. The experimental investigations involve a variety of signals and images and their methodologies range from very basic to sophisticated methods. The book explains how signal and image processing methods can be used to detect and forecast abnormalities in an easy-to-follow manner, offering a valuable resource for researchers, engineers, physicians and bioinformatics researchers alike.
Hall, Gregory A.; Youngquist, Robert; Mikhael, Wasfy
A method of adaptive signal processing has been proposed as the basis of a new generation of interferometric optical profilometers for measuring surfaces. The proposed profilometers would be portable, hand-held units. Sizes could be thus reduced because the adaptive-signal-processing method would make it possible to substitute lower-power coherent light sources (e.g., laser diodes) for white light sources and would eliminate the need for most of the optical components of current white-light profilometers. The adaptive-signal-processing method would make it possible to attain scanning ranges of the order of decimeters in the proposed profilometers.
Plesinger, F; Jurco, J; Halamek, J; Jurak, P
The growing technical standard of acquisition systems allows the acquisition of large records, often reaching gigabytes or more in size, as is the case with whole-day electroencephalograph (EEG) recordings, for example. Although current 64-bit software for signal processing is able to process (e.g. filter, analyze) such data, visual inspection and labeling will probably suffer from rather long latency during the rendering of large portions of recorded signals. For this reason, we have developed SignalPlant, a stand-alone application for signal inspection, labeling and processing. The main motivation was to supply investigators with a tool allowing fast and interactive work with large multichannel records produced by EEG, electrocardiograph and similar devices. The rendering latency was compared with EEGLAB and proved significantly faster when displaying an image from a large number of samples (e.g. 163 times faster for 75 × 10⁶ samples). The presented SignalPlant software is available free and does not depend on any other computation software. Furthermore, it can be extended with plugins by third parties, ensuring its adaptability to future research tasks and new data formats.
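One standard trick for rendering signals of this size with low latency is min-max decimation, sketched below. This is a generic illustration of the technique, not necessarily how SignalPlant's renderer is actually implemented: drawing one vertical line per pixel column, from the column minimum to the column maximum, is visually equivalent to plotting every sample at a fraction of the cost.

```python
import numpy as np

def minmax_decimate(x, width):
    """Reduce a long signal to `width` (min, max) pairs, one per
    pixel column of the display."""
    n = len(x) - len(x) % width          # drop the ragged tail
    cols = x[:n].reshape(width, -1)      # row i = samples of column i
    return cols.min(axis=1), cols.max(axis=1)

# scaled-down demo: 750k samples rendered into an 800-pixel-wide view
sig = np.sin(np.linspace(0, 200 * np.pi, 750_000))
lo, hi = minmax_decimate(sig, 800)
print(len(lo), len(hi))  # 800 800
```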
Chowdhury, Rubana H.; Reaz, Mamun B. I.; Ali, Mohd Alauddin Bin Mohd; Bakar, Ashrif A. A.; Chellappan, Kalaivani; Chang, Tae. G.
Electromyography (EMG) signals are becoming increasingly important in many applications, including clinical/biomedical applications, prosthesis and rehabilitation devices, human-machine interaction, and more. However, noisy EMG signals are the major hurdle to be overcome in order to achieve improved performance in these applications. Detection, processing and classification analysis of EMG is very desirable because it allows a more standardized and precise evaluation of neurophysiological, rehabilitational and assistive-technology findings. This paper reviews two prominent areas: first, pre-processing methods for eliminating possible artifacts via appropriate preparation at the time of recording EMG signals; and second, a brief explanation of the different methods for processing and classifying EMG signals. This study then compares the numerous methods of analyzing EMG signals in terms of their performance. The crux of this paper is to review the most recent developments and research studies related to the issues mentioned above. PMID:24048337
Rybin, Yu K
Electronic Devices for Analog Signal Processing is intended for engineers and postgraduates and considers electronic devices used to process analog signals in instrument making, automation, measurement, and other branches of technology. These devices perform various transformations of electrical signals: scaling, integration, taking logarithms, etc. The need for their deeper study arises, on the one hand, from the widening variety of input signal forms and the increasing accuracy and performance of such devices, and on the other hand, from the fact that new devices constantly emerge and are already widely used in practice, yet no information about them appears in books on electronics. The basic approach of presenting the material in Electronic Devices for Analog Signal Processing can be formulated as follows: study with the help of self-education. The book is divided into seven chapters, each containing theoretical material, examples of practical problems, questions and tests. The most difficult questions are marked by a diamon...
Devasahayam, Suresh R
The use of digital signal processing is ubiquitous in the field of physiology and biomedical engineering. The application of such mathematical and computational tools requires a formal or explicit understanding of physiology. Formal models and analytical techniques are interlinked in physiology as in any other field. This book takes a unitary approach to physiological systems, beginning with signal measurement and acquisition, followed by signal processing, linear systems modelling, and computer simulations. The signal processing techniques range across filtering, spectral analysis and wavelet analysis. Emphasis is placed on fundamental understanding of the concepts as well as solving numerical problems. Graphs and analogies are used extensively to supplement the mathematics. Detailed models of nerve and muscle at the cellular and systemic levels provide examples for the mathematical methods and computer simulations. Several of the models are sufficiently sophisticated to be of value in understanding real wor...
Oxenløwe, Leif Katsuo; Mulvad, Hans Christian Hansen; Hu, Hao
We describe recent demonstrations of exploiting highly nonlinear silicon waveguides for ultrafast optical signal processing. We describe wavelength conversion and serial-to-parallel conversion of 640 Gbit/s data signals and 1.28 Tbit/s demultiplexing and all-optical sampling....
Bri Rolston; Sarah Freeman
Researchers at INL with funding from the Department of Energy’s Office of Electricity Delivery and Energy Reliability (DOE-OE) evaluated a novel approach for near real-time consumption of threat intelligence. Demonstration testing in an industry environment supported the development of this new process to assist the electric sector in securing their critical networks. This report provides the reader with an understanding of the methods used during this proof of concept project. The processes and templates were further advanced with an industry partner during an onsite assessment. This report concludes with lessons learned and a roadmap for final development of these materials for use by industry.
Ranade, G.; Sudhakar, T.
Mathematical advances, together with advances in real-time signal processing techniques in recent times, have considerably improved the state of the art in bathymetry systems. These improvements have helped in developing high resolution swath...
Mendes, R Vilela
Non-commutative tomography is a technique originally developed and extensively used by Professors M A Man’ko and V I Man’ko in quantum mechanics. Because signal processing deals with operators that, in general, do not commute with time, the same technique has a natural extension to this domain. Here, a review is presented of the theory and some applications of non-commutative tomography for time series as well as some new results on signal processing on graphs. (paper)
Prime motivators in the evolution of increasingly sophisticated communication and detection systems are the needs for handling ever wider signal bandwidths and higher data processing speeds. These same needs drive the development of electronic device technology. Until recently the superconductive community has been tightly focused on digital devices for high speed computers. The purpose of this paper is to describe opportunities and challenges which exist for both analog and digital devices in a less familiar area, that of wideband signal processing. The function and purpose of analog signal-processing components, including matched filters, correlators and Fourier transformers, will be described and examples of superconductive implementations given. A canonic signal-processing system is then configured using these components in combination with analog/digital converters and digital output circuits to highlight the important issues of dynamic range, accuracy and equivalent computation rate. Superconductive circuits hold promise for processing signals of 10-GHz bandwidth. Signal processing systems, however, can be properly designed and implemented only through a synergistic combination of the talents of device physicists, circuit designers, algorithm architects and system engineers. An immediate challenge to the applied superconductivity community is to begin sharing ideas with these other researchers.
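The matched-filter component named above can be illustrated in software (a digital sketch, of course, not a superconductive implementation; the chirp template, amplitude and offset are arbitrary choices for the demo): correlating the received signal against a time-reversed copy of the known waveform compresses a wideband pulse and reveals its arrival time even under noise.

```python
import numpy as np

rng = np.random.default_rng(1)

# a chirp as the known template -- the kind of wideband waveform an
# analog correlator would be built to compress
n = 256
t = np.arange(n)
template = np.sin(2 * np.pi * (0.01 + 0.0005 * t) * t)

# received signal: template embedded at offset 300 in white noise
rx = rng.standard_normal(1024)
rx[300:300 + n] += 2.0 * template

# matched filter = cross-correlation with the template
mf = np.correlate(rx, template, mode="valid")
print(int(np.argmax(mf)))  # prints 300, the embedded offset
```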
This book is devoted to the emerging technology of noise waveform radar and its signal processing aspects. It is a new kind of radar, which uses a noise-like waveform to illuminate the target. The book includes an introduction to basic radar theory, starting from classical pulse radar, signal compression, and wave radar. The book then discusses the properties, difficulties and potential of noise radar systems, primarily for low-power and short-range civil applications. The contribution of modern signal processing techniques to making noise radar practical is emphasized, and application examples
Baller, Bruce [Fermilab
This document describes the early stage of the reconstruction chain that was developed for the ArgoNeuT and MicroBooNE experiments at Fermilab. These experiments study accelerator neutrino interactions that occur in a Liquid Argon Time Projection Chamber. Reconstructing the properties of particles produced in these interactions requires knowledge of the micro-physics processes that affect the creation and transport of ionization electrons to the readout system. A wire signal deconvolution technique was developed to convert wire signals to a standard form for hit reconstruction, to remove artifacts in the electronics chain and to remove coherent noise.
A self-contained approach to DSP techniques and applications in radar imaging. The processing of radar images, in general, consists of three major fields: Digital Signal Processing (DSP); antenna and radar operation; and algorithms used to process the radar images. This book brings together material from these different areas to allow readers to gain a thorough understanding of how radar images are processed. The book is divided into three main parts and covers: DSP principles and signal characteristics in both analog and digital domains, advanced signal sampling, and
Plešinger, Filip; Jurčo, Juraj; Halámek, Josef; Jurák, Pavel
Roč. 37, č. 7 (2016), N38-N48 ISSN 0967-3334 R&D Projects: GA ČR GAP103/11/0933; GA MŠk(CZ) LO1212; GA ČR GAP102/12/2034 Institutional support: RVO:68081731 Keywords : data visualization * software * signal processing * ECG * EEG Subject RIV: FS - Medical Facilities ; Equipment Impact factor: 2.058, year: 2016
This title provides the most important theoretical aspects of Image and Signal Processing (ISP) for both deterministic and random signals. The theory is supported by exercises and computer simulations relating to real applications. More than 200 programs and functions are provided in the MATLAB® language, with useful comments and guidance, to enable numerical experiments to be carried out, thus allowing readers to develop a deeper understanding of both the theoretical and practical aspects of this subject.
In this paper, we apply wavelet thresholding to automatically remove ground and intermittent clutter (airplane echoes) from wind profiler radar data. Using the concepts of discrete multi-resolution analysis and non-parametric estimation theory, we develop wavelet-domain thresholding rules which allow us to identify the coefficients relevant for clutter and to suppress them in order to obtain filtered reconstructions. Key words: Meteorology and atmospheric dynamics (instruments and techniques); Radio science (remote sensing; signal processing)
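The wavelet-domain thresholding idea can be sketched as follows. This is only a stand-in for the paper's rules: a hand-rolled Haar transform with the universal (VisuShrink) soft threshold, applied to synthetic white noise rather than radar clutter, but the mechanism is the same: identify small wavelet coefficients as interference and suppress them before reconstructing.

```python
import numpy as np

def haar_fwd(x):
    even, odd = x[0::2], x[1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def haar_inv(approx, detail):
    out = np.empty(2 * len(approx))
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

def wavelet_denoise(x, levels=4):
    """Soft-threshold Haar detail coefficients with the universal
    (VisuShrink) threshold, then reconstruct."""
    details, approx = [], np.asarray(x, float)
    for _ in range(levels):
        approx, d = haar_fwd(approx)
        details.append(d)
    sigma = np.median(np.abs(details[0])) / 0.6745  # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    for i, d in enumerate(details):
        details[i] = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)
    for d in reversed(details):
        approx = haar_inv(approx, d)
    return approx

rng = np.random.default_rng(2)
n = 1024
clean = np.sin(2 * np.pi * 4 * np.arange(n) / n)
noisy = clean + 0.3 * rng.standard_normal(n)
den = wavelet_denoise(noisy)
print(np.std(den - clean) < np.std(noisy - clean))  # True
```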
Popa, Cosmin Radu
Presents the most important classes of computational structures for analog signal processing, including differential or multiplier structures, squaring or square-rooting circuits, exponential or Euclidean distance structures and active resistor circuits. Introduces the original concept of the multifunctional circuit, an active structure that is able to implement, starting from the same circuit core, a multitude of continuous mathematical functions. Covers mathematical analysis, design and implementation of a multitude of function generator structures.
Kulkarni, Sanjeev R; Malioutov, Dmitry M
The modern financial industry has been required to deal with large and diverse portfolios in a variety of asset classes often with limited market data available. Financial Signal Processing and Machine Learning unifies a number of recent advances made in signal processing and machine learning for the design and management of investment portfolios and financial engineering. This book bridges the gap between these disciplines, offering the latest information on key topics including characterizing statistical dependence and correlation in high dimensions, constructing effective and robust risk measures, and their use in portfolio optimization and rebalancing. The book focuses on signal processing approaches to model return, momentum, and mean reversion, addressing theoretical and implementation aspects. It highlights the connections between portfolio theory, sparse learning and compressed sensing, sparse eigen-portfolios, robust optimization, non-Gaussian data-driven risk measures, graphical models, causal analy...
Rao, K Deergha
The book provides a comprehensive exposition of all major topics in digital signal processing (DSP). With numerous illustrative examples for easy understanding of the topics, it also includes MATLAB-based examples with codes in order to encourage the readers to become more confident of the fundamentals and to gain insights into DSP. Further, it presents real-world signal processing design problems using MATLAB and programmable DSP processors. In addition to problems that require analytical solutions, it discusses problems that require solutions using MATLAB at the end of each chapter. Divided into 13 chapters, it addresses many emerging topics, which are not typically found in advanced texts on DSP. It includes a chapter on adaptive digital filters used in the signal processing problems for faster acceptable results in the presence of changing environments and changing system requirements. Moreover, it offers an overview of wavelets, enabling readers to easily understand the basics and applications of this po...
Bradley, Robert W; Wang, Baojun
Microorganisms are able to respond effectively to diverse signals from their environment and internal metabolism owing to their inherent sophisticated information processing capacity. A central aim of synthetic biology is to control and reprogramme the signal processing pathways within living cells so as to realise repurposed, beneficial applications ranging from disease diagnosis and environmental sensing to chemical bioproduction. To date most examples of synthetic biological signal processing have been built based on digital information flow, though analogue computing is being developed to cope with more complex operations and larger sets of variables. Great progress has been made in expanding the categories of characterised biological components that can be used for cellular signal manipulation, thereby allowing synthetic biologists to more rationally programme increasingly complex behaviours into living cells. Here we present a current overview of the components and strategies that exist for designer cell signal processing and decision making, discuss how these have been implemented in prototype systems for therapeutic, environmental, and industrial biotechnological applications, and examine emerging challenges in this promising field. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
Dr. Norden Huang, of Goddard Space Flight Center, invented a set of algorithms (called the Hilbert-Huang Transform, or HHT) for analyzing nonlinear and nonstationary signals that developed into a user-friendly signal processing technology for analyzing time-varying processes. At an auction managed by Ocean Tomo Federal Services LLC, licenses of 10 U.S. patents and 1 domestic patent application related to HHT were sold to DynaDx Corporation, of Mountain View, California. DynaDx is now using the licensed NASA technology for medical diagnosis and prediction of brain blood flow-related problems, such as stroke, dementia, and traumatic brain injury.
Karl, John H
An Introduction to Digital Signal Processing is written for those who need to understand and use digital signal processing and yet do not wish to wade through a multi-semester course sequence. Using only calculus-level mathematics, this book progresses rapidly through the fundamentals to advanced topics such as iterative least squares design of IIR filters, inverse filters, power spectral estimation, and multidimensional applications--all in one concise volume.This book emphasizes both the fundamental principles and their modern computer implementation. It presents and demonstrates how si
The rapid development of microelectronics has led to the increasing use of circuits and systems for digital signal processing. This happened first in professional applications, e.g. geophysics, astronomy and space flight; now, with the Compact Disc player, these techniques have entered the consumer field, and digital TV applications will undoubtedly follow in the near future. This article outlines a number of the developments behind the advancing 'digitization' of modern technology. It also considers the main advantages and disadvantages of digital signal processing, the main modules now used, and some common applications. Particular attention is paid to medical applications.
Zhang, Carolyn; Tsoi, Ryan; Wu, Feilun; You, Lingchong
From the timing of amoeba development to the maintenance of stem cell pluripotency, many biological signaling pathways exhibit the ability to differentiate between pulsatile and sustained signals in the regulation of downstream gene expression. While the networks underlying this signal decoding are diverse, many are built around a common motif, the incoherent feedforward loop (IFFL), where an input simultaneously activates an output and an inhibitor of the output. With appropriate parameters, this motif can exhibit temporal adaptation, where the system is desensitized to a sustained input. This property serves as the foundation for distinguishing input signals with varying temporal profiles. Here, we use quantitative modeling to examine another property of IFFLs-the ability to process oscillatory signals. Our results indicate that the system's ability to translate pulsatile dynamics is limited by two constraints. The kinetics of the IFFL components dictate the input range for which the network is able to decode pulsatile dynamics. In addition, a match between the network parameters and input signal characteristics is required for optimal "counting". We elucidate one potential mechanism by which information processing occurs in natural networks, and our work has implications in the design of synthetic gene circuits for this purpose.
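The temporal adaptation property described above can be reproduced with a textbook "sniffer" IFFL, a minimal model chosen for illustration (not the paper's exact equations or parameters): the input activates both the output and its inhibitor, so a sustained step produces only a transient pulse in the output.

```python
import numpy as np

def simulate_iffl(u_of_t, k1=1.0, k2=1.0, k3=1.0, k4=1.0,
                  dt=0.01, t_end=40.0):
    """'Sniffer' incoherent feedforward loop: input u activates both
    output y and inhibitor z; z degrades y's production.
        dz/dt = k3*u - k4*z
        dy/dt = k1*u - k2*z*y
    Steady-state y = k1*k4/(k2*k3) is independent of u, so the system
    adapts: a sustained step in u yields only a transient pulse in y.
    Integrated with forward Euler for simplicity."""
    steps = int(t_end / dt)
    y, z = 1.0, 1.0          # start at the u = 1 steady state
    ys = np.empty(steps)
    for i in range(steps):
        u = u_of_t(i * dt)
        z += dt * (k3 * u - k4 * z)
        y += dt * (k1 * u - k2 * z * y)
        ys[i] = y
    return ys

ys = simulate_iffl(lambda t: 1.0 if t < 5.0 else 5.0)  # 5x step at t=5
print(round(ys.max(), 2), round(ys[-1], 2))  # pulse, then back toward 1
```

With appropriately mismatched kinetics the same motif fails to adapt, which is the parameter-matching constraint on "counting" pulsatile inputs that the abstract refers to.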
Meyers, James F.; Lee, Joseph W.; Cavone, Angelo A.
Two schemes for processing signals obtained from the Doppler global velocimeter are described. The analog approach is a simple, real-time method for obtaining an RS-170 video signal containing the normalized intensity image. Pseudo colors are added using a monochromatic frame grabber, producing a standard NTSC video signal that can be monitored and/or recorded. The digital approach is more complicated, but maintains the full resolution of the acquisition cameras, with the capability to correct the signal image for pixel-sensitivity variations and to remove background light. Prototype circuits for each scheme are described, and example results from the investigation of the vortical flow field above a 75-degree delta wing are presented.
Aalfs, David D.; Baxa, Ernest G., Jr.; Bracalente, Emedio M.
Low-altitude windshear (LAWS) has been identified as a major hazard to aircraft, particularly during takeoff and landing. The Federal Aviation Administration (FAA) has been involved with developing technology to detect LAWS. A key element in this technology is high resolution pulse Doppler weather radar equipped with signal and data processing to provide timely information about possible hazardous conditions.
Yvind, Kresten; Pu, Minhao; Hvam, Jørn Märcher
The fast non-linearity of silicon allows Tbit/s optical signal processing. By choosing suitable dimensions of silicon nanowires, their dispersion can be tailored to ensure a high nonlinearity at power levels low enough to avoid significant two-photon absorption. We have fabricated low insertion...
Dow, Chyi-Ren; Li, Yi-Hsung; Bai, Jin-Yu
This work designs and implements a virtual digital signal processing laboratory, VDSPL. VDSPL consists of four parts: mobile agent execution environments, mobile agents, DSP development software, and DSP experimental platforms. The network capability of VDSPL is created by using mobile agent and wrapper techniques without modifying the source code…
This book describes the optimization methods most commonly encountered in signal and image processing: artificial evolution and Parisian approach; wavelets and fractals; information criteria; training and quadratic programming; Bayesian formalism; probabilistic modeling; Markovian approach; hidden Markov models; and metaheuristics (genetic algorithms, ant colony algorithms, cross-entropy, particle swarm optimization, estimation of distribution algorithms, and artificial immune systems).
Giron-Sierra, Jose Maria
This is the first volume in a trilogy on modern Signal Processing. The three books provide a concise exposition of signal processing topics, and a guide to support individual practical exploration based on MATLAB programs. This book includes MATLAB codes to illustrate each of the main steps of the theory, offering a self-contained guide suitable for independent study. The code is embedded in the text, helping readers to put into practice the ideas and methods discussed. The book is divided into three parts, the first of which introduces readers to periodic and non-periodic signals. The second part is devoted to filtering, which is an important and commonly used application. The third part addresses more advanced topics, including the analysis of real-world non-stationary signals and data, e.g. structural fatigue, earthquakes, electro-encephalograms, birdsong, etc. The book’s last chapter focuses on modulation, an example of the intentional use of non-stationary signals.
Hillenbrand, Patrick; Fritz, Georg; Gerland, Ulrich
Complex gene regulation requires responses that depend not only on the current levels of input signals but also on signals received in the past. In digital electronics, logic circuits with this property are referred to as sequential logic, in contrast to the simpler combinatorial logic without such internal memory. In molecular biology, memory is implemented in various forms such as biochemical modification of proteins or multistable gene circuits, but the design of the regulatory interface, which processes the input signals and the memory content, is often not well understood. Here, we explore design constraints for such regulatory interfaces using coarse-grained nonlinear models and stochastic simulations of detailed biochemical reaction networks. We test different designs for biological analogs of the most versatile memory element in digital electronics, the JK-latch. Our analysis shows that simple protein-protein interactions and protein-DNA binding are sufficient, in principle, to implement genetic circuits with the capabilities of a JK-latch. However, it also exposes fundamental limitations to its reliability, due to the fact that biological signal processing is asynchronous, in contrast to most digital electronics systems that feature a central clock to orchestrate the timing of all operations. We describe a seemingly natural way to improve the reliability by invoking the master-slave concept from digital electronics design. This concept could be useful to interpret the design of natural regulatory circuits, and for the design of synthetic biological systems.
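The master-slave JK-latch concept invoked above can be sketched in a few lines of ordinary sequential code (a purely digital illustration of the element the paper mimics, not a biochemical model): the master stage samples J and K while the clock is high and the slave copies it on the falling edge, so the output changes at most once per clock cycle.

```python
def jk_master_slave(seq, q=0):
    """Master-slave JK flip-flop, one (J, K) pair per clock cycle.

    JK truth table: 00 hold, 10 set, 01 reset, 11 toggle.
    Returns the output Q after each falling clock edge.
    """
    out = []
    for j, k in seq:
        master = (j and not q) or (not k and q)  # master samples J, K
        q = 1 if master else 0                   # slave latches on falling edge
        out.append(q)
    return out

# hold, set, hold, reset, toggle, toggle
print(jk_master_slave([(0, 0), (1, 0), (0, 0), (0, 1), (1, 1), (1, 1)]))
# [0, 1, 1, 0, 1, 0]
```

The synchronized two-stage handoff is exactly what asynchronous biochemical implementations lack, which is the reliability limitation the abstract identifies.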
Blanchet, Gérard
The most important theoretical aspects of Image and Signal Processing (ISP) for both deterministic and random signals, the theory being supported by exercises and computer simulations relating to real applications. More than 200 programs and functions are provided in the MATLAB® language, with useful comments and guidance, to enable numerical experiments to be carried out, thus allowing readers to develop a deeper understanding of both the theoretical and practical aspects of this subject. Following on from the first volume, this second installation takes a more practical stance, provi
Blanchet, Gérard
This fully revised and updated second edition presents the most important theoretical aspects of Image and Signal Processing (ISP) for both deterministic and random signals. The theory is supported by exercises and computer simulations relating to real applications. More than 200 programs and functions are provided in the MATLAB® language, with useful comments and guidance, to enable numerical experiments to be carried out, thus allowing readers to develop a deeper understanding of both the theoretical and practical aspects of this subject. This fully revised new edition updates: - the
There is at present a worldwide effort to develop next-generation wireless communication systems. It is envisioned that many future wireless systems will incorporate considerable signal-processing intelligence in order to provide advanced services such as multimedia transmission. In general, wireless channels can be very hostile media through which to communicate, due to substantial physical impediments, primarily radio-frequency interference and the time-varying nature of the channel. The need to provide universal wireless access at high data rates (the aim of many emerging wireless applications) presents a major technical challenge, and meeting this challenge necessitates the development of advanced signal processing techniques for multiple-access communications in non-stationary, interference-rich environments. In this paper, we present some key advanced signal processing methodologies that have been developed in recent years for interference suppression in wireless networks. We focus primarily on the problem of jointly suppressing multiple-access interference (MAI) and intersymbol interference (ISI), which are the limiting sources of interference for the high data-rate wireless systems being proposed for many emerging application areas, such as wireless multimedia. We first present a signal subspace approach to blind joint suppression of MAI and ISI. We then discuss a powerful iterative technique for joint interference suppression and decoding, so-called Turbo multiuser detection, which is especially useful for wireless multimedia packet communications. We also discuss space-time processing methods that employ multiple antennas for interference rejection and signal enhancement. Finally, we touch briefly on the problems of suppressing narrowband interference and impulsive ambient noise, two other sources of radio-frequency interference present in wireless multimedia networks.
Qian, Kun; Guo, Jian; Xu, Huijie; Zhu, Zhaomeng; Zhang, Gongxuan
Snore related signals (SRS) have been demonstrated in recent years to carry important information about the obstruction site and degree in the upper airway of Obstructive Sleep Apnea-Hypopnea Syndrome (OSAHS) patients. To make this acoustic signal analysis method more accurate and robust, big SRS data processing is inevitable. As an emerging concept and technology, cloud computing has motivated numerous researchers and engineers to exploit applications in both academia and industry, and it has the potential to realize a huge blueprint in biomedical engineering. Considering the security and transfer requirements of biomedical data, we designed a system based on private cloud computing to process SRS. We then ran comparative experiments, processing a 5-hour audio recording of an OSAHS patient on a personal computer, a server, and the private cloud computing system, to demonstrate the efficiency of the proposed infrastructure.
Jorgensen, C. C.; Lee, D. D.
A recently invented speech-recognition method applies to words that are articulated by means of the tongue and throat muscles but are otherwise not voiced or, at most, are spoken sotto voce. This method could satisfy a need for speech recognition under circumstances in which normal audible speech is difficult, poses a hazard, is disturbing to listeners, or compromises privacy. The method could also be used to augment traditional speech recognition by providing an additional source of information about articulator activity. The method can be characterized as intermediate between (1) conventional speech recognition through processing of voice sounds and (2) a method, not yet developed, of processing electroencephalographic signals to extract unspoken words directly from thoughts. This method involves computational processing of digitized electromyographic (EMG) signals from muscle innervation acquired by surface electrodes under a subject's chin near the tongue and on the side of the subject's throat near the larynx. After preprocessing, digitization, and feature extraction, EMG signals are processed by a neural-network pattern classifier, implemented in software, that performs the bulk of the recognition task as described.
Hwang, I. G.; Moon, B. S.; Kinser, Roger
The development of Johnson Noise Thermometry requires a highly sensitive preamplifier circuit to pick up the temperature-related noise on the sensing element. However, the random noise generated in this amplification circuit introduces a significant error into the measurement. This paper describes the signal processing mechanism of the Johnson Noise Thermometry system under development in collaboration between KAERI and ORNL. It adopts two identical amplifier channels and utilizes a digital signal processing technique to remove the independent noise of each channel. The CPSD (Cross Power Spectral Density) function is used to cancel the independent noise, and the discrimination of narrow or single-frequency peaks in the CPSD data separates out the common-mode electromagnetic interference noise
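The two-channel idea lends itself to a compact numerical illustration: the cross power spectral density of the two channels retains their common component, while each channel's independent amplifier noise averages away over segments. The sketch below uses `scipy.signal.csd` with illustrative signal levels and record lengths, not the actual KAERI/ORNL design parameters.

```python
import numpy as np
from scipy.signal import csd, welch

rng = np.random.default_rng(0)
fs = 10_000.0
n = 100_000  # 10 s of data

# Thermal (Johnson) noise common to both channels -- the quantity of interest.
common = 0.5 * rng.standard_normal(n)
# Independent preamplifier noise in each channel, of comparable power.
ch1 = common + 0.5 * rng.standard_normal(n)
ch2 = common + 0.5 * rng.standard_normal(n)

# Cross power spectral density: segment averaging drives the uncorrelated
# noise terms toward zero, leaving an estimate of the common-mode power.
f, Pxy = csd(ch1, ch2, fs=fs, nperseg=1024)
f, Pxx = welch(ch1, fs=fs, nperseg=1024)

cpsd_level = np.abs(Pxy).mean()
auto_level = Pxx.mean()
# The single-channel PSD carries both noise sources; the CPSD roughly half.
print(cpsd_level < 0.7 * auto_level)  # True
```

With equal common and independent noise powers, the CPSD level settles near half the single-channel PSD, which is exactly the amplifier-noise rejection the two-channel scheme is designed to provide.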
Mendizabal-Ruiz, Gerardo; Román-Godínez, Israel; Torres-Ramos, Sulema; Salido-Ruiz, Ricardo A; Vélez-Pérez, Hugo; Morales, J Alejandro
Genomic signal processing (GSP) methods, which convert DNA data to numerical values, have recently been proposed; they offer the opportunity of employing existing digital signal processing methods for genomic data. One of the most widely used methods for exploring data is cluster analysis, which refers to the unsupervised classification of patterns in data. In this paper, we propose a novel approach for performing cluster analysis of DNA sequences that is based on the use of GSP methods and the K-means algorithm. We also propose a visualization method that facilitates the easy inspection and analysis of the results and possible hidden behaviors. Our results support the feasibility of employing the proposed method to find and easily visualize interesting features of sets of DNA data.
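As a sketch of the GSP-plus-K-means idea (using an integer base mapping, normalized DFT magnitude features, and a minimal K-means; all illustrative choices rather than the authors' exact method), toy sequences sharing the same repeat period land in the same cluster:

```python
import numpy as np

MAP = {"A": 0.0, "C": 1.0, "G": 2.0, "T": 3.0}  # one simple GSP encoding

def gsp_features(seq):
    """Map a DNA string to a numeric signal; use its normalized DFT
    magnitude spectrum as the feature vector."""
    x = np.array([MAP[b] for b in seq])
    spec = np.abs(np.fft.rfft(x - x.mean()))
    return spec / np.linalg.norm(spec)

def kmeans(X, k, iters=50, seed=0):
    """Minimal K-means on the rows of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Two period-2 repeats and two period-4 repeats: their spectral features
# separate them into two clusters.
seqs = ["GCGCGCGCGCGCGCGC", "ATATATATATATATAT",
        "GGCCGGCCGGCCGGCC", "AATTAATTAATTAATT"]
X = np.array([gsp_features(s) for s in seqs])
labels = kmeans(X, k=2)
print(labels[0] == labels[1] and labels[2] == labels[3]
      and labels[0] != labels[2])  # True
```

Normalizing the spectra makes the clustering sensitive to spectral shape (periodicity) rather than raw amplitude, which is one way such features expose structure in DNA data.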
Wang, Zhaocheng; Huang, Wei; Xu, Zhengyuan
This informative new book on state-of-the-art visible light communication (VLC) provides, for the first time, a systematic and advanced treatment of modulation and signal processing for VLC. Visible Light Communications: Modulation and Signal Processing offers a practical guide to designing VLC, linking academic research with commercial applications. In recent years, VLC has attracted attention from academia and industry since it has many advantages over traditional radio frequency communication, including wide unregulated bandwidth, high security, and low cost. It is a promising complementary technique in 5G and beyond wireless communications, especially in indoor applications. However, lighting constraints have not been fully considered in the open literature when considering VLC system design, and their importance has been underestimated. That is why this book, written by a team of experts with both academic research experience and industrial development experience in the field, is so welcome. To help readers u...
This book examines the signal processing perspective in haptic teleoperation systems. The text covers the topics of prediction, estimation, architecture, data compression, and error correction that can be applied to haptic teleoperation systems. The authors begin with an overview of haptic teleoperation systems, then look at a Bayesian approach to haptic teleoperation systems. They move on to a discussion of haptic data compression, haptic data digitization and forward error correction. · Presents haptic data prediction/estimation methods that compensate for unreliable networks · Discusses haptic data compression, which reduces haptic data size over limited network bandwidth, and haptic data error correction, which compensates for the packet loss problem · Provides signal processing techniques used with existing control architectures.
Teke, Oguzhan; Vaidyanathan, Palghat P.
A variety of different areas consider signals that are defined over graphs. Motivated by the advancements in graph signal processing, this study first reviews some of the recent results on the extension of classical multirate signal processing to graphs. In these results, graphs are allowed to have directed edges. The possibly non-symmetric adjacency matrix A is treated as the graph operator. These results investigate the fundamental concepts for multirate processing of graph signals such as noble identities, aliasing, and perfect reconstruction (PR). It is shown that unless the graph satisfies some conditions, these concepts cannot be extended to graph signals in a simple manner. A structure called the M-Block cyclic structure is shown to be sufficient to generalize the two-channel results for bipartite graphs to M-channel filter banks. Many classical multirate ideas can be extended to graphs due to the unique eigenstructure of M-Block cyclic graphs. For example, the PR condition for filter banks on these graphs is identical to PR in classical theory, which allows the use of well-known filter bank design techniques. In order to utilize these results, the adjacency matrix of an M-Block cyclic graph should be given in the correct permutation. In the final part, this study proposes a spectral technique to identify the hidden M-Block cyclic structure from a graph with noisy edges whose adjacency matrix is given under a random permutation. Numerical simulation results show that the technique can recover the underlying M-Block structure in the presence of random addition and deletion of the edges.
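One of the noble identities is easy to verify numerically on the simplest M-block cyclic graph, a directed cycle, where the adjacency matrix is the classical shift operator: filtering with H(A^M) and then downsampling equals downsampling and then filtering with H(A) on the smaller cycle. The graph size and filter taps below are arbitrary illustrative choices.

```python
import numpy as np

N, M = 12, 3
A = np.roll(np.eye(N), 1, axis=1)             # directed cycle: the shift operator
A_small = np.roll(np.eye(N // M), 1, axis=1)  # shift on the downsampled cycle
D = np.eye(N)[::M]                            # downsampler: keep every M-th node

h = np.array([1.0, -0.5, 0.25])               # arbitrary polynomial filter taps
H_up = sum(c * np.linalg.matrix_power(A, M * k) for k, c in enumerate(h))
H_dn = sum(c * np.linalg.matrix_power(A_small, k) for k, c in enumerate(h))

rng = np.random.default_rng(1)
x = rng.standard_normal(N)                    # a random graph signal

# Noble identity: H(A^M) then downsample == downsample then H(A_small).
lhs = D @ (H_up @ x)
rhs = H_dn @ (D @ x)
print(np.allclose(lhs, rhs))  # True
```

On graphs without this cyclic structure the identity generally fails, which is exactly why the M-Block cyclic condition matters in the results reviewed above.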
Oosterwijk, S.; Mackey, S.; Wilson-Mendenhall, C.; Winkielman, P.; Paulus, M.P.
According to embodied cognition theories, concepts are contextually situated and grounded in neural systems that produce experiential states. This view predicts that processing mental state concepts recruits neural regions associated with different aspects of experience depending on the context in
Taylor, S.; Hinton, M.; Turner, R.
Signal processing techniques for systems based upon Ion Mobility Spectrometry will be discussed in the light of 10 years of experience in the design of real-time IMS. Among the topics to be covered are compensation techniques for variations in the number density of the gas - the use of an internal standard (a reference peak) or pressure and temperature sensors. Sources of noise and methods for noise reduction will be discussed together with resolution limitations and the ability of deconvolution techniques to improve resolving power. The use of neural networks (either by themselves or as a component part of a processing system) will be reviewed.
Kulkarni, A.V.; Yen, D.W.L.
Many signal and image processing applications impose a severe demand on the I/O bandwidth and computation power of general-purpose computers. The systolic concept offers guidelines in building cost-effective systems that balance I/O with computation. The resulting simplicity and regularity of such systems leads to modular designs suitable for VLSI implementation. The authors describe a linear systolic array capable of evaluating a large class of inner-product functions used in signal and image processing. These include matrix multiplications, multidimensional convolutions using fixed or time-varying kernels, as well as various nonlinear functions of vectors. The system organization of a working prototype is also described. 11 references.
Axline, Jr., Robert M.; Sloan, George R.; Spalding, Richard E.
An active, phase-coded, time-grating transponder and a synthetic-aperture radar (SAR) and signal processor means, in combination, allow the recognition and location of the transponder (tag) in the SAR image and allow communication of information messages from the transponder to the SAR. The SAR is an illuminating radar having special processing modifications in an image-formation processor to receive an echo from a remote transponder, after the transponder receives and retransmits the SAR illuminations, and to enhance the transponder's echo relative to surrounding ground clutter by recognizing the special phase-shifted transponder modulations in the transponder retransmissions. The remote radio-frequency tag also transmits information to the SAR through a single antenna that also serves to receive the SAR illuminations. Unique tag-modulation and SAR signal processing techniques, in combination, allow the detection and precise geographical location of the tag through the reduction of interfering signals from ground clutter, and allow environmental and status information to be communicated from said tag to said SAR.
Faulkner, Andrew; Zarb-Adami, Kristian; De Vaate, Jan Geralt Bij
Signal processing and communications are driving the latest generation of radio telescopes, with major developments taking place for use on the Square Kilometre Array, SKA, the next-generation low-frequency radio telescope. The data rates and processing performance that can be achieved with currently available components mean that concepts from the earlier days of radio astronomy, phased arrays, can be used at higher frequencies, larger bandwidths and higher numbers of beams. Indeed, it has been argued that the use of dishes as a mechanical beamformer only gained strong acceptance to mitigate the processing load of phased array technology. The balance is changing, and benefits in both performance and cost can be realised. In this paper we mostly consider the signal processing implementation and control for very large phased arrays consisting of hundreds of thousands, or even millions, of antennas. They can use current technology for the initial deployments. These systems are very large, extending to hundreds of racks with thousands of signal processing modules that link through high-speed, but commercially available, data networking devices. There are major challenges to accurately calibrate the arrays, mitigate power consumption and make the system maintainable.
This book presents theory, design methods and novel applications for integrated circuits for analog signal processing. The discussion covers a wide variety of active devices, active elements and amplifiers, working in voltage mode, current mode and mixed mode. This includes voltage operational amplifiers, current operational amplifiers, operational transconductance amplifiers, operational transresistance amplifiers, current conveyors, current differencing transconductance amplifiers, etc. Design methods and challenges posed by nanometer technology are discussed and applications described, including signal amplification, filtering, data acquisition systems such as neural recording, sensor conditioning such as biomedical implants, actuator conditioning, noise generators, oscillators, mixers, etc. Presents analysis and synthesis methods to generate all circuit topologies from which the designer can select the best one for the desired application; Includes design guidelines for active devices/elements...
At first sight, the problem of determining the beam position from the ratio of the induced charges on the opposite electrodes of a beam monitor seems trivial, but up to now no unique solution has been found that fits the various demands of all particle accelerators. The purpose of this paper is to help "instrumentalists" choose the best processing system for their particular application, depending on the machine size, the input dynamic range, the required resolution and the acquisition speed. After a general introduction and an analysis of the electrical signals to be treated (frequency and time domain), the definition of the electronic specifications will be reviewed. The tutorial will present the different families in which the processing systems can be grouped. A general description of the operating principles, with relative advantages and disadvantages, is presented for the most widely employed processing systems. Special emphasis will be put on recent technological developments based on telecommunication circ...
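The starting point shared by all these processing families is the difference-over-sum estimate, in which the displacement is proportional to the normalized charge imbalance of the two electrodes. The sensitivity constant below is a made-up value for illustration, not a property of any particular monitor.

```python
def beam_position(q_left, q_right, sensitivity_mm=10.0):
    """Difference-over-sum position estimate for a two-electrode pickup.
    sensitivity_mm is an assumed monitor constant (mm per unit ratio)."""
    return sensitivity_mm * (q_right - q_left) / (q_right + q_left)

print(beam_position(1.0, 1.0))  # 0.0 -> centered beam
print(beam_position(1.0, 3.0))  # 5.0 -> displaced toward the right electrode
```

Because the estimate is a ratio, it is insensitive to overall beam intensity; the practical difficulty, addressed by the different electronics families, is computing this ratio over the required dynamic range, resolution and speed.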
Chiu, Leung Kin
As mobile platforms continue to pack on more computational power, electronics manufacturers have started to differentiate their products by enhancing the audio features. However, consumers also demand smaller devices that can operate for longer, imposing design constraints. In this research, we investigate two design strategies that allow us to efficiently process audio signals on embedded systems such as mobile phones and portable electronics. In the first strategy, we exploit properties of the human auditory system to process audio signals. We designed a sound enhancement algorithm to make piezoelectric loudspeakers sound "richer" and "fuller." Piezoelectric speakers have a small form factor but exhibit poor response in the low-frequency region. In the algorithm, we combine psychoacoustic bass extension and dynamic range compression to improve the perceived bass coming out of the tiny speakers. We also developed an audio energy reduction algorithm for loudspeaker power management. The perceptually transparent algorithm extends the battery life of mobile devices and prevents thermal damage in speakers. This method is similar to audio compression algorithms, which encode audio signals in such a way that the compression artifacts are not easily perceivable. Instead of reducing the storage space, however, we suppress the audio contents that are below the hearing threshold, thereby reducing the signal energy. In the second strategy, we use low-power analog circuits to process the signal before digitizing it. We designed an analog front-end for sound detection and implemented it on a field programmable analog array (FPAA). The system is an example of an analog-to-information converter. The sound classifier front-end can be used in a wide range of applications because programmable floating-gate transistors are employed to store classifier weights. Moreover, we incorporated a feature selection algorithm to simplify the analog front-end. A machine
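The dynamic range compression used in such bass-enhancement chains can be sketched as a static gain curve (a toy model; practical compressors add attack/release smoothing and, as above, psychoacoustic thresholds). The threshold and ratio values are illustrative assumptions.

```python
import numpy as np

def compress(x, threshold_db=-20.0, ratio=4.0):
    """Static dynamic range compression: above the threshold, level excess
    is reduced by `ratio`; below it, samples pass through unchanged."""
    level_db = 20.0 * np.log10(np.abs(x) + 1e-12)
    over = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over * (1.0 - 1.0 / ratio)
    return x * 10.0 ** (gain_db / 20.0)

x = np.array([0.05, 0.5, 1.0])
y = compress(x)
print(y[0] == x[0], y[2] < x[2])  # quiet sample untouched, loud sample reduced
```

Compressing the dynamic range lets the quiet, psychoacoustically extended bass components sit closer to the loud ones, which is why the two processing steps are combined.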
Gokhberg, Alexey; Ermert, Laura; Fichtner, Andreas
The processing of seismic signals, including the correlation of massive ambient noise data sets, represents an important part of a wide range of seismological applications. It is characterized by large data volumes as well as high computational input/output intensity. Development of efficient approaches towards seismic signal processing on emerging high performance computing systems is therefore essential. Heterogeneous supercomputing systems introduced in recent years provide numerous computing nodes interconnected via high-throughput networks, every node containing a mix of processing elements of different architectures: several sequential processor cores and one or a few graphical processing units (GPUs) serving as accelerators. A typical representative of such computing systems is "Piz Daint", a supercomputer of the Cray XC 30 family operated by the Swiss National Supercomputing Center (CSCS), which we used in this research. Heterogeneous supercomputers provide an opportunity for a manifold increase in application performance and are more energy-efficient; however, they have much higher hardware complexity and are therefore much more difficult to program. The programming effort may be substantially reduced by the introduction of modular libraries of software components that can be reused for a wide class of seismology applications. The ultimate goal of this research is the design of a prototype of such a library suitable for implementing various seismic signal processing applications on heterogeneous systems. As a representative use case we have chosen an ambient noise correlation application. Ambient noise interferometry has developed into one of the most powerful tools to image and monitor the Earth's interior. Future applications will require the extraction of increasingly small details from noise recordings. To meet this demand, more advanced correlation techniques combined with very large data volumes are needed. This poses new computational problems that
The conversion and recording of analog signals in digital form has been an active element in the manufacturing operations of the General Electric Neutron Devices Department (GEND) since 1966. The first computerized data system for these digitized waveforms was implemented at GEND's data center approximately two years later during 1968. The evolution and integration of these two activities at GEND are addressed in this paper. Beginning with the tester--data center interface, emphasis is placed on previous approaches, current capabilities, near-term trends, and future requirements. The digitizing process has developed into a firmly established set of hardware and associated software techniques which has proven itself as an accurate, reliable procedure for capturing waveform characteristics. The most important aspect of this process is the recent trend toward increased sampling rates and a greater number of digitized parameters per operation. The combined effect is a tremendous increase in output data volumes. Since digital signal processing carries the potential for significant contributions to manufacturing quality and reliability, as well as engineering design and development, increased activity in this area appears extremely desirable. 11 figures
Ribeiro, Paulo Fernando; Ribeiro, Paulo Márcio; Cerqueira, Augusto Santiago
With special relation to smart grids, this book provides clear and comprehensive explanation of how Digital Signal Processing (DSP) and Computational Intelligence (CI) techniques can be applied to solve problems in the power system. Its unique coverage bridges the gap between DSP, electrical power and energy engineering systems, showing many different techniques applied to typical and expected system conditions with practical power system examples. Surveying all recent advances on DSP for power systems, this book enables engineers and researchers to understand the current state of the art a
The phase, amplitude, speed, and polarization, in addition to many other properties of light, can be modulated by photonic Bragg structures. In conjunction with nonlinearity and quantum effects, a variety of ensuing micro- or nano-photonic applications can be realized. This paper reviews various optical phenomena in several exemplary 1D Bragg gratings. Important examples are resonantly absorbing photonic structures, chirped Bragg gratings, and cholesteric liquid crystals; their unique operation capabilities and key issues are considered in detail. These Bragg structures are expected to be used in widespread applications involving light field modulations, especially in the rapidly advancing field of ultrafast optical signal processing.
Saklani, Vipin; Meenakshi Sundari, A.; Rai, A.K.; Sarma, C.V.R.
Gamma Ray Spectrometry is an essential tool for mapping radioelement concentrations and has found widespread applications in mineral exploration, geological mapping and environmental mapping. Spectrometry based on analog amplifiers with Gaussian pulse shaping has been used for more than 40 years. Digital electronics and digital signal processing methods have enhanced spectrometer specifications. This paper presents the design and development of an FPGA-based Gamma Ray Spectrometer featuring a 1024-channel Multi Channel Analyser (MCA) with a NaI(Tl) scintillation detector. The system shows significant improvements in energy resolution, count rate capability and dead time as compared to its analog counterpart. (author)
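The core of a 1024-channel MCA is pulse-height histogramming: each digitized pulse amplitude is sorted into one of 1024 energy bins. The simulated spectrum below (a 662 keV photopeak, as from Cs-137, over a flat background, with made-up count statistics and a one-channel-per-keV calibration) only illustrates the principle:

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels = 1024

# Simulated pulse heights, calibrated so channel number ~ energy in keV:
# a 662 keV photopeak over a flat background (illustrative statistics).
peak = rng.normal(662.0, 25.0, 5000)
background = rng.uniform(0.0, 1024.0, 2000)
pulse_heights = np.concatenate([peak, background])

# The MCA sorts each pulse into one of 1024 equal-width channels.
spectrum, edges = np.histogram(pulse_heights, bins=n_channels, range=(0, 1024))
print(int(np.argmax(spectrum)))  # a channel near 662
```

In the FPGA the same binning is done in real time on shaped pulse amplitudes; energy resolution then shows up directly as the width of the photopeak in the histogram.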
Płaza, Mirosław; Szcześniak, Zbigniew
A system of signal processing for electrotherapeutic applications is proposed in the paper. The system makes it possible to model the curve of threshold human sensitivity to current (Dalziel's curve) over the full medium-frequency range (1 kHz to 100 kHz). Tests based on the proposed solution were conducted and their results were compared with those obtained according to the assumptions of the High Tone Power Therapy method and referred to optimum values. The proposed system has high dynamics and precision in mapping the curve of threshold human sensitivity to current, and can be used in all methods where threshold curves are modelled.
In the past three decades, Ground Penetrating Radar (GPR) has found a wide variety of real-life applications. This radar faces important challenges in both civil and military applications. In this paper, the fundamentals of GPR systems are covered, and three important signal processing methods (the Wavelet Transform, the Matched Filter and the Hilbert-Huang Transform) are compared in order to obtain the most accurate information about objects that are in the subsurface or behind a wall.
The Wigner-Ville distribution offers a visual display of quantitative information about the way a signal's energy is distributed in both time and frequency. Through that, this distribution embodies the fundamental concepts of Fourier and time-domain analysis. The energy of the signal is distributed so that specific frequencies are localized in time by the group delay time, and at specific instants in time the frequency is given by the instantaneous frequency. The net positive volume of the Wigner distribution is numerically equal to the signal's total energy. The paper shows the application of the Wigner-Ville distribution in the field of signal processing, using the Scilab environment.
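The distribution can be computed directly from its discrete definition: for each instant n, take the DFT over the lag τ of the product x[n+τ]·x*[n−τ]. The direct O(N²) sketch below (in Python rather than the paper's Scilab) illustrates this, including the well-known factor-of-two scaling of the discrete WVD frequency axis; the tone parameters are arbitrary.

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of an analytic signal x.
    Returns an (N, N) array: rows = time, columns = frequency bins."""
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        # Largest lag keeping both n+tau and n-tau inside the signal.
        tau_max = min(n, N - 1 - n)
        tau = np.arange(-tau_max, tau_max + 1)
        kernel = np.zeros(N, dtype=complex)
        kernel[tau % N] = x[n + tau] * np.conj(x[n - tau])
        W[n] = np.real(np.fft.fft(kernel))
    return W

N = 256
t = np.arange(N) / N
x = np.exp(2j * np.pi * 40 * t)   # analytic tone at index 40
W = wigner_ville(x)
# A pure tone's energy concentrates at one frequency at every instant;
# the discrete WVD places it at twice the tone's DFT index.
print(int(np.argmax(W[N // 2])))  # 80 = 2 * 40
```

For multicomponent signals the same computation also produces the distribution's characteristic cross-terms, which is why smoothed variants are often used in practice.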
DISP-2003 - Digital Signal Processing DISP-2003 is a two-term course given by CERN and University of Lausanne (UNIL) experts within the framework of the Technical Training Programme. The course will review current techniques in Digital Signal Processing, and it is intended for an audience who work or will work on digital signal processing aspects, and who need an introductory or refresher/update course. The course will be in English, with questions and answers also in French. Spring 2 Term: DISP-2003: Advanced Digital Signal Processing 30 April 2003 - 21 May 2003, 4 lectures, Wednesdays afternoon (attendance cost: 40.- CHF, registration required) Lecturers: Léonard Studer, UNIL; Laurent Deniau, AT-MTM; Elena Wildner, AT-MAS Programme: Intelligent signal processing (ISP). Non-linear time series analysis. Image processing. Wavelets. (Basic concepts and definitions were introduced during the previous Spring 1 Term: DISP-2003: Introduction to Digital Signal Processing). DISP-2003 is open...
Vogt, Michael C.; Skubal, Laura R.
Many industrial and environmental processes, including bioremediation, would benefit from the feedback and control information provided by a local multi-analyte chemical sensor. For most processes, such a sensor would need to be rugged enough to be placed in situ for long-term remote monitoring, and inexpensive enough to be fielded in useful numbers. The multi-analyte capability is difficult to obtain from common passive sensors, but can be provided by an active device that produces a spectrum-type response. Such new active gas microsensor technology has been developed at Argonne National Laboratory. The technology couples an electrocatalytic ceramic-metallic (cermet) microsensor with a voltammetric measurement technique and advanced neural signal processing. It has been demonstrated to be flexible, rugged, and very economical to produce and deploy. Both narrow interest detectors and wide spectrum instruments have been developed around this technology. Much of this technology's strength lies in the active measurement technique employed. The technique involves applying voltammetry to a miniature electrocatalytic cell to produce unique chemical 'signatures' from the analytes. These signatures are processed with neural pattern recognition algorithms to identify and quantify the components in the analyte. The neural signal processing allows for innovative sampling and analysis strategies to be employed with the microsensor. In most situations, the whole response signature from the voltammogram can be used to identify, classify, and quantify an analyte, without dissecting it into component parts. This allows an instrument to be calibrated once for a specific gas or mixture of gases by simple exposure to a multi-component standard rather than by a series of individual gases. The sampled unknown analytes can vary in composition or in concentration; the calibration, sensing, and processing methods of these active voltammetric microsensors can detect, recognize, and
Wang, Wei; Zhang, Shun; Ma, Ning
In this paper, a high-dimensional statistical signal processing is revisited with the aim of introducing the concept of vector signal representation derived from the Riesz transforms, which are the natural extension and generalization of the one-dimensional Hilbert transform. Under the new concep...
Miller and Childers have focused on creating a clear presentation of foundational concepts with specific applications to signal processing and communications, clearly the two areas of most interest to students and instructors in this course. It is aimed at graduate students as well as practicing engineers, and includes unique chapters on narrowband random processes and simulation techniques. The appendices provide a refresher in such areas as linear algebra, set theory, random variables, and more. Probability and Random Processes also includes applications in digital communications, informati
Swenson, Cory V.
The assembling and testing of a parallel processing system is described which will allow a user to move a Digital Signal Processing (DSP) application from the design stage to the execution/analysis stage through the use of several software tools and hardware devices. The system will be used to demonstrate the feasibility of the Algorithm To Architecture Mapping Model (ATAMM) dataflow paradigm for static multiprocessor solutions of DSP applications. The individual components comprising the system are described followed by the installation procedure, research topics, and initial program development.
Oxenløwe, Leif Katsuo; Ji, Hua; Galili, Michael
In this paper, we describe our recent work on signal processing of terabit-per-second optical serial data signals using pure silicon waveguides. We employ nonlinear optical signal processing in nanoengineered silicon waveguides to perform demultiplexing and optical waveform sampling of 1.28-Tbit/s data signals, as well as wavelength conversion of up to 320-Gbit/s data signals. We demonstrate that the silicon waveguides are equally useful for amplitude- and phase-modulated data signals.
Ultrasound imaging has become one of the most widely used diagnostic tools in medicine. While it has advantages, compared with other modalities, in terms of safety, low-cost, accessibility, portability and capability of real-time imaging, it has limitations. One of the major disadvantages of ultrasound imaging is the relatively low image quality, especially the low signal-to-noise ratio (SNR) and the low spatial resolution. Part of this dissertation is dedicated to the development of digital ultrasound signal and image processing methods to improve ultrasound image quality. Conventional B-mode ultrasound systems display the demodulated signals, i.e., the envelopes, in the images. In this dissertation, I introduce the envelope matched quadrature filtering (EMQF) technique, which is a novel demodulation technique generating optimal performance in envelope detection. In ultrasonography, the echo signals are the results of the convolution of the pulses and the medium responses, and the finite pulse length is a major source of the degradation of the image resolution. Based on the more appropriate complex-valued medium response assumption rather than the real-valued assumption used by many researchers, a nonparametric iterative deconvolution method, the Least Squares method with Point Count regularization (LSPC), is proposed. This method was tested using simulated and experimental data, and has produced excellent results showing significant improvements in resolution. During the past two decades, ultrasound tissue characterization (UTC) has emerged as an active research field and shown potentials of applications in a variety of clinical areas. Particularly interesting to me is a group of methods characterizing the scatterer spatial distribution. For resolvable regular structures, a deconvolution based method is proposed to estimate parameters characterizing such structures, including mean scatterer spacing, and has demonstrated superior performance when compared to
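Conventional envelope detection, the baseline against which a technique like EMQF is compared, can be sketched as the magnitude of the analytic signal. The pulse parameters below (5 MHz carrier, 40 MHz sampling, Gaussian envelope) are illustrative assumptions, not values from the dissertation.

```python
import numpy as np
from scipy.signal import hilbert

fs = 40e6                                    # assumed sampling rate
t = np.arange(0, 5e-6, 1 / fs)
true_env = np.exp(-((t - 2.5e-6) ** 2) / (2 * (0.4e-6) ** 2))
rf = true_env * np.cos(2 * np.pi * 5e6 * t)  # simulated 5 MHz echo

# B-mode displays the demodulated envelope: the magnitude of the analytic
# signal recovers it to within a small error for a narrowband echo.
envelope = np.abs(hilbert(rf))
print(np.max(np.abs(envelope - true_env)) < 0.05)  # True
```

The analytic-signal envelope is accurate only while the pulse bandwidth stays well below the carrier; degradations of this assumption are part of what motivates optimized demodulation filters.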
Ghica, Daniela; Radulian, Mircea; Popa, Mihaela
Since July 2002, a new seismic monitoring station, the Bucovina Seismic Array (BURAR), has been installed in the northern part of Romania, in a joint effort of the Air Force Technical Applications Center, USA, and the National Institute for Earth Physics (NIEP), Romania. The array consists of 10 seismic sensors (9 short-period and one broadband) located in boreholes and distributed over a 5 x 5 km² area. At present, the seismic data are continuously recorded by BURAR and transmitted in real time to the Romanian National Data Centre (ROM NDC) in Bucharest and to the National Data Center of the USA in Florida. A statistical analysis of the seismic information gathered at ROM NDC by BURAR in the August 2002 - December 2003 interval shows a much higher efficiency of the BURAR system in detecting teleseismic events and local events that occurred in the N-NE part of Romanian territory, in comparison with the existing Romanian Telemetered Network. Furthermore, the BURAR monitoring system has proven to be an important source of reliable data for NIEP's efforts in issuing seismic bulletins. The signal processing capability of the system provides useful information for improving the location of local seismic events using the array beamforming procedure. This method significantly increases the signal-to-noise ratio by summing up the coherent signals from the array components. In this way, possible source nucleation phases can be detected. At the same time, using the slowness and back-azimuth estimates from f-k analysis, locations for seismic events can be established based only on the information recorded by the BURAR array, acting as a single-station recording system. (authors)
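The SNR benefit of summing coherent array signals can be illustrated with a delay-and-sum sketch. This assumes the per-sensor delays have already been removed; the 10-sensor count mirrors BURAR, but the synthetic signal and noise levels are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_samples = 10, 20000        # BURAR has 10 sensors
t = np.arange(n_samples)
signal = np.sin(2 * np.pi * 0.01 * t)   # coherent signal, already time-aligned

# Each sensor sees the same signal plus independent noise
noise = rng.normal(0.0, 2.0, (n_sensors, n_samples))
sensors = signal + noise

beam = sensors.mean(axis=0)             # delay-and-sum with equal weights

def snr(x):
    # power SNR measured against the known clean signal
    return np.mean(signal ** 2) / np.mean((x - signal) ** 2)

gain = snr(beam) / snr(sensors[0])
```

For uncorrelated sensor noise, averaging N aligned channels raises the power SNR by roughly a factor of N, which is the gain the sketch measures.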
This dissertation focuses on signal modeling and processing issues of the following problems in reflection tomography: synthetic aperture radar (SAR) imaging of a runway and surroundings from an aircraft approaching for landing, acoustic imaging of objects buried in soil, and lidar imaging of underwater objects. The highly squinted geometry of runway imaging necessitates the incorporation of wavefront curvature into the signal model. We investigate the feasibility of using the wavenumber-domain (ω - k) SAR inversion algorithm, which models the actual curvature of the wavefront, for runway imaging. We demonstrate the aberrations that the algorithm can produce when the squint angle is close to 90° and show that high-quality reconstruction is still possible provided that the interpolation is performed accurately enough, which can be achieved by increasing the temporal sampling rate. We compare the performance with that of a more general inversion method (GIM) that solves the measurement equation directly. The performances of both methods are comparable in the noise- free case. Being inherently robust to noise, GIM produces superior results in the noisy case. We also present a solution to the left-right ambiguity of runway imaging using interferometric processing. In imaging of objects buried in soil, we pursue an acoustic approach primarily for detection and imaging of cultural artifacts. We have developed a mathematical model and associated computer software in order to simulate the signals acquired by the actual experimental system, and a bistatic SAR-type algorithm for reconstruction. In the reconstructions from simulated data, objects were detectable, but near-field objects suffered from shifts and smears. To account for wavefront curvature, we formulated processing of the simulated data using the 3-D version of the monostatic ω - k algorithm. In lidar imaging of underwater objects, we formulate the problem as a 3-D tomographic reconstruction problem. We have
In this article, a brief literature-based review of the development of logistics-oriented business processes and their reorganization is given. Research on service processes is also a timely topic, occupying managers and researchers alike. In a narrowing market, increasing competition and the dominance of customers warn companies to carry out continuous rationalization and cost reduction in order to increase efficiency. In this essay we briefly show how we started our research, concentrating primarily on the technical literature. First of all, we concentrate on the assets for process improvement. We show some major tendencies in the evolution of Business Process Amelioration (BPA). The production-focused approach to services can mean significant process improvement and is therefore a good method for analyzing process improvement.
Parekh, Geet; DeLatte, David; Herman, Geoffrey L.; Oliva, Linda; Phatak, Dhananjay; Scheponik, Travis; Sherman, Alan T.
This paper presents and analyzes results of two Delphi processes that polled cybersecurity experts to rate cybersecurity topics based on importance, difficulty, and timelessness. These ratings can be used to identify core concepts--cross-cutting ideas that connect knowledge in the discipline. The first Delphi process identified core concepts that…
The seventeenth of a series of workshops sponsored by the IEEE Signal Processing Society and organized by the Machine Learning for Signal Processing Technical Committee (MLSP-TC). The field of machine learning has matured considerably in both methodology and real-world application domains and has become particularly important for the solution of problems in signal processing. As reflected in this collection, machine learning for signal processing combines many ideas from adaptive signal/image processing, learning theory and models, and statistics in order to solve complex real-world signal processing problems. The collection also contains two papers from the winners of the Data Analysis Competition. The program included papers in the following areas: genomic signal processing, pattern recognition and classification, image and video processing, blind signal processing, models, learning algorithms, and applications of machine learning...
The theory of human capital views education as a specific production factor and as a specific sort of capital. Besides this theory, alternative concepts of education have been developed. Filter (screening) theory, which focuses on the selective function of education, offers a different point of view for the economic analysis of phenomena in education. Signaling equilibria can be better or worse according to Pareto's concept of efficiency, depending on whether the difference between private and public educational returns is higher or lower.
Muhammad Ali Raza Anjum
A unified linear algebraic approach to adaptive signal processing (ASP) is presented. Starting from just Ax = b, key ASP algorithms are derived in a simple, systematic, and integrated manner without requiring any background knowledge of the field. Algorithms covered are Steepest Descent, LMS, Normalized LMS, Kaczmarz, Affine Projection, RLS, Kalman filter, and MMSE/Least Squares Wiener filters. By following this approach, readers will discover a synthesis; they will learn that one and only one equation is involved in all these algorithms. They will also learn that this one equation forms the basis of more advanced algorithms like reduced-rank adaptive filters, the extended Kalman filter, particle filters, multigrid methods, preconditioning methods, Krylov subspace methods and conjugate gradients. This will enable them to enter many sophisticated realms of modern research and development. Eventually, this one equation will not only become their passport to ASP but also to many highly specialized areas of computational science and engineering.
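For instance, LMS drops out of the Ax = b view by treating each incoming sample vector as one new row a of A and its desired response as the matching entry of b, then taking a stochastic gradient step on the instantaneous error. A minimal noiseless system-identification sketch (tap values, step size and iteration count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Unknown system to identify: d[n] = w_true . x[n]
w_true = np.array([0.5, -0.3, 0.2])
n_taps, n_iter, mu = 3, 5000, 0.05

w = np.zeros(n_taps)
for _ in range(n_iter):
    x = rng.normal(size=n_taps)   # one row "a" of Ax = b
    d = w_true @ x                # the matching entry of b
    e = d - w @ x                 # a-priori error
    w += mu * e * x               # LMS: stochastic step toward solving Ax = b
```

In the noiseless case the weight error contracts geometrically, so the estimate converges to the true taps; the other algorithms in the list differ mainly in how they step toward the same equation.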
This thesis describes the physics and applications of quantum dot semiconductor optical amplifiers through numerical simulations. As nano-structured materials with zero-dimensional quantum confinement, semiconductor quantum dot materials provide a number of unique physical properties compared with other semiconductor materials. The understanding of such properties is important in order to improve the performance of existing devices and to trigger the development of new semiconductor devices for different optical signal processing functionalities in the future. We present a detailed quantum dot semiconductor optical amplifier model incorporating a carrier dynamics rate equation model for quantum dots with inhomogeneous broadening as well as equations describing propagation. A phenomenological description has been used to model the intradot electron scattering between discrete quantum dot states
This book introduces the Statistical Drake Equation where, from a simple product of seven positive numbers, the Drake Equation is turned into the product of seven positive random variables. The mathematical consequences of this transformation are demonstrated and it is proven that the new random variable N for the number of communicating civilizations in the Galaxy must follow the lognormal probability distribution when the number of factors in the Drake equation is allowed to increase at will. Mathematical SETI also studies the proposed FOCAL (Fast Outgoing Cyclopean Astronomical Lens) space mission to the nearest Sun Focal Sphere at 550 AU and describes its consequences for future interstellar precursor missions and truly interstellar missions. In addition the author shows how SETI signal processing may be dramatically improved by use of the Karhunen-Loève Transform (KLT) rather than Fast Fourier Transform (FFT). Finally, he describes the efforts made to persuade the United Nations to make the central part...
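In the discrete case, the KLT advocated here is a projection onto the eigenvectors of the signal's covariance matrix, so the basis adapts to the data rather than being fixed like the FFT's sinusoids. A hedged numpy sketch on a synthetic ensemble (this is an illustration of the discrete KLT, not a SETI pipeline; all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 64, 400
t = np.arange(n)

# Ensemble: a random-phase sinusoid buried in white noise
X = np.stack([np.cos(2 * np.pi * 0.08 * t + rng.uniform(0, 2 * np.pi))
              + 0.1 * rng.normal(size=n) for _ in range(trials)])

# KLT basis: eigenvectors of the sample covariance matrix
C = (X.T @ X) / trials
evals, evecs = np.linalg.eigh(C)      # eigenvalues in ascending order
coeffs = X @ evecs                    # KLT coefficients per trial

energy = (coeffs ** 2).mean(axis=0)
frac = energy[-2:].sum() / energy.sum()   # energy in top-2 KLT components
```

The top two eigenvectors span the random-phase tone, so nearly all ensemble energy concentrates in two KLT coefficients; this energy compaction is the property the book exploits for weak-signal detection.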
Hao, Nan; Budnik, Bogdan A; Gunawardena, Jeremy; O'Shea, Erin K
Signaling pathways can induce different dynamics of transcription factor (TF) activation. We explored how TFs process signaling inputs to generate diverse dynamic responses. The budding yeast general stress-responsive TF Msn2 acted as a tunable signal processor that could track, filter, or integrate signals in an input-dependent manner. This tunable signal processing appears to originate from dual regulation of both nuclear import and export by phosphorylation, as mutants with one form of regulation sustained only one signal-processing function. Versatile signal processing by Msn2 is crucial for generating distinct dynamic responses to different natural stresses. Our findings reveal how complex signal-processing functions are integrated into a single molecule and provide a guide for the design of TFs with "programmable" signal-processing functions.
Contents: Continuous-Time and Discrete-Time Signals and Systems: Introduction; Continuous-Time Signals; Periodic Functions; Unit Step Function; Graphical Representation of Functions; Even and Odd Parts of a Function; Dirac-Delta Impulse; Basic Properties of the Dirac-Delta Impulse; Other Important Properties of the Impulse; Continuous-Time Systems; Causality, Stability; Examples of Electrical Continuous-Time Systems; Mechanical Systems; Transfer Function and Frequency Response. Convolution and Correlation: A Right-Sided and a Left-Sided Function; Convolution with an Impulse and Its Derivatives; Additional Convolution Properties; Correlation Function; Properties of the Correlation Function; Graphical Interpretation; Correlation of Periodic Functions; Average, Energy and Power of Continuous-Time Signals. Discrete-Time Signals: Periodicity; Difference Equations; Even/Odd Decomposition; Average Value, Energy and Power Sequences; Causality, Stability; Problems; Answers to Selected Problems. Fourier Series Expansion: Trigonometric Fourier Series; Exponential Fourier Series; Exponential versus ...
Tang, Rendong; Dai, Jiapei
The transmission and processing of neural information in the nervous system plays a key role in neural functions. It is well accepted that neural communication is mediated by bioelectricity and chemical molecules via processes called bioelectrical and chemical transmission, respectively. The traditional theories indeed give valuable explanations for the basic functions of the nervous system, but it is difficult to construct from them generally accepted concepts or principles that provide reasonable explanations of higher brain functions and mental activities, such as perception, learning and memory, emotion and consciousness. Therefore, many unanswered questions and debates over neural encoding and the mechanisms of neuronal networks remain. Cell-to-cell communication by biophotons, also called ultra-weak photon emissions, has been demonstrated in several plants, bacteria and certain animal cells. Recently, both experimental evidence and theoretical speculation have suggested that biophotons may play a potential role in neural signal transmission and processing, contributing to the understanding of the higher functions of the nervous system. In this paper, we review the relevant experimental findings and discuss the possible underlying mechanisms of biophoton signal transmission and processing in the nervous system.
ARL-TR-8276 • FEB 2018 • US Army Research Laboratory. Signal Processing for Time-Series Functions on a Graph, by Humberto Muñoz-Barona (Southern University), Jean Vettel, and … Previous research introduced signal processing on graphs, an approach to generalize signal processing tools such
Mu, Jiasong; Wang, Wei; Liang, Qilian; Pi, Yiming
The Proceedings of The Second International Conference on Communications, Signal Processing, and Systems provides the state-of-the-art developments of communications, signal processing, and systems. The conference covered such topics as wireless communications, networks, systems, and signal processing for communications. This book is a collection of contributions coming out of The Second International Conference on Communications, Signal Processing, and Systems (CSPS) held September 2013 in Tianjin, China.
The 21st IEEE International Workshop on Machine Learning for Signal Processing will be held in Beijing, China, on September 18–21, 2011. The workshop series is the major annual technical event of the IEEE Signal Processing Society's Technical Committee on Machine Learning for Signal Processing...
Berg, Tommy Winther; Uskov, A. V.; Bischoff, Svend
The dynamics of quantum dot semiconductor amplifiers are investigated theoretically with respect to their potential for ultrafast signal processing. The high-speed signal processing capacity of these devices is found to be limited by the wetting layer dynamics in the case of electrical pumping, while optical pumping partly removes this limitation. Also, the possibility of using spectral hole burning for signal processing is discussed.
STAR Performance with SPEAR (Signal Processing Electronic Attack RFIC). Luciano Boglione, Clayton Davis, Joel Goodman, Matthew McKeon, David Parrett, Sanghoon Shin and Naomi Walker, Naval Research Laboratory, Washington, DC, 20375. [Figure 1: The Signal Processing Electronic Attack RFIC (SPEAR) system.] Abstract: The Signal Processing Electronic Attack RFIC (SPEAR) is a simultaneous transmit and receive (STAR) system capable of
Liang, Qilian; Wang, Wei; Zhang, Baoju; Pi, Yiming
The Proceedings of The Third International Conference on Communications, Signal Processing, and Systems provides the state-of-the-art developments of communications, signal processing, and systems. This book is a collection of contributions from the conference and covers such topics as wireless communications, networks, systems, and signal processing for communications. The conference was held July 2014 in Hohhot, Inner Mongolia, China.
"Multistatic Active Sonar Signal Processing," IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vancouver, Canada, May 26-31. … 2015, Genoa, Italy, May 18-21. Honors/Awards/Prizes: Dr. Jian Li gave a plenary talk at the IEEE Sensor Array and Multichannel Signal Processing
During the formative years of irradiation processing, the 1950s and 1960s, there was laboratory and academic interest in the use of this form of energy transfer to initiate polymerization for the manufacture of plastics and in other chemical processes. Studies were often based on low-dose-rate Cobalt-60 systems. The electron beam (EB) accelerator technology of the time was not as yet at the robust and industrially reliable state that it is now at the beginning of the twenty-first century. A series of reactor designs illustrate how an electron beam can be incorporated into reactor vessels for initiating gas and liquid phase polymerizations on a continuous basis. Development of such approaches, which would rely upon contemporary, high current electron beams to initiate polymerization, would help the chemical processing industry alleviate its problems of catalyst disposal and its related environmental concerns. Systems for treating materials in bulk at low doses, such as those typically used for grain disinfection, at high through-put rates, are also illustrated. Simplified shielding is envisioned in each proposed process system
Meteor wind radar systems are a powerful tool for the study of the horizontal wind field in the mesosphere and lower thermosphere (MLT). While such systems have been operated for many years, virtually no literature has focused on radar system error analysis. Instrumental error may prevent scientists from drawing correct conclusions about geophysical variability. Radar system instrumental error comes from different sources, including hardware, software, and algorithms. Radar signal processing plays an important role in a radar system, and advanced signal processing algorithms may dramatically reduce radar system errors. In this dissertation, radar system error propagation is analyzed and several advanced signal processing algorithms are proposed to optimize the performance of the radar system without increasing instrument costs. The first part of this dissertation is the development of a time-frequency waveform detector, which is invariant to noise level and stable over a wide range of decay rates. This detector is proposed to discriminate underdense meteor echoes from the background white Gaussian noise. The performance of this detector is examined using Monte Carlo simulations. The resulting probability of detection is shown to outperform the often used power and energy detectors at the same probability of false alarm. Secondly, estimators to determine the Doppler shift, the decay rate and the direction of arrival (DOA) of meteors are proposed and evaluated. The performance of these estimators is compared with the analytically derived Cramer-Rao bound (CRB). The results show that the fast maximum likelihood (FML) estimator for determination of the Doppler shift and decay rate and the spatial spectral method for determination of the DOAs perform best among the estimators commonly used on other radar systems. For most cases, the mean square error (MSE) of the estimator meets the CRB above a 10 dB SNR. Thus meteor echoes with an estimated SNR below 10 dB are
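The time-frequency detector itself is not reproduced in the abstract; as a baseline for the comparison it describes, a Monte Carlo evaluation of the often used energy detector on a synthetic underdense-echo model can be sketched as follows (the decay constant, SNR and threshold setting are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n, trials = 64, 4000
t = np.arange(n)

snr_db = 10.0
amp = np.sqrt(10 ** (snr_db / 10))
# Underdense meteor echo model: exponentially decaying tone in white noise
echo = amp * np.exp(-t / 20.0) * np.cos(2 * np.pi * 0.2 * t)

def energy(x):
    return np.sum(x ** 2, axis=-1)

e_noise = energy(rng.normal(size=(trials, n)))        # H0: noise only
e_echo = energy(echo + rng.normal(size=(trials, n)))  # H1: echo present

thresh = np.quantile(e_noise, 0.99)   # threshold for ~1% false alarms
pd = np.mean(e_echo > thresh)         # estimated probability of detection
```

Fixing the threshold from the noise-only runs pins the false-alarm rate; the dissertation's time-frequency detector is reported to achieve a higher detection probability than this kind of energy detector at the same false-alarm rate.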
The future of the engineering discipline is arguably predicated heavily upon appealing to the future generation, in all its sensibilities. The greatest burden in doing so, one might rightly believe, lies on the shoulders of the educators. In examining the causal means by which the profession arrived at such a state, one finds that the technical revolution, precipitated by global war, had, as its catalyst, institutions as expansive as the government itself to satisfy the demand for engineers, who, as a result of such an existential crisis, were taught predominantly theoretical underpinnings to address a finite purpose. By contrast, the modern engineer, having expanded upon this vision and adapted to an evolving society, is increasingly placed in the proverbial role of the worker who must don many hats: not solely a scientist, yet often an artist; not a businessperson alone, but neither financially naive; not always a representative, though frequently a collaborator. Inasmuch as change then serves as the only constancy in a global climate, therefore, the educational system - if it is to mimic the demands of the industry - is left with an inherent need for perpetual revitalization to remain relevant. This work aims to serve that end. Motivated by existing research in engineering education, an epistemological challenge is molded into the framework of the electrical engineer with emphasis on digital signal processing. In particular, it is investigated whether students are better served by a learning paradigm that tolerates and, when feasible, encourages error via a medium free of traditional adjudication. Through the creation of learning modules using the Adobe Captivate environment, a wide range of fundamental knowledge in signal processing is challenged within the confines of existing undergraduate courses. It is found that such an approach not only conforms to the research agenda outlined for the engineering educator, but also reflects an often neglected reality
DISP-2003 is a two-term course given by CERN and University of Lausanne (UNIL) experts within the framework of the Technical Training Programme. The course will review current techniques in Digital Signal Processing, and it is intended for an audience who works or will work on digital signal processing aspects and who needs an introductory or refresher/update course. The course will be in English, with questions and answers also in French. Spring 2 Term: DISP-2003: Advanced Digital Signal Processing, 30 April 2003 - 21 May 2003, 4 lectures, Wednesday afternoons. Attendance cost: 40.- CHF, registration required. Lecturers: Léonard Studer, UNIL; Laurent Deniau, AT-MTM; Elena Wildner, AT-MAS. Programme: Intelligent signal processing (ISP). Non-linear time series analysis. Image processing. Wavelets. Basic concepts and definitions were introduced during the previous Spring 1 Term: DISP-2003: Introduction to Digital Signal Processing. DISP-2003 is open to all people interested, but registrat...
Twogood, R. E.
The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in the volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has requirements unique from the others, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the obtaining of two-dimensional (2-D) computer-aided tomography (CAT) images. A medical decision might be made while the patient is still under observation rather than days later.
Kepner, J. V.; Janka, R. S.; Lebak, J.; Richards, M. A.
The Vector/Signal/Image Processing Library (VSIPL) is a DARPA-initiated effort made up of industry, government and academic representatives who have defined an industry-standard API for vector, signal, and image processing primitives for real-time signal processing on high-performance systems. VSIPL supports a wide range of data types (int, float, complex, ...) and layouts (vectors, matrices and tensors) and is ideal for astronomical data processing. The VSIPL API is intended to serve as an open, vendor-neutral, industry-standard interface. The object-based VSIPL API abstracts the memory architecture of the underlying machine by using the concept of memory blocks and views. Early experiments with VSIPL code conversions have been carried out by the High Performance Computing Program team at UCSD. Commercially, several major vendors of signal processors are actively developing implementations. VSIPL has also been explicitly required as part of a recent Rome Labs teraflop procurement. This poster presents the VSIPL API, its functionality and the status of various implementations.
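VSIPL's block/view separation is close in spirit to numpy's distinction between a flat buffer and the strided views laid over it; the sketch below uses that analogy to illustrate the shared-storage idea (it is numpy, not the VSIPL C API):

```python
import numpy as np

# A "block": one flat, contiguous allocation of raw data
block = np.arange(12, dtype=np.float32)

# "Views" impose shape/stride interpretations on the same memory
vector_view = block[2:8]           # 6-element vector view
matrix_view = block.reshape(3, 4)  # 3x4 matrix view
strided_view = block[::2]          # every other element

# All views share the block's storage: a write through one view
# is visible through the block and every overlapping view
matrix_view[0, 3] = 99.0
```

In VSIPL proper, the same roles are played by block objects bound to vector, matrix and tensor views; abstracting storage this way is what lets implementations relocate data to suit the machine's memory architecture.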
Kelly, J. J.
A detailed analysis of signal processing concerns for measuring aircraft flyover noise is presented. The development of a de-Dopplerization scheme for both corrected time-history and spectral data is discussed, along with an analysis of motion effects on measured spectra. A computer code was written to implement the de-Dopplerization scheme. Inputs to the code are the aircraft position data and the pressure time histories. To facilitate ensemble averaging, a level uniform flyover is considered in the study, but the code can accept more general flight profiles. The effects of spectral smearing and its removal are discussed. Using test data acquired from an XV-15 tilt-rotor flyover, comparisons are made between the measured and corrected spectra. Frequency shifts are accurately accounted for by the de-Dopplerization procedure. It is shown that correcting for spherical spreading and Doppler amplitude, along with frequency, can give some idea of noise source directivity. The analysis indicated that smearing increases with frequency and is more severe on approach than on recession.
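The frequency part of de-Dopplerization follows the classical moving-source relation f_obs = f_src / (1 - M cos θ); correcting a measured tone amounts to multiplying back by the Doppler factor. A small sketch (the speed, tone and emission angles are illustrative, not XV-15 values):

```python
import numpy as np

c = 340.0       # speed of sound, m/s
v = 80.0        # aircraft speed, m/s (illustrative level flyover)
f_src = 1000.0  # emitted tone, Hz

def doppler_factor(v, c, theta_deg):
    # theta: angle between flight path and the ray to the microphone
    # (< 90 deg on approach, > 90 deg on recession)
    return 1.0 - (v / c) * np.cos(np.radians(theta_deg))

f_obs_app = f_src / doppler_factor(v, c, 60.0)    # approach: shifted up
f_obs_rec = f_src / doppler_factor(v, c, 120.0)   # recession: shifted down

# De-Dopplerization: multiply the observed frequency back by the factor
f_corrected = f_obs_app * doppler_factor(v, c, 60.0)
```

Because the emission angle changes continuously along the flight path, the actual code must apply a time-varying factor derived from the aircraft position data, which is what makes the resampling scheme nontrivial.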
Computing is rapidly becoming ubiquitous as users expect devices that can augment and interact naturally with the world around them. In these systems it is necessary to have an acoustic front-end that is able to capture and reproduce natural human communication. Whether the end point is a speech recognizer or another human listener, the reduction of noise, reverberation, and acoustic echoes are all necessary and complex challenges. The focus of this dissertation is to provide a general method for approaching these problems using spherical microphone and loudspeaker arrays. In this work, a theory of capturing and reproducing three-dimensional acoustic fields is introduced from a signal processing perspective. In particular, the decomposition of the spatial part of the acoustic field into an orthogonal basis of spherical harmonics provides not only a general framework for analysis, but also many processing advantages. The spatial sampling error limits the upper frequency range with which a sound field can be accurately captured or reproduced. In broadband arrays, the cost and complexity of using multiple transducers is an issue. This work provides a flexible optimization method for determining the location of array elements to minimize the spatial aliasing error. The low-frequency array processing ability is also limited by the SNR, mismatch, and placement error of transducers. To address this, a robust processing method is introduced and used to design a reproduction system for rendering over arbitrary loudspeaker arrays or binaurally over headphones. In addition to the beamforming problem, the multichannel acoustic echo cancellation (MCAEC) issue is also addressed. A MCAEC must adaptively estimate and track the constantly changing loudspeaker-room-microphone response to remove the sound field presented over the loudspeakers from that captured by the microphones. In the multichannel case, the system is overdetermined and many adaptive schemes fail to converge to
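A minimal illustration of the orthonormal spherical-harmonic basis underlying such arrays: project a synthetic field with known expansion coefficients onto low-order real harmonics and recover the coefficients by quadrature (numpy only; the grid resolution and the field are illustrative):

```python
import numpy as np

# Low-order real spherical harmonics (theta: polar angle, phi: azimuth)
def Y00(theta, phi):
    return np.full_like(theta, 0.5 * np.sqrt(1.0 / np.pi))

def Y10(theta, phi):
    return 0.5 * np.sqrt(3.0 / np.pi) * np.cos(theta)

# Midpoint-rule quadrature grid over the sphere
nth, nph = 400, 400
theta = (np.arange(nth) + 0.5) * np.pi / nth
phi = (np.arange(nph) + 0.5) * 2 * np.pi / nph
T, P = np.meshgrid(theta, phi, indexing="ij")
dOmega = np.sin(T) * (np.pi / nth) * (2 * np.pi / nph)

def inner(f, g):
    return np.sum(f * g * dOmega)   # integral of f*g over the sphere

# Synthetic pressure field with known expansion coefficients
field = 2.0 * Y00(T, P) + 0.5 * Y10(T, P)
c00 = inner(field, Y00(T, P))       # recovered coefficients via projection
c10 = inner(field, Y10(T, P))
```

Orthonormality is what makes the coefficients recoverable by simple projection; in a real array the continuous integral is replaced by a finite set of microphone positions, which is the source of the spatial sampling error the dissertation discusses.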
Spray, S.D.; Cooper, J.A.
The purpose of a unique signal (UQS) in a nuclear weapon system is to provide an unambiguous communication of intent to detonate from the UQS information input source device to a stronglink safety device in the weapon in a manner that is highly unlikely to be duplicated or simulated in normal environments and in a broad range of ill-defined abnormal environments. This report presents safety considerations for the design and implementation of UQSs in the context of the overall safety system.
Xu Chi; Guo Dongming; Jin Zhuji; Kang Renke
A signal processing method for the friction-based endpoint detection system of a chemical mechanical polishing (CMP) process is presented. The signal processing method uses wavelet threshold denoising to reduce the noise contained in the measured original signal, extracts the Kalman filter innovation from the denoised signal as the feature signal, and judges the CMP endpoint based on features of the Kalman filter innovation sequence during the CMP process. Applying this signal processing method, endpoint detection experiments for the Cu CMP process were carried out. The results show that the signal processing method can judge the endpoint of the Cu CMP process. (semiconductor technology)
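A hedged sketch of this kind of pipeline: one-level Haar soft thresholding followed by a scalar random-walk Kalman filter whose innovation spikes at a level shift. The synthetic friction signal, noise levels, threshold rule and endpoint decision are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

rng = np.random.default_rng(4)
n, change = 512, 300
# Synthetic friction signal: level drop at the endpoint, plus noise
raw = np.where(np.arange(n) < change, 1.0, 0.6) + 0.05 * rng.normal(size=n)

# Step 1: one-level Haar wavelet soft-threshold denoising
a = (raw[0::2] + raw[1::2]) / np.sqrt(2)   # approximation coefficients
d = (raw[0::2] - raw[1::2]) / np.sqrt(2)   # detail coefficients
thr = np.median(np.abs(d)) / 0.6745 * np.sqrt(2 * np.log(n))  # universal threshold
d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)             # soft threshold
den = np.empty(n)
den[0::2] = (a + d) / np.sqrt(2)           # inverse Haar transform
den[1::2] = (a - d) / np.sqrt(2)

# Step 2: scalar Kalman filter (random-walk state), saving innovations
q, r = 1e-5, 0.05 ** 2      # assumed process / measurement noise variances
x, p = den[0], 1.0
innov = np.zeros(n)
for k in range(n):
    p = p + q
    innov[k] = den[k] - x   # Kalman innovation (prediction error)
    kgain = p / (p + r)
    x = x + kgain * innov[k]
    p = (1.0 - kgain) * p

# Illustrative endpoint rule: largest innovation magnitude
endpoint = int(np.argmax(np.abs(innov[1:]))) + 1
```

The slow random-walk model tracks the friction level between shifts, so the innovation sequence stays near the noise floor until the endpoint, where it jumps by roughly the size of the level drop.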
Fu, Chi Yung; Petrich, Loren I.
The method and system described herein use a biologically based signal processing system for noise removal and signal extraction. A wavelet transform may be used in conjunction with a neural network to imitate a biological system. The neural network may be trained using ideal data derived from physical principles or noiseless signals to learn to remove noise from the signal.
Kring, C.T.; Babcock, S.M.; Watkin, D.C.; Oliver, R.P.
The Field Artillery Ammunition Processing System (FAAPS) is an initiative to introduce a palletized load system (PLS) that is transportable with an automated ammunition processing and storage system for use on the battlefield. System proponents have targeted a 20% increase in the ammunition processing rate over the current operation while simultaneously reducing the total number of assigned field artillery battalion personnel by 30. The overall objective of the FAAPS Project is the development and demonstration of an improved process to accomplish these goals. The initial phase of the FAAPS Project and the subject of this study is the FAAPS concept evaluation. The concept evaluation consists of (1) identifying assumptions and requirements, (2) documenting the process flow, (3) identifying and evaluating technologies available to accomplish the necessary ammunition processing and storage operations, and (4) presenting alternative concepts with associated costs, processing rates, and manpower requirements for accomplishing the operation. This study provides insight into the achievability of the desired objectives.
van den Broek, Egon; Janssen, Joris H.; van der Zwaag, Marjolein D.; Healey, Jennifer A.; Fred, A.; Filipe, J.; Gamboa, H.
This is the third part in a series on prerequisites for affective signal processing (ASP). So far, six prerequisites were identified: validation (e.g., mapping of constructs on signals), triangulation, a physiology-driven approach, and contributions of the signal processing community (van den Broek
Hansen, Benjamin Loer; Riis, Jesper; Hvam, Lars
This paper presents terminologies and concepts related to the IT automation of specification processes in companies manufacturing custom-made products. Based on 11 cases from Danish industry, the most significant development trends are discussed.
Liu, Fu-Hua; Moreno, Pedro J; Stern, Richard M; Acero, Alejandro
… The first algorithm, phone-dependent cepstral compensation, is similar in concept to the previously described MFCDCN method, except that cepstral compensation vectors are selected according to the current phonetic…
Thompson, Stephen L.; Lotter, Christine; Fann, Xumei; Taylor, Laurie
Researchers examined how an inquiry-based instructional treatment emphasizing interrelated plant processes influenced 210 elementary pre-service teachers' (PTs) conceptions of three plant processes, photosynthesis, cellular respiration, and transpiration, and the interrelated nature of these processes. The instructional treatment required PTs to…
Misaridis, Thanassis; Jensen, Jørgen Arendt
This paper, the first from a series of three papers on the application of coded excitation signals in medical ultrasound, discusses the basic principles and ultrasound-related problems of pulse compression. The concepts of signal modulation and matched filtering are given, and a simple model of a...
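The pulse-compression principle the paper introduces can be sketched numerically: transmit a linear FM chirp and apply a matched filter (correlation with the transmitted waveform) to the noisy echo. This is a generic textbook illustration, not the paper's ultrasound model; all parameter values below are invented.

```python
import numpy as np

# Matched-filter pulse compression sketch (illustrative parameters only).
fs = 1e6                       # sample rate [Hz]
T = 1e-3                       # chirp duration [s]
t = np.arange(0, T, 1 / fs)
f0, f1 = 10e3, 400e3           # chirp start/stop frequencies [Hz]
chirp = np.cos(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / T * t**2))

# Received trace: the chirp delayed by 300 samples and buried in noise.
delay = 300
rx = np.zeros(4 * len(chirp))
rx[delay:delay + len(chirp)] += chirp
rx += 0.4 * np.random.default_rng(0).normal(size=len(rx))

# Matched filtering compresses the long chirp into a sharp peak at the delay.
compressed = np.correlate(rx, chirp, mode="valid")
peak = int(np.argmax(np.abs(compressed)))
print(peak)   # peak lag recovers the echo delay
```

The compression gain is what lets a long, low-peak-power excitation achieve the range resolution of a short pulse.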
Clausen, Anders; Mulvad, Hans Christian Hansen; Palushani, Evarist
To ensure that ultra-high-speed serial data signals can be utilised in future optical communication networks, it is indispensable to have all-optical signal processing elements at our disposal. In this paper, the most recent advances in our use of non-linear materials incorporated in different function blocks for high-speed signal processing are reviewed.
Pazos Garcia, Antonio A.
This work has two clearly differentiated parts. The first concerns the instrumental development of a digital short-period seismic station, while in the second some aspects of digital signal processing are studied. In the first part, it is shown that a simple capacitor in series with the load resistance increases the bandwidth of the response by more than 30%, without any reduction of the effective gain and without increasing the internal noise relative to the classical load. The acquisition system has also been developed, based on the CS5323/CS5322 analogue-to-digital converter, which provides a dynamic range of 130 decibels at 125 mps. Data acquisition through the parallel port of an embedded PC running the LINUX operating system is an innovation in this instrumental field. The necessary programs and drivers were developed. The synchronization system was implemented as a software PLL that permits a precision better than one millisecond. Finally, the proposed calibration methods, based on measuring the equivalent impedance of the sensor (parametric method), as well as modifications of the empirical calibration method that compares the responses of two sensors, have been decisive, suggesting that the usually accepted models suffer from parasitic capacitances that would explain the observed differences between the two methods. In the second part, a detailed analysis of the design of digital filters, both FIR and IIR, is presented. A non-linear filter that applies coherent structures by levels, based on the wavelet transform, is proposed. It includes the detection and reduction of "spikes" and a method for filtering periodic noises based on time-domain Fourier series. Finally, an exhaustive comparison of several detection algorithms, working on a single component, is made, analysing their detection percentages and "picking" capabilities. The results show that none of them is able to adapt to all circumstances, highlighting those based on
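As a pointer to the FIR side of the filter-design discussion, here is a generic windowed-sinc low-pass design (a standard textbook method, not necessarily the one used for the seismic station); tap count, cutoff, and the Hamming taper are arbitrary choices.

```python
import numpy as np

# Generic windowed-sinc FIR low-pass design (illustrative, arbitrary specs).
def fir_lowpass(num_taps, cutoff):
    """cutoff is a fraction of the Nyquist frequency, 0 < cutoff < 1."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = cutoff * np.sinc(cutoff * n)        # ideal low-pass impulse response
    h *= np.hamming(num_taps)               # taper to control sidelobes
    return h / h.sum()                      # unity gain at DC

h = fir_lowpass(101, 0.2)

# Evaluate the frequency response at DC and deep in the stopband (0.6 Nyquist).
dc_gain = abs(h.sum())
stop_gain = abs(h @ np.exp(-1j * np.pi * 0.6 * np.arange(len(h))))
print(dc_gain, stop_gain)   # unity at DC, strongly attenuated in the stopband
```

The window choice trades mainlobe width (transition sharpness) against sidelobe level (stopband attenuation), which is the central design decision in this method.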
This book presents the cutting-edge technologies of knee joint vibroarthrographic signal analysis for the screening and detection of knee joint injuries. It describes a number of effective computer-aided methods for analysis of the nonlinear and nonstationary biomedical signals generated by complex physiological mechanics. This book also introduces several popular machine learning and pattern recognition algorithms for biomedical signal classifications. The book is well-suited for all researchers looking to better understand knee joint biomechanics and the advanced technology for vibration arthrometry. Dr. Yunfeng Wu is an Associate Professor at the School of Information Science and Technology, Xiamen University, Xiamen, Fujian, China.
Cleland, W.E.; Stern, E.G.
We present the results of a study of the effects of thermal and pileup noise in liquid ionization calorimeters operating in a high luminosity environment. The method of optimal filtering of multiply-sampled signals, which may be used to improve the timing and amplitude resolution of calorimeter signals, is described, and its implications for signal shaping functions are examined. The dependence of the time and amplitude resolution on the relative strength of the pileup and thermal noise, which varies with such parameters as luminosity, rapidity and calorimeter cell size, is examined.
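The core of optimal filtering of a multiply-sampled pulse is a weighted sum of the samples, with weights built from the known pulse shape g and noise covariance C via the standard least-squares result a = C⁻¹g / (gᵀC⁻¹g). The pulse shape and covariance values below are invented for illustration, not the calorimeter's.

```python
import numpy as np

# Least-squares "optimal filter" sketch for a multiply-sampled pulse:
# samples s = A*g + n with noise covariance C; weights a = C^-1 g / (g^T C^-1 g)
# give an unbiased, minimum-variance linear estimate of the amplitude A.
rng = np.random.default_rng(1)
g = np.array([0.1, 0.6, 1.0, 0.7, 0.3])          # known pulse shape (invented)
C = 0.04 * np.eye(5) + 0.01 * np.ones((5, 5))    # "thermal" + correlated "pileup"
a = np.linalg.solve(C, g)
a = a / (g @ a)                                  # normalize so a^T g = 1 (unbiased)

A_true = 5.0
noise = rng.multivariate_normal(np.zeros(5), C, size=2000)
est = (A_true * g + noise) @ a                   # one estimate per noisy pulse
mean_est = est.mean()
print(mean_est)   # close to A_true
```

Changing the relative strength of the diagonal (thermal) and off-diagonal (pileup-like) terms in C changes the weights, which is the paper's point about how the optimum depends on the noise mix.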
Covariance Bounds," Proc 27th Asilomar Conf on Signals, Systems, and Computers, Pacific Grove, CA (November 1993). [MuS91] C. T. Mullis and L. L. Scharf…"Transforms," Proc Asilomar Conf on Signals, Systems, and Computers, Asilomar, CA (November 1991). [SpS94] M. Spurbeck and L. L. Scharf, "Least Squares…McWhorter and L. L. Scharf, "Multiwindow Estimators of Correlation," Proc 28th Annual Asilomar Conf on Signals, Systems, and Computers, Asilomar, CA
Fotheringham, Edeline B.
Have you ever found yourself listening to the music playing from the closest stereo rather than to the bromidic (uninspiring) person speaking to you? Your ears receive information from two sources but your brain listens to only one. What if your cell phone could distinguish among signals sharing the same bandwidth too? There would be no "full" channels to stop you from placing or receiving a call. This thesis presents a nonlinear optical circuit capable of distinguishing uncorrelated signals that have overlapping temporal bandwidths. This so-called autotuning filter is the size of a U.S. quarter dollar and requires less than 3 mW of optical power to operate. It is basically an oscillator in which the losses are compensated with dynamic holographic gain. The combination of two photorefractive crystals in the resonator governs the filter's winner-take-all dynamics through signal competition for gain. This physical circuit extracts what is mathematically referred to as the largest principal component of its spatio-temporal input space. The circuit's practicality is demonstrated by its incorporation in an RF-photonic system. An unknown mixture of unknown microwave signals, received by an antenna array, constitutes the input to the system. The output electronically returns one of the original microwave signals. The front-end of the system down-converts the 10 GHz microwave signals and amplifies them before the signals phase-modulate optical beams. The optical carrier is suppressed from these beams so that it may not be considered as a signal itself to the autotuning filter. The suppression is achieved with two-beam coupling in a single photorefractive crystal. The filter extracts the more intense of the signals present on the carrier-suppressed input beams. The detection of the extracted signal restores the microwave signal to an electronic form. The system, without the receiving antenna array, is packaged in a 13 x 18 x 6″ briefcase. Its power consumption equals that
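The "largest principal component" extraction that the optical circuit performs physically has a simple numerical analogue: power iteration on the sample covariance of a two-sensor mixture converges to the dominant eigenvector, which picks out the more intense signal. The signals and mixing matrix below are made up for illustration.

```python
import numpy as np

# Power-iteration analogue of winner-take-all principal component extraction.
t = np.arange(2000)
strong = np.sin(0.07 * t)                      # more intense signal
weak = 0.3 * np.sin(0.19 * t + 1.0)            # weaker, uncorrelated signal
mix = np.array([[0.9, 0.4], [0.4, 0.9]])       # unknown mixing (hypothetical)
X = mix @ np.vstack([strong, weak])            # 2 "sensors" x 2000 samples

C = X @ X.T / X.shape[1]                       # sample covariance
v = np.ones(2)
for _ in range(100):                           # power iteration
    v = C @ v
    v /= np.linalg.norm(v)

comp = v @ X                                   # extracted component
corr = np.corrcoef(comp, strong)[0, 1]
print(abs(corr))   # strongly correlated with the more intense signal
```

Like the optical oscillator, the iteration "locks on" to the strongest component because each pass amplifies it relative to the weaker one.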
van den Broek, Egon; Janssen, Joris H.; Westerink, Joyce H.D.M.; Healey, Jennifer A.; Encarnacao, P.; Veloso, A.
Although emotions are embraced by science, their recognition has not reached a satisfying level. Through a concise overview of affect, its signals, features, and classification methods, we provide understanding for the problems encountered. Next, we identify the prerequisites for successful
Kozel, David; Nelson, Richard
A signal-to-noise-ratio-dependent adaptive spectral subtraction algorithm is developed to eliminate noise from noise-corrupted speech signals. The algorithm determines the signal-to-noise ratio and adjusts the spectral subtraction proportion appropriately. After spectral subtraction, low-amplitude signals are squelched. A single microphone is used to obtain both the noise-corrupted speech and the average noise estimate. This is done by determining whether the frame of data being sampled is a voiced or unvoiced frame. During unvoiced frames an estimate of the noise is obtained. A running average of the noise is used to approximate the expected value of the noise. Applications include the emergency egress vehicle and the crawler transporter.
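The classic spectral-subtraction step that the algorithm adapts can be sketched as follows. This is the plain fixed-proportion method, not the authors' SNR-adaptive variant, and the signal, noise level, and frame size are invented.

```python
import numpy as np

# Plain spectral subtraction sketch (fixed proportion, illustrative only).
def spectral_subtract(frame, noise_mag):
    spec = np.fft.rfft(frame)
    mag = np.maximum(np.abs(spec) - noise_mag, 0.0)  # subtract, floor at zero
    return np.fft.irfft(mag * np.exp(1j * np.angle(spec)), len(frame))

rng = np.random.default_rng(3)
n = 512
tone = np.sin(2 * np.pi * 40 * np.arange(n) / n)     # stand-in for voiced speech

# Average noise magnitude spectrum, as if estimated from unvoiced frames.
noise_mag = np.mean([np.abs(np.fft.rfft(0.3 * rng.normal(size=n)))
                     for _ in range(50)], axis=0)

noisy = tone + 0.3 * rng.normal(size=n)
cleaned = spectral_subtract(noisy, noise_mag)
err_before = np.mean((noisy - tone) ** 2)
err_after = np.mean((cleaned - tone) ** 2)
print(err_before, err_after)   # error drops after subtraction
```

The flooring at zero is what produces the "musical noise" artifacts that motivate SNR-dependent refinements like the one described in the abstract.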
Webster, John G
ELECTROMAGNETIC VARIABLES MEASUREMENT: Voltage Measurement; Current Measurement; Power Measurement; Power Factor Measurement; Phase Measurement; Energy Measurement; Electrical Conductivity and Resistivity; Charge Measurement; Capacitance and Capacitance Measurements; Permittivity Measurement; Electric Field Strength; Magnetic Field Measurement; Permeability and Hysteresis Measurement; Inductance Measurement; Immittance Measurement; Q Factor Measurement; Distortion Measurement; Noise Measurement; Microwave Measurement. SIGNAL PROCESSING: Amplifiers and Signal Conditioners; Modulation; Filters; Spectrum Analysis and Correlat
In this article we argue that the problem of the relationships between concepts and perception in cognitive science is blurred by the fact that the very notion of concept is rather confused. Since it is not always clear exactly what concepts are, it is not easy to say, for example, whether and in what measure concept possession involves entertaining and manipulating perceptual representations, whether concepts are entirely different from perceptual representations, and so on. As a paradigmatic example of this state of affairs, we will start by taking into consideration the distinction between conceptual and nonconceptual content. The analysis of such a distinction will lead us to the conclusion that concept is a heterogeneous notion. Then we shall take into account the so-called dual process theories of mind; this approach also points to concepts being a heterogeneous phenomenon: different aspects of conceptual competence are likely to be ascribed to different types of systems. We conclude that without a clear specification of what concepts are, the problem of the relationships between concepts and perception is somewhat ill-posed.
Thornley, Tracey; West, Sandra
Individuals come to understand abstract constructs such as that of the 'expert' through the formation of concepts. Time and repeated opportunity for observation to support the generalisation and abstraction of the developing concept are essential if the concept is to form successfully. Development of an effective concept of the 'expert nurse' is critical for early career nurses who are attempting to integrate theory, values and beliefs as they develop their clinical practice. This study explores the use of a concept development framework in a grounded theory study of the 'expert nurse'. Qualitative. Using grounded theory methods for data collection and analysis, semi-structured interviews were conducted with registered nurses. The participants were asked to describe their concept of the 'expert nurse' and to discuss their experience of developing this. Participants reported forming their concept of the 'expert nurse', after multiple opportunities to engage with nurses identified as 'expert'. This identification did not necessarily relate to the designated position of the 'expert nurse' or assigned mentors. When the early career nurse does not successfully form a concept of the 'expert nurse', difficulties in personal and professional development including skill/knowledge development may arise. To underpin development of their clinical practice effectively, early career nurses need to be provided with opportunities that facilitate the purposive formation of their own concept of the 'expert nurse'. Formation of this concept is not well supported by the common practice of assigning mentors. Early career nurses must be provided with the time and the opportunity to individually develop and refine their concept of the 'expert nurse'. To achieve this, strategies including providing opportunities to engage with expert nurses and discussion of the process of concept formation and its place in underpinning personal judgments may be of assistance. © 2010 Blackwell Publishing
Erskine, David J.
A method of signal processing a high-bandwidth signal by coherently subdividing it into many narrow-bandwidth channels which are individually processed at lower frequencies in a parallel manner. Autocorrelations and correlations can be performed using reference frequencies which may drift slowly with time, reducing the cost of the device. Coordinated adjustment of the channel phases alters the temporal and spectral behavior of the net signal process more precisely than a channel used individually. This is a method of implementing precision long coherent delays, interferometers, and filters for high-bandwidth optical or microwave signals using low-bandwidth electronics. High-bandwidth signals can be recorded, mathematically manipulated, and synthesized.
Modern business is developing in the context of rapid social and political changes, which contribute to changes in economic and cultural priorities as well as in people's mindset and behaviour. This puts new requirements on the development and implementation of business negotiation strategies, aiming to ensure that during bargaining everything is done to understand the other party and the related contexts, to achieve mutual understanding, to reach common agreement and eventually to find the optimal negotiating decision. The author of this article researched and analysed negotiation process concepts in the global scientific literature and practice. The article examines negotiation and bargaining concepts. The global analysis of the scientific literature also revealed that there is no single negotiation planning concept. The author defines the basic concepts of negotiation planning. The paper deals with negotiation strategy conceptions used by scientists around the world. The conclusions present proposals for further business negotiation research. Article in Lithuanian.
These proceedings contain refereed papers presented at the Fifteenth IEEE Workshop on Machine Learning for Signal Processing (MLSP’2005), held in Mystic, Connecticut, USA, September 28-30, 2005. This is a continuation of the IEEE Workshops on Neural Networks for Signal Processing (NNSP) organized by the NNSP Technical Committee of the IEEE Signal Processing Society. The name of the Technical Committee, and hence of the Workshop, was changed to Machine Learning for Signal Processing in September 2003 to better reflect the areas represented by the Technical Committee. The conference is organized by the Machine Learning for Signal Processing Technical Committee with sponsorship of the IEEE Signal Processing Society. Following the practice started two years ago, the bound volume of the proceedings will be published by IEEE following the Workshop, and we are pleased to offer to conference attendees…
Chen, Jian; Chen, Hong; Cai, Xiaoxia; Weng, Pengfei; Nie, Hao
With the development of mathematical morphology theory, the application of mathematical morphology in image processing has become very extensive; in recent years, with in-depth study of mathematical morphology, its applications in signal processing have been receiving more and more attention. As a nonlinear signal processing method, it performs signal feature extraction in the time domain and, compared with some other nonlinear and non-stationary signal processing methods, it has many advantages, such as no phase offset and no amplitude attenuation, so the method is applied to signal processing in various industries. This paper mainly expounds the basic theory of mathematical morphology and puts forward a mathematical morphology denoising pretreatment method. Finally, the paper summarizes the application of mathematical morphology in speech signal processing and its combination with neural networks.
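A minimal sketch of 1-D grey-scale morphology as a denoising pretreatment, assuming a flat structuring element: erosion is a sliding minimum, dilation a sliding maximum, and the average of the open-close and close-open cascades suppresses impulse noise of both polarities. This is the generic construction, not the paper's specific method; signal and spike values are invented.

```python
import numpy as np

# 1-D grey-scale morphology with a flat structuring element of length k.
def erode(x, k):
    pad = np.pad(x, k // 2, mode="edge")
    return np.array([pad[i:i + k].min() for i in range(len(x))])

def dilate(x, k):
    pad = np.pad(x, k // 2, mode="edge")
    return np.array([pad[i:i + k].max() for i in range(len(x))])

def opening(x, k):
    return dilate(erode(x, k), k)      # suppresses narrow positive spikes

def closing(x, k):
    return erode(dilate(x, k), k)      # suppresses narrow negative spikes

def morph_filter(x, k=5):
    # Average of open-close and close-open removes spikes of either sign.
    return 0.5 * (closing(opening(x, k), k) + opening(closing(x, k), k))

sig = np.sin(np.linspace(0, 2 * np.pi, 200))
spiky = sig.copy()
spiky[50] += 3.0                       # impulse noise, both polarities
spiky[120] -= 3.0
out = morph_filter(spiky)
print(np.max(np.abs(out - sig)))       # spikes largely removed
```

Because only min/max operations are involved, there is no phase shift or amplitude attenuation of features wider than the structuring element, which is the advantage the abstract highlights.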
Zhang, Enzheng; Chen, Benyong; Yan, Liping; Yang, Tao; Hao, Qun; Dong, Wenjun; Li, Chaorong
A novel phase measurement method composed of rising-edge locked signal processing and digital frequency mixing is proposed for a laser heterodyne interferometer. The rising-edge locked signal processing, which employs a high-frequency clock signal to lock the rising edges of the reference and measurement signals, not only improves the steepness of the rising edge but also eliminates the error counting caused by the multi-rising-edge phenomenon in fringe counting. The digital frequency mixing is realized by mixing the digital interference signal with a digital base signal, which differs from conventional frequency mixing with analogue signals. This signal processing improves the measurement accuracy and enhances anti-interference capability and measurement stability. The principle and implementation of the method are described in detail. An experimental setup was constructed, and a series of experiments verified the feasibility of the method in large-displacement measurement with high speed and nanometer resolution.
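The digital frequency-mixing step can be illustrated in isolation: multiply the digitized interference signal by a numerically generated base signal, then low-pass (here, simply average over an integer number of beat cycles) so that only the phase-bearing difference term survives. The frequencies and phase below are arbitrary stand-ins, not the instrument's.

```python
import numpy as np

# Digital frequency mixing sketch: recover the phase of a heterodyne beat.
fs = 1e6                                   # sample rate [Hz]
f_beat = 50e3                              # heterodyne beat frequency [Hz]
phi = 0.7                                  # displacement-induced phase [rad]
t = np.arange(4000) / fs                   # integer number of beat cycles
meas = np.cos(2 * np.pi * f_beat * t + phi)

base = np.exp(-2j * np.pi * f_beat * t)    # digital base (reference) signal
phase = np.angle(np.mean(meas * base))     # sum-frequency term averages out
print(phase)   # recovers phi
```

Because the base signal is generated numerically, its frequency and phase are exact, which is the advantage over mixing with analogue signals noted in the abstract.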
Xuan Huang; Wenhua Zeng
With the development of computer technology and informatization, network, sensor, and communication technologies have become three essential components of the information industry. As the core technique of sensor applications, signal processing largely determines sensor performance. For this reason, the study of signal processing modes is very important to sensors and to the application of sensor networks. In this paper, we introduce a new coarse sensor signal processing mode based on ad...
The 21st IEEE International Workshop on Machine Learning for Signal Processing will be held in Beijing, China, on September 18–21, 2011. The workshop series is the major annual technical event of the IEEE Signal Processing Society's Technical Committee on Machine Learning for Signal Processing. This year the workshop is held in the National Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences.
Prabhu, K M M
Window functions, otherwise known as weighting functions, tapering functions, or apodization functions, are mathematical functions that are zero-valued outside the chosen interval. They are well established as a vital part of digital signal processing. Window Functions and their Applications in Signal Processing presents an exhaustive and detailed account of window functions and their applications in signal processing, focusing on the areas of digital spectral analysis, design of FIR filters, pulse compression radar, and speech signal processing. Comprehensively reviewing previous research and re
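A quick numerical illustration of why windows matter for spectral analysis, using the Hann window as one example: for a tone that does not fall exactly on an FFT bin, the far-from-peak leakage is orders of magnitude lower than with no window at all (i.e., a rectangular window). The tone frequency and record length are arbitrary.

```python
import numpy as np

# Spectral leakage: rectangular (no window) vs Hann, non-bin-centered tone.
n = 256
x = np.sin(2 * np.pi * 10.37 * np.arange(n) / n)   # non-integer cycle count

def far_leakage(sig):
    spec = np.abs(np.fft.rfft(sig))
    return spec[60:].max() / spec.max()             # far sidelobe level vs peak

rect = far_leakage(x)                               # rectangular window
hann = far_leakage(x * np.hanning(n))               # Hann window
print(rect, hann)   # hann is far smaller
```

The price paid is a wider mainlobe, i.e. coarser frequency resolution; choosing that trade-off is exactly what the book's survey of window families is about.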
Poplová, Michaela; Sovka, Pavel; Cifra, Michal
Photonic signals are broadly exploited in communication and sensing and they typically exhibit Poisson-like statistics. In a common scenario where the intensity of the photonic signals is low and one needs to remove a nonstationary trend of the signals for any further analysis, one faces an obstacle: due to the dependence between the mean and variance typical for a Poisson-like process, information about the trend remains in the variance even after the trend has been subtracted, possibly yielding artifactual results in further analyses. Commonly available detrending or normalizing methods cannot cope with this issue. To alleviate this issue we developed a suitable pre-processing method for the signals that originate from a Poisson-like process. In this paper, a Poisson pre-processing method for nonstationary time series with Poisson distribution is developed and tested on computer-generated model data and experimental data of chemiluminescence from human neutrophils and mung seeds. The presented method transforms a nonstationary Poisson signal into a stationary signal with a Poisson distribution while preserving the type of photocount distribution and phase-space structure of the signal. The importance of the suggested pre-processing method is shown in Fano factor and Hurst exponent analysis of both computer-generated model signals and experimental photonic signals. It is demonstrated that our pre-processing method is superior to standard detrending-based methods whenever further signal analysis is sensitive to variance of the signal.
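The Fano factor used in the paper's analysis is the variance-to-mean ratio of counts aggregated over windows; for a stationary homogeneous Poisson signal it stays near 1 at every window size, which is why trend-induced excess variance stands out in this statistic. A minimal sketch on synthetic stationary data (the rate is arbitrary):

```python
import numpy as np

# Fano factor (variance/mean of windowed counts) on stationary Poisson data.
rng = np.random.default_rng(4)
counts = rng.poisson(lam=3.0, size=100_000)

def fano(x, window):
    trimmed = x[:len(x) // window * window]
    sums = trimmed.reshape(-1, window).sum(axis=1)
    return sums.var() / sums.mean()

fanos = [fano(counts, w) for w in (1, 10, 100)]
print([round(f, 2) for f in fanos])   # all close to 1
```

A nonstationary trend inflates the variance of the windowed sums without inflating the mean proportionally, pushing the Fano factor above 1 at large windows; that is the artifact the paper's pre-processing is designed to remove.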
Oxenløwe, Leif Katsuo; Galili, Michael; Mulvad, Hans Christian Hansen
… Combining time lenses into telescopic arrangements allows for more advanced signal processing, such as temporal or spectral compression or magnification. A spectral telescope may for instance allow for conversion of OFDM signals to DWDM-like signals, which can be separated passively, i.e. without additional…
van den Broek, Egon; Janssen, Joris H.; Healey, Jennifer A.; van der Zwaag, Marjolein; Fred, A.; Filipe, J.; Gamboa, H.
Last year, in van den Broek et al. (2009a), a start was made with defining prerequisites for affective signal processing (ASP). Four prerequisites were identified: validation (e.g., mapping of constructs on signals), triangulation, a physiology-driven approach, and contributions of the signal
This book provides a comprehensive review of the state of the art of optical signal processing technologies and devices. It presents breakthrough solutions for enabling a pervasive use of optics in data communication and signal storage applications, and presents optical signal processing as a solution to overcome the capacity crunch in communication networks. The book's content ranges from the development of innovative materials and devices, such as graphene and slow-light structures, to the use of nonlinear optics for secure quantum information processing and overcoming the classical Shannon limit on channel capacity and microwave signal processing. Although it holds the promise of a substantial speed improvement, in today’s communication infrastructure optics remains largely confined to the signal transport layer, as it lags behind electronics as far as signal processing is concerned. This situation will change in the near future as the tremendous growth of data traffic requires energy efficient and ful...
"Signal Conditioning" is a comprehensive introduction to electronic signal processing. The book presents the mathematical basics, including the implications of various transformed-domain representations in signal synthesis and analysis, in an understandable and lucid fashion, and illustrates the theory through many applications and examples from communication systems. Ease of learning is supported by well-chosen exercises which give readers a flavor of the subject. Supplementary electronic materials are available on http://extras.springer.com, including MATLAB codes illuminating applications in the domains of one-dimensional electrical signal processing, image processing and speech processing. The book is an introduction for students with a basic understanding of engineering or the natural sciences.
Stephen W. Lang
This project was focused on the development of tools for the automatic configuration of signal processing systems. The goal is to develop tools that will be useful in a variety of Government and commercial areas and useable by people who are not signal processing experts. In order to get the most benefit from signal processing techniques, deep technical expertise is often required in order to select appropriate algorithms, combine them into a processing chain, and tune algorithm parameters for best performance on a specific problem. Therefore a significant benefit would result from the assembly of a toolbox of processing algorithms that has been selected for their effectiveness in a group of related problem areas, along with the means to allow people who are not signal processing experts to reliably select, combine, and tune these algorithms to solve specific problems. Defining a vocabulary for problem domain experts that is sufficiently expressive to drive the configuration of signal processing functions will allow the expertise of signal processing experts to be captured in rules for automated configuration. In order to test the feasibility of this approach, we addressed a lightning classification problem, which was proposed by DOE as a surrogate for problems encountered in nuclear nonproliferation data processing. We coded a toolbox of low-level signal processing algorithms for extracting features of RF waveforms, and demonstrated a prototype tool for screening data. We showed examples of using the tool for expediting the generation of ground-truth metadata, for training a signal recognizer, and for searching for signals with particular characteristics. The public benefits of this approach, if successful, will accrue to Government and commercial activities that face the same general problem - the development of sensor systems for complex environments. It will enable problem domain experts (e.g. analysts) to construct signal and image processing chains without
Eddy, T.L.; Kong, P.C.; Raivo, B.D.; Anderson, G.L.
This report presents a preliminary determination of ex situ thermal processing system concepts and related processing considerations for application to remediation of transuranic (TRU)-contaminated buried wastes (TRUW) at the Radioactive Waste Management Complex (RWMC) of the Idaho National Engineering Laboratory (INEL). Beginning with top-level thermal treatment concepts and requirements identified in a previous Preliminary Systems Design Study (SDS), a more detailed consideration of the waste materials thermal processing problem is provided. Anticipated waste stream elements and problem characteristics are identified and considered. Final waste form performance criteria, requirements, and options are examined within the context of providing a high-integrity, low-leachability glass/ceramic, final waste form material. Thermal processing conditions required and capability of key systems components (equipment) to provide these material process conditions are considered. Information from closely related companion study reports on melter technology development needs assessment and INEL Iron-Enriched Basalt (IEB) research are considered. Five potentially practicable thermal process system design configuration concepts are defined and compared. A scenario for thermal processing of a mixed waste and soils stream with essentially no complex presorting and using a series process of incineration and high temperature melting is recommended. Recommendations for applied research and development necessary to further detail and demonstrate the final waste form, required thermal processes, and melter process equipment are provided.
Kovacevic, Branko; Veinović, Mladen; Marković, Milan
This book focuses on speech signal phenomena, presenting a robustification of the usual speech generation models with regard to the presumed types of excitation signals, which is equivalent to the introduction of a class of nonlinear models and the corresponding criterion functions for parameter estimation. Compared to the general class of nonlinear models, such as various neural networks, these models possess good properties of controlled complexity, the option of working in “online” mode, as well as a low information volume for efficient speech encoding and transmission. Providing comprehensive insights, the book is based on the authors’ research, which has already been published, supplemented by additional texts discussing general considerations of speech modeling, linear predictive analysis and robust parameter estimation.
and Definitions … 25
11 Example: ẋ = f(t) … 27
12 Example: RC circuit with deterministic chirp driving force … 28
13 Example: RLC circuit with generic deterministic input … 29
14 Example: The Exact Solution to the Gliding Tone Problem … 30
The Wigner distribution highlights the lowpass behavior of the system, which filters out the input signal as t → ∞.
Smith, Stephen J.; Whitford, Chris H.; Fraser, George W.
We describe optimal filtering algorithms for determining energy and position resolution in position-sensitive Transition Edge Sensor (TES) Distributed Read-Out Imaging Devices (DROIDs). Improved algorithms, developed using a small-signal finite-element model, are based on least-squares minimisation of the total noise power in the correlated dual TES DROID. Through numerical simulations we show that significant improvements in energy and position resolution are theoretically possible over existing methods
Amyotte, Paul R, E-mail: firstname.lastname@example.org [Department of Process Engineering and Applied Science, Dalhousie University, 1360 Barrington Street, Halifax, NS B3J 2X4 (Canada)
The answer to the question posed by the title of this paper is yes - with adaptation to the specific hazards and challenges found in the field of nanotechnology. The validity of this affirmative response is demonstrated by relating key process safety concepts to various aspects of the nanotechnology industry in which these concepts are either already practised or could be further applied. This is accomplished by drawing on the current author's experience in process safety practice and education as well as a review of the relevant literature on the safety of nanomaterials and their production. The process safety concepts selected for analysis include: (i) risk management, (ii) inherently safer design, (iii) human error and human factors, (iv) safety management systems, and (v) safety culture.
Meyers, James F.; Clemmons, James I., Jr.
A new scheme for processing signals from laser velocimeter systems is described. The technique utilizes the capabilities of advanced digital electronics to yield a smart instrument that is able to configure itself, based on the characteristics of the input signals, for optimum measurement accuracy. The signal processor is composed of a high-speed 2-bit transient recorder for signal capture and a combination of adaptive digital filters with energy and/or zero crossing detection signal processing. The system is designed to accept signals with frequencies up to 100 MHz with standard deviations up to 20 percent of the average signal frequency. Results from comparative simulation studies indicate measurement accuracies 2.5 times better than with a high-speed burst counter, from signals with as few as 150 photons per burst.
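Of the two detection schemes mentioned, zero-crossing counting is the simplest to sketch: count the sign changes of the captured burst and divide by twice the record length. The burst below is an invented, idealized stand-in (no pedestal, no noise), far simpler than real laser velocimeter signals.

```python
import numpy as np

# Zero-crossing frequency estimate of an idealized Doppler-like burst.
fs = 100e6                                   # sample rate [Hz]
f_sig = 12.5e6                               # true burst frequency [Hz]
t = np.arange(2000) / fs
envelope = np.exp(-(((t - t.mean()) / (t.mean() / 2)) ** 2))
burst = envelope * np.cos(2 * np.pi * f_sig * t)

crossings = np.count_nonzero(np.diff(np.sign(burst)) != 0)
f_est = crossings / 2 / (t[-1] - t[0])       # two crossings per cycle
print(f_est / 1e6)   # MHz, close to 12.5
```

With noisy, few-photon bursts this naive count breaks down, which is why the described processor combines adaptive filtering with energy and/or zero-crossing detection rather than relying on raw sign changes.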
Many space station processes are highly complex systems subject to sudden, major transients. In any complex process control system, a critical aspect of the human/machine interface is the analysis and display of process information. Human operators can be overwhelmed by large clusters of alarms that inhibit their ability to diagnose and respond to a disturbance. Using artificial intelligence techniques and a knowledge base approach to this problem, the power of the computer can be used to filter and analyze plant sensor data. This will provide operators with a better description of the process state. Once a process state is recognized, automatic action could be initiated and proper system response monitored.
Streamlining Digital Signal Processing, Second Edition, presents recent advances in DSP that simplify or increase the computational speed of common signal processing operations and provides practical, real-world tips and tricks not covered in conventional DSP textbooks. It offers new implementations of digital filter design, spectrum analysis, signal generation, high-speed function approximation, and various other DSP functions. It provides: great tips, tricks of the trade, secrets, practical shortcuts, and clever engineering solutions from seasoned signal processing professionals; an assortment...
Integrates topics of signal processing from sonar, radar, and medical system technologies by identifying their conceptual similarities. This book covers non-invasive medical diagnostic system applications, including intracranial ultrasound, a technology that attempts to address non-invasive detection of brain injuries and stroke.
Li, Zhenzhen; Wu, Xiaoming
Adventitious respiratory sound signal processing has been an important research topic in the field of computerized respiratory sound analysis systems. In recent years, new progress has been achieved in adventitious respiratory sound signal analysis due to the application of non-stationary random signal processing techniques. Algorithmic progress in adventitious respiratory sound detection is discussed in detail in this paper. The state of the art of adventitious respiratory sound analysis is then reviewed, and directions for the next phase of development are pointed out.
Shankar, R.; Lane, S.S.; Paradiso, T.J.; Quinn, J.R.
The techniques developed in this work provide a means of sizing underclad cracks, along with quality control methods for assessing the accuracy of the data. Data were collected with a minicomputer (LSI-11/02), a transient recorder (Biomation 8100) and an anti-aliasing filter. Three techniques were developed: the calibration curve, phase velocity, and epicentral methods. The phase-reversal characteristic in the data is a strong indication of the nature of the signal source; that is, cracks are clearly separable from two isolated inclusions on the basis of observed phase reversal. These methods have been implemented on a computer and appear to provide an accurate, rapid method to discriminate and size underclad cracks.
Ezell, N. Dianne Bull [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Britton, Jr, Charles L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Roberts, Michael [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
This report summarizes a newly developed algorithm that subtracts electromagnetic interference (EMI). EMI performance is critical to this measurement because any interference picked up from external signal sources, such as fluorescent lighting ballasts, motors, etc., can skew the measurement. Two methods of removing EMI were developed and tested at various locations. This report also summarizes the testing performed at facilities outside Oak Ridge National Laboratory using both EMI removal techniques. The first EMI removal technique was reviewed in previous milestone reports; this report therefore details the second method.
Peucheret, Christophe; Ding, Yunhong; Xu, Jing
Our recent results on the demonstration of on-chip mode-division multiplexing are reviewed, with special emphasis on nonlinear all-optical signal processing. Mode-selective parametric processes are demonstrated in a silicon-on-insulator waveguide.
The electronics of the LUX-ZEPLIN (LZ) experiment, the 10-tonne dark matter detector to be installed at the Sanford Underground Research Facility (SURF), consists of low-noise dual-gain amplifiers and a 100-MHz, 14-bit data acquisition system for the TPC PMTs. Pre-prototypes of the analog amplifiers and the 32-channel digitizers were tested extensively with simulated pulses that are similar to the prompt scintillation light and the electroluminescence signals expected in LZ. These studies are used to characterize the noise and to measure the linearity of the system. By increasing the amplitude of the test signals, the effect of saturating the amplifier and the digitizers was studied. The RMS ADC noise of the digitizer channels was measured to be 1.19 ± 0.01 ADCC. When a high-energy channel of the amplifier is connected to the digitizer, the measured noise remained virtually unchanged, while the noise added by a low-energy channel was estimated to be 0.38 ± 0.02 ADCC (46 ± 2 μV). A test facility is under construction to study saturation, mitigate noise and measure the performance of the LZ electronics and data acquisition chain.
These proceedings contain refereed papers presented at the sixteenth IEEE Workshop on Machine Learning for Signal Processing (MLSP'2006), held in Maynooth, Co. Kildare, Ireland, September 6-8, 2006. This is a continuation of the IEEE Workshops on Neural Networks for Signal Processing (NNSP). The name of the Technical Committee, and hence of the Workshop, was changed to Machine Learning for Signal Processing in September 2003 to better reflect the areas represented by the Technical Committee. The conference is organized by the Machine Learning for Signal Processing Technical Committee. … the same standard as the printed version and facilitates the reading and searching of the papers. The field of machine learning has matured considerably in both methodology and real-world application domains and has become particularly important for the solution of problems in signal processing. As reflected...
The introductions of Digital Beam Forming (DBF), original signal exploitation and waveform multiplexing techniques have led to the design of novel radar concepts. Passive Coherent Locator (PCL) and Multiple-Input Multiple-Output (MIMO) sensors are two examples of innovative approaches. Beside the
Chou, Yongxin; Zhang, Aihua; Ou, Jiqing; Qi, Yusheng
In order to derive a dynamic pulse rate variability (DPRV) signal from a dynamic pulse signal in real time, a method for extracting the DPRV signal was proposed and a portable mobile monitoring system was designed. The system consists of a front end for collecting and wirelessly transmitting the pulse signal, and a mobile terminal. The proposed method is employed to extract DPRV from the dynamic pulse signal in the mobile terminal, and the DPRV signal is analyzed in real time in both the time domain and the frequency domain, as well as with non-linear methods. The results show that the proposed method can accurately derive the DPRV signal in real time, and that the system can be used for processing and analyzing DPRV signals in real time.
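The essence of deriving a rate-variability series from a pulse waveform, detecting each beat and differencing the beat times, can be sketched as follows (the threshold detector and all parameters here are an illustrative stand-in for the paper's real-time extraction method):

```python
import numpy as np

def pulse_rate_variability(pulse, fs):
    """Derive an inter-beat-interval series from a pulse waveform by
    detecting beat onsets and differencing their times (minimal sketch;
    the paper's real-time extraction method is more elaborate)."""
    above = pulse > 0.5 * np.max(pulse)
    # Rising edges of the threshold crossings mark beats.
    onsets = np.nonzero(above[1:] & ~above[:-1])[0] + 1
    beat_times = onsets / fs
    return np.diff(beat_times)          # inter-beat intervals, seconds

fs = 100
t = np.arange(0, 10, 1 / fs)
pulse = np.maximum(np.sin(2 * np.pi * 1.25 * t), 0.0) ** 3  # 75 bpm pulses
ibi = pulse_rate_variability(pulse, fs)
print(np.round(np.mean(ibi), 2))  # 0.8 (seconds between beats)
```

The resulting interval series is what the time-domain, frequency-domain, and non-linear analyses then operate on.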
Kompis, Martin; Kurz, Anja; Pfiffner, Flurin; Senn, Pascal; Arnold, Andreas; Caversaccio, Marco
To establish whether complex signal processing is beneficial for users of bone anchored hearing aids. Review and analysis of two studies from our own group, each comparing a speech processor with basic digital signal processing (either Baha Divino or Baha Intenso) and a processor with complex digital signal processing (either Baha BP100 or Baha BP110 power). The main differences between basic and complex signal processing are the number of audiologist accessible frequency channels and the availability and complexity of the directional multi-microphone noise reduction and loudness compression systems. Both studies show a small, statistically non-significant improvement of speech understanding in quiet with the complex digital signal processing. The average improvement for speech in noise is +0.9 dB, if speech and noise are emitted both from the front of the listener. If noise is emitted from the rear and speech from the front of the listener, the advantage of the devices with complex digital signal processing as opposed to those with basic signal processing increases, on average, to +3.2 dB (range +2.3 … +5.1 dB, p ≤ 0.0032). Complex digital signal processing does indeed improve speech understanding, especially in noise coming from the rear. This finding has been supported by another study, which has been published recently by a different research group. When compared to basic digital signal processing, complex digital signal processing can increase speech understanding of users of bone anchored hearing aids. The benefit is most significant for speech understanding in noise.
For a long time, the issue of management of the process network has occupied the attention of research teams of scientists and their assistants from the "Center for Quality" of Mechanical Engineering in Podgorica. Research on the process approach began with the implementation of automated information systems in process management, and became especially relevant with the appearance of the ISO 9000:2000 standards and the QMS concept. This work analyzes possibilities for improving process quality through the process network, that is, through the definition of an efficient process architecture using a modified BSP method and the HIPO+P method. Using already published TQM approaches as well as our own research, relation matrices have been defined: the matrix of the goal structure, the process structure, the team structure, and the quality process structure. These show that the concepts of process improvement and process management are hard to set apart, and that any business process can be improved through the management of those who implement it, that is, by the owners of the process.
Young, Derek P.; Jacklin, Neil; Punnoose, Ratish J.; Counsil, David T.
Time-reversal is a wave focusing technique that makes use of the reciprocity of wireless propagation channels. It works particularly well in a cluttered environment with associated multipath reflection. This technique uses the multipath in the environment to increase focusing ability. Time-reversal can also be used to null signals, either to reduce unintentional interference or to prevent eavesdropping. It does not require controlled geometric placement of the transmit antennas. Unlike existing techniques it can work without line-of-sight. We have explored the performance of time-reversal focusing in a variety of simulated environments. We have also developed new algorithms to simultaneously focus at a location while nulling at an eavesdropper location. We have experimentally verified these techniques in a realistic cluttered environment.
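The focusing mechanism can be sketched in a few lines: transmitting the time-reversed channel impulse response turns the multipath itself into a matched filter, so energy piles up at the intended receiver at one instant but not at other locations. A toy numpy model with synthetic sparse multipath channels (all parameters are illustrative, not the authors' experimental setup):

```python
import numpy as np

rng = np.random.default_rng(3)

def sparse_channel(n_taps=64, n_paths=8):
    """Random sparse multipath impulse response, normalized to unit energy."""
    h = np.zeros(n_taps)
    idx = rng.choice(n_taps, n_paths, replace=False)
    h[idx] = rng.standard_normal(n_paths)
    return h / np.linalg.norm(h)

h_target = sparse_channel()
h_other = sparse_channel()      # channel to an unintended receiver

probe = h_target[::-1]          # transmit the time-reversed response
at_target = np.convolve(probe, h_target)   # what the target receives
at_other = np.convolve(probe, h_other)     # what an eavesdropper receives

# The target sees the channel autocorrelation: a sharp unit peak.
# The other location sees only a low cross-correlation.
print(np.max(np.abs(at_target)) > np.max(np.abs(at_other)))  # True
```

Simultaneous focusing and nulling, as developed in the work, additionally shapes the transmitted waveform so the field at the eavesdropper location is driven toward zero.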
Ratnasari, D.; Sukarmin, S.; Suparmi, S.; Aminah, N. S.
This research aims to analyze the effect of students' conceptions on science process skills. This is a descriptive study whose subjects were 10th-grade students in Surakarta from high-, medium- and low-category schools. Sample selection used a purposive sampling technique based on physics scores in the national examination over the four latest years. Data were collected from an essay test, a two-tier multiple choice test, and interviews. The two-tier multiple choice test consists of 30 questions that contain indicators of science process skills. The results and analysis show that students' conceptions of heat and temperature affect their science process skills. Conceptions that still contain wrong concepts can give rise to misconceptions. For future research, it is suggested to improve students' conceptual understanding and science process skills with appropriate learning methods and assessment instruments, because heat and temperature is a physics topic closely related to students' daily life.
Freedle, Roy; Lewis, Michael
The purpose of this paper is to outline some applications of the Markov Process to the study of states and state changes. The essence of this mathematical concept consists of the analysis of sequences of infant responses in interaction with the environment. Categories can be defined which reflect the joint occurrence of an infant's behavior (or…
Electrocardiogram (ECG) signals are among the most important sources of diagnostic information in healthcare so improvements in their analysis may also have telling consequences. Both the underlying signal technology and a burgeoning variety of algorithms and systems developments have proved successful targets for recent rapid advances in research. ECG Signal Processing, Classification and Interpretation shows how the various paradigms of Computational Intelligence, employed either singly or in combination, can produce an effective structure for obtaining often vital information from ECG signals. Neural networks do well at capturing the nonlinear nature of the signals, information granules realized as fuzzy sets help to confer interpretability on the data and evolutionary optimization may be critical in supporting the structural development of ECG classifiers and models of ECG signals. The contributors address concepts, methodology, algorithms, and case studies and applications exploiting the paradigm of Comp...
Azim, Noor ul; Jun, Wang
Signal processing is one of the main parts of any radar system. Different signal processing algorithms are used to extract information about parameters such as range, speed, and direction of a target in the field of radar communication. This paper presents LFM (Linear Frequency Modulation) pulsed radar signal processing algorithms which are used to improve target detection and range resolution and to estimate the speed of a target. First, these algorithms are simulated in MATLAB to verify the concept and theory. After the conceptual verification in MATLAB, the simulation is converted into a hardware implementation on a Xilinx FPGA, the Virtex-6 (XC6LVX75T). For the hardware implementation, pipeline optimization is adopted and other factors are considered for resource optimization. The algorithms targeted in this work for improving target detection, range resolution and speed estimation are hardware-optimized, fast-convolution-based pulse compression and pulse Doppler processing.
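Fast-convolution pulse compression can be prototyped in a few lines of numpy, much as the authors first verify the algorithms in MATLAB: the received window is correlated with the transmitted LFM chirp via FFTs, which collapses the long pulse into a narrow peak at the echo delay. The chirp parameters below are illustrative, not those of the actual radar:

```python
import numpy as np

fs = 10e6           # sample rate, Hz (assumed)
T = 20e-6           # pulse width, s (assumed)
B = 2e6             # chirp bandwidth, Hz (assumed)
t = np.arange(int(T * fs)) / fs
chirp = np.exp(1j * np.pi * (B / T) * t**2)        # LFM pulse

# Echo: the chirp delayed by 50 samples inside a longer receive window.
rx = np.zeros(1024, dtype=complex)
rx[50:50 + len(chirp)] = chirp

# Matched filtering via FFT (fast convolution): correlate with the pulse.
n = len(rx) + len(chirp) - 1
compressed = np.fft.ifft(np.fft.fft(rx, n) * np.conj(np.fft.fft(chirp, n)))
peak = np.argmax(np.abs(compressed))
print(peak)   # 50 → the echo delay in samples
```

Pulse Doppler processing extends this by taking a second FFT across the compressed outputs of many pulses to resolve target speed.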
Clausen, Anders; Guan, Pengyu; Mulvad, Hans Christian Hansen
All-optical time-domain Optical Fourier Transformation utilised for signal processing of ultra-high-speed OTDM signals and OFDM signals will be presented.
Church, E.L.; Takacs, P.Z.
This paper proposes the use of a simple two-scale model of surface roughness for testing and specifying the topographic figure and finish of synchrotron-radiation mirrors. In this approach the effects of figure and finish are described in terms of their slope distribution and power spectrum, respectively, which are then combined with the system point spread function to produce a composite image. The result can be used to predict mirror performance or to translate design requirements into manufacturing specifications. Pacing problems in this approach are the development of a practical long-trace slope-profiling instrument and realistic statistical models for figure and finish errors
Oxenløwe, Leif Katsuo; Galili, Michael; Pu, Minhao
We describe recent demonstrations of exploiting highly nonlinear silicon nanowires for processing Tbit/s optical data signals. We perform demultiplexing and optical waveform sampling of 1.28 Tbit/s and wavelength conversion of 640 Gbit/s data signals.
Zhang Tao; Wang Yonggang; Li Kai; Yan Tianxin
In order to meet the requirements of the fast luminosity monitor system of the Beijing electron-positron collider (BEPC II), a high-speed bunch-by-bunch luminosity signal processing and display system was designed. Techniques such as fast signal amplification, discrimination, long-distance signal transmission, anti-coincidence event judgment, per-bunch counting and ping-pong storage were employed effectively. The preliminary test result shows that the system can process and display the luminosity signals for bunches with 4 ns separation. (authors)
Elo, Kristofer; Sundin, Erik
There is a large variety of electrical and electronic equipment products, for example liquid crystal display television sets (LCD TVs), in the waste stream today. Many LCD TVs contain mercury, which is a challenge to treat at recycling plants. Two currently used processes to recycle LCD TVs are automated shredding and manual disassembly. This paper aims to present concepts for semi-automated dismantling processes for LCD TVs in order to achieve higher productivity and flexibility, and in tu...
During the 1930s and 1940s Norbert Wiener and others invented the core concepts of linear signal processing. These ideas quickly became popular and played a significant role in the Allies' victory in World War II. During and after the war, linear signal processing theory was greatly expanded and began to take on the character of an imposing monolith. By the mid-1940s, Wiener (and others, such as Dennis Gabor) came to recognize that linear signal processing theory, while interesting and very useful, was only a piece of a much larger picture. In 1946 and 1958 Gabor and Wiener, respectively, attempted to address the whole picture. While they were not completely successful, they did implicitly set an agenda for a more general approach to signal processing. Although a few others have, from time to time, addressed this agenda, for the signal processing community as a whole it still remains lost in the shadow of the ever-growing monolith of linear signal processing theory. The thesis of this paper is that it is now time to get on with the Wiener and Gabor agenda. It is time to make general signal processing the mainstream focus of the subject. It is argued here that the best way to do this is to abandon the transfer function/Fourier analysis/z-transform approach of the current linear signal processing regime and replace it with a much more natural intellectual framework for general signal processing: the framework offered by neurocomputing. A potential benefit of this refocusing of the field is that the detailed engineering might soon be left to machines, while human technologists will be able to concentrate on the art of signal sculpting.
Justice, Jamie; Miller, Jordan D.; Newman, John C.; Hashmi, Shahrukh K.; Halter, Jeffrey; Austad, Steve N.; Barzilai, Nir
Therapies targeted at fundamental processes of aging may hold great promise for enhancing the health of a wide population by delaying or preventing a range of age-related diseases and conditions—a concept dubbed the “geroscience hypothesis.” Early, proof-of-concept clinical trials will be a key step in the translation of therapies emerging from model organism and preclinical studies into clinical practice. This article summarizes the outcomes of an international meeting partly funded through the NIH R24 Geroscience Network, whose purpose was to generate concepts and frameworks for early, proof-of-concept clinical trials for therapeutic interventions that target fundamental processes of aging. The goals of proof-of-concept trials include generating preliminary signals of efficacy in an aging-related disease or outcome that will reduce the risk of conducting larger trials, contributing data and biological samples to support larger-scale research by strategic networks, and furthering a dialogue with regulatory agencies on appropriate registration indications. We describe three frameworks for proof-of-concept trials that target age-related chronic diseases, geriatric syndromes, or resilience to stressors. We propose strategic infrastructure and shared resources that could accelerate development of therapies that target fundamental aging processes. PMID:27535966
Clairet, F.; Bottereau, C.; Ricaud, B.; Briolle, F.; Heuraux, S.
Reflectometry profile measurement requires an accurate determination of the plasma reflected signal. Along with a good resolution and a high signal to noise ratio of the phase measurement, adequate data analysis is required. A new data processing based on time-frequency tomographic representation is used. It provides a clearer separation between multiple components and improves isolation of the relevant signals. In this paper, this data processing technique is applied to two sets of signals coming from two different reflectometer devices used on the Tore Supra tokamak. For the standard density profile reflectometry, it improves the initialization process and its reliability, providing a more accurate profile determination in the far scrape-off layer with density measurements as low as 10^16 m^-3. For a second reflectometer, which provides measurements in front of a lower hybrid launcher, this method improves the separation of the relevant plasma signal from multi-reflection processes due to the proximity of the plasma.
Peng, Fulai; Liu, Hongyun; Wang, Weidong
A photoplethysmographic (PPG) signal can provide very useful information about a subject's cardiovascular status. Motion artifacts (MAs), which usually deteriorate the waveform of a PPG signal, severely obstruct its applications in the clinical diagnosis and healthcare area. To reduce the MAs from a PPG signal, in the present study we present a comb filter based signal processing method. Firstly, wavelet de-noising was implemented to preliminarily suppress a part of the MAs. Then, the PPG signal in the time domain was transformed into the frequency domain by a fast Fourier transform (FFT). Thirdly, the PPG signal period was estimated from the frequency domain by tracking the fundamental frequency peak of the PPG signal. Lastly, the MAs were removed by the comb filter which was designed based on the obtained PPG signal period. Experiments with synthetic and real-world datasets were implemented to validate the performance of the method. Results show that the proposed method can effectively restore the PPG signals from the MA corrupted signals. Also, the accuracy of blood oxygen saturation (SpO2), calculated from red and infrared PPG signals, was significantly improved after the MA reduction by the proposed method. Our study demonstrates that the comb filter can effectively reduce the MAs from a PPG signal provided that the PPG signal period is obtained.
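The core of the method, steps three and four, can be sketched as follows: once the PPG period is known (e.g. from the FFT fundamental peak), a peaking comb filter passes the fundamental and its harmonics while attenuating the motion-artifact energy between them. The filter form and all parameters below are an illustrative sketch, not the paper's exact design:

```python
import numpy as np

def comb_enhance(x, period_samples, a=0.9):
    """Peaking comb filter: passes the fundamental at fs/period and its
    harmonics, attenuating energy between them (sketch of the idea):
        y[n] = (1 - a) * x[n] + a * y[n - N]
    """
    N = period_samples
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = (1 - a) * x[n] + (a * y[n - N] if n >= N else 0.0)
    return y

fs = 100.0                            # Hz (assumed)
t = np.arange(0, 30, 1 / fs)
ppg = np.sin(2 * np.pi * 1.25 * t)    # pulse wave at 1.25 Hz (75 bpm)
artifact = 0.8 * np.sin(2 * np.pi * 0.4 * t)   # motion artifact off-harmonic
noisy = ppg + artifact

N = int(round(fs / 1.25))             # period from the FFT peak: 80 samples
clean = comb_enhance(noisy, N)

err_before = np.sqrt(np.mean((noisy[-500:] - ppg[-500:]) ** 2))
err_after = np.sqrt(np.mean((clean[-500:] - ppg[-500:]) ** 2))
print(err_after < err_before)  # True: the artifact is strongly attenuated
```

In the paper, wavelet de-noising precedes this stage, and the restored red and infrared PPG signals then feed the SpO2 calculation.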
Koush, Yury; Zvyagintsev, Mikhail; Dyck, Miriam; Mathiak, Krystyna A; Mathiak, Klaus
Real-time fMRI allows analysis and visualization of the brain activity online, i.e. within one repetition time. It can be used in neurofeedback applications where subjects attempt to control an activation level in a specified region of interest (ROI) of their brain. The signal derived from the ROI is contaminated with noise and artifacts, namely with physiological noise from breathing and heart beat, scanner drift, motion-related artifacts and measurement noise. We developed a Bayesian approach to reduce noise and to remove artifacts in real-time using a modified Kalman filter. The system performs several signal processing operations: subtraction of constant and low-frequency signal components, spike removal and signal smoothing. Quantitative feedback signal quality analysis was used to estimate the quality of the neurofeedback time series and performance of the applied signal processing on different ROIs. The signal-to-noise ratio (SNR) across the entire time series and the group event-related SNR (eSNR) were significantly higher for the processed time series in comparison to the raw data. Applied signal processing improved the t-statistic increasing the significance of blood oxygen level-dependent (BOLD) signal changes. Accordingly, the contrast-to-noise ratio (CNR) of the feedback time series was improved as well. In addition, the data revealed increase of localized self-control across feedback sessions. The new signal processing approach provided reliable neurofeedback, performed precise artifacts removal, reduced noise, and required minimal manual adjustments of parameters. Advanced and fast online signal processing algorithms considerably increased the quality as well as the information content of the control signal which in turn resulted in higher contingency in the neurofeedback loop. Copyright © 2011 Elsevier Inc. All rights reserved.
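The flavor of such online denoising can be illustrated with a minimal scalar Kalman filter that treats the ROI signal as a random walk observed in noise; the study's modified filter, with spike removal and low-frequency drift subtraction, is considerably more elaborate, and the noise parameters below are assumptions:

```python
import numpy as np

def kalman_smooth(x, q=1e-4, r=1e-2):
    """Minimal scalar Kalman filter (random-walk state model) used as a
    low-lag online smoother; a sketch of the idea, not the paper's
    modified Kalman filter."""
    xhat, p = x[0], 1.0
    out = np.empty(len(x))
    for i, z in enumerate(x):
        p = p + q                      # predict: state is a random walk
        k = p / (p + r)                # Kalman gain
        xhat = xhat + k * (z - xhat)   # update with measurement z
        p = (1 - k) * p
        out[i] = xhat
    return out

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
signal = np.sin(2 * np.pi * 0.2 * t)          # slow BOLD-like fluctuation
noisy = signal + 0.3 * rng.standard_normal(500)
smooth = kalman_smooth(noisy)
print(np.mean((smooth - signal) ** 2) < np.mean((noisy - signal) ** 2))  # True
```

Because each update uses only the current sample, the filter's latency is compatible with the within-one-repetition-time constraint of neurofeedback.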
This revised edition is made up of two parts: theory and applications. Though many of the fundamental results are still valid and used, new and revised material is woven throughout the text. As with the original book, the theory of sum-of-squares trigonometric polynomials is presented unitarily based on the concept of the Gram matrix (extended to the Gram pair or Gram set). The programming environment has also evolved, and the book's examples are changed accordingly. The applications section is organized as a collection of related problems that systematically use the theoretical results. All the problems are brought to a semi-definite programming form, ready to be solved with freely available algorithms, like those from the libraries SeDuMi, CVX and Pos3Poly. A new chapter discusses applications in super-resolution theory, where the Bounded Real Lemma for trigonometric polynomials is an important tool. This revision is written to be more appealing and easier to use for new readers. Features updated information on LMI...
Hoogeboom, P.; Dekker, R.J.; Otten, M.P.G.
Synthetic Aperture Radar (SAR) is today a valuable source of remote sensing information. SAR is a side-looking imaging radar and operates from airborne and spaceborne platforms. Coverage, resolution and image quality are strongly influenced by the platform. SAR processing can be performed on standard
Li, Shuqing; Zhang, Huan; Tao, Zhifei
The grating digital geophone is designed based on a grating measurement technique, which benefits from the averaging-error effect and a wide dynamic range, to improve the precision of weak-signal detection. This paper introduces the principle of the grating digital geophone and its post-processing signal system. The signal acquisition circuit uses an ATmega32 chip as its core and displays the waveform in LabWindows through an RS232 data link. A wavelet transform is adopted in this paper to filter the grating digital geophone's output signal, since the signal is unstable. This data processing method is compared with the FIR filter that is in widespread domestic use. The result indicates that the wavelet algorithm has more advantages and that the SNR of the seismic signal improves markedly.
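Wavelet filtering of this kind can be sketched with a single-wavelet example: a Haar decomposition whose detail coefficients are soft-thresholded before reconstruction. The wavelet choice, number of levels, and threshold below are assumptions for illustration; the paper does not specify them here:

```python
import numpy as np

def haar_denoise(x, levels=3, thresh=0.5):
    """One-dimensional Haar wavelet soft-threshold denoising
    (a minimal sketch of wavelet filtering; input length must be
    divisible by 2**levels)."""
    coeffs, a = [], x.astype(float)
    for _ in range(levels):
        even, odd = a[0::2], a[1::2]
        d = (even - odd) / np.sqrt(2)
        a = (even + odd) / np.sqrt(2)
        # Soft-threshold the detail coefficients (noise lives here).
        d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)
        coeffs.append(d)
    for d in reversed(coeffs):
        even = (a + d) / np.sqrt(2)
        odd = (a - d) / np.sqrt(2)
        a = np.empty(2 * len(even))
        a[0::2], a[1::2] = even, odd
    return a

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 512)
seismic = np.sin(2 * np.pi * 5 * t)            # synthetic seismic wave
noisy = seismic + 0.4 * rng.standard_normal(512)
denoised = haar_denoise(noisy)
print(np.mean((denoised - seismic) ** 2) < np.mean((noisy - seismic) ** 2))  # True
```

Unlike a fixed FIR filter, thresholding in the wavelet domain adapts to where the signal energy actually sits, which is why it copes better with non-stationary geophone output.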
INTRODUCTION
Signal Processing for Future Mobile Communications Systems: Challenges and Perspectives; Quazi Mehbubar Rahman and Mohamed Ibnkahla
CHANNEL MODELING AND ESTIMATION
Multipath Propagation Models for Broadband Wireless Systems; Andreas F. Molisch and Fredrik Tufvesson
Modeling and Estimation of Mobile Channels; Jitendra K. Tugnait
Mobile Satellite Channels: Statistical Models and Performance Analysis; Giovanni E. Corazza, Alessandro Vanelli-Coralli, Raffaella Pedone, and Massimo Neri
Mobile Velocity Estimation for Wireless Communications; Bouchra Senadji, Ghazem Azemi, and Boualem Boashash
Candy, James V; Clague, David S; Lee, Christopher L; Rudd, Robert E; Burnham, Alan K; Tringe, Joseph W
A method of using physics-based signal processing algorithms for micromachined cantilever arrays. The methods utilize deflection of a micromachined cantilever that represents the chemical, biological, or physical element being detected. One embodiment of the method comprises the steps of modeling the deflection of the micromachined cantilever producing a deflection model, sensing the deflection of the micromachined cantilever and producing a signal representing the deflection, and comparing the signal representing the deflection with the deflection model.
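The model/measurement comparison at the heart of the method can be illustrated with a normalized-correlation test between the sensed deflection trace and the physics-based model; the metric, threshold, and deflection shapes below are assumptions, since the abstract does not specify the comparison:

```python
import numpy as np

def detect(measured_deflection, model_deflection, threshold=0.9):
    """Compare a measured cantilever deflection trace with the
    physics-based model via normalized correlation (illustrative;
    the method's actual comparison metric is not specified here)."""
    s = measured_deflection - np.mean(measured_deflection)
    m = model_deflection - np.mean(model_deflection)
    rho = np.dot(s, m) / (np.linalg.norm(s) * np.linalg.norm(m))
    return rho >= threshold

t = np.linspace(0, 1, 200)
model = 1 - np.exp(-5 * t)                   # modeled deflection ramp (assumed)
measured = model + 0.05 * np.sin(40 * t)     # sensed deflection with ripple
print(detect(measured, model))  # True
```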
Peucheret, Christophe; Oxenløwe, Leif Katsuo; Mulvad, Hans Christian Hansen
We review recent progress in all-optical signal processing techniques making use of conventional silica-based highly nonlinear fibres. In particular, we focus on recent demonstrations of ultra-fast processing at 640 Gbit/s and above, as well as on signal processing of novel modulation formats relying on the phase of the optical field. Topics covered include all-optical switching of 640 Gbit/s and 1.28 Tbit/s serial data, wavelength conversion at 640 Gbit/s, optical amplitude regeneration of differential phase shift keying (DPSK) signals, as well as midspan spectral inversion for differential 8...
Poularikas, Alexander D
Engineers in all fields will appreciate a practical guide that combines several new effective MATLAB® problem-solving approaches and the very latest in discrete random signal processing and filtering. Numerous Useful Examples, Problems, and Solutions - An Extensive and Powerful Review. Written for practicing engineers seeking to strengthen their practical grasp of random signal processing, Discrete Random Signal Processing and Filtering Primer with MATLAB provides the opportunity to doubly enhance their skills. The author, a leading expert in the field of electrical and computer engineering, offe
Song, Tai Kyong
Ultrasonic imaging is the most widely used modality among modern imaging devices for medical diagnosis, and system performance has improved dramatically since the early 1990s due to rapid advances in DSP performance and VLSI technology, which made it possible to employ more sophisticated algorithms. This paper describes 'main stream' digital signal processing functions along with the associated implementation considerations in modern medical ultrasound imaging systems. Topics covered include signal processing methods for resolution improvement, ultrasound imaging system architectures, the roles and necessity of DSP and VLSI technology in the development of medical ultrasound imaging systems, and array signal processing techniques for ultrasound focusing.
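The array focusing mentioned last is classically delay-and-sum beamforming: align each element's echo by its geometric delay to the focal point, then sum, so echoes from the focus add coherently. A minimal numpy sketch with synthetic echoes (element count, delays, and pulse parameters are all illustrative):

```python
import numpy as np

def delay_and_sum(channels, delays_samples):
    """Delay-and-sum receive focusing: advance each element's echo by
    its focusing delay, then sum coherently (minimal sketch)."""
    out = np.zeros(channels.shape[1])
    for ch, d in zip(channels, delays_samples):
        out += np.roll(ch, -d)      # advance by the focusing delay
    return out / len(channels)

fs = 40e6                           # 40 MHz sampling (assumed)
t = np.arange(256) / fs
pulse = np.sin(2 * np.pi * 5e6 * t) * np.exp(-((t - 1e-6) / 2e-7) ** 2)

# Simulate 8 array elements whose echoes arrive with geometric delays.
true_delays = np.array([0, 3, 5, 6, 6, 5, 3, 0])
channels = np.array([np.roll(pulse, d) for d in true_delays])

focused = delay_and_sum(channels, true_delays)
unfocused = channels.mean(axis=0)   # summing without delay correction
print(np.max(np.abs(focused)) > np.max(np.abs(unfocused)))  # True
```

Modern systems compute these delays dynamically per depth, which is one of the DSP loads the paper's architecture discussion addresses.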
Kumar, S; ICSIP 2012
The proceedings include cutting-edge research articles from the Fourth International Conference on Signal and Image Processing (ICSIP), organised by Dr. N.G.P. Institute of Technology, Kalapatti, Coimbatore. The Conference provides academia and industry a forum to discuss and present the latest technological advances and research results in the fields of theoretical, experimental, and applied signal, image and video processing. The book provides the latest and most informative content from engineers and scientists in signal, image and video processing from around the world, which will benefit the future research community in working in a more cohesive and collaborative way.
Kumar Maurya, Rakesh; Pal, Dev Datt; Kumar Agarwal, Avinash
Diagnosis of combustion is necessary for the estimation of combustion quality and control of combustion timing in advanced combustion concepts like HCCI. Combustion diagnostics is often performed using digital processing of pressure signals measured with a piezoelectric sensor installed in the combustion chamber of the engine. Four-step pressure signal processing, consisting of (i) absolute pressure correction, (ii) phasing w.r.t. crank angle, (iii) cycle averaging and (iv) smoothing, is used to get cylinder pressure data from the engine experiments, which is further analyzed to obtain information about combustion characteristics. This study focuses on various aspects of signal processing (cycle averaging and smoothing) of the in-cylinder pressure signal from an HCCI engine acquired using a piezoelectric pressure sensor. Experimental investigations are conducted on an HCCI combustion engine operating at different engine speed/load/air-fuel ratio conditions. The cylinder pressure history of 3000 consecutive engine cycles is acquired for analysis. This study determines the optimum number of engine cycles to be acquired for reasonably good pressure signals, based on the standard deviation of the in-cylinder pressure, rate of pressure rise and rate of heat release signals. Different signal smoothing methods (using various digital filters) are also analyzed and their results compared. This study also presents the effect of signal processing methods on pressure, pressure rise rate and rate of heat release curves at different engine operating conditions.
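Steps (iii) and (iv) can be sketched on synthetic data: ensemble-averaging N cycles shrinks cycle-to-cycle noise by roughly √N, and a digital filter smooths what remains. The moving-average filter below stands in for the several filters the study compares; all data and parameters are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
crank = np.linspace(-180, 180, 720)              # deg, 0.5 deg resolution
true_p = 20 * np.exp(-((crank - 5) / 30) ** 2)   # bar, synthetic pressure rise

n_cycles = 100
cycles = true_p + 0.5 * rng.standard_normal((n_cycles, 720))

averaged = cycles.mean(axis=0)                   # (iii) cycle averaging

# (iv) smoothing with a centered moving average (one of many filter choices).
window = 11
kernel = np.ones(window) / window
smoothed = np.convolve(averaged, kernel, mode="same")

# Edges excluded to avoid convolution boundary effects.
noise_before = np.std(cycles[0] - true_p)
noise_after = np.std(smoothed[50:-50] - true_p[50:-50])
print(noise_after < noise_before)  # True
```

The study's point is that both choices involve trade-offs: too few cycles leaves noise in the averaged trace, while too aggressive a filter biases the pressure-rise-rate and heat-release curves derived from it.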
D'Addario, Larry; Simmons, Samuel
Motivation for the study: (1) a Lunar Radio Array for low-frequency, high-redshift Dark Ages/Epoch of Reionization observations (z = 6-50, f = 30-200 MHz); (2) high-precision cosmological measurements of 21 cm H I line fluctuations; (3) probing the universe before first star formation and providing information about the intergalactic medium and the evolution of large-scale structures; (4) testing whether the current cosmological model accurately describes the Universe before reionization. The Lunar Radio Array is (1) a radio interferometer based on the far side of the Moon, (1a) necessary for precision measurements, (1b) shielded from earth-based and solar RFI, (1c) with no permanent ionosphere; (2) a minimum collecting area of approximately 1 square km and brightness sensitivity of 10 mK; (3) several technologies must be developed before deployment. The power needed to process signals from a large array of nonsteerable elements is not prohibitive, even for the Moon, and even in current technology. Two different concepts have been proposed: (1) the Dark Ages Radio Interferometer (DALI) and (2) the Lunar Array for Radio Cosmology (LARC).
Rao, K. R.
In view of 56 kbps digital switched network services and the ISDN, low bit rate codecs for providing real-time full-motion color video are under various stages of development. Some companies have already brought codecs to the market; they are being used by industry and some Federal agencies for video teleconferencing. In general, these codecs have various features such as multiplexing of audio and data, high-resolution graphics, encryption, error detection and correction, self-diagnostics, freeze-frame, split video, text overlay, etc. Transmitting the original color video on a 56 kbps network requires a bit rate reduction of the order of 1400:1. Such large-scale bandwidth compression can be realized only by implementing a number of sophisticated digital signal processing techniques. This paper provides an overview of such techniques and outlines the newer concepts that are being investigated. Before resorting to data compression techniques, various preprocessing operations such as noise filtering, composite-component transformation and horizontal and vertical blanking interval removal are to be implemented. Invariably, spatio-temporal subsampling is achieved by appropriate filtering. Transform and/or prediction coding coupled with motion estimation and strengthened by adaptive features are some of the tools in the arsenal of the data reduction methods. Other essential blocks in the system are the quantizer, bit allocation, buffer, multiplexer, channel coding, etc.
Szcześniak, Adam; Szcześniak, Zbigniew
This article presents methods designed to interpolate signals from an optoelectronic transducer, making it possible to distinguish the direction of motion of the transducer and to increase its accuracy. Methods based on logic functions, on logic functions combined with RC circuits, and on phase processing are analyzed. In the methods based on logic-function processing of the transducer's signals, the resolution of the transducer's glass scale can be increased by a factor of two or four. The presented method of generating and processing sine signals with an 18-degree phase shift yields square signals with five times the frequency of the basic signals. The method is universal and can be applied to different frequency-multiplication factors of the optoelectronic transducer. The methods were simulated using the MATLAB-SIMULINK software.
Mulvad, Hans Christian Hansen; Palushani, Evarist; Hu, Hao
We review recent advances in the optical signal processing of ultra-high-speed serial data signals up to 1.28 Tbit/s, with focus on applications of time-domain optical Fourier transformation. Experimental methods for the generation of symbol rates up to 1.28 Tbaud are also described.
Abstract. Binary and ternary sequences with peaky autocorrelation, measured in terms of high discrimination and merit factor, have previously been sought using optimization techniques. It is shown that the use of neural network processing of the return signal is much more advantageous. It opens up a new signal design ...
Software-defined radios and test equipment use a variety of digital signal processing techniques to improve system performance. Interpolation is one technique that can be used to increase the sample rate of digital signals. In this work, we illustrated interpolation in the time domain by writing appropriate codes using ...
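A hedged sketch of the technique named here (not the article's own code): integer-factor interpolation in the time domain, done as zero-stuffing followed by a low-pass FIR filter. The triangular kernel used below amounts to linear interpolation; a real design would use a windowed-sinc low-pass instead.

```python
# Sketch of sample-rate increase by an integer factor: insert zeros between
# samples, then low-pass filter to fill in the intermediate values.

def upsample(signal, factor):
    """Insert factor-1 zeros between samples (raises the sample rate)."""
    out = []
    for x in signal:
        out.append(x)
        out.extend([0.0] * (factor - 1))
    return out

def fir_filter(signal, taps):
    """Direct-form FIR convolution; samples before the start are taken as zero."""
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for k, h in enumerate(taps):
            if n - k >= 0:
                acc += h * signal[n - k]
        out.append(acc)
    return out

x = [1.0, 2.0, 3.0, 4.0]
stuffed = upsample(x, 2)                    # [1, 0, 2, 0, 3, 0, 4, 0]
# Triangular kernel = linear interpolation (with a one-sample delay):
y = fir_filter(stuffed, [0.5, 1.0, 0.5])    # [0.5, 1.0, 1.5, ..., 4.0]
```

The original samples reappear in the output interleaved with their midpoints, which is exactly linear interpolation at twice the original rate.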
Complete analytical expressions for distortions caused by signal processing in analog AM modulators are developed. The salient features in these expressions are shown to be consistent with displays of actual spectra of AM signals. Finally suggestions are given on how the distortions may be practically minimized. (author). 6 refs, 3 figs
M. R. HAIDER
A low-power signal processing and telemetry circuit for generic biosensor applications is presented. The complete system comprises a potentiostat, a signal processing block and a modulator block. The on-chip potentiostat biases the sensor electrodes for proper extraction of the sensor signals. The signal processing block integrates and buffers the sensor signal to form a data signal, and a simple modulator block converts this data signal to an on-off-keying (OOK) signal with a high-frequency carrier. A package pin of the fabricated circuit is used as an antenna, and measurement results demonstrate successful signal transmission from the chip over a range of a few centimeters. The entire system has been realized in 0.35 μm CMOS technology; it consumes only 400 μW of power and occupies an area of 0.66 mm2. Test results show that this scheme is an effective candidate for low-power sensor applications.
Ludwig, David E.
The objective of the Pushbroom Spectral Imaging Program was to develop on-focal-plane electronics which compensate for detector array non-uniformities. The approach taken was to implement a simple two-point calibration algorithm on the focal plane which allows for offset and linear gain correction. The key on-focal-plane features which made this technique feasible were the use of a high quality transimpedance amplifier (TIA) and an analog-to-digital converter for each detector channel. Gain compensation is accomplished by varying the feedback capacitance of the integrate-and-dump TIA. Offset correction is performed by storing offsets in a special on-focal-plane offset register and digitally subtracting the offsets from the readout data during the multiplexing operation. A custom integrated circuit was designed, fabricated and tested on this program, which proved that non-uniformity-compensated, analog-to-digital converting circuits may be used to read out infrared detectors. Irvine Sensors Corporation (ISC) successfully demonstrated innovative on-focal-plane functions that allow for correction of detector non-uniformities. Most of the circuit functions demonstrated on this program are finding their way onto future ICs because of their impact on reduced downstream processing, increased focal plane performance, simplified focal plane control, and a reduced number of dewar connections, as well as the noise immunity of a digital interface dewar. The potential commercial applications for this integrated circuit are primarily in imaging systems, which may be used for security monitoring, manufacturing process monitoring, robotics, and spectral imaging in analytical instrumentation.
Fyhn, Karsten; Arildsen, Thomas; Larsen, Torben
We show that compressive signal processing can be applied to demodulate the received signal in a spread spectrum communication system using Direct Sequence Spread Spectrum (DSSS), thereby lowering the sampling rate. This may lead to a decrease in the power consumption or the manufacturing price of wireless...
Istepanian, Robert S H; Sungoor, Ala; Nebel, Jean-Christophe
Genomic signal processing is a new area of research that applies advanced digital signal processing methodologies to enhanced genetic data analysis. It has many promising applications in bioinformatics and the next generation of healthcare systems, in particular in the field of microarray data clustering. In this paper we present a comparative performance analysis of enhanced digital spectral analysis methods for robust clustering of gene expression across multiple microarray data samples. Three digital signal processing methods: linear predictive coding, wavelet decomposition, and fractal dimension, are studied to provide a comparative evaluation of their clustering performance on several microarray datasets. The results of this study show that the fractal approach provides the best clustering accuracy compared to the other digital signal processing methods and to well known statistical methods.
Pham, Timothy T.; Jongeling, Andre P.
In this paper, we will describe the benefits of arraying and its past as well as expected future use. The signal processing aspects of the array system are described. Field measurements from actual spacecraft tracking are also presented.
Mørk, Jesper; Romstad, Francis Pascal; Højfeldt, Sune
Reverse-biased semiconductor waveguides are efficient saturable absorbers and have a number of promising all-optical signal processing applications. Results on ultrafast modulator dynamics as well as demonstrations and investigations of wavelength conversion and regeneration are presented....
Some measuring methods and signal processing systems based on analogue and digital techniques, which have been applied in magnetic field research using magnetometers with ferromagnetic transducers, are presented. (author)
The physical principles and signal processing techniques underlying bat echolocation are investigated. It is shown, by calculation and simulation, how the measured echolocation performance of bats can be achieved.
Tompkins, Willis J; Wilson, J
In the early 1990s we developed a special computer program called UW DigiScope to provide a mechanism for anyone interested in biomedical digital signal processing to study the field without requiring any instrument other than a personal computer. There are many digital filtering and pattern recognition algorithms used in processing biomedical signals. In general, students have very limited opportunity for hands-on access to the mechanisms of digital signal processing. In a typical course, the filters are designed non-interactively, which provides the student with little understanding of the design constraints of such filters or of their actual performance characteristics. UW DigiScope 3.0 is the first major update since version 2.0 was released in 1994. This paper provides details on how the new MATLAB-based version works with signals, including the filter design tool that serves as the programming interface between UW DigiScope and processing algorithms.
This document is a collection of the papers presented at international conferences and in international journals by the analogue signal processing group of the Department of Information Technology, Technical University of Denmark, in 1996 and 1997.
Peucheret, Christophe; Ding, Yunhong; Ou, Haiyan
We review our recent achievements on the use of silicon micro-ring resonators for linear optical signal processing applications, including modulation format conversion, phase-to-intensity modulation conversion and waveform shaping.
Gopi, E S
Mathematical Summary for Digital Signal Processing Applications with Matlab covers mathematics that is not usually treated in the core DSP subject but is used in DSP applications. It provides MATLAB programs with illustrations.
Thomas, J. B.
Over the past year, two Rogue GPS prototype receivers have been assembled and successfully subjected to a variety of laboratory and field tests. A functional description is presented of signal processing in the Rogue receiver, tracing the signal from RF input to the output values of group delay, phase, and data bits. The receiver can track up to eight satellites, without time multiplexing among satellites or channels, simultaneously measuring both group delay and phase for each of three channels (L1-C/A, L1-P, L2-P). The Rogue signal processing described requires generation of the code for all three channels. Receiver functional design, which emphasized accuracy, reliability, flexibility, and dynamic capability, is summarized. A detailed functional description of signal processing is presented, including C/A-channel and P-channel processing, carrier-aided averaging of group delays, checks for cycle slips, acquisition, and distinctive features.
Benjamin, R; Costrell, L
Electronics and Instrumentation, Volume 35: Modulation, Resolution and Signal Processing in Radar, Sonar and Related Systems presents the practical limitations and potentialities of advanced modulation systems. This book discusses the concepts and techniques in the radar context, but they are equally essential to sonar and to a wide range of signaling and data-processing applications, including seismology, radio astronomy, and band-spread communications. Organized into 15 chapters, this volume begins with an overview of the principal developments sought in pulse radar. This text then provides a
Spaten, Ole Michael
Interviews and observations from a longitudinal study (1998-2009) have been analyzed to approach a contextual understanding of children's identity and self-concept development (Spaten 2007). Bronfenbrenner assumed (2005) that scientific limitations in widespread approaches to research on children's development may be overcome by broader perspectives in theory and methodology. He proposed a scientific perspective as the ecology of human development and the Person-Process-Context-Time model (ibid). Our results include that children's and adolescents' active internalization (Valsiner & Van der Veer, 1988) and dialogical, cultural self-authorship are important themes for an understanding of processes of self-concept development among Danish children and adolescents from diverse cultural backgrounds. Limitations of this research as well as further directions for new studies...
Mu, Jiasong; Wang, Wei; Zhang, Baoju
This book brings together papers presented at the 4th International Conference on Communications, Signal Processing, and Systems, which provides a venue to disseminate the latest developments and to discuss the interactions and links between these multidisciplinary fields. Spanning topics ranging from Communications, Signal Processing and Systems, this book is aimed at undergraduate and graduate students in Electrical Engineering, Computer Science and Mathematics, researchers and engineers from academia and industry as well as government employees (such as NSF, DOD, DOE, etc).
Nielsen, M L; Mørk, Jesper
Significant advancements in technology and basic understanding of device physics are bringing optical signal processing closer to a commercial breakthrough. In this paper we describe the main challenges in high-speed SOA-based switching.
Galili, Michael; Hu, Hao; Guan, Pengyu
This paper will discuss time lenses and their broad range of applications. A number of recent demonstrations of complex high-speed optical signal processing using time lenses will be outlined, with focus on the operating principle.
This document is a collection of the papers presented at international conferences and in international journals by the analogue signal processing group of the Electronics Institute, Technical University of Denmark, in 1994 and 1995.
Převorovský, Zdeněk; Krofta, Josef; Kober, Jan; Dvořáková, Zuzana; Chlada, Milan; Dos Santos, S.
Roč. 19, č. 12 (2014). ISSN 1435-4934. [European Conference on Non-Destructive Testing (ECNDT 2014), 11th, Praha, 06.10.2014-10.10.2014.] Institutional support: RVO:61388998. Keywords: acoustic emission (AE); ultrasonic testing (UT); signal processing; source location; time reversal acoustics; signal processing and transfer. Subject RIV: BI - Acoustics. http://www.ndt.net/events/ECNDT2014/app/content/Slides/637_Prevorovsky.pdf
Eryurek, E.; Upadhyaya, B.R.; Kavaklioglu, K.
Signal validation and plant subsystem tracking in power and process industries require the prediction of one or more state variables. Both heteroassociative and autoassociative neural networks were applied for characterizing relationships among sets of signals. A multi-layer neural network paradigm was applied for sensor and process monitoring in a Pressurized Water Reactor (PWR). This nonlinear interpolation technique was found to be very effective for these applications.
Zeynalov, Sh.S.; Ahmadov, Q.S.
Data acquisition systems for nuclear spectroscopy have traditionally been based on analog shaping amplifiers followed by analog-to-digital converters. Recently, however, new systems based on digital signal processing make it possible to replace the analog shaping and timing circuitry with numerical algorithms that derive properties of the pulse, such as its amplitude. DSP is a fully numerical analysis of the detector pulse signals, and this technique demonstrates significant advantages over analog systems in some circumstances. From a mathematical point of view, the signal evolution from the detector to the ADC can be considered as a sequence of transformations that can be described by precisely defined mathematical expressions. Digital signal processing with ADCs makes it possible to utilize further information on the signal pulses from radiation detectors. In the experiment, each step of the signal generation in the 3He-filled proportional counter was described using digital signal processing techniques (DSP). The electronic system consisted of a detector, a preamplifier and a digital oscilloscope. The pulses from the detector were digitized using a digital storage oscilloscope, which allowed signal digitization with an accuracy of 8 bits (256 levels) and a sampling rate of up to 5 × 10^8 samples/s. Cf-252 was used as the neutron source. An algorithm, easily programmed in modern computer programming languages, was written to obtain the detector output current pulse I(t) created by the motion of the ion/electron pairs.
Radioisotopes can be used to obtain signals or images in order to recognize the information inside industrial systems. The main problems with using these techniques are the difficulty of identifying the obtained signals or images and the need for skilled experts to interpret the output data of these applications. At present, the interpretation of the output data from these applications is performed mainly manually, depending heavily on the skills and experience of trained operators. This process is time consuming and the results typically suffer from inconsistency and errors. The objective of the thesis is to apply advanced digital signal processing techniques to improve the treatment and interpretation of the output data from different Industrial Radioisotope Applications (IRA). This thesis focuses on two IRAs: the Residence Time Distribution (RTD) measurement and the defect inspection of welded pipes using a gamma source (gamma radiography). In the RTD measurement application, this thesis presents methods for signal pre-processing and modeling of the RTD signals. Simulation results are presented for two case studies. The first case study is a laboratory experiment for measuring the RTD in a water flow rig. The second case study is an experiment for measuring the RTD in a phosphate production unit. The thesis proposes an approach for RTD signal identification in the presence of noise. In this approach, after signal processing, the Mel Frequency Cepstral Coefficients (MFCCs) and polynomial coefficients are extracted from the processed signal or from one of its transforms. The Discrete Wavelet Transform (DWT), Discrete Cosine Transform (DCT), and Discrete Sine Transform (DST) have been tested and compared for efficient feature extraction. Neural networks have been used for matching of the extracted features. Furthermore, the Power Density Spectrum (PDS) of the RTD signal has also been used instead of the discrete
Chabries, D M; Christiansen, R W; Brey, R H; Robinette, M S; Harris, R W
A major complaint of individuals with normal hearing and hearing impairments is a reduced ability to understand speech in a noisy environment. This paper describes the concept of adaptive noise cancelling for removing noise from corrupted speech signals. Application of adaptive digital signal processing has long been known and is described from a historical as well as technical perspective. The Widrow-Hoff LMS (least mean square) algorithm developed in 1959 forms the introduction to modern adaptive signal processing. This method uses a "primary" input which consists of the desired speech signal corrupted with noise and a second "reference" signal which is used to estimate the primary noise signal. By subtracting the adaptively filtered estimate of the noise, the desired speech signal is obtained. Recent developments in the field as they relate to noise cancellation are described. These developments include more computationally efficient algorithms as well as algorithms that exhibit improved learning performance. A second method for removing noise from speech, for use when no independent reference for the noise exists, is referred to as single channel noise suppression. Both adaptive and spectral subtraction techniques have been applied to this problem--often with the result of decreased speech intelligibility. Current techniques applied to this problem are described, including signal processing techniques that offer promise in the noise suppression application.
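The Widrow-Hoff LMS noise canceller described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: "primary" carries the desired signal plus noise, "reference" is a correlated measurement of the noise alone, and the data below are synthetic toy signals (a scaled sinusoid standing in for the noise), not speech.

```python
import math

def lms_cancel(primary, reference, n_taps=4, mu=0.05):
    """Return e[n] = primary[n] - (adaptive FIR estimate of the noise)."""
    w = [0.0] * n_taps            # adaptive filter weights
    buf = [0.0] * n_taps          # most recent reference samples
    errors = []
    for d, x in zip(primary, reference):
        buf = [x] + buf[:-1]      # shift in the newest reference sample
        y = sum(wi * xi for wi, xi in zip(w, buf))      # noise estimate
        e = d - y                 # error = cleaned-signal estimate
        w = [wi + 2.0 * mu * e * xi for wi, xi in zip(w, buf)]  # LMS update
        errors.append(e)
    return errors

# Toy case: the primary input is pure "noise" (a scaled copy of the
# reference), so after adaptation the error should shrink toward zero.
reference = [math.sin(0.3 * n) for n in range(2000)]
primary = [0.8 * r for r in reference]
out = lms_cancel(primary, reference)
```

When the primary input also contains speech uncorrelated with the reference, the same update drives the filter toward the noise path only, so the error output converges to the speech estimate rather than to zero.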
Ashour, M.A.; Abo Shosha, A.M.
In this paper, a Digital Signal Processing (DSP) computer-based system for nuclear scintillation signals with exponential decay is presented. The main objective of this work is to identify the characteristics of the acquired signals reliably; this is done by transferring the signal from the random-signal domain to the deterministic domain using digital manipulation techniques. The proposed system consists of two major parts. The first part is a high-performance data acquisition system (DAQ) based on a multi-channel logic scope, which is interfaced with the host computer through a General Purpose Interface Board (GPIB), Ver. IEEE 488.2. A Graphical User Interface (GUI) has also been designed for this purpose using graphical programming facilities. The second part of the system is the DSP software algorithm, which analyzes, demonstrates and monitors these data to obtain the main characteristics of the acquired signals: the amplitude, the pulse count, the pulse width, the decay factor, and the arrival time.
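As a toy illustration of extracting one of the characteristics listed above (the decay factor of an exponentially decaying scintillation pulse), a log-linear least-squares fit can be used. This is a minimal sketch with synthetic noiseless data, not the authors' algorithm.

```python
import math

def fit_exponential_tail(samples, dt):
    """Estimate amplitude A and decay constant tau of y(t) = A*exp(-t/tau)
    by a least-squares line fit to ln(y) versus t."""
    ts = [i * dt for i in range(len(samples))]
    ys = [math.log(v) for v in samples]       # linearize: ln y = ln A - t/tau
    n = len(ts)
    mean_t = sum(ts) / n
    mean_y = sum(ys) / n
    slope = (sum((t - mean_t) * (y - mean_y) for t, y in zip(ts, ys))
             / sum((t - mean_t) ** 2 for t in ts))
    tau = -1.0 / slope
    amplitude = math.exp(mean_y - slope * mean_t)
    return amplitude, tau

# Synthetic noiseless pulse tail: A = 5.0, tau = 2.0, sampled every 0.1 units
tail = [5.0 * math.exp(-t / 2.0) for t in (0.1 * i for i in range(50))]
A, tau = fit_exponential_tail(tail, 0.1)
```

On real digitized pulses the tail would first be baseline-subtracted and restricted to strictly positive samples before taking the logarithm.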
Comoretto, Gianni; Chiello, Riccardo; Roberts, Matt; Halsall, Rob; Adami, Kristian Zarb; Alderighi, Monica; Aminaei, Amin; Baker, Jeremy; Belli, Carolina; Chiarucci, Simone; D'Angelo, Sergio; De Marco, Andrea; Mura, Gabriele Dalle; Magro, Alessio; Mattana, Andrea; Monari, Jader; Naldi, Giovanni; Pastore, Sandro; Perini, Federico; Poloni, Marco; Pupillo, Giuseppe; Rusticelli, Simone; Schiaffino, Marco; Schillirò, Francesco; Zaccaro, Emanuele
The signal processing firmware that has been developed for the Low Frequency Aperture Array component of the Square Kilometre Array (SKA) is described. The firmware is implemented on a dual-FPGA board that is capable of processing the streams from 16 dual-polarization antennas. Data processing includes channelization of the sampled data for each antenna, correction for instrumental response and for geometric delays, and formation of one or more beams by combining the aligned streams. The channelizer uses an oversampling polyphase filterbank architecture, allowing frequency-continuous processing of the input signal without discontinuities between spectral channels. Each board processes the streams from 16 antennas as part of a larger beamforming system, linked by standard Ethernet interconnections. There are envisaged to be 8192 of these signal processing platforms in the first phase of the SKA, so particular attention has been devoted to ensuring that the design is low cost and low power.
Wiley, H. Steven
The ability of cells to detect and decode information about their extracellular environment is critical to generating an appropriate response. In multicellular organisms, cells must decode dozens of signals from their neighbors and extracellular matrix to maintain tissue homeostasis while still responding to environmental stressors. How cells detect and process information from their surroundings through a surprisingly limited number of signal transduction pathways is one of the most important questions in biology. Despite many decades of research, many of the fundamental principles that underlie cell signal processing remain obscure. However, in this issue of Cell Systems, Gillies et al. present compelling evidence that the early response gene circuit can act as a linear signal integrator, thus providing significant insight into how cells handle fluctuating signals and noise in their environment.
Thomas, J. B.
Signal-processing theory for the TurboRogue receiver is presented. The signal form is traced from its formation at the GPS satellite, to the receiver antenna, and then through the various stages of the receiver, including extraction of phase and delay. The analysis treats the effects of ionosphere, troposphere, signal quantization, receiver components, and system noise, covering processing in both the 'code mode' when the P code is not encrypted and in the 'P-codeless mode' when the P code is encrypted. As a possible future improvement to the current analog front end, an example of a highly digital front end is analyzed.
Rebizant, Waldemar; Wiszniewski, Andrzej
Digital Signal Processing in Power System Protection and Control bridges the gap between the theory of protection and control and the practical applications of protection equipment. Understanding how protection functions is crucial not only for equipment developers and manufacturers, but also for their users who need to install, set and operate the protection devices in an appropriate manner. After introductory chapters related to protection technology and functions, Digital Signal Processing in Power System Protection and Control presents the digital algorithms for signal filtering, followed
Turley, Jordan A; Nilsson, Michael; Walker, Frederick Rohan; Johnson, Sarah J
Intrinsic Optical Signal imaging is a technique which allows the visualisation and mapping of activity related changes within the brain with excellent spatial and temporal resolution. We analysed a variety of signal and image processing techniques applied to real mouse imaging data. The results were compared in an attempt to overcome the unique issues faced when performing the technique on mice and improve the understanding of post processing options available.
Oxenløwe, Leif Katsuo; Galili, Michael; Hu, Hao
This paper reviews our recent advances in ultra-high speed serial optical communications. It describes Tbit/s optical signal processing and various materials allowing for this, as well as network scenarios embracing this technology...
Fu, Chi Yung; Petrich, Loren
A signal processing method and system combining smooth-level wavelet pre-processing with artificial neural networks, all in the wavelet domain, for signal denoising and extraction. Upon receiving a signal corrupted with noise, an n-level decomposition of the signal is performed using a discrete wavelet transform to produce a smooth component and a rough component for each decomposition level. The nth-level smooth component is then input into a corresponding neural network pre-trained to filter out noise in that component by pattern recognition in the wavelet domain. Additional rough components, beginning at the highest level, may also be retained and input into corresponding neural networks pre-trained to filter out noise in those components, also by pattern recognition in the wavelet domain. In any case, an inverse discrete wavelet transform is performed on the combined output from all the neural networks to recover a clean signal back in the time domain.
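The wavelet-domain structure of this scheme can be sketched with a one-level Haar transform. This is an illustrative simplification, not the patented method: simple soft-thresholding of the rough (detail) component stands in for the pre-trained neural networks, and the signal length is assumed even.

```python
# One-level Haar DWT: split the signal into a smooth (approximation) part
# and a rough (detail) part, filter the rough part, then invert.

def haar_decompose(signal):
    """One-level Haar DWT (signal length assumed even)."""
    smooth = [(signal[i] + signal[i + 1]) / 2.0 for i in range(0, len(signal), 2)]
    rough = [(signal[i] - signal[i + 1]) / 2.0 for i in range(0, len(signal), 2)]
    return smooth, rough

def haar_reconstruct(smooth, rough):
    """Inverse one-level Haar DWT."""
    out = []
    for s, r in zip(smooth, rough):
        out.extend([s + r, s - r])
    return out

def soft_threshold(coeffs, t):
    """Shrink small coefficients toward zero (stand-in for the NN filtering)."""
    return [max(abs(c) - t, 0.0) * (1.0 if c >= 0 else -1.0) for c in coeffs]

x = [1.0, 1.1, 0.9, 1.0, 3.0, 3.1, 2.9, 3.0]   # step signal plus small noise
smooth, rough = haar_decompose(x)
denoised = haar_reconstruct(smooth, soft_threshold(rough, 0.1))
```

In the patented system the thresholding step is replaced by neural networks trained to recognize noise patterns in each wavelet component, and the decomposition is repeated over n levels rather than one.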
Podboy, Gary; Stephens, David
Preliminary results are presented from a test conducted to determine how well microphone phased array processing software could pull an acoustic signal out of background noise. The array consisted of 24 microphones in an aerodynamic fairing designed to be mounted in-flow. The processing was conducted using Functional Beamforming software developed by Optinav combined with cross spectral matrix subtraction. The test was conducted in the free-jet of the Nozzle Acoustic Test Rig at NASA GRC. The background noise was produced by the interaction of the free-jet flow with the solid surfaces in the flow. The acoustic signals were produced by acoustic drivers. The results show that the phased array processing was able to pull the acoustic signal out of the background noise provided the signal was no more than 20 dB below the background noise level measured using a conventional single microphone equipped with an aerodynamic forebody.
Roncagliolo, Pablo; Arredondo, Luis; Gonzalez, AgustIn
This article describes technical aspects involved in programming a system for the acquisition, processing and transmission of biomedical signals using mobile devices. This task is aligned with the ongoing development of new technologies for diagnosis and treatment, based on the feasibility of continuously measuring different variables such as electrocardiographic signals, blood pressure, oxygen concentration, pulse or simply temperature. The contribution of this technology lies in its portability and low cost, which allow its massive use. Specifically, this work analyzes the feasibility of acquiring and processing signals on a standard smartphone. The results show that these devices now have enough processing capacity to run signal acquisition systems. Such systems, together with external servers, make it possible to imagine a near future in which continuous measurement of biomedical variables will not be restricted to hospitals but will also become more frequent in daily life and at home.
The purpose of this study is to develop methods in array signal processing which achieve accurate signal reconstruction from limited observations, resulting in high-resolution imaging. The focus is on underwater acoustic applications and sonar signal processing, both in active (transmit and receive) and passive (receive only) mode. The study addresses the limitations of existing methods and shows that, in many cases, the proposed methods overcome these limitations and outperform traditional methods for acoustic imaging. The project comprises two parts. The first part deals with computational methods in active sonar signal processing for detection and imaging of submerged oil contamination in sea water from a deep-water oil leak. The submerged oil field is modeled as a fluid medium exhibiting spatial perturbations in the acoustic parameters from their mean ambient values, which cause weak scattering...
Cosnac, B. de; Gariod, R.; Max, J.; Monge, V.
This note is a general survey of the processing of physiological signals. After an introduction on electrodes and their limitations, the physiological nature of the main signals is briefly recalled. Different methods (signal averaging, spectral analysis, morphological shape analysis) are described through applications to magnetocardiography, electroencephalography, cardiography and electronystagmography. Processing means (single portable instruments and programmable systems) are described through the example of their application to rheography and to the Plurimat'S general system. In conclusion, the methods of signal processing are dominated by the morphological analysis of curves and by the need for a greater role for statistical classification. As for instruments, microprocessors will appear, but specific operators linked to computers will certainly grow
Fourier analysis is one of the most useful tools in many applied sciences. The recent development of wavelet analysis indicates that, in spite of its long history and well-established applications, the field is still one of active research. This text bridges the gap between engineering and mathematics, providing a rigorously mathematical introduction to Fourier analysis, wavelet analysis and related mathematical methods, while emphasizing their uses in signal processing and other applications in communications engineering. The interplay between Fourier series and Fourier transforms is at the heart of signal processing, which is couched most naturally in terms of the Dirac delta function and Lebesgue integrals. The exposition is organized into four parts. The first is a discussion of one-dimensional Fourier theory, including the classical results on convergence and the Poisson sum formula. The second part is devoted to the mathematical foundations of signal processing - sampling, filtering, digital signal proc...
Moon, Hee Gun; Park, Sang Min; Kim, Jung Seon; Shon, Chang Ho; Park, Heui Youn; Koo, In Soo
The function of the Process Instrumentation System (PIS) for SMART is to acquire process data from sensors or transmitters. The PIS consists of a signal conditioner, an A/D converter, a DSP (digital signal processor) and an NIC (network interface card), so it is a fully digital system after the A/D converter. In a commercial plant, the PI cabinet and the PDAS (Plant Data Acquisition System) are responsible for data acquisition from sensors and transmitters, including RTDs, thermocouples, level, flow, pressure and so on. The PDAS has software that processes each sensor's data, and the PI cabinet has the signal conditioner needed for maintenance and testing. The signal conditioner has a potentiometer to adjust span and zero for test and maintenance. The PIS of SMART also has a signal conditioner with span and zero adjustment, as in commercial plants, because the signal conditioner conditions the signal for the A/D converter to a range such as 0∼10 Vdc. However, adjusting span and zero is a manual test and calibration procedure, so this paper presents a method of signal validation and calibration that exploits the digital features of SMART. The converters include I/E (current to voltage), R/E (resistance to voltage), F/E (frequency to voltage) and V/V (voltage to voltage). This paper presents the signal validation and calibration only for the I/E converter, which converts level, pressure and flow signals such as 4∼20 mA into signals suitable for A/D conversion such as 0∼10 Vdc
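The 4∼20 mA to 0∼10 Vdc signal conditioning described above is a linear scaling of the transmitter current into the ADC input range. A minimal sketch of that mapping (illustrative only, not the SMART implementation):

```python
def current_to_voltage(i_ma, i_min=4.0, i_max=20.0, v_min=0.0, v_max=10.0):
    """Map a 4-20 mA transmitter current onto the 0-10 V ADC input range."""
    span = (i_ma - i_min) / (i_max - i_min)   # fraction of full scale
    return v_min + span * (v_max - v_min)

# 4 mA (zero) -> 0 V, 12 mA (mid-scale) -> 5 V, 20 mA (full scale) -> 10 V
v_zero = current_to_voltage(4.0)
v_mid = current_to_voltage(12.0)
v_full = current_to_voltage(20.0)
```

The same linear form also describes the span/zero calibration: adjusting `i_min` shifts the zero, and adjusting `i_max - i_min` changes the span.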
Wu, Hau-Tieng; Talmon, Ronen; Lo, Yu-Lun
In this paper, two modern adaptive signal processing techniques, empirical intrinsic geometry and the synchrosqueezing transform, are applied to quantify different dynamical features of respiratory and electroencephalographic signals. We show that the proposed features are rigorously supported theoretically and capture the sleep information hidden inside the signals. The features are used as input to multiclass support vector machines with the radial basis function kernel to automatically classify sleep stages. The effectiveness of the classification based on the proposed features is shown to be comparable to human expert classification: the proposed classification of awake, REM, N1, N2, and N3 sleep stages based on the respiratory signal (resp. respiratory and EEG signals) has an overall accuracy of 81.7% (resp. 89.3%) in the relatively normal subject group. In addition, by examining the combination of the respiratory signal with the electroencephalographic signal, we conclude that the respiratory signal contains ample sleep information, which supplements the information stored in the electroencephalographic signal.
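The classification stage described above can be sketched with a multiclass SVM using an RBF kernel; the feature vectors below are synthetic clusters standing in for three classes, not the authors' actual sleep features or data:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical 2-D feature vectors for three classes (synthetic clusters).
centers = {0: (0.0, 0.0), 1: (3.0, 0.0), 2: (0.0, 3.0)}
X = np.vstack([rng.normal(c, 0.5, size=(50, 2)) for c in centers.values()])
y = np.repeat(list(centers.keys()), 50)

# Multiclass SVM with a radial-basis-function kernel
# (scikit-learn handles the multiclass case via one-vs-one internally).
clf = SVC(kernel="rbf", gamma="scale", C=1.0).fit(X, y)
accuracy = clf.score(X, y)
```

With well-separated clusters the training accuracy is near 1.0; on real sleep data one would of course report cross-validated accuracy instead.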
Gore, Brian Francis; Wolter, Cynthia A.
A necessary step when developing next generation systems is to understand the tasks that operators will perform. One NextGen concept under evaluation termed Single Pilot Operations (SPO) is designed to improve the efficiency of airline operations. One SPO concept includes a Pilot on Board (PoB), a Ground Station Operator (GSO), and automation. A number of procedural changes are likely to result when such changes in roles and responsibilities are undertaken. Automation is expected to relieve the PoB and GSO of some tasks (e.g. radio frequency changes, loading expected arrival information). A major difference in the SPO environment is the shift to communication-cued crosschecks (verbal / automated) rather than movement-cued crosschecks that occur in a shared cockpit. The current article highlights a task analytic process of the roles and responsibilities between a PoB, an approach-phase GSO, and automation.
At the Specialists' Meeting on Sodium Boiling Detection organized by the International Working Group on Fast Reactors (IWGFR) of the International Atomic Energy Agency at Chester in the United Kingdom in 1981, various methods of detecting sodium boiling were reported. However, it was not possible to make a comparative assessment of these methods because the signal conditions differed between experiments. The participants of this meeting therefore recommended that a benchmark test be carried out to evaluate and compare signal processing methods for boiling detection. Organization of a Co-ordinated Research Programme (CRP) on signal processing techniques for sodium boiling noise detection was also recommended at the 16th meeting of the IWGFR. The CRP on Signal Processing Techniques for Sodium Boiling Noise Detection was set up in 1984, with eight laboratories from six countries agreeing to participate. The overall objective of the programme was the development of reliable on-line signal processing techniques which could be used for the detection of sodium boiling in an LMFBR core. During the first stage of the programme, a number of existing processing techniques used by different countries were compared and evaluated. In the course of further work, an algorithm for implementation of this sodium boiling detection system in a nuclear reactor will be developed. It was also considered that the acoustic signal processing techniques developed for boiling detection could well contribute to other acoustic applications in the reactor. This publication consists of two parts. Part I is the final report of the co-ordinated research programme on signal processing techniques for sodium boiling noise detection. Part II contains two introductory papers and 20 papers presented at four research co-ordination meetings since 1985. A separate abstract was prepared for each of these 22 papers.
Millette, Véronique; Baddour, Natalie
Heart signals represent an important way to evaluate cardiovascular function and often what is desired is to quantify the level of some signal of interest against the louder backdrop of the beating of the heart itself. An example of this type of application is the quantification of cavitation in mechanical heart valve patients. An algorithm is presented for the quantification of high-frequency, non-deterministic events such as cavitation from recorded signals. A closed-form mathematical analysis of the algorithm investigates its capabilities. The algorithm is implemented on real heart signals to investigate usability and implementation issues. Improvements are suggested to the base algorithm including aligning heart sounds, and the implementation of the Short-Time Fourier Transform to study the time evolution of the energy in the signal. The improvements result in better heart beat alignment and better detection and measurement of the random events in the heart signals, so that they may provide a method to quantify nondeterministic events in heart signals. The use of the Short-Time Fourier Transform allows the examination of the random events in both time and frequency allowing for further investigation and interpretation of the signal. The presented algorithm does allow for the quantification of nondeterministic events but proper care in signal acquisition and processing must be taken to obtain meaningful results.
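The use of the Short-Time Fourier Transform to localize high-frequency events in time can be sketched as follows. The signal here is synthetic (a low-frequency "beat" plus a brief high-frequency burst standing in for a cavitation-like event), not real heart data, and the sampling rate and window length are illustrative assumptions:

```python
import numpy as np
from scipy.signal import stft

fs = 2000.0
t = np.arange(0, 1.0, 1 / fs)

# Synthetic recording: a 5 Hz "beat" plus a 400 Hz burst of 20 ms at t = 0.5 s.
beat = np.sin(2 * np.pi * 5 * t)
burst = (np.abs(t - 0.5) < 0.01) * np.sin(2 * np.pi * 400 * t)
x = beat + burst

# STFT: each column of Z is the spectrum of one short time slice.
f, seg_times, Z = stft(x, fs=fs, nperseg=128)

# Energy above 200 Hz in each slice: the burst shows up as a clear peak.
hf = f > 200
energy = np.sum(np.abs(Z[hf, :]) ** 2, axis=0)
peak_time = seg_times[np.argmax(energy)]
```

Tracking this band-limited energy over time is exactly the kind of time-frequency view the abstract describes: the nondeterministic event is localized both in time (via `peak_time`) and in frequency (via the band selection).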
Malyshkin, G. S.; Sidel'nikov, G. B.
Different methods of optimal and adaptive processing of hydroacoustic signals under multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and of "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. Results of simulating different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector based on classical or fast projection algorithms is analyzed, which estimates the background using median filtering or the method of bilateral spatial contrast.
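The Capon (minimum-variance distortionless response) spatial spectrum mentioned above can be illustrated for a linear equidistant array with a single narrowband source. This is a minimal simulation sketch under simple assumptions (half-wavelength spacing, white noise, no multipath), not the authors' algorithm:

```python
import numpy as np

def steering(n_sensors, theta, d=0.5):
    # Plane-wave steering vector for a linear array, spacing d wavelengths.
    k = np.arange(n_sensors)
    return np.exp(2j * np.pi * d * k * np.sin(theta))

rng = np.random.default_rng(1)
n, snaps = 10, 500
theta_src = np.deg2rad(20.0)
a = steering(n, theta_src)

# Snapshots: one narrowband source plus white sensor noise.
s = (rng.normal(size=snaps) + 1j * rng.normal(size=snaps)) / np.sqrt(2)
noise = 0.1 * (rng.normal(size=(n, snaps)) + 1j * rng.normal(size=(n, snaps)))
X = np.outer(a, s) + noise

# Sample covariance with light diagonal loading for numerical stability.
R = X @ X.conj().T / snaps
Rinv = np.linalg.inv(R + 1e-6 * np.eye(n))

# Capon spectrum P(theta) = 1 / (a^H R^-1 a), scanned over bearing.
angles = np.deg2rad(np.linspace(-90, 90, 361))
p = np.array([1.0 / np.real(steering(n, th).conj() @ Rinv @ steering(n, th))
              for th in angles])
est_deg = np.rad2deg(angles[np.argmax(p)])
```

The spectrum peaks sharply at the source bearing; the multipath and strong-signal normalization issues the abstract discusses arise precisely when these ideal assumptions break down.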
Kuo, Sen M; Tian, Wenshun
This book combines DSP principles with real-time implementations and applications, updated for the new eZdsp USB Stick, which is low cost, portable and widely employed in DSP labs. Real-Time Digital Signal Processing introduces fundamental digital signal processing (DSP) principles and has been updated to include the latest DSP applications, introduce new software development tools and adjust the software design process to reflect the latest advances in the field. In the third edition of the book, the key aspect of hands-on experiments has been enhanced to make the DSP principle
Deep learning is a subfield of machine learning which aims to learn a hierarchy of features from input data. Researchers have intensively investigated deep learning algorithms for solving challenging problems in many areas, such as image classification, speech recognition, signal processing, and natural language processing. In this study, we not only review typical deep learning algorithms in computer vision and signal processing but also provide detailed information on how to apply deep learning to specific areas such as road crack detection, fault diagnosis, and human activity detection. This study also discusses the challenges of designing and training deep neural networks.
I. P. Gurov
The paper deals with a modification of the recurrent processing algorithm for a discrete sequence of interferometric signal samples. The algorithm is based on successive reference-signal prediction: a set ("cloud") of values of the signal parameter vector is specified by the Monte Carlo method, compared with the measured signal value, and the residual is used to refine the signal parameter values at each discretization step. The proposed modified algorithm uses a multi-cloud prediction model: a set of normally distributed clouds is created, with expectation values selected on the basis of the criterion of minimum residual between predicted and observed values. Experimental testing of the proposed method applied to estimation of the fringe initial phase in phase-shifting interferometry has been conducted. The variance of the estimate of the signal reconstructed from the estimated initial phase does not exceed 2% of the maximum signal value. It has been shown that the proposed algorithm makes it possible to avoid the 2π ambiguity and ensures stable recovery of interference fringe phase of a complicated type without a priori information about the fringe phase distribution. Applied to the estimation of interferometric signal parameters, the proposed algorithm improves filter stability with respect to random noise and reduces the required accuracy of a priori filter parameter settings as compared with the conventional (single-cloud) implementation of the sequential Monte Carlo method.
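A minimal single-cloud sequential Monte Carlo step for estimating a fringe initial phase might look as follows. This is an illustrative sketch only; the signal model, noise level and cloud size are hypothetical, and the paper's multi-cloud construction is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(4)
true_phase = 1.2
n_samples = 64
k = np.arange(n_samples)

# Observed fringe samples: a cosine with unknown initial phase plus noise.
obs = np.cos(2 * np.pi * k / 16 + true_phase) + rng.normal(scale=0.1, size=n_samples)

# One "cloud" of candidate phase values, re-weighted sample by sample
# using the residual between predicted and observed signal values.
particles = rng.uniform(0, 2 * np.pi, size=2000)
weights = np.ones_like(particles)
for i in range(n_samples):
    pred = np.cos(2 * np.pi * i / 16 + particles)
    weights *= np.exp(-0.5 * ((obs[i] - pred) / 0.1) ** 2)
    weights /= weights.sum()          # renormalize to avoid underflow

# Circular (weighted) mean of the cloud as the phase estimate.
est_phase = np.angle(np.sum(weights * np.exp(1j * particles)))
```

Because the candidate phases cover the full 2π range, the estimate is free of the 2π ambiguity that plagues pointwise phase unwrapping; the paper's contribution is to maintain several such clouds and select among them by minimum residual.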
Oxenløwe, Leif Katsuo; Pu, Minhao; Ding, Yunhong
This paper presents an overview of recent work on the use of silicon waveguides for processing optical data signals. We will describe ultra-fast, ultra-broadband, polarisation-insensitive and phase-sensitive applications, including processing of spectrally-efficient data formats and optical phase...
The author analyzes the basic characteristics of the parallel DSP (digital signal processor) TMS320C80 and proposes related optimized image algorithms and a parallel processing method based on this parallel DSP. Real-time performance for many image processing tasks can be achieved in this way
Millecamps, Alexandre; Lowry, Kristin A; Brach, Jennifer S; Perera, Subashan; Redfern, Mark S; Sejdić, Ervin
Gait accelerometry is an important approach for gait assessment. Previous contributions have adopted various pre-processing approaches for gait accelerometry signals, but none have thoroughly investigated the effects of such pre-processing operations on the obtained results. Therefore, this paper investigated the influence of pre-processing operations on signal features extracted from gait accelerometry signals. These signals were collected from 35 participants aged over 65 years: 14 were healthy controls (HC), 10 had Parkinson's disease (PD) and 11 had peripheral neuropathy (PN). The participants walked on a treadmill at their preferred speed. Signal features in the time, frequency and time-frequency domains were computed for both raw and pre-processed signals. The pre-processing stage consisted of applying tilt correction and denoising operations to the acquired signals. We first examined the effects of these operations separately, followed by an investigation of their joint effects. Several important observations were made. First, the denoising operation alone had almost no effect in comparison to the trends observed in the raw data. Second, the tilt correction affected the reported results to a certain degree, which could lead to better discrimination between groups. Third, the combination of the two pre-processing operations yielded trends similar to those of the tilt correction alone. These results indicate that while gait accelerometry is a valuable approach for gait assessment, any pre-processing steps must be adopted carefully, as they alter the observed findings.
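The two pre-processing operations discussed, tilt correction and denoising, can be sketched on a synthetic two-axis accelerometry trace. The tilt angle, sampling rate and filter cut-off below are illustrative assumptions, not the study's settings:

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(2)
fs = 100.0
t = np.arange(0, 10, 1 / fs)

# Synthetic gait acceleration (2 Hz steps) recorded by a sensor tilted
# 15 degrees in the sagittal plane, plus wideband measurement noise.
tilt = np.deg2rad(15)
vert = 9.81 + np.sin(2 * np.pi * 2 * t)          # vertical axis incl. gravity
ap = 0.2 * np.sin(2 * np.pi * 2 * t + 1.0)       # anterior-posterior axis
acc = np.vstack([np.cos(tilt) * vert - np.sin(tilt) * ap,
                 np.sin(tilt) * vert + np.cos(tilt) * ap])
acc += rng.normal(scale=0.3, size=acc.shape)

# Tilt correction: rotate so the mean (gravity) vector lies along axis 0.
g = acc.mean(axis=1)
angle = np.arctan2(g[1], g[0])
c, s = np.cos(-angle), np.sin(-angle)
corrected = np.array([[c, -s], [s, c]]) @ acc

# Denoising: zero-phase low-pass filter (10 Hz cut-off) on each axis.
b, a = butter(4, 10 / (fs / 2))
denoised = filtfilt(b, a, corrected, axis=1)
```

After correction, gravity is confined to one axis and the other axis averages near zero, which is why tilt correction, rather than denoising, changes the extracted features most.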
In this paper, we present a review of silicon-based nonlinear devices for all-optical nonlinear processing of complex telecommunication signals. We discuss some recent developments achieved by our research group, through extensive collaborations with academic partners across Europe, on optical signal processing using silicon-germanium and amorphous-silicon based waveguides, as well as novel materials such as silicon-rich silicon nitride and tantalum pentoxide. We review the performance of four-wave-mixing wavelength conversion applied to complex signals such as differential phase shift keying (DPSK), quadrature phase shift keying (QPSK), 16-quadrature amplitude modulation (16-QAM) and 64-QAM, which dramatically enhance telecom signal spectral efficiency, paving the way to next-generation terabit all-optical networks.
Krell, Mario M.; Straube, Sirko; Seeland, Anett; Wöhrle, Hendrik; Teiwes, Johannes; Metzen, Jan H.; Kirchner, Elsa A.; Kirchner, Frank
In neuroscience large amounts of data are recorded to provide insights into cerebral information processing and function. The successful extraction of the relevant signals becomes more and more challenging due to increasing complexities in acquisition techniques and questions addressed. Here, automated signal processing and machine learning tools can help to process the data, e.g., to separate signal and noise. With the presented software pySPACE (http://pyspace.github.io/pyspace), signal processing algorithms can be compared and applied automatically on time series data, either with the aim of finding a suitable preprocessing, or of training supervised algorithms to classify the data. pySPACE originally has been built to process multi-sensor windowed time series data, like event-related potentials from the electroencephalogram (EEG). The software provides automated data handling, distributed processing, modular build-up of signal processing chains and tools for visualization and performance evaluation. Included in the software are various algorithms like temporal and spatial filters, feature generation and selection, classification algorithms, and evaluation schemes. Further, interfaces to other signal processing tools are provided and, since pySPACE is a modular framework, it can be extended with new algorithms according to individual needs. In the presented work, the structural hierarchies are described. It is illustrated how users and developers can interface the software and execute offline and online modes. Configuration of pySPACE is realized with the YAML format, so that programming skills are not mandatory for usage. The concept of pySPACE is to have one comprehensive tool that can be used to perform complete signal processing and classification tasks. It further allows to define own algorithms, or to integrate and use already existing libraries. PMID:24399965
Seabaugh, P.W.; Sellers, D.E.; Woltermann, H.A.; Boh, D.R.; Miles, J.C.; Fushimi, F.C.
A methodology (controllable unit accountability) is described that identifies controlling errors for corrective action, locates areas and time frames of suspected diversions, defines time and sensitivity limits of diversion flags, defines the time frame in which pass-through quantities of accountable material and by inference SNM remain controllable and provides a basis for identification of incremental cost associated with purely safeguards considerations. The concept provides a rationale from which measurement variability and specific safeguard criteria can be converted into a numerical value that represents the degree of control or improvement attainable with a specific measurement system or combination of systems. Currently the methodology is being applied to a high-throughput, mixed-oxide fuel fabrication process. The process described is merely used to illustrate a procedure that can be applied to other more pertinent processes
Ana Lúcia Tinoco Cabral
Students at different levels, from the early grades up to the PhD, face problems in both comprehension and text production. This paper focuses on the text plan concept according to the DTA (Discourse Text Analysis) approach, i.e., a principle of organization that allows students to put their production intention into practice and to arrange text information while producing, and that is responsible for the text's compositional structure (Adam, 2008). The study analyzes the relation between the text plan and the writing planning process, in which the former provides the latter with theoretical support. To develop this research, the study covers some issues related to reading skill, analyzes an argumentative text in terms of its text plan, and presents some reflections on the writing process, focusing on the relation between the text plan and the writing planning process.
Tu, Bing; Li, De Sheng; Lin, En Huai; Ji, Miao Miao
Wireless measurement while drilling (MWD) transmits data using mud pulse signals; the surface decoding system collects the mud pulse signal, then decodes and displays the down-hole parameters according to the designed encoding rules. Correct detection and recognition of the received mud pulse signal by the surface decoding system is one of the key technologies of MWD. This paper introduces the Manchester encoding used to transmit data and the format of the wireless data transmitted from down-hole, and develops a surface decoding system. The decoding algorithm uses FIR (finite impulse response) digital filtering to denoise the mud pulse signal, then adopts a base-value correction algorithm to eliminate the pump-pulse base value from the denoised signal, analyzes the waveform shape of the Manchester-encoded mud pulse signal over three bit cycles, and applies a pattern-similarity recognition algorithm to mud pulse signal recognition. Field experiment results show that the developed device can correctly extract and recognize the mud pulse signal with a simple and practical decoding process, and meets the requirements of engineering application.
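The FIR denoising step can be illustrated with scipy; the pulse shape, sampling rate and cut-off frequency below are hypothetical stand-ins, not the paper's field parameters:

```python
import numpy as np
from scipy.signal import firwin, lfilter

fs = 100.0
t = np.arange(0, 4, 1 / fs)

# Hypothetical mud-pulse trace: slow pressure pulses (~1 Hz) buried in
# higher-frequency pump noise (20 Hz here for illustration).
pulses = (np.sin(2 * np.pi * 1 * t) > 0.5).astype(float)
x = pulses + 0.4 * np.sin(2 * np.pi * 20 * t)

# 101-tap linear-phase FIR low-pass, 5 Hz cut-off, to suppress pump noise.
taps = firwin(101, 5 / (fs / 2))
y = lfilter(taps, 1.0, x)

def band_power(sig, f0):
    # Power of the FFT bin closest to frequency f0.
    spec = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    return spec[np.argmin(np.abs(freqs - f0))]

# Compare the 20 Hz pump-noise line before and after filtering,
# skipping the filter's start-up transient.
attenuation = band_power(y[200:], 20.0) / band_power(x[200:], 20.0)
```

A linear-phase FIR filter is the natural choice here because it delays all pulse edges equally, preserving the Manchester bit-cell timing that the later pattern-similarity stage depends on.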
Biomedical signal and image processing constitute a dynamic area of specialization in both the academic and research aspects of biomedical engineering. The concepts of signal and image processing have been widely used to extract physiological information in implementing many clinical procedures for sophisticated medical practices and applications. In this paper, the relationship between electrophysiological signals, i.e., the electrocardiogram (ECG), electromyogram (EMG) and electroencephalogram (EEG), and functional image processing, together with their derived interactions, is discussed. Examples are investigated in various case studies, such as neurosciences, functional imaging and the cardiovascular system, using different algorithms and methods. The interaction between the information extracted from multiple signals and modalities seems very promising. Advanced algorithms and methods for information retrieval based on time-frequency representation are investigated. Finally, some example algorithms are discussed in which electrophysiological signals and functional images are properly extracted and have a significant impact on various biomedical applications. Keywords: Biomedical signals and images, Processing, Analysis
Sheng, Hu; Qiu, TianShuang
Fractional processes are widely found in science, technology and engineering systems. In Fractional Processes and Fractional-order Signal Processing, some complex random signals, characterized by the presence of a heavy-tailed distribution or non-negligible dependence between distant observations (local and long memory), are introduced and examined from the ‘fractional’ perspective using simulation, fractional-order modeling and filtering and realization of fractional-order systems. These fractional-order signal processing (FOSP) techniques are based on fractional calculus, the fractional Fourier transform and fractional lower-order moments. Fractional Processes and Fractional-order Signal Processing: • presents fractional processes of fixed, variable and distributed order studied as the output of fractional-order differential systems; • introduces FOSP techniques and the fractional signals and fractional systems point of view; • details real-world-application examples of FOSP techniques to demonstr...
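Fractional-order differentiation, the basic operation behind the FOSP techniques described above, can be sketched numerically with the Grünwald-Letnikov definition; a minimal implementation, not taken from the book:

```python
import numpy as np

def gl_fractional_diff(x, alpha, h=1.0):
    """Grunwald-Letnikov fractional derivative of order alpha (sketch).

    Uses the truncated sum D^a x(t_i) ~ h^-a * sum_k w_k x(t_{i-k}),
    where w_k = (-1)^k * binom(alpha, k), built by the standard recursion.
    """
    n = len(x)
    w = np.ones(n)
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    y = np.array([np.dot(w[:i + 1], x[i::-1]) for i in range(n)])
    return y / h ** alpha

t = np.linspace(0, 1, 101)
x = t ** 2

# Sanity check: alpha = 1 reduces to an ordinary backward difference,
# and alpha = 0.5 gives the half-derivative 2 t^1.5 / Gamma(2.5).
d1 = gl_fractional_diff(x, 1.0, h=t[1] - t[0])
half = gl_fractional_diff(x, 0.5, h=t[1] - t[0])
```

For integer orders the weights terminate (for alpha = 1 they are 1, -1, 0, 0, ...), recovering classical differences; for fractional orders every past sample contributes, which is exactly the long-memory behaviour the book associates with fractional processes.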
Ghodsi, Zara; Silva, Emmanuel Sirimal; Hassani, Hossein
The maternal segmentation coordinate gene bicoid plays a significant role during Drosophila embryogenesis. The gradient of Bicoid, the protein encoded by this gene, determines most aspects of head and thorax development. This paper explores the applicability of a variety of signal processing techniques for extracting the bicoid expression signal, and whether these methods can outperform the current model. We evaluate six different powerful and widely used models, representing both parametric and nonparametric signal processing techniques, to determine the most efficient method for signal extraction in bicoid. The results are evaluated using both real and simulated data. Our findings show that the singular spectrum analysis technique proposed in this paper outperforms the synthesis-diffusion-degradation model for filtering the noisy protein profile of bicoid, whilst the exponential smoothing technique was found to be the next best alternative, followed by the autoregressive integrated moving average.
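A basic singular spectrum analysis decomposition, of the kind proposed above for extracting the bicoid signal, can be sketched as follows. The data here are a synthetic exponential gradient plus noise (a crude stand-in for a bicoid-like profile), not real expression measurements:

```python
import numpy as np

def ssa_trend(x, L, n_components=1):
    """Basic SSA: embed, decompose by SVD, reconstruct leading components."""
    N = len(x)
    K = N - L + 1
    # Trajectory (Hankel) matrix: column i holds the window x[i : i+L].
    X = np.column_stack([x[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
    # Diagonal averaging (Hankelization) back to a 1-D series.
    rec = np.zeros(N)
    counts = np.zeros(N)
    for j in range(K):
        rec[j:j + L] += Xr[:, j]
        counts[j:j + L] += 1
    return rec / counts

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 200)
clean = np.exp(-t / 0.2)                     # exponential gradient
noisy = clean + rng.normal(scale=0.2, size=t.size)
smooth = ssa_trend(noisy, L=40, n_components=1)
```

An exponential decay has a rank-one trajectory matrix, so a single SSA component recovers it well; noisier or more structured profiles would need more components, chosen by inspecting the singular value spectrum.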
Nedergaard, Nicky; Jones, Richard
actually implement these capabilities. A conceptual model is presented showing how managing concept design processes can help firms systematically develop dynamic capabilities and bridge the gap between the market-oriented and resource-focused strategic perspectives. By placing this model in a design-driven innovation perspective, three theoretical propositions are derived, explicating both the paper's implementation approach to dynamic capabilities and new ways of understanding these capabilities. Concluding remarks discuss the paper's contribution to the strategic marketing literature...
V. T. Dmitrieva
The paper deals with the phenomenon of «Digital Earth» as a concept that can be used in teaching and, on the other hand, as a fundamentally new phenomenon that requires special education to work with. The article analyzes the experience of using «Digital Earth» in education. It considers the need for specialized training in working with raster images, collaborative creation of data, the use of diverse data sets, and analysis of the dynamics of processes in time to support sustainable development.
Pause, Bettina M
Brain development in mammals has been proposed to be promoted by successful adaptations to the social complexity as well as to the social and non-social chemical environment. Therefore, the communication via chemosensory signals might have been and might still be a phylogenetically ancient communication channel transmitting evolutionary significant information. In humans, the neuronal underpinnings of the processing of social chemosignals have been investigated in relation to kin recognition, mate choice, the reproductive state and emotional contagion. These studies reveal that human chemosignals are probably not processed within olfactory brain areas but through neuronal relays responsible for the processing of social information. It is concluded that the processing of human social chemosignals resembles the processing of social signals originating from other modalities, except that human social chemosignals are usually communicated without the allocation of attentional resources, that is below the threshold of consciousness. Deviances in the processing of human social chemosignals might be related to the development and maintenance of mental disorders.
Jack B. Dennis
Full Text Available Complex signal-processing problems are naturally described by compositions of program modules that process streams of data. In this article we discuss how such compositions may be analyzed and mapped onto multiprocessor computers to effectively exploit the massive parallelism of these applications. The methods are illustrated with an example of signal processing for an optical surveillance problem. Program transformation and analysis are used to construct a program description tree that represents the given computation as an acyclic interconnection of stream-processing modules. Each module may be mapped to a set of threads run on a group of processing elements of a target multiprocessor. Performance is considered for two forms of multiprocessor architecture, one based on conventional DSP technology and the other on a multithreaded-processing element design.
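The composition of stream-processing modules described above can be sketched with ordinary Python generators forming an acyclic pipeline; the module names and parameters below are illustrative, not taken from the article.

```python
# Sketch: composing stream-processing modules as an acyclic pipeline
# of generators. Each module consumes one stream and yields another.

def source(samples):
    """Emit a stream of samples."""
    yield from samples

def moving_average(stream, n=3):
    """Stream module: length-n moving average (a simple FIR smoother)."""
    window = []
    for x in stream:
        window.append(x)
        if len(window) > n:
            window.pop(0)
        yield sum(window) / len(window)

def decimate(stream, m=2):
    """Stream module: keep every m-th sample."""
    for i, x in enumerate(stream):
        if i % m == 0:
            yield x

# Acyclic interconnection: source -> moving_average -> decimate
result = list(decimate(moving_average(source(range(10)), n=3), m=2))
print(result)
```

In a real mapping onto a multiprocessor, each such module would become a set of threads on a group of processing elements; here the generators simply make the interconnection explicit.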
Advanced Statistical Signal Processing Techniques for Landmine Detection Using GPR. The views, opinions and/or findings contained in this report are those of the author(s) and should not... 310 Jesse Hall, Columbia, MO 65211-1230. Related work cited: D. Ho, P. Gader, J. Wilson, H. Frigui, "Aggregation Operator for Humanitarian Demining Using Hand-Held GPR" (01 2008); Subspace Processing
Jepsen, Morten Løve; Ewert, Stephan D.; Dau, Torsten
A model of computational auditory signal-processing and perception that accounts for various aspects of simultaneous and nonsimultaneous masking in human listeners is presented. The model is based on the modulation filterbank model described by Dau et al. [J. Acoust. Soc. Am. 102, 2892 (1997)] but includes major changes at the peripheral and more central stages of processing. The model contains outer- and middle-ear transformations, a nonlinear basilar-membrane processing stage, a hair-cell t...
Gopi, E S
This book examines signal processing techniques used in wireless communication, illustrated using the Matlab program. The author discusses these techniques as they relate to Doppler spread; delay spread; Rayleigh and Rician channel modeling; rake receivers; diversity techniques; MIMO- and OFDM-based transmission techniques; and array signal processing. Related topics such as detection theory, link budget, multiple access techniques, and spread spectrum are also covered. The book illustrates signal processing techniques involved in wireless communication using Matlab; discusses multiple access techniques such as frequency division multiple access, time division multiple access, and code division multiple access; and covers bandpass modulation techniques such as binary phase shift keying, differential phase shift keying, quadrature phase shift keying, binary frequency shift keying, minimum shift keying, and Gaussian minimum shift keying.
Dorband, John E.; Aburdene, Maurice F.
Recently, networked and cluster computation have become very popular for both signal processing and system simulation. A new language is ideally suited for parallel signal processing applications and system simulation since it allows the programmer to explicitly express the computations that can be performed concurrently. In addition, the new C-based parallel language (ace C) for architecture-adaptive programming allows programmers to implement algorithms and system-simulation applications on parallel architectures, with the assurance that future parallel architectures will be able to run their applications with a minimum of modification. In this paper, we focus on some fundamental features of ace C and present a signal processing application (FFT).
Woo, Wai; Sulaiman, Hamzah; Othman, Mohd; Saat, Mohd
This book presents important research findings and recent innovations in the field of machine learning and signal processing. A wide range of topics relating to machine learning and signal processing techniques and their applications are addressed in order to provide both researchers and practitioners with a valuable resource documenting the latest advances and trends. The book comprises a careful selection of the papers submitted to the 2015 International Conference on Machine Learning and Signal Processing (MALSIP 2015), which was held on 15–17 December 2015 in Ho Chi Minh City, Vietnam with the aim of offering researchers, academicians, and practitioners an ideal opportunity to disseminate their findings and achievements. All of the included contributions were chosen by expert peer reviewers from across the world on the basis of their interest to the community. In addition to presenting the latest in design, development, and research, the book provides access to numerous new algorithms for machine learni...
Andrés Julio Demski
Full Text Available The electrocardiogram kit ('ecg-kit') for Matlab is an application-programming interface (API) developed to provide users a common interface to access and process cardiovascular signals. In the current version, the toolbox supports several ECG recording formats, most of them used by the most popular databases, which allows access to more than 7 TB of information stored in public databases such as those included in Physionet or the THEW project. The toolbox includes several algorithms frequently used in cardiovascular signal processing, such as heartbeat detectors and classifiers, pulse detectors for pulsatile signals and an ECG delineator. In addition, it provides a tool for manually reviewing and correcting the results produced by the automatic algorithms. The results obtained can be stored in a Matlab (.MAT) file for backup or subsequent processing, or used to create a PDF report.
Zejnalova, O.; Zejnalov, Sh.; Hambsch, F.J.; Oberstedt, S.
Digital signal processing algorithms for nuclear particle spectroscopy are described along with a digital pile-up elimination method applicable to equidistantly sampled detector signals pre-processed by a charge-sensitive preamplifier. The signal processing algorithms are provided as recursive one- or multi-step procedures which can be easily programmed using modern computer programming languages. The influence of the number of bits of the sampling analogue-to-digital converter on the final signal-to-noise ratio of the spectrometer is considered. Algorithms for a digital shaping-filter amplifier, for a digital pile-up elimination scheme and for ballistic deficit correction were investigated using a high-purity germanium detector. The pile-up elimination method was originally developed for fission fragment spectroscopy using a Frisch-grid back-to-back double ionization chamber and was mainly intended for pile-up elimination in the case of high alpha-radioactivity of the fissile target. The developed pile-up elimination method affects only the electronic noise generated by the preamplifier. Therefore the influence of the pile-up elimination scheme on the final resolution of the spectrometer is investigated in terms of the distance between pile-up pulses. The efficiency of the developed algorithms is compared with other signal processing schemes published in the literature.
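A standard example of such a recursive shaping procedure is the trapezoidal shaper; the sketch below is the idealized noise-free, step-input case (parameters are illustrative, and the decay/pole-zero correction a real exponential pulse needs is omitted).

```python
# Sketch of a recursive trapezoidal shaping filter of the kind used
# in digital nuclear spectroscopy (illustrative parameters only).

def trapezoidal_shaper(x, k, l):
    """Shape samples x with rise time k and delay l (flat top l - k).

    Recursive one-step form:
        d(n) = x(n) - x(n-k) - x(n-l) + x(n-k-l)
        p(n) = p(n-1) + d(n)
    For a step of height A the flat top settles at A * k.
    """
    def get(seq, i):
        return seq[i] if i >= 0 else 0.0
    out, p = [], 0.0
    for n in range(len(x)):
        d = get(x, n) - get(x, n - k) - get(x, n - l) + get(x, n - k - l)
        p += d
        out.append(p)
    return out

# A unit step (an idealized preamplifier output) shaped with k=2, l=5
# rises for k samples, stays flat, then returns to zero:
step = [1.0] * 12
print(trapezoidal_shaper(step, k=2, l=5))
```

Because the update uses only a few additions per sample, such filters suit real-time implementation on the sampled preamplifier output.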
Inspired by ongoing evolutions in the field of wireless body area networks (WBANs), this tutorial paper presents a conceptual and exploratory study of wireless electroencephalography (EEG) sensor networks (WESNs), with an emphasis on distributed signal processing aspects. A WESN is conceived as a modular neuromonitoring platform for high-density EEG recordings, in which each node is equipped with an electrode array, a signal processing unit, and facilities for wireless communication. We first address the advantages of such a modular approach, and we explain how distributed signal processing algorithms make WESNs more power-efficient, in particular by avoiding data centralization. We provide an overview of distributed signal processing algorithms that are potentially applicable in WESNs, and for illustration purposes, we also provide a more detailed case study of a distributed eye blink artifact removal algorithm. Finally, we study the power efficiency of these distributed algorithms in comparison to their centralized counterparts in which all the raw sensor signals are centralized in a near-end or far-end fusion center.
Ahmadov, Q.S.; Institute of Radiation Problems, ANAS, Baku
Full text: Data acquisition systems for nuclear spectroscopy have traditionally been based on analog shaping amplifiers followed by analog-to-digital converters. Recently, however, new systems based on digital signal processing allow the analog shaping and timing circuitry to be replaced by numerical algorithms that derive properties of the pulse such as its amplitude. DSP is a fully numerical analysis of the detector pulse signals, and this technique demonstrates significant advantages over analog systems in some circumstances. From a mathematical point of view, one can consider the signal evolution from the detector to the ADC as a sequence of transformations that can be described by precisely defined mathematical expressions. Digital signal processing with ADCs makes it possible to utilize further information on the signal pulses from radiation detectors. In the experiment, each step of the signal generation in the 3He-filled proportional counter was described using digital signal processing (DSP) techniques. The electronic system consisted of a detector, a preamplifier and a digital oscilloscope. The pulses from the detector were digitized using an OTSZS-02 (250USB)-4 digital storage oscilloscope from ZAO RUDNEV-SHILYAYEV. This oscilloscope allowed signal digitization with an accuracy of 8 bits (256 levels) and a sampling frequency of up to 5×10^8 samples/s. Cf-252 was used as the neutron source. To obtain the detector output current pulse I(t) created by the motion of ion/electron pairs, an algorithm was written which can easily be programmed using modern computer programming languages
Matthews, N. D.; Kaupp, V. H.; Waite, W. P.; Macdonald, H. C.
Seventeen interpreters ranked sets of computer-generated radar imagery to assess the value of post-correlation processing on the interpretability of SAR (synthetic aperture radar) imagery. The post-correlation processing evaluated amounts to a nonlinear mapping of the signal exiting a digital correlator and allows full use of signal bandwidth for improving the spatial resolution or for noise reduction. The results indicate that it is reasonable to hypothesize an optimal SAR presentation format for specific applications even though this study was too limited to be specific.
Oxenløwe, Leif Katsuo
We have recently seen that nanowires can provide unparalleled optical signal processing (OSP) bandwidth and provide ultra-fast operation as well. Utilising the ultra-broad OSP bandwidth for e.g. optical time lenses allows for energy-efficient optical switching. © 2015 OSA.
""a useful addition to the fields of both magnetic resonance (MR) as well as signal processing. … immensely useful as a practical resource handbook to dip into from time to time and to find specific advice on issues faced during the course of work in MR techniques for cancer research. … the best feature of this book is how it positions the very practical area of digital signal processing in the contextual framework of a much more esoteric and fundamental field-that of quantum mechanics. The direct link between quantum-mechanical spectral analysis and rational response functions and the gene
Shankar, R.; Paradiso, T.J.; Lane, S.S.; Quinn, J.R.
Ultrasonic digital data were collected from underclad cracks in sample pressure vessel specimen blocks. These blocks were weld-clad under different processes to simulate actual conditions in US Pressurized Water Reactors. Each crack was represented by a flaw-echo dynamic curve, which is a plot of the transducer motion on the surface as a function of the ultrasonic response into the material. Crack depth sizing was performed by identifying in the dynamic curve the crack-tip diffraction signals from the upper and lower tips. This paper describes the experimental procedure, the digital signal processing methods used, and the algorithms developed for crack depth sizing.
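As a back-of-envelope illustration of tip-diffraction sizing, the depth extent between the upper and lower crack tips follows from the echo arrival-time difference; the numbers below are made up, and the geometry is simplified to normal-incidence pulse-echo rather than the angled-beam setup a real inspection would use.

```python
# Illustrative only: crack height from the time difference between
# upper- and lower-tip diffraction echoes at normal incidence.
c_steel = 5900.0   # longitudinal wave speed in steel, m/s (typical value)
dt = 3.4e-6        # arrival-time difference between tip echoes, s (made up)

# Two-way travel, so divide by 2:
crack_height = c_steel * dt / 2
print(round(crack_height * 1000, 2), "mm")   # 10.03 mm
```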
Chen, Xuefeng; Mukhopadhyay, Subhas
This book highlights the latest advances and trends in advanced signal processing (such as wavelet theory, time-frequency analysis, empirical mode decomposition, compressive sensing and sparse representation, and stochastic resonance) for structural health monitoring (SHM). Its primary focus is on the utilization of advanced signal processing techniques to help monitor the health status of critical structures and machines encountered in our daily lives: wind turbines, gas turbines, machine tools, etc. As such, it offers a key reference guide for researchers, graduate students, and industry professionals who work in the field of SHM.
Jackson, G.P.; Elliott, A.
Beam detectors such as striplines and wall current monitors rely on matched electrical networks to transmit and process beam information. Frequency bandwidth, noise immunity, reflections, and signal to noise ratio are considerations that require compromises limiting the quality of the measurement. Recent advances in fiber optics related technologies have made it possible to acquire and process beam signals in the optical domain. This paper describes recent developments in the application of these technologies to accelerator beam diagnostics. The design and construction of an optical notch filter used for a stochastic cooling system is used as an example. Conceptual ideas for future beam detectors are also presented
El-Shennawy, Khamies Mohammed Ali
This book is tailored to fulfil requirements in the area of signal processing in communication systems. The book contains numerous examples, solved problems and exercises to explain the methodology of Fourier series, Fourier analysis, the Fourier transform and its properties, the fast Fourier transform (FFT), the discrete Fourier transform (DFT) and its properties, the discrete cosine transform (DCT), the discrete wavelet transform (DWT) and the contourlet transform (CT). The book is characterized by three directions: the communication theory and signal processing point of view, the mathematical point of view, and utility compu
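As a quick numerical illustration of the DFT material such a text covers, the transform can be evaluated directly from its definition (an O(N²) sketch; a practical implementation would use an FFT, e.g. numpy.fft.fft).

```python
# Direct evaluation of X[k] = sum_n x[n] * exp(-2j*pi*k*n/N).
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

# A real cosine at bin 1 of an 8-point DFT puts all its energy in
# bins 1 and N-1 = 7, each with magnitude N/2 = 4:
x = [math.cos(2 * math.pi * n / 8) for n in range(8)]
mags = [round(abs(Xk), 6) for Xk in dft(x)]
print(mags)   # [0.0, 4.0, 0.0, 0.0, 0.0, 0.0, 0.0, 4.0]
```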
Wagner, Anita; Pals, Carina; de Blecourt, Charlotte M; Sarampalis, Anastasios; Başkent, Deniz
Speech perception is formed based on both the acoustic signal and listeners' knowledge of the world and semantic context. Access to semantic information can facilitate interpretation of degraded speech, such as speech in background noise or the speech signal transmitted via cochlear implants (CIs). This paper focuses on the latter, and investigates the time course of understanding words, and how sentential context reduces listeners' dependency on the acoustic signal for natural and degraded speech via an acoustic CI simulation. In an eye-tracking experiment we combined recordings of listeners' gaze fixations with pupillometry, to capture effects of semantic information on both the time course and effort of speech processing. Normal-hearing listeners were presented with sentences with or without a semantically constraining verb (e.g., crawl) preceding the target (baby), and their ocular responses were recorded to four pictures, including the target, a phonological (bay) competitor and a semantic (worm) and an unrelated distractor. The results show that in natural speech, listeners' gazes reflect their uptake of acoustic information, and integration of preceding semantic context. Degradation of the signal leads to a later disambiguation of phonologically similar words, and to a delay in integration of semantic information. Complementary to this, the pupil dilation data show that early semantic integration reduces the effort in disambiguating phonologically similar words. Processing degraded speech comes with increased effort due to the impoverished nature of the signal. Delayed integration of semantic information further constrains listeners' ability to compensate for inaudible signals.
Schobben, Daniel W E
Blind Signal Separation (BSS) deals with recovering (filtered versions of) source signals from an observed mixture thereof. The term `blind' relates to the fact that there are no reference signals for the source signals and also that the mixing system is unknown. This book presents a new method for blind signal separation, which is developed to work on microphone signals. Acoustic Echo Cancellation (AEC) is a well-known technique to suppress the echo that a microphone picks up from a loudspeaker in the same room. Such acoustic feedback occurs for example in hands-free telephony and can lead to a perceived loud tone. For an application such as a voice-controlled television, a stereo AEC is required to suppress the contribution of the stereo loudspeaker setup. A generalized AEC is presented that is suited for multi-channel operation. New algorithms for Blind Signal Separation and multi-channel Acoustic Echo Cancellation are presented. A background is given in array signal processing methods, adaptive filter the...
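A minimal sketch of the adaptive-filtering idea underlying acoustic echo cancellation: an LMS filter learns a (here, known and artificial) echo path, so the residual picked up at the microphone shrinks. All names and values are illustrative, and this single-channel toy omits the multi-channel and blind-separation aspects the book actually treats.

```python
# Toy single-channel echo canceller: LMS identification of a short
# echo path driven by a white far-end signal (no near-end talker).
import random

random.seed(0)
echo_path = [0.5, -0.3, 0.1]   # assumed room impulse response (made up)
L, mu = 3, 0.05                # filter length, LMS step size
w = [0.0] * L                  # adaptive weights

errors = []
x_hist = [0.0] * L
for t in range(2000):
    x = random.uniform(-1, 1)                  # far-end (loudspeaker) sample
    x_hist = [x] + x_hist[:-1]
    d = sum(h * xi for h, xi in zip(echo_path, x_hist))   # microphone echo
    y = sum(wi * xi for wi, xi in zip(w, x_hist))         # echo estimate
    e = d - y                                             # residual echo
    w = [wi + mu * e * xi for wi, xi in zip(w, x_hist)]   # LMS update
    errors.append(abs(e))

# Residual echo shrinks as the weights converge to the echo path:
print(sum(errors[:100]) / 100, ">", sum(errors[-100:]) / 100)
```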
Zhang, Weifeng; Yao, Jianping
Since the discovery of the Bragg's law in 1913, Bragg gratings have become important optical devices and have been extensively used in various systems. In particular, the successful inscription of a Bragg grating in a fiber core has significantly boosted its engineering applications. However, a conventional grating device is usually designed for a particular use, which limits general-purpose applications since its index modulation profile is fixed after fabrication. In this article, we propose to implement a fully reconfigurable grating, which is fast and electrically reconfigurable by field programming. The concept is verified by fabricating an integrated grating on a silicon-on-insulator platform, which is employed as a programmable signal processor to perform multiple signal processing functions including temporal differentiation, microwave time delay, and frequency identification. The availability of ultrafast and reconfigurable gratings opens new avenues for programmable optical signal processing at the speed of light.
Full Text Available RNA processing involves a variety of processes affecting gene expression, including the removal of introns through RNA splicing, as well as 3' end processing (cleavage and polyadenylation). Alternative RNA processing is fundamentally important for gene regulation, and aberrant processing is associated with the initiation and progression of cancer. Deregulated Wnt signaling, which is the initiating event in the development of most cases of human colorectal cancer (CRC), has been linked to modified RNA processing, which may contribute to Wnt-mediated colonic carcinogenesis. Crosstalk between Wnt signaling and alternative RNA splicing with relevance to CRC includes effects on the expression of Rac1b, an alternatively spliced gene associated with tumorigenesis, which exhibits alternative RNA splicing that is influenced by Wnt activity. In addition, Tcf4, a crucial component of Wnt signaling, also exhibits alternative splicing, which is likely involved in colonic tumorigenesis. Modulation of 3' end formation, including of the Wnt target gene COX-2, also can influence the neoplastic process, with implications for CRC. While many human genes are dependent on introns and splicing for normal levels of gene expression, naturally intronless genes exist with a unique metabolism that allows for intron-independent gene expression. Effects of Wnt activity on the RNA metabolism of the intronless Wnt-target gene c-jun are a likely contributor to cancer development. Further, butyrate, a breakdown product of dietary fiber and a histone deacetylase inhibitor, upregulates Wnt activity in CRC cells, and also modulates RNA processing; therefore, the interplay between Wnt activity, the modulation of this activity by butyrate, and differential RNA metabolism in colonic cells can significantly influence tumorigenesis. Determining the role played by altered RNA processing in Wnt-mediated neoplasia may lead to novel interventions aimed at restoring normal RNA metabolism for
Kilby, J; Prasad, K; Mawston, G
This paper covers the design aspects of a new multi-channel electrode for the acquisition of surface electromyography signals from a selected muscle. The new multi-channel electrode has 11 pins, and the monopolar signals produced are configured in software as either a linear array or a Laplacian configuration. The design specification of the pre-amplifier was a voltage gain of 500 with bandpass filtering of 5 Hz-1 kHz. The final design of the pre-amplifier circuit, using an INA 118 instrumentation amplifier, was built and tested, giving a voltage gain of 484 with bandpass filtering of 6.8 Hz-1.02 kHz. The software configuration that gives clearer and more defined signals, in terms of motor unit action potentials for future signal processing, is the Laplacian rather than the linear array.
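The two software montages mentioned (linear array vs. Laplacian) can be sketched as simple spatial filters over the monopolar channels; the pin values below are made up for illustration, and the actual 11-pin geometry is not reproduced here.

```python
# Two spatial montages over monopolar sEMG channels (toy values).

def linear_array(channels):
    """Bipolar derivations along one line: v[i+1] - v[i]."""
    return [b - a for a, b in zip(channels, channels[1:])]

def laplacian(center, surround):
    """Centre pin minus the mean of its surrounding pins."""
    return center - sum(surround) / len(surround)

line = [1.0, 1.5, 2.5, 2.0]                    # made-up monopolar values
print(linear_array(line))                      # [0.5, 1.0, -0.5]
print(laplacian(2.5, [1.0, 1.5, 2.0, 2.0]))    # 2.5 - 1.625 = 0.875
```

The Laplacian output emphasizes activity local to the centre pin, which is consistent with the paper's observation that it yields better-defined motor unit action potentials.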
Luis E. Mendoza
Full Text Available This paper presents the results obtained from the recording, processing and classification of words in the Spanish language by means of the analysis of subvocal speech signals. The processed database has six words (forward, backward, right, left, start and stop). In this work, the signals were sensed with surface electrodes placed on the surface of the throat and acquired with a sampling frequency of 50 kHz. The signal conditioning consisted of locating the area of interest using energy analysis, and filtering using the Discrete Wavelet Transform. Finally, feature extraction was performed in the time-frequency domain using Wavelet Packets and statistical windowing techniques. Classification was carried out with a backpropagation neural network trained with 70% of the database obtained. The correct classification rate was 75% ± 2%.
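A one-level Haar DWT is the simplest instance of the wavelet filtering described above; the paper does not state which wavelet family was used, so Haar is chosen here purely because it fits in a few lines.

```python
# One analysis level of a Haar discrete wavelet transform: split the
# signal into approximation (low-pass) and detail (high-pass) halves.
import math

def haar_dwt_level(x):
    s = math.sqrt(2.0)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    return approx, detail

x = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]   # made-up samples
a, d = haar_dwt_level(x)
print([round(v, 4) for v in a])
print([round(v, 4) for v in d])
# The transform is orthonormal, so signal energy is preserved
# across the approximation and detail coefficients.
```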
Dharmapurikar, A.; Bhattacharya, S.; Mukhopadhyay, P.K.; Sawhney, A.; Patil, R.K.
A networked radiation and parameter monitoring system with a three-tier architecture is being developed. The Signal Processing Device (SPD) is a second-level sub-system node in the network. The SPD is an embedded system which has multiple input channels and output communication interfaces. It acquires and processes data from first-level parametric sensor devices, and sends it to third-level devices in response to request commands received from the host. It also performs scheduled diagnostic operations and passes on the information to the host. It supports inputs in the form of differential digital signals and analog voltage signals. The SPD communicates with higher-level devices over RS232/RS422/USB channels. The system has been designed with the main requirements of minimal power consumption and the harsh environment of radioactive plants. This paper discusses the hardware and software design details of the SPD. (author)
Comorett, G.; Chiarucc, S.; Belli, C.
Modern radio telescopes require the processing of wideband signals, with sample rates from tens of MHz to tens of GHz, and are composed of hundreds up to a million individual antennas. Digital signal processing of these signals includes digital receivers (the digital equivalent of the heterodyne receiver), beamformers, channelizers, and spectrometers. FPGAs offer the advantage of relatively low power consumption, compared to GPUs or dedicated computers, a wide signal data path, and high interconnectivity. Efficient algorithms have been developed for these applications. Here we review some of the signal processing cores developed for the SKA telescope. The LFAA beamformer/channelizer architecture is based on an oversampling channelizer, where the channelizer output sampling rate and channel spacing can be set independently. This is useful where an overlap between adjacent channels is required to provide uniform spectral coverage. The architecture allows for an efficient and distributed channelization scheme, with a final resolution corresponding to a million spectral channels, minimum leakage and high out-of-band rejection. An optimized filter design procedure is used to provide an equiripple response with a very large number of spectral channels. A wideband digital receiver has been designed to select the processed bandwidth of the SKA Mid receiver. The receiver extracts a 2.5 MHz bandwidth from a 14 GHz input bandwidth. The design allows for non-integer ratios between the input and output sampling rates, with a resource usage comparable to that of a conventional decimating digital receiver. Finally, some considerations on quantization of radioastronomic signals are presented. Due to the stochastic nature of the signal, quantization using few data bits is possible. Good accuracy and dynamic range are possible even with 2-3 bits, but the nonlinearity in the correlation process must be corrected in post-processing. With at least 6
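The channelizer idea can be sketched as a toy critically sampled DFT filterbank; note that the SKA design described above uses oversampling and long equiripple prototype filters, both of which this rectangular-window sketch omits.

```python
# Toy DFT channelizer: frames of M samples followed by an M-point
# DFT give M frequency channels, each at 1/M of the input rate.
import cmath

def dft_channelizer(x, M):
    frames = [x[i:i + M] for i in range(0, len(x) - M + 1, M)]
    channels = [[] for _ in range(M)]
    for frame in frames:
        for k in range(M):
            Xk = sum(frame[n] * cmath.exp(-2j * cmath.pi * k * n / M)
                     for n in range(M))
            channels[k].append(Xk)
    return channels

# A complex tone at bin 1 of M=4 lands entirely in output channel 1:
M = 4
x = [cmath.exp(2j * cmath.pi * 1 * n / M) for n in range(16)]
ch = dft_channelizer(x, M)
sums = [round(sum(abs(v) for v in c), 3) for c in ch]
print(sums)   # [0.0, 16.0, 0.0, 0.0]
```

An oversampled channelizer would advance the input by fewer than M samples per frame and apply a long windowed prototype filter across several frames, trading computation for flat, overlapping channel responses.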
The search for better materials and processes has been a part of the evolution of mankind, and it continues to be so as it is realized that the earth's resources are not everlasting and that the effect of rapid growth on the environment may adversely affect future development. Sustainable development is the only choice today for long-term survival. Better-quality, highly functional materials made by superior technologies are being demanded by society. Radiation processing technology has significantly contributed to meeting these expectations, providing superior products and processes while preserving the environment. Processes are being developed where resources are fully utilized with maximum advantage and little disturbance to the environment. More than 1500 electron beam accelerators and about 500 gamma irradiators are presently in use, and many more are being deployed for radiation processing of medical supplies, pharmaceuticals and herbal materials, to treat effluents, and to preserve food, agricultural products and several industrial products. DAE has an ambitious plan to deploy radiation technology for societal benefits in India. The presentations will discuss some interesting applications of radiation processing technology, including (1) radiation processing of cashew apple fruit for bio-ethanol production, (2) high-energy battery (HEB) separators, (3) plant growth promoters and (4) tunable biodegradability. The discussion will reveal how a waste product like cashew apple can be converted into useful materials, and how advanced materials like HEB separators and tunable biodegradable films can be made using radiation technology. The use of radiation-depolymerized polysaccharides in some experiments has shown unexpected increases in agricultural output, suggesting new concepts for increasing productivity. (author)
Gray, Andrew; Kang, Edward; Lay, Norman; Vilnrotter, Victor; Srinivasan, Meera; Lee, Clement
A parallel-processing algorithm and a hardware architecture to implement the algorithm have been devised for timeslot synchronization in the reception of pulse-position-modulated (PPM) optical or radio signals. As in the cases of some prior algorithms and architectures for parallel, discrete-time, digital processing of signals other than PPM, an incoming broadband signal is divided into multiple parallel narrower-band signals by means of sub-sampling and filtering. The number of parallel streams is chosen so that the frequency content of the narrower-band signals is low enough to enable processing by relatively low-speed complementary metal oxide semiconductor (CMOS) electronic circuitry. The algorithm and architecture are intended to satisfy requirements for time-varying time-slot synchronization and post-detection filtering, with correction of timing errors independent of estimation of timing errors. They are also intended to afford flexibility for dynamic reconfiguration and upgrading. The architecture is implemented in a reconfigurable CMOS processor in the form of a field-programmable gate array. The algorithm and its hardware implementation incorporate three separate time-varying filter banks for three distinct functions: correction of sub-sample timing errors, post-detection filtering, and post-detection estimation of timing errors. The design of the filter bank for correction of timing errors, the method of estimating timing errors, and the design of a feedback-loop filter are governed by a host of parameters, the most critical one, with regard to processing very broadband signals with CMOS hardware, being the number of parallel streams (equivalently, the rate-reduction parameter).
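The sub-sampling step described above amounts to a polyphase split of the sample stream into M parallel streams, each at 1/M the rate; a minimal sketch (the filtering and the three timing-error filter banks are omitted):

```python
# Polyphase split: stream m gets samples x[m], x[m+M], x[m+2M], ...

def polyphase_split(x, M):
    return [x[m::M] for m in range(M)]

x = list(range(12))            # stand-in for the broadband sample stream
streams = polyphase_split(x, 4)
print(streams)                 # [[0, 4, 8], [1, 5, 9], [2, 6, 10], [3, 7, 11]]

# Interleaving the streams reconstructs the original order:
rebuilt = [streams[n % 4][n // 4] for n in range(len(x))]
print(rebuilt == x)            # True
```

Each of the M streams carries only 1/M of the input rate, which is what allows the downstream filter banks to run in slower CMOS logic.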
Suwanich, Thanapat; Chutima, Parames
This research focuses on a problem in the reactive dye synthesis process of a global manufacturer in Thailand which produces various chemicals for reactive dye products supplying global industries such as chemicals, textiles and garments. The product named "Reactive Blue Base" was selected for this study because it has the highest demand and its current chemical yield shows high variation, i.e. a yield range of 90.4% - 99.1% (S.D. = 2.405 and Cpk = -0.08), with an average yield of 94.5% (lower than the 95% standard set by the company). The Six Sigma concept is applied with the aim of increasing yield and reducing variation in this process. This approach is suitable since it provides a systematic guideline with five improvement phases (DMAIC) to effectively tackle the problem and find appropriate parameter settings for the process. Under the new parameter settings, the process yield variation is reduced to a range of 96.5% - 98.5% (S.D. = 0.525 and Cpk = 1.83) and the average yield is increased to 97.5% (higher than the 95% standard set by the company).
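The capability index quoted above is conventionally Cpk = min(USL - mean, mean - LSL) / (3 * sigma). The study's exact specification limits are not stated, so the limits used in this sketch are illustrative only and it does not reproduce the paper's figures.

```python
# Process capability index from mean, standard deviation and the
# applicable specification limit(s).

def cpk(mean, sigma, lsl=None, usl=None):
    gaps = []
    if usl is not None:
        gaps.append(usl - mean)
    if lsl is not None:
        gaps.append(mean - lsl)
    return min(gaps) / (3 * sigma)

# One-sided example using a 95% lower yield limit (assumed):
value = cpk(mean=97.5, sigma=0.525, lsl=95.0)
print(round(value, 2))   # 1.59
```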
Yli-Harja, Olli; Ylipää, Antti; Nykter, Matti; Zhang, Wei
In this editorial we introduce the research paradigms of signal processing in the era of systems biology. Signal processing is a field of science traditionally focused on modeling electronic and communications systems, but recently it has turned to biological applications with astounding results. The essence of signal processing is to describe the natural world by mathematical models and then, based on these models, develop efficient computational tools for solving engineering problems. Here, we underline, with examples, the endless possibilities which arise when the battle-hardened tools of engineering are applied to solve the problems that have tormented cancer researchers. Based on this approach, a new field has emerged, called cancer systems biology. Despite its short history, cancer systems biology has already produced several success stories tackling previously impracticable problems. Perhaps most importantly, it has been accepted as an integral part of the major endeavors of cancer research, such as analyzing the genomic and epigenomic data produced by The Cancer Genome Atlas (TCGA) project. Finally, we show that signal processing and cancer research, two fields that are seemingly distant from each other, have merged into a field that is indeed more than the sum of its parts. PMID:21439242
An excellent introductory text, this book covers the basic theoretical, algorithmic and real-time aspects of digital signal processing (DSP). Detailed information is provided on off-line, real-time and DSP programming and the reader is effortlessly guided through advanced topics such as DSP hardware design, FIR and IIR filter design and difference equation manipulation.
Abstract. This paper is concerned with a review of some recent work on derivation and synthesis of lattice structures for digital signal processing (DSP). In particular, synthesis of canonical structures for both finite impulse response (FIR) and infinite impulse response (IIR) transfer functions is presented in detail. This has ...
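A canonical FIR lattice and its step-up conversion to direct-form taps can be sketched as follows; the reflection coefficients are chosen arbitrarily for illustration.

```python
# FIR lattice: per stage,
#   f_i(n) = f_{i-1}(n) + k_i * g_{i-1}(n-1)
#   g_i(n) = k_i * f_{i-1}(n) + g_{i-1}(n-1)

def fir_lattice(x, ks):
    """Run x through an FIR lattice with reflection coefficients ks."""
    g_delay = [0.0] * len(ks)   # stores g_{i-1}(n-1) for each stage
    out = []
    for xn in x:
        f = g = xn
        for i, k in enumerate(ks):
            f_new = f + k * g_delay[i]
            g_new = k * f + g_delay[i]
            g_delay[i] = g      # becomes g_{i-1}(n-1) next sample
            f, g = f_new, g_new
        out.append(f)
    return out

def lattice_to_direct(ks):
    """Step-up recursion: reflection coefficients -> FIR taps."""
    a = [1.0]
    for k in ks:
        a = [u + k * v for u, v in zip(a + [0.0], [0.0] + a[::-1])]
    return a

ks = [0.5, -0.25]
# The lattice impulse response equals the direct-form taps:
print(fir_lattice([1.0, 0.0, 0.0], ks))   # [1.0, 0.375, -0.25]
print(lattice_to_direct(ks))              # [1.0, 0.375, -0.25]
```

The lattice form is canonical in the sense discussed above: the same number of reflection coefficients as filter order, with well-known robustness advantages under coefficient quantization.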
Poulsen, Henrik Nørskov; Jepsen, Kim Stokholm; Clausen, Anders
As all-optical signal processing is maturing, optical time division multiplexing (OTDM) has also gained interest for simple networking in high capacity backbone networks. As an example of a network scenario we show an OTDM bus interconnecting another OTDM bus, a single high capacity user represen...
Popovskiy, V. V.
A universal method of signal processing based on reducing signal dimensionality is considered. This work is methodical and generalizing in nature, and aims to draw the attention of specialists and scientists to the unity of the definitions and solutions of the problems connected with N-dimensional and confluent representations.
The application of multiplexing and signal processing techniques to reactor operation and to the utilisation of data from the in-core instrumentation system is described. After a brief review of in-core instrumentation, the aims and advantages of multiplexing are presented. One of the aims of this realization is to show the compatibility of the technologies used with a PWR environment.
Dos Santos, S.; Vejvodová, Šárka; Převorovský, Zdeněk
Vol. 59, No. 2 (2010), pp. 108-117, ISSN 1736-6046. Institutional research plan: CEZ:AV0Z20760514. Keywords: nonlinear signal processing; TR-NEWS; symmetry analysis; DORT. Subject RIV: BI - Acoustics. Impact factor: 0.464, year: 2010. www.eap.ee/proceedings
Manchón, Carles Navarro
... orthogonal frequency-division multiplexing (OFDM) systems, with a particular emphasis on the 3rd Generation Partnership Project (3GPP) Long Term Evolution (LTE) standard as a study case. Signal processing in wireless receivers can be designed following different strategies. On the one hand, one can use intuitive argumentation to define...
Witte, H; Ungureanu, M; Ligges, C; Hemmelmann, D; Wüstenberg, T; Reichenbach, J; Astolfi, L; Babiloni, F; Leistritz, L
The main objective is to show current topics and future trends in the field of medical signal processing which are derived from current research concepts. Signal processing as an integrative concept within the scope of medical informatics is demonstrated. For all examples time-variant multivariate autoregressive models were used. Based on this modeling, the concept of Granger causality in terms of the time-variant Granger causality index and the time-variant partial directed coherence was realized to investigate directed information transfer between different brain regions. Signal informatics encompasses several diverse domains including: processing steps, methodologies, levels and subject fields, and applications. Five trends can be recognized and in order to illustrate these trends, three analysis strategies derived from current neuroscientific studies are presented. These examples comprise high-dimensional fMRI and EEG data. In the first example, the quantification of time-variant-directed information transfer between activated brain regions on the basis of fast-fMRI data is introduced and discussed. The second example deals with the investigation of differences in word processing between dyslexic and normal reading children. Different dynamic neural networks of the directed information transfer are identified on the basis of event-related potentials. The third example shows time-variant cortical connectivity networks derived from a source model. These examples strongly emphasize the integrative nature of signal informatics, encompassing processing steps, methodologies, levels and subject fields, and applications.
the capability of the AFIT signal processing laboratory and to make it more user-oriented. Three areas--digitizing operations, signal processing ... Theory and Application of Digital Signal Processing. Englewood Cliffs: Prentice-Hall, Inc., 1975.
Harak, A.E.; Little, W.E.; Faulders, C.R.
A new concept for above-ground retorting of oil shale was disclosed by A.E. Harak in US Patent No. 4,340,463, dated July 20, 1982, and assigned to the US Department of Energy. This patent titled System for Utilizing Oil Shale Fines, describes a process wherein oil shale fines of one-half inch diameter and less are pyrolyzed in an entrained-flow reactor using hot gas from a cyclone combustor. Spent shale and supplemental fuel are burned at slagging conditions in this combustor. Because of fines utilization, the designation Use It All Retorting Process (UIARP) has been adopted. A preliminary process engineering design of the UIARP, analytical tests on six samples of raw oil shale, and a preliminary technical and economic evaluation of the process were performed. The results of these investigations are summarized in this report. The patent description is included. It was concluded that such changes as deleting air preheating in the slag quench and replacing the condenser with a quench-oil scrubber are recognized as being essential. The addition of an entrained flow raw shale preheater ahead of the cyclone retort is probably required, but final acceptance is felt to be contingent on some verification that adequate reaction time cannot be obtained with only the cyclone, or possibly some other twin-cyclone configuration. Sufficient raw shale preheating could probably be done more simply in another manner, perhaps in a screw conveyor shale transporting system. Results of the technical and economic evaluations of Jacobs Engineering indicate that further investigation of the UIARP is definitely worthwhile. The projected capital and operating costs are competitive with costs of other processes as long as electric power generation and sales are part of the processing facility.
Growing worldwide interest in boosting innovation in business activities, especially in technology, aims to maintain or increase national economic competitiveness, partly as an effect of awareness of the impact of economic activity on resource consumption and the environment, which requires the design of new patterns of production and consumption. In this paper we review the most important contributions in the literature on the implications of technological innovation for the economy, at the micro- and macroeconomic levels, considering the organization's ability to generate new ideas in support of increasing production, employment and environmental protection, starting from the concepts of innovation, the innovation process and the typology of innovation.
Zhao, Yuan; Hu, Bingliang; He, Zhen-An; Xie, Wenjia; Gao, Xiaohui
We demonstrate an optical quadrature phase-shift keying (QPSK) signal transmitter and an optical receiver for demodulating the optical QPSK signal with homodyne detection and digital signal processing (DSP). DSP in the homodyne detection scheme is employed without locking the phase of the local oscillator (LO). In this paper, we present a down-sampling method that extracts a one-dimensional array of samples to reduce unwanted samples in constellation diagram measurement. This scheme embodies the following major advantages over other conventional optical QPSK signal detection methods. First, the homodyne detection scheme does not impose strict requirements on the LO, in contrast to linear optical sampling, which requires a flat spectral density and phase over the spectral support of the source under test. Second, LabVIEW software is used directly to recover the QPSK signal constellation without employing a complex DSP circuit. Third, the scheme is applicable to multilevel modulation formats such as M-ary PSK and quadrature amplitude modulation (QAM), or to higher-speed signals, with minor changes.
Gogniat, Guy; Morawiec, Adam; Erdogan, Ahmet
Advances in signal and image processing together with increasing computing power are bringing mobile technology closer to applications in a variety of domains like automotive, health, telecommunication, multimedia, entertainment and many others. The development of these leading applications, involving a large diversity of algorithms (e.g. signal, image, video, 3D, communication, cryptography), is classically divided into three consecutive steps: a theoretical study of the algorithms, a study of the target architecture, and finally the implementation. Such a linear design flow is reaching its limits ...
Elliott, Taffeta M; Christensen-Dalsgaard, Jakob; Kelley, Darcy B
Perception of the temporal structure of acoustic signals contributes critically to vocal signaling. In the aquatic clawed frog Xenopus laevis, calls differ primarily in the temporal parameter of click rate, which conveys sexual identity and reproductive state. We show here that an ensemble ... click rates ranged from 4 to 50 Hz, the rate at which the clicks begin to overlap. Frequency selectivity and temporal processing were characterized using response-intensity curves, temporal-discharge patterns, and autocorrelations of reduplicated responses to click trains. Characteristic frequencies ...
Jiang, Zhongjin; Qiu, Xiaojun; Lin, Jun; Chen, Zubin
In conventional vibroseis signal processing, algorithms including cross correlation and deconvolution are applied to convert the raw trace data into a seismic section. However, their performance deteriorates when the trace data are corrupted by the ambient noise, so the mathematical tool for time-frequency analysis and wavelet transform is applied in this paper to overcome the difficulty. A time-frequency cross correlation (TFCC) algorithm based on wavelet transform is proposed to extract the reflection from the trace data by detecting the reflected sweeps and estimating their time delay. The source signal and the trace data are transformed into time-frequency domain with respect to a same wavelet basis function; then time-frequency cross correlation is performed between the source signal and the trace data. The reflected sweeps are converted into time-frequency correlation wavelets in the result; meanwhile, the trace data are converted into seismic section. In wavelet decomposition, the high-frequency noise can be suppressed automatically. In the time-frequency representation of the trace data, the ambient noise and the reflected sweeps can be separated from each other. So in the TFCC algorithm, the interference of the ambient noise can be decreased considerably, and the weak reflections can be extracted clearly. Real vibroseis trace data were processed with the TFCC algorithm and the conventional cross correlation. The results showed the superiority of the proposed new algorithm in vibroseis signal processing.
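The core of the TFCC idea described above, decomposing both the sweep and the trace with a wavelet, correlating scale by scale, and stacking the per-scale results, can be sketched in a few lines. This is an illustrative reconstruction under assumptions, not the authors' algorithm: the Morlet wavelet bank, the scale set, and the toy chirp data are all invented for the demo.

```python
# Illustrative time-frequency cross-correlation (TFCC) sketch.
import numpy as np

def morlet(n, scale, w0=6.0):
    """Complex Morlet wavelet sampled on n points at the given scale."""
    t = (np.arange(n) - n // 2) / scale
    return np.exp(1j * w0 * t) * np.exp(-t**2 / 2.0) / np.sqrt(scale)

def cwt(x, scales, klen=257):
    """Continuous wavelet transform via convolution with a small Morlet bank."""
    return np.array([np.convolve(x, morlet(klen, s), mode="same") for s in scales])

def tfcc(trace, sweep, scales=(8, 16, 32)):
    """Correlate trace and sweep scale by scale and stack the magnitudes,
    so broadband reflections reinforce while band-limited noise does not."""
    Wt, Ws = cwt(trace, scales), cwt(sweep, scales)
    n = len(trace)
    acc = np.zeros(n)
    for wt, ws in zip(Wt, Ws):
        c = np.correlate(wt, ws, mode="full")             # complex correlation
        acc += np.abs(c[len(ws) - 1 : len(ws) - 1 + n])   # keep lags 0..n-1
    return acc

# toy vibroseis example: a windowed up-sweep reflected with a 300-sample delay
rng = np.random.default_rng(0)
t = np.arange(1024)
sweep = np.sin(2 * np.pi * (0.01 + 5e-5 * t) * t) * np.exp(-((t - 200) / 150.0) ** 2)
trace = 0.5 * np.roll(sweep, 300) + 0.2 * rng.standard_normal(1024)
delay = int(np.argmax(tfcc(trace, sweep)))
print(delay)  # peaks near the true 300-sample delay
```

The stacking across scales is what gives the noise robustness: ambient noise concentrated in one band contributes to only one term of the sum.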
A comprehensive and invaluable guide to 5G technology, implementation and practice in one single volume. For all things 5G, this book is a must-read. Signal processing techniques have played the most important role in wireless communications since the second generation of cellular systems. It is anticipated that new techniques employed in 5G wireless networks will not only improve peak service rates significantly, but also enhance capacity, coverage, reliability, low latency, efficiency, flexibility, compatibility and convergence to meet the increasing demands imposed by applications such as big data, cloud service, machine-to-machine (M2M) and mission-critical communications. This book is a comprehensive and detailed guide to all signal processing techniques employed in 5G wireless networks. Uniquely organized into four categories, New Modulation and Coding, New Spatial Processing, New Spectrum Opportunities and New System-level Enabling Technologies, it covers everything from network architecture...
Boyraz, Pinar; Takeda, Kazuya; Abut, Hüseyin
Compiled from papers of the 4th Biennial Workshop on DSP (Digital Signal Processing) for In-Vehicle Systems and Safety, this edited collection features world-class experts from diverse fields focusing on integrating smart in-vehicle systems with human factors to enhance safety in automobiles. Digital Signal Processing for In-Vehicle Systems and Safety presents new approaches on how to reduce driver inattention and prevent road accidents. The material addresses DSP technologies in adaptive automobiles, in-vehicle dialogue systems, human machine interfaces, video and audio processing, and in-vehicle speech systems. The volume also features: Recent advances in Smart-Car technology – vehicles that take into account and conform to the driver Driver-vehicle interfaces that take into account the driving task and cognitive load of the driver Best practices for In-Vehicle Corpus Development and distribution Information on multi-sensor analysis and fusion techniques for robust driver monitoring and driver recognition ...
Nguyen, Christelle; Poiraudeau, Serge; Rannou, François
MRI-detected vertebral-endplate subchondral bone signal changes, first associated with degenerative disc disease in the late 1980s, together with recent studies suggest that in some patients non-specific chronic low back pain (NS cLBP) can be defined by specific clinical, radiological and biological features, supporting the concept of active discopathy. This concept allows for associating a particular NS cLBP phenotype with a specific anatomical lesion, namely Modic 1 signal changes seen on MRI. Local inflammation is thought to play a pivotal role in these changes. Other etiopathogenic processes may include local infection and mechanical or biochemical stress combined with predisposing genetic factors; treatment strategies remain debated. Modic 1 changes detected by MRI can be considered a first biomarker in NS cLBP. Such changes are of high clinical relevance because they are associated with a specific clinical phenotype and can be targeted by specific treatments. Published by the BMJ Publishing Group Limited.
Roy, Suman Deb
This book provides comprehensive coverage of the state of the art in understanding media popularity and trends in online social networks through social multimedia signals, with insights from the study of popularity and sharing patterns of online media, trend spread in social media, social network analysis for multimedia, and the visualization of media diffusion in online social networks. In particular, the book addresses the following important issues: understanding social network phenomena from a signal processing point of view; the existence and popularity of multimedia as shared and social media
Nisii, V.; Saccorotti, G.
Wavelet transforms allow precise time-frequency localization in the analysis of non-stationary signals. In wavelet analysis the trade-off between frequency bandwidth and time duration, also known as the Heisenberg inequality, is bypassed using a fully scalable modulated window which solves the signal-cutting problem of the windowed Fourier transform. We propose a new seismic array data processing procedure capable of displaying the localized spatial coherence of the signal in both the time and frequency domains, in turn deriving the propagation parameters of the most coherent signals crossing the array. The procedure consists of: a) Wavelet coherence analysis for each station pair of the array, aimed at retrieving the frequency and time localisation of coherent signals. To this purpose, we use the normalised wavelet cross-power spectrum, smoothed along the time and scale domains. We calculate different coherence spectra adopting smoothing windows of increasing lengths; a final, robust estimate of the time-frequency localisation of spatially coherent signals is eventually retrieved from the stack of the individual coherence distributions. This step allows for quick and reliable signal discrimination: wave groups propagating across the network manifest as high-coherence patches spanning the corresponding time-scale region. b) Once the signals have been localised in the time and frequency domains, their propagation parameters are estimated using a modified MUSIC (MUltiple SIgnal Characterization) algorithm. We select the MUSIC approach as it has demonstrated superior performance in the case of low-SNR signals, multiple plane waves simultaneously impinging at the array, and closely separated sources. The narrow-band Coherent Signal Subspace technique is applied to the complex continuous wavelet transform of multichannel data to improve the singularity of the estimated cross-covariance matrix and the accuracy of the estimated signal eigenvectors. Using ...
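Step (a), the pairwise wavelet coherence with smoothing along time, can be sketched as follows. This is a minimal illustration under assumed choices (Morlet wavelet, boxcar smoothing, a toy two-station record), not the authors' processing chain.

```python
# Minimal pairwise wavelet-coherence sketch for two array stations.
import numpy as np

def morlet(n, scale, w0=6.0):
    t = (np.arange(n) - n // 2) / scale
    return np.exp(1j * w0 * t) * np.exp(-t**2 / 2.0) / np.sqrt(scale)

def cwt(x, scales, klen=257):
    return np.array([np.convolve(x, morlet(klen, s), mode="same") for s in scales])

def smooth(a, w=64):
    """Moving-average smoothing along time, applied per scale (row-wise)."""
    k = np.ones(w) / w
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), -1, a)

def wavelet_coherence(x, y, scales):
    """Normalised, time-smoothed wavelet cross-power spectrum of two stations."""
    Wx, Wy = cwt(x, scales), cwt(y, scales)
    Sxy = smooth(Wx * np.conj(Wy))
    Sxx, Syy = smooth(np.abs(Wx) ** 2), smooth(np.abs(Wy) ** 2)
    return np.abs(Sxy) ** 2 / (Sxx * Syy + 1e-12)

# two-station toy: a common wave group around sample 1000 with a 5-sample moveout
rng = np.random.default_rng(1)
n = 2048
t = np.arange(n)
burst = np.sin(2 * np.pi * 0.05 * t) * np.exp(-((t - 1000) / 100.0) ** 2)
x = burst + 0.3 * rng.standard_normal(n)
y = np.roll(burst, 5) + 0.3 * rng.standard_normal(n)
C = wavelet_coherence(x, y, scales=[10, 20, 40])
c_burst = C[1, 950:1050].mean()   # scale 20 sits near the burst frequency
c_noise = C[1, 100:500].mean()
print(round(c_burst, 2), round(c_noise, 2))
```

The coherent wave group shows up as a high-coherence patch at the matching time-scale region, while the incoherent noise regions stay low, which is exactly the discrimination step (a) relies on.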
Kim, Ji Chul; Large, Edward W
Oscillatory instability at the Hopf bifurcation is a dynamical phenomenon that has been suggested to characterize active non-linear processes observed in the auditory system. Networks of oscillators poised near Hopf bifurcation points and tuned to tonotopically distributed frequencies have been used as models of auditory processing at various levels, but systematic investigation of the dynamical properties of such oscillatory networks is still lacking. Here we provide a dynamical systems analysis of a canonical model for gradient frequency neural networks driven by a periodic signal. We use linear stability analysis to identify various driven behaviors of canonical oscillators for all possible ranges of model and forcing parameters. The analysis shows that canonical oscillators exhibit qualitatively different sets of driven states and transitions for different regimes of model parameters. We classify the parameter regimes into four main categories based on their distinct signal processing capabilities. This analysis will lead to deeper understanding of the diverse behaviors of neural systems under periodic forcing and can inform the design of oscillatory network models of auditory signal processing.
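The driven behavior the analysis classifies can be explored numerically. The sketch below assumes the standard Hopf normal form with additive periodic forcing and plain forward-Euler integration; the parameter values are illustrative, not taken from the paper.

```python
# Euler integration of a forced Hopf-normal-form oscillator (illustrative parameters).
import numpy as np

def drive(alpha, omega, beta, F, omega0, T=500.0, dt=0.01):
    """Integrate dz/dt = z*(alpha + i*omega + beta*|z|^2) + F*exp(i*omega0*t)."""
    n = int(T / dt)
    z = 0.1 + 0.0j
    zs = np.empty(n, dtype=complex)
    for k in range(n):
        t = k * dt
        z = z + dt * (z * (alpha + 1j * omega + beta * abs(z) ** 2)
                      + F * np.exp(1j * omega0 * t))
        zs[k] = z
    return zs

# Oscillator just past the Hopf point (alpha > 0), forced slightly off its
# natural frequency: within the locking range the driven state phase-locks.
zs = drive(alpha=0.1, omega=2.0, beta=-1.0, F=0.2, omega0=2.1)
t = np.arange(len(zs)) * 0.01
rel_phase = np.unwrap(np.angle(zs) - 2.1 * t)
drift = rel_phase[-1] - rel_phase[len(zs) // 2]   # phase drift over the 2nd half
print(round(abs(drift), 3))
```

Sweeping `alpha`, the detuning `omega - omega0`, and `F` reproduces the kind of locked versus drifting regimes that a linear stability analysis of the rotating-frame fixed points classifies.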
Jepsen, Morten L; Ewert, Stephan D; Dau, Torsten
A model of computational auditory signal-processing and perception that accounts for various aspects of simultaneous and nonsimultaneous masking in human listeners is presented. The model is based on the modulation filterbank model described by Dau et al. [J. Acoust. Soc. Am. 102, 2892 (1997)] but includes major changes at the peripheral and more central stages of processing. The model contains outer- and middle-ear transformations, a nonlinear basilar-membrane processing stage, a hair-cell transduction stage, a squaring expansion, an adaptation stage, a 150-Hz lowpass modulation filter, a bandpass modulation filterbank, a constant-variance internal noise, and an optimal detector stage. The model was evaluated in experimental conditions that reflect, to a different degree, effects of compression as well as spectral and temporal resolution in auditory processing. The experiments include intensity discrimination with pure tones and broadband noise, tone-in-noise detection, spectral masking with narrow-band signals and maskers, forward masking with tone signals and tone or noise maskers, and amplitude-modulation detection with narrow- and wideband noise carriers. The model can account for most of the key properties of the data and is more powerful than the original model. The model might be useful as a front end in technical applications.
Sugimoto, Yohei; Ozawa, Satoru; Inaba, Noriyasu
Synthetic Aperture Radar (SAR) imagery requires image reproduction through successive signal processing of received data before images can be browsed and information extracted. The received signal data records of the ALOS-2/PALSAR-2 are stored in the onboard mission data storage and transmitted to the ground. To balance the storage usage against the capacity of data transmission through the mission data communication networks, the operation duty of the PALSAR-2 is limited. This balance relies strongly on network availability. The observation operations of present spaceborne SAR systems are rigorously planned by simulating the mission data balance, given conflicting user demands. This problem should be solved so that we do not have to compromise the operations and the potential of next-generation spaceborne SAR systems. One solution is to compress the SAR data through onboard image reproduction and information extraction from the reproduced images. This is also beneficial for fast delivery of information products and for event-driven observations by constellations. The Emergence Studio (Sōhatsu kōbō in Japanese), with the Japan Aerospace Exploration Agency, is developing evaluation models of an FPGA-based signal processing system for onboard SAR image reproduction. The model, namely the "Fast L1 Processor (FLIP)" developed in 2016, can reproduce a 10 m resolution single-look complex image (Level 1.1) from ALOS/PALSAR raw signal data (Level 1.0). The processing speed of the FLIP at 200 MHz is twice that of CPU-based computing at 3.7 GHz. The image processed by the FLIP is in no way inferior to the image processed with 32-bit computing in MATLAB.
Adithya, Prashanth Chetlur; Sankar, Ravi; Moreno, Wilfrido A; Hart, Stuart
In this study, a novel acoustic stethoscope based on an intravenous catheter was introduced to measure vascular pressures in a Yorkshire pig. Our hypothesis is that by means of this single device (measurement system) and by applying a signal analysis and processing framework, multiple vital biosignals can be extracted. In contrast, current state-of-the-art approaches use multiple devices to provide the same information. The framework used to extract these biosignals consisted of a noise cancellation technique and wavelet-based source separation. The preliminary results obtained from the acquired pressure data revealed the presence of acoustic heart and respiratory pulses. Finally, the computed heart and respiratory rates were benchmarked against actual values measured using conventional devices to validate our hypothesis.
Chen, Kun; Liu, Quan; Ai, Qingsong; Zhou, Zude; Xie, Sheng Quan; Meng, Wei
The research on brain computer interfaces (BCIs) has become a hotspot in recent years because it offers benefit to disabled people to communicate with the outside world. Steady state visual evoked potential (SSVEP)-based BCIs are more widely used because of higher signal to noise ratio and greater information transfer rate compared with other BCI techniques. In this paper, a multiple signal classification based method was proposed for multi-dimensional SSVEP feature extraction. 2-second data epochs from four electrodes achieved excellent accuracy rates including idle state detection. In some asynchronous mode experiments, the recognition accuracy reached up to 100%. The experimental results showed that the proposed method attained good frequency resolution. In most situations, the recognition accuracy was higher than canonical correlation analysis, which is a typical method for multi-channel SSVEP signal processing. Also, a virtual keyboard was successfully controlled by different subjects in an unshielded environment, which proved the feasibility of the proposed method for multi-dimensional SSVEP signal processing in practical applications.
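A minimal single-channel variant of MUSIC-based SSVEP frequency recognition can be sketched as follows. The time-delay embedding, the subspace dimension, and the candidate stimulus frequencies are illustrative assumptions; the paper's multi-dimensional, multi-electrode implementation is not reproduced here.

```python
# Single-channel MUSIC sketch for picking the SSVEP stimulus frequency.
import numpy as np

def music_spectrum(x, freqs, fs, m=32, n_sources=2):
    """MUSIC pseudospectrum from a time-delay-embedded covariance matrix.
    A real sinusoid spans two complex exponentials, hence n_sources=2."""
    X = np.array([x[i:i + m] for i in range(len(x) - m)])  # length-m snapshots
    R = X.T @ X / len(X)                                   # sample covariance
    w, V = np.linalg.eigh(R)                               # ascending eigenvalues
    En = V[:, : m - n_sources]                             # noise subspace
    k = np.arange(m)
    p = []
    for f in freqs:
        a = np.exp(2j * np.pi * f / fs * k)                # steering vector at f
        p.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(p)

# 2-second epoch of a 12 Hz SSVEP component buried in broadband "EEG" noise
rng = np.random.default_rng(2)
fs = 256
t = np.arange(int(fs * 2.0)) / fs
x = np.sin(2 * np.pi * 12 * t) + 0.8 * rng.standard_normal(len(t))
cands = [8.0, 10.0, 12.0, 15.0]
P = music_spectrum(x, cands, fs)
print(cands[int(np.argmax(P))])  # the 12 Hz stimulus should score highest
```

The pseudospectrum blows up where the steering vector falls inside the signal subspace, which is what gives MUSIC the fine frequency resolution reported in the abstract.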
Rabinkin, Daniel V.; Rutledge, Edward; Monticciolo, Paul
Interceptor missiles process IR images to locate an intended target and guide the interceptor towards it. Signal processing requirements have increased as sensor bandwidth increases and interceptors operate against more sophisticated targets. A typical interceptor signal processing chain comprises two parts. Front-end video processing operates on all pixels of the image and performs such operations as non-uniformity correction (NUC), image stabilization, frame integration and detection. Back-end target processing, which tracks and classifies targets detected in the image, performs such algorithms as Kalman tracking, spectral feature extraction and target discrimination. In the past, video processing was implemented using ASIC components or FPGAs because computation requirements exceeded the throughput of general-purpose processors. Target processing was performed using hybrid architectures that included ASICs, DSPs and general-purpose processors. The resulting systems tended to be function-specific and required custom software development. They were developed using non-integrated toolsets, and test equipment was developed along with the processor platform. The lifespan of a system utilizing the signal processing platform often spans decades, while the specialized nature of processor hardware and software makes it difficult and costly to upgrade. As a result, the signal processing systems often run on outdated technology, algorithms are difficult to update, and system effectiveness is impaired by the inability to rapidly respond to new threats. A new design approach is made possible by three developments: Moore's Law-driven improvement in computational throughput; a newly introduced vector computing capability in general-purpose processors; and a modern set of open interface software standards. Today's multiprocessor commercial-off-the-shelf (COTS) platforms have sufficient throughput to support interceptor signal processing requirements. This application
Pablo A. Alvarado-Durán
In this paper we present a new methodology for detecting sound events in music signals using Gaussian processes. Our method first takes a time-frequency representation, i.e. the spectrogram, of the input audio signal. Second, the spectrogram dimension is reduced by translating the linear hertz frequency scale into the logarithmic Mel frequency scale using a triangular filter bank. Finally, every short-time spectrum, i.e. every Mel spectrogram column, is classified as “Event” or “Not Event” by a Gaussian process classifier. We compare our method with other widely used event detection techniques. To do so, we use MATLAB® to program each technique and test them on two datasets of music with different levels of complexity. Results show that the new methodology outperforms the standard approaches, achieving an improvement of about 1.66% on dataset one and 0.45% on dataset two in terms of F-measure.
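The dimensionality-reduction step, a triangular Mel filter bank applied to each spectrogram column, can be sketched as below. The filter count, FFT size, and sample rate are illustrative assumptions, and the Gaussian-process classifier that follows in the paper's pipeline is omitted.

```python
# Triangular Mel filter bank mapping a linear-frequency spectrum to Mel bands.
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, fs, fmin=0.0, fmax=None):
    """Triangular filters reducing n_fft//2+1 linear bins to n_mels Mel bands."""
    fmax = fmax or fs / 2.0
    mels = np.linspace(hz_to_mel(fmin), hz_to_mel(fmax), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / fs).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        for k in range(l, c):           # rising edge of the triangle
            fb[i, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):           # falling edge
            fb[i, k] = (r - k) / max(r - c, 1)
    return fb

fs, n_fft = 22050, 1024
fb = mel_filterbank(40, n_fft, fs)
# reduce one magnitude-spectrum frame (a 440 Hz tone) to a 40-band Mel frame
t = np.arange(n_fft) / fs
frame = np.abs(np.fft.rfft(np.hanning(n_fft) * np.sin(2 * np.pi * 440 * t)))
mel_frame = fb @ frame
print(fb.shape, mel_frame.shape)
```

Each spectrogram column (513 bins here) becomes a 40-value Mel frame, which is the feature vector the classifier then labels as "Event" or "Not Event".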
Despite the commercial success, video streaming remains a black art owing to its roots in proprietary commercial development. As such, many challenging technological issues that need to be addressed are not even well understood. The purpose of this paper is to review several important signal processing issues related to video streaming, and put them in the context of a client-server based media streaming architecture on the Internet. Such a context is critical, as we shall see that a number of solutions proposed by signal processing researchers are simply unrealistic for real-world video streaming on the Internet. We identify a family of viable solutions and evaluate their pros and cons. We further identify areas of research that have received less attention and point to the problems to which a better solution is eagerly sought by the industry.
Pretlow, Robert A., III; Stoughton, John W.
Research and development is presented of real time signal processing methodologies for the detection of fetal heart tones within a noise-contaminated signal from a passive acoustic sensor. A linear predictor algorithm is utilized for detection of the heart tone event and additional processing derives heart rate. The linear predictor is adaptively 'trained' in a least mean square error sense on generic fetal heart tones recorded from patients. A real time monitor system is described which outputs to a strip chart recorder for plotting the time history of the fetal heart rate. The system is validated in the context of the fetal nonstress test. Comparisons are made with ultrasonic nonstress tests on a series of patients. Comparative data provides favorable indications of the feasibility of the acoustic monitor for clinical use.
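The detection front end described above, a linear predictor adapted in a least-mean-square sense, can be sketched in a few lines. The toy input signal, filter order, and step size below are assumptions; the paper trains on recorded fetal heart tones rather than a synthetic tone.

```python
# One-step-ahead LMS-adapted linear predictor (illustrative parameters).
import numpy as np

def lms_predictor(x, order=8, mu=0.01):
    """Predict x[n] from the previous `order` samples, updating the weights
    with the LMS rule w += 2*mu*e*past after each prediction error e."""
    w = np.zeros(order)
    err = np.empty(len(x) - order)
    for n in range(order, len(x)):
        past = x[n - order:n][::-1]
        e = x[n] - w @ past
        w += 2.0 * mu * e * past
        err[n - order] = e
    return w, err

# quasi-periodic "heart tone" stand-in buried in sensor noise
rng = np.random.default_rng(3)
n = 4000
t = np.arange(n)
x = np.sin(2 * np.pi * 0.03 * t) + 0.1 * rng.standard_normal(n)
w, err = lms_predictor(x)
early = np.mean(err[:500] ** 2)    # before the predictor has adapted
late = np.mean(err[-500:] ** 2)    # after convergence
print(early > late)
```

Once trained, the drop in prediction error on the periodic component is what marks a heart-tone event against the background noise; the heart rate then follows from the event timing.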
Ogawa, Takashi; Wanielik, Gerd
In recent years, the lidar sensor has been receiving greater attention as being one of the prospective sensors for future intelligent vehicles. In order to enable advanced applications in a variety of road environments, it has become more important to detect various objects at a wider distance. Therefore, in this research we have focused on lidar signal processing to detect low signal-to-noise ratio (SNR) targets and proposed a higher sensitive detector. The detector is based on the constant false alarm rate (CFAR) processing framework in which an additional functionality of adaptive intensity integration is incorporated. Fundamental results through static experiments have shown a significant advantage in the detection performance in comparison to a conventional detector with constant thresholding.
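A conventional cell-averaging CFAR baseline of the kind the proposed detector extends can be sketched as follows. The training/guard-cell counts and the exponential noise-power model are textbook assumptions, not the paper's adaptive-intensity-integration detector.

```python
# Cell-averaging CFAR detector for a 1-D power profile (exponential noise model).
import numpy as np

def ca_cfar(power, n_train=16, n_guard=2, pfa=1e-3):
    """Threshold each cell by a scaled mean of the surrounding training cells."""
    N = 2 * n_train
    alpha = N * (pfa ** (-1.0 / N) - 1.0)   # scaling for the desired Pfa
    hits = []
    for i in range(n_train + n_guard, len(power) - n_train - n_guard):
        lead = power[i - n_guard - n_train : i - n_guard]
        lag = power[i + n_guard + 1 : i + n_guard + 1 + n_train]
        noise = (lead.sum() + lag.sum()) / N  # local noise-level estimate
        if power[i] > alpha * noise:
            hits.append(i)
    return hits

rng = np.random.default_rng(4)
power = rng.exponential(1.0, 1000)   # receiver noise power per range cell
power[400] += 30.0                   # a low-SNR lidar return
power[700] += 50.0                   # a stronger return
hits = ca_cfar(power)
print(hits)  # both injected returns should be detected
```

Because the threshold tracks the local noise estimate, the false-alarm rate stays roughly constant as the noise floor varies, which is the property the paper's detector preserves while adding adaptive integration for weaker targets.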
Order statistics and morphological filters belong to a class of nonlinear filters that have recently found many applications in signal analysis and image processing. In this paper, order-statistic and morphological filters have been applied to enhance the features of the ultrasonic signal when it has been contaminated by multiple interfering microstructure echoes with random amplitudes and phases. These interfering echoes (i.e., speckles or grain scattering noise) often become significant to the point where detection of flaw echoes becomes very difficult. We have examined order-statistic and morphological filters for improved ultrasonic flaw detection. In particular, the performance of these filters has been evaluated using different ranks of order statistics (minimum, median, maximum) and different shapes of structuring elements in the application of morphological filters. The processed experimental results from testing steel samples demonstrate that these filters are capable of improving flaw detection in ultrasonic systems.
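With a flat structuring element, both filter families reduce to sliding-window order statistics, so one compact sketch covers the median filter and the gray-scale opening together. The window sizes and the toy speckle model are assumptions for illustration, not the paper's experimental setup.

```python
# Order-statistic and flat-structuring-element morphological filtering in 1-D.
import numpy as np

def order_filter(x, size, rank):
    """Sliding-window order-statistic filter: rank 0 = erosion (min),
    size // 2 = median, size - 1 = dilation (max)."""
    half = size // 2
    xp = np.pad(x, half, mode="edge")
    return np.array([np.sort(xp[i:i + size])[rank] for i in range(len(x))])

def opening(x, size):
    """Gray-scale opening with a flat structuring element: erosion then dilation,
    which removes bright structures narrower than the element."""
    return order_filter(order_filter(x, size, 0), size, size - 1)

# toy rectified A-scan: one wide flaw echo plus impulsive grain "speckle"
rng = np.random.default_rng(5)
n = 600
t = np.arange(n)
env = 2.0 * np.exp(-((t - 300) / 15.0) ** 2)
env[rng.integers(0, n, 40)] += rng.uniform(1.0, 2.0, 40)
med = order_filter(env, 7, 3)   # median removes isolated speckle spikes
op = opening(env, 7)            # opening removes anything narrower than 7 samples
print(round(med[300], 2), round(op[300], 2))
```

Both outputs keep the wide flaw echo near full amplitude while the single-sample speckle spikes are suppressed, which is the rank/structuring-element trade-off the paper evaluates.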
Mittendorfer, J.; Gratzl, F.; Hanis, D.
In this paper the status of process qualification and control in electron beam irradiation is analyzed in terms of requirements, concepts, methods and challenges for a state-of-the-art process control concept for medical device sterilization. Aspects from process qualification to routine process control are described together with the associated process variables. As a case study, the 10 MeV beams at Mediscan GmbH are considered. Process control concepts such as statistical process control (SPC) and a new concept for determining process capability are briefly discussed.
exercises in the summer of 1994, and demonstrated its ability to provide real-time collection of acoustic data over 32 channels, to perform onsite data ... corrupted by having a support ship in the area of testing. Therefore, the challenge was to develop an acoustic data collection system that allowed full ... exercise. The solution proposed by NRL was to develop a system that incorporated on-site signal processing and storage along with a satellite data
Jeonggon Harrison Kim
A new signal processing algorithm for absolute temperature measurement using white light interferometry has been proposed and investigated theoretically. The proposed algorithm determines the phase delay of an interferometer with very high precision (much less than one fringe) by identifying the zero-order fringe peak of the cross-correlation of two fringe scans of a white light interferometer. The algorithm features cross-correlation of interferometer fringe scans, hypothesis testing, and fine tuning. The hypothesis test looks for a zero-order fringe peak candidate about which the cross-correlation is symmetric, minimizing the uncertainty of misidentification. Fine tuning provides the proposed algorithm with high-precision, subsample-resolution phase delay estimation capability. The shot-noise-limited performance of the proposed algorithm has been analyzed using computer simulations. The root-mean-square (RMS) phase error of the estimated zero-order fringe peak has been calculated for changes in three parameters (SNR, fringe scan sample rate, and coherence length of the light source). Computer simulations showed that the proposed signal processing algorithm identified the zero-order fringe peak with a miss rate of 3 × 10^-4 at 31 dB SNR, and the extrapolated miss rate at 35 dB was 3 × 10^-8. Also, at 35 dB SNR, an RMS phase error of less than 10^-3 fringe was obtained. The proposed signal processing algorithm uses a software approach that is potentially inexpensive, simple and fast.
Roth, Michael; Hendeby, Gustaf; Fritsche, Carsten; Gustafsson, Fredrik
The ensemble Kalman filter (EnKF) is a Monte Carlo-based implementation of the Kalman filter (KF) for extremely high-dimensional, possibly nonlinear, and non-Gaussian state estimation problems. Its ability to handle state dimensions on the order of millions has made the EnKF a popular algorithm in different geoscientific disciplines. Despite a similarly vital need for scalable algorithms in signal processing, e.g., to make sense of the ever-increasing amount of sensor data, the EnKF is hardly discussed in our field. This self-contained review is aimed at signal processing researchers and provides all the knowledge needed to get started with the EnKF. The algorithm is derived in a KF framework, without the often encountered geoscientific terminology. Algorithmic challenges and required extensions of the EnKF are described, as well as relations to sigma point KFs and particle filters. The relevant EnKF literature is summarized in an extensive survey, and unique simulation examples, including popular benchmark problems, complement the theory with practical insights. The signal processing perspective highlights new directions of research and facilitates the exchange of potentially beneficial ideas, both for the EnKF and for high-dimensional nonlinear and non-Gaussian filtering in general.
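A minimal stochastic EnKF analysis step, of the kind the review derives in a KF framework, can be sketched as follows. The toy dimensions, prior, and noise values are assumptions for illustration only; large-scale EnKF codes never form the sample covariance explicitly as this sketch does:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(X, y, H, R):
    """Stochastic EnKF analysis step.
    X: (n, N) ensemble of state vectors; y: (m,) observation;
    H: (m, n) observation matrix; R: (m, m) observation noise covariance."""
    n, N = X.shape
    m = len(y)
    A = X - X.mean(axis=1, keepdims=True)      # ensemble anomalies
    P = A @ A.T / (N - 1)                      # sample covariance (toy size)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    # perturb the observation for each member (stochastic EnKF variant)
    Y = y[:, None] + rng.multivariate_normal(np.zeros(m), R, size=N).T
    return X + K @ (Y - H @ X)

# 2-state example: observe the first state component only
X = rng.normal(5.0, 2.0, size=(2, 500))
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
Xa = enkf_update(X, np.array([3.0]), H, R)
print(Xa[0].mean())   # pulled from the prior mean ~5 toward the observation 3
```

The update shrinks the observed component's spread and moves its mean toward the measurement, exactly as the corresponding KF update would in expectation.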
Lapedes, A.; Farber, R.
The backpropagation learning algorithm for neural networks is developed into a formalism for nonlinear signal processing. We illustrate the method by selecting two common topics in signal processing, prediction and system modelling, and show that nonlinear applications can be handled extremely well by using neural networks. The formalism is a natural, nonlinear extension of the linear Least Mean Squares algorithm commonly used in adaptive signal processing. Simulations are presented that document the additional performance achieved by using nonlinear neural networks. First, we demonstrate that the formalism may be used to predict points in a highly chaotic time series with orders of magnitude increase in accuracy over conventional methods including the Linear Predictive Method and the Gabor-Volterra-Wiener Polynomial Method. Deterministic chaos is thought to be involved in many physical situations including the onset of turbulence in fluids, chemical reactions and plasma physics. Second, we demonstrate the use of the formalism in nonlinear system modelling by providing a graphic example in which it is clear that the neural network has accurately modelled the nonlinear transfer function. It is interesting to note that the formalism provides explicit, analytic, global approximations to the nonlinear maps underlying the various time series. Furthermore, the neural net seems to be extremely parsimonious in its requirements for data points from the time series. We show that the neural net is able to perform well because it globally approximates the relevant maps by performing a kind of generalized mode decomposition of the maps. 24 refs., 13 figs.
Thomas, Jr., Jess B. (Inventor)
A digital signal processor and processing method therefor for use in receivers of the NAVSTAR/GLOBAL POSITIONING SYSTEM (GPS) employs a digital carrier down-converter, digital code correlator and digital tracking processor. The digital carrier down-converter and code correlator consist of an all-digital, minimum-bit implementation that utilizes digital chip and phase advancers, providing exceptional control and accuracy in feedback phase and in feedback delay. Roundoff and commensurability errors can be reduced to extremely small values (e.g., less than 100 nanochips and 100 nanocycles roundoff errors and 0.1 millichip and 1 millicycle commensurability errors). The digital tracking processor bases the fast feedback for phase and for group delay in the C/A, P1, and P2 channels on the L1 C/A carrier phase, thereby maintaining lock at lower signal-to-noise ratios, reducing errors in feedback delays, reducing the frequency of cycle slips and in some cases obviating the need for quadrature processing in the P channels. Simple and reliable methods are employed for data bit synchronization, data bit removal and cycle counting. Improved precision in averaged output delay values is provided by carrier-aided data-compression techniques. The signal processor employs purely digital operations in the sense that exactly the same carrier phase and group delay measurements are obtained, to the last decimal place, every time the same sampled data (i.e., exactly the same bits) are processed.
Slipko, Valeriy A.; Pershin, Yuriy V.
Traditional studies of memristive devices have mainly focused on their applications in nonvolatile information storage and information processing. Here, we demonstrate that the third fundamental component of information technologies—the transfer of information—can also be employed with memristive devices. For this purpose, we introduce a metastable memristive circuit. Combining metastable memristive circuits into a line, one obtains an architecture capable of transferring a signal edge from one space location to another. We emphasize that the suggested metastable memristive lines employ only resistive circuit components. Moreover, their networks (for example, Y-connected lines) have an information processing capability.
van Straten, W.; Bailes, M.
dspsr is a high-performance, open-source, object-oriented, digital signal processing software library and application suite for use in radio pulsar astronomy. Written primarily in C++, the library implements an extensive range of modular algorithms that can optionally exploit both multiple-core processors and general-purpose graphics processing units. After over a decade of research and development, dspsr is now stable and in widespread use in the community. This paper presents a detailed description of its functionality, justification of major design decisions, analysis of phase-coherent dispersion removal algorithms, and demonstration of performance on some contemporary microprocessor architectures.
Chakrabarti, S.; Patrick, W.C.; Duplancic, N.
Digital signal processing, a technique commonly used in the fields of electrical engineering and communication technology, has been successfully used to analyze creep closure data obtained from a 0.91 m diameter by 5.13 m deep borehole in bedded salt. By filtering the ''noise'' component of the closure data from a test borehole, important data trends were made more evident and average creep closure rates could be calculated. This process provided accurate estimates of closure rates that are used in the design of lined boreholes in which heat-generating transuranic nuclear wastes are emplaced at the Waste Isolation Pilot Plant.
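The processing described, filtering the "noise" component of a closure record and then extracting an average closure rate, can be illustrated with synthetic data. The moving-average filter, the 0.05 mm/day trend, and the noise level below are hypothetical stand-ins, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(3)

days = np.arange(200.0)
closure = 0.05 * days + rng.normal(0, 0.4, 200)   # mm: trend plus "noise"

# Low-pass filter the record with a simple moving average (a stand-in
# for whatever filter the study used), then estimate the average creep
# closure rate as the slope of a least-squares line through the result.
w = 21
smooth = np.convolve(closure, np.ones(w) / w, mode="valid")
d = days[w // 2 : w // 2 + len(smooth)]
rate = np.polyfit(d, smooth, 1)[0]
print(rate)   # close to the underlying 0.05 mm/day trend
```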
Hugo D Critchley
Visceral afferent signals to the brain influence thoughts, feelings and behaviour. Here we highlight the findings of a set of empirical investigations in humans concerning body-mind interaction that focus on how feedback from states of autonomic arousal shapes cognition and emotion. There is a longstanding debate regarding the contribution of the body to mental processes. Recent theoretical models broadly acknowledge the role of (autonomically-mediated) physiological arousal in emotional, social and motivational behaviours, yet the underlying mechanisms are only partially characterized. Neuroimaging is overcoming this shortfall: first, by demonstrating correlations between autonomic change and discrete patterns of evoked, and task-independent, neural activity; second, by mapping the central consequences of clinical perturbations in autonomic response; and third, by probing how dynamic fluctuations in peripheral autonomic state are integrated with perceptual, cognitive and emotional processes. Building on the notion that an important source of the brain's representation of physiological arousal is derived from afferent information from arterial baroreceptors, we have exploited the phasic nature of these signals to show their differential contribution to the processing of emotionally-salient stimuli. This recent work highlights the facilitation at neural and behavioural levels of fear and threat processing, which contrasts with the more established observations of the inhibition of central pain processing during baroreceptor activation. The implications of this body-brain-mind axis are discussed.
Klünder, Mario; Sawodny, Oliver; Amend, Bastian; Ederer, Michael; Kelp, Alexandra; Sievert, Karl-Dietrich; Stenzl, Arnulf; Feuer, Ronny
Urethral pressure profilometry (UPP) is used in the diagnosis of stress urinary incontinence (SUI) which is a significant medical, social, and economic problem. Low spatial pressure resolution, common occurrence of artifacts, and uncertainties in data location limit the diagnostic value of UPP. To overcome these limitations, high definition urethral pressure profilometry (HD-UPP) combining enhanced UPP hardware and signal processing algorithms has been developed. In this work, we present the different signal processing steps in HD-UPP and show experimental results from female minipigs. We use a special microtip catheter with high angular pressure resolution and an integrated inclination sensor. Signals from the catheter are filtered and time-correlated artifacts removed. A signal reconstruction algorithm processes pressure data into a detailed pressure image on the urethra's inside. Finally, the pressure distribution on the urethra's outside is calculated through deconvolution. A mathematical model of the urethra is contained in a point-spread-function (PSF) which is identified depending on geometric and material properties of the urethra. We additionally investigate the PSF's frequency response to determine the relevant frequency band for pressure information on the urinary sphincter. Experimental pressure data are spatially located and processed into high resolution pressure images. Artifacts are successfully removed from data without blurring other details. The pressure distribution on the urethra's outside is reconstructed and compared to the one on the inside. Finally, the pressure images are mapped onto the urethral geometry calculated from inclination and position data to provide an integrated image of pressure distribution, anatomical shape, and location. With its advanced sensing capabilities, the novel microtip catheter collects an unprecedented amount of urethral pressure data. Through sequential signal processing steps, physicians are provided with
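The final deconvolution step described above, recovering the pressure distribution on the urethra's outside from the inside pressure profile via a point-spread function, can be illustrated in one dimension. The Gaussian PSF, its width, and the regularized (Wiener-style) inversion here are assumptions for this sketch; the paper identifies its PSF from geometric and material properties of the urethra instead:

```python
import numpy as np

def deconvolve(measured, psf, eps=1e-3):
    """Regularized (Wiener-style) frequency-domain deconvolution: divide
    the measured spectrum by the PSF spectrum, with `eps` damping
    frequencies where the PSF response is weak."""
    Hf = np.fft.fft(psf)
    Yf = np.fft.fft(measured)
    Xf = Yf * np.conj(Hf) / (np.abs(Hf) ** 2 + eps)
    return np.real(np.fft.ifft(Xf))

n = 256
idx = np.arange(n)

# Hypothetical "true" outside-pressure profile: a short plateau.
true = np.zeros(n)
true[100:110] = 1.0

# Gaussian PSF (sigma = 3 samples), rolled so its peak sits at index 0
# and circular convolution does not shift the profile.
psf = np.exp(-0.5 * ((idx - n // 2) / 3.0) ** 2)
psf /= psf.sum()
psf = np.roll(psf, -n // 2)

measured = np.real(np.fft.ifft(np.fft.fft(true) * np.fft.fft(psf)))
rec = deconvolve(measured, psf)   # sharp plateau recovered from the blur
```

The regularization constant trades resolution against noise amplification, which is why the paper's analysis of the PSF's frequency response matters: it determines which band carries usable sphincter pressure information.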
Ze'ev R Abrams
Cerebellar Purkinje cells in vitro fire recurrent sequences of Sodium and Calcium spikes. Here, we analyze the Purkinje cell using harmonic analysis, and our experiments reveal that its output signal is comprised of three distinct frequency bands, which are combined using Amplitude and Frequency Modulation (AM/FM). We find that the three characteristic frequencies – Sodium, Calcium and Switching – occur in various combinations in all waveforms observed using whole-cell current clamp recordings. We find that the Calcium frequency can display a doubling of its frequency mode, and that the Switching frequency can act as a possible generator of the pauses that are typically seen in Purkinje output recordings. Using a reversibly photo-switchable kainate receptor agonist, we demonstrate the external modulation of the Calcium and Switching frequencies. These experiments and Fourier analysis suggest that the Purkinje cell can be understood as a harmonic signal oscillator, enabling a higher level of interpretation of Purkinje signaling based on modern signal processing techniques.
Currently used diagnostic systems are not always efficient and do not give straightforward results that allow for the assessment of the technological condition of the engine or for the identification of possible damage in its early stages of development. Growing requirements concerning durability, reliability, reduction of costs to a minimum and decrease of negative influence on the natural environment are the reasons why there is a need to acquire information about the technological condition of each of the elements of a vehicle during its exploitation. One possible source of information about the technological condition of a vehicle is vibroacoustic phenomena. Symptoms of defects, obtained as a result of advanced methods of vibroacoustic signal processing, can serve as models which can be used during construction of an intelligent diagnostic system based on artificial neural networks. The work presents a concept for the use of artificial neural networks in the task of combustion engine diagnosis.
Rouphael, Tony J
Wireless Receiver Architectures and Design presents the various designs and architectures of wireless receivers in the context of modern multi-mode and multi-standard devices. This one-stop reference and guide to designing low-cost low-power multi-mode, multi-standard receivers treats analog and digital signal processing simultaneously, with equal detail given to the chosen architecture and modulating waveform. It provides a complete understanding of the receiver's analog front end and the digital backend, and how each affects the other. The book explains the design process in great detail, s
Poelmans, J.; Ignatov, D.I.; Kuznetsov, S.O.; Dedene, G.
This is the second part of a large survey paper in which we analyze recent literature on Formal Concept Analysis (FCA) and some closely related disciplines using FCA. We collected 1072 papers published between 2003 and 2011 mentioning terms related to Formal Concept Analysis in the title, abstract
Wu, Yun-Wu; Weng, Kuo-Hua; Young, Li-Ming
Generally, in the foundation course of architectural design, much emphasis is placed on teaching basic design skills without focusing on teaching students to apply the basic design concepts in their architectural designs or promoting students' own creativity. Therefore, this study aims to propose a concept transformation learning model to…
Dudik, Joshua M; Coyle, James L; Sejdić, Ervin
Cervical auscultation is the recording of sounds and vibrations caused by the human body from the throat during swallowing. While traditionally done by a trained clinician with a stethoscope, much work has been put towards developing more sensitive and clinically useful methods to characterize the data obtained with this technique. The eventual goal of the field is to improve the effectiveness of screening algorithms designed to predict the risk that swallowing disorders pose to individual patients' health and safety. This paper provides an overview of these signal processing techniques and summarizes recent advances made with digital transducers in hopes of organizing the highly varied research on cervical auscultation. It investigates where on the body these transducers are placed in order to record a signal as well as the collection of analog and digital filtering techniques used to further improve the signal quality. It also presents the wide array of methods and features used to characterize these signals, ranging from simply counting the number of swallows that occur over a period of time to calculating various descriptive features in the time, frequency, and phase space domains. Finally, this paper presents the algorithms that have been used to classify this data into 'normal' and 'abnormal' categories. Both linear and non-linear techniques are presented in this regard.
Csaba, György, E-mail: email@example.com [Center for Nano Science and Technology, University of Notre Dame (United States); Faculty for Information Technology and Bionics, Pázmány Péter Catholic University (Hungary); Papp, Ádám [Center for Nano Science and Technology, University of Notre Dame (United States); Faculty for Information Technology and Bionics, Pázmány Péter Catholic University (Hungary); Porod, Wolfgang [Center for Nano Science and Technology, University of Notre Dame (United States)
Highlights: • We give an overview of spin wave-based computing with emphasis on non-Boolean signal processors. • Spin waves can combine the best of electronics and photonics and do it in an on-chip and integrable way. • Copying successful approaches from microelectronics may not be the best way toward spin-wave based computing. • Practical devices can be constructed by minimizing the number of required magneto-electric interconnections. - Abstract: Almost all the world's information is processed and transmitted by either electric currents or photons. Now they may get a serious contender: spin-wave-based devices may just perform some information-processing tasks in a lot more efficient and practical way. In this article, we give an engineering perspective of the potential of spin-wave-based devices. After reviewing various flavors for spin-wave-based processing devices, we argue that the niche for spin-wave-based devices is low-power, compact and high-speed signal-processing devices, where most traditional electronics show poor performance.
In this thesis, digital signal processing (DSP) algorithms are studied to compensate for physical layer impairments in optical fiber coherent communication systems. The physical layer impairments investigated in this thesis include optical fiber chromatic dispersion, polarization demultiplexing..., light source frequency and phase offset, and phase noise. The studied DSP algorithms are considered key building blocks in digital coherent receivers for the next generation of optical communication systems such as 112-Gb/s dual polarization (DP) quadrature phase shift keying (QPSK) optical... spectrum narrowing tolerance in 112-Gb/s DP-QPSK optical coherent systems using a digital adaptive equalizer. The demonstrated results show that off-line DSP algorithms are able to reduce the bit error rate (BER) penalty induced by signal spectrum narrowing. Third, we also investigate bi...
Oxenløwe, Leif Katsuo; Ji, Hua; Jensen, Asger Sellerup
with a photonic layer on top to interconnect them. For such systems, silicon is an attractive candidate enabling both electronic and photonic control. For some network scenarios, it may be beneficial to use optical on-chip packet switching, and for high data-density environments one may take advantage... of the ultra-fast nonlinear response of silicon photonic waveguides. These chips offer ultra-broadband wavelength operation, ultra-high timing resolution and ultra-fast response, and when used appropriately offer energy-efficient switching. In this presentation we review some all-optical functionalities based... on silicon photonics. In particular we use nano-engineered silicon waveguides (nanowires) enabling efficient phasematched four-wave mixing (FWM), cross-phase modulation (XPM) or self-phase modulation (SPM) for ultra-high-speed optical signal processing of ultra-high bit rate serial data signals. We show...
Digital Signal Processing; CAS 2007
These proceedings present the lectures given at the twenty-first specialized course organized by the CERN Accelerator School (CAS), the topic being Digital Signal Processing. The course was held in Sigtuna, Sweden, from 31 May–9 June 2007. This is the first time this topic has been selected for a specialized course. Taking into account the number of related applications currently in use in accelerators around the world, it was recognized that such a topic should definitely be incorporated into the CAS series of specialized courses. The specific aim of the course was to introduce the participants to the use and programming of Digital Signal Processors (DSPs) and Field Programmable Gate Arrays (FPGAs) evaluation boards. The course consisted of lectures in the mornings covering fundamental background knowledge in mathematics, controls theory, design tools, programming hardware platforms, and implementation details. In the afternoons the students split into two groups with people working in pairs. One group w...
Rajasekhar, Pradeep; Poole, Daniel P; Veldhuis, Nicholas A
Transient receptor potential (TRP) ion channels are important signaling components in nociceptive and inflammatory pathways. This is attributed to their ability to function as polymodal sensors of environmental stimuli (chemical and mechanical) and as effector molecules in receptor signaling pathways. TRP vanilloid 4 (TRPV4) is a nonselective cation channel that is activated by multiple endogenous stimuli including shear stress, membrane stretch, and arachidonic acid metabolites. TRPV4 contributes to many important physiological processes and dysregulation of its activity is associated with chronic conditions of metabolism, inflammation, peripheral neuropathies, musculoskeletal development, and cardiovascular regulation. Mechanosensory and receptor- or lipid-mediated signaling functions of TRPV4 have historically been attributed to central and peripheral neurons. However, with the development of potent and selective pharmacological tools, transgenic mice and improved molecular and imaging techniques, many new roles for TRPV4 have been revealed in nonneuronal cells. In this chapter, we discuss these recent findings and highlight the need for greater characterization of TRPV4-mediated signaling in nonneuronal cell types that are either directly associated with neurons or indirectly control their excitability through release of sensitizing cellular factors. We address the integral role of these cells in sensory and inflammatory processes as well as their importance when considering undesirable on-target effects that may be caused by systemic delivery of TRPV4-selective pharmaceutical agents for treatment of chronic diseases. In future, this will drive a need for targeted drug delivery strategies to regulate such a diverse and promiscuous protein. © 2017 Elsevier Inc. All rights reserved.
The problem of hypothesis testing in the Neyman–Pearson formulation is considered from a geometric viewpoint. In particular, a concise geometric interpretation of deterministic and random signal detection in the philosophy of information geometry is presented. In such a framework, both hypotheses and detectors can be treated as geometrical objects on the statistical manifold of a parameterized family of probability distributions. Both the detector and detection performance are geometrically elucidated in terms of the Kullback–Leibler divergence. Compared to the likelihood ratio test, the geometric interpretation provides a consistent but more comprehensive means to understand and deal with signal detection problems in a rather convenient manner. An example of the geometry-based detector in radar constant false alarm rate (CFAR) detection is presented, which shows its advantage over the classical processing method.
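A toy version of detection phrased through the Kullback–Leibler divergence can be sketched for scalar Gaussian hypotheses: fit an empirical Gaussian to the data batch and decide for the hypothesis that is closer in KL. The hypothesis parameters and batch size below are illustrative assumptions, and this minimum-KL rule is a simplification of the information-geometric detectors the paper develops:

```python
import numpy as np

def kl_gauss(m0, v0, m1, v1):
    """KL divergence D(N(m0, v0) || N(m1, v1)) for scalar Gaussians."""
    return 0.5 * (v0 / v1 + (m1 - m0) ** 2 / v1 - 1.0 + np.log(v1 / v0))

def min_kl_detect(x, h0, h1):
    """Decide between two Gaussian hypotheses (mean, variance tuples) by
    comparing KL divergences from the empirical Gaussian fitted to x."""
    m, v = x.mean(), x.var()
    d0 = kl_gauss(m, v, *h0)
    d1 = kl_gauss(m, v, *h1)
    return 1 if d1 < d0 else 0

rng = np.random.default_rng(1)
h0 = (0.0, 1.0)                      # H0, noise only: N(0, 1)
h1 = (1.0, 1.0)                      # H1, signal present: N(1, 1)
x = rng.normal(1.0, 1.0, size=200)   # a batch drawn under H1
print(min_kl_detect(x, h0, h1))      # → 1
```

For equal-variance Gaussians this rule reduces to the usual likelihood ratio threshold test, which is the consistency with classical detection that the geometric view makes explicit.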
Chadwick, K.H.; Oosterheert, W.F.
The associations between the dosimetry concepts minimum absorbed dose (D_min), maximum absorbed dose (D_max), average dose and median dose are investigated for the case of a large cobalt-60 plaque source irradiating homogeneous bulk product in a two-pass, two-sided irradiation. It is assumed that, to a first approximation, the intensity of radiation decreases exponentially with the depth, t, in the product. A series of mathematical relationships is derived for the average dose, the maximum and minimum dose, the median dose (defined as (D_max + D_min)/2), and the uniformity ratio (defined as U.R. = D_max/D_min). The relationships are derived in terms of a constant D_0 (the dose on the surface of the product in the pass close to the source) and the relaxation length (μt) of the radiation in the product. Since the uniformity ratio and other dose parameters can be calculated for certain chosen values of μt, by dividing the dose range from D_min to D_max into 10 equal fractions, the amount of product irradiated to each of the fractions is calculated, and it is shown that, independent of the value of U.R., about a third of the product receives a dose in the first fraction above D_min. It is also shown that for a given median dose, the average dose decreases as U.R. increases. The calculated dose relationships are confirmed by measurement in homogeneous dummy product, using the lyoluminescence of glutamine to measure dose. The implications of these results for the regulation of the food irradiation process and for the design of irradiation facilities are discussed. (author)
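The dose relationships above follow from the two-sided exponential attenuation model and are easy to evaluate numerically. The values of D_0, μ and the product thickness below are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

def two_sided_dose(D0=1.0, mu=0.5, T=4.0, n=1001):
    """Depth-dose profile for a two-pass, two-sided irradiation under
    simple exponential attenuation: D(t) = D0*(exp(-mu*t) + exp(-mu*(T-t)))."""
    t = np.linspace(0.0, T, n)
    return t, D0 * (np.exp(-mu * t) + np.exp(-mu * (T - t)))

t, D = two_sided_dose()
Dmin, Dmax = D.min(), D.max()
print(Dmax / Dmin)        # uniformity ratio U.R. = D_max / D_min
print((Dmax + Dmin) / 2)  # median dose as defined in the abstract
print(D.mean())           # average dose over depth
```

D_max sits at the two irradiated surfaces and D_min at mid-depth, so U.R. grows with the product of μ and thickness, which is the dependence on μt that the derived relationships capture.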
Technological advances in proteomics have shown great potential in detecting cancer at the earliest stages. One way is to use time of flight mass spectroscopy to identify biomarkers, or early disease indicators related to the cancer. Pattern analysis of time of flight mass spectra data from blood and tissue samples gives great hope for the identification of potential biomarkers among the complex mixture of biological and chemical samples for early cancer detection. One of the key issues is the pre-processing of raw mass spectra data. Many challenges need to be addressed: the unknown noise character associated with the large volume of data, high variability in the mass spectroscopy measurements, a poorly understood signal background, and so on. This dissertation focuses on developing statistical algorithms and creating data mining tools for computationally improved signal processing of mass spectrometry data. I have introduced an advanced, accurate estimate of the noise model and a semi-supervised method of mass spectrum data processing which requires little knowledge about the data.
Heger, A.S.; Alang-Rashid, N.K.; Holbert, K.E.
The advent of fuzzy logic technology has afforded another opportunity to reexamine the signal processing and validation (SPV) process. The features offered by fuzzy logic can lend themselves to a more reliable and perhaps fault-tolerant approach to SPV. This is particularly attractive for complex system operations, where optimal control for safe operation depends on reliable input data. The reason for the use of fuzzy logic as the tool for SPV is its ability to transform information from the linguistic domain to a mathematical domain for processing, and then to transform the result back into the linguistic domain for presentation. To ensure the safe and optimal operation of a nuclear plant, for example, reliable and valid data must be available to the human and computer operators. Based on these input data, the operators determine the current state of the power plant and project corrective actions for future states. This determination is based on available data and the conceptual and mathematical models for the plant. A fault-tolerant SPV based on fuzzy logic can help the operators meet the objective of effective, efficient, and safe operation of the nuclear power plant. The ultimate product of this project will be a code that will assist plant operators in making informed decisions under uncertain conditions when conflicting signals may be present.
Tim Holm Jakobsen
The development of effective strategies to combat biofilm infections by means of either mechanical or chemical approaches could dramatically change today's treatment procedures for the benefit of thousands of patients. Remarkably, considering the increased focus on biofilms in general, no simple, efficient and reliable method with which to "chemically" eradicate biofilm infections has yet been invented or developed. This underlines the resilience of infective agents present as biofilms, and it further emphasizes the insufficiency of today's approaches used to combat chronic infections. A potential method for biofilm dismantling is chemical interception of regulatory processes that are specifically involved in the biofilm mode of life. In particular, bacterial cell-to-cell signaling called "Quorum Sensing", together with intracellular signaling by bis-(3′-5′)-cyclic dimeric guanosine monophosphate (cyclic-di-GMP), has gained a lot of attention over the last two decades. More recently, regulatory processes governed by two-component regulatory systems and small non-coding RNAs have been increasingly investigated. Here, we review novel findings on, and the potential of using, small molecules to target and modulate these regulatory processes in the bacterium Pseudomonas aeruginosa to decrease its pathogenic potential.
The purpose of this book is to provide graduate students and practitioners with traditional methods and more recent results for model-based approaches in signal processing. Firstly, discrete-time linear models such as AR, MA and ARMA models, their properties and their limitations are introduced. In addition, sinusoidal models are addressed. Secondly, estimation approaches based on least squares methods and instrumental variable techniques are presented. Finally, the book deals with optimal filters, i.e. Wiener and Kalman filtering, and adaptive filters such as the RLS, the LMS and the
Geldenhuys, R.; Liu, Y.; Calabretta, N.; Hill, M. T.; Huijskens, F. M.; Khoe, G. D.; Dorren, H. J. S.
We present three optical signal processing functional blocks that enable 1×N optical packet switching. An ultrafast asynchronous multioutput all-optical header processor is demonstrated with a terahertz optical asymmetric demultiplexer in combination with a header preprocessor. It is shown that self-induced polarization rotation can be used for both the header processor and the header preprocessor. The second functional block is optical buffering. This is shown with both a laser neural network and a recirculating buffer. Related to this is a three-state all-optical memory based on coupled lasers, which increases the number of possible output states of an optical packet switch.
Kelly, Robert J.; Van Graas, Frank; Kuhl, Mark R.
A measurement processing method has been developed which markedly improves the GPS Receiver Autonomous Integrity Monitoring (RAIM) software-based algorithm system's effectiveness in detecting satellite signal failures. Detection is via the consistency of a redundant set of pseudorange measurements. When five satellites are in view, five different subsolutions can be calculated; the integrity alarm is triggered on the basis of subsolution comparisons. Because a poor distribution of the satellites also causes RAIM subsolution scattering, a methodology for selecting the covariance matrix is presented which incorporates ridge regression into a Kalman filter.
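The consistency check described, comparing leave-one-satellite-out subsolutions against the full solution, can be sketched with a linearized 2-D-position-plus-clock model. The geometry, noise level, and 50 m fault size are illustrative assumptions, and the simple scatter statistic below stands in for the covariance-based detection logic of the actual RAIM algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

# Linearized geometry: unit line-of-sight vectors plus a clock column.
angles = np.deg2rad([10, 80, 150, 220, 300])
G = np.column_stack([np.cos(angles), np.sin(angles), np.ones(5)])

truth = np.array([0.0, 0.0, 0.0])                 # position error + clock
rho = G @ truth + rng.normal(0, 0.5, 5)           # pseudoranges, 0.5 m noise
rho_f = rho.copy()
rho_f[2] += 50.0                                  # 50 m fault on satellite 3

def raim_scatter(G, rho):
    """Max spread of the leave-one-out subsolutions around the full
    least-squares solution; large scatter flags an inconsistent set."""
    full, *_ = np.linalg.lstsq(G, rho, rcond=None)
    subs = [np.linalg.lstsq(np.delete(G, i, 0), np.delete(rho, i),
                            rcond=None)[0]
            for i in range(len(rho))]
    return max(np.linalg.norm(s - full) for s in subs)

print(raim_scatter(G, rho))    # small: measurements are consistent
print(raim_scatter(G, rho_f))  # large: a failed satellite is detectable
```

The subsolution that excludes the faulty satellite lands near the truth while the full solution is dragged away by the fault, so the scatter jumps; poor satellite geometry inflates the scatter even without faults, which is why the paper folds the measurement covariance into the test.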
Fan, Z. C.; Chan, T. S.; Yang, Y. H.; Jang, J. S. R.
We propose a novel neural network model for music signal processing using vector product neurons and dimensionality transformations. Here, the inputs are first mapped from real values into three-dimensional vectors then fed into a three-dimensional vector product neural network where the inputs, outputs, and weights are all three-dimensional values. Next, the final outputs are mapped back to the reals. Two methods for dimensionality transformation are proposed, one via context windows and the other via spectral coloring. Experimental results on the iKala dataset for blind singing voice separation confirm the efficacy of our model.
Critchley, Frank; Dodson, Christopher
This book focuses on the application and development of information geometric methods in the analysis, classification and retrieval of images and signals. It provides introductory chapters to help those new to information geometry and applies the theory to several applications. This area has developed rapidly over recent years, propelled by the major theoretical developments in information geometry, efficient data and image acquisition and the desire to process and interpret large databases of digital information. The book addresses both the transfer of methodology to practitioners involved in database analysis and in its efficient computational implementation.
Oxenløwe, Leif Katsuo; Galili, Michael; Mulvad, Hans Christian Hansen
We review recent experimental demonstrations of Tbaud optical signal processing. In particular, we describe a successful 1.28 Tbit/s serial data generation based on single polarization 1.28 Tbaud symbol rate pulses with binary data modulation (OOK) and subsequent all-optical demultiplexing. We also demonstrate detection in a delay-interferometer-balanced detector-based receiver, yielding a BER below 10⁻⁹. We also present subsystems making serial optical Tbit/s systems compatible with standard Ethernet data for data centre applications, and present Tbit/s results using, for instance, silicon nanowires.
Many digital control circuits in the current literature are described using analog transmittances. This is not always acceptable, especially when the sampling frequency and the power transistor switching frequencies are close to the band of interest. In such cases the circuit should be treated as a digital controller rather than an analog one, which helps avoid errors and instability in high-frequency components. Digital Signal Processing in Power Electronics Control Circuits covers problems concerning the design and realization of digital control algorithms for power electronics circuits using
Studying early interactions is a core issue of infant development and psychopathology. Automatic social signal processing theoretically offers the possibility of extracting and analysing communication from an integrative perspective, considering the multimodal nature and dynamics of behaviours (including synchrony). This paper proposes an explorative method to acquire and extract relevant social signals from a naturalistic early parent-infant interaction. An experimental setup is proposed based on both clinical and technical requirements. We extracted various cues from body postures and speech productions of partners using the IMI2S (Interaction, Multimodal Integration, and Social Signal) Framework. Preliminary clinical and computational results are reported for two dyads (one pathological, in a situation of severe emotional neglect, and one normal control) as an illustration of our cross-disciplinary protocol. The results from both clinical and computational analyses highlight similar differences: the pathological dyad shows a dyssynchronous interaction led by the infant, whereas the control dyad shows a synchronous interaction and a smooth interactive dialogue. These results suggest that the current method is promising for future studies.
Gibbon, Timothy Braidwood; Yu, Xianbin; Tafur Monroy, Idelfonso
The generation of photonic ultra-wideband (UWB) impulse signals using an uncooled distributed-feedback laser is proposed. For the first time, we experimentally demonstrate bit-for-bit digital signal processing (DSP) bit-error-rate measurements for transmission of a 781.25-Mb/s photonic UWB signal...
Tavan, P; Grubmüller, H; Kühnel, H
We extend the neural concepts of topological feature maps towards self-organization of auto-associative memory and hierarchical pattern classification. As is well known, topological maps for statistical data sets store information on the associated probability densities. To extract that information we introduce a recurrent dynamics of signal processing. We show that the dynamics converts a topological map into an auto-associative memory for real-valued feature vectors which is capable of performing a cluster analysis. The neural network scheme thus developed represents a generalization of non-linear matrix-type associative memories. The results naturally lead to the concept of a feature atlas and an associated scheme of self-organized, hierarchical pattern classification.
Cheron, Guy; Leroy, Axelle; Palmero-Soler, Ernesto; De Saedeleer, Caty; Bengoetxea, Ana; Cebolla, Ana-Maria; Vidal, Manuel; Dan, Bernard; Berthoz, Alain; McIntyre, Joseph
Visual perception is not only based on incoming visual signals but also on information about a multimodal reference frame that incorporates vestibulo-proprioceptive input and motor signals. In addition, top-down modulation of visual processing has previously been demonstrated during cognitive operations including selective attention and working memory tasks. In the absence of a stable gravitational reference, the updating of salient stimuli becomes crucial for successful visuo-spatial behavior by humans in weightlessness. Here we found that visually-evoked potentials triggered by the image of a tunnel just prior to an impending 3D movement in a virtual navigation task were altered in weightlessness aboard the International Space Station, while those evoked by a classical 2D-checkerboard were not. Specifically, the analysis of event-related spectral perturbations and inter-trial phase coherency of these EEG signals recorded in the frontal and occipital areas showed that phase-locking of theta-alpha oscillations was suppressed in weightlessness, but only for the 3D tunnel image. Moreover, analysis of the phase of the coherency demonstrated the existence on Earth of a directional flux in the EEG signals from the frontal to the occipital areas mediating a top-down modulation during the presentation of the image of the 3D tunnel. In weightlessness, this fronto-occipital, top-down control was transformed into a diverging flux from the central areas toward the frontal and occipital areas. These results demonstrate that gravity-related sensory inputs modulate primary visual areas depending on the affordances of the visual scene.
With the increasing areal density in magnetic recording systems, perpendicular recording has replaced longitudinal recording to overcome the superparamagnetic limit. Studies on perpendicular recording channels including aspects of channel modeling, signal processing and coding techniques are presented in this dissertation. To optimize a high density perpendicular magnetic recording system, one needs to know the tradeoffs between various components of the system including the read/write transducers, the magnetic medium, and the read channel. We extend the work by Chaichanavong on the parameter optimization for systems via design curves. Different signal processing and coding techniques are studied. Information-theoretic tools are utilized to determine the acceptable region for the channel parameters when optimal detection and linear coding techniques are used. Our results show that a considerable gain can be achieved by the optimal detection and coding techniques. The read-write process in perpendicular magnetic recording channels includes a number of nonlinear effects. Nonlinear transition shift (NLTS) is one of them. The signal distortion induced by NLTS can be reduced by write precompensation during data recording. We numerically evaluate the effect of NLTS on the read-back signal and examine the effectiveness of several write precompensation schemes in combating NLTS in a channel characterized by both transition jitter noise and additive white Gaussian electronics noise. We also present an analytical method to estimate the bit-error-rate and use it to help determine the optimal write precompensation values in multi-level precompensation schemes. We propose a mean-adjusted pattern-dependent noise predictive (PDNP) detection algorithm for use on the channel with NLTS. We show that this detector can offer significant improvements in bit-error-rate (BER) compared to conventional Viterbi and PDNP detectors. Moreover, the system performance can be further improved by
Evaristo, Ronaldo M.; Batista, Antonio M.; Viana, Ricardo L.; Iarosz, Kelly C.; Szezech, José D., Jr.; Godoy, Moacir F. de
The cardiovascular system is composed of the heart, blood and blood vessels. Regarding the heart, cardiac conditions are determined by the electrocardiogram, a noninvasive medical procedure. In this work, we introduce an autoregressive process into a mathematical model based on coupled differential equations in order to obtain the tachograms and electrocardiogram signals of young adults with normal heartbeats. Our results are compared with experimental tachograms by means of the Poincaré plot and detrended fluctuation analysis. We verify that the results from the model with the autoregressive process show good agreement with experimental measures from tachograms generated by the electrical activity of the heartbeat. From the tachogram we build the electrocardiogram by means of coupled differential equations.
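The Poincaré plot comparison mentioned above is commonly summarized by the SD1/SD2 descriptors of the RR-interval scatter. The sketch below uses the standard textbook definitions (assumed, not taken from the paper); detrended fluctuation analysis is omitted for brevity.

```python
import numpy as np

def poincare_sd(rr):
    """SD1/SD2 descriptors of the Poincaré plot (RR[n] vs RR[n+1]):
    SD1 is the spread perpendicular to the identity line (short-term
    variability), SD2 the spread along it (long-term variability)."""
    x, y = rr[:-1], rr[1:]
    sd1 = np.std((y - x) / np.sqrt(2), ddof=1)
    sd2 = np.std((y + x) / np.sqrt(2), ddof=1)
    return sd1, sd2
```

A strictly alternating tachogram has all its variability beat-to-beat, so SD1 is large while SD2 vanishes; a constant tachogram gives zero for both.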
This thesis deals with the investigation of InP material based photonic crystal cavity membrane structures, both experimentally and theoretically. The work emphasizes the understanding of the physics underlying the structures' nonlinear properties and their applications for all-optical signal processing. Based on the previous fabrication recipe developed in our III-V platform, several processing techniques are developed and optimized for the fabrication of InP photonic crystal membrane structures. Several key issues are identified to ensure good device quality, such as air hole size control, membranization of the InP/InGaAs structure, and wet etching. Experimental investigation of the switching dynamics of InP photonic crystal nanocavity structures is carried out using short-pulse homodyne pump-probe techniques, both in the linear and nonlinear regimes where the cavity is perturbed by a relatively small...
Constam, Daniel B
Secreted cytokines of the TGFβ family are found in all multicellular organisms and implicated in regulating fundamental cell behaviors such as proliferation, differentiation, migration and survival. Signal transduction involves complexes of specific type I and II receptor kinases that induce the nuclear translocation of Smad transcription factors to regulate target genes. Ligands of the BMP and Nodal subgroups act at a distance to specify distinct cell fates in a concentration-dependent manner. These signaling gradients are shaped by multiple factors, including proteases of the proprotein convertase (PC) family that hydrolyze one or several peptide bonds between an N-terminal prodomain and the C-terminal domain that forms the mature ligand. This review summarizes information on the proteolytic processing of TGFβ and related precursors, and its spatiotemporal regulation by PCs during development and various diseases, including cancer. Available evidence suggests that the unmasking of receptor binding epitopes of TGFβ is only one (and in some cases a non-essential) function of precursor processing. Future studies should consider the impact of proteolytic maturation on protein localization, trafficking and turnover in cells and in the extracellular space. Copyright © 2014 The Author. Published by Elsevier Ltd. All rights reserved.
The acoustic data remotely measured by hand-held microphones are investigated for monitoring and diagnosing rotating machine integrity in nuclear power plants. The plant operator's patrol monitoring is one of the important activities for condition monitoring. However, remotely measured sound presents some difficulties for precise diagnosis or quantitative judgment of rotating machine anomalies, since the measurement sensitivity differs between measurements and is lower than that of an attached sensor. Hence, in the present study, several advanced signal processing methods are examined and compared in order to find the optimum anomaly monitoring technology from the viewpoints of both sensitivity and robustness of performance. The dimension of the pre-processed signal feature patterns is reduced to a two-dimensional space for visualization by using standard principal component analysis (PCA) or kernel-based PCA. Then, the normal state is classified by using a probabilistic neural network (PNN) or support vector data description (SVDD). Using a mockup test facility with a rotating machine, it is shown that an appropriate combination of the above algorithms gives sensitive and robust anomaly monitoring performance. (author)
Peyghambarian, N.; Gibbs, H. M.
In this paper we present the basic principles of optical bistability and summarize the current advances in semiconductor optical switching, with emphasis on recent results in GaAs, CuCl, InAs, InSb, CdS, ZnS, and ZnSe etalons. These devices have great potential for applications involving optical signal processing and computing. As an example, we discuss the use of arrays of bistable devices for parallel optical processing and for addressable spatial light modulators. The use of nonlinear etalons as optical gates is also illustrated. To date, GaAs devices have shown the most favorable characteristics for practical applications. They operate at room temperature with a few milliwatts of power using a laser diode as the only light source. Quasi-cw operation and optical fiber signal regeneration have also been demonstrated. A GaAs NOR gate operates in 1 ps with <3 pJ incident energy; this, of course, implies a 1 ps switch-on time for a bistable etalon.
Advances in embedded systems for digital signal processing (DSP) are enabling many scientific projects and commercial applications. At the same time, these applications are key to driving advances in many important kinds of computing platforms. In this region of high performance DSP, rapid prototyping is critical for faster time-to-market (e.g., in the wireless communications industry) or time-to-science (e.g., in radio astronomy). DSP system architectures have evolved from being based on application specific integrated circuits (ASICs) to incorporate reconfigurable off-the-shelf field programmable gate arrays (FPGAs), the latest multiprocessors such as graphics processing units (GPUs), or heterogeneous combinations of such devices. We, thus, have a vast design space to explore based on performance trade-offs, and expanded by the multitude of possibilities for target platforms. In order to allow systematic design space exploration, and develop scalable and portable prototypes, model based design tools are increasingly used in design and implementation of embedded systems. These tools allow scalable high-level representations, model based semantics for analysis and optimization, and portable implementations that can be verified at higher levels of abstractions and targeted toward multiple platforms for implementation. The designer can experiment using such tools at an early stage in the design cycle, and employ the latest hardware at later stages. In this thesis, we have focused on dataflow-based approaches for rapid DSP system prototyping. This thesis contributes to various aspects of dataflow-based design flows and tools as follows: 1. We have introduced the concept of topological patterns, which exploits commonly found repetitive patterns in DSP algorithms to allow scalable, concise, and parameterizable representations of large scale dataflow graphs in high-level languages. We have shown how an underlying design tool can systematically exploit a high
van Donk, DP; van Wezel, W; Gaalman, G; Bititci, US; Carrie, AS
Food processing industries cope with a specific production process and a dynamic market. Scheduling the production process is thus important in being competitive. This paper proposes a hierarchical concept for structuring the scheduling and describes the (computer) support needed for this concept.
Wang, Wei; Mu, Jiasong; Liang, Jing; Zhang, Baoju; Pi, Yiming; Zhao, Chenglin
Communications, Signal Processing, and Systems is a collection of contributions coming out of the International Conference on Communications, Signal Processing, and Systems (CSPS) held in October 2012. This book presents state-of-the-art developments in Communications, Signal Processing, and Systems, and their interactions in multidisciplinary fields such as the Smart Grid. The book also examines Radar Systems, Sensor Networks, Radar Signal Processing, and the Design and Implementation of Signal Processing Systems and Applications. It is written by experts and students in the fields of Communications, Signal Processing, and Systems.
Lane, Rod; Coutts, Pamela
While Shulman argues that an important component of pedagogical content knowledge (PCK) is teachers' understanding of the alternative conceptions commonly held by students, relatively little is known about what students believe about many topics in the school curriculum. This paper focuses on a content area typically featured in Geography…
Galili, Michael; Guan, Pengyu; Lillieholm, Mads
In the talk, we will review recent work on optical signal processing based on time lenses. Various applications of optical Fourier transformation for optical communications will be discussed.
Glenn, William E. (Inventor)
A method for generating color video signals representative of color images of a scene includes the following steps: focusing light from the scene on an electronic image sensor via a filter having a tri-color filter pattern; producing, from outputs of the sensor, first and second relatively low resolution luminance signals; producing, from outputs of the sensor, a relatively high resolution luminance signal; producing, from a ratio of the relatively high resolution luminance signal to the first relatively low resolution luminance signal, a high band luminance component signal; producing, from outputs of the sensor, relatively low resolution color component signals; and combining each of the relatively low resolution color component signals with the high band luminance component signal to obtain relatively high resolution color component signals.
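The combining step in the patent abstract (multiplying each low-resolution color component by the ratio of high- to low-resolution luminance) can be illustrated numerically. The function name, the epsilon guard against division by zero, and the array shapes are illustrative assumptions.

```python
import numpy as np

def sharpen_color(low_res_color, low_res_luma, high_res_luma, eps=1e-6):
    """Combine a low-resolution colour component with the high-band
    luminance component: the ratio of high- to low-resolution
    luminance carries the fine spatial detail, and multiplying it
    into the colour component yields a high-resolution colour signal,
    as the abstract describes."""
    high_band = high_res_luma / (low_res_luma + eps)
    return low_res_color * high_band
```

Where the two luminance signals agree (no fine detail), the color components pass through unchanged; where the high-resolution luminance differs, the colors are modulated accordingly.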
Adirelle C. Santana
This article discusses the development of a three-axis attitude digital controller for an artificial satellite using a digital signal processor. The main motivation of this study is the attitude control system of the Multi-Mission Platform satellite, developed by the Brazilian National Institute for Space Research for application in different sorts of missions. The controller design was based on the theory of the Linear Quadratic Gaussian Regulator, synthesized from the linearized model of the motion of the satellite, i.e., the kinematics and dynamics of attitude. The attitude actuators considered in this study are pairs of cold gas jets driven by a pulse-width/pulse-frequency modulator. In the first stage of the project, a continuous-time controller was studied with the aim of testing the adequacy of the adopted control approach. The next steps included an analysis of discretization techniques, the setting of the sampling rate, and the testing of the digital version of the Linear Quadratic Gaussian Regulator controller in MATLAB/SIMULINK. To complete the study, the controller was implemented in a digital signal processor, specifically the Blackfin BF537 from Analog Devices, along with the pulse-width/pulse-frequency modulator. The validation tests used a co-simulation scheme, where the model of the satellite was simulated in MATLAB/SIMULINK while the controller and modulator were executed on the digital signal processor with a tool called Processor-In-the-Loop, which acted as a data communication link between both environments.
Kenny, R. Jeremy; Lee, Erik; Hulka, James R.; Casiano, Matthew
The J2X Gas Generator engine design specifications include dynamic, spontaneous, and broadband combustion stability requirements. These requirements are verified empirically based on high-frequency chamber pressure measurements and analyses. Dynamic stability is determined from the dynamic pressure response due to an artificial perturbation of the combustion chamber pressure (bomb testing), and spontaneous and broadband stability are determined from the dynamic pressure responses during steady operation starting at specified power levels. J2X Workhorse Gas Generator testing included bomb tests with multiple hardware configurations and operating conditions, including a configuration used explicitly for the engine verification test series. This work covers signal processing techniques developed at Marshall Space Flight Center (MSFC) to help assess engine design stability requirements. Dynamic stability assessments were performed following both the CPIA 655 guidelines and an MSFC in-house statistical approach. The statistical approach was developed to better verify when the dynamic pressure amplitudes corresponding to a particular frequency returned to pre-bomb characteristics. This was accomplished by first determining the statistical characteristics of the pre-bomb dynamic levels. The pre-bomb statistical characterization provided 95% coverage bounds; these bounds were used as a quantitative measure to determine when the post-bomb signal returned to pre-bomb conditions. The time for post-bomb levels to acceptably return to pre-bomb levels was compared to the dominant frequency-dependent time recommended by CPIA 655. Results for multiple test configurations, including stable and unstable configurations, were reviewed. Spontaneous stability was assessed using two processes: 1) characterization of the ratio of the peak response amplitudes to the excited chamber acoustic mode amplitudes and 2) characterization of the variability of the peak response
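The coverage-bound recovery test described above can be sketched as follows. The percentile bounds, the consecutive-sample run length, and the settling rule are illustrative assumptions standing in for MSFC's actual statistical characterization.

```python
import numpy as np

def coverage_bounds(pre_bomb, coverage=0.95):
    """Empirical bounds covering `coverage` of the pre-bomb amplitude
    samples, a simple percentile version of the 95% coverage bounds
    mentioned in the abstract."""
    lo = np.quantile(pre_bomb, (1 - coverage) / 2)
    hi = np.quantile(pre_bomb, 1 - (1 - coverage) / 2)
    return lo, hi

def recovery_index(post_bomb, lo, hi, run=32):
    """First index after which `run` consecutive post-bomb samples sit
    inside the pre-bomb bounds, taken here as the return to pre-bomb
    conditions; returns -1 if the signal never settles."""
    inside = (post_bomb >= lo) & (post_bomb <= hi)
    count = 0
    for i, ok in enumerate(inside):
        count = count + 1 if ok else 0
        if count == run:
            return i - run + 1
    return -1
```

The recovery index would then be compared against the frequency-dependent recovery time recommended by CPIA 655.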
Arab, Mohammad Reza; Suratgar, Amir Abolfazl; Ashtiani, Alireza Rezaei
In this study, topographic brain mapping and a wavelet transform-neural network method are used for the classification of grand mal (clonic stage) and petit mal (absence) epilepsy EEGs into healthy, ictal and interictal classes. Preprocessing is included to remove artifacts caused by blinking, baseline wander (electrode movement) and eyeball movement using the Discrete Wavelet Transform (DWT). De-noising the EEG signals of the AC power supply frequency with a suitable notch filter is another job of the preprocessing. On the experimental data, the preprocessing enhanced the speed and accuracy of the processing stage (wavelet transform and neural network). The EEG signals were categorized as normal, petit mal, and clonic epilepsy by an expert neurologist. The categorization was confirmed by Fast Fourier Transform (FFT) analysis and brain mapping. The dataset includes waves such as sharp, spike and spike-slow waves. Through the Continuous Wavelet Transform (CWT) of the EEG records, transient features are accurately captured and separated and used as classifier input. We introduce a two-stage classifier based on the Learning Vector Quantization (LVQ) neural network, operating in both time and frequency contexts. Brain mapping is used to localize the epileptic focus in the brain. The simulation results are very promising, and the accuracy of the proposed classifier on experimental clinical data is ∼80%. Copyright © 2010 Elsevier Ltd. All rights reserved.
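The mains-frequency notch filtering mentioned in the preprocessing step can be sketched with a textbook second-order IIR notch (zeros on the unit circle at the interference frequency, poles just inside it). The pole radius and the direct-form recursion are generic assumptions, not the paper's specific filter.

```python
import numpy as np

def notch_filter(x, f0, fs, r=0.98):
    """Second-order IIR notch suppressing interference at f0 Hz:
    zeros at e^{±j w0} null the tone, poles at radius r keep the
    rest of the band nearly flat. Implemented as a direct-form
    difference equation."""
    w0 = 2 * np.pi * f0 / fs
    b = np.array([1.0, -2 * np.cos(w0), 1.0])           # feed-forward
    a = np.array([1.0, -2 * r * np.cos(w0), r * r])      # feedback
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = (b[0] * x[n]
                + (b[1] * x[n - 1] if n >= 1 else 0.0)
                + (b[2] * x[n - 2] if n >= 2 else 0.0)
                - (a[1] * y[n - 1] if n >= 1 else 0.0)
                - (a[2] * y[n - 2] if n >= 2 else 0.0))
    return y
```

On a mixture of a 10 Hz EEG-band tone and 50 Hz mains interference, the filter removes the 50 Hz line while leaving the 10 Hz component essentially untouched.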
Optical processors potentially have a major advantage over electronic processors because of their tremendous bandwidth. Massive parallelism is another inherent advantage of optical processors. However, it is traditionally demonstrated with free space components and seldom used for integrated optical signal processing. In this thesis, we consider spatial domain signal processing in guided wave structures, which brings a new dimension to the existing serial signal processing architecture and takes advantage of the parallelism in optics. A novel class of devices using holograms in multimode channel waveguides is developed in this work. Linear optical signal processing using multimode waveguide holograms (MWHs) is analyzed. We focus on discrete unitary transformations to take advantage of the discrete nature of modes in multimode waveguides. We prove that arbitrary unitary transformations can be performed using holograms in multimode waveguides. A model using the wide-angle beam propagation method (WA-BPM) is developed to simulate the devices and shows good agreement with the theory. The design principle of MWH devices is introduced. Based on this design principle, BPM models are used to design several devices including a mode-order converter, a Hadamard transformer, and an optical pattern generator/correlator. Optical pattern generators are fabricated to verify the theory and the model, and the bandwidth and fabrication tolerance of MWH devices are analyzed. We also examine nonlinear optical switches, which allow the integration of MWHs into modern optical communication networks. A simple optical setup using an imaged 2-D phase grating is developed for characterization of the complex third-order nonlinearity chi(3) to identify suitable nonlinear materials for integrated optical switches. This technique provides a reliable way to characterize chi(3) as new materials are constantly being developed. Finally, we demonstrate the concept of optical switching using
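As a small numerical aside to the Hadamard transformer mentioned above: the Sylvester-type Hadamard matrix, once normalized, is unitary, which is what makes it realizable as a lossless optical transformation. The construction below is the standard one (assumed; the optical implementation via waveguide holograms is of course not modeled).

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of the order-2**n Hadamard matrix,
    normalized so the result is unitary: H1 = [[1,1],[1,-1]]/sqrt(2),
    and each step doubles the order via the block pattern
    [[H, H], [H, -H]]."""
    H = np.array([[1.0]])
    for _ in range(n):
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(2.0) ** n
```

Every entry has the same magnitude, so the transform spreads any single-mode input uniformly over all output modes, one reason it is attractive for optical pattern generation.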
Yovany Álvarez García
This article discusses the different concepts used in the methodological appreciation of photography. Since photography is one of the manifestations of the visual arts with which we most commonly interact daily, found in books, magazines and other publications, the article reviews various methodologies for assessing the photographic image. It also addresses the classic themes of photography as well as some of its expressive elements.
Edmonson, William W.; Tucker, Jerry
different adaptive noise cancellation algorithms and provide an operational prototype to understand the behavior of the system under test. DSP software was required to interface the processor with the data converters using interrupt routines. The goal is to build a complete ANC system that can be placed on a flexible circuit with added memory circuitry that also contains the power supply, sensors and actuators. This work on the digital signal processing system for active noise reduction was completed in collaboration with another ASEE Fellow, Dr. Jerry Tucker from Virginia Commonwealth University, Richmond, VA.
In real applications, most aerial targets are moving. In this paper, an effective multiple-subband coherent processing method is proposed for moving targets. First, an echo signal model of a moving target based on the geometrical theory of diffraction is established and the influence of velocity on the range profile of the target is analyzed. Second, a method based on the minimum entropy principle is used to compensate the velocity. Then, incoherent factors, including a quadratic phase term, a linear phase factor, a fixed factor, and an amplitude difference term, are analyzed. Efficient methods are applied to estimate the incoherent factors, except for the quadratic term, which is small enough to be ignored. Finally, the feasibility and performance of the proposed method are investigated through numerical simulation.
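The minimum-entropy velocity compensation step can be sketched as a grid search: for each candidate velocity, remove the motion-induced phase across the frequency steps and keep the velocity whose range profile has the lowest entropy (sharpest focus). The stepped-frequency signal model and the search grid below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def profile_entropy(profile):
    """Shannon entropy of the normalized range-profile power: a well
    focused profile concentrates energy in few range cells and so
    has low entropy."""
    p = np.abs(profile) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def compensate_velocity(echo, freqs, T, v_grid, c=3e8):
    """Minimum-entropy velocity search for a stepped-frequency echo:
    for each candidate v, cancel the assumed motion phase
    4*pi*f_n*v*n*T/c, form the range profile by IFFT, and keep the
    velocity minimizing its entropy."""
    n = np.arange(len(echo))
    best_v, best_h = v_grid[0], np.inf
    for v in v_grid:
        comp = echo * np.exp(-1j * 4 * np.pi * freqs * v * n * T / c)
        h = profile_entropy(np.fft.ifft(comp))
        if h < best_h:
            best_v, best_h = v, h
    return best_v
```

For a simulated point target the search recovers the true velocity, since only the correct compensation collapses the profile to a single range cell.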
In contemporary cochlear implant systems, the audio signal is decomposed into different frequency bands, each assigned to one electrode. Thus, pitch perception is limited by the number of physical electrodes implanted in the cochlea and by the wide bandwidth assigned to each electrode. The Harmony HiResolution bionic ear (Advanced Bionics LLC, Valencia, CA, USA) has the capability of creating virtual spectral channels through simultaneous delivery of current to pairs of adjacent electrodes. By steering the locus of stimulation to sites between the electrodes, additional pitch percepts can be generated. Two new sound processing strategies based on current steering have been designed, SpecRes and SineEx. In a chronic trial, speech intelligibility, pitch perception, and subjective appreciation of sound were compared between the two current steering strategies and the standard HiRes strategy in 9 adult Harmony users. There was considerable variability in benefit, and the mean results show similar performance with all three strategies.
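The current-steering idea above amounts to splitting the total current between two adjacent electrodes with a steering coefficient. The sketch below is schematic: the parameter names and the channel-count formula are illustrative assumptions, not Advanced Bionics' actual parameterization.

```python
def steer_current(total_current, alpha):
    """Simultaneous delivery to a pair of adjacent electrodes: a
    steering coefficient alpha in [0, 1] splits the total current so
    the perceived locus of stimulation moves between the two physical
    contacts (alpha = 0 -> first electrode only, alpha = 1 -> second
    electrode only)."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return (1.0 - alpha) * total_current, alpha * total_current

def virtual_channels(n_electrodes, steps_per_pair):
    """Number of distinct stimulation sites when each adjacent pair
    supports `steps_per_pair` steering steps between its contacts."""
    return (n_electrodes - 1) * steps_per_pair + 1
```

With, say, a 16-electrode array and 8 steering steps per pair, the scheme yields far more pitch percepts than the 16 physical contacts alone.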
Kelly, Jeffrey J.; Wilson, Mark R.
Narrow-band spectra characterizing jet noise are constructed from flyover acoustic measurements. Radar and C-band tracking systems provided the aircraft position histories, which enabled the directivity and smear angles from the aircraft to each microphone to be computed. These angles are based on source emission time and thus give some idea of the directivity of the radiated sound field due to jet noise. Simulated spectra are included in the paper to demonstrate spectral broadening due to the smear angle. The acoustic data described in the study have applications in community noise analysis, noise source characterization and validation of prediction models. Both broadband shock noise and turbulent mixing noise are observed in the spectra. A detailed description of the signal processing procedures is provided.
Lai Longwei; Yi Xing; Leng Yongbin; Yan Yingbing; Chen Zhichu
Based on turn-by-turn (TBT) signal processing, the paper emphasizes the optimization of system timing and the implementation of digital automatic gain control and slow application (SA) modules. Beam position data including TBT, fast application (FA) and SA data can be acquired. On-line evaluation at the Shanghai Synchrotron Radiation Facility (SSRF) shows that the processor is able to acquire multi-rate position data which contain true beam movements. When the storage ring is filled with 500 bunches at 174 mA, the resolutions of the TBT, FA and SA data reach 0.84, 0.44 and 0.23 μm respectively. These results prove that the design meets the performance requirements. (authors)
The aim of the book is to give an accessible introduction to mathematical models and signal processing methods in speech and hearing sciences for senior undergraduate and beginning graduate students with basic knowledge of linear algebra, differential equations, numerical analysis, and probability. Speech and hearing sciences are fundamental to numerous technological advances of the digital world in the past decade, from music compression in MP3 to digital hearing aids, from network based voice enabled services to speech interaction with mobile phones. Mathematics and computation are intimately related to these leaps and bounds. On the other hand, speech and hearing are strongly interdisciplinary areas where dissimilar scientific and engineering publications and approaches often coexist and make it difficult for newcomers to enter.
Zhang, Hui; Braun, Simon G.
Smart phones have changed not only the mobile phone market but also our society during the past few years. Could the next intelligent device be the vehicle? Judging by the visibility, in all media, of the numerous attempts to develop autonomous vehicles, this is certainly one of the logical outcomes. Smart vehicles would be equipped with an advanced operating system such that the vehicles could communicate with others, optimize their operation to reduce fuel consumption and emissions, enhance safety, or even become self-driving. These combined new features of vehicles require instrumentation and hardware developments, fast signal processing/fusion, decision making and online optimization. Meanwhile, the inevitable increase in system complexity will certainly challenge the control unit design.
Shynk, John J
Probability, Random Variables, and Random Processes is a comprehensive textbook on probability theory for engineers that provides a more rigorous mathematical framework than is usually encountered in undergraduate courses. It is intended for first-year graduate students who have some familiarity with probability and random variables, though not necessarily of random processes and systems that operate on random signals. It is also appropriate for advanced undergraduate students who have a strong mathematical background. The book has the following features: Several app
In Plasma Facing Components (PFCs), the joint of the CFC armour material onto the metallic CuCrZr heat sink needs to be free of significant defects. Detection of material flaws is a major issue in the PFC acceptance protocol. A Non-Destructive Technique (NDT) based upon active infrared thermography allows testing PFCs on the SATIR test bed in Cadarache. Up to now, defect detection was based on comparing the surface temperature evolution of the inspected component with that of a supposedly 'defect-free' one (used as a reference element). This work deals with improving the processing of thermal signals coming from SATIR; in particular, the contributions of thermal modelling and statistical signal processing converge here. As for thermal modelling, the identification of a parameter sensitive to defect presence improves the quantitative estimation of defects. In addition, Finite Element (FE) modelling of SATIR allows calculating the so-called deterministic numerical tile, and a statistical approach via the Monte Carlo technique extends the numerical tile concept to a numerical population concept. As for signal processing, traditional statistical treatments allow a better localization of the bond defect by processing the thermo-signal by itself, without using a reference signal. Moreover, the problem of detection and classification of random signals can be solved by maximizing the signal-to-noise ratio. Two filters maximizing the signal-to-noise ratio are optimized: the stochastic matched filter, which aims at defect detection, and the constrained stochastic matched filter, which aims at defect classification. Performances are quantified and methods are compared via ROC curves. (author)
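The stochastic matched filter mentioned above is a specialised variant; as background, here is a minimal sketch of the classical matched filter for a known signature buried in white noise. The waveform, noise level and signal position are invented for illustration and are not taken from the SATIR setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n, pos = 2000, 700
t64 = np.arange(64)
template = np.hanning(64) * np.sin(2 * np.pi * 0.1 * t64)   # known signature
x = 0.2 * rng.standard_normal(n)                            # white noise
x[pos : pos + 64] += template                               # buried signal

# For white noise, the SNR-maximizing filter is the template itself,
# i.e. correlation of the observation with the known signature.
y = np.correlate(x, template, mode="valid")
est = int(np.argmax(np.abs(y)))
print(est)   # near 700
```

The peak of the correlator output locates the signature; extending this to signals known only statistically (via their covariance) leads to the stochastic matched filter the abstract refers to.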
Rosen, Paul A.
Discusses: (1) JPL Radar Overview and Historical Perspective (2) Signal Processing Needs in Earth and Planetary Radars (3) Examples of Current Systems and techniques (4) Future Perspectives in signal processing for radar missions
National Aeronautics and Space Administration — Analog probability processing technology has the ability to provide game-changing performance advances and power savings for on-board data processing applications....
Gao Ge; Wu Bingzhe
The article introduces techniques for detecting and processing random signals in isotope measurement to obtain the energy-level region atlas. It describes the circuit design for inspecting superposed signals and the method of multichannel analysis. (authors)
Salim, Arwa; Crockett, Louise; McLean, John; Milne, Peter
Highlights: ► The development of a new digital signal processing platform is described. ► The system will allow users to configure the real-time signal processing through software routines. ► The architecture of the DRUID system and signal processing elements is described. ► A prototype of the DRUID system has been developed for the digital chopper-integrator. ► The results of acquisition on 96 channels at 500 kSamples/s per channel are presented. - Abstract: Real-time signal processing in plasma fusion experiments is required for control and for data reduction as plasma pulse times grow longer. The development time and cost for these high-rate, multichannel signal processing systems can be significant. This paper proposes a new digital signal processing (DSP) platform for the data acquisition system that will allow users to easily customize real-time signal processing systems to meet their individual requirements. The D-TACQ reconfigurable user in-line DSP (DRUID) system carries out the signal processing tasks in hardware co-processors (CPs) implemented in an FPGA, with an embedded microprocessor (μP) for control. In the fully developed platform, users will be able to choose co-processors from a library and configure programmable parameters through the μP to meet their requirements. The DRUID system is implemented on a Spartan 6 FPGA, on the new rear transition module (RTM-T), a field upgrade to existing D-TACQ digitizers. As proof of concept, a multiply-accumulate (MAC) co-processor has been developed, which can be configured as a digital chopper-integrator for long pulse magnetic fusion devices. The DRUID platform allows users to set options for the integrator, such as the number of masking samples. Results from the digital integrator are presented for a data acquisition system with 96 channels simultaneously acquiring data at 500 kSamples/s per channel.
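The role of the integrator's masking samples can be loosely illustrated as follows: a baseline estimated from pre-pulse samples is subtracted before numerical integration, so that amplifier offset does not accumulate into drift. The actual DRUID chopper-integrator runs in FPGA co-processors and is more involved; every signal and number below is invented.

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 500_000                 # 500 kSamples/s per channel, as in the system above
n_mask = 1000                # assumed number of pre-pulse masking samples
t = np.arange(fs) / fs       # one second of data

# Invented probe signal: a step in dB/dt at t = 10 ms, plus offset and noise.
true_field = np.where(t > 0.01, 1e-3, 0.0)
x = true_field + 5e-6 + 1e-5 * rng.standard_normal(fs)

offset = x[:n_mask].mean()               # baseline from the masking samples
flux = np.cumsum(x - offset) / fs        # drift-corrected digital integral
print(round(flux[-1], 5))                # ≈ 0.00099 (the integrated step)
```

Without the offset subtraction, the 5e-6 bias would integrate to a linear drift comparable to the signal over a long pulse, which is exactly what masking-sample correction prevents.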
Mc Mahon, Siobhan S; Sim, Aaron; Filippi, Sarah; Johnson, Robert; Liepe, Juliane; Smith, Dominic; Stumpf, Michael P H
Sensing and responding to the environment are two essential functions that all biological organisms need to master for survival and successful reproduction. Developmental processes are marshalled by a diverse set of signalling and control systems, ranging from systems with simple chemical inputs and outputs to complex molecular and cellular networks with non-linear dynamics. Information theory provides a powerful and convenient framework in which such systems can be studied; but it also provides the means to reconstruct the structure and dynamics of molecular interaction networks underlying physiological and developmental processes. Here we supply a brief description of its basic concepts and introduce some useful tools for systems and developmental biologists. Along with a brief but thorough theoretical primer, we demonstrate the wide applicability and biological application-specific nuances by way of different illustrative vignettes. In particular, we focus on the characterisation of biological information processing efficiency, examining cell-fate decision making processes, gene regulatory network reconstruction, and efficient signal transduction experimental design. Copyright © 2014 Elsevier Ltd. All rights reserved.
Galili, Michael; Da Ros, Francesco; Hu, Hao
Aluminum Gallium Arsenide on insulator (AlGaAs-OI) has recently been developed into a very attractive platform for optical signal processing. This paper reviews key results of broadband optical signal processing using this platform.
Karlsen, Brian; Sørensen, Helge Bjarup Dissing; Larsen, Jan
This article briefly describes methods and results related to clutter reduction (clutter: unwanted reflected signals) in ground penetrating radar (GPR) signals using statistical signal processing methods based on Independent Component Analysis (ICA). The purpose of this form of clutter reduction is to decompose GPR signals into clutter and clutter-reduced subspaces. By using only the clutter-reduced subspaces, the clutter in GPR signals can be reduced. The methods give good results in the detection of landmines.
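The article uses ICA; a closely related subspace idea can be sketched with a plain SVD, removing the dominant (clutter) component from a synthetic GPR B-scan. This is an illustrative stand-in for subspace decomposition, not the authors' ICA method, and all geometry and amplitudes below are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n_t, n_x = 256, 64                       # time samples x antenna positions
t = np.arange(n_t)

# Ground-bounce clutter: the same strong wavelet in every trace.
wavelet = np.exp(-0.5 * ((t - 40) / 4.0) ** 2)
B = np.tile(5.0 * wavelet[:, None], (1, n_x))

# Weak buried target: its arrival time varies hyperbolically with position.
for ix in range(n_x):
    arr = int(np.hypot(120, 3 * (ix - n_x // 2)))
    B[:, ix] += 0.5 * np.exp(-0.5 * ((t - arr) / 4.0) ** 2)
B += 0.05 * rng.standard_normal((n_t, n_x))

# Subspace clutter reduction: drop the dominant (clutter) singular component.
U, s, Vt = np.linalg.svd(B, full_matrices=False)
clean = B - s[0] * np.outer(U[:, 0], Vt[0])

ratio_before = np.abs(B[40]).max() / np.abs(B[120]).max()
ratio_after = np.abs(clean[40]).max() / np.abs(clean[120]).max()
print(ratio_before > 1, ratio_after < 1)   # clutter dominated before, target after
```

The clutter is nearly identical across traces, so it concentrates in one dominant component; ICA replaces the orthogonal SVD basis with statistically independent components, which handles less idealized clutter.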
To achieve automatic modeling of plant disturbances and failure limitation procedures, the system's hardware and the media present (water, steam, coolant fluid) are first formalized into fully computable matrices, called topographies. Secondly, a microscopic cellular automaton model, using lattice gases and state transition rules, is combined with a semi-microscopic cellular process model and with a macroscopic model. At the semi-microscopic level, a cellular data compressor, a feature detection device and the Intelligent Physical Element's process dynamics act; at the macroscopic level, the Walking Process Elements, a process-evolving module, a test-and-manage device and an abstracting process net are involved. Additionally, a diagnosis-coordinating and a countermeasure-coordinating device are used. In order to gain process insights automatically, object transformations, elementary process functions and associative methods are used. Developments of optoelectronic hardware language components are under consideration
The MUltiple SIgnal Classification (MUSIC) algorithm is an implementation of the Signal Subspace Approach to provide parameter estimates of...the signal subspace (obtained from the received data) and the array manifold (obtained via array calibration). The MUSIC algorithm has been...implemented to conduct experimental demonstrations of the DF and COPY performance of the approach under scenarios and conditions generally regarded as difficult
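A minimal sketch of the MUSIC pseudospectrum for a uniform linear array may help fix ideas: the sample covariance is eigendecomposed, the noise subspace is taken from the smallest eigenvalues, and directions where the steering vector is nearly orthogonal to that subspace produce sharp peaks. The array geometry, source angles and noise level below are illustrative assumptions, not details of the demonstrations described above.

```python
import numpy as np

def music_spectrum(X, n_sources, scan_deg, spacing=0.5):
    """MUSIC pseudospectrum for a uniform linear array (spacing in wavelengths)."""
    n_sensors = X.shape[0]
    R = X @ X.conj().T / X.shape[1]              # sample covariance
    _, eigvecs = np.linalg.eigh(R)               # eigenvalues ascending
    En = eigvecs[:, : n_sensors - n_sources]     # noise subspace
    k = np.arange(n_sensors)
    P = np.empty(len(scan_deg))
    for i, theta in enumerate(np.deg2rad(scan_deg)):
        a = np.exp(2j * np.pi * spacing * k * np.sin(theta))   # steering vector
        P[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
    return P

# Synthetic scenario: two sources at -20 and +30 degrees, 8-element array.
rng = np.random.default_rng(0)
m, snaps = 8, 400
doas = np.deg2rad([-20.0, 30.0])
A = np.exp(2j * np.pi * 0.5 * np.arange(m)[:, None] * np.sin(doas))
S = rng.standard_normal((2, snaps)) + 1j * rng.standard_normal((2, snaps))
N = 0.1 * (rng.standard_normal((m, snaps)) + 1j * rng.standard_normal((m, snaps)))
X = A @ S + N

scan = np.arange(-90.0, 90.5, 0.5)
P = music_spectrum(X, 2, scan)
is_peak = (P[1:-1] > P[:-2]) & (P[1:-1] > P[2:])      # local maxima only
cand = np.flatnonzero(is_peak) + 1
est = np.sort(scan[cand[np.argsort(P[cand])[-2:]]])
print(est)   # close to [-20., 30.]
```

The two largest local maxima of the pseudospectrum recover the directions of arrival, which is the DF (direction finding) use case the abstract refers to.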
Ma, Yung-Lung; Ma, Chialo; Tu, Tsing-Yee
A moving object recognition approach is presented in this paper. The motion of an object includes linear or nonlinear translation and rotation. For a 3-D object, the images taken by a camera are planar. They vary with the distance between the camera and the object and with the angle and timing at which the pictures are taken. However, the rates of change among images taken at different instants are logically related. The brightness level between any two neighbouring string cells of a machine digital scanning raster varies according to a Markovian random walk process. Thus, the direction and position of a moving object can be found from the variations of the cell random walk. The angles between a machine digital scanning raster and the edges of an object in a planar image are called pseudo-refractional angles, and their variations can be used as features for object recognition. Together with a Kolmogorov complexity program, the probability function of the process can be converted into a finite-length string array to simplify the recognition procedure. The distance between the camera and the object can be measured by a radar or supersonic signal for military or industrial applications.
Choi, Young Chul; Yoon, Chan Hoon; Choi, Heui Joo; Park, Jong Sun
Ultrasonic thickness measurement is a non-destructive method for measuring the local thickness of a solid element, based on the time taken for an ultrasound wave to return to the surface. When an element is very thin, it is difficult to measure its thickness with the conventional ultrasonic method, because that method measures the time delay using the peak of a pulse, and the pulses overlap. To solve this problem, we propose a method for measuring thickness using the power cepstrum and the minimum variance cepstrum. Because cepstrum processing divides the ultrasound signal into an impulse train and a transfer function, where the period of the impulse train is the traversal time, the thickness can be measured exactly. To verify the proposed method, we performed experiments with steel and acrylic plates of variable thickness. The conventional method is not able to estimate the thickness because of the overlapping pulses, whereas the cepstrum-based processing, which divides a pulse into an impulse train and a transfer function, can measure the thickness exactly.
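The core of the power-cepstrum idea can be sketched in a few lines: an echo delayed by d samples adds a ripple to the log spectrum whose "quefrency" is d, so a cepstral peak recovers the traversal time even when the pulse and its echo overlap in the time domain. The pulse shape and all numbers below are invented for illustration.

```python
import numpy as np

n, delay, a = 4096, 200, 0.6             # echo delay in samples (ground truth)
t = np.arange(n)
pulse = 0.99 ** t                        # slowly decaying pulse -> echoes overlap
x = pulse.copy()
x[delay:] += a * pulse[:-delay]          # back-wall echo on top of the pulse

spec = np.abs(np.fft.rfft(x)) ** 2
cep = np.fft.irfft(np.log(spec))         # power cepstrum
q0 = 50                                  # skip the low-quefrency pulse region
est_delay = q0 + int(np.argmax(cep[q0 : n // 2]))
print(est_delay)                         # 200 -> thickness = v * est_delay / (2 * fs)
```

The smooth pulse (transfer function) concentrates at low quefrencies while the echo train appears as an isolated spike at the delay, which is exactly the separation the abstract describes.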
Byrnes, C.I.; Saeks, R.E.; Martin, C.F.
In part because of its universal role as a first approximation of more complicated behaviour, and in part because of the depth and breadth of its principal paradigms, the study of linear systems continues to play a central role in control theory and its applications. Enhancing more traditional applications to aerospace and electronics, application areas such as econometrics, finance, and speech and signal processing have contributed to a renaissance in areas such as realization theory and classical automatic feedback control. Thus, the last few years have witnessed a remarkable research effort expended in understanding both new algorithms and new paradigms for the modeling and realization of linear processes and in the analysis and design of robust control strategies. The papers in this volume reflect these trends in both the theory and applications of linear systems and were selected from the invited and contributed papers presented at the 8th International Symposium on the Mathematical Theory of Networks and Systems held in Phoenix on June 15-19, 1987
biological or toxicological meaning of the generated networks. As a whole, the author would like to outline strategies to cope with the new paradigms and to combine them to construct a more robust toxicological research system under the concept of "signal toxicity". We believe that this activity should contribute to the development of a more comprehensive, faster, cheaper (including using fewer animals), and more reliable system for the identification and prediction of toxicity for any kind of agent entering our body and environment.
Tosti, Fabio; Patriarca, Claudio; Slob, Evert; Benedetto, Andrea; Lambot, Sébastien
The mechanical behavior of soils is partly affected by their clay content, which raises important issues in many fields, such as civil and environmental engineering, geology, and agriculture. This work focuses on pavement engineering, although the method applies to other fields of interest. Clay content in the bearing courses of road pavement frequently causes damage and defects (e.g., cracks, deformations, and ruts); road safety and operability therefore decrease, directly increasing the number of expected accidents. In this study, different ground-penetrating radar (GPR) methods and techniques were used to non-destructively investigate the clay content of sub-asphalt compacted soils. The experimental layout used typical road materials employed in the construction of road bearing courses. Three types of soils, classified by the American Association of State Highway and Transportation Officials (AASHTO) as A1, A2, and A3, were adequately compacted in electrically and hydraulically isolated test boxes. Percentages of bentonite clay were gradually added, ranging from 2% to 25% by weight. Analyses were carried out for each clay content using two different GPR instruments: a pulse radar with ground-coupled antennae at 500 MHz centre frequency and a vector network analyzer spanning the 1-3 GHz frequency range. Signals were processed in both the time and frequency domains, and the consistency of the results was validated by the Rayleigh scattering method, full-waveform inversion, and signal picking techniques. Promising results were obtained for the detection of clay content affecting the bearing capacity of sub-asphalt layers.
Jardine, L.J.; Short, D.W.
The methodology used to develop conceptual designs of the engineered barrier system and waste packages for a geologic repository is based on an iterative systems engineering process. The process establishes a set of general mission requirements and then conducts detailed requirements analyses, using functional analyses, system concept syntheses, and trade studies, to develop preliminary system concept descriptions. The feasible concept descriptions are ranked based on selection factors and criteria, and a set of preferred concept descriptions is then selected for further development. For each selected concept description, a specific set of requirements, including constraints, is written to provide design guidance for the next and more detailed phase of design. The process documents all relevant waste management system requirements so that the basis and source of the specific design requirements are traceable and clearly established. Successive iterations performed during design development help to ensure that workable concepts are generated to satisfy the requirements. 4 refs., 2 figs
Full Text Available The ability to differentiate healthy from unhealthy foods is important in order to promote good health. Food, however, may have an emotional connotation, which could be inversely related to healthiness. The neurobiological background of differentiating healthy and unhealthy food and its relation to emotion processing are not yet well understood. We addressed the neural activations, particularly at the single-subject level, when one evaluates a food item to be of a higher, compared to a lower, grade of healthiness, with a particular view on emotion processing brain regions. Thirty-seven healthy subjects underwent functional magnetic resonance imaging while evaluating the healthiness of food presented as photographs, with a subsequent rating on a visual analogue scale. We compared individual evaluations of high and low healthiness of food items and also considered gender differences. We found increased activation when food was evaluated to be healthy in the left dorsolateral prefrontal cortex and precuneus in whole brain analyses. In ROI analyses, perceived and rated higher healthiness was associated with lower amygdala activity and higher ventral striatal and orbitofrontal cortex activity. Females exerted a higher activation in midbrain areas when rating food items as being healthy. Our results underline the close relationship between food and emotion processing, which makes sense considering evolutionary aspects. Actively evaluating and deciding whether food is healthy is accompanied by neural signalling associated with reward and self-relevance, which could promote salutary nutrition behaviour. The involved brain regions may be amenable to mechanisms of emotion regulation in the context of psychotherapeutic regulation of food intake.
Full Text Available The main purpose of this paper is to describe key gamification techniques that can be applied to enhance the tourist attraction visiting process. The paper is based on the methodology of design patterns; in particular, it adopts the definition and classification schemes originally proposed and developed in the context of the gamification of work to specify gamification techniques related to various aspects of the tourist attraction visiting process. The main result is the selection of twelve gamification techniques for enhancing the tourist attraction visiting process, four for each of the three phases of the visiting process (before, during and after the visit). The paper shows that gamification techniques can be applied to enhance the tourist attraction visiting process. Implementation of the proposed gamification techniques is expected both to improve the visitor experience and to give tourist attraction managers a tool for boosting interest in less popular exhibitions and events.
van de Laar, M.C.; van den Wildenberg, W.P.M.; van Boxtel, G.J.M.; van der Molen, M.W.
This paper applied Donders’ subtraction method to examine the processing of global and selective stop signals in the stop-signal paradigm. Participants performed three different versions of the stop task: a global task and two selective tasks. A global task required participants to inhibit their
The following thesis concerns pulse shaping and optical waveform manipulation for all-optical signal processing of ultra-high bit rate serial data signals, including generation of optical pulses in the femtosecond regime, serial-to-parallel conversion and terabaud coherent optical time division...
Hsueh, Ya-Hsin; Yin, Chieh; Chen, Yan-Hong
The study aimed to develop a real-time electromyography (EMG) signal acquisition and processing device that can acquire signals during electrical stimulation. Since the electrical stimulation output can affect EMG signal acquisition, to integrate the two elements into one system the EMG signal transmission and processing methods had to be modified. The whole system was designed in a user-friendly and flexible manner. For EMG signal processing, the system applied an Altera Field Programmable Gate Array (FPGA) as its core to process the real-time hybrid EMG signal and output the isolated signal in a highly efficient way. The system used the power spectral density to evaluate the accuracy of the signal processing, and the cross correlation showed that the delay of real-time processing was only 250 μs.
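Power spectral density evaluations of the kind mentioned above can be sketched with a minimal Welch estimator: averaged periodograms of overlapping, windowed segments. The sampling rate, tone frequencies and window choices below are illustrative assumptions, not the device's actual parameters.

```python
import numpy as np

def welch_psd(x, fs, nperseg=256):
    """Minimal Welch estimate: averaged Hann-windowed periodograms, 50% overlap."""
    step = nperseg // 2
    win = np.hanning(nperseg)
    scale = fs * np.sum(win ** 2)                       # density normalization
    segs = [x[i : i + nperseg] for i in range(0, len(x) - nperseg + 1, step)]
    psd = np.mean(
        [np.abs(np.fft.rfft(win * (s - s.mean()))) ** 2 for s in segs], axis=0
    ) / scale
    freqs = np.fft.rfftfreq(nperseg, 1.0 / fs)
    return freqs, psd

# Synthetic "EMG" component (80 Hz) plus a stronger 2 kHz stimulation artefact.
fs = 10_000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 80 * t) + 3 * np.sin(2 * np.pi * 2000 * t)
f, p = welch_psd(x, fs)
print(f[np.argmax(p)])   # the bin nearest the dominant 2 kHz artefact
```

Comparing such PSDs before and after artefact removal is one simple way to quantify how well a stimulation component has been suppressed.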
For the selection of radiation processes in industry, the processes are usually analyzed in terms of technological and social effects, power-insensitivity, and overall efficiency. The technological effect is generally conditioned by the uniqueness of radiation technologies, which allow one to obtain a new material, or an existing one with new properties. The social effect primarily concerns the influence of radiation technologies on consumer psychology. Implementation of equipment for a radiation technological process, for both new material production and the radiation treatment of natural materials, requires the solution of three tasks: 1) choice of the radiation source; 2) creation of special equipment for the radiation and non-traditional stages of the process; and 3) selection of radiation and other conditions ensuring the achievement of optimal technological and economic indexes
Cotzias, Constantin G.
Describes a technique that demystifies the creative process and teaches advertising students to understand that creativity is nothing more than taking something ordinary (a product) and looking at it from an extraordinary point of view (an ad). (SR)
Ong, Zhi Yi; Alhadeff, Amber L; Grill, Harvey J
Central oxytocin (OT) administration reduces food intake and its effects are mediated, in part, by hindbrain oxytocin receptor (OT-R) signaling. The neural substrate and mechanisms mediating the intake inhibitory effects of hindbrain OT-R signaling are undefined. We examined the hypothesis that hindbrain OT-R-mediated feeding inhibition results from an interaction between medial nucleus tractus solitarius (mNTS) OT-R signaling and the processing of gastrointestinal (GI) satiation signals by neurons of the mNTS. Here, we demonstrated that mNTS or fourth ventricle (4V) microinjections of OT in rats reduced chow intake in a dose-dependent manner. To examine whether the intake suppressive effects of mNTS OT-R signaling is mediated by GI signal processing, rats were injected with OT to the 4V (1 μg) or mNTS (0.3 μg), followed by self-ingestion of a nutrient preload, where either treatment was designed to be without effect on chow intake. Results showed that the combination of mNTS OT-R signaling and GI signaling processing by preload ingestion reduced chow intake significantly and to a greater extent than either stimulus alone. Using enzyme immunoassay, endogenous OT content in mNTS-enriched dorsal vagal complex (DVC) in response to ingestion of nutrient preload was measured. Results revealed that preload ingestion significantly elevated endogenous DVC OT content. Taken together, these findings provide evidence that mNTS neurons are a site of action for hindbrain OT-R signaling in food intake control and that the intake inhibitory effects of hindbrain mNTS OT-R signaling are mediated by interactions with GI satiation signal processing by mNTS neurons. Copyright © 2015 the American Physiological Society.
Mario Michael Krell; Sirko Straube; Anett Seeland; Hendrik Wöhrle; Johannes Teiwes; Jan Hendrik Metzen; Elsa Andrea Kirchner; Frank Kirchner
In neuroscience large amounts of data are recorded to provide insights into cerebral information processing and function. The successful extraction of the relevant signals becomes more and more challenging due to increasing complexities in acquisition techniques and questions addressed. Here, automated signal processing and machine learning tools can help to process the data, e.g., to separate signal and noise. With the presented software pySPACE (http://pyspace.github.io/pyspace), signal pro...
Sala i Álvarez, Josep; Vázquez Grau, Gregorio
A general criterion for the design of adaptive systems in digital communications called the statistical reference criterion is proposed. The criterion is based on imposition of the probability density function of the signal of interest at the output of the adaptive system, with its application to the scenario of highly powerful interferers being the main focus of this paper. The knowledge of the pdf of the wanted signal is used as a discriminator between signals so that i...
Arsyad Ramadhan Darlis
Full Text Available In 1992, Wornell and Oppenheim investigated a modulation scheme constructed using wavelet theory. Other studies have shown that this modulation can survive on several channels and is reliable in some applications. Because this modulation uses the concept of fractals, it is called fractal modulation. Fractal modulation is formed by embedding the information signal into fractal signals that are self-similar. This modulation technique has the potential to replace OFDM (Orthogonal Frequency Division Multiplexing), which is currently used in some of the latest telecommunication technologies. The purpose of this research is to implement the fractal communication system on a Digital Signal Processing Starter Kit (DSK) TMS320C6713 without AWGN or Rayleigh channels in order to obtain the ideal performance of the system. The simulation results in MATLAB 7.4 show that this communication system performs better on some channels than other communication systems, while regarding the implementation on the DSK TMS320C6713 via Code Composer Studio (CCS), it can be concluded that the fractal communication system has a better execution time in some tests.
Kim, Namyong; Byun, Hyung-Gi; Lim, Jeong-Ok
Distortions caused by a DC-biased laser input can be modeled as DC-biased Gaussian noise, and removing the DC bias is important in the demodulation of the electrical signal in most optical communications. In this paper, a new performance criterion and a related algorithm for unsupervised equalization are proposed for communication systems subject to channel distortions and DC-biased Gaussian noise. The proposed criterion utilizes the Euclidean distance between the Dirac delta function located at zero on the error axis and the probability density function of biased constant modulus errors, where the constant modulus error is defined as the difference between the system output and a constant modulus calculated from the transmitted symbol points. Simulations under channel models with fading and DC bias noise abruptly added to background Gaussian noise show that the proposed algorithm converges rapidly even after the interruption of the DC bias, proving that the proposed criterion can be effectively applied to optical communication systems corrupted by channel distortions and DC bias noise.
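The paper's criterion builds on the constant modulus error; as background, here is a toy sketch of the classic constant modulus algorithm (CMA) equalizing a BPSK signal through a mild channel. This is the standard baseline, not the proposed pdf-matching algorithm, and the channel, step size and filter length are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000
s = rng.choice([-1.0, 1.0], size=n)                   # BPSK symbols: modulus 1
# Mild two-tap channel plus small Gaussian noise (no DC bias in this toy run).
x = np.convolve(s, [1.0, 0.4])[:n] + 0.01 * rng.standard_normal(n)

w = np.zeros(11)
w[5] = 1.0                                            # centre-spike initialisation
mu, R2 = 1e-3, 1.0                                    # step size, target modulus
for k in range(5, n - 5):
    u = x[k - 5 : k + 6][::-1]                        # regressor, most recent first
    y = w @ u
    w -= mu * y * (y * y - R2) * u                    # CMA(2,2) stochastic gradient

# After convergence the equalizer output modulus clusters near 1.
y_tail = np.array([w @ x[k - 5 : k + 6][::-1] for k in range(n - 1005, n - 5)])
frac = float(np.mean(np.abs(np.abs(y_tail) - 1.0) < 0.2))
print(frac)   # most equalized outputs have modulus close to 1
```

CMA needs no training symbols, only the known constant modulus; the criterion proposed above replaces this dispersion cost with a distance between the error pdf and a delta at zero, which is what makes it robust to the DC bias term.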
Full Text Available Stephen J Pinney,1 Alexandra E Page,2 David S Jevsevar,3 Kevin J Bozic4 1Department of Orthopaedic Surgery, St Mary's Medical Center, San Francisco, CA, USA; 2Orthopaedic Surgery, AAOS Health Care Systems Committee, San Diego, CA, USA; 3Department of Orthopaedics, Geisel School of Medicine, Dartmouth University, Hanover, NH, USA; 4Department of Surgery and Perioperative Care, Dell Medical School at the University of Texas, Austin, TX, USAAbstract: Multiple health care stakeholders are increasingly scrutinizing musculoskeletal care to optimize quality and cost efficiency. This has led to greater emphasis on quality and process improvement. There is a robust set of business strategies that are increasingly being applied to health care delivery. These quality and process improvement tools (QPITs have specific applications to segments of, or the entire episode of, patient care. In the rapidly changing health care world, it will behoove all orthopedic surgeons to have an understanding of the manner in which care delivery processes can be evaluated and improved. Many of the commonly used QPITs, including checklist initiatives, standardized clinical care pathways, lean methodology, six sigma strategies, and total quality management, embrace basic principles of quality improvement. These principles include focusing on outcomes, optimizing communication among health care team members, increasing process standardization, and decreasing process variation. This review summarizes the common QPITs, including how and when they might be employed to improve care delivery. Keywords: clinical care pathway, musculoskeletal care, outcomes, quality management, six sigma, lean thinking
Conger, R.L. [Pacific Northwest Lab., Richland, WA (United States); Lee, V.E.; Buel, L.M. [eds.] [Pacific Northwest Lab., Richland, WA (United States)
This document is a compilation of one-page technical briefs that summarize the highlights of thirty-eight innovations that were presented at the seventh Innovative Concepts Fair, held in Denver, Colorado on April 20--21, 1995. Sixteen of the innovations were funded through the Innovative Concepts Program, and twenty-two innovations represent other state or federally funded programs. The concepts in this year's fair addressed innovations that can substantially improve industrial processes. Each tech brief describes the need for the proposed concept; the concept being proposed; and the concept's economics and market potential, key experimental results, and future development needs. A contact block is also included with each flier.
Full Text Available The article deals with the significance of the librarian's self-concept in the communication process. First, it underlines the meaning of reference interviews, and second, it focuses on other micro and macro aspects of communication. The analysis shows that the librarian's self-concept is hierarchically organised and structured, and therefore consists of different areas. Each area contributes equally to the development and preservation of an appropriate self-concept, although some areas have a more direct influence than others. To attain a high self-concept, it is of fundamental importance that the librarian develops all self-concept areas, from individual ones to social ones. A structured and positive professional self-concept was found to be highly significant for conducting effective reference interviews. Finally, the article offers some directions for the successful development of the communication process.
There are currently thousands of amateur astronomers around the world engaged in astrophotography at increasingly sophisticated levels. Their ranks far outnumber professional astronomers doing the same and their contributions both technically and artistically are the dominant drivers of progress in the field today. This book is a unique collaboration of individuals, all world-renowned in their particular area, and covers in detail each of the major sub-disciplines of astrophotography. This approach offers the reader the greatest opportunity to learn the most current information and the latest techniques directly from the foremost innovators in the field today. The book as a whole covers all types of astronomical image processing, including processing of eclipses and solar phenomena, extracting detail from deep-sky, planetary, and widefield images, and offers solutions to some of the most challenging and vexing problems in astronomical image processing. Recognized chapter authors include deep sky experts su...
T. R. Thomas; A. K. Herbst
The current baseline assumption is that packaging "as is" and direct disposal of high level waste (HLW) calcine in a Monitored Geologic Repository will be allowed. The fall-back position is to develop a stabilized waste form for the HLW calcine that will meet repository waste acceptance criteria currently in place, in case regulatory initiatives are unsuccessful. A decision between direct disposal and a stabilization alternative is anticipated by June 2006. The purposes of this Engineering Design File (EDF) are to provide a pre-conceptual design of three low temperature processes under development for stabilization of HLW calcine (i.e., the grout, hydroceramic grout, and iron phosphate ceramic processes) and to support a down-selection among the three candidates. The key assumptions for the pre-conceptual design assessment are that a) a waste treatment plant would operate over eight years for 200 days a year, b) a design processing rate of 3.67 m3/day or 4670 kg/day of HLW calcine would be needed, c) the performance of the waste form would remove the HLW calcine from the hazardous waste category, and d) the waste form loadings would range from about 21-25 wt% calcine. The conclusions of this EDF study are that: (a) To date, the grout formulation appears to be the best candidate stabilizer among the three being tested for HLW calcine and appears to be the easiest to mix, pour, and cure. (b) Only minor differences would exist between the process steps of the grout and hydroceramic grout stabilization processes. If temperature control of the mixer at about 80 °C is required, it would add a major level of complexity to the iron phosphate stabilization process. (c) It is too early in the development program to determine which stabilizer will produce the minimum amount of stabilized waste form for the entire HLW inventory, but the volume is assumed to be within the range of 12,250 to 14,470 m3. (d) The stacked vessel height of the hot process vessels
Holland, S. Douglas
The purpose of this paper is to describe systems and components of systems developed by personnel in the Signal Processing Section of the Tracking and Communications Division. The scope includes past developments that are in current use in NASA flight operations and future developments targeted for upcoming NASA applications. These projects are: (1) NASA High Definition Television (HDTV) Project, (2) Video Codecs, (3) NASA Electronic Still Camera (ESC) Project, (4) Hercules Payload, (5) Ku-band Communications Adapter (KCA), (6) Windows Drivers for Satellite Interfacing to Commercial Equipment, and (7) Advanced Statistical Multiplexers. The decision on which projects to develop in-house is based on weighing the NASA application against commercially available systems that could meet it. If a commercial-off-the-shelf (COTS) component or system is available that meets the need, the first choice is to use COTS equipment. If it is not, and there is a NASA requirement, the system is developed in-house. This results in the development of technology that would otherwise not be available. Personnel involved in these projects have been contacted by many commercial companies interested in licensing or obtaining the NASA designs.
Skytte Gørtz, Kim Erik
The chapter takes us inside Nordea Bank to look at how coaching was used to support leadership development as the bank underwent a major change implementation. Drawing on the literature on Lean processes, flow and coaching, it demonstrates some of the challenges and opportunities of working with coaching in a systematic way across broader initiatives in organizations.
Schutte, K.; Schwering, P.B.W.
The current state of the art in electro-optics offers systems with high image quality at correspondingly high prices, and less expensive systems with correspondingly lower performance. This keynote will expound how image processing makes it possible to obtain high quality imagery while utilizing affordable system
Ferry, Alissa L.; Hespos, Susan J.; Gentner, Dedre
This research asks whether analogical processing ability is present in human infants, using the simplest and most basic relation--the "same-different" relation. Experiment 1 (N = 26) tested whether 7- and 9-month-olds spontaneously detect and generalize these relations from a single example, as previous research has suggested. The…
The thesis consists of two parts. The first part is devoted to the theory of dual-orthogonal polarimetric radar signals with continuous waveforms. The thesis presents a comparison of the signal compression techniques, namely correlation and de-ramping methods, for the dual-orthogonal sophisticated
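As background to the comparison of compression techniques mentioned in this excerpt, de-ramping can be sketched in a few lines: mixing a delayed linear-FM echo with the reference chirp collapses a point target to a single beat tone whose frequency is proportional to the round-trip delay. A minimal sketch with illustrative numbers (these are assumptions, not the thesis's radar parameters):

```python
import numpy as np

fs = 1e6                       # sample rate, Hz (assumed)
T = 1e-3                       # sweep duration, s (assumed)
B = 4e5                        # sweep bandwidth, Hz (assumed)
k_r = B / T                    # chirp rate, Hz/s
t = np.arange(round(fs * T)) / fs

tau = 2e-4                     # round-trip delay of a point target, s
tx = np.exp(1j * np.pi * k_r * t ** 2)           # reference linear-FM chirp
rx = np.exp(1j * np.pi * k_r * (t - tau) ** 2)   # delayed echo

beat = rx * np.conj(tx)        # de-ramp: mix echo against the reference
spec = np.abs(np.fft.fft(beat))
f = np.fft.fftfreq(t.size, 1 / fs)
f_beat = abs(f[np.argmax(spec)])   # beat frequency = k_r * tau (here 80 kHz)
```

The range-proportional beat frequency `k_r * tau` is why de-ramping needs only a narrowband receiver, in contrast to correlation (matched-filter) compression, which processes the full sweep bandwidth.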
Kothamachu, Varun B; Feliu, Elisenda; Wiuf, Carsten
present here this relation for four-layered phosphorelays, which are signaling systems that are ubiquitous in prokaryotes and also found in lower eukaryotes and plants. We derive an analytical expression that relates the shape of the signal-response relationship in a relay to the kinetic rates of forward...
applications where the speech signal is required, the MMSE estimate of the speech signal using the ML estimate of the parameters will be suggested. This MMSE ... It is easy to prove, using Stirling's formula for the factorial, that (1/N) log( N! / (k_1! ··· k_m!) ) ≈ H(p) (6.7), where H(p) = −Σ_i p_i log p_i is the entropy associated with the
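The Stirling-based relation alluded to in this excerpt, that (1/N) times the log of the multinomial coefficient tends to the entropy H(p) when k_i = N·p_i, can be checked numerically. A minimal sketch (the distribution p and the sample sizes are illustrative assumptions):

```python
import math

p = [0.5, 0.3, 0.2]
H = -sum(pi * math.log(pi) for pi in p)   # entropy, natural log

def log_multinomial(N, ks):
    # log( N! / (k_1! ... k_m!) ) via log-gamma, avoiding huge integers
    return math.lgamma(N + 1) - sum(math.lgamma(k + 1) for k in ks)

for N in (100, 10_000):
    ks = [round(N * pi) for pi in p]
    print(N, log_multinomial(N, ks) / N, H)
```

The per-symbol log-count converges to H(p) as N grows, the Stirling correction shrinking like (log N)/N.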
Turley, J A; Zalewska, K; Nilsson, M; Walker, F R; Johnson, S J
Intrinsic Optical Signal (IOS) imaging has been used extensively to examine activity-related changes within the cerebral cortex. A significant technical challenge with IOS imaging is the presence of large noise, artefact components and periodic interference. Signal processing is therefore important in obtaining quality IOS imaging results. Several signal processing techniques have been deployed; however, the performance of these approaches for IOS imaging has never been directly compared. The current study aims to compare signal processing techniques that can be used when quantifying stimuli-response IOS imaging data. Data were gathered from the somatosensory cortex of mice following piezoelectric stimulation of the hindlimb. The effectiveness of each technique in removing noise and extracting the IOS signal was compared for both spatial and temporal responses. Careful analysis of the advantages and disadvantages of each method was carried out to inform the choice of signal processing for IOS imaging. We conclude that spatial Gaussian filtering is the most effective choice for improving the spatial IOS response, whilst temporal low-pass and bandpass filtering produce the best temporal responses when periodic stimuli are an option. Global signal regression and truncated difference also work well and do not require periodic stimuli.
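The two favoured techniques, spatial Gaussian filtering of each frame and temporal bandpass filtering around the stimulus frequency, can be sketched with SciPy on synthetic data. The frame rate, stimulus frequency, kernel width and band edges below are illustrative assumptions, not values from the study:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(0)
fs = 10.0                                   # assumed frame rate, Hz
t = np.arange(200) / fs
# Hypothetical IOS stack: (time, height, width), noise plus a 0.5 Hz response
stack = rng.normal(0.0, 0.5, (t.size, 32, 32))
stack += 0.2 * np.sin(2 * np.pi * 0.5 * t)[:, None, None]

# Spatial Gaussian filtering: sigma 0 on the time axis smooths frames only
spatial = gaussian_filter(stack, sigma=(0, 2.0, 2.0))

# Temporal bandpass around the (assumed) stimulus frequency
b, a = butter(3, [0.3, 0.7], btype="band", fs=fs)
temporal = filtfilt(b, a, spatial, axis=0)
```

Zero-phase filtering via `filtfilt` avoids shifting the response latency, which matters when the temporal profile itself is the quantity of interest.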
Tian, Jialin; Reisse, Robert A.; Gazarik, Michael J.
The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration Unit (EDU) is a high resolution spectral imager designed to measure infrared (IR) radiance using a Fourier transform spectrometer (FTS). The GIFTS instrument employs three Focal Plane Arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes several digital signal processing (DSP) techniques involved in the development of the calibration model. In the first stage, the measured raw interferograms must undergo a series of processing steps that include filtering, decimation, and detector nonlinearity correction. The digital filtering is achieved by employing a linear-phase even-length FIR complex filter that is designed based on the optimum equiripple criteria. Next, the detector nonlinearity effect is compensated for using a set of pre-determined detector response characteristics. In the next stage, a phase correction algorithm is applied to the decimated interferograms. This is accomplished by first estimating the phase function from the spectral phase response of the windowed interferogram, and then correcting the entire interferogram based on the estimated phase function. In the calibration stage, we first compute the spectral responsivity based on the previous results and the ideal Planck blackbody spectra at the given temperatures, from which the calibrated ambient blackbody (ABB), hot blackbody (HBB), and scene spectra can be obtained. In the post-calibration stage, we estimate the Noise Equivalent Spectral Radiance (NESR) from the calibrated ABB and HBB spectra. The NESR is generally considered a measure of the instrument noise performance, and can be estimated as
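The phase-correction step (estimate a phase function from a windowed interferogram, then rotate it out of the full-resolution spectrum) can be illustrated with a minimal Mertz-style sketch on a synthetic interferogram. Every parameter here is an assumption for illustration; this is not the GIFTS calibration code:

```python
import numpy as np

n = 1024
k = np.fft.rfftfreq(n)                     # wavenumber axis, cycles/sample
spec_true = np.exp(-((k - 0.1) ** 2) / (2 * 0.02 ** 2))   # synthetic band
phase_err = 2 * np.pi * k * 0.3            # linear phase from a sub-sample ZPD offset
igm = np.fft.irfft(spec_true * np.exp(1j * phase_err), n) # distorted interferogram

# Low-resolution phase estimate from a short, windowed double-sided segment
m = 64
center = np.fft.fftshift(igm)[n // 2 - m: n // 2 + m] * np.hanning(2 * m)
phase_lo = np.unwrap(np.angle(np.fft.rfft(np.fft.ifftshift(center))))
phase_full = np.interp(k, np.fft.rfftfreq(2 * m), phase_lo)

# Rotate the estimated phase out of the full spectrum; result is real and positive
corrected = (np.fft.rfft(igm) * np.exp(-1j * phase_full)).real
```

The low-resolution estimate suffices because instrument phase varies slowly with wavenumber, which is the same rationale the Mertz method relies on.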