WorldWideScience

Sample records for source signaling model

  1. Skull Defects in Finite Element Head Models for Source Reconstruction from Magnetoencephalography Signals

    Science.gov (United States)

    Lau, Stephan; Güllmar, Daniel; Flemming, Lars; Grayden, David B.; Cook, Mark J.; Wolters, Carsten H.; Haueisen, Jens

    2016-01-01

    Magnetoencephalography (MEG) signals are influenced by skull defects. However, there is a lack of evidence of this influence during source reconstruction. Our objectives are to characterize errors in source reconstruction from MEG signals due to ignoring skull defects and to assess the ability of an exact finite element head model to eliminate such errors. A detailed finite element model of the head of a rabbit used in a physical experiment was constructed from magnetic resonance and co-registered computed tomography imaging that differentiated nine tissue types. Sources of the MEG measurements above the intact skull and above the skull defects, respectively, were reconstructed using a finite element model with the intact skull and one incorporating the skull defects. The forward simulation of the MEG signals reproduced the experimentally observed characteristic magnitude and topography changes due to skull defects. Sources reconstructed from measured MEG signals above the intact skull matched the known physical locations and orientations. Ignoring skull defects in the head model during reconstruction displaced sources under a skull defect away from that defect. Sources next to a defect were reoriented. When skull defects, with their physical conductivity, were incorporated in the head model, the location and orientation errors were mostly eliminated. The conductivity of the skull defect material non-uniformly modulated the influence on MEG signals. We propose concrete guidelines for taking conducting skull defects into account during MEG coil placement and modeling. Exact finite element head models can improve localization of brain function, specifically after surgery. PMID:27092044

  2. Source Signals Separation and Reconstruction Following Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    WANG Cheng

    2014-02-01

    Full Text Available For the problem of separating and reconstructing source signals from observed signals, the physical significance of the blind source separation model and of independent component analysis is not very clear, and the solution is not unique. To address these shortcomings, a new linear instantaneous mixing model and a novel method for separating and reconstructing source signals from observed signals based on principal component analysis (PCA) are put forward. The new model assumes that the source signals are statistically uncorrelated rather than independent, which differs from the traditional blind source separation model. Using the concepts of the linear separation matrix and of uncorrelated source signals, a one-to-one relationship is demonstrated between the linear instantaneous mixing matrix of the new model and the linear compounding matrix of PCA, and between the uncorrelated source signals and the principal components. Based on this theoretical link, the source signal separation and reconstruction problem is converted into a PCA of the observed signals. Theoretical derivation and numerical simulation show that, despite Gaussian measurement noise, both the waveform and the amplitude information of uncorrelated source signals can be separated and reconstructed by PCA when the linear mixing matrix is column-orthogonal and normalized; only the waveform information can be separated and reconstructed when the mixing matrix is column-orthogonal but not normalized; and the source signals cannot be separated and reconstructed by PCA when the mixing matrix is not column-orthogonal.
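    The regime the abstract describes (uncorrelated sources, column-orthogonal and normalized mixing) can be sketched in a few lines; the signals and mixing angle below are illustrative choices, not taken from the paper:

```python
import numpy as np

# Two zero-mean, uncorrelated sources with distinct variances
# (distinct variances let PCA tell the components apart).
t = np.linspace(0, 1, 2000, endpoint=False)
s1 = 2.0 * np.sin(2 * np.pi * 5 * t)           # amplitude-2 sine
s2 = np.sign(np.sin(2 * np.pi * 13 * t))       # amplitude-1 square wave
S = np.vstack([s1, s2])                        # sources, shape (2, n)

# Column-orthogonal, normalized mixing matrix (a pure rotation).
theta = 0.7
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
X = A @ S                                      # observed mixtures

# "PCA separation": eigendecomposition of the observation covariance;
# the principal axes recover the mixing columns up to sign and order.
_, V = np.linalg.eigh(np.cov(X))
Y = V.T @ X                                    # principal components

def best_abs_corr(y, sources):
    """Largest |correlation| between y and any source (sign/order free)."""
    return max(abs(np.corrcoef(y, s)[0, 1]) for s in sources)
```

    Because the mixing matrix is a rotation and the sources have distinct variances, each principal component reproduces one source's waveform and amplitude, up to sign and ordering.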

  3. A mathematical model for source separation of MMG signals recorded with a coupled microphone-accelerometer sensor pair.

    Science.gov (United States)

    Silva, Jorge; Chau, Tom

    2005-09-01

    Recent advances in sensor technology for muscle activity monitoring have resulted in the development of a coupled microphone-accelerometer sensor pair for physiological acoustic signal recording. This sensor can be used to eliminate interfering sources in practical settings where the contamination of an acoustic signal by ambient noise confounds detection but cannot be easily removed [e.g., mechanomyography (MMG), swallowing sounds, respiration, and heart sounds]. This paper presents a mathematical model for the coupled microphone-accelerometer vibration sensor pair, specifically applied to muscle activity monitoring (i.e., MMG) and noise discrimination in externally powered prostheses for below-elbow amputees. While the model provides a simple and reliable source separation technique for MMG signals, it can also be easily adapted to other applications where the recording of low-frequency (< 1 kHz) physiological vibration signals is required.
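    The paper's actual model is not reproduced in the abstract; as a generic illustration of the same idea (using one channel of a coupled pair as a noise reference to clean the other), here is a least-squares cancellation sketch with made-up coupling gains:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
muscle = np.sin(2 * np.pi * 30 * np.arange(n) / 1000)   # stand-in MMG vibration
ambient = rng.standard_normal(n)                        # ambient acoustic noise

# Illustrative coupling gains (assumptions, not the paper's model): the
# accelerometer couples mainly to muscle vibration, the microphone
# mainly to airborne noise.
acc = muscle + 0.5 * ambient       # contaminated accelerometer channel
mic = 0.05 * muscle + ambient      # microphone used as a noise reference

# Least-squares estimate of how much of the reference leaks into the
# accelerometer channel, then subtract that component out.
g = np.dot(acc, mic) / np.dot(mic, mic)
cleaned = acc - g * mic

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]
```

    With these gains the cleaned channel tracks the muscle component much more closely than the raw accelerometer signal does.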

  4. Investigation of model based beamforming and Bayesian inversion signal processing methods for seismic localization of underground sources

    DEFF Research Database (Denmark)

    Oh, Geok Lian; Brunskog, Jonas

    2014-01-01

    Techniques have been studied for the localization of an underground source from seismic interrogation signals. Much of the work has involved fitting either a P-wave acoustic model or a dispersive surface wave model to the received signal and applying the time-delay processing technique and frequ...... that for field data, inversion for localization is most advantageous when the forward model completely describes all the elastic wave components, as is the case for the FDTD 3D elastic model....

  5. Impedance cardiography: What is the source of the signal?

    Science.gov (United States)

    Patterson, R. P.

    2010-04-01

    Impedance cardiography continues to be investigated for various applications. Instruments for its use are available commercially. Almost all of the recent presentations and articles, along with commercial advertisements, have assumed that aortic volume pulsation is the source of the signal. A review of the literature reveals that there is no clear evidence for this assumption. Starting with the first paper on impedance cardiography in 1964, which assumed the lung was the source of the signal, the presentation will review many studies in the 60's, 70's and 80's which suggest the aorta and other vessels, as well as the atria and again the lung, as possible sources. Current studies based on high-resolution thoracic models will be presented that show the aorta contributing only approximately 1% of the total impedance measurement, making it an unlikely candidate for the major contributor to the signal. Combining the results of past studies with recent model-based work suggests other vessels and regions as possible sources.

  6. High frequency seismic signal generated by landslides on complex topographies: from point source to spatially distributed sources

    Science.gov (United States)

    Mangeney, A.; Kuehnert, J.; Capdeville, Y.; Durand, V.; Stutzmann, E.; Kone, E. H.; Sethi, S.

    2017-12-01

    During their flow along the topography, landslides generate seismic waves in a wide frequency range. These so-called landquakes can be recorded at very large distances (a few hundred km for large landslides). The recorded signals depend on the landslide seismic source and on the seismic wave propagation. If the wave propagation is well understood, the seismic signals can be inverted for the seismic source and thus used to obtain information on the landslide properties and dynamics. Analysis and modeling of long-period seismic signals (10-150 s) have helped in this way to discriminate between different landslide scenarios and to constrain rheological parameters (e.g. Favreau et al., 2010). This was possible because topography poorly affects wave propagation at these long periods and the landslide seismic source can be approximated as a point source. In the near-field and at higher frequencies (> 1 Hz), the spatial extent of the source has to be taken into account, and the influence of the topography on the recorded seismic signal should be quantified in order to extract information on the landslide properties and dynamics. The characteristic signature of distributed sources and varying topographies is studied as a function of frequency and recording distance. The time-dependent spatial distribution of the forces applied to the ground by the landslide is obtained using granular flow numerical modeling on 3D topography. The generated seismic waves are simulated using the spectral element method. The simulated seismic signal is compared to observed seismic data from rockfalls at the Dolomieu Crater of Piton de la Fournaise (La Réunion). Favreau, P., Mangeney, A., Lucas, A., Crosta, G., and Bouchut, F. (2010). Numerical modeling of landquakes. Geophysical Research Letters, 37(15):1-5.

  7. Source of seismic signals

    Energy Technology Data Exchange (ETDEWEB)

    Frankovskii, B.A.; Khor'yakov, K.A.

    1980-08-30

    Patented is a source of seismic signals consisting of a shock generator with a main low-voltage and an auxiliary high-voltage stator coil, a capacitive transformer, and control switches. To increase the amplitude of signal excitation, a capacitor bank and an auxiliary commutator are introduced into the device; these are connected in parallel and in series into the circuit of the main low-voltage stator coil.

  8. A simple iterative independent component analysis algorithm for vibration source signal identification of complex structures

    Directory of Open Access Journals (Sweden)

    Dong-Sup Lee

    2015-01-01

    Full Text Available Independent Component Analysis (ICA), one of the blind source separation methods, can be applied to extract unknown source signals from received signals alone. This is accomplished by finding statistical independence of signal mixtures, and it has been successfully applied to myriad fields such as medical science, image processing, and numerous others. Nevertheless, inherent problems have been reported when using this technique: instability and invalid ordering of the separated signals, particularly when a conventional ICA technique is used for vibration source signal identification in complex structures. In this study, a simple iterative algorithm based on the conventional ICA is proposed to mitigate these problems. The proposed method extracts more stable source signals in a valid order through an iterative reordering of the extracted mixing matrix to reconstruct the finally converged source signals, referring to the magnitudes of the correlation coefficients between the intermediately separated signals and the signals measured on or near the sources. To review the problems of the conventional ICA technique and to validate the proposed method, numerical analyses have been carried out for a virtual response model and a 30 m class submarine model. Moreover, to investigate the applicability of the proposed method to real problems of complex structures, an experiment has been carried out on a scaled submarine mockup. The results show that the proposed method can resolve the inherent problems of a conventional ICA technique.
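    The reordering step described above, matching separated signals to reference measurements by the magnitude of their correlation coefficients, can be sketched as follows (a simplified stand-in, not the authors' full iterative algorithm):

```python
import numpy as np

def reorder_by_reference(separated, references):
    """Greedily reorder (and sign-correct) separated signals so that each
    one lines up with the reference it correlates with most strongly.
    Sketches only the reordering idea, not the full iterative ICA."""
    k = separated.shape[0]
    C = np.array([[np.corrcoef(s, r)[0, 1] for r in references]
                  for s in separated])
    out = np.zeros_like(separated)
    score = np.abs(C).copy()
    for _ in range(k):
        i, j = np.unravel_index(np.argmax(score), score.shape)
        out[j] = np.sign(C[i, j]) * separated[i]   # place and fix sign
        score[i, :] = -1.0                          # retire this signal
        score[:, j] = -1.0                          # retire this slot
    return out

# Demo: ICA-style outputs with scrambled order and a flipped sign.
t = np.linspace(0, 1, 1000)
src = np.vstack([np.sin(2 * np.pi * 3 * t),
                 np.sign(np.sin(2 * np.pi * 8 * t))])
separated = np.vstack([-src[1], src[0]])            # permuted, one sign flip
restored = reorder_by_reference(separated, src)
```

    After reordering, each restored row matches its corresponding reference signal in both order and sign.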

  9. Recognition Memory zROC Slopes for Items with Correct versus Incorrect Source Decisions Discriminate the Dual Process and Unequal Variance Signal Detection Models

    Science.gov (United States)

    Starns, Jeffrey J.; Rotello, Caren M.; Hautus, Michael J.

    2014-01-01

    We tested the dual process and unequal variance signal detection models by jointly modeling recognition and source confidence ratings. The 2 approaches make unique predictions for the slope of the recognition memory zROC function for items with correct versus incorrect source decisions. The standard bivariate Gaussian version of the unequal…
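    The slope prediction can be illustrated with the standard unequal-variance signal detection model, where unit-variance noise and signal standard deviation sigma_s give a zROC slope of 1/sigma_s; the parameter values below are illustrative, not the study's:

```python
import numpy as np
from scipy.stats import norm

d_prime, sigma_s = 1.5, 1.25          # illustrative SDT parameters
criteria = np.linspace(-1.0, 2.5, 8)  # confidence criteria

far = norm.sf(criteria)                        # noise items ~ N(0, 1)
hit = norm.sf((criteria - d_prime) / sigma_s)  # old items ~ N(d', sigma_s^2)

# zROC: z(hit rate) plotted against z(false-alarm rate) is linear
# with slope 1/sigma_s (< 1 whenever signal variance exceeds noise variance).
slope = np.polyfit(norm.ppf(far), norm.ppf(hit), 1)[0]
```

    With sigma_s = 1.25 the fitted slope is 0.8; an equal-variance model would instead predict a slope of exactly 1.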

  10. Deciphering acoustic emission signals in drought stressed branches: the missing link between source and sensor

    Directory of Open Access Journals (Sweden)

    Lidewei L Vergeynst

    2015-07-01

    Full Text Available When drought occurs in plants, acoustic emission signals can be detected, but the actual causes of these signals are still unknown. By analyzing the waveforms of the measured signals, it should, however, be possible to trace the characteristics of the acoustic emission source and obtain information about the underlying physiological processes. A problem encountered during this analysis is that the waveform changes significantly from source to sensor, and lack of knowledge of wave propagation impedes research progress in this field. We used finite element modeling and the well-known pencil lead break source to investigate wave propagation in a branch. A cylindrical rod of polyvinyl chloride (PVC) was first used to identify the theoretical propagation modes. Two wave propagation modes could be distinguished, and we used the finite element model to interpret their behavior in terms of source position for both the PVC rod and a wooden rod. Both wave propagation modes were also identified in drying-induced signals from woody branches, and we used the obtained insights to provide recommendations for further acoustic emission research in plant science.

  11. Bayesian spatial filters for source signal extraction: a study in the peripheral nerve.

    Science.gov (United States)

    Tang, Y; Wodlinger, B; Durand, D M

    2014-03-01

    The ability to extract physiological source signals to control various prosthetics offers tremendous therapeutic potential to improve the quality of life of patients suffering from motor disabilities. Regardless of the modality, recordings of physiological source signals are contaminated with noise and interference, along with crosstalk between the sources. These impediments make the task of isolating potential physiological source signals for control difficult. In this paper, a novel Bayesian Source Filter for signal Extraction (BSFE) algorithm for extracting physiological source signals for control is presented. The BSFE algorithm is based on the source localization method Champagne and constructs spatial filters using Bayesian methods that simultaneously maximize the signal-to-noise ratio of the recovered source signal of interest while minimizing crosstalk interference between sources. When evaluated on peripheral nerve recordings obtained in vivo, the algorithm achieved the highest signal-to-noise-interference ratio (7.00 ± 3.45 dB) among the methods compared, with an average correlation between the extracted source signal and the original source signal of R = 0.93. The results support the efficacy of the BSFE algorithm for extracting source signals from the peripheral nerve.
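    The BSFE algorithm itself is not given in the abstract; a generic minimum-variance spatial filter illustrates the underlying idea of passing the source of interest at unit gain while suppressing interference and noise (lead fields, waveforms, and noise levels below are synthetic assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n_ch, n_t = 8, 4000

# Hypothetical lead fields (assumed known): one for the source of
# interest and one for an interference source with a different pattern.
a = rng.standard_normal(n_ch); a /= np.linalg.norm(a)
b = rng.standard_normal(n_ch); b /= np.linalg.norm(b)

s = np.sin(2 * np.pi * 7 * np.arange(n_t) / 1000)   # source waveform
v = rng.standard_normal(n_t)                        # interference waveform
X = np.outer(a, s) + np.outer(b, v) + 0.1 * rng.standard_normal((n_ch, n_t))

# Minimum-variance filter with unit gain on the source direction:
# w = R^-1 a / (a^T R^-1 a). Minimizing output variance under the unit-gain
# constraint suppresses the interference and noise contributions.
R = X @ X.T / n_t
w = np.linalg.solve(R, a)
w /= a @ w
y = w @ X            # recovered source estimate
```

    The extracted waveform correlates strongly with the true source even though every raw channel is a mixture of source, interference, and noise.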

  12. Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model

    Science.gov (United States)

    Rizvi, Farheen

    2016-01-01

    Two ground simulation software packages are used to model the SMAP spacecraft dynamics. The CAST software uses a higher-fidelity model than the ADAMS software. The ADAMS software models the spacecraft plant, controller, and actuators, and assumes a perfect sensor and estimator model. In this simulation study, the spacecraft dynamics results from the ADAMS software are used because the CAST software is unavailable. The main source of spacecraft dynamics error in the higher-fidelity CAST software is the estimation error. A signal generation model is developed to capture the effect of this estimation error on the overall spacecraft dynamics. This signal generation model is then included in the ADAMS software spacecraft dynamics estimate so that the results are similar to CAST. The signal generation model has the same characteristics (mean, variance, and power spectral density) as the true CAST estimation error. In this way, the ADAMS software can still be used while capturing the higher-fidelity spacecraft dynamics modeling of the CAST software.
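    A signal generation model of this kind, matching a target mean, variance, and power spectral density, can be sketched by spectrally shaping white noise and rescaling it (the target statistics and filter below are hypothetical, not SMAP's):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical target statistics for the estimation-error signal
# (illustrative values only).
target_mean, target_std = 0.02, 0.15

# Shape the spectrum by low-pass filtering white noise with a short
# moving-average kernel, standing in for the true error PSD shape.
white = rng.standard_normal(20000)
kernel = np.ones(8) / 8
shaped = np.convolve(white, kernel, mode="valid")

# Rescale so the mean and variance match the targets exactly.
model_error = target_mean + target_std * (shaped - shaped.mean()) / shaped.std()
```

    The normalization step makes the mean and standard deviation match the targets exactly, while the filter fixes the spectral shape; a real application would fit the kernel to the measured error PSD instead.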

  13. Model-based Bayesian signal extraction algorithm for peripheral nerves

    Science.gov (United States)

    Eggers, Thomas E.; Dweiri, Yazan M.; McCallum, Grant A.; Durand, Dominique M.

    2017-10-01

    Objective. Multi-channel cuff electrodes have recently been investigated for extracting fascicular-level motor commands from mixed neural recordings. Such signals could provide volitional, intuitive control over a robotic prosthesis for amputee patients. Recent work has demonstrated success in extracting these signals in acute and chronic preparations using spatial filtering techniques. These extracted signals, however, had low signal-to-noise ratios, which limited their utility to binary classification. In this work a new algorithm is proposed which combines previous source localization approaches to create a model-based method which operates in real time. Approach. To validate this algorithm, a saline benchtop setup was created to allow the precise placement of artificial sources within a cuff and interference sources outside the cuff. The artificial source was taken from five seconds of chronic neural activity to replicate realistic recordings. The proposed algorithm, hybrid Bayesian signal extraction (HBSE), is then compared to previous algorithms, beamforming and a Bayesian spatial filtering method, on this test data. An example chronic neural recording is also analyzed with all three algorithms. Main results. The proposed algorithm improved the signal-to-noise and signal-to-interference ratios of extracted test signals two- to three-fold, and increased the correlation coefficient between the original and recovered signals by 10-20%. These improvements translated to the chronic recording example and increased the calculated bit rate between the recovered signals and the recorded motor activity. Significance. HBSE significantly outperforms previous algorithms in extracting realistic neural signals, even in the presence of external noise sources. These results demonstrate the feasibility of extracting dynamic motor signals from a multi-fascicled intact nerve trunk, which in turn could extract motor command signals from an amputee for the end goal of

  14. BioSig: the free and open source software library for biomedical signal processing.

    Science.gov (United States)

    Vidaurre, Carmen; Sander, Tilmann H; Schlögl, Alois

    2011-01-01

    BioSig is an open source software library for biomedical signal processing. The aim of the BioSig project is to foster research in biomedical signal processing by providing free and open source software tools for many different application areas. Some of the areas where BioSig can be employed are neuroinformatics, brain-computer interfaces, neurophysiology, psychology, cardiovascular systems, and sleep research. Moreover, the analysis of biosignals such as the electroencephalogram (EEG), electrocorticogram (ECoG), electrocardiogram (ECG), electrooculogram (EOG), electromyogram (EMG), or respiration signals is a very relevant element of the BioSig project. Specifically, BioSig provides solutions for data acquisition, artifact processing, quality control, feature extraction, classification, modeling, and data visualization, to name a few. In this paper, we highlight several methods to help students and researchers to work more efficiently with biomedical signals.

  15. Blind Separation of Acoustic Signals Combining SIMO-Model-Based Independent Component Analysis and Binary Masking

    Directory of Open Access Journals (Sweden)

    Hiekata Takashi

    2006-01-01

    Full Text Available A new two-stage blind source separation (BSS method for convolutive mixtures of speech is proposed, in which a single-input multiple-output (SIMO-model-based independent component analysis (ICA and a new SIMO-model-based binary masking are combined. SIMO-model-based ICA enables us to separate the mixed signals, not into monaural source signals but into SIMO-model-based signals from independent sources in their original form at the microphones. Thus, the separated signals of SIMO-model-based ICA can maintain the spatial qualities of each sound source. Owing to this attractive property, our novel SIMO-model-based binary masking can be applied to efficiently remove the residual interference components after SIMO-model-based ICA. The experimental results reveal that the separation performance can be considerably improved by the proposed method compared with that achieved by conventional BSS methods. In addition, the real-time implementation of the proposed BSS is illustrated.
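    The binary-masking stage can be illustrated generically: after a SIMO-style separation yields a target-dominant and an interference-dominant channel, time-frequency bins are kept only where the target channel dominates (signal frequencies and residual cross-talk levels below are assumptions, not the paper's setup):

```python
import numpy as np

def stft(x, nfft=256, hop=128):
    """Framed, Hann-windowed FFTs (resynthesis by overlap-add omitted)."""
    win = np.hanning(nfft)
    frames = [x[i:i + nfft] * win
              for i in range(0, len(x) - nfft + 1, hop)]
    return np.fft.rfft(np.array(frames), axis=1)

# Two SIMO-style separated outputs: a target-dominant channel and an
# interference-dominant channel with residual cross-talk.
fs, n = 8000, 4096
t = np.arange(n) / fs
target = np.sin(2 * np.pi * 440 * t)
interf = np.sin(2 * np.pi * 1320 * t)
y1 = target + 0.2 * interf        # mostly target
y2 = 0.2 * target + interf        # mostly interference

Y1, Y2 = stft(y1), stft(y2)
mask = np.abs(Y1) > np.abs(Y2)    # keep bins where the target dominates
Y_clean = Y1 * mask               # residual interference bins are zeroed
```

    With a 256-point FFT at 8 kHz the 440 Hz target falls near bin 14 and the 1320 Hz interference near bin 42, so the mask passes the former and removes the latter in every frame.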

  16. Compressive Sensing Based Source Localization for Controlled Acoustic Signals Using Distributed Microphone Arrays

    Directory of Open Access Journals (Sweden)

    Wei Ke

    2017-01-01

    Full Text Available In order to enhance the accuracy of sound source localization in noisy and reverberant environments, this paper proposes an adaptive sound source localization method based on distributed microphone arrays. Since sound sources lie at only a few points in the discrete spatial domain, our method can exploit this inherent sparsity to convert the localization problem into a sparse recovery problem based on compressive sensing (CS) theory. In this method, a two-step discrete cosine transform (DCT) based feature extraction approach is utilized to cover both short-time and long-time properties of acoustic signals and to reduce the dimensions of the sparse model. In addition, an online dictionary learning (DL) method is used to adjust the dictionary to match the changes of the audio signals, so that the sparse solution better represents the location estimates. Moreover, we propose an improved block-sparse reconstruction algorithm using approximate l0 norm minimization to enhance reconstruction performance for sparse signals in low signal-to-noise ratio (SNR) conditions. The effectiveness of the proposed scheme is demonstrated by simulation and experimental results, where substantial improvement in localization performance is obtained in noisy and reverberant conditions.
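    The sparse-recovery formulation can be sketched with a toy dictionary of candidate source locations; orthogonal matching pursuit stands in here for the paper's approximate l0 minimization algorithm, and the random dictionary is purely illustrative:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily select k dictionary atoms,
    refitting the coefficients by least squares after each selection."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Toy localization grid: columns of D play the role of candidate source
# positions (a stand-in for the learned dictionary in the paper).
rng = np.random.default_rng(3)
m, n_atoms = 32, 120
D = rng.standard_normal((m, n_atoms))
D /= np.linalg.norm(D, axis=0)

x_true = np.zeros(n_atoms)
x_true[[17, 64]] = [1.0, -0.7]      # two active source locations
y = D @ x_true                      # noiseless measurements

x_hat = omp(D, y, k=2)
```

    In the noiseless case the greedy search recovers both active locations and their amplitudes exactly; the paper's block-sparse l0 approach targets the same support-recovery problem under noise.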

  17. Bayesian mixture models for source separation in MEG

    International Nuclear Information System (INIS)

    Calvetti, Daniela; Homa, Laura; Somersalo, Erkki

    2011-01-01

    This paper discusses the problem of imaging electromagnetic brain activity from measurements of the induced magnetic field outside the head. This imaging modality, magnetoencephalography (MEG), is known to be severely ill posed, and in order to obtain useful estimates for the activity map, complementary information needs to be used to regularize the problem. In this paper, a particular emphasis is on finding non-superficial focal sources that induce a magnetic field that may be confused with noise due to external sources and with distributed brain noise. The data are assumed to come from a mixture of a focal source and a spatially distributed possibly virtual source; hence, to differentiate between those two components, the problem is solved within a Bayesian framework, with a mixture model prior encoding the information that different sources may be concurrently active. The mixture model prior combines one density that favors strongly focal sources and another that favors spatially distributed sources, interpreted as clutter in the source estimation. Furthermore, to address the challenge of localizing deep focal sources, a novel depth sounding algorithm is suggested, and it is shown with simulated data that the method is able to distinguish between a signal arising from a deep focal source and a clutter signal. (paper)

  18. Different cAMP sources are critically involved in G protein-coupled receptor CRHR1 signaling.

    Science.gov (United States)

    Inda, Carolina; Dos Santos Claro, Paula A; Bonfiglio, Juan J; Senin, Sergio A; Maccarrone, Giuseppina; Turck, Christoph W; Silberstein, Susana

    2016-07-18

    Corticotropin-releasing hormone receptor 1 (CRHR1) activates G protein-dependent and internalization-dependent signaling mechanisms. Here, we report that the cyclic AMP (cAMP) response of CRHR1 in physiologically relevant scenarios engages separate cAMP sources, involving the atypical soluble adenylyl cyclase (sAC) in addition to transmembrane adenylyl cyclases (tmACs). cAMP produced by tmACs and sAC is required for the acute phase of extracellular signal regulated kinase 1/2 activation triggered by CRH-stimulated CRHR1, but only sAC activity is essential for the sustained internalization-dependent phase. Thus, different cAMP sources are involved in different signaling mechanisms. Examination of the cAMP response revealed that CRH-activated CRHR1 generates cAMP after endocytosis. Characterizing CRHR1 signaling uncovered a specific link between CRH-activated CRHR1, sAC, and endosome-based signaling. We provide evidence of sAC being involved in an endocytosis-dependent cAMP response, strengthening the emerging model of GPCR signaling in which the cAMP response does not occur exclusively at the plasma membrane and introducing the notion of sAC as an alternative source of cAMP. © 2016 Inda et al.

  19. Determination of noise sources and space-dependent reactor transfer functions from measured output signals only

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J.E.; van Dam, H.; Kleiss, E.B.J.; van Uitert, G.C.; Veldhuis, D.

    1982-01-01

    The measured cross power spectral densities of the signals from three neutron detectors and the displacement of the control rod of the 2 MW research reactor HOR at Delft have been used to determine the space-dependent reactor transfer function, the transfer function of the automatic reactor control system and the noise sources influencing the measured signals. From a block diagram of the reactor with the control system and noise sources, expressions were derived for the measured cross power spectral densities, which were adjusted to satisfy the requirements following from the adopted model. Then for each frequency point the required transfer functions and noise sources could be derived. The results are in agreement with those of autoregressive modelling of the reactor control feedback loop. A method has been developed to determine the non-linear characteristics of the automatic reactor control system by analysing the non-Gaussian probability density function of the power fluctuations.
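    The core estimate, a transfer function obtained from measured signals only as the ratio of cross to auto power spectral densities, can be sketched with Welch-type estimators (the "plant" below is a toy flat gain, not the reactor model):

```python
import numpy as np
from scipy.signal import csd, welch

rng = np.random.default_rng(4)
fs, n = 1000.0, 100000
x = rng.standard_normal(n)                    # measured input signal
y = 2.0 * x + 0.1 * rng.standard_normal(n)    # toy plant: gain 2 plus noise

# Welch-type spectral estimates; H(f) = S_xy(f) / S_xx(f) recovers the
# transfer function from the measured signals alone.
f, Sxy = csd(x, y, fs=fs, nperseg=1024)
_, Sxx = welch(x, fs=fs, nperseg=1024)
H = Sxy / Sxx
```

    Averaging over many segments drives the estimate toward the true gain; the uncorrelated noise term averages out of the cross spectrum.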

  20. Determination of noise sources and space-dependent reactor transfer functions from measured output signals only

    International Nuclear Information System (INIS)

    Hoogenboom, J.E.

    1982-01-01

    The measured cross power spectral densities of the signals from three neutron detectors and the displacement of the control rod of the 2 MW research reactor HOR at Delft have been used to determine the space-dependent reactor transfer function, the transfer function of the automatic reactor control system and the noise sources influencing the measured signals. From a block diagram of the reactor with the control system and noise sources, expressions were derived for the measured cross power spectral densities, which were adjusted to satisfy the requirements following from the adopted model. Then for each frequency point the required transfer functions and noise sources could be derived. The results are in agreement with those of autoregressive modelling of the reactor control feedback loop. A method has been developed to determine the non-linear characteristics of the automatic reactor control system by analysing the non-Gaussian probability density function of the power fluctuations. (author)

  1. IQM: an extensible and portable open source application for image and signal analysis in Java.

    Science.gov (United States)

    Kainz, Philipp; Mayrhofer-Reinhartshuber, Michael; Ahammer, Helmut

    2015-01-01

    Image and signal analysis applications play a substantial role in scientific research. Both open source and commercial packages provide a wide range of functions for image and signal analysis, which are sometimes supported very well by the communities in the corresponding fields. Commercial software packages have the major drawback of being expensive and having undisclosed source code, which hampers extending the functionality if no plugin interface or similar option is available. However, neither variant can cover all possible use cases, and sometimes custom developments are unavoidable, requiring open source applications. In this paper we describe IQM, a completely free, portable and open source (GNU GPLv3) image and signal analysis application written in pure Java. IQM does not depend on any natively installed libraries and is therefore runnable out-of-the-box. Currently, a continuously growing repertoire of 50 image and 16 signal analysis algorithms is provided. The modular functional architecture based on the three-tier model is described along with the most important functionality. Extensibility is achieved using operator plugins, and the development of more complex workflows is supported by a Groovy script interface to the JVM. We demonstrate IQM's image and signal processing capabilities in a proof-of-principle analysis and provide example implementations to illustrate the plugin framework and the scripting interface. IQM integrates with the popular ImageJ image processing software and aims at complementing functionality rather than competing with existing open source software. Machine learning can be integrated into more complex algorithms via the WEKA software package as well, enabling the development of transparent and robust methods for image and signal analysis.

  2. Discussion of Source Reconstruction Models Using 3D MCG Data

    Science.gov (United States)

    Melis, Massimo De; Uchikawa, Yoshinori

    In this study we performed source reconstruction of magnetocardiographic (MCG) signals generated by human heart activity to localize the site of origin of the heart activation. The localizations were performed in a four-compartment model of the human volume conductor. The analyses were conducted on normal subjects and on a subject affected by the Wolff-Parkinson-White syndrome. Different models of the source activation were used to evaluate whether a general model of the current source can be applied in the study of the cardiac inverse problem. The data analyses were repeated using normal and vector component data of the MCG. The results show that a distributed source model gives the best accuracy in the source reconstructions, and that 3D MCG data make it possible to detect smaller differences between the different source models.

  3. Brain source localization: A new method based on MUltiple SIgnal Classification algorithm and spatial sparsity of the field signal for electroencephalogram measurements

    Science.gov (United States)

    Vergallo, P.; Lay-Ekuakille, A.

    2013-08-01

Brain activity can be recorded by means of EEG (electroencephalogram) electrodes placed on the scalp of the patient. The EEG reflects the activity of groups of neurons located in the head, and a fundamental problem in neurophysiology is the identification of the sources responsible for brain activity, which is especially important when a seizure occurs and must be localized. Studies conducted to formalize the relationship between the electromagnetic activity in the head and the recording of the generated external field make it possible to determine patterns of brain activity. The inverse problem, in which the underlying sources must be determined from the field sampled at the electrodes, is more difficult because it may not have a unique solution, or the search for the solution is hampered by a low spatial resolution that may not allow activities involving sources close to each other to be distinguished. Thus, sources of interest may be obscured or go undetected, and a well-known source localization method such as MUSIC (MUltiple SIgnal Classification) can fail. Many advanced source localization techniques achieve better resolution by exploiting sparsity: if the number of sources is small, the neural power versus location is sparse. In this work a solution based on the spatial sparsity of the field signal is presented and analyzed to improve the MUSIC method. For this purpose, a priori information about the sparsity of the signal must be set. The problem is formulated and solved using a regularization method such as Tikhonov regularization, which calculates a solution that is the best compromise between two cost functions to be minimized, one related to the fit of the data and the other to maintaining the sparsity of the signal. First, the method is tested on simulated EEG signals obtained by solving the forward problem. For the head and brain source model considered, the result obtained allows to
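The Tikhonov step described in this abstract can be illustrated with a toy linear inverse problem y = L s + noise. The lead-field matrix `L`, the regularization weight `lam`, and the two-source configuration below are hypothetical stand-ins; a real EEG study would use a forward head model:

```python
# Toy Tikhonov-regularised inversion; all matrices and values are illustrative.

def solve2(a11, a12, a21, a22, b1, b2):
    # direct solution of a 2x2 linear system
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

def tikhonov(L, y, lam):
    # normal equations: (L^T L + lam * I) s = L^T y   (two unknown sources)
    a11 = sum(r[0] * r[0] for r in L) + lam
    a12 = sum(r[0] * r[1] for r in L)
    a22 = sum(r[1] * r[1] for r in L) + lam
    b1 = sum(r[0] * yi for r, yi in zip(L, y))
    b2 = sum(r[1] * yi for r, yi in zip(L, y))
    return solve2(a11, a12, a12, a22, b1, b2)

L = [[1.0, 0.2], [0.5, 0.9], [0.1, 1.1]]   # hypothetical 3-electrode lead-field
s_true = (2.0, 0.0)                        # sparse truth: only source 1 active
y = [r[0] * s_true[0] + r[1] * s_true[1] for r in L]
s_hat = tikhonov(L, y, lam=0.01)
print(s_hat)  # close to (2.0, 0.0)
```

With a small `lam` the regularized estimate stays close to the sparse truth; the paper's contribution is choosing the data-fit/sparsity trade-off in a principled way.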

  4. Believability of signals from cosmic ray sources

    International Nuclear Information System (INIS)

    Goodman, M.

    1990-11-01

    This paper discusses some of the criteria by which an observer judges whether to believe a signal or limit that has been reported for a cosmic ray source. The importance of specifying the test before looking at the data is emphasized. 5 refs

  5. APPLICATION OF COMPENSATION METHOD FOR SEPARATION USEFUL SIGNAL AND INTERFERENCE FROM NEARBY SOURCES

    Directory of Open Access Journals (Sweden)

    2016-01-01

Full Text Available Based on a comparative analysis of the known methods of noise suppression, the ratios for the angular measurements are obtained. A mathematical model experiment to estimate the dependence of measurement error on the relative position of the interference source and the useful signal has been conducted. The modified method of interference compensation is tested experimentally. The analysis of the obtained angular measurements for the considered methods shows that the modified method of compensation allows obtaining more precise estimates. The analyzed methods allow considerably eliminating the useful signal from the antenna additional channel, which reduces errors of angular misalignment. To determine the degree of the radar error analytically is not always possible, and in the future a comparison of the effectiveness of various methods of interference compensation is expected to be conducted by means of mathematical modeling of a closed-contour radar.

  6. Multiscale Signal Analysis and Modeling

    CERN Document Server

    Zayed, Ahmed

    2013-01-01

    Multiscale Signal Analysis and Modeling presents recent advances in multiscale analysis and modeling using wavelets and other systems. This book also presents applications in digital signal processing using sampling theory and techniques from various function spaces, filter design, feature extraction and classification, signal and image representation/transmission, coding, nonparametric statistical signal processing, and statistical learning theory. This book also: Discusses recently developed signal modeling techniques, such as the multiscale method for complex time series modeling, multiscale positive density estimations, Bayesian Shrinkage Strategies, and algorithms for data adaptive statistics Introduces new sampling algorithms for multidimensional signal processing Provides comprehensive coverage of wavelets with presentations on waveform design and modeling, wavelet analysis of ECG signals and wavelet filters Reviews features extraction and classification algorithms for multiscale signal and image proce...

  7. Reconstruction of sound source signal by analytical passive TR in the environment with airflow

    Science.gov (United States)

    Wei, Long; Li, Min; Yang, Debin; Niu, Feng; Zeng, Wu

    2017-03-01

In the acoustic design of air vehicles, the time-domain signals of noise sources on the surface of air vehicles can serve as data support to reveal the noise source generation mechanism, analyze acoustic fatigue, and take measures for noise insulation and reduction. To rapidly reconstruct the time-domain sound source signals in an environment with flow, a method combining the analytical passive time reversal mirror (AP-TR) with a shear flow correction is proposed. In this method, the negative influence of flow on sound wave propagation is suppressed by the shear flow correction, yielding the corrected acoustic propagation time delay and path. The corrected time delay and path, together with the microphone array signals, are then submitted to the AP-TR, reconstructing more accurate sound source signals in the environment with airflow. As an analytical method, AP-TR offers a supplementary way, alternative to numerical TR, to reconstruct the signal of a sound source in 3D space in an environment with airflow. Experiments on the reconstruction of the sound source signals of a pair of loudspeakers are conducted in an anechoic wind tunnel with subsonic airflow to validate the effectiveness and advantages of the proposed method. Moreover, a theoretical and experimental comparison between the AP-TR and time-domain beamforming in reconstructing the sound source signal is also discussed.
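The propagation time delay that the shear-flow correction adjusts is, at its core, a lag between a source waveform and a microphone recording. As background, a minimal sketch of delay estimation by cross-correlation; the decaying sinusoid and the 17-sample delay below are synthetic:

```python
import math

def xcorr_delay(x, y):
    # lag that maximises the cross-correlation of x with a delayed copy y
    n = len(x)
    best_lag, best_val = 0, float("-inf")
    for lag in range(n):
        v = sum(x[i] * y[i + lag] for i in range(n - lag))
        if v > best_val:
            best_lag, best_val = lag, v
    return best_lag

x = [math.sin(0.3 * i) * math.exp(-0.01 * i) for i in range(200)]
delay = 17
y = [0.0] * delay + x          # "recording": source delayed by 17 samples
print(xcorr_delay(x, y))       # 17
```

In an environment with airflow the measured lag no longer equals distance over sound speed, which is precisely what the paper's corrected time delay accounts for.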

  8. Addendum to foundations of multidimensional wave field signal theory: Gaussian source function

    Directory of Open Access Journals (Sweden)

    Natalie Baddour

    2018-02-01

Full Text Available Many important physical phenomena are described by wave or diffusion-wave type equations. Recent work has shown that a transform domain signal description from linear system theory can give meaningful insight to multi-dimensional wave fields. In N. Baddour [AIP Adv. 1, 022120 (2011)], certain results were derived that are mathematically useful for the inversion of multi-dimensional Fourier transforms, but more importantly provide useful insight into how source functions are related to the resulting wave field. In this short addendum to that work, it is shown that these results can be applied with a Gaussian source function, which is often useful for modelling various physical phenomena.

  9. Addendum to foundations of multidimensional wave field signal theory: Gaussian source function

    Science.gov (United States)

    Baddour, Natalie

    2018-02-01

    Many important physical phenomena are described by wave or diffusion-wave type equations. Recent work has shown that a transform domain signal description from linear system theory can give meaningful insight to multi-dimensional wave fields. In N. Baddour [AIP Adv. 1, 022120 (2011)], certain results were derived that are mathematically useful for the inversion of multi-dimensional Fourier transforms, but more importantly provide useful insight into how source functions are related to the resulting wave field. In this short addendum to that work, it is shown that these results can be applied with a Gaussian source function, which is often useful for modelling various physical phenomena.
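The usefulness of the Gaussian source function rests on its closed-form Fourier transform: the transform of exp(-a x^2) is sqrt(pi/a) exp(-k^2/(4a)). A quick numerical cross-check of this identity, with assumed parameters:

```python
import math

def ft_gauss_numeric(a, k, half_width=10.0, n=2000):
    # trapezoid-rule approximation of the integral of exp(-a x^2) * cos(k x)
    h = 2.0 * half_width / n
    total = 0.0
    for i in range(n + 1):
        x = -half_width + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(-a * x * x) * math.cos(k * x) * h
    return total

a, k = 1.5, 2.0
analytic = math.sqrt(math.pi / a) * math.exp(-k * k / (4.0 * a))
print(abs(ft_gauss_numeric(a, k) - analytic) < 1e-6)  # True
```

Because the integrand and all its derivatives vanish at the truncation points, the trapezoid rule is extremely accurate here.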

  10. Variable cycle control model for intersection based on multi-source information

    Science.gov (United States)

    Sun, Zhi-Yuan; Li, Yue; Qu, Wen-Cong; Chen, Yan-Yan

    2018-05-01

In order to improve the efficiency of traffic control systems in the era of big data, a new variable cycle control model based on multi-source information is presented for intersections in this paper. Firstly, with consideration of multi-source information, a unified framework based on a cyber-physical system is proposed. Secondly, taking into account the variable length of cells, the hysteresis phenomenon of traffic flow and the characteristics of lane groups, a Lane group-based Cell Transmission Model is established to describe the physical properties of traffic flow under different traffic signal control schemes. Thirdly, the variable cycle control problem is abstracted into a bi-level programming model. The upper-level model is put forward for cycle length optimization considering traffic capacity and delay. The lower-level model is a dynamic signal control decision model based on fairness analysis. Then, a Hybrid Intelligent Optimization Algorithm is developed to solve the proposed model. Finally, a case study shows the efficiency and applicability of the proposed model and algorithm.
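The Cell Transmission Model underlying the paper's physical layer can be sketched in its basic single-lane form. The cell contents, capacity and flow limit below are illustrative; the paper's lane-group variant with variable cell lengths and hysteresis is more elaborate:

```python
# One update step of a minimal single-lane Cell Transmission Model.
def ctm_step(n, Q, N):
    # n: vehicles per cell; Q: max flow per step; N: jam capacity per cell
    y = [0.0]                                   # no inflow at the boundary
    for i in range(1, len(n)):
        y.append(min(n[i - 1], Q, N - n[i]))    # flow from cell i-1 into cell i
    y.append(min(n[-1], Q))                     # free outflow at the end
    return [n[i] + y[i] - y[i + 1] for i in range(len(n))]

cells = [8.0, 4.0, 0.0, 12.0]        # last cell is jammed
cells = ctm_step(cells, Q=6.0, N=12.0)
print(cells)  # [2.0, 6.0, 4.0, 6.0]
```

Note how the jammed last cell blocks inflow (its receiving capacity N - n is zero) while still discharging, the mechanism a signal controller manipulates by timing the downstream outflow.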

  11. MEG (Magnetoencephalography) multipolar modeling of distributed sources using RAP-MUSIC (Recursively Applied and Projected Multiple Signal Characterization)

    Energy Technology Data Exchange (ETDEWEB)

    Mosher, J. C. (John C.); Baillet, S. (Sylvain); Jerbi, K. (Karim); Leahy, R. M. (Richard M.)

    2001-01-01

    We describe the use of truncated multipolar expansions for producing dynamic images of cortical neural activation from measurements of the magnetoencephalogram. We use a signal-subspace method to find the locations of a set of multipolar sources, each of which represents a region of activity in the cerebral cortex. Our method builds up an estimate of the sources in a recursive manner, i.e. we first search for point current dipoles, then magnetic dipoles, and finally first order multipoles. The dynamic behavior of these sources is then computed using a linear fit to the spatiotemporal data. The final step in the procedure is to map each of the multipolar sources into an equivalent distributed source on the cortical surface. The method is illustrated through an application to epileptic interictal MEG data.

  12. Reference analysis of the signal + background model in counting experiments

    Science.gov (United States)

    Casadei, D.

    2012-01-01

The model representing two independent Poisson processes, labelled as "signal" and "background" and both contributing additively to the total number of counted events, is considered from a Bayesian point of view. This is a widely used model for searches of rare or exotic events in the presence of a background source, as for example in the searches performed by high-energy physics experiments. Under the assumption of prior knowledge about the background yield, a reference prior is obtained for the signal alone and its properties are studied. Finally, the properties of the full solution, the marginal reference posterior, are illustrated with a few examples.
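A simplified version of this model can be computed directly. The sketch below uses a flat prior on the signal yield rather than the reference prior derived in the paper, and assumes the background yield b is known exactly; the counts and grid are illustrative:

```python
import math

def posterior(n, b, s_grid):
    # unnormalised Poisson likelihood (s+b)^n exp(-(s+b)) on a signal grid,
    # flat prior; normalised over the grid
    w = [(s + b) ** n * math.exp(-(s + b)) for s in s_grid]
    z = sum(w)
    return [v / z for v in w]

s_grid = [0.01 * i for i in range(3001)]   # s in [0, 30]
post = posterior(n=10, b=3.0, s_grid=s_grid)
mean = sum(s * p for s, p in zip(s_grid, post))
print(round(mean, 2))  # ≈ 8.0
```

With 10 observed counts over an expected background of 3, the flat-prior posterior mean lands near n + 1 - b = 8; the reference prior of the paper modifies this behaviour, particularly at low counts.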

  13. Corrected Four-Sphere Head Model for EEG Signals.

    Science.gov (United States)

    Næss, Solveig; Chintaluri, Chaitanya; Ness, Torbjørn V; Dale, Anders M; Einevoll, Gaute T; Wójcik, Daniel K

    2017-01-01

    The EEG signal is generated by electrical brain cell activity, often described in terms of current dipoles. By applying EEG forward models we can compute the contribution from such dipoles to the electrical potential recorded by EEG electrodes. Forward models are key both for generating understanding and intuition about the neural origin of EEG signals as well as inverse modeling, i.e., the estimation of the underlying dipole sources from recorded EEG signals. Different models of varying complexity and biological detail are used in the field. One such analytical model is the four-sphere model which assumes a four-layered spherical head where the layers represent brain tissue, cerebrospinal fluid (CSF), skull, and scalp, respectively. While conceptually clear, the mathematical expression for the electric potentials in the four-sphere model is cumbersome, and we observed that the formulas presented in the literature contain errors. Here, we derive and present the correct analytical formulas with a detailed derivation. A useful application of the analytical four-sphere model is that it can serve as ground truth to test the accuracy of numerical schemes such as the Finite Element Method (FEM). We performed FEM simulations of the four-sphere head model and showed that they were consistent with the corrected analytical formulas. For future reference we provide scripts for computing EEG potentials with the four-sphere model, both by means of the correct analytical formulas and numerical FEM simulations.

  14. Corrected Four-Sphere Head Model for EEG Signals

    Directory of Open Access Journals (Sweden)

    Solveig Næss

    2017-10-01

Full Text Available The EEG signal is generated by electrical brain cell activity, often described in terms of current dipoles. By applying EEG forward models we can compute the contribution from such dipoles to the electrical potential recorded by EEG electrodes. Forward models are key both for generating understanding and intuition about the neural origin of EEG signals as well as inverse modeling, i.e., the estimation of the underlying dipole sources from recorded EEG signals. Different models of varying complexity and biological detail are used in the field. One such analytical model is the four-sphere model which assumes a four-layered spherical head where the layers represent brain tissue, cerebrospinal fluid (CSF), skull, and scalp, respectively. While conceptually clear, the mathematical expression for the electric potentials in the four-sphere model is cumbersome, and we observed that the formulas presented in the literature contain errors. Here, we derive and present the correct analytical formulas with a detailed derivation. A useful application of the analytical four-sphere model is that it can serve as ground truth to test the accuracy of numerical schemes such as the Finite Element Method (FEM). We performed FEM simulations of the four-sphere head model and showed that they were consistent with the corrected analytical formulas. For future reference we provide scripts for computing EEG potentials with the four-sphere model, both by means of the correct analytical formulas and numerical FEM simulations.

  15. Determination of the X, Y coordinates of a pulsed ultrasonic source of signals

    International Nuclear Information System (INIS)

    Sokolov, B.V.; Shemyakin, V.V.

    1975-01-01

A range of problems in predicting the emergency state of large-scale vessel housings is determined for subsequent solution involving acoustic emission phenomena. The authors specify the position of a given problem and present substantial grounds for selecting the minimum number of group signal receivers for unambiguous calculation of the location of the source. Relationships are obtained between X, Y - the coordinates of the pulse signal source - and the experimentally measured time differences in the recording of signals by group receivers. A criterion is given for selecting the true signal group combination when the receivers simultaneously record waves from several sources. Specific suggestions are made regarding the experimental information to be stored in a central computer for subsequent processing
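The coordinate relationships described above rest on time differences of arrival (TDOA). A brute-force sketch with a hypothetical four-receiver layout and wave speed; the record's paper derives analytical relationships, whereas this sketch simply grid-searches the position that best explains the measured differences:

```python
import math

C = 3.0   # assumed wave speed (arbitrary units)
RX = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]  # receivers

def toa(src, rx):
    # time of arrival from source to receiver
    return math.dist(src, rx) / C

def locate(dts):
    # dts[j]: measured arrival-time difference, receiver j+1 minus receiver 0
    best, best_err = None, float("inf")
    for xi in range(101):
        for yi in range(101):
            p = (float(xi), float(yi))
            err = sum(
                (toa(p, RX[j + 1]) - toa(p, RX[0]) - dt) ** 2
                for j, dt in enumerate(dts)
            )
            if err < best_err:
                best, best_err = p, err
    return best

src = (40.0, 70.0)
dts = [toa(src, RX[j]) - toa(src, RX[0]) for j in (1, 2, 3)]
print(locate(dts))  # (40.0, 70.0)
```

Four receivers overdetermine the two unknowns, which resolves the hyperbola-intersection ambiguity that can arise with the minimum of three.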

  16. Assessment of infrasound signals recorded on seismic stations and infrasound arrays in the western United States using ground truth sources

    Science.gov (United States)

    Park, Junghyun; Hayward, Chris; Stump, Brian W.

    2018-06-01

    Ground truth sources in Utah during 2003-2013 are used to assess the contribution of temporal atmospheric conditions to infrasound detection and the predictive capabilities of atmospheric models. Ground truth sources consist of 28 long duration static rocket motor burn tests and 28 impulsive rocket body demolitions. Automated infrasound detections from a hybrid of regional seismometers and infrasound arrays use a combination of short-term time average/long-term time average ratios and spectral analyses. These detections are grouped into station triads using a Delaunay triangulation network and then associated to estimate phase velocity and azimuth to filter signals associated with a particular source location. The resulting range and azimuth distribution from sources to detecting stations varies seasonally and is consistent with predictions based on seasonal atmospheric models. Impulsive signals from rocket body detonations are observed at greater distances (>700 km) than the extended duration signals generated by the rocket burn test (up to 600 km). Infrasound energy attenuation associated with the two source types is quantified as a function of range and azimuth from infrasound amplitude measurements. Ray-tracing results using Ground-to-Space atmospheric specifications are compared to these observations and illustrate the degree to which the time variations in characteristics of the observations can be predicted over a multiple year time period.
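Estimating phase velocity and azimuth from a station triad, as used in the association step above, amounts to fitting a plane wave to the relative arrival times. A sketch with hypothetical station coordinates and arrival times:

```python
import math

def plane_wave_fit(stations, times):
    # solve dt_i = sx*dx_i + sy*dy_i for the horizontal slowness (sx, sy)
    (x0, y0), (x1, y1), (x2, y2) = stations
    t0, t1, t2 = times
    a11, a12, b1 = x1 - x0, y1 - y0, t1 - t0
    a21, a22, b2 = x2 - x0, y2 - y0, t2 - t0
    det = a11 * a22 - a12 * a21
    sx = (b1 * a22 - b2 * a12) / det
    sy = (a11 * b2 - a21 * b1) / det
    speed = 1.0 / math.hypot(sx, sy)                     # phase velocity
    azimuth = math.degrees(math.atan2(sx, sy)) % 360.0   # propagation direction
    return speed, azimuth

stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]  # km, hypothetical triad
times = [x / 0.34 for (x, y) in stations]          # wave moving east at 0.34 km/s
speed, az = plane_wave_fit(stations, times)
print(round(speed, 2), round(az, 1))  # 0.34 90.0
```

Speeds near 0.34 km/s mark an acoustic (infrasound) arrival, which is how detections propagating at non-acoustic speeds can be filtered out before association to a source.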

  17. Data analysis and source modelling for LISA

    International Nuclear Information System (INIS)

    Shang, Yu

    2014-01-01

Gravitational waves (GWs) are one of the most important predictions of general relativity. Beyond the direct proof of the existence of GWs, there are already several ground-based detectors (such as LIGO, GEO, etc.) and a planned future space mission (LISA) which aim to detect GWs directly. GWs contain a large amount of information about their source; extracting this information can help us uncover the physical properties of the source and even open a new window for understanding the Universe. Hence, GW data analysis will be a challenging task in seeking GWs. In this thesis, I present two works on data analysis for LISA. In the first work, we introduce an extended multimodal genetic algorithm which utilizes the properties of the signal and the detector response function to analyze the data from the third round of the Mock LISA Data Challenge. We have found all five sources present in the data and recovered the coalescence time, chirp mass, mass ratio and sky location with reasonable accuracy. As for the orbital angular momentum and the two spins of the black holes, we have found a large number of widely separated modes in the parameter space with similar maximum likelihood values. The performance of this method is comparable, if not superior, to already existing algorithms. In the second work, we introduce a new phenomenological waveform model for the extreme mass ratio inspiral (EMRI) system. This waveform consists of a set of harmonics with constant amplitude and slowly evolving phase which we decompose in a Taylor series. We use these phenomenological templates to detect the signal in the simulated data and then, assuming a particular EMRI model, estimate the physical parameters of the binary with high precision. The results show that our phenomenological waveform is very effective in the data analysis of EMRI signals.

  18. Modeling Signal-Noise Processes Supports Student Construction of a Hierarchical Image of Sample

    Science.gov (United States)

    Lehrer, Richard

    2017-01-01

    Grade 6 (modal age 11) students invented and revised models of the variability generated as each measured the perimeter of a table in their classroom. To construct models, students represented variability as a linear composite of true measure (signal) and multiple sources of random error. Students revised models by developing sampling…

  19. Improvement of Source Number Estimation Method for Single Channel Signal.

    Directory of Open Access Journals (Sweden)

    Zhi Dong

Full Text Available Source number estimation methods for single-channel signals are investigated and improvements for each method are suggested in this work. Firstly, the single-channel data is converted to multi-channel form by a delay process. Then, algorithms used in array signal processing, such as Gerschgorin's disk estimation (GDE) and minimum description length (MDL), are introduced to estimate the source number of the received signal. Previous results have shown that MDL, based on information theoretic criteria (ITC), performs better than GDE at low SNR. However, it cannot handle signals containing colored noise. On the contrary, the GDE method can eliminate the influence of colored noise, but its performance at low SNR is not satisfactory. In order to resolve these problems and contradictions, this work makes notable improvements to the two methods. A diagonal loading technique is employed to ameliorate the MDL method, and a jackknife technique is applied to optimize the data covariance matrix in order to improve the performance of the GDE method. Simulation results illustrate that the performance of the original methods is improved considerably.
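The MDL criterion at the heart of this record selects the number of sources from the eigenvalue spectrum of the data covariance matrix. A sketch of the criterion itself; the eigenvalues below are illustrative, assumed already computed from the delayed single-channel data:

```python
import math

def mdl(eigvals, n_snapshots):
    # classic MDL: data term compares arithmetic vs geometric mean of the
    # noise eigenvalues; penalty term counts free signal parameters
    p = len(eigvals)
    best_k, best_val = 0, float("inf")
    for k in range(p):
        tail = eigvals[k:]
        m = len(tail)
        g = math.exp(sum(math.log(v) for v in tail) / m)   # geometric mean
        a = sum(tail) / m                                  # arithmetic mean
        val = (n_snapshots * m * math.log(a / g)
               + 0.5 * k * (2 * p - k) * math.log(n_snapshots))
        if val < best_val:
            best_k, best_val = k, val
    return best_k

# two strong source eigenvalues over a flat noise floor
print(mdl([9.0, 5.0, 1.01, 1.0, 0.99, 1.0], n_snapshots=500))  # 2
```

When the trailing eigenvalues are nearly equal, their arithmetic and geometric means coincide and the data term vanishes, so the penalty picks the smallest consistent source count; diagonal loading, as proposed in the paper, reshapes this spectrum at low SNR.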

  20. Signalling network construction for modelling plant defence response.

    Directory of Open Access Journals (Sweden)

    Dragana Miljkovic

Full Text Available Plant defence signalling response against various pathogens, including viruses, is a complex phenomenon. In a resistant interaction a plant cell perceives the pathogen signal, transduces it within the cell and performs a reprogramming of the cell metabolism leading to the arrest of pathogen replication. This work focuses on signalling pathways crucial for the plant defence response, i.e., the salicylic acid, jasmonic acid and ethylene signal transduction pathways, in the Arabidopsis thaliana model plant. The initial signalling network topology was constructed manually by defining the representation formalism, encoding the information from public databases and literature, and composing a pathway diagram. The manually constructed network structure consists of 175 components and 387 reactions. In order to complement the network topology with possibly missing relations, a new approach to automated information extraction from biological literature was developed. This approach, named Bio3graph, allows for automated extraction of biological relations from the literature, resulting in a set of (component1, reaction, component2) triplets and composing a graph structure which can be visualised, compared to the manually constructed topology and examined by the experts. Using a plant defence response vocabulary of components and reaction types, Bio3graph was applied to a set of 9,586 relevant full text articles, resulting in 137 newly detected reactions between the components. Finally, the manually constructed topology and the new reactions were merged to form a network structure consisting of 175 components and 524 reactions. The resulting pathway diagram of plant defence signalling represents a valuable source for further computational modelling and interpretation of omics data. The developed Bio3graph approach, implemented as an executable language processing and graph visualisation workflow, is publicly available at http://ropot.ijs.si/bio3graph/ and can be

  1. Source Localization with Acoustic Sensor Arrays Using Generative Model Based Fitting with Sparse Constraints

    Directory of Open Access Journals (Sweden)

    Javier Macias-Guarasa

    2012-10-01

    Full Text Available This paper presents a novel approach for indoor acoustic source localization using sensor arrays. The proposed solution starts by defining a generative model, designed to explain the acoustic power maps obtained by Steered Response Power (SRP strategies. An optimization approach is then proposed to fit the model to real input SRP data and estimate the position of the acoustic source. Adequately fitting the model to real SRP data, where noise and other unmodelled effects distort the ideal signal, is the core contribution of the paper. Two basic strategies in the optimization are proposed. First, sparse constraints in the parameters of the model are included, enforcing the number of simultaneous active sources to be limited. Second, subspace analysis is used to filter out portions of the input signal that cannot be explained by the model. Experimental results on a realistic speech database show statistically significant localization error reductions of up to 30% when compared with the SRP-PHAT strategies.
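The SRP maps that the generative model explains are formed by delay-and-sum steering. A minimal sketch with a hypothetical linear three-microphone array and a white-noise source; practical implementations use SRP-PHAT weighting in the frequency domain rather than plain time-domain power:

```python
import math
import random

FS = 8000.0              # assumed sample rate, Hz
C = 343.0                # speed of sound, m/s
MICS = [0.0, 0.5, 1.0]   # microphone x-positions on a line, metres

def delay_samples(x, mic):
    # propagation delay from a candidate position x to a microphone, in samples
    return round(abs(x - mic) / C * FS)

def srp_map(signals, candidates):
    powers = []
    for x in candidates:
        d = [delay_samples(x, m) for m in MICS]
        d0 = min(d)
        n = min(len(s) - (di - d0) for s, di in zip(signals, d))
        p = 0.0
        for i in range(n):
            v = sum(s[i + di - d0] for s, di in zip(signals, d))
            p += v * v                     # power of the steered sum
        powers.append(p / n)
    return powers

# synthesise microphone signals for a source at x = 0.3 m
random.seed(7)
src = [random.uniform(-1.0, 1.0) for _ in range(2000)]
signals = [[0.0] * delay_samples(0.3, m) + src for m in MICS]

candidates = [0.1 * k for k in range(11)]
powers = srp_map(signals, candidates)
best = candidates[powers.index(max(powers))]
print(round(best, 1))  # 0.3
```

Only the true position aligns all channels coherently, producing the peak that the paper's generative model is fitted to; noise and reverberation distort this ideal map, motivating the sparse fitting strategy.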

  2. Monitoring alert and drowsy states by modeling EEG source nonstationarity

    Science.gov (United States)

    Hsu, Sheng-Hsiou; Jung, Tzyy-Ping

    2017-10-01

Objective. As a human brain performs various cognitive functions within ever-changing environments, states of the brain characterized by recorded brain activities such as electroencephalogram (EEG) are inevitably nonstationary. The challenges of analyzing nonstationary EEG signals include finding the neurocognitive sources that underlie different brain states and using EEG data to quantitatively assess the state changes. Approach. This study hypothesizes that brain activities under different states, e.g. levels of alertness, can be modeled as distinct compositions of statistically independent sources using independent component analysis (ICA). This study presents a framework to quantitatively assess EEG source nonstationarity and estimate levels of alertness. The framework was tested against EEG data collected from 10 subjects performing a sustained-attention task in a driving simulator. Main results. Empirical results illustrate that EEG signals under alert versus drowsy states, indexed by reaction speeds to driving challenges, can be characterized by distinct ICA models. By quantifying the goodness-of-fit of each ICA model to the EEG data using the model deviation index (MDI), we found that MDIs were significantly correlated with the reaction speeds (r = -0.390 with alertness models and r = 0.449 with drowsiness models) and the opposite correlations indicated that the two models accounted for sources in the alert and drowsy states, respectively. Based on the observed source nonstationarity, this study also proposes an online framework using a subject-specific ICA model trained with an initial (alert) state to track the level of alertness. For classification of alert against drowsy states, the proposed online framework achieved an average area under the curve of 0.745 and compared favorably with a classic power-based approach. Significance. This ICA-based framework provides a new way to study changes of brain states and can be applied to

  3. Acoustic/seismic signal propagation and sensor performance modeling

    Science.gov (United States)

    Wilson, D. Keith; Marlin, David H.; Mackay, Sean

    2007-04-01

    Performance, optimal employment, and interpretation of data from acoustic and seismic sensors depend strongly and in complex ways on the environment in which they operate. Software tools for guiding non-expert users of acoustic and seismic sensors are therefore much needed. However, such tools require that many individual components be constructed and correctly connected together. These components include the source signature and directionality, representation of the atmospheric and terrain environment, calculation of the signal propagation, characterization of the sensor response, and mimicking of the data processing at the sensor. Selection of an appropriate signal propagation model is particularly important, as there are significant trade-offs between output fidelity and computation speed. Attenuation of signal energy, random fading, and (for array systems) variations in wavefront angle-of-arrival should all be considered. Characterization of the complex operational environment is often the weak link in sensor modeling: important issues for acoustic and seismic modeling activities include the temporal/spatial resolution of the atmospheric data, knowledge of the surface and subsurface terrain properties, and representation of ambient background noise and vibrations. Design of software tools that address these challenges is illustrated with two examples: a detailed target-to-sensor calculation application called the Sensor Performance Evaluator for Battlefield Environments (SPEBE) and a GIS-embedded approach called Battlefield Terrain Reasoning and Awareness (BTRA).

  4. Orientation Estimation and Signal Reconstruction of a Directional Sound Source

    DEFF Research Database (Denmark)

    Guarato, Francesco

Previous works in the literature about one-tone or broadband sound sources mainly deal with algorithms and methods developed in order to localize the source and, occasionally, estimate the source bearing angle (with respect to a global reference frame). The problem setting assumes, in these cases…, omnidirectional receivers collecting the acoustic signal from the source: analysis of arrival times in the recordings together with microphone positions and source directivity cues allows to get information about source position and bearing. Moreover, sound sources have been included into sensor systems together…, one for each call emission, were compared to those calculated through a pre-existing technique based on interpolation of sound-pressure levels at microphone locations. The application of the method to the bat calls could provide knowledge on bat behaviour that may be useful for a bat-inspired sensor…

  5. Decoding Problem Gamblers' Signals: A Decision Model for Casino Enterprises.

    Science.gov (United States)

    Ifrim, Sandra

    2015-12-01

The aim of the present study is to offer a validated decision model for casino enterprises. The model enables its users to perform early detection of problem gamblers and fulfill their ethical duty of social cost minimization. To this end, the interpretation of casino customers' nonverbal communication is understood as a signal-processing problem. Indicators of problem gambling recommended by Delfabbro et al. (Identifying problem gamblers in gambling venues: final report, 2007) are combined with the Viterbi algorithm into an interdisciplinary model that helps decode signals emitted by casino customers. Model output consists of a historical path of mental states and cumulated social costs associated with a particular client. Groups of problem and non-problem gamblers were simulated to investigate the model's diagnostic capability and its cost minimization ability. Each group consisted of 26 subjects and was subsequently enlarged to 100 subjects. In approximately 95% of the cases, mental states were correctly decoded for problem gamblers. Statistical analysis using planned contrasts revealed that the model is relatively robust both to the suppression of signals by casino clientele facing gambling problems and to misjudgments made by staff regarding the clients' mental states. Only if the last-mentioned source of error occurs in a very pronounced manner, i.e. judgment is extremely faulty, might cumulated social costs be distorted.
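The Viterbi step of such a decision model can be sketched with a toy hidden Markov model. The states, observations, and probabilities below are hypothetical stand-ins for the paper's indicator-based model:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    # dynamic-programming table: V[t][s] = (best path prob ending in s, prev state)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        V.append({})
        for s in states:
            prob, prev = max(
                (V[-2][p][0] * trans_p[p][s] * emit_p[s][o], p) for p in states
            )
            V[-1][s] = (prob, prev)
    # backtrack the most probable state path
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(V) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return path[::-1]

states = ("calm", "problem")
obs = ("normal", "normal", "agitated", "agitated")
start = {"calm": 0.8, "problem": 0.2}
trans = {"calm": {"calm": 0.9, "problem": 0.1},
         "problem": {"calm": 0.2, "problem": 0.8}}
emit = {"calm": {"normal": 0.85, "agitated": 0.15},
        "problem": {"normal": 0.3, "agitated": 0.7}}
print(viterbi(obs, states, start, trans, emit))
# ['calm', 'calm', 'problem', 'problem']
```

The decoded path is exactly the "historical path of mental states" the abstract describes: a single observation of agitation is not enough to flip the state, but sustained signals are.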

  6. Mathematical Modelling Plant Signalling Networks

    KAUST Repository

    Muraro, D.

    2013-01-01

    During the last two decades, molecular genetic studies and the completion of the sequencing of the Arabidopsis thaliana genome have increased knowledge of hormonal regulation in plants. These signal transduction pathways act in concert through gene regulatory and signalling networks whose main components have begun to be elucidated. Our understanding of the resulting cellular processes is hindered by the complex, and sometimes counter-intuitive, dynamics of the networks, which may be interconnected through feedback controls and cross-regulation. Mathematical modelling provides a valuable tool to investigate such dynamics and to perform in silico experiments that may not be easily carried out in a laboratory. In this article, we firstly review general methods for modelling gene and signalling networks and their application in plants. We then describe specific models of hormonal perception and cross-talk in plants. This mathematical analysis of sub-cellular molecular mechanisms paves the way for more comprehensive modelling studies of hormonal transport and signalling in a multi-scale setting. © EDP Sciences, 2013.

  7. Hyolaryngeal excursion as the physiological source of swallowing accelerometry signals

    International Nuclear Information System (INIS)

    Zoratto, D C B; Chau, T; Steele, C M

    2010-01-01

    Swallowing dysfunction, or dysphagia, is a serious condition that can result from any structural or neurological impairment (such as stroke, neurodegenerative disease or brain injury) that affects the swallowing mechanism. The gold-standard method of instrumental swallowing assessment is an x-ray examination known as the videofluoroscopic swallowing study, which involves radiation exposure. Consequently, there is interest in exploring the potential of less invasive methods, with lesser risks of biohazard, to accurately detect swallowing abnormalities. Accelerometry is one such technique, which measures the epidermal vibration signals on a patient's neck during swallowing. Determining the utility of accelerometry signals for detecting dysphagia requires an understanding of the physiological source of the vibrations that are measured on the neck during swallowing. The purpose of the current study was to determine the extent to which movement of the hyoid bone and larynx contributes to the vibration signal that is registered during swallowing accelerometry. This question was explored by mapping the movement trajectories of the hyoid bone and the arytenoid cartilages from lateral videofluoroscopy recordings collected during thin liquid swallowing, and comparing these trajectories to time-linked signals obtained from a dual-axis accelerometer placed on the neck, just anterior to the cricoid cartilage. Participants for this study included 43 adult patients referred for videofluoroscopic swallowing studies to characterize the nature and severity of suspected neurogenic dysphagia. A software program was created to allow frame-by-frame tracking of structural movement on the videofluoroscopy recordings. These movement data were then compared to the integrated acceleration data using multiple linear regressions. The results concur with previous studies, implicating hyolaryngeal excursion as the primary physiological source of swallowing accelerometry signals, with both

  8. [The source and factors that influence tracheal pulse oximetry signal].

    Science.gov (United States)

    Fan, Xiao-hua; Wei, Wei; Wang, Jian; Mu, Ling; Wang, Li

    2010-03-01

    To investigate the source and factors that influence the tracheal pulse oximetry signal, an adult mongrel dog was intubated after anesthesia. The tracheal tube was modified by attaching a disposable pediatric pulse oximeter to the cuff. The chest of the dog was opened and the red light from the tracheal oximeter was aligned with the deeper artery. Changes in the tracheal pulse oxygen saturation (SptO2) signal were observed after the deeper artery was temporarily occluded. The photoplethysmography (PPG) and readings were recorded at different intracuff pressures. The influence of mechanical ventilation on the signal was also tested and compared with pulse oxygen saturation (SpO2). The SptO2 signal disappeared after the deeper artery was occluded, and it changed significantly with intracuff pressure: a stronger signal appeared under 20-60 cm H2O of intracuff pressure than under 0-10 cm H2O, and the signal with mechanical ventilation differed significantly from that without. The SptO2 signal is thus primarily derived from the deeper arteries around the trachea, not from the tracheal wall. Both intracuff pressure and mechanical ventilation can influence the SptO2 signal; mechanical ventilation mainly changes the PPG.

  9. Impact of the Test Device on the Behavior of the Acoustic Emission Signals: Contribution of the Numerical Modeling to Signal Processing

    Science.gov (United States)

    Issiaka Traore, Oumar; Cristini, Paul; Favretto-Cristini, Nathalie; Pantera, Laurent; Viguier-Pla, Sylvie

    2018-01-01

    In the context of monitoring nuclear safety experiments with the non-destructive testing method of acoustic emission, we study the impact of the test device on the interpretation of the recorded physical signals by using spectral finite element modeling. The numerical results are validated by comparison with real acoustic emission data obtained from previous experiments. The results show that several parameters can have a significant impact on acoustic wave propagation, and thus on the interpretation of the physical signals. The potential position of the source mechanism, the positions of the receivers and the nature of the coolant fluid have to be taken into account when defining a pre-processing strategy for the real acoustic emission signals. To show the relevance of such an approach, we use the results to propose an optimization of the positions of the acoustic emission sensors that reduces the estimation bias of the time delay and thereby improves the localization of the source mechanisms.

  10. Instantaneous and Frequency-Warped Signal Processing Techniques for Auditory Source Separation.

    Science.gov (United States)

    Wang, Avery Li-Chun

    This thesis summarizes several contributions to the areas of signal processing and auditory source separation. The philosophy of Frequency-Warped Signal Processing is introduced as a means for separating the AM and FM contributions to the bandwidth of a complex-valued, frequency-varying sinusoid p(n), transforming it into a signal with slowly-varying parameters. This transformation facilitates the removal of p(n) from an additive mixture while minimizing the amount of damage done to other signal components. The average winding rate of a complex-valued phasor is explored as an estimate of the instantaneous frequency. Theorems are provided showing the robustness of this measure. To implement frequency tracking, a Frequency-Locked Loop algorithm is introduced which uses the complex winding error to update its frequency estimate. The input signal is dynamically demodulated and filtered to extract the envelope. This envelope may then be remodulated to reconstruct the target partial, which may be subtracted from the original signal mixture to yield a new, quickly-adapting form of notch filtering. Enhancements to the basic tracker are made which, under certain conditions, attain the Cramér-Rao bound for the instantaneous frequency estimate. To improve tracking, the novel idea of Harmonic-Locked Loop tracking, using N harmonically constrained trackers, is introduced for tracking harmonic signals, such as voices and certain musical instruments. The estimated fundamental frequency is computed from a maximum-likelihood weighting of the N tracking estimates, making it highly robust. The result is that harmonic signals, such as voices, can be isolated from complex mixtures in the presence of other spectrally overlapping signals. Additionally, since phase information is preserved, the resynthesized harmonic signals may be removed from the original mixtures with relatively little damage to the residual signal. Finally, a new methodology is given for designing linear-phase FIR filters
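
The average-winding-rate idea above lends itself to a compact numerical sketch (the variable names, sampling rate, and test tone are illustrative assumptions, not taken from the thesis): the instantaneous frequency of a complex phasor is estimated as the mean phase increment between successive samples, scaled to Hz.

```python
import cmath
import math

def winding_rate(signal, fs):
    """Estimate the instantaneous frequency (Hz) of a complex phasor as its
    average winding rate: the mean phase increment per sample, scaled by
    the sampling rate."""
    increments = [cmath.phase(b / a) for a, b in zip(signal, signal[1:])]
    return sum(increments) / len(increments) * fs / (2 * math.pi)

# Synthetic 50 Hz complex tone sampled at 1 kHz.
fs, f0 = 1000.0, 50.0
sig = [cmath.exp(2j * math.pi * f0 * n / fs) for n in range(200)]
f_est = winding_rate(sig, fs)
```

On a clean phasor this recovers the tone frequency; a Frequency-Locked Loop would instead feed the per-sample winding error back into a running frequency estimate rather than averaging over the whole record.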

  11. Full-Scale Turbofan Engine Noise-Source Separation Using a Four-Signal Method

    Science.gov (United States)

    Hultgren, Lennart S.; Arechiga, Rene O.

    2016-01-01

    Contributions from the combustor to the overall propulsion noise of civilian transport aircraft are starting to become important due to turbofan design trends and expected advances in the mitigation of other noise sources. During on-ground, static-engine acoustic tests, combustor noise is generally sub-dominant to other engine noise sources because of the absence of in-flight effects. Consequently, noise-source separation techniques are needed to extract combustor-noise information from the total noise signature in order to make further progress. A novel four-signal source-separation method is applied to data from a static, full-scale engine test and compared to previous methods. The new method is, in a sense, a combination of two- and three-signal techniques and represents an attempt to alleviate some of the weaknesses of each of those approaches. This work is supported by the NASA Advanced Air Vehicles Program, Advanced Air Transport Technology Project, Aircraft Noise Reduction Subproject and the NASA Glenn Faculty Fellowship Program.
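
The abstract does not spell out the four-signal algorithm, but the three-signal technique it extends has a well-known core identity: with three sensors observing one coherent source plus mutually independent noise, the source power follows from pairwise cross-statistics as S = C12*C13/C23. A minimal sketch with zero-lag cross-correlations standing in for cross-spectra (the signal model and noise levels are invented for illustration):

```python
import random

def zero_lag_cross(a, b):
    """Zero-lag cross-correlation (a stand-in for a cross-spectrum bin)."""
    return sum(x * y for x, y in zip(a, b)) / len(a)

rng = random.Random(1)
n = 50000
source = [rng.gauss(0, 1) for _ in range(n)]  # shared coherent source, var = 1
x1 = [s + rng.gauss(0, 0.5) for s in source]  # sensor = source + independent noise
x2 = [s + rng.gauss(0, 0.5) for s in source]
x3 = [s + rng.gauss(0, 0.5) for s in source]

c12 = zero_lag_cross(x1, x2)
c13 = zero_lag_cross(x1, x3)
c23 = zero_lag_cross(x2, x3)
source_power = c12 * c13 / c23  # estimates var(source), independent of noise power
```

The independent-noise terms average out of each cross term, so the ratio recovers the coherent source power even though no single sensor observes it cleanly.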

  12. Beamspace dual signal space projection (bDSSP): a method for selective detection of deep sources in MEG measurements

    Science.gov (United States)

    Sekihara, Kensuke; Adachi, Yoshiaki; Kubota, Hiroshi K.; Cai, Chang; Nagarajan, Srikantan S.

    2018-06-01

    Objective. Magnetoencephalography (MEG) has a well-recognized weakness at detecting deeper brain activities. This paper proposes a novel algorithm for selective detection of deep sources by suppressing interference signals from superficial sources in MEG measurements. Approach. The proposed algorithm combines the beamspace preprocessing method with the dual signal space projection (DSSP) interference suppression method. A prerequisite of the proposed algorithm is prior knowledge of the location of the deep sources. The proposed algorithm first derives the basis vectors that span a local region just covering the locations of the deep sources. It then estimates the time-domain signal subspace of the superficial sources by using the projector composed of these basis vectors. Signals from the deep sources are extracted by projecting the row space of the data matrix onto the direction orthogonal to the signal subspace of the superficial sources. Main results. Compared with the previously proposed beamspace signal space separation (SSS) method, the proposed algorithm is capable of suppressing much stronger interference from superficial sources. This capability is demonstrated in our computer simulation as well as experiments using phantom data. Significance. The proposed bDSSP algorithm can be a powerful tool in studies of physiological functions of midbrain and deep brain structures.
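
At its core, the interference-suppression step projects the data onto the orthogonal complement of the estimated superficial-source subspace. A toy sketch of that projection with hand-picked three-dimensional vectors (the directions and the mixture are illustrative assumptions; the real algorithm estimates the subspace from beamspace-preprocessed data):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def scale(a, c):
    return [c * x for x in a]

def project_out(data, basis):
    """Remove the component of each data vector lying in span(basis).
    basis must be orthonormal (here a single unit vector for simplicity)."""
    out = []
    for v in data:
        r = v
        for b in basis:
            r = sub(r, scale(b, dot(r, b)))
        out.append(r)
    return out

# Hypothetical snapshot: superficial interference along [1,0,0],
# deep-source signal along [0,0,1].
interference_dir = [[1.0, 0.0, 0.0]]
measurement = [[2.0, 0.0, 0.5]]  # mixture of both components
cleaned = project_out(measurement, interference_dir)
```

After projection the interference component is removed exactly while the orthogonal deep-source component survives, which is the geometric intuition behind the DSSP-style suppression.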

  13. Design of acoustic logging signal source of imitation based on field programmable gate array

    Science.gov (United States)

    Zhang, K.; Ju, X. D.; Lu, J. Q.; Men, B. Y.

    2014-08-01

    An acoustic logging signal source of imitation is designed and realized, based on a Field Programmable Gate Array (FPGA), to improve the efficiency of examining and repairing acoustic logging tools during research and field application, and to inspect and verify acoustic receiving circuits and corresponding algorithms. The design of this signal source comprises hardware and software. The hardware design uses an FPGA as the control core: four signals are generated by reading Random Access Memory (RAM) data stored inside the FPGA and then passing the data through digital-to-analog conversion, amplification, smoothing and so on. The software design uses VHDL, a hardware description language, to program the FPGA. Experiments illustrate that the signal-to-noise ratio of the signal source is high, the waveforms are stable, and its amplitude, frequency and delay adjustments accord with the characteristics of real acoustic logging waveforms. These adjustments can be used to imitate influences on sonic logging received waveforms caused by many kinds of factors, such as the spacing and span of acoustic tools, the sonic speeds of different layers and fluids, and the acoustic attenuations of different cementation planes.
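
The RAM-lookup-then-DAC scheme described above amounts to wavetable playback with amplitude, frequency, and delay controls. A software sketch of the amplitude and delay adjustments (the waveform parameters are invented stand-ins for the stored logging waveforms):

```python
import math

def build_wavetable(n_samples, freq, fs, decay):
    """Hypothetical stand-in for the RAM contents: a decaying sinusoid
    resembling an acoustic-logging received waveform."""
    return [math.exp(-decay * n / fs) * math.sin(2 * math.pi * freq * n / fs)
            for n in range(n_samples)]

def play(table, amplitude=1.0, delay_samples=0):
    """Mimic the source's amplitude and delay adjustments: prepend zeros
    for the delay and scale the samples, as the FPGA would before the
    digital-to-analog conversion stage."""
    return [0.0] * delay_samples + [amplitude * x for x in table]

table = build_wavetable(n_samples=256, freq=10e3, fs=200e3, decay=2000.0)
out = play(table, amplitude=0.5, delay_samples=8)
```

Varying `delay_samples` imitates different tool spacings and layer sonic speeds, and varying `amplitude` imitates different attenuations, which is the role the adjustments serve in the real instrument.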

  14. Evaluation of the influence of uncertain forward models on the EEG source reconstruction problem

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole

    2009-01-01

    in the different areas of the brain when noise is present. Results Due to mismatch between the true and experimental forward model, the reconstruction of the sources is determined by the angles between the i'th forward field associated with the true source and the j'th forward field in the experimental forward...... representation of the signal. Conclusions This analysis demonstrated that caution is needed when evaluating the source estimates in different brain regions. Moreover, we demonstrated the importance of reliable forward models, which may be used as a motivation for including the forward model uncertainty...

  15. A knowledge representation meta-model for rule-based modelling of signalling networks

    Directory of Open Access Journals (Sweden)

    Adrien Basso-Blandin

    2016-03-01

    Full Text Available The study of cellular signalling pathways and their deregulation in disease states, such as cancer, is a large and extremely complex task. Indeed, these systems involve many parts and processes but are studied piecewise and their literatures and data are consequently fragmented, distributed and sometimes—at least apparently—inconsistent. This makes it extremely difficult to build significant explanatory models with the result that effects in these systems that are brought about by many interacting factors are poorly understood. The rule-based approach to modelling has shown some promise for the representation of the highly combinatorial systems typically found in signalling where many of the proteins are composed of multiple binding domains, capable of simultaneous interactions, and/or peptide motifs controlled by post-translational modifications. However, the rule-based approach requires highly detailed information about the precise conditions for each and every interaction which is rarely available from any one single source. Rather, these conditions must be painstakingly inferred and curated, by hand, from information contained in many papers—each of which contains only part of the story. In this paper, we introduce a graph-based meta-model, attuned to the representation of cellular signalling networks, which aims to ease this massive cognitive burden on the rule-based curation process. This meta-model is a generalization of that used by Kappa and BNGL which allows for the flexible representation of knowledge at various levels of granularity. In particular, it allows us to deal with information which has either too little, or too much, detail with respect to the strict rule-based meta-model. Our approach provides a basis for the gradual aggregation of fragmented biological knowledge extracted from the literature into an instance of the meta-model from which we can define an automated translation into executable Kappa programs.

  16. Discrete dynamic modeling of cellular signaling networks.

    Science.gov (United States)

    Albert, Réka; Wang, Rui-Sheng

    2009-01-01

    Understanding signal transduction in cellular systems is a central issue in systems biology. Numerous experiments from different laboratories generate an abundance of individual components and causal interactions mediating environmental and developmental signals. However, for many signal transduction systems there is insufficient information on the overall structure and the molecular mechanisms involved in the signaling network. Moreover, lack of kinetic and temporal information makes it difficult to construct quantitative models of signal transduction pathways. Discrete dynamic modeling, combined with network analysis, provides an effective way to integrate fragmentary knowledge of regulatory interactions into a predictive mathematical model which is able to describe the time evolution of the system without the requirement for kinetic parameters. This chapter introduces the fundamental concepts of discrete dynamic modeling, particularly focusing on Boolean dynamic models. We describe this method step-by-step in the context of cellular signaling networks. Several variants of Boolean dynamic models including threshold Boolean networks and piecewise linear systems are also covered, followed by two examples of successful application of discrete dynamic modeling in cell biology.
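
A synchronous Boolean model of the kind described can be written in a few lines. The three-node network below (a ligand, a receptor, and a self-inhibiting target) is a hypothetical example for illustration, not one of the chapter's case studies:

```python
def step(state, rules):
    """Apply every update rule simultaneously (synchronous update scheme)."""
    return {node: rule(state) for node, rule in rules.items()}

def trajectory(state, rules, n_steps):
    """Return the sequence of states visited, including the start state."""
    states = [dict(state)]
    for _ in range(n_steps):
        state = step(state, rules)
        states.append(dict(state))
    return states

# Hypothetical network: ligand L activates receptor R; R activates target T,
# which inhibits itself on the next step (negative feedback).
rules = {
    "L": lambda s: s["L"],                  # external input, held constant
    "R": lambda s: s["L"],                  # receptor follows the ligand
    "T": lambda s: s["R"] and not s["T"],   # activation with self-inhibition
}
start = {"L": True, "R": False, "T": False}
traj = trajectory(start, rules, 4)
```

Even this tiny model shows the qualitative behavior Boolean dynamics captures without kinetic parameters: the negative feedback makes T oscillate once the signal propagates, rather than settling to a fixed point.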

  17. Note: The design of thin gap chamber simulation signal source based on field programmable gate array

    International Nuclear Information System (INIS)

    Hu, Kun; Wang, Xu; Li, Feng; Jin, Ge; Lu, Houbing; Liang, Futian

    2015-01-01

    The Thin Gap Chamber (TGC) is an important part of the ATLAS detector at the LHC accelerator. Targeting the features of the TGC detector's output signal, we have designed a simulation signal source. The core of the design is based on a field programmable gate array that randomly outputs 256 channels of simulated signals. The signals are generated by a true random number generator whose source of randomness originates from the timing jitter in ring oscillators. The experimental results show that the random numbers are uniform in histogram and that the whole system has high reliability

  18. Design of acoustic logging signal source of imitation based on field programmable gate array

    International Nuclear Information System (INIS)

    Zhang, K; Ju, X D; Lu, J Q; Men, B Y

    2014-01-01

    An acoustic logging signal source of imitation is designed and realized, based on a Field Programmable Gate Array (FPGA), to improve the efficiency of examining and repairing acoustic logging tools during research and field application, and to inspect and verify acoustic receiving circuits and corresponding algorithms. The design of this signal source comprises hardware and software. The hardware design uses an FPGA as the control core: four signals are generated by reading Random Access Memory (RAM) data stored inside the FPGA and then passing the data through digital-to-analog conversion, amplification, smoothing and so on. The software design uses VHDL, a hardware description language, to program the FPGA. Experiments illustrate that the signal-to-noise ratio of the signal source is high, the waveforms are stable, and its amplitude, frequency and delay adjustments accord with the characteristics of real acoustic logging waveforms. These adjustments can be used to imitate influences on sonic logging received waveforms caused by many kinds of factors, such as the spacing and span of acoustic tools, the sonic speeds of different layers and fluids, and the acoustic attenuations of different cementation planes. (paper)

  19. Model Based Beamforming and Bayesian Inversion Signal Processing Methods for Seismic Localization of Underground Source

    DEFF Research Database (Denmark)

    Oh, Geok Lian

    properties such as the elastic wave speeds and soil densities. One processing method is casting the estimation problem into an inverse problem to solve for the unknown material parameters. The forward model for the seismic signals used in the literatures include ray tracing methods that consider only...... density values of the discretized ground medium, which leads to time-consuming computations and instability behaviour of the inversion process. In addition, the geophysics inverse problem is generally ill-posed due to non-exact forward model that introduces errors. The Bayesian inversion method through...... the first arrivals of the reflected compressional P-waves from the subsurface structures, or 3D elastic wave models that model all the seismic wave components. The ray tracing forward model formulation is linear, whereas the full 3D elastic wave model leads to a nonlinear inversion problem. In this Ph...

  20. Near- Source, Seismo-Acoustic Signals Accompanying a NASCAR Race at the Texas Motor Speedway

    Science.gov (United States)

    Stump, B. W.; Hayward, C.; Underwood, R.; Howard, J. E.; MacPhail, M. D.; Golden, P.; Endress, A.

    2014-12-01

    Near-source, seismo-acoustic observations provide a unique opportunity to characterize urban sources, remotely sense human activities including vehicular traffic, and monitor large engineering structures. Energy coupled separately into the solid earth and the atmosphere constrains not only the location of these sources but also the physics of the generating process. The conditions and distances at which these observations can be made depend on both local geological conditions and the atmospheric conditions at the time of the observations. In order to address this range of topics, an empirical seismo-acoustic study was undertaken in and around the Texas Motor Speedway in the Dallas-Ft. Worth area during the first week of April 2014, at which time a range of activities associated with a series of NASCAR races occurred. Nine seismic sensors were deployed around the 1.5-mile track to document the direct-coupled seismic energy from the passage of the cars and other vehicles on the track. Six infrasound sensors were deployed on a rooftop in a rectangular array configuration designed to provide high-frequency beamforming for acoustic signals. Finally, a five-element infrasound array was deployed outside the track in order to characterize how the signals propagate away from the sources in the near-source region. Signals recovered from within the track were able to track and characterize the motion of a variety of vehicles during the race weekend, including individual racecars. Seismic data sampled at 1000 sps documented strong Doppler effects as the cars approached and moved away from individual sensors. Faint seismic signals arrived at seismic velocities, but local acoustic-to-seismic coupling, as supported by the acoustic observations, generated the majority of the seismic signals. Actual seismic ground motions were small, as demonstrated by the dominance of regional seismic signals from a magnitude 4.0 earthquake that arrived at

  1. Challenges Handling Magnetospheric and Ionospheric Signals in Internal Geomagnetic Field Modelling

    DEFF Research Database (Denmark)

    Finlay, Chris; Lesur, V.; Thébault, E.

    2017-01-01

    systems in the ionosphere and magnetosphere. In order to fully exploit magnetic data to probe the physical properties and dynamics of the Earth’s interior, field models with suitable treatments of external sources, and their associated induced signals, are essential. Here we review the methods presently......-by-track analysis to characterize magnetospheric field fluctuations, differences in internal field models that result from alternative treatments of the quiet-time ionospheric field, and challenges associated with rapidly changing, but spatially correlated, magnetic signatures of polar cap current systems. Possible...

  2. Large signal S-parameters: modeling and radiation effects in microwave power transistors

    International Nuclear Information System (INIS)

    Graham, E.D. Jr.; Chaffin, R.J.; Gwyn, C.W.

    1973-01-01

    Microwave power transistors are usually characterized by measuring the source and load impedances, efficiency, and power output at a specified frequency and bias condition in a tuned circuit. These measurements provide limited data for circuit design and yield essentially no information concerning broadbanding possibilities. Recently, a method using large signal S-parameters has been developed which provides a rapid and repeatable means for measuring microwave power transistor parameters. These large signal S-parameters have been successfully used to design rf power amplifiers. Attempts at modeling rf power transistors have in the past been restricted to a modified Ebers-Moll procedure with numerous adjustable model parameters. The modified Ebers-Moll model is further complicated by the inclusion of package parasitics. In the present paper an exact one-dimensional device analysis code has been used to model the performance of the transistor chip. This code has been integrated into the SCEPTRE circuit analysis code such that chip, package and circuit performance can be coupled together in the analysis. Using this computational tool, rf transistor performance has been examined with particular attention given to the theoretical validity of large-signal S-parameters and the effects of nuclear radiation on device parameters. (auth)

  3. Modeling random telegraph signal noise in CMOS image sensor under low light based on binomial distribution

    International Nuclear Information System (INIS)

    Zhang Yu; Wang Guangyi; Lu Xinmiao; Hu Yongcai; Xu Jiangtao

    2016-01-01

    The random telegraph signal noise in the pixel source follower MOSFET is the principal component of the noise in a CMOS image sensor under low light. In this paper, a physical and statistical model of the random telegraph signal noise in the pixel source follower, based on the binomial distribution, is set up. The number of electrons captured or released by the oxide traps in unit time is described by random variables which obey the binomial distribution. As a result, the output states and the corresponding probabilities of the first and second samples of the correlated double sampling circuit are acquired. The standard deviation of the output states after the correlated double sampling circuit can be obtained accordingly. In the simulation section, one hundred thousand samples of the source follower MOSFET have been simulated, and the simulation results show that the proposed model has similar statistical characteristics to the existing models under the effect of the channel length and the density of the oxide traps. Moreover, the noise histogram of the proposed model has been evaluated at different environmental temperatures. (paper)
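
The binomial picture can be checked with a quick Monte Carlo sketch: each oxide trap is occupied independently with some probability at each of the two correlated-double-sampling (CDS) samples, and the CDS output is their difference, whose variance should be 2*n*p*(1-p). All parameter values below are illustrative, not those of the paper:

```python
import random
import statistics

def cds_rts_samples(n_traps, p_occupied, n_pixels, gain=1.0, seed=0):
    """Sketch of the binomial RTS model: each trap is occupied
    independently with probability p_occupied at each CDS sample; the CDS
    output is the difference of the two samples (in electron units)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_pixels):
        s1 = sum(rng.random() < p_occupied for _ in range(n_traps))
        s2 = sum(rng.random() < p_occupied for _ in range(n_traps))
        out.append(gain * (s2 - s1))
    return out

samples = cds_rts_samples(n_traps=4, p_occupied=0.3, n_pixels=20000)
sigma = statistics.pstdev(samples)
# Difference of two independent Binomial(n, p) draws has variance 2*n*p*(1-p).
expected_sigma = (2 * 4 * 0.3 * 0.7) ** 0.5
```

The simulated standard deviation converges to the analytic value, which is the kind of consistency check the paper's comparison against existing models performs at larger scale.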

  4. Signals and Systems in Biomedical Engineering Signal Processing and Physiological Systems Modeling

    CERN Document Server

    Devasahayam, Suresh R

    2013-01-01

    The use of digital signal processing is ubiquitous in the field of physiology and biomedical engineering. The application of such mathematical and computational tools requires a formal or explicit understanding of physiology. Formal models and analytical techniques are interlinked in physiology as in any other field. This book takes a unitary approach to physiological systems, beginning with signal measurement and acquisition, followed by signal processing, linear systems modelling, and computer simulations. The signal processing techniques range across filtering, spectral analysis and wavelet analysis. Emphasis is placed on fundamental understanding of the concepts as well as solving numerical problems. Graphs and analogies are used extensively to supplement the mathematics. Detailed models of nerve and muscle at the cellular and systemic levels provide examples for the mathematical methods and computer simulations. Several of the models are sufficiently sophisticated to be of value in understanding real wor...

  5. Realization of rapid debugging for detection circuit of optical fiber gas sensor: Using an analog signal source

    Science.gov (United States)

    Tian, Changbin; Chang, Jun; Wang, Qiang; Wei, Wei; Zhu, Cunguang

    2015-03-01

    An optical fiber gas sensor mainly consists of two parts: the optical part and the detection circuit. When debugging the detection circuit, the optical part usually serves as the signal source. However, under debugging conditions the optical part is easily influenced by many factors: fluctuations of the ambient temperature or driving current cause instability in the wavelength and intensity of the laser; for a dual-beam sensor, different bends and stresses of the optical fiber lead to fluctuations in intensity and phase; and intensity noise from the collimator, coupler, and other optical devices in the system also degrades the purity of the optical-part-based signal source. In order to dramatically improve the debugging efficiency of the detection circuit and shorten the period of research and development, this paper describes an analog signal source consisting of a single chip microcomputer (SCM), an amplifier circuit, and a voltage-to-current conversion circuit. It can be used for rapid debugging of the detection circuit of the optical fiber gas sensor in place of the optical-part-based signal source. This analog signal source performs well and has many other advantages, such as simple operation, small size, and light weight.

  6. Modeling the photoacoustic signal during the porous silicon formation

    Science.gov (United States)

    Ramirez-Gutierrez, C. F.; Castaño-Yepes, J. D.; Rodriguez-García, M. E.

    2017-01-01

    Within this work, the kinetics of the growing stage of porous silicon (PS) during the etching process was studied using the photoacoustic technique. A p-type Si wafer with low resistivity was used as the substrate. An extension of the Rosencwaig and Gersho model is proposed in order to analyze the temporal changes that take place in the amplitude of the photoacoustic signal during PS growth. The solution of the heat equation takes into account the modulated laser beam, the changes in the reflectance of the PS-backing heterostructure, the electrochemical reaction, and the Joule effect as thermal sources. The model includes the time-dependence of the sample thickness during the electrochemical etching of PS. The changes in the reflectance are identified with the laser reflections in the internal layers of the system. The reflectance is modeled by an additional sinusoidal-monochromatic light source whose modulation frequency is related to the velocity of the PS growth. The chemical reaction and the DC components of the heat sources are taken as average values from the experimental data. The theoretical results are in agreement with the experimental data and hence provide a method to determine variables of the PS growth, such as the etching velocity and the thickness of the porous layer during the growing process.

  7. A parametric framework for modelling of bioelectrical signals

    CERN Document Server

    Mughal, Yar Muhammad

    2016-01-01

    This book examines non-invasive, electrical-based methods for disease diagnosis and assessment of heart function. In particular, a formalized signal model is proposed since this offers several advantages over methods that rely on measured data alone. By using a formalized representation, the parameters of the signal model can be easily manipulated and/or modified, thus providing mechanisms that allow researchers to reproduce and control such signals. In addition, having such a formalized signal model makes it possible to develop computer tools that can be used for manipulating and understanding how signal changes result from various heart conditions, as well as for generating input signals for experimenting with and evaluating the performance of e.g. signal extraction methods. The work focuses on bioelectrical information, particularly electrical bio-impedance (EBI). Once the EBI has been measured, the corresponding signals have to be modelled for analysis. This requires a structured approach in order to move...

  8. Identification of the excitation source of the pressure vessel vibration in a Soviet built WWER PWR with signal transmission path analysis

    International Nuclear Information System (INIS)

    Antonopoulos-Domis, M.; Mourtzanos, K.; Por, G.

    1996-01-01

    Signal transmission path analysis via multivariate autoregressive modelling was applied to signals recorded at a WWER power reactor (Paks reactor, Hungary). The core is equipped with strings of self-powered neutron detectors (SPNDs); each string has seven SPNDs. The signals were high-pass filtered with a cut-off at 0.03 Hz and low-pass filtered with a cut-off at 25 Hz. The analysis suggests that the source of excitation of all signals at 25 Hz is main coolant pump vibration. It was confirmed that the main coolant pumps vibrate at this frequency due to a bearing problem. Signal transmission path analysis also suggests direct paths from outlet coolant pressure to inlet coolant pressure and to in-core neutron detectors at the upper part of the core. (author)

  9. Transfer functions for protein signal transduction: application to a model of striatal neural plasticity.

    Directory of Open Access Journals (Sweden)

    Gabriele Scheler

    Full Text Available We present a novel formulation for biochemical reaction networks in the context of protein signal transduction. The model consists of input-output transfer functions, which are derived from differential equations, using stable equilibria. We select a set of "source" species, which are interpreted as input signals. Signals are transmitted to all other species in the system (the "target" species with a specific delay and with a specific transmission strength. The delay is computed as the maximal reaction time until a stable equilibrium for the target species is reached, in the context of all other reactions in the system. The transmission strength is the concentration change of the target species. The computed input-output transfer functions can be stored in a matrix, fitted with parameters, and even recalled to build dynamical models on the basis of state changes. By separating the temporal and the magnitudinal domain we can greatly simplify the computational model, circumventing typical problems of complex dynamical systems. The transfer function transformation of biochemical reaction systems can be applied to mass-action kinetic models of signal transduction. The paper shows that this approach yields significant novel insights while remaining a fully testable and executable dynamical model for signal transduction. In particular we can deconstruct the complex system into local transfer functions between individual species. As an example, we examine modularity and signal integration using a published model of striatal neural plasticity. The modularizations that emerge correspond to a known biological distinction between calcium-dependent and cAMP-dependent pathways. Remarkably, we found that overall interconnectedness depends on the magnitude of inputs, with higher connectivity at low input concentrations and significant modularization at moderate to high input concentrations. This general result, which directly follows from the properties of
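
The delay/strength decomposition described above can be illustrated on the simplest mass-action step, dx/dt = k_on*s - k_off*x: the transmission strength is the equilibrium shift k_on*s/k_off and the delay is the settling time to that equilibrium. A sketch with invented rate constants (the paper's actual reaction networks are far larger):

```python
def simulate(source, k_on, k_off, dt=0.001, t_max=10.0, tol=1e-3):
    """Euler integration of dx/dt = k_on*source - k_off*x from x = 0.
    Returns the final value and the time at which x first settles within
    tol of the analytic stable equilibrium (the 'delay')."""
    x, t, settle = 0.0, 0.0, None
    x_eq = k_on * source / k_off  # analytic stable equilibrium
    while t < t_max:
        x += dt * (k_on * source - k_off * x)
        t += dt
        if settle is None and abs(x - x_eq) < tol:
            settle = t
    return x, settle

# Source species at concentration 1.0 drives the target with k_on = 2, k_off = 1,
# so the transmission strength (equilibrium shift of the target) is 2.0.
x_final, delay = simulate(source=1.0, k_on=2.0, k_off=1.0)
```

Tabulating such (delay, strength) pairs for every source/target combination yields exactly the kind of transfer-function matrix the paper builds, while discarding the transient dynamics in between.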

  10. Sources of extracellular tau and its signaling.

    Science.gov (United States)

    Avila, Jesús; Simón, Diana; Díaz-Hernández, Miguel; Pintor, Jesús; Hernández, Félix

    2014-01-01

    The pathology associated with tau protein, tauopathy, has been recently analyzed in different disorders, leading to the suggestion that intracellular and extracellular tau may itself be the principal agent in the transmission and spreading of tauopathies. Tau pathology is based on an increase in the amount of tau, an increase in phosphorylated tau, and/or an increase in aggregated tau. Indeed, phosphorylated tau protein is the main component of tau aggregates, such as the neurofibrillary tangles present in the brain of Alzheimer's disease patients. It has been suggested that intracellular tau could be toxic to neurons in its phosphorylated and/or aggregated form. However, extracellular tau could also damage neurons and since neuronal death is widespread in Alzheimer's disease, mainly among cholinergic neurons, these cells may represent a possible source of extracellular tau. However, other sources of extracellular tau have been proposed that are independent of cell death. In addition, several ways have been proposed for cells to interact with, transmit, and spread extracellular tau, and to transduce signals mediated by this tau. In this work, we will discuss the role of extracellular tau in the spreading of the tau pathology.

  11. Probabilistic blind deconvolution of non-stationary sources

    DEFF Research Database (Denmark)

    Olsson, Rasmus Kongsgaard; Hansen, Lars Kai

    2004-01-01

    We solve a class of blind signal separation problems using a constrained linear Gaussian model. The observed signal is modelled by a convolutive mixture of colored noise signals with additive white noise. We derive a time-domain EM algorithm `KaBSS' which estimates the source signals...

  12. Added-value joint source modelling of seismic and geodetic data

    Science.gov (United States)

    Sudhaus, Henriette; Heimann, Sebastian; Walter, Thomas R.; Krueger, Frank

    2013-04-01

    In tectonically active regions earthquake source studies strongly support the analysis of current faulting processes, as they reveal the location and geometry of active faults, the average slip released, and more. For source modelling of shallow, moderate to large earthquakes, a combination of geodetic (GPS, InSAR) and seismic data is often used. A truly joint use of these data, however, usually takes place only on a higher modelling level, where some of the first-order characteristics (time, centroid location, fault orientation, moment) have already been fixed. These required basis model parameters have to be given, assumed or inferred in a previous, separate and highly non-linear modelling step using one of these data sets alone. We present a new earthquake rupture model implementation that realizes a fully combined data integration of surface displacement measurements and seismic data in a non-linear optimization of simple but extended planar ruptures. The model implementation allows for fast forward calculations of full seismograms and surface deformation and therefore enables us to use Monte Carlo global search algorithms. Furthermore, we benefit from the complementary character of seismic and geodetic data, e.g. the precise constraint on source location provided by geodetic data and the sensitivity of seismic data to moment release at greater depth. These increased constraints from the combined dataset make optimizations efficient, even for larger model parameter spaces and with a very limited number of a priori assumptions about the source. A vital part of our approach is rigorous data weighting based on the empirically estimated data errors. We construct full data error variance-covariance matrices for geodetic data to account for correlated data noise and also weight the seismic data based on their signal-to-noise ratio. The estimation of the data errors and the fast forward modelling opens the door for Bayesian inferences of the source
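
    The data-weighting scheme described above, full variance-covariance matrices for geodetic data plus SNR-based weights for seismic traces, amounts to a weighted least-squares objective. A toy sketch (not the authors' implementation; all values are illustrative):

    ```python
    # Joint misfit idea: geodetic residuals weighted by the inverse of a
    # full error variance-covariance matrix (correlated noise), seismic
    # residuals weighted by their signal-to-noise ratio, summed into one
    # objective for a Monte Carlo search.

    def misfit_geodetic(r, C):
        """r^T C^-1 r for a 2x2 covariance C, inverted by hand."""
        (a, b), (c, d) = C
        det = a * d - b * c
        inv = [[d / det, -b / det], [-c / det, a / det]]
        Cinv_r = [inv[0][0] * r[0] + inv[0][1] * r[1],
                  inv[1][0] * r[0] + inv[1][1] * r[1]]
        return r[0] * Cinv_r[0] + r[1] * Cinv_r[1]

    def misfit_joint(r_geo, C_geo, r_seis, snr):
        # seismic traces weighted by SNR, then summed with the geodetic term
        seis = sum(w * res ** 2 for res, w in zip(r_seis, snr))
        return misfit_geodetic(r_geo, C_geo) + seis

    # Uncorrelated unit-variance case reduces to a plain sum of squares:
    print(misfit_geodetic([1.0, 2.0], [[1.0, 0.0], [0.0, 1.0]]))  # -> 5.0
    ```

    Off-diagonal terms in `C_geo` are exactly what down-weights spatially correlated InSAR noise that a diagonal weighting would mistake for independent information.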

  13. Predicting the performance of a power amplifier using large-signal circuit simulations of an AlGaN/GaN HFET model

    Science.gov (United States)

    Bilbro, Griff L.; Hou, Danqiong; Yin, Hong; Trew, Robert J.

    2009-02-01

    We have quantitatively modeled the conduction current and charge storage of an HFET in terms of its physical dimensions and material properties. For DC or small-signal RF operation, no adjustable parameters are necessary to predict the terminal characteristics of the device. Linear performance measures such as small-signal gain and input admittance can be predicted directly from the geometric structure and material properties assumed for the device design. We have validated our model at low frequency against experimental I-V measurements and against two-dimensional device simulations. We discuss our recent extension of the model to include a larger class of electron velocity-field curves, as well as its recent reformulation to facilitate implementation in commercial large-signal high-frequency circuit simulators. Large-signal RF operation is more complex. First, the highest CW microwave power is fundamentally bounded by a brief, reversible channel breakdown in each RF cycle. Second, the highest experimental measurements of efficiency, power, or linearity always require harmonic load pull and possibly also harmonic source pull. Presently, our model accounts for these facts with an adjustable breakdown voltage and with adjustable load and source impedances for the fundamental frequency and its harmonics. This has allowed us to validate our model under large-signal RF conditions by simultaneously fitting experimental measurements of output power, gain, and power-added efficiency of real devices. We show that the resulting model can be used to compare alternative device designs in terms of their large-signal performance, such as their output power at 1 dB gain compression or their third-order intercept points. In addition, the model provides insight into new device physics features enabled by the unprecedented current and voltage levels of AlGaN/GaN HFETs, including non-ohmic resistance in the source access regions and partial depletion of

  14. A technique for the deconvolution of the pulse shape of acoustic emission signals back to the generating defect source

    International Nuclear Information System (INIS)

    Houghton, J.R.; Packman, P.F.; Townsend, M.A.

    1976-01-01

    Acoustic emission signals recorded after passage through the instrumentation system can be deconvoluted to produce signal traces indicative of those at the generating source, and these traces can be used to identify characteristics of the source

  15. Regression models of reactor diagnostic signals

    International Nuclear Information System (INIS)

    Vavrin, J.

    1989-01-01

    The application of an autoregression model, the simplest regression model of diagnostic signals, is described for the experimental analysis of diagnostic systems and for in-service monitoring of normal and anomalous conditions and their diagnostics. A diagnostic method based on a regression-type diagnostic data base and on regression spectral diagnostics is presented and applied to neutron noise signals from anomalous modes in the experimental fuel assembly of a reactor. (author)
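
    The basic mechanics of autoregressive diagnostics can be sketched as follows: fit an AR model to a reference signal and monitor the prediction residuals, which grow under anomalous conditions. This minimal sketch (illustrative, not the paper's implementation) fits an AR(1) coefficient by least squares:

    ```python
    # AR(1) diagnostics sketch: a reference model x[t] ~ a*x[t-1] is fitted
    # once; in monitoring, large residuals against this model flag
    # anomalous signal behaviour.

    def fit_ar1(x):
        """Least-squares AR(1) coefficient a in x[t] ~ a*x[t-1]."""
        num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
        den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
        return num / den

    def residuals(x, a):
        return [x[t] - a * x[t - 1] for t in range(1, len(x))]

    # A noiseless AR(1) process is recovered exactly:
    x = [1.0]
    for _ in range(50):
        x.append(0.9 * x[-1])
    a = fit_ar1(x)
    print(round(a, 6))    # -> 0.9
    ```

    In practice higher model orders are used and the residual spectrum, not just the residual amplitude, carries the diagnostic information.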

  16. BrainSignals Revisited: Simplifying a Computational Model of Cerebral Physiology.

    Directory of Open Access Journals (Sweden)

    Matthew Caldwell

    Full Text Available Multimodal monitoring of brain state is important both for the investigation of healthy cerebral physiology and to inform clinical decision making in conditions of injury and disease. Near-infrared spectroscopy is an instrument modality that allows non-invasive measurement of several physiological variables of clinical interest, notably haemoglobin oxygenation and the redox state of the metabolic enzyme cytochrome c oxidase. Interpreting such measurements requires the integration of multiple signals from different sources to try to understand the physiological states giving rise to them. We have previously published several computational models to assist with such interpretation. Like many models in the realm of Systems Biology, these are complex and dependent on many parameters that can be difficult or impossible to measure precisely. Taking one such model, BrainSignals, as a starting point, we have developed several variant models in which specific regions of complexity are substituted with much simpler linear approximations. We demonstrate that model behaviour can be maintained whilst achieving a significant reduction in complexity, provided that the linearity assumptions hold. The simplified models have been tested for applicability with simulated data and experimental data from healthy adults undergoing a hypercapnia challenge, but relevance to different physiological and pathophysiological conditions will require specific testing. In conditions where the simplified models are applicable, their greater efficiency has potential to allow their use at the bedside to help interpret clinical data in near real-time.

  17. Diabetes: Models, Signals and control

    Science.gov (United States)

    Cobelli, C.

    2010-07-01

    Diabetes and its complications impose significant economic consequences on individuals, families, health systems, and countries. The control of diabetes is an interdisciplinary endeavor, which includes significant components of modeling, signal processing and control. Models: first, I will discuss the minimal (coarse) models which describe the key components of the system functionality and are capable of measuring crucial processes of glucose metabolism and insulin control in health and diabetes; then, the maximal (fine-grain) models which comprehensively include all available knowledge about system functionality and are capable of simulating the glucose-insulin system in diabetes, thus making it possible to create simulation scenarios whereby cost-effective experiments can be conducted in silico to assess the efficacy of various treatment strategies - in particular I will focus on the first in silico simulation model accepted by the FDA as a substitute for animal trials in the quest for optimal diabetes control. Signals: I will review metabolic monitoring, with a particular emphasis on the new continuous glucose sensors, on the crucial role of models to enhance the interpretation of their time-series signals, and on the opportunities that they present for automation of diabetes control. Control: I will review control strategies that have been successfully employed in vivo or in silico, presenting a promise for the development of a future artificial pancreas and, in particular, I will discuss a modular architecture for building closed-loop control systems, including insulin delivery and patient safety supervision layers.
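
    The "minimal (coarse) model" referred to above is typically of the Bergman type: two coupled ODEs for plasma glucose and remote insulin action. A hedged sketch with a forward-Euler integrator follows; the parameter values are illustrative placeholders, not clinically validated.

    ```python
    # Bergman-type minimal model structure (illustrative parameters):
    #   dG/dt = -(p1 + X)*G + p1*Gb    glucose kinetics
    #   dX/dt = -p2*X + p3*(I - Ib)    remote insulin action

    def simulate(G0, I, p1=0.03, p2=0.02, p3=1e-5, Gb=90.0, Ib=7.0,
                 dt=1.0, steps=180):
        """Euler-integrate glucose G (mg/dl) under insulin input I (per min)."""
        G, X = G0, 0.0
        trace = [G]
        for k in range(steps):
            dG = -(p1 + X) * G + p1 * Gb
            dX = -p2 * X + p3 * (I[k] - Ib)
            G, X = G + dt * dG, X + dt * dX
            trace.append(G)
        return trace

    # Constant basal insulin: glucose relaxes from 180 toward basal Gb = 90.
    trace = simulate(180.0, [7.0] * 180)
    print(trace[0], round(trace[-1], 1))
    ```

    The maximal simulation models mentioned in the abstract add many more compartments, but this two-state skeleton is what makes insulin sensitivity identifiable from a single glucose tolerance test.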

  18. Mathematical Modelling Plant Signalling Networks

    KAUST Repository

    Muraro, D.; Byrne, H.M.; King, J.R.; Bennett, M.J.

    2013-01-01

    methods for modelling gene and signalling networks and their application in plants. We then describe specific models of hormonal perception and cross-talk in plants. This mathematical analysis of sub-cellular molecular mechanisms paves the way for more

  19. MEG source imaging method using fast L1 minimum-norm and its applications to signals with brain noise and human resting-state source amplitude images.

    Science.gov (United States)

    Huang, Ming-Xiong; Huang, Charles W; Robb, Ashley; Angeles, AnneMarie; Nichols, Sharon L; Baker, Dewleen G; Song, Tao; Harrington, Deborah L; Theilmann, Rebecca J; Srinivasan, Ramesh; Heister, David; Diwakar, Mithun; Canive, Jose M; Edgar, J Christopher; Chen, Yu-Han; Ji, Zhengwei; Shen, Max; El-Gabalawy, Fady; Levy, Michael; McLay, Robert; Webb-Murphy, Jennifer; Liu, Thomas T; Drake, Angela; Lee, Roland R

    2014-01-01

    The present study developed a fast MEG source imaging technique based on Fast Vector-based Spatio-Temporal Analysis using a L1-minimum-norm (Fast-VESTAL) and then used the method to obtain the source amplitude images of resting-state magnetoencephalography (MEG) signals for different frequency bands. The Fast-VESTAL technique consists of two steps. First, L1-minimum-norm MEG source images were obtained for the dominant spatial modes of sensor-waveform covariance matrix. Next, accurate source time-courses with millisecond temporal resolution were obtained using an inverse operator constructed from the spatial source images of Step 1. Using simulations, Fast-VESTAL's performance was assessed for its 1) ability to localize multiple correlated sources; 2) ability to faithfully recover source time-courses; 3) robustness to different SNR conditions including SNR with negative dB levels; 4) capability to handle correlated brain noise; and 5) statistical maps of MEG source images. An objective pre-whitening method was also developed and integrated with Fast-VESTAL to remove correlated brain noise. Fast-VESTAL's performance was then examined in the analysis of human median-nerve MEG responses. The results demonstrated that this method easily distinguished sources in the entire somatosensory network. Next, Fast-VESTAL was applied to obtain the first whole-head MEG source-amplitude images from resting-state signals in 41 healthy control subjects, for all standard frequency bands. Comparisons between resting-state MEG sources images and known neurophysiology were provided. Additionally, in simulations and cases with MEG human responses, the results obtained from using conventional beamformer technique were compared with those from Fast-VESTAL, which highlighted the beamformer's problems of signal leaking and distorted source time-courses. © 2013.

  20. Top-Down Control of Visual Alpha Oscillations: Sources of Control Signals and Their Mechanisms of Action

    Science.gov (United States)

    Wang, Chao; Rajagovindan, Rajasimhan; Han, Sahng-Min; Ding, Mingzhou

    2016-01-01

    Alpha oscillations (8–12 Hz) are thought to inversely correlate with cortical excitability. Goal-oriented modulation of alpha has been studied extensively. In visual spatial attention, alpha over the region of visual cortex corresponding to the attended location decreases, signifying increased excitability to facilitate the processing of impending stimuli. In contrast, in retention of verbal working memory, alpha over visual cortex increases, signifying decreased excitability to gate out stimulus input to protect the information held online from sensory interference. According to the prevailing model, this goal-oriented biasing of sensory cortex is effected by top-down control signals from frontal and parietal cortices. The present study tests and substantiates this hypothesis by (a) identifying the signals that mediate the top-down biasing influence, (b) examining whether the cortical areas issuing these signals are task-specific or task-independent, and (c) establishing the possible mechanism of the biasing action. High-density human EEG data were recorded in two experimental paradigms: a trial-by-trial cued visual spatial attention task and a modified Sternberg working memory task. Applying Granger causality to both sensor-level and source-level data we report the following findings. In covert visual spatial attention, the regions exerting top-down control over visual activity are lateralized to the right hemisphere, with the dipoles located at the right frontal eye field (FEF) and the right inferior frontal gyrus (IFG) being the main sources of top-down influences. During retention of verbal working memory, the regions exerting top-down control over visual activity are lateralized to the left hemisphere, with the dipoles located at the left middle frontal gyrus (MFG) being the main source of top-down influences. In both experiments, top-down influences are mediated by alpha oscillations, and the biasing effect is likely achieved via an inhibition

  1. Metabolite transport and associated sugar signalling systems underpinning source/sink interactions.

    Science.gov (United States)

    Griffiths, Cara A; Paul, Matthew J; Foyer, Christine H

    2016-10-01

    Metabolite transport between organelles, cells and source and sink tissues not only enables pathway co-ordination but it also facilitates whole plant communication, particularly in the transmission of information concerning resource availability. Carbon assimilation is co-ordinated with nitrogen assimilation to ensure that the building blocks of biomass production, amino acids and carbon skeletons, are available in the required amounts and stoichiometry, with associated transport processes making certain that these essential resources are transported from their sites of synthesis to those of utilisation. Of the many possible posttranslational mechanisms that might participate in efficient co-ordination of metabolism and transport, only reversible thiol-disulphide exchange mechanisms have been described in detail. Sucrose and trehalose metabolism are intertwined in the signalling hub that ensures appropriate resource allocation to drive growth and development under optimal and stress conditions, with trehalose-6-phosphate acting as an important signal for sucrose availability. The formidable suite of plant metabolite transporters provides enormous flexibility and adaptability in inter-pathway coordination and source-sink interactions. Focussing on the carbon metabolism network, we highlight the functions of different transporter families and the importance of thioredoxins in the metabolic dialogue between source and sink tissues. In addition, we address how these systems can be tailored for crop improvement. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  2. Assessing Model Characterization of Single Source ...

    Science.gov (United States)

    Aircraft measurements made downwind from specific coal fired power plants during the 2013 Southeast Nexus field campaign provide a unique opportunity to evaluate single source photochemical model predictions of both O3 and secondary PM2.5 species. The model did well at predicting downwind plume placement. The model shows similar patterns of an increasing fraction of PM2.5 sulfate ion to the sum of SO2 and PM2.5 sulfate ion by distance from the source compared with ambient based estimates. The model was less consistent in capturing downwind ambient based trends in conversion of NOX to NOY from these sources. Source sensitivity approaches capture near-source O3 titration by fresh NO emissions, in particular subgrid plume treatment. However, capturing this near-source chemical feature did not translate into better downwind peak estimates of single source O3 impacts. The model estimated O3 production from these sources but often was lower than ambient based source production. The downwind transect ambient measurements, in particular secondary PM2.5 and O3, have some level of contribution from other sources which makes direct comparison with model source contribution challenging. Model source attribution results suggest contribution to secondary pollutants from multiple sources even where primary pollutants indicate the presence of a single source. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, deci

  3. Multi-factor models and signal processing techniques application to quantitative finance

    CERN Document Server

    Darolles, Serges; Jay, Emmanuelle

    2013-01-01

    With recent outbreaks of multiple large-scale financial crises, amplified by interconnected risk sources, a new paradigm of fund management has emerged. This new paradigm leverages "embedded" quantitative processes and methods to provide more transparent, adaptive, reliable and easily implemented "risk assessment-based" practices. This book surveys the most widely used factor models employed within the field of financial asset pricing. Through the concrete application of evaluating risks in the hedge fund industry, the authors demonstrate that signal processing techniques are an intere

  4. Tracking the 10Be-26Al source-area signal in sediment-routing systems of arid central Australia

    Science.gov (United States)

    Struck, Martin; Jansen, John D.; Fujioka, Toshiyuki; Codilean, Alexandru T.; Fink, David; Fülöp, Réka-Hajnalka; Wilcken, Klaus M.; Price, David M.; Kotevski, Steven; Fifield, L. Keith; Chappell, John

    2018-05-01

    Sediment-routing systems continuously transfer information and mass from eroding source areas to depositional sinks. Understanding how these systems alter environmental signals is critical when it comes to inferring source-area properties from the sedimentary record. We measure cosmogenic 10Be and 26Al along three large sediment-routing systems (~100 000 km²) in central Australia with the aim of tracking downstream variations in 10Be-26Al inventories and identifying the factors responsible for these variations. By comparing 56 new cosmogenic 10Be and 26Al measurements in stream sediments with matching data (n = 55) from source areas, we show that 10Be-26Al inventories in hillslope bedrock and soils set the benchmark for relative downstream modifications. Lithology is the primary determinant of erosion-rate variations in source areas and despite sediment mixing over hundreds of kilometres downstream, a distinct lithological signal is retained. Post-orogenic ranges yield catchment erosion rates of ~6-11 m Myr⁻¹ and silcrete-dominant areas erode as slowly as ~0.2 m Myr⁻¹. 10Be-26Al inventories in stream sediments indicate that cumulative-burial terms increase downstream to mostly ~400-800 kyr and up to ~1.1 Myr. The magnitude of the burial signal correlates with increasing sediment cover downstream and reflects assimilation from storages with long exposure histories, such as alluvial fans, desert pavements, alluvial plains, and aeolian dunes. We propose that the tendency for large alluvial rivers to mask their 10Be-26Al source-area signal differs according to geomorphic setting. Signal preservation is favoured by (i) high sediment supply rates, (ii) high mean runoff, and (iii) a thick sedimentary basin pile. Conversely, signal masking prevails in landscapes of (i) low sediment supply and (ii) juxtaposition of sediment storages with notably different exposure histories.
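
    The burial signal discussed above rests on the different half-lives of the two nuclides: 26Al decays faster than 10Be, so time spent shielded from cosmic rays lowers the 26Al/10Be ratio. A back-of-envelope sketch of the standard simple-burial approximation (not the study's full inversion; the production ratio and half-lives below are commonly used literature values):

    ```python
    import math

    LAMBDA_10BE = math.log(2) / 1.387e6   # 1/yr, half-life ~1.387 Myr
    LAMBDA_26AL = math.log(2) / 0.705e6   # 1/yr, half-life ~0.705 Myr
    R0 = 6.75                             # assumed surface production ratio

    def burial_age(ratio):
        """Apparent burial age (yr) from a measured 26Al/10Be ratio,
        assuming production stops completely at burial."""
        return math.log(R0 / ratio) / (LAMBDA_26AL - LAMBDA_10BE)

    def ratio_after(t):
        """26Al/10Be ratio after t years of complete shielding."""
        return R0 * math.exp(-(LAMBDA_26AL - LAMBDA_10BE) * t)

    # Round-trip check: the ratio reached after 0.5 Myr of burial maps
    # back to 0.5 Myr.
    print(round(burial_age(ratio_after(5.0e5))))   # -> 500000
    ```

    The cumulative-burial terms of ~400-800 kyr quoted in the abstract reflect mixed exposure histories, which is why the full study treats them as inventory terms rather than single-event burial ages.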

  5. Source modelling at the dawn of gravitational-wave astronomy

    Science.gov (United States)

    Gerosa, Davide

    2016-09-01

    The age of gravitational-wave astronomy has begun. Gravitational waves are propagating spacetime perturbations ("ripples in the fabric of space-time") predicted by Einstein's theory of General Relativity. These signals propagate at the speed of light and are generated by powerful astrophysical events, such as the merger of two black holes and supernova explosions. The first detection of gravitational waves was performed in 2015 with the LIGO interferometers. This constitutes a tremendous breakthrough in fundamental physics and astronomy: it is not only the first direct detection of such elusive signals, but also the first irrefutable observation of a black-hole binary system. The future of gravitational-wave astronomy is bright and loud: the LIGO experiments will soon be joined by a network of ground-based interferometers; the space mission eLISA has now been fully approved by the European Space Agency with a proof-of-concept mission called LISA Pathfinder launched in 2015. Gravitational-wave observations will provide unprecedented tests of gravity as well as a qualitatively new window on the Universe. Careful theoretical modelling of the astrophysical sources of gravitational-waves is crucial to maximize the scientific outcome of the detectors. In this Thesis, we present several advances on gravitational-wave source modelling, studying in particular: (i) the precessional dynamics of spinning black-hole binaries; (ii) the astrophysical consequences of black-hole recoils; and (iii) the formation of compact objects in the framework of scalar-tensor theories of gravity. All these phenomena are deeply characterized by a continuous interplay between General Relativity and astrophysics: despite being a truly relativistic messenger, gravitational waves encode details of the astrophysical formation and evolution processes of their sources. We work out signatures and predictions to extract such information from current and future observations. 
At the dawn of a revolutionary

  6. Reconstructing the nature of the first cosmic sources from the anisotropic 21-cm signal.

    Science.gov (United States)

    Fialkov, Anastasia; Barkana, Rennan; Cohen, Aviad

    2015-03-13

    The redshifted 21-cm background is expected to be a powerful probe of the early Universe, carrying both cosmological and astrophysical information from a wide range of redshifts. In particular, the power spectrum of fluctuations in the 21-cm brightness temperature is anisotropic due to the line-of-sight velocity gradient, which in principle allows for a simple extraction of this information in the limit of linear fluctuations. However, recent numerical studies suggest that the 21-cm signal is actually rather complex, and its analysis likely depends on detailed model fitting. We present the first realistic simulation of the anisotropic 21-cm power spectrum over a wide period of early cosmic history. We show that on observable scales, the anisotropy is large and thus measurable at most redshifts, and its form tracks the evolution of 21-cm fluctuations as they are produced early on by Lyman-α radiation from stars, then switch to x-ray radiation from early heating sources, and finally to ionizing radiation from stars. In particular, we predict a redshift window during cosmic heating (at z∼15), when the anisotropy is small, during which the shape of the 21-cm power spectrum on large scales is determined directly by the average radial distribution of the flux from x-ray sources. This makes possible a model-independent reconstruction of the x-ray spectrum of the earliest sources of cosmic heating.
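
    The line-of-sight anisotropy exploited above has, in the linear limit, a simple polynomial form in mu, the cosine of the angle to the line of sight: P(k, mu) = P0(k) + P2(k) mu^2 + P4(k) mu^4. A small sketch showing that measurements at three values of mu^2 suffice to separate the coefficients (all values invented):

    ```python
    # Anisotropy decomposition sketch: sample the power spectrum at
    # mu^2 = 0, 1/2, 1 and solve the small linear system for (P0, P2, P4).

    def P(mu2, P0, P2, P4):
        """Anisotropic power spectrum as a polynomial in mu^2."""
        return P0 + P2 * mu2 + P4 * mu2 ** 2

    def coefficients(samples):
        """Recover (P0, P2, P4) from P sampled at mu^2 = 0, 1/2, 1."""
        p0, ph, p1 = samples
        P0 = p0
        P4 = 2 * (p1 + p0) - 4 * ph   # elimination step removes P0 and P2
        P2 = p1 - P0 - P4
        return P0, P2, P4

    true = (5.0, 3.0, 1.0)                       # invented coefficients
    samples = [P(m2, *true) for m2 in (0.0, 0.5, 1.0)]
    print(coefficients(samples))                 # -> (5.0, 3.0, 1.0)
    ```

    In the abstract's low-anisotropy window the mu-dependent terms become small relative to P0, which is what makes the reconstruction of the X-ray source flux distribution possible there.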

  7. Observations and modeling of the elastogravity signals preceding direct seismic waves

    Science.gov (United States)

    Vallée, Martin; Ampuero, Jean Paul; Juhel, Kévin; Bernard, Pascal; Montagner, Jean-Paul; Barsuglia, Matteo

    2017-12-01

    After an earthquake, the earliest deformation signals are not expected to be carried by the fastest (P) elastic waves but by the speed-of-light changes of the gravitational field. However, these perturbations are weak and, so far, their detection has not been accurate enough to fully understand their origins and to use them for a highly valuable rapid estimate of the earthquake magnitude. We show that gravity perturbations are particularly well observed with broadband seismometers at distances between 1000 and 2000 kilometers from the source of the 2011, moment magnitude 9.1, Tohoku earthquake. We can accurately model them by a new formalism, taking into account both the gravity changes and the gravity-induced motion. These prompt elastogravity signals open the window for minute time-scale magnitude determination for great earthquakes.

  8. An Improved Statistical Point-source Foreground Model for the Epoch of Reionization

    Energy Technology Data Exchange (ETDEWEB)

    Murray, S. G.; Trott, C. M.; Jordan, C. H. [ARC Centre of Excellence for All-sky Astrophysics (CAASTRO) (Australia)

    2017-08-10

    We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distributions as a function of flux density and spatial position (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions, and shows that for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can yield bias in the final power-spectrum and under-estimate uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it is dependent on the relative abundance of faint sources, to the effect that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.
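
    The "arbitrarily broken power law" for source counts dN/dS can be sketched as piecewise power-law segments joined continuously at break fluxes. The slopes and normalization below are invented, not the paper's fitted values.

    ```python
    # Broken power-law source counts: segments S^-alpha matched
    # continuously at each break flux.

    def make_counts(norm, breaks, alphas):
        """Return dN/dS with segment i covering S >= breaks[i], slope
        alphas[i]; normalizations chosen so dN/dS is continuous."""
        norms = [norm]
        for i in range(1, len(alphas)):
            s = breaks[i]
            norms.append(norms[-1] * s ** (alphas[i] - alphas[i - 1]))
        def dnds(S):
            for i in reversed(range(len(alphas))):
                if S >= breaks[i]:
                    return norms[i] * S ** (-alphas[i])
            raise ValueError("flux below model range")
        return dnds

    # Single break at S = 1 Jy: faint slope 1.6, bright slope 2.5.
    dnds = make_counts(1000.0, [0.001, 1.0], [1.6, 2.5])
    print(dnds(0.999) / dnds(1.001))   # close to 1: continuous at the break
    ```

    The relative abundance of faint sources, which the abstract identifies as the key uncertainty, is controlled here by the faint-end slope and the lower flux cutoff.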

  9. An Improved Statistical Point-source Foreground Model for the Epoch of Reionization

    Science.gov (United States)

    Murray, S. G.; Trott, C. M.; Jordan, C. H.

    2017-08-01

    We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distributions as a function of flux density and spatial position (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions, and shows that for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can yield bias in the final power-spectrum and under-estimate uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it is dependent on the relative abundance of faint sources, to the effect that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.

  10. Comparison of Linear Prediction Models for Audio Signals

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Full Text Available While linear prediction (LP) has become immensely popular in speech modeling, it does not seem to provide a good approach for modeling audio signals. This is somewhat surprising, since a tonal signal consisting of a number of sinusoids can be perfectly predicted based on an (all-pole) LP model with a model order that is twice the number of sinusoids. We provide an explanation why this result cannot simply be extrapolated to LP of audio signals. If noise is taken into account in the tonal signal model, a low-order all-pole model appears to be appropriate only when the tonal components are uniformly distributed in the Nyquist interval. Based on this observation, different alternatives to the conventional LP model can be suggested. Either the model should be changed to a pole-zero, a high-order all-pole, or a pitch prediction model, or the conventional LP model should be preceded by an appropriate frequency transform, such as a frequency warping or downsampling. By comparing these alternative LP models to the conventional LP model in terms of frequency estimation accuracy, residual spectral flatness, and perceptual frequency resolution, we obtain several new and promising approaches to LP-based audio modeling.
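
    The claim that N sinusoids are perfectly predicted by an order-2N all-pole model follows from the identity cos(wn) = 2cos(w)cos(w(n-1)) - cos(w(n-2)). A quick numerical check for a single sinusoid (order 2):

    ```python
    import math

    # A noiseless sinusoid satisfies the order-2 recursion with
    # coefficients a1 = 2*cos(w), a2 = -1, so an order-2 all-pole LP
    # model predicts it with zero error (2 x the number of sinusoids).

    w = 2 * math.pi * 0.13          # arbitrary frequency (rad/sample)
    x = [math.cos(w * n + 0.4) for n in range(200)]

    a1, a2 = 2 * math.cos(w), -1.0
    pred = [a1 * x[n - 1] + a2 * x[n - 2] for n in range(2, len(x))]
    err = max(abs(p - t) for p, t in zip(pred, x[2:]))
    print(err < 1e-12)              # -> True
    ```

    Adding even a small amount of noise breaks this exact cancellation, which is the starting point of the abstract's argument about low-order LP on audio.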

  11. MASCOTTE: analytical model of eddy current signals

    International Nuclear Information System (INIS)

    Delsarte, G.; Levy, R.

    1992-01-01

    Tube examination is a major application of the eddy current technique in the nuclear and petrochemical industries. Because such examination configurations are particularly well suited to analytical modeling, a physical model was developed for portable computers. It includes simple approximations made possible by the actual conditions of the examinations. The eddy current signal is described by an analytical formulation that takes into account the tube dimensions, the sensor design, the physical characteristics of the defect, and the examination parameters. Moreover, the model makes it possible to associate real signals with simulated signals.

  12. Magnetoencephalography signals are influenced by skull defects.

    Science.gov (United States)

    Lau, S; Flemming, L; Haueisen, J

    2014-08-01

    Magnetoencephalography (MEG) signals had previously been hypothesized to have negligible sensitivity to skull defects. The objective is to experimentally investigate the influence of conducting skull defects on MEG and EEG signals. A miniaturized electric dipole was implanted in vivo into rabbit brains. Simultaneous recording using 64-channel EEG and 16-channel MEG was conducted, first above the intact skull and then above a skull defect. Skull defects were filled with agar gels, which had been formulated to have tissue-like homogeneous conductivities. The dipole was moved beneath the skull defects, and measurements were taken at regularly spaced points. The EEG signal amplitude increased 2-10 times, whereas the MEG signal amplitude reduced by as much as 20%. The EEG signal amplitude deviated more when the source was under the edge of the defect, whereas the MEG signal amplitude deviated more when the source was central under the defect. The change in MEG field-map topography (relative difference measure, RDM* = 0.15) was geometrically related to the skull defect edge. MEG and EEG signals can be substantially affected by skull defects. MEG source modeling requires realistic volume conductor head models that incorporate skull defects. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
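The topography change above is quantified with the relative difference measure. A minimal sketch, assuming the commonly used definition of RDM* as the Euclidean distance between unit-normalised field maps (sensor values treated as vectors):

```python
import numpy as np

def rdm(a, b):
    """Relative difference measure between two field topographies:
    Euclidean distance between the unit-normalised maps (0 = identical shape)."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    return np.linalg.norm(a / np.linalg.norm(a) - b / np.linalg.norm(b))

intact = np.array([1.0, 2.0, -1.0, 0.5])   # hypothetical sensor values, intact skull
scaled = 0.8 * intact                      # pure amplitude change
shifted = np.array([1.0, 1.0, -2.0, 0.5])  # topography (shape) change

print(rdm(intact, scaled))   # 0.0: amplitude changes alone do not register
print(rdm(intact, shifted))  # > 0: only shape changes contribute
```

This separation of amplitude from shape is what makes RDM* suitable for isolating the topography change caused by a skull defect edge.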

  13. Signal analysis of accelerometry data using gravity-based modeling

    Science.gov (United States)

    Davey, Neil P.; James, Daniel A.; Anderson, Megan E.

    2004-03-01

    Triaxial accelerometers have been used to measure human movement parameters in swimming. Interpretation of the data is difficult due to interference sources, including the interaction of external bodies. In this investigation the authors developed a model to simulate the physical movement of the lower back. Theoretical accelerometry outputs were derived, thus giving an ideal, or noiseless, dataset. An experimental data collection apparatus was developed by adapting a system to the aquatic environment for investigation of swimming. Model data were compared against recorded data and showed strong correlation. Comparison of recorded and modeled data can be used to identify changes in body movement; this is especially useful when cyclic patterns are present in the activity. The strong correlation between datasets allowed development of signal processing algorithms for swimming stroke analysis, which were developed first on the pure noiseless dataset and then applied to performance data. Video analysis was also used to validate the study results and has shown potential to provide acceptable results.

  14. A Novel Partial Discharge Ultra-High Frequency Signal De-Noising Method Based on a Single-Channel Blind Source Separation Algorithm

    Directory of Open Access Journals (Sweden)

    Liangliang Wei

    2018-02-01

    Full Text Available To effectively de-noise the Gaussian white noise and periodic narrow-band interference in the background noise of partial discharge ultra-high frequency (PD UHF) signals in field tests, a novel de-noising method based on a single-channel blind source separation algorithm is proposed. Compared with traditional methods, the proposed method effectively removes the noise interference, and the distortion of the de-noised PD signal is smaller. First, the PD UHF signal is time-frequency analyzed by S-transform to obtain the number of source signals. Then, the single-channel detected PD signal is converted into multi-channel signals by singular value decomposition (SVD), and background noise is separated from the multi-channel PD UHF signals by the joint approximate diagonalization of eigen-matrices method. Finally, the source PD signal is estimated and recovered by the l1-norm minimization method. The proposed de-noising method was applied to signals detected in simulation tests and field tests, and the de-noising performance of the different methods was compared. The simulation and field test results demonstrate the effectiveness and correctness of the proposed method.
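The single-channel-to-multichannel step can be sketched as a delay (Hankel) embedding followed by SVD truncation. The Python below is illustrative only: a sinusoid stands in for the PD pulse train, and the sketch stops after the SVD step, whereas the paper continues with joint approximate diagonalization and l1-norm recovery:

```python
import numpy as np

def embed_hankel(x, m):
    """Map one channel into m pseudo-channels by delay embedding (Hankel matrix)."""
    n = x.size - m + 1
    return np.column_stack([x[i:i + n] for i in range(m)])

rng = np.random.default_rng(2)
t = np.arange(2000)
clean = np.sin(2 * np.pi * 0.05 * t)              # stand-in for the PD signal
noisy = clean + 0.5 * rng.standard_normal(t.size)

H = embed_hankel(noisy, 40)                       # 40 pseudo-channels
U, s, Vt = np.linalg.svd(H, full_matrices=False)
k = 2                                             # one real sinusoid spans 2 singular directions
H_k = (U[:, :k] * s[:k]) @ Vt[:k]                 # rank-k signal-subspace approximation
denoised = H_k[:, 0]                              # first pseudo-channel; a full reconstruction
                                                  # would average the Hankel anti-diagonals
```

The dominant singular values capture the coherent signal while the noise spreads across all of them, which is why truncation de-noises.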

  15. Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.

    Science.gov (United States)

    Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua

    2018-02-01

    Multi-source interior computed tomography (CT) has great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, X-ray cross scattering from multiple simultaneously activated X-ray imaging chains compromises imaging quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit of requiring no hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically and was used to calculate X-ray scattering signals in both the forward direction and the cross directions in multi-source interior CT. The physics model was integrated into an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experiments that were designed to emulate image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images converged quickly toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error in the regions of interest (ROIs) was reduced to 46 HU in the numerical phantom dataset and 48 HU in the physical phantom dataset, respectively, and the contrast-to-noise ratio at those ROIs increased by up to 44.3% and 19.7%, respectively. The proposed physics model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.

  16. Noise source analysis of nuclear ship Mutsu plant using multivariate autoregressive model

    International Nuclear Information System (INIS)

    Hayashi, K.; Shimazaki, J.; Shinohara, Y.

    1996-01-01

    The present study is concerned with the noise sources in the N.S. Mutsu reactor plant. Noise experiments on the Mutsu plant were performed in order to investigate the plant dynamics and the effects of sea conditions and ship motion on the plant. The reactor noise signals as well as the ship motion signals were analyzed by a multivariate autoregressive (MAR) modeling method to clarify the noise sources in the reactor plant. The analysis results confirmed that most of the plant variables were affected mainly by a horizontal component of the ship motion, that is, the sway, through vibrations of the plant structures. Furthermore, the effect of ship motion on the reactor power was evaluated through the analysis of wave components extracted by a geometrical transform method. It was concluded that the amplitude of the reactor power oscillation was about 0.15% in normal sea conditions, which was small enough for safe operation of the reactor plant. (authors)

  17. Assessing the contribution of binaural cues for apparent source width perception via a functional model

    DEFF Research Database (Denmark)

    Käsbach, Johannes; Hahmann, Manuel; May, Tobias

    2016-01-01

    In echoic conditions, sound sources are not perceived as point sources but appear to be expanded. The expansion in the horizontal dimension is referred to as apparent source width (ASW). To elicit this perception, the auditory system has access to fluctuations of binaural cues, the interaural time...... a statistical representation of ITDs and ILDs based on percentiles integrated over time and frequency. The model’s performance was evaluated against psychoacoustic data obtained with noise, speech and music signals in loudspeakerbased experiments. A robust model prediction of ASW was achieved using a cross...

  18. Tracking the 10Be–26Al source-area signal in sediment-routing systems of arid central Australia

    Directory of Open Access Journals (Sweden)

    M. Struck

    2018-05-01

    Full Text Available Sediment-routing systems continuously transfer information and mass from eroding source areas to depositional sinks. Understanding how these systems alter environmental signals is critical when it comes to inferring source-area properties from the sedimentary record. We measure cosmogenic 10Be and 26Al along three large sediment-routing systems (∼100 000 km2) in central Australia with the aim of tracking downstream variations in 10Be–26Al inventories and identifying the factors responsible for these variations. By comparing 56 new cosmogenic 10Be and 26Al measurements in stream sediments with matching data (n = 55) from source areas, we show that 10Be–26Al inventories in hillslope bedrock and soils set the benchmark for relative downstream modifications. Lithology is the primary determinant of erosion-rate variations in source areas and, despite sediment mixing over hundreds of kilometres downstream, a distinct lithological signal is retained. Post-orogenic ranges yield catchment erosion rates of ∼6–11 m Myr−1, and silcrete-dominant areas erode as slowly as ∼0.2 m Myr−1. 10Be–26Al inventories in stream sediments indicate that cumulative-burial terms increase downstream to mostly ∼400–800 kyr and up to ∼1.1 Myr. The magnitude of the burial signal correlates with increasing sediment cover downstream and reflects assimilation from storages with long exposure histories, such as alluvial fans, desert pavements, alluvial plains, and aeolian dunes. We propose that the tendency for large alluvial rivers to mask their 10Be–26Al source-area signal differs according to geomorphic setting. Signal preservation is favoured by (i) high sediment supply rates, (ii) high mean runoff, and (iii) a thick sedimentary basin pile. Conversely, signal masking prevails in landscapes of (i) low sediment supply and (ii) juxtaposition of sediment storages with notably different exposure

  19. Modeling High-Dimensional Multichannel Brain Signals

    KAUST Repository

    Hu, Lechuan; Fortin, Norbert J.; Ombao, Hernando

    2017-01-01

    aspects: first, there are major statistical and computational challenges for modeling and analyzing high-dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel

  20. Computational model of Amersham I-125 source model 6711 and Prosper Pd-103 source model MED3633 using MCNP

    Energy Technology Data Exchange (ETDEWEB)

    Menezes, Artur F.; Reis Junior, Juraci P.; Silva, Ademir X., E-mail: ademir@con.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Nuclear; Rosa, Luiz A.R. da, E-mail: lrosa@ird.gov.b [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Facure, Alessandro [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil); Cardoso, Simone C., E-mail: Simone@if.ufrj.b [Universidade Federal do Rio de Janeiro (IF/UFRJ), RJ (Brazil). Inst. de Fisica. Dept. de Fisica Nuclear

    2011-07-01

    Brachytherapy is used in cancer treatment at short distances through the use of small encapsulated sources of ionizing radiation. In such treatment, a radiation source is positioned directly into or near the target volume to be treated. In this study the Monte Carlo based MCNP code was used to model and simulate the Amersham Health I-125 source model 6711 and the Prospera Pd-103 source model MED3633 in order to obtain the dosimetric parameter known as the dose rate constant (Λ). The source geometries were modeled and implemented in the MCNPX code. The dose rate constant is an important parameter in LDR prostate brachytherapy treatment planning. This study was based on the American Association of Physicists in Medicine (AAPM) recommendations produced by its Task Group 43. The dose rate constants obtained were 0.941 for the I-125 source and 0.65 for the Pd-103 source. They are in good agreement with literature values based on different Monte Carlo codes. (author)

  1. Collective signaling behavior in a networked-oscillator model

    Science.gov (United States)

    Liu, Z.-H.; Hui, P. M.

    2007-09-01

    We propose and study the collective behavior of a model of networked signaling objects that incorporates several ingredients of real-life systems. These ingredients include spatial inhomogeneity with grouping of signaling objects, signal attenuation with distance, and delayed and impulsive coupling between non-identical signaling objects. Depending on the coupling strength and/or time-delay effect, the model exhibits completely, partially, and locally collective signaling behavior. In particular, a correlated signaling (CS) behavior is observed in which there exist time durations when nearly a constant fraction of oscillators in the system are in the signaling state. These time durations are much longer than the duration of a spike when a single oscillator signals, and they are separated by regular intervals in which nearly all oscillators are silent. Such CS behavior is similar to that observed in biological systems such as fireflies, cicadas, crickets, and frogs. The robustness of the CS behavior against noise is also studied. It is found that properly adjusting the coupling strength and noise level could enhance the correlated behavior.

  2. Modeling laser velocimeter signals as triply stochastic Poisson processes

    Science.gov (United States)

    Mayo, W. T., Jr.

    1976-01-01

    Previous models of laser Doppler velocimeter (LDV) systems have not adequately described dual-scatter signals in a manner useful for analysis and simulation of low-level photon-limited signals. At low photon rates, an LDV signal at the output of a photomultiplier tube is a compound nonhomogeneous filtered Poisson process, whose intensity function is another (slower) Poisson process with the nonstationary rate and frequency parameters controlled by a random flow (slowest) process. In the present paper, generalized Poisson shot noise models are developed for low-level LDV signals. Theoretical results useful in detection error analysis and simulation are presented, along with measurements of burst amplitude statistics. Computer generated simulations illustrate the difference between Gaussian and Poisson models of low-level signals.

  3. Localization from near-source quasi-static electromagnetic fields

    Energy Technology Data Exchange (ETDEWEB)

    Mosher, John Compton [Univ. of Southern California, Los Angeles, CA (United States)

    1993-09-01

    A wide range of research has been published on the problem of estimating the parameters of electromagnetic and acoustical sources from measurements of signals received at an array of sensors. In the quasi-static electromagnetic cases examined here, the signal variation from a point source is relatively slow with respect to the signal propagation and the spacing of the array of sensors. As such, the location of the point sources can only be determined from the spatial diversity of the received signal across the array. The inverse source localization problem is complicated by unknown model order and strong local minima. The nonlinear optimization problem is posed for solving for the parameters of the quasi-static source model. The transient nature of the sources can be exploited to allow subspace approaches to separate out the signal portion of the spatial correlation matrix. Decomposition techniques are examined for improved processing, and an adaptation of MUltiple SIgnal Classification (MUSIC) is presented for solving the source localization problem. Recent results on calculating the Cramer-Rao error lower bounds are extended to the multidimensional problem here. This thesis focuses on the problem of source localization in magnetoencephalography (MEG), with a secondary application to thunderstorm source localization. Comparisons are also made between MEG and its electrical equivalent, electroencephalography (EEG). The error lower bounds are examined in detail for several MEG and EEG configurations, as well as for localizing thunderstorm cells over Cape Canaveral and Kennedy Space Center. Time-eigenspectrum is introduced as a parsing technique for improving the performance of the optimization problem.
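For readers unfamiliar with MUSIC, the classical narrowband array version (not the MEG/EEG adaptation developed in the thesis) can be sketched in a few lines of Python: eigendecompose the spatial covariance, keep the noise subspace, and scan a grid of steering vectors:

```python
import numpy as np

rng = np.random.default_rng(3)
m, snapshots = 8, 200                         # half-wavelength ULA, time samples
true_doas = np.deg2rad([-20.0, 35.0])

def steering(theta):
    return np.exp(1j * np.pi * np.arange(m) * np.sin(theta))   # element spacing d = lambda/2

A = np.column_stack([steering(th) for th in true_doas])
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
N = 0.1 * (rng.standard_normal((m, snapshots)) + 1j * rng.standard_normal((m, snapshots)))
X = A @ S + N                                 # array snapshots: signals plus noise

R = X @ X.conj().T / snapshots                # sample spatial covariance
w, V = np.linalg.eigh(R)                      # eigenvalues in ascending order
En = V[:, : m - 2]                            # noise subspace (2 sources assumed known)

grid = np.deg2rad(np.linspace(-90, 90, 1801))
spec = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(th)) ** 2 for th in grid])

peaks = np.flatnonzero((spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:])) + 1
top2 = peaks[np.argsort(spec[peaks])[-2:]]
est = np.sort(np.rad2deg(grid[top2]))         # close to the true [-20, 35]
```

The pseudospectrum peaks where a steering vector is nearly orthogonal to the noise subspace, which is the property the thesis adapts to dipole localization.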

  4. CHIRP-Like Signals: Estimation, Detection and Processing A Sequential Model-Based Approach

    Energy Technology Data Exchange (ETDEWEB)

    Candy, J. V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-08-04

    Chirp signals have evolved primarily from radar/sonar signal processing applications specifically attempting to estimate the location of a target in surveillance/tracking volume. The chirp, which is essentially a sinusoidal signal whose phase changes instantaneously at each time sample, has an interesting property in that its correlation approximates an impulse function. It is well-known that a matched-filter detector in radar/sonar estimates the target range by cross-correlating a replicant of the transmitted chirp with the measurement data reflected from the target back to the radar/sonar receiver yielding a maximum peak corresponding to the echo time and therefore enabling the desired range estimate. In this application, we perform the same operation as a radar or sonar system, that is, we transmit a “chirp-like pulse” into the target medium and attempt to first detect its presence and second estimate its location or range. Our problem is complicated by the presence of disturbance signals from surrounding broadcast stations as well as extraneous sources of interference in our frequency bands and of course the ever present random noise from instrumentation. First, we discuss the chirp signal itself and illustrate its inherent properties and then develop a model-based processing scheme enabling both the detection and estimation of the signal from noisy measurement data.

  5. Signal Processing Model for Radiation Transport

    Energy Technology Data Exchange (ETDEWEB)

    Chambers, D H

    2008-07-28

    This note describes the design of a simplified gamma ray transport model for use in designing a sequential Bayesian signal processor for low-count detection and classification. It uses a simple one-dimensional geometry to describe the emitting source, shield effects, and detector (see Fig. 1). At present, only Compton scattering and photoelectric absorption are implemented for the shield and the detector. Other effects may be incorporated in the future by revising the expressions for the probabilities of escape and absorption. Pair production would require a redesign of the simulator to incorporate photon correlation effects. The initial design incorporates the physical effects that were present in the previous event mode sequence simulator created by Alan Meyer. The main difference is that this simulator transports the rate distributions instead of single photons. Event mode sequences and other time-dependent photon flux sequences are assumed to be marked Poisson processes that are entirely described by their rate distributions. Individual realizations can be constructed from the rate distribution using a random Poisson point sequence generator.
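Realizations of such a rate-described (marked Poisson) process can be drawn from the rate distribution by Lewis–Shedler thinning; the Python sketch below uses an illustrative exponentially decaying rate, not the simulator's actual rate model:

```python
import numpy as np

rng = np.random.default_rng(4)

def thinned_poisson(rate, t_max, rate_max):
    """Event times of a nonhomogeneous Poisson process via Lewis-Shedler thinning."""
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_max)      # candidate from a homogeneous process
        if t > t_max:
            return np.array(events)
        if rng.random() < rate(t) / rate_max:     # accept with probability rate(t)/rate_max
            events.append(t)

# Illustrative rate: a decaying count rate at the detector (counts/s)
rate = lambda t: 50.0 * np.exp(-t / 2.0)
times = thinned_poisson(rate, 10.0, 50.0)
```

Because the process is fully described by its rate, any event-mode sequence with the same rate distribution is statistically equivalent to `times`.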

  6. Modeling high dimensional multichannel brain signals

    KAUST Repository

    Hu, Lechuan

    2017-03-27

    In this paper, our goal is to model functional and effective (directional) connectivity in network of multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The primary challenges here are twofold: first, there are major statistical and computational difficulties for modeling and analyzing high dimensional multichannel brain signals; second, there is no set of universally-agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with sufficiently high order so that complex lead-lag temporal dynamics between the channels can be accurately characterized. However, such a model contains a large number of parameters. Thus, we will estimate the high dimensional VAR parameter space by our proposed hybrid LASSLE method (LASSO+LSE), which imposes regularization in the first step (to control for sparsity) and constrained least squares estimation in the second step (to improve the bias and mean-squared error of the estimator). Then to characterize connectivity between channels in a brain network, we will use various measures but put an emphasis on partial directed coherence (PDC) in order to capture directional connectivity between channels. PDC is a directed frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. Using the proposed modeling approach, we have achieved some insights on learning in a rat engaged in a non-spatial memory task.

  7. Modeling high dimensional multichannel brain signals

    KAUST Repository

    Hu, Lechuan; Fortin, Norbert; Ombao, Hernando

    2017-01-01

    In this paper, our goal is to model functional and effective (directional) connectivity in network of multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The primary challenges here are twofold: first, there are major statistical and computational difficulties for modeling and analyzing high dimensional multichannel brain signals; second, there is no set of universally-agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with sufficiently high order so that complex lead-lag temporal dynamics between the channels can be accurately characterized. However, such a model contains a large number of parameters. Thus, we will estimate the high dimensional VAR parameter space by our proposed hybrid LASSLE method (LASSO+LSE), which imposes regularization in the first step (to control for sparsity) and constrained least squares estimation in the second step (to improve the bias and mean-squared error of the estimator). Then to characterize connectivity between channels in a brain network, we will use various measures but put an emphasis on partial directed coherence (PDC) in order to capture directional connectivity between channels. PDC is a directed frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. Using the proposed modeling approach, we have achieved some insights on learning in a rat engaged in a non-spatial memory task.

  8. Modeling High-Dimensional Multichannel Brain Signals

    KAUST Repository

    Hu, Lechuan

    2017-12-12

    Our goal is to model and measure functional and effective (directional) connectivity in multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The difficulties from analyzing these data mainly come from two aspects: first, there are major statistical and computational challenges for modeling and analyzing high-dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with potentially high lag order so that complex lead-lag temporal dynamics between the channels can be captured. Estimates of the VAR model will be obtained by our proposed hybrid LASSLE (LASSO + LSE) method which combines regularization (to control for sparsity) and least squares estimation (to improve bias and mean-squared error). Then we employ some measures of connectivity but put an emphasis on partial directed coherence (PDC) which can capture the directional connectivity between channels. PDC is a frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. The proposed modeling approach provided key insights into potential functional relationships among simultaneously recorded sites during performance of a complex memory task. Specifically, this novel method was successful in quantifying patterns of effective connectivity across electrode locations, and in capturing how these patterns varied across trial epochs and trial types.
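The two-step hybrid LASSLE idea can be sketched on a small VAR(1) system. The Python below is illustrative only: it uses ISTA (proximal gradient) as a simple stand-in for the LASSO step, then refits the selected support by ordinary least squares to debias the estimate:

```python
import numpy as np

rng = np.random.default_rng(5)
p, T = 6, 2000
A_true = 0.5 * np.eye(p)       # sparse VAR(1): self-dependence on every channel
A_true[0, 3] = 0.4             # plus one cross-channel (directed) connection

x = np.zeros((T, p))
for t in range(1, T):
    x[t] = A_true @ x[t - 1] + 0.1 * rng.standard_normal(p)

Y, Z = x[1:], x[:-1]           # row-wise: Y[t] ~ A @ Z[t]

# Step 1 (sparsity): LASSO solved by ISTA, a simple proximal-gradient stand-in
lam = 2.0
step = 1.0 / np.linalg.norm(Z.T @ Z, 2)
A = np.zeros((p, p))
for _ in range(500):
    G = (A @ Z.T - Y.T) @ Z                                   # gradient of 0.5*||Y - Z A.T||^2
    A = A - step * G
    A = np.sign(A) * np.maximum(np.abs(A) - lam * step, 0.0)  # soft threshold

# Step 2 (debias): refit the selected support by ordinary least squares, row by row
support = np.abs(A) > 1e-8
A_refit = np.zeros_like(A)
for i in range(p):
    s = support[i]
    if s.any():
        A_refit[i, s], *_ = np.linalg.lstsq(Z[:, s], Y[:, i], rcond=None)
```

On this toy system the refit recovers the planted cross-channel coefficient, the analogue of a directed connection between recording sites, without the shrinkage bias of the LASSO estimate.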

  9. Discrete dynamic modeling of T cell survival signaling networks

    Science.gov (United States)

    Zhang, Ranran

    2009-03-01

    Biochemistry-based frameworks are often not applicable for the modeling of heterogeneous regulatory systems that are sparsely documented in terms of quantitative information. As an alternative, qualitative models assuming a small set of discrete states are gaining acceptance. This talk will present a discrete dynamic model of the signaling network responsible for the survival and long-term competence of cytotoxic T cells in the blood cancer T-LGL leukemia. We integrated the signaling pathways involved in normal T cell activation and the known deregulations of survival signaling in leukemic T-LGL, and formulated the regulation of each network element as a Boolean (logic) rule. Our model suggests that the persistence of two signals is sufficient to reproduce all known deregulations in leukemic T-LGL. It also indicates the nodes whose inactivity is necessary and sufficient for the reversal of the T-LGL state. We have experimentally validated several model predictions, including: (i) Inhibiting PDGF signaling induces apoptosis in leukemic T-LGL. (ii) Sphingosine kinase 1 and NFκB are essential for the long-term survival of T cells in T-LGL leukemia. (iii) T box expressed in T cells (T-bet) is constitutively activated in the T-LGL state. The model has identified potential therapeutic targets for T-LGL leukemia and can be used for generating long-term competent CTL necessary for tumor and cancer vaccine development. The success of this model, and of other discrete dynamic models, suggests that the organization of signaling networks has a determining role in their dynamics. Reference: R. Zhang, M. V. Shah, J. Yang, S. B. Nyland, X. Liu, J. K. Yun, R. Albert, T. P. Loughran, Jr., Network Model of Survival Signaling in LGL Leukemia, PNAS 105, 16308-16313 (2008).
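A Boolean (logic) model of this kind reduces each node to an on/off state updated by a rule over its regulators. The toy network below (Python) is purely illustrative: the three nodes and their rules are made up to echo the PDGF/SPHK1/apoptosis predictions, and are not the published T-LGL model:

```python
# Toy three-node Boolean network (illustrative rules only, not the published T-LGL model)
rules = {
    "PDGF":      lambda s: s["PDGF"],           # external input, held at its value
    "SPHK1":     lambda s: s["PDGF"],           # sphingosine kinase 1, activated by PDGF
    "Apoptosis": lambda s: not s["SPHK1"],      # survival signalling blocks apoptosis
}

def step(state):
    """One synchronous Boolean update: every node reads the previous state."""
    return {node: bool(rule(state)) for node, rule in rules.items()}

def run(state, n=5):
    for _ in range(n):
        state = step(state)
    return state

leukemic = run({"PDGF": True, "SPHK1": False, "Apoptosis": False})
treated  = run({"PDGF": False, "SPHK1": True, "Apoptosis": False})
print(leukemic)  # persistent PDGF keeps SPHK1 on and apoptosis off
print(treated)   # removing PDGF lets the network fall into the apoptotic state
```

Even this tiny example shows the key property exploited in the talk: fixed points of the update rules play the role of cell fates, and clamping a node (here PDGF) can switch which fate is reached.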

  10. Open source molecular modeling.

    Science.gov (United States)

    Pirhadi, Somayeh; Sunseri, Jocelyn; Koes, David Ryan

    2016-09-01

    The success of molecular modeling and computational chemistry efforts is, by definition, dependent on quality software applications. Open source software development provides many advantages to users of modeling applications, not the least of which is that the software is free and completely extendable. In this review we categorize, enumerate, and describe available open source software packages for molecular modeling and computational chemistry. An updated online version of this catalog can be found at https://opensourcemolecularmodeling.github.io. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.

  11. Modeling the effects of Multi-path propagation and scintillation on GPS signals

    Science.gov (United States)

    Habash Krause, L.; Wilson, S. J.

    2014-12-01

    GPS signals traveling through the earth's ionosphere are affected by charged particles that disrupt the signal and the information it carries through "scintillation", which resembles an extra noise source on the signal. These signals are also affected by weather changes, tropospheric scattering, and absorption from objects due to multi-path propagation of the signal. These obstacles cause distortion of the information and fading of the signal, which ultimately results in phase-locking errors and noise in messages. In this work, we attempted to replicate the distortion that occurs in GPS signals using a signal processing simulation model. We wanted to be able to create and identify scintillated signals so we could better understand the environment that caused the scintillation. Then, under controlled conditions, we simulated the receiver's ability to suppress scintillation in a signal. We developed code in MATLAB that was programmed to: 1. create a carrier wave and then plant noise (four different frequencies) on the carrier wave; 2. compute a Fourier transform of the four different frequencies to find the frequency content of the signal; 3. apply a filter to the Fourier transform of the four frequencies and then compute a signal-to-noise ratio to evaluate the power (in decibels) of the filtered signal; and 4. plot each of these components. To test the code's validity, we used user input and data from an AM transmitter. We determined that an amplitude modulated (AM) signal would be the best type of signal to test the accuracy of the MATLAB code due to its simplicity. The code is kept basic so that students can change and use it to determine the environment and the effects of noise on different AM signals and their carrier waves. Overall, we were able to manipulate a scenario of a noisy signal and interpret its behavior and change due to its noisy components: amplitude, frequency, and phase shift.
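The four steps described above (the original is MATLAB) translate directly into a few lines of NumPy. The sketch below uses illustrative frequencies placed on exact FFT bins: it builds a carrier with four interference tones, inspects the spectrum, masks a band around the carrier, and compares the signal-to-noise ratio before and after:

```python
import numpy as np

fs, n = 1000.0, 1000
t = np.arange(n) / fs
carrier = np.cos(2 * np.pi * 100.0 * t)                       # 1. carrier wave ...
noise = sum(0.3 * np.cos(2 * np.pi * f * t) for f in (211.0, 287.0, 333.0, 402.0))
signal = carrier + noise                                      #    ... plus four noise tones

spectrum = np.fft.rfft(signal)                                # 2. frequency content
freqs = np.fft.rfftfreq(n, 1 / fs)

mask = np.abs(freqs - 100.0) < 10.0                           # 3. band-pass around the carrier
filtered = np.fft.irfft(spectrum * mask, n)

def snr_db(sig, ref):                                         # 4. signal-to-noise ratio in dB
    err = sig - ref
    return 10 * np.log10(np.sum(ref ** 2) / np.sum(err ** 2))

print(snr_db(signal, carrier), snr_db(filtered, carrier))     # SNR improves sharply
```

Because the tones fall on exact FFT bins there is no spectral leakage; with off-bin frequencies, as in real scintillated data, a window function would be needed before the transform.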

  12. Modelling noninvasively measured cerebral signals during a hypoxemia challenge: steps towards individualised modelling.

    Directory of Open Access Journals (Sweden)

    Beth Jelfs

    Full Text Available Noninvasive approaches to measuring cerebral circulation and metabolism are crucial to furthering our understanding of brain function. These approaches also have considerable potential for clinical use "at the bedside". However, a highly nontrivial task and precondition if such methods are to be used routinely is the robust physiological interpretation of the data. In this paper, we explore the ability of a previously developed model of brain circulation and metabolism to explain and predict quantitatively the responses of physiological signals. The five signals, all measured noninvasively during hypoxemia in healthy volunteers, comprise four signals measured using near-infrared spectroscopy (NIRS) along with middle cerebral artery blood flow measured using transcranial Doppler flowmetry. We show that optimising the model using partial data from an individual can increase its predictive power, thus aiding the interpretation of NIRS signals in individuals. At the same time, such optimisation can also help refine model parametrisation and provide confidence intervals on model parameters. Discrepancies between model and data which persist despite model optimisation are used to flag up important questions concerning the underlying physiology, and the reliability and physiological meaning of the signals.

  13. Classifying LISA gravitational wave burst signals using Bayesian evidence

    International Nuclear Information System (INIS)

    Feroz, Farhan; Graff, Philip; Hobson, Michael P; Lasenby, Anthony; Gair, Jonathan R

    2010-01-01

    We consider the problem of characterization of burst sources detected by the Laser Interferometer Space Antenna (LISA) using the multi-modal nested sampling algorithm, MultiNest. We use MultiNest as a tool to search for modelled bursts from cosmic string cusps, and compute the Bayesian evidence associated with the cosmic string model. As an alternative burst model, we consider sine-Gaussian burst signals, and show how the evidence ratio can be used to choose between these two alternatives. We present results from an application of MultiNest to the last round of the Mock LISA Data Challenge, in which we were able to successfully detect and characterize all three of the cosmic string burst sources present in the release data set. We also present results of independent trials and show that MultiNest can detect cosmic string signals with signal-to-noise ratio (SNR) as low as ∼7 and sine-Gaussian signals with SNR as low as ∼8. In both cases, we show that the threshold at which the sources become detectable coincides with the SNR at which the evidence ratio begins to favour the correct model over the alternative.

  14. Removal of power line interference of space bearing vibration signal based on the morphological filter and blind source separation

    Science.gov (United States)

    Dong, Shaojiang; Sun, Dihua; Xu, Xiangyang; Tang, Baoping

    2017-06-01

    It is difficult to extract feature information from the vibration signal of a space bearing because of several kinds of noise: the running trend of the signal, high-frequency noise and, in particular, substantial power line interference (50 Hz) and its octave components introduced by the ground-based equipment that simulates the running space environment. This article proposes a combined method to eliminate them. Firstly, empirical mode decomposition (EMD) is used to remove the running trend of the signal, eliminating the trend item that degrades the accuracy of signal processing. Then a morphological filter is used to eliminate the high-frequency noise. Finally, the components and characteristics of the power line interference are analysed and, based on these characteristics, a revised blind source separation model is used to remove the power line interference. Analysis of simulations and a practical application suggests that the proposed method can effectively eliminate these types of noise.
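
Of the three stages, the morphological filter is the easiest to sketch in isolation. The following Python fragment (a real-valued toy signal and a flat structuring element; the paper's EMD and blind source separation stages are not reproduced here) implements opening and closing via moving minima and maxima and shows the filter suppressing narrow spikes:

```python
import numpy as np

def erode(x, w):
    """Moving minimum over a flat structuring element of odd length w."""
    pad = w // 2
    xp = np.pad(x, pad, mode="edge")
    return np.min(np.lib.stride_tricks.sliding_window_view(xp, w), axis=1)

def dilate(x, w):
    """Moving maximum (the dual operation)."""
    pad = w // 2
    xp = np.pad(x, pad, mode="edge")
    return np.max(np.lib.stride_tricks.sliding_window_view(xp, w), axis=1)

def morph_filter(x, w=5):
    """Average of morphological opening and closing; suppresses narrow
    positive and negative spikes while keeping slow components."""
    opening = dilate(erode(x, w), w)
    closing = erode(dilate(x, w), w)
    return 0.5 * (opening + closing)

# Demo: slow bearing-frequency component plus impulsive high-frequency noise
fs = 1000
t = np.arange(0, 1, 1 / fs)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean.copy()
noisy[::50] += 3.0                     # narrow spikes every 50 samples

mse_before = np.mean((noisy - clean) ** 2)
mse_after = np.mean((morph_filter(noisy) - clean) ** 2)
print(mse_after < mse_before)
```

Opening removes the positive spikes and closing removes negative ones; averaging the two handles both polarities at the cost of halving, rather than fully removing, any one-sided spike.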

  15. Parametric modelling of cardiac system multiple measurement signals: an open-source computer framework for performance evaluation of ECG, PCG and ABP event detectors.

    Science.gov (United States)

    Homaeinezhad, M R; Sabetian, P; Feizollahi, A; Ghaffari, A; Rahmani, R

    2012-02-01

    The major focus of this study is to present a performance accuracy assessment framework based on mathematical modelling of cardiac system multiple measurement signals. Three mathematical algebraic subroutines with simple structural functions for synthetic generation of the synchronously triggered electrocardiogram (ECG), phonocardiogram (PCG) and arterial blood pressure (ABP) signals are described. In the case of ECG signals, normal and abnormal PQRST cycles in complicated conditions such as fascicular ventricular tachycardia, rate-dependent conduction block and acute Q-wave infarctions of the inferior and anterolateral walls can be simulated. Also, a continuous ABP waveform with corresponding individual events such as systolic, diastolic and dicrotic pressures with normal or abnormal morphologies can be generated by another part of the model. In addition, the mathematical synthetic PCG framework is able to generate the S4-S1-S2-S3 cycles in normal and in cardiac disorder conditions such as stenosis, insufficiency, regurgitation and gallop. In the PCG model, the amplitude and frequency content (5-700 Hz) of each sound and its variation patterns can be specified. The three proposed models were implemented to generate artificial signals with various abnormality types and signal-to-noise ratios (SNRs), for quantitative detection-delineation performance assessment of several ECG, PCG and ABP individual event detectors designed based on the Hilbert transform, the discrete wavelet transform, geometric features such as area curve length (ACLM), the multiple higher order moments (MHOM) metric, and the principal components analysed geometric index (PCAGI). For each method the detection-delineation operating characteristics were obtained automatically in terms of sensitivity, positive predictivity and delineation (segmentation) error RMS, and were checked by a cardiologist. The Matlab m-file scripts of the synthetic ECG, ABP and PCG signal generators are available in the Appendix.
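
A common way to build a synthetic PQRST cycle, in the same spirit as the paper's algebraic subroutines but not identical to them, is a sum of Gaussian bumps; the centres, widths and amplitudes below are illustrative values, not the paper's parameters:

```python
import numpy as np

def synthetic_ecg_beat(fs=500, rr=0.8):
    """One PQRST cycle as a sum of Gaussian bumps.
    fs: sampling rate (Hz); rr: beat duration (s). Returns (t, beat in mV)."""
    t = np.arange(0, rr, 1 / fs)
    # (centre s, width s, amplitude mV) for the P, Q, R, S and T waves
    waves = [(0.18, 0.025, 0.15), (0.26, 0.010, -0.10),
             (0.30, 0.012, 1.00), (0.34, 0.010, -0.25),
             (0.52, 0.040, 0.30)]
    beat = sum(a * np.exp(-((t - c) ** 2) / (2 * w ** 2)) for c, w, a in waves)
    return t, beat

t, beat = synthetic_ecg_beat()
print(len(beat), round(beat.max(), 2))   # 400 samples, R peak near 1 mV
```

Pathological morphologies of the kind the paper simulates (conduction block, Q-wave infarction) would be obtained by perturbing or dropping individual bump parameters per beat.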

  16. A source-controlled data center network model.

    Science.gov (United States)

    Yu, Yang; Liang, Mangui; Wang, Zhe

    2017-01-01

    The construction of data center networks using SDN technology has become a hot research topic. The SDN architecture innovatively separates the control plane from the data plane, which makes the network more software-oriented and agile. Moreover, it provides virtual multi-tenancy, effective resource scheduling and centralized control strategies that meet the demands of cloud computing data centers. However, the explosion of network information poses severe challenges for the SDN controller. Flow storage and lookup mechanisms based on TCAM devices lead to restricted scalability, high cost and high energy consumption. In view of this, a source-controlled data center network (SCDCN) model is proposed herein. The SCDCN model applies a new type of source routing address named the vector address (VA) as the packet-switching label. The VA completely defines the communication path, and the data forwarding process can be completed relying solely on the VA. The SCDCN architecture has four advantages. 1) The model adopts hierarchical multi-controllers and abstracts the large-scale data center network into small network domains, which removes the restriction imposed by the processing ability of a single controller and reduces the computational complexity. 2) The vector switches (VS) developed for the core network no longer use TCAM for table storage and lookup, which significantly cuts down the cost and complexity of the switches while effectively solving the scalability problem. 3) The SCDCN model simplifies the establishment process for new flows, and there is no need to download flow tables to the VS, so the amount of control signaling consumed when establishing new flows can be significantly decreased. 4) We implemented the VS on the NetFPGA platform. The statistical results show that the hardware resource consumption of a VS is about 27% of that of an OpenFlow switch (OFS).
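
The core idea of the vector address, forwarding that relies only on a per-hop label carried by the packet rather than a table lookup, can be sketched as follows (the topology, port numbering and address encoding are illustrative, not the SCDCN wire format):

```python
# Source routing with a "vector address": the sender encodes the outgoing
# port for every hop, so switches forward without any flow-table lookup.

def forward(vector_address, topology, src):
    """Follow the per-hop port list through a {node: {port: neighbour}} map."""
    node = src
    for port in vector_address:
        node = topology[node][port]
    return node

topology = {
    "h1": {0: "s1"},
    "s1": {0: "h1", 1: "s2"},
    "s2": {0: "s1", 1: "h2"},
    "h2": {0: "s2"},
}

# Path h1 -> s1 -> s2 -> h2 expressed purely as outgoing ports
print(forward([0, 1, 1], topology, "h1"))   # prints "h2"
```

Each switch consumes one element of the list, which is why no per-flow state has to be downloaded to the switches.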

  17. A source-controlled data center network model

    Science.gov (United States)

    Yu, Yang; Liang, Mangui; Wang, Zhe

    2017-01-01

    The construction of data center networks using SDN technology has become a hot research topic. The SDN architecture innovatively separates the control plane from the data plane, which makes the network more software-oriented and agile. Moreover, it provides virtual multi-tenancy, effective resource scheduling and centralized control strategies that meet the demands of cloud computing data centers. However, the explosion of network information poses severe challenges for the SDN controller. Flow storage and lookup mechanisms based on TCAM devices lead to restricted scalability, high cost and high energy consumption. In view of this, a source-controlled data center network (SCDCN) model is proposed herein. The SCDCN model applies a new type of source routing address named the vector address (VA) as the packet-switching label. The VA completely defines the communication path, and the data forwarding process can be completed relying solely on the VA. The SCDCN architecture has four advantages. 1) The model adopts hierarchical multi-controllers and abstracts the large-scale data center network into small network domains, which removes the restriction imposed by the processing ability of a single controller and reduces the computational complexity. 2) The vector switches (VS) developed for the core network no longer use TCAM for table storage and lookup, which significantly cuts down the cost and complexity of the switches while effectively solving the scalability problem. 3) The SCDCN model simplifies the establishment process for new flows, and there is no need to download flow tables to the VS, so the amount of control signaling consumed when establishing new flows can be significantly decreased. 4) We implemented the VS on the NetFPGA platform. The statistical results show that the hardware resource consumption of a VS is about 27% of that of an OpenFlow switch (OFS). PMID:28328925

  18. Kurtosis based blind source extraction of complex noncircular signals with application in EEG artifact removal in real-time

    Directory of Open Access Journals (Sweden)

    Soroush eJavidi

    2011-10-01

    Full Text Available A new class of complex domain blind source extraction (BSE algorithms suitable for the extraction of both circular and noncircular complex signals is proposed. This is achieved through sequential extraction based on the degree of kurtosis and in the presence of noncircular measurement noise. The existence and uniqueness analysis of the solution is followed by a study of fast converging variants of the algorithm. The performance is first assessed through simulations on well understood benchmark signals, followed by a case study on real-time artifact removal from EEG signals, verified using both qualitative and quantitative metrics. The results illustrate the power of the proposed approach in real-time blind extraction of general complex-valued sources.
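
A real-valued, circular simplification of kurtosis-based extraction can be sketched with a fixed-point iteration using the cubic nonlinearity (the paper's algorithms operate on complex noncircular signals and noise, which this toy omits; the mixing matrix and source distributions are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
# Two sources: a super-Gaussian (Laplacian, positive excess kurtosis)
# and a Gaussian; only the kurtotic one is a valid extraction target
s = np.vstack([rng.laplace(size=n), rng.normal(size=n)])
x = np.array([[1.0, 0.6], [0.4, 1.0]]) @ s        # observed mixtures

# Whitening: decorrelate and normalise the mixtures
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = (E @ np.diag(d ** -0.5) @ E.T) @ x

# Fixed-point iteration maximising |kurtosis| (cubic nonlinearity);
# further sources would be extracted sequentially by deflation
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for _ in range(100):
    w = (z * ((w @ z) ** 3)).mean(axis=1) - 3 * w
    w /= np.linalg.norm(w)

y = w @ z
excess_kurtosis = np.mean(y ** 4) - 3
print(round(excess_kurtosis, 2))   # close to 3, the Laplacian's excess kurtosis
```

The Gaussian direction is an unstable fixed point of this iteration, so a random initialisation converges to the kurtotic source, which is what makes kurtosis a usable extraction criterion.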

  19. Mixed-signal instrumentation for large-signal device characterization and modelling

    NARCIS (Netherlands)

    Marchetti, M.

    2013-01-01

    This thesis concentrates on the development of advanced large-signal measurement and characterization tools to support technology development, model extraction and validation, and power amplifier (PA) designs that address the newly introduced third and fourth generation (3G and 4G) wideband

  20. Modeling and reliability analysis of three phase z-source AC-AC converter

    Directory of Open Access Journals (Sweden)

    Prasad Hanuman

    2017-12-01

    Full Text Available This paper presents small-signal modeling, using the state-space averaging technique, and reliability analysis of a three-phase z-source ac-ac converter. By controlling the shoot-through duty ratio, the converter can operate in buck-boost mode and maintain the desired output voltage during voltage sag and surge conditions. It has a faster dynamic response and higher efficiency than a traditional voltage regulator. The small-signal analysis derives the various control transfer functions, which leads to the design of a suitable controller for the closed-loop system under supply voltage variation. The closed-loop system of the converter with a PID controller eliminates transients in the output voltage and provides a regulated steady-state output. The proposed model was designed in RT-LAB and executed on a field-programmable gate array (FPGA)-based real-time digital simulator at a fixed time step of 10 μs and a constant switching frequency of 10 kHz. The simulator was developed using the very high speed integrated circuit hardware description language (VHDL), making it versatile and portable. Hardware-in-the-loop (HIL) simulation results are presented to corroborate the MATLAB simulation results for the three-phase z-source ac-ac converter under supply voltage variation. Reliability analysis has been applied to the converter to find the failure rates of its different components.
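
The buck-boost behaviour governed by the shoot-through duty ratio can be illustrated with the textbook ideal gain of a z-source network. Losses, the three-phase topology and the paper's closed-loop control are ignored; the relation G = 1/(1 - 2D) is the standard ideal result for a z-source network and is assumed here to apply:

```python
def zsource_gain(d_shoot):
    """Ideal voltage gain of a z-source network versus shoot-through duty
    ratio D (textbook relation G = 1/(1 - 2D), valid for 0 <= D < 0.5;
    a real converter adds control dynamics and losses on top of this)."""
    if not 0.0 <= d_shoot < 0.5:
        raise ValueError("shoot-through duty ratio must lie in [0, 0.5)")
    return 1.0 / (1.0 - 2.0 * d_shoot)

print(zsource_gain(0.0))    # 1.0 -> pass-through
print(zsource_gain(0.25))   # 2.0 -> boost compensating a 50 % sag
```

The controller described in the abstract effectively searches this curve: during a sag it raises D toward (but never reaching) 0.5 to boost the output back to the set point.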

  1. A method for measuring power signal background and source strength in a fission reactor

    International Nuclear Information System (INIS)

    Baers, B.; Kall, L.; Visuri, P.

    1977-01-01

    Theory and experimental verification of a novel method for measuring power signal bias and source strength in a fission reactor are reported. A minicomputer was applied in the measurements. The method is an extension of the inverse kinetics method presented by Mogilner et al. (Auth.)

  2. Tilt signals at Mount Melbourne, Antarctica: evidence of a shallow volcanic source

    Directory of Open Access Journals (Sweden)

    Salvatore Gambino

    2016-06-01

    Full Text Available Mount Melbourne (74°21′ S, 164°43′ E) is a quiescent volcano located in northern Victoria Land, Antarctica. Tilt signals have been recorded on Mount Melbourne since early 1989 by a permanent network of five shallow-borehole tiltmeter stations. An overall picture of tilt, air temperature and permafrost temperature over 15 years of continuous recording is reported. We focused our observations on long-term tilt trends, which at the end of 1997 showed coherent changes at the three highest-altitude stations, suggesting the presence of a ground deformation source whose effects are restricted to the summit area of Mount Melbourne. We inverted these data using a finite spherical body source, thereby obtaining a shallow deflating volume source located under the summit area. The ground deformation observed corroborates the hypothesis that the volcanic edifice of Mount Melbourne is active and should be monitored with a multidisciplinary approach.
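
A point-source (Mogi) approximation is a common stand-in for the finite spherical source used in such inversions; the sketch below computes the radial ground tilt produced by a deflating source (the half-space Mogi formula is standard, but the numerical values are illustrative, not the paper's inversion result):

```python
import numpy as np

def mogi_tilt(r, depth, dV, nu=0.25):
    """Radial ground tilt (radians) from a Mogi point source in an elastic
    half-space: tilt = 3(1-nu) dV depth r / (pi (r^2 + depth^2)^(5/2)).
    r: radial distance (m); depth: source depth (m); dV: volume change (m^3)."""
    return 3.0 * (1.0 - nu) * dV * depth * r / (np.pi * (r**2 + depth**2) ** 2.5)

# Deflation of 1e6 m^3 at 1 km depth, observed 500 m from the source axis
tilt_rad = mogi_tilt(500.0, 1000.0, -1e6)
print(tilt_rad * 1e6, "microradians")
```

The negative sign corresponds to deflation; an inversion such as the paper's adjusts depth and dV until modelled tilts at all stations match the observed long-term trends.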

  3. A posteriori error estimates in voice source recovery

    Science.gov (United States)

    Leonov, A. S.; Sorokin, V. N.

    2017-12-01

    The inverse problem of voice source pulse recovery from a segment of a speech signal is under consideration. A special mathematical model relating these quantities is used for the solution. A variational method for solving the inverse problem of voice source recovery is proposed for a new parametric class of sources, namely piecewise-linear sources (PWL-sources). A technique for a posteriori numerical error estimation of the obtained solutions is also presented. A computer study of the adequacy of the adopted speech-production model with PWL-sources is performed by solving the inverse problem for various types of voice signals, together with a corresponding study of the a posteriori error estimates. Numerical experiments on speech signals show satisfactory properties of the proposed a posteriori error estimates, which represent upper bounds on the possible errors in solving the inverse problem. The estimate of the most probable error in determining the source-pulse shapes is about 7-8% for the investigated speech material. A posteriori error estimates can also serve as a quality criterion for the obtained voice source pulses in speaker-recognition applications.

  4. Sensitivity of the coastal tsunami simulation to the complexity of the 2011 Tohoku earthquake source model

    Science.gov (United States)

    Monnier, Angélique; Loevenbruck, Anne; Gailler, Audrey; Hébert, Hélène

    2016-04-01

    The 11 March 2011 Tohoku-Oki event, both the earthquake and the tsunami, is exceptionally well documented. A wide range of onshore and offshore data has been recorded by seismic, geodetic, ocean-bottom pressure and sea level sensors. Along with these numerous observations, advances in inversion techniques and computing facilities have led to many source studies. Inversion of rupture parameters such as the slip distribution and rupture history makes it possible to estimate the complex coseismic seafloor deformation. The most relevant coseismic source models among the numerous published seismic source studies are tested. Comparing the signals predicted using both static and kinematic ruptures with the offshore and coastal measurements helps determine which source model should be used to obtain the most consistent coastal tsunami simulations. This work is funded by the TANDEM project, reference ANR-11-RSNR-0023-01 of the French Programme Investissements d'Avenir (PIA 2014-2018).

  5. Learning models for multi-source integration

    Energy Technology Data Exchange (ETDEWEB)

    Tejada, S.; Knoblock, C.A.; Minton, S. [Univ. of Southern California/ISI, Marina del Rey, CA (United States)

    1996-12-31

    Because of the growing number of information sources available through the internet, there are many cases in which information needed to solve a problem or answer a question is spread across several information sources. For example, when given two sources, one about comic books and the other about super heroes, you might want to ask the question "Is Spiderman a Marvel Super Hero?" This query accesses both sources; therefore, it is necessary to have information about the relationships of the data within each source and between sources to properly access and integrate the data retrieved. The SIMS information broker captures this type of information in the form of a model. All the information sources map into the model, providing the user a single interface to multiple sources.

  6. Patterns of flavor signals in supersymmetric models

    Energy Technology Data Exchange (ETDEWEB)

    Goto, T. [KEK National High Energy Physics, Tsukuba (Japan); Kyoto Univ. (Japan). YITP]; Okada, Y. [KEK National High Energy Physics, Tsukuba (Japan); Graduate Univ. for Advanced Studies, Tsukuba (Japan). Dept. of Particle and Nuclear Physics]; Shindou, T. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); International School for Advanced Studies, Trieste (Italy)]; Tanaka, M. [Osaka Univ., Toyonaka (Japan). Dept. of Physics]

    2007-11-15

    Quark and lepton flavor signals are studied in four supersymmetric models, namely the minimal supergravity model, the minimal supersymmetric standard model with right-handed neutrinos, SU(5) supersymmetric grand unified theory with right-handed neutrinos and the minimal supersymmetric standard model with U(2) flavor symmetry. We calculate b→s(d) transition observables in B_d and B_s decays, taking the constraint from the B_s-anti-B_s mixing recently observed at the Tevatron into account. We also calculate lepton flavor violating processes μ → eγ, τ → μγ and τ → eγ for the models with right-handed neutrinos. We investigate possibilities to distinguish the flavor structure of the supersymmetry breaking sector with use of patterns of various flavor signals which are expected to be measured in experiments such as MEG, LHCb and a future Super B Factory. (orig.)

  7. Patterns of flavor signals in supersymmetric models

    International Nuclear Information System (INIS)

    Goto, T.; Tanaka, M.

    2007-11-01

    Quark and lepton flavor signals are studied in four supersymmetric models, namely the minimal supergravity model, the minimal supersymmetric standard model with right-handed neutrinos, SU(5) supersymmetric grand unified theory with right-handed neutrinos and the minimal supersymmetric standard model with U(2) flavor symmetry. We calculate b→s(d) transition observables in B_d and B_s decays, taking the constraint from the B_s-anti-B_s mixing recently observed at the Tevatron into account. We also calculate lepton flavor violating processes μ → eγ, τ → μγ and τ → eγ for the models with right-handed neutrinos. We investigate possibilities to distinguish the flavor structure of the supersymmetry breaking sector with use of patterns of various flavor signals which are expected to be measured in experiments such as MEG, LHCb and a future Super B Factory. (orig.)

  8. Vibration Signal Forecasting on Rotating Machinery by means of Signal Decomposition and Neurofuzzy Modeling

    Directory of Open Access Journals (Sweden)

    Daniel Zurita-Millán

    2016-01-01

    Full Text Available Vibration monitoring plays a key role in industrial machinery reliability since it allows enhancing the performance of the machinery under supervision through the detection of failure modes. Thus, vibration monitoring schemes that give information regarding future condition, that is, prognosis approaches, are of growing interest for the scientific and industrial communities. This work proposes a vibration signal prognosis methodology, applied to a rotating electromechanical system and its associated kinematic chain. The method combines the adaptability of neurofuzzy modeling with a signal decomposition strategy to model the patterns of the vibration signal under different fault scenarios. The model tuning is performed by means of Genetic Algorithms along with a correlation-based interval selection procedure. The performance and effectiveness of the proposed method are validated experimentally with an electromechanical test bench containing a kinematic chain. The results of the study indicate the suitability of the method for vibration forecasting in complex electromechanical systems and their associated kinematic chains.

  9. A computational model of human auditory signal processing and perception

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve; Ewert, Stephan D.; Dau, Torsten

    2008-01-01

    A model of computational auditory signal-processing and perception that accounts for various aspects of simultaneous and nonsimultaneous masking in human listeners is presented. The model is based on the modulation filterbank model described by Dau et al. [J. Acoust. Soc. Am. 102, 2892 (1997)] ... discrimination with pure tones and broadband noise, tone-in-noise detection, spectral masking with narrow-band signals and maskers, forward masking with tone signals and tone or noise maskers, and amplitude-modulation detection with narrow- and wideband noise carriers. The model can account for most of the key properties of the data and is more powerful than the original model. The model might be useful as a front end in technical applications.

  10. A repeatable seismic source for tomography at volcanoes

    Directory of Open Access Journals (Sweden)

    A. Ratdomopurbo

    1999-06-01

    Full Text Available One major problem associated with the interpretation of seismic signals on active volcanoes is the lack of knowledge about the internal structure of the volcano. Assuming a 1D or a homogeneous instead of a 3D velocity structure leads to an erroneous localization of seismic events. In order to derive a high resolution 3D velocity model of Mt. Merapi (Java), a seismic tomography experiment using active sources is planned as a part of the MERAPI (Mechanism Evaluation, Risk Assessment and Prediction Improvement) project. During a pre-site survey in August 1996 we tested a seismic source consisting of a 2.5 l airgun shot in water basins that were constructed in different flanks of the volcano. This special source, which in our case can be fired every two minutes, produces a repeatable, identical source signal. Using this source, the number of receiver locations is not limited by the number of seismometers. The seismometers can be moved to various receiver locations while the source reproduces the same source signal. Additionally, at each receiver location we are able to record the identical source signal several times, so that the disadvantage of the lower energy compared to an explosion source can be reduced by skipping disturbed signals and stacking several recordings.
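
The benefit of a repeatable source is that stacking N recordings of the identical signal raises the SNR by roughly 10·log10(N) dB, since the coherent wavelet adds linearly while uncorrelated noise adds in power. A small simulation (the wavelet and noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
wavelet = np.exp(-((t - 0.3) ** 2) / 0.001)       # identical source signal

def snr_db(record):
    """SNR of one record against the known repeatable wavelet."""
    noise = record - wavelet
    return 10 * np.log10(np.sum(wavelet**2) / np.sum(noise**2))

# 16 repeated shots: same wavelet, independent noise realisations
shots = [wavelet + rng.normal(0, 0.5, t.size) for _ in range(16)]
single = snr_db(shots[0])
stacked = snr_db(np.mean(shots, axis=0))
print(stacked - single)    # close to 10*log10(16) ~ 12 dB of improvement
```

Skipping visibly disturbed shots before stacking, as the abstract describes, keeps the noise uncorrelated so that this square-root-of-N gain is actually realised.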

  11. Laser scanner data processing and 3D modeling using a free and open source software

    International Nuclear Information System (INIS)

    Gabriele, Fatuzzo; Michele, Mangiameli; Giuseppe, Mussumeci; Salvatore, Zito

    2015-01-01

    Laser scanning is a technology that makes it possible to survey geometric objects quickly and with a high level of detail and completeness, based on the signal emitted by the laser and the corresponding return signal. When the incident laser radiation hits the object to be detected, the radiation is reflected. The purpose is to build a three-dimensional digital model that reconstructs the object and supports studies regarding its design, restoration and/or conservation. When the laser scanner is equipped with a digital camera, the result of the measurement process is a set of points in XYZ coordinates of high density and accuracy, with radiometric RGB tones. In this case, the set of measured points is called a "point cloud" and allows the reconstruction of the Digital Surface Model. Although post-processing is usually performed with closed source software, whose copyright restricts free use, free and open source software can markedly increase performance. Indeed, the latter can be used freely and provides the possibility to display and even customize the source code. The experience started at the Faculty of Engineering in Catania is aimed at evaluating a valuable free and open source tool, MeshLab (an Italian software package for data processing), against a reference closed source software for data processing, RapidForm. In this work, we compare the results obtained with MeshLab and RapidForm through the planning of the survey and the acquisition of the point cloud of a morphologically complex statue.

  12. Laser scanner data processing and 3D modeling using a free and open source software

    Energy Technology Data Exchange (ETDEWEB)

    Gabriele, Fatuzzo [Dept. of Industrial and Mechanical Engineering, University of Catania (Italy); Michele, Mangiameli, E-mail: amichele.mangiameli@dica.unict.it; Giuseppe, Mussumeci; Salvatore, Zito [Dept. of Civil Engineering and Architecture, University of Catania (Italy)

    2015-03-10

    Laser scanning is a technology that makes it possible to survey geometric objects quickly and with a high level of detail and completeness, based on the signal emitted by the laser and the corresponding return signal. When the incident laser radiation hits the object to be detected, the radiation is reflected. The purpose is to build a three-dimensional digital model that reconstructs the object and supports studies regarding its design, restoration and/or conservation. When the laser scanner is equipped with a digital camera, the result of the measurement process is a set of points in XYZ coordinates of high density and accuracy, with radiometric RGB tones. In this case, the set of measured points is called a "point cloud" and allows the reconstruction of the Digital Surface Model. Although post-processing is usually performed with closed source software, whose copyright restricts free use, free and open source software can markedly increase performance. Indeed, the latter can be used freely and provides the possibility to display and even customize the source code. The experience started at the Faculty of Engineering in Catania is aimed at evaluating a valuable free and open source tool, MeshLab (an Italian software package for data processing), against a reference closed source software for data processing, RapidForm. In this work, we compare the results obtained with MeshLab and RapidForm through the planning of the survey and the acquisition of the point cloud of a morphologically complex statue.

  13. Batteryless wireless transmission system for electronic drum uses piezoelectric generator for play signal and power source

    International Nuclear Information System (INIS)

    Nishikawa, H; Yoshimi, A; Takemura, K; Tanaka, A; Douseki, T

    2015-01-01

    A batteryless self-powered wireless transmission system has been developed that sends a signal from a drum pad to a synthesizer. The power generated by a piezoelectric generator functions both as the “Play” signal for the synthesizer and as the power source for the transmitter. An FM transmitter, which theoretically operates with zero latency, and a receiver with quick-response squelch of the received signal were developed for wireless transmission with a minimum system delay. Experimental results for an electronic drum without any connecting wires fully demonstrated the feasibility of self-powered wireless transmission with a latency of 900 μs. (paper)

  14. Interpretation of the MEG-MUSIC scan in biomagnetic source localization

    Energy Technology Data Exchange (ETDEWEB)

    Mosher, J.C.; Lewis, P.S. [Los Alamos National Lab., NM (United States); Leahy, R.M. [University of Southern California, Los Angeles, CA (United States). Signal and Image Processing Inst.

    1993-09-01

    MEG-MUSIC is a new approach to MEG source localization. MEG-MUSIC is based on a spatio-temporal source model in which the observed biomagnetic fields are generated by a small number of current dipole sources with fixed positions/orientations and varying strengths. From the spatial covariance matrix of the observed fields, a signal subspace can be identified. The rank of this subspace is equal to the number of elemental sources present. This signal subspace is used in a projection metric that scans the three-dimensional head volume. Given a perfect signal subspace estimate and a perfect forward model, the metric will peak at unity at each dipole location. In practice, the signal subspace estimate is contaminated by noise, which in turn yields MUSIC peaks which are less than unity. Previously we examined the lower bounds on localization error, independent of the choice of localization procedure. In this paper, we analyzed the effects of noise and temporal coherence on the signal subspace estimate and the resulting effects on the MEG-MUSIC peaks.
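
The subspace projection metric can be sketched in a few lines: estimate the signal subspace from the spatial covariance, then score every candidate location by the norm of its lead field's projection onto that subspace. The random "lead fields" below stand in for a real MEG forward model, and the geometry is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy forward model: each candidate location maps to a unit lead-field vector
n_sensors, n_locs = 32, 40
lead_fields = rng.normal(size=(n_sensors, n_locs))
lead_fields /= np.linalg.norm(lead_fields, axis=0)

# Two fixed-position sources (locations 5 and 20) with varying strengths
timecourses = rng.normal(size=(2, 300))
data = lead_fields[:, [5, 20]] @ timecourses + 0.1 * rng.normal(size=(n_sensors, 300))

# Signal subspace: leading eigenvectors of the spatial covariance matrix;
# its rank equals the number of elemental sources (here known to be 2)
cov = data @ data.T / data.shape[1]
eigvals, eigvecs = np.linalg.eigh(cov)
Us = eigvecs[:, -2:]

# MUSIC metric: projection norm of each lead field onto the signal subspace
music = np.linalg.norm(Us.T @ lead_fields, axis=0)
peaks = sorted(int(i) for i in np.argsort(music)[-2:])
print(peaks)   # the metric peaks at the true locations, [5, 20]
```

With noiseless data the metric would reach exactly unity at the true locations; the additive noise here pulls the peaks slightly below one, which is the effect the paper analyzes.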

  15. One pair of hands is not like another: caudate BOLD response in dogs depends on signal source and canine temperament

    Directory of Open Access Journals (Sweden)

    Peter F. Cook

    2014-09-01

    Full Text Available Having previously used functional MRI to map the response to a reward signal in the ventral caudate in awake unrestrained dogs, here we examined the importance of signal source to canine caudate activation. Hand signals representing either incipient reward or no reward were presented by a familiar human (each dog's respective handler, an unfamiliar human, and via illustrated images of hands on a computer screen to 13 dogs undergoing voluntary fMRI. All dogs had received extensive training with the reward and no-reward signals from their handlers and with the computer images and had minimal exposure to the signals from strangers. All dogs showed differentially higher BOLD response in the ventral caudate to the reward versus no-reward signals, and there was a robust effect at the group level. Further, differential response to the signal source had a highly significant interaction with a dog's general aggressivity as measured by the C-BARQ canine personality assessment. Dogs with greater aggressivity showed a higher differential response to the reward signal versus no-reward signal presented by the unfamiliar human and computer, while dogs with lower aggressivity showed a higher differential response to the reward signal versus no-reward signal from their handler. This suggests that specific facets of canine temperament bear more strongly on the perceived reward value of relevant communication signals than does reinforcement history, as each of the dogs was reinforced similarly for each signal, regardless of the source (familiar human, unfamiliar human, or computer. A group-level psychophysiological interaction (PPI connectivity analysis showed increased functional coupling between the caudate and a region of cortex associated with visual discrimination and learning on reward versus no-reward trials. Our findings emphasize the sensitivity of the domestic dog to human social interaction, and may have other implications and applications

  16. Analysis of the Degradation of MOSFETs in Switching Mode Power Supply by Characterizing Source Oscillator Signals

    Directory of Open Access Journals (Sweden)

    Xueyan Zheng

    2013-01-01

    Full Text Available Switching Mode Power Supply (SMPS) has been widely applied in aeronautics, nuclear power, high-speed railways, and other areas related to national strategy and security. The degradation of MOSFETs occupies a dominant position among the key factors affecting the reliability of SMPS. MOSFETs are used as low-voltage switches to regulate the DC voltage in SMPS. Studies have shown that die-attach degradation leads to an increase in on-state resistance due to its dependence on junction temperature. On-state resistance is the key indicator of the health of MOSFETs. In this paper, an online real-time method is presented for predicting the degradation of MOSFETs. First, the relationship between the source oscillator signal and on-state resistance is introduced. Because oscillator signals change as the device ages, a feature is proposed to capture these changes and use them as indicators of the state of health of MOSFETs. A characterization test platform is then established to monitor the source oscillator signals. Changes in the measured oscillator signal were observed as on-state resistance increased due to die-attach degradation. The experimental results demonstrate that the method is efficient. This study will enable the development of a method to predict the failure of MOSFETs.

  17. State–time spectrum of signal transduction logic models

    International Nuclear Information System (INIS)

    MacNamara, Aidan; Terfve, Camille; Henriques, David; Bernabé, Beatriz Peñalver; Saez-Rodriguez, Julio

    2012-01-01

    Despite the current wealth of high-throughput data, our understanding of signal transduction is still incomplete. Mathematical modeling can be a tool to gain insight into such processes. Detailed biochemical modeling provides deep understanding, but does not scale well beyond a relatively small number of proteins. In contrast, logic modeling can be used where the biochemical knowledge of the system is sparse and, because it is parameter free (or, at most, uses relatively few parameters), it scales well to large networks that can be derived by manual curation or retrieved from public databases. Here, we present an overview of logic modeling formalisms in the context of training logic models to data, and specifically the different approaches to modeling qualitative to quantitative data (state) and dynamics (time) of signal transduction. We use a toy model of signal transduction to illustrate how different logic formalisms (Boolean, fuzzy logic and differential equations) treat state and time. Different formalisms allow for different features of the data to be captured, at the cost of extra requirements in terms of computational power and data quality and quantity. Through this demonstration, the assumptions behind each formalism are discussed, as well as their advantages, disadvantages and possible future developments. (paper)
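To make the comparison of formalisms concrete, here is a minimal sketch of one hypothetical two-step cascade treated both as a synchronous Boolean model (discrete state and time) and as a Hill-type ODE (continuous state and time). The topology, rate forms, and parameters are illustrative assumptions, not the toy model used in the record above.

```python
import numpy as np

# Toy cascade: ligand L activates receptor R, which activates transcription factor T.
# Boolean (synchronous) update: states are in {0, 1}, time is discrete.
def boolean_step(state):
    L, R, T = state
    return (L, L, R)  # R copies L, T copies R, each with a one-step delay

state = (1, 0, 0)               # ligand switched on
trajectory = [state]
for _ in range(3):
    state = boolean_step(state)
    trajectory.append(state)
# After two steps the signal has fully propagated to T: (1, 1, 1).

# Continuous analogue: simple ODEs with Hill activation, integrated by Euler.
def hill(x, k=0.5, n=2):
    return x**n / (k**n + x**n)

dt, steps = 0.01, 2000
R_c, T_c = 0.0, 0.0
for _ in range(steps):
    dR = hill(1.0) - R_c        # ligand clamped at 1; first-order decay
    dT = hill(R_c) - T_c
    R_c += dt * dR
    T_c += dt * dT
# R_c and T_c settle to graded steady-state levels rather than 0/1 values.
```

The Boolean version captures only the order of activation, while the ODE version also yields graded steady-state amplitudes, at the cost of needing kinetic parameters.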

  18. Integrated source-risk model for radon: A definition study

    International Nuclear Information System (INIS)

    Laheij, G.M.H.; Aldenkamp, F.J.; Stoop, P.

    1993-10-01

    The purpose of a source-risk model is to support policy making on radon mitigation by comparing the effects of various policy options and to enable optimization of countermeasures applied to different parts of the source-risk chain. There are several advantages to developing and using a source-risk model: risk calculations are standardized; the effects of measures applied to different parts of the source-risk chain can be better compared because interactions are included; and sensitivity analyses can be used to determine the most important parameters within the total source-risk chain. After an inventory of the processes and sources to be included in the source-risk chain, the models presently available in the Netherlands were investigated. The models were screened for completeness, validation and operational status. The investigation made clear that a source-risk chain model for radon may be realized by choosing, for each part of the source-risk chain, the most suitable model. However, the calculation of dose from the radon concentrations and the status of the validation of most models should be improved. Calculations with the proposed source-risk model will, at the moment, give estimates with a large uncertainty. For further development of the source-risk model, an interaction between the source-risk model and experimental research is recommended. Organisational forms of the source-risk model are discussed. A source-risk model in which only simple models are included is also recommended. The other models are operated and administrated by the model owners. The model owners execute their models for a combination of input parameters. The output of the models is stored in a database which will be used for calculations with the source-risk model. 5 figs., 15 tabs., 7 appendices, 14 refs

  19. Characterization and modeling of the heat source

    Energy Technology Data Exchange (ETDEWEB)

    Glickstein, S.S.; Friedman, E.

    1993-10-01

    A description of the input energy source is basic to any numerical modeling formulation designed to predict the outcome of the welding process. The source is fundamental and unique to each joining process. The resultant output of any numerical model will be affected by the initial description of both the magnitude and distribution of the input energy of the heat source. Thus, calculated weld shape, residual stresses, weld distortion, cooling rates, metallurgical structure, material changes due to excessive temperatures and potential weld defects are all influenced by the initial characterization of the heat source. An understanding of both the physics and the mathematical formulation of these sources is essential for describing the input energy distribution. This section provides a brief review of the physical phenomena that influence the input energy distributions and discusses several different models of heat sources that have been used in simulating arc welding, high energy density welding and resistance welding processes. Both simplified and detailed models of the heat source are discussed.

  20. Source-Modeling Auditory Processes of EEG Data Using EEGLAB and Brainstorm

    Directory of Open Access Journals (Sweden)

    Maren Stropahl

    2018-05-01

    Full Text Available Electroencephalography (EEG) source localization approaches are often used to disentangle the spatial patterns mixed up in scalp EEG recordings. However, approaches differ substantially between experiments, may be strongly parameter-dependent, and results are not necessarily meaningful. In this paper we provide a pipeline for EEG source estimation, from raw EEG data pre-processing using EEGLAB functions up to source-level analysis as implemented in Brainstorm. The pipeline is tested using a data set of 10 individuals performing an auditory attention task. The analysis approach estimates sources of 64-channel EEG data without the prerequisite of individual anatomies or individually digitized sensor positions. First, we show advanced EEG pre-processing using EEGLAB, which includes artifact attenuation using independent component analysis (ICA). ICA is a linear decomposition technique that aims to reveal the underlying statistical sources of mixed signals and is furthermore a powerful tool to attenuate stereotypical artifacts (e.g., eye movements or heartbeat). Data submitted to ICA are pre-processed to facilitate good-quality decompositions. Aiming toward an objective approach to component identification, the semi-automatic CORRMAP algorithm is applied for the identification of components representing prominent and stereotypic artifacts. Second, we present a step-wise approach to estimate active sources of auditory cortex event-related processing, on a single-subject level. The presented approach assumes that no individual anatomy is available and therefore the default anatomy ICBM152, as implemented in Brainstorm, is used for all individuals. Individual noise modeling in this dataset is based on the pre-stimulus baseline period. For EEG source modeling we use the OpenMEEG algorithm as the underlying forward model based on the symmetric Boundary Element Method (BEM). We then apply the method of dynamical statistical parametric mapping (dSPM) to obtain

  1. HP Memristor mathematical model for periodic signals and DC

    KAUST Repository

    Radwan, Ahmed G.

    2012-07-28

    In this paper, mathematical models of the HP Memristor for DC and periodic signal inputs are provided. The need for a rigid model for the Memristor using conventional current and voltage quantities is essential for the development of many promising Memristor applications. Unlike previous works, which focus on the sinusoidal input waveform, we derive rules for any periodic signal in general in terms of voltage and current. Square and triangle waveforms are studied explicitly, and the formulas are extended to any general square wave. The limiting conditions for saturation are also provided for both DC and periodic signals. The derived equations are compared to the SPICE model of the Memristor, showing a perfect match.
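The linear dopant-drift HP Memristor model behind such analyses can be sketched numerically; the parameter values below are illustrative orders of magnitude from the HP literature, not those of the record above, and the hard saturation bound is a simplifying assumption.

```python
import numpy as np

# Illustrative HP-style parameters (orders of magnitude only)
R_on, R_off = 100.0, 16e3        # on/off resistances (ohms)
D = 10e-9                        # device thickness (m)
mu_v = 1e-14                     # dopant mobility (m^2 s^-1 V^-1)

dt = 1e-6
t = np.arange(0, 2e-2, dt)
v = 1.0 * np.sin(2 * np.pi * 50 * t)     # 50 Hz sinusoidal drive

x = 0.1                          # normalised state w/D, starting near R_off
i = np.zeros_like(t)
for k in range(len(t)):
    M = R_on * x + R_off * (1 - x)       # memristance for current state
    i[k] = v[k] / M
    x += dt * mu_v * R_on / D**2 * i[k]  # linear dopant-drift state equation
    x = min(max(x, 0.0), 1.0)            # hard saturation bounds on the state
# The i-v trajectory is a pinched hysteresis loop: i = 0 whenever v = 0.
```

Replacing `v` with a square or triangle waveform reproduces the periodic-input cases the paper treats analytically.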

  2. Large-signal modeling method for power FETs and diodes

    Energy Technology Data Exchange (ETDEWEB)

    Sun Lu; Wang Jiali; Wang Shan; Li Xuezheng; Shi Hui; Wang Na; Guo Shengping, E-mail: sunlu_1019@126.co [School of Electromechanical Engineering, Xidian University, Xi' an 710071 (China)

    2009-06-01

    Under a large-signal drive level, a frequency-domain black-box model based on the nonlinear scattering function is introduced for power FETs and diodes. A time-domain measurement system and a calibration method based on a digital oscilloscope are designed to extract the nonlinear scattering function of semiconductor devices. The extracted models can reflect the real electrical performance of semiconductor devices and provide a new large-signal model for the design of microwave semiconductor circuits.

  3. Large-signal modeling method for power FETs and diodes

    International Nuclear Information System (INIS)

    Sun Lu; Wang Jiali; Wang Shan; Li Xuezheng; Shi Hui; Wang Na; Guo Shengping

    2009-01-01

    Under a large-signal drive level, a frequency-domain black-box model based on the nonlinear scattering function is introduced for power FETs and diodes. A time-domain measurement system and a calibration method based on a digital oscilloscope are designed to extract the nonlinear scattering function of semiconductor devices. The extracted models can reflect the real electrical performance of semiconductor devices and provide a new large-signal model for the design of microwave semiconductor circuits.

  4. Enhancement of speech signals - with a focus on voiced speech models

    DEFF Research Database (Denmark)

    Nørholm, Sidsel Marie

    This thesis deals with speech enhancement, i.e., noise reduction in speech signals. This has applications in, e.g., hearing aids and teleconference systems. We consider a signal-driven approach to speech enhancement where a model of the speech is assumed and filters are generated based...... on this model. The basic model used in this thesis is the harmonic model which is a commonly used model for describing the voiced part of the speech signal. We show that it can be beneficial to extend the model to take inharmonicities or the non-stationarity of speech into account. Extending the model...

  5. MPD model for radar echo signal of hypersonic targets

    Directory of Open Access Journals (Sweden)

    Xu Xuefei

    2014-08-01

    Full Text Available The stop-and-go (SAG) model is typically used for the echo signal received by radars using linear frequency modulation pulse compression. In this study, the authors demonstrate that this model is not applicable to hypersonic targets. Instead of the SAG model, they present a more realistic echo signal model, the moving-in-pulse duration (MPD) model, for hypersonic targets. Following that, they evaluate the performance of pulse compression under the SAG and MPD models by theoretical analysis and simulations. They found that the pulse compression gain increases by 3 dB when using the MPD model instead of the SAG model in typical cases.
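The SAG assumption (the target acts as a pure delay over the whole pulse) and matched-filter pulse compression can be sketched as follows; the waveform parameters are arbitrary, and the MPD correction itself, which lets the delay vary within the pulse, is not reproduced here.

```python
import numpy as np

# Linear frequency modulation (chirp) pulse and matched-filter compression.
fs = 1e6                    # sample rate (Hz)
T = 1e-3                    # pulse duration (s)
B = 1e5                     # sweep bandwidth (Hz)
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (B / T) * t**2)   # complex LFM pulse

# Echo under the SAG assumption: the target is a pure delay.
delay = 200                 # samples
echo = np.concatenate([np.zeros(delay, dtype=complex), chirp])

# Matched filter = correlation with the conjugated, time-reversed pulse.
mf = np.conj(chirp[::-1])
compressed = np.abs(np.convolve(echo, mf))

peak = int(np.argmax(compressed))
# The compressed peak appears at delay + len(chirp) - 1 samples, with
# magnitude close to the full pulse energy (the compression gain).
```

For a hypersonic target the delay changes during the pulse, decorrelating the echo from `mf` and motivating the MPD model.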

  6. Using Bayesian Belief Network (BBN) modelling for Rapid Source Term Prediction. RASTEP Phase 1

    International Nuclear Information System (INIS)

    Knochenhauer, M.; Swaling, V.H.; Alfheim, P.

    2012-09-01

    The project is connected to the development of RASTEP, a computerized source term prediction tool aimed at providing a basis for improving off-site emergency management. RASTEP uses Bayesian belief networks (BBN) to model severe accident progression in a nuclear power plant in combination with pre-calculated source terms (i.e., amount, timing, and pathway of released radio-nuclides). The output is a set of possible source terms with associated probabilities. In the NKS project, a number of complex issues associated with the integration of probabilistic and deterministic analyses are addressed. This includes issues related to the method for estimating source terms, signal validation, and sensitivity analysis. One major task within Phase 1 of the project addressed the problem of how to make the source term module flexible enough to give reliable and valid output throughout the accident scenario. Of the alternatives evaluated, it is recommended that RASTEP is connected to a fast running source term prediction code, e.g., MARS, with a possibility of updating source terms based on real-time observations. (Author)

  7. Using Bayesian Belief Network (BBN) modelling for Rapid Source Term Prediction. RASTEP Phase 1

    Energy Technology Data Exchange (ETDEWEB)

    Knochenhauer, M.; Swaling, V.H.; Alfheim, P. [Scandpower AB, Sundbyberg (Sweden)

    2012-09-15

    The project is connected to the development of RASTEP, a computerized source term prediction tool aimed at providing a basis for improving off-site emergency management. RASTEP uses Bayesian belief networks (BBN) to model severe accident progression in a nuclear power plant in combination with pre-calculated source terms (i.e., amount, timing, and pathway of released radio-nuclides). The output is a set of possible source terms with associated probabilities. In the NKS project, a number of complex issues associated with the integration of probabilistic and deterministic analyses are addressed. This includes issues related to the method for estimating source terms, signal validation, and sensitivity analysis. One major task within Phase 1 of the project addressed the problem of how to make the source term module flexible enough to give reliable and valid output throughout the accident scenario. Of the alternatives evaluated, it is recommended that RASTEP is connected to a fast running source term prediction code, e.g., MARS, with a possibility of updating source terms based on real-time observations. (Author)

  8. Analysis of a dynamic model of guard cell signaling reveals the stability of signal propagation

    Science.gov (United States)

    Gan, Xiao; Albert, Réka

    Analyzing the long-term behaviors (attractors) of dynamic models of biological systems can provide valuable insight into biological phenotypes and their stability. We identified the long-term behaviors of a multi-level, 70-node discrete dynamic model of the stomatal opening process in plants. We reduce the model's huge state space by removing unregulated nodes and simple mediator nodes, and by simplifying the regulatory functions of selected nodes while keeping the model consistent with experimental observations. We perform attractor analysis on the resulting 32-node reduced model by two methods: (1) converting it into a Boolean model and then applying two attractor-finding algorithms; (2) theoretical analysis of the regulatory functions. We conclude that all nodes except two in the reduced model have a single attractor, and only two nodes can admit oscillations. The multistability or oscillations do not affect the stomatal opening level in any situation. This conclusion applies to the original model as well in all the biologically meaningful cases. We further demonstrate the robustness of signal propagation by showing that a large percentage of single-node knockouts do not affect the stomatal opening level. Thus, we conclude that the complex structure of this signal transduction network provides multiple information propagation pathways while not allowing extensive multistability or oscillations, resulting in robust signal propagation. Our innovative combination of methods offers a promising way to analyze multi-level models.
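Attractor identification by exhaustively walking the synchronous state-transition graph, as in method (1) above, can be sketched on a hypothetical 3-node network; the 70-node guard cell model itself is far too large for this brute-force approach.

```python
from itertools import product

# Hypothetical 3-node Boolean network (a toy, not the guard cell model):
# A is a clamped input; B is activated by A and inhibited by C; C is activated by B.
def step(state):
    A, B, C = state
    return (A, int(A and not C), B)

def canon(cycle):
    # Canonical rotation so the same cycle entered at different phases compares equal.
    rotations = [cycle[i:] + cycle[:i] for i in range(len(cycle))]
    return min(rotations)

attractors = set()
for s0 in product([0, 1], repeat=3):
    seen = []
    s = s0
    while s not in seen:                 # walk until some state repeats
        seen.append(s)
        s = step(s)
    attractors.add(canon(tuple(seen[seen.index(s):])))
# This toy network has a fixed point (A off) and a length-4 oscillation (A on),
# illustrating how negative feedback through C can admit oscillations.
```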

  9. Modeling the explosion-source region: An overview

    International Nuclear Information System (INIS)

    Glenn, L.A.

    1993-01-01

    The explosion-source region is defined as the region surrounding an underground explosion that cannot be described by elastic or anelastic theory. This region typically extends to ranges of up to 1 km/(kt)^(1/3), but for some purposes, such as yield estimation via hydrodynamic means (CORRTEX and HYDRO PLUS), the maximum range of interest is smaller by an order of magnitude. For the simulation or analysis of seismic signals, however, what is required is the time-resolved motion and stress state at the inelastic boundary. Various analytic approximations have been made for these boundary conditions, but since they rely on near-field empirical data they cannot be expected to extrapolate reliably to different explosion sites. More important, without some knowledge of the initial energy density and the characteristics of the medium immediately surrounding the explosion, these simplified models are unable to distinguish chemical from nuclear explosions, identify cavity decoupling, or account for such phenomena as anomalous dissipation via pore collapse.

  10. Source localization of rhythmic ictal EEG activity

    DEFF Research Database (Denmark)

    Beniczky, Sándor; Lantz, Göran; Rosenzweig, Ivana

    2013-01-01

    Although precise identification of the seizure-onset zone is an essential element of presurgical evaluation, source localization of ictal electroencephalography (EEG) signals has received little attention. The aim of our study was to estimate the accuracy of source localization of rhythmic ictal...... EEG activity using a distributed source model....

  11. SIG-VISA: Signal-based Vertically Integrated Seismic Monitoring

    Science.gov (United States)

    Moore, D.; Mayeda, K. M.; Myers, S. C.; Russell, S.

    2013-12-01

    Traditional seismic monitoring systems rely on discrete detections produced by station processing software; however, while such detections may constitute a useful summary of station activity, they discard large amounts of information present in the original recorded signal. We present SIG-VISA (Signal-based Vertically Integrated Seismic Analysis), a system for seismic monitoring through Bayesian inference on seismic signals. By directly modeling the recorded signal, our approach incorporates additional information unavailable to detection-based methods, enabling higher sensitivity and more accurate localization using techniques such as waveform matching. SIG-VISA's Bayesian forward model of seismic signal envelopes includes physically-derived models of travel times and source characteristics as well as Gaussian process (kriging) statistical models of signal properties that combine interpolation of historical data with extrapolation of learned physical trends. Applying Bayesian inference, we evaluate the model on earthquakes as well as the 2009 DPRK test event, demonstrating a waveform matching effect as part of the probabilistic inference, along with results on event localization and sensitivity. In particular, we demonstrate increased sensitivity from signal-based modeling, in which the SIG-VISA signal model finds statistical evidence for arrivals even at stations for which the IMS station processing failed to register any detection.
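The Gaussian-process (kriging) interpolation of signal properties mentioned above can be sketched in one dimension; the squared-exponential kernel, the synthetic data, and the length scale below are illustrative assumptions, not SIG-VISA's actual learned models.

```python
import numpy as np

# 1-D Gaussian-process (kriging) interpolation with a squared-exponential kernel.
def rbf(x1, x2, ell=1.0, sigma=1.0):
    d = x1[:, None] - x2[None, :]
    return sigma**2 * np.exp(-0.5 * (d / ell)**2)

# Synthetic "historical" signal-property observations at a few source distances.
X = np.array([0.0, 1.0, 2.5, 4.0])
y = np.sin(X)

noise = 1e-8                        # jitter for numerical stability
K = rbf(X, X) + noise * np.eye(len(X))
alpha = np.linalg.solve(K, y)       # precompute K^{-1} y

Xs = np.linspace(0, 4, 81)
mean = rbf(Xs, X) @ alpha           # posterior mean interpolates the data
```

At the training points the posterior mean reproduces the observations almost exactly; away from them it falls back smoothly, which is the interpolation-plus-extrapolation behavior the abstract describes.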

  12. Convolutive Blind Source Separation Methods

    DEFF Research Database (Denmark)

    Pedersen, Michael Syskind; Larsen, Jan; Kjems, Ulrik

    2008-01-01

    During the past decades, much attention has been given to the separation of mixed sources, in particular for the blind case where both the sources and the mixing process are unknown and only recordings of the mixtures are available. In several situations it is desirable to recover all sources from...... the recorded mixtures, or at least to segregate a particular source. Furthermore, it may be useful to identify the mixing process itself to reveal information about the physical mixing system. In some simple mixing models each recording consists of a sum of differently weighted source signals. However, in many...... real-world applications, such as in acoustics, the mixing process is more complex. In such systems, the mixtures are weighted and delayed, and each source contributes to the sum with multiple delays corresponding to the multiple paths by which an acoustic signal propagates to a microphone...
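A minimal numerical sketch of the convolutive mixing model described above, with assumed FIR mixing filters: each microphone receives weighted, delayed copies of every source, summed over the multiple propagation paths.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two sources, two microphones; each mixing path is a short FIR filter,
# so every microphone hears delayed, weighted copies of every source.
n = 1000
s = rng.standard_normal((2, n))                  # source signals
H = [[np.array([1.0, 0.0, 0.5]),                 # h_11: direct path plus an echo
      np.array([0.0, 0.6])],                     # h_12: delayed cross-path
     [np.array([0.0, 0.0, 0.7]),                 # h_21
      np.array([0.9, 0.2])]]                     # h_22

x = np.zeros((2, n))
for j in range(2):                               # microphone index
    for k in range(2):                           # source index
        x[j] += np.convolve(s[k], H[j][k])[:n]   # x_j = sum_k h_jk * s_k
```

An instantaneous mixture is the special case where every `H[j][k]` has length one; the separation methods surveyed in the record address the general filtered case.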

  13. Efficient ECG Signal Compression Using Adaptive Heart Model

    National Research Council Canada - National Science Library

    Szilagyi, S

    2001-01-01

    This paper presents an adaptive, heart-model-based electrocardiography (ECG) compression method. After conventional pre-filtering the waves from the signal are localized and the model's parameters are determined...

  14. Mushu, a free- and open source BCI signal acquisition, written in Python.

    Science.gov (United States)

    Venthur, Bastian; Blankertz, Benjamin

    2012-01-01

    The following paper describes Mushu, a signal acquisition software for retrieval and online streaming of Electroencephalography (EEG) data. It is written, but not limited, to the needs of Brain Computer Interfacing (BCI). It's main goal is to provide a unified interface to EEG data regardless of the amplifiers used. It runs under all major operating systems, like Windows, Mac OS and Linux, is written in Python and is free- and open source software licensed under the terms of the GNU General Public License.

  15. Evaluation of the autoregression time-series model for analysis of a noisy signal

    International Nuclear Information System (INIS)

    Allen, J.W.

    1977-01-01

    The autoregression (AR) time-series model of a continuous noisy signal was statistically evaluated to determine quantitatively the uncertainties of the model order, the model parameters, and the model's power spectral density (PSD). The result of such a statistical evaluation enables an experimenter to decide whether an AR model can adequately represent a continuous noisy signal and be consistent with the signal's frequency spectrum, and whether it can be used for on-line monitoring. Although evaluations of other types of signals have been reported in the literature, no direct reference has been found to an AR model's uncertainties for continuous noisy signals; yet the evaluation is necessary to decide the usefulness of AR models of typical reactor signals (e.g., neutron detector output or thermocouple output) and the potential of AR models for on-line monitoring applications. AR and other time-series models for noisy data representation are being investigated by others since such models require fewer parameters than the traditional PSD model. For this study, the AR model was selected for its simplicity and conduciveness to uncertainty analysis, and controlled laboratory bench signals were used for continuous noisy data. (author)
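The AR parameter estimation being evaluated can be sketched via the Yule-Walker equations on a synthetic AR(2) signal; the coefficients, sample size, and estimator details below are illustrative, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic AR(2) "noisy signal": y[t] = a1*y[t-1] + a2*y[t-2] + e[t]
a_true = np.array([0.75, -0.5])          # a stationary parameter choice
n = 20000
y = np.zeros(n)
e = rng.standard_normal(n)
for t in range(2, n):
    y[t] = a_true[0] * y[t - 1] + a_true[1] * y[t - 2] + e[t]

# Yule-Walker estimation from (biased) sample autocovariances.
def acov(y, lag):
    y0 = y - y.mean()
    return np.dot(y0[:len(y0) - lag], y0[lag:]) / len(y0)

p = 2
r = np.array([acov(y, k) for k in range(p + 1)])
R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
a_hat = np.linalg.solve(R, r[1:p + 1])   # solves R a = [r1, r2]

# The model PSD then follows from the estimated coefficients:
# S(f) = sigma^2 / |1 - a1 e^{-i 2 pi f} - a2 e^{-i 4 pi f}|^2
```

Repeating the fit over independent realizations gives empirical uncertainties for `a_hat` and the implied PSD, which is the kind of evaluation the record describes.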

  16. Estimating the number of sources in a noisy convolutive mixture using BIC

    DEFF Research Database (Denmark)

    Olsson, Rasmus Kongsgaard; Hansen, Lars Kai

    2004-01-01

    The number of source signals in a noisy convolutive mixture is determined based on the exact log-likelihoods of the candidate models. In (Olsson and Hansen, 2004), a novel probabilistic blind source separator was introduced that is based solely on the time-varying second-order statistics of the s......
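Model selection by BIC, as used in the record above, can be illustrated generically; the example below scores polynomial fits rather than convolutive source models, and all data and parameters are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data generated by a quadratic; candidate models of increasing order.
n = 500
x = np.linspace(-1, 1, n)
y = 1.0 + 2.0 * x - 3.0 * x**2 + 0.3 * rng.standard_normal(n)

bics = {}
for degree in range(1, 6):
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    rss = float(resid @ resid)
    k = degree + 1                              # number of fitted parameters
    # Gaussian-residual BIC: n*log(RSS/n) + k*log(n); the second term
    # penalises model complexity so that overfitting raises the score.
    bics[degree] = n * np.log(rss / n) + k * np.log(n)

best = min(bics, key=bics.get)                  # BIC-selected model order
```

In the source-counting setting the same criterion compares log-likelihoods of separator models with different assumed numbers of sources.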

  17. Reduced modeling of signal transduction – a modular approach

    Directory of Open Access Journals (Sweden)

    Ederer Michael

    2007-09-01

    Full Text Available Abstract Background Combinatorial complexity is a challenging problem in detailed and mechanistic mathematical modeling of signal transduction. This subject has been discussed intensively and a lot of progress has been made within the last few years. A software tool (BioNetGen) was developed which allows an automatic rule-based set-up of mechanistic model equations. In many cases these models can be reduced by an exact domain-oriented lumping technique. However, the resulting models can still consist of a very large number of differential equations. Results We introduce a new reduction technique, which allows building modularized and highly reduced models. Compared to existing approaches, further reduction of signal transduction networks is possible. The method also provides a new modularization criterion, which allows the model to be dissected into smaller modules, called layers, that can be modeled independently. Hallmarks of the approach are conservation relations within each layer and connection of layers by signal flows instead of mass flows. The reduced model can be formulated directly without previous generation of detailed model equations. It can be understood and interpreted intuitively, as model variables are macroscopic quantities that are converted by rates following simple kinetics. The proposed technique is applicable without using complex mathematical tools and even without detailed knowledge of the mathematical background. However, we provide a detailed mathematical analysis to show performance and limitations of the method. For physiologically relevant parameter domains the transient as well as the stationary errors caused by the reduction are negligible. Conclusion The new layer based reduced modeling method allows building modularized and strongly reduced models of signal transduction networks. Reduced model equations can be directly formulated and are intuitively interpretable. Additionally, the method provides very good

  18. Mathematical Models Light Up Plant Signaling

    NARCIS (Netherlands)

    Chew, Y.H.; Smith, R.W.; Jones, H.J.; Seaton, D.D.; Grima, R.; Halliday, K.J.

    2014-01-01

    Plants respond to changes in the environment by triggering a suite of regulatory networks that control and synchronize molecular signaling in different tissues, organs, and the whole plant. Molecular studies through genetic and environmental perturbations, particularly in the model plant Arabidopsis

  19. Balmorel open source energy system model

    DEFF Research Database (Denmark)

    Wiese, Frauke; Bramstoft, Rasmus; Koduvere, Hardi

    2018-01-01

    As the world progresses towards a cleaner energy future with more variable renewable energy sources, energy system models are required to deal with new challenges. This article describes design, development and applications of the open source energy system model Balmorel, which is a result...... of a long and fruitful cooperation between public and private institutions within energy system research and analysis. The purpose of the article is to explain the modelling approach, to highlight strengths and challenges of the chosen approach, to create awareness about the possible applications...... of Balmorel as well as to inspire to new model developments and encourage new users to join the community. Some of the key strengths of the model are the flexible handling of the time and space dimensions and the combination of operation and investment optimisation. Its open source character enables diverse...

  20. Extraction of Point Source Gamma Signals from Aerial Survey Data Taken over a Las Vegas Nevada, Residential Area

    International Nuclear Information System (INIS)

    Thane J. Hendricks

    2007-01-01

    Detection of point-source gamma signals from aerial measurements is complicated by widely varying terrestrial gamma backgrounds, since these variations frequently resemble signals from point sources. Spectral stripping techniques have been very useful in separating man-made and natural radiation contributions which exist on Energy Research and Development Administration (ERDA) plant sites and other like facilities. However, these facilities are generally situated in desert areas or otherwise flat terrain with few man-made structures to disturb the natural background. It is of great interest to determine if the stripping technique can be successfully applied in populated areas where numerous man-made disturbances (houses, streets, yards, vehicles, etc.) exist.

  1. Pseudo-dynamic source modelling with 1-point and 2-point statistics of earthquake source parameters

    KAUST Repository

    Song, S. G.

    2013-12-24

    Ground motion prediction is an essential element in seismic hazard and risk analysis. Empirical ground motion prediction approaches have been widely used in the community, but efficient simulation-based ground motion prediction methods are needed to complement empirical approaches, especially in the regions with limited data constraints. Recently, dynamic rupture modelling has been successfully adopted in physics-based source and ground motion modelling, but it is still computationally demanding and many input parameters are not well constrained by observational data. Pseudo-dynamic source modelling keeps the form of kinematic modelling with its computational efficiency, but also tries to emulate the physics of source process. In this paper, we develop a statistical framework that governs the finite-fault rupture process with 1-point and 2-point statistics of source parameters in order to quantify the variability of finite source models for future scenario events. We test this method by extracting 1-point and 2-point statistics from dynamically derived source models and simulating a number of rupture scenarios, given target 1-point and 2-point statistics. We propose a new rupture model generator for stochastic source modelling with the covariance matrix constructed from target 2-point statistics, that is, auto- and cross-correlations. Our sensitivity analysis of near-source ground motions to 1-point and 2-point statistics of source parameters provides insights into relations between statistical rupture properties and ground motions. We observe that larger standard deviation and stronger correlation produce stronger peak ground motions in general. The proposed new source modelling approach will contribute to understanding the effect of earthquake source on near-source ground motion characteristics in a more quantitative and systematic way.
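The core step of the proposed rupture model generator, drawing random source realisations whose covariance matrix is constructed from target two-point statistics, can be sketched with a Cholesky factorization; the exponential correlation kernel, grid, and 1-point statistics below are illustrative assumptions.

```python
import numpy as np

# Target two-point statistic: exponential auto-correlation with length L.
n = 64                                   # points along strike (illustrative)
dx = 1.0                                 # grid spacing
L = 5.0                                  # correlation length
d = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) * dx
C = np.exp(-d / L)                       # correlation matrix from the 2-point statistic

# Cholesky factor turns uncorrelated draws into correlated ones.
Lchol = np.linalg.cholesky(C + 1e-10 * np.eye(n))   # jitter for stability
rng = np.random.default_rng(7)
slip = 1.0 + 0.3 * (Lchol @ rng.standard_normal(n)) # 1-point stats: mean 1.0, std 0.3
```

A stronger correlation (larger `L`) produces smoother slip realisations, which is the kind of dependence the sensitivity analysis in the record quantifies against ground motion.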

  2. Learning sparse generative models of audiovisual signals

    OpenAIRE

    Monaci, Gianluca; Sommer, Friedrich T.; Vandergheynst, Pierre

    2008-01-01

    This paper presents a novel framework to learn sparse representations for audiovisual signals. An audiovisual signal is modeled as a sparse sum of audiovisual kernels. The kernels are bimodal functions made of synchronous audio and video components that can be positioned independently and arbitrarily in space and time. We design an algorithm capable of learning sets of such audiovisual, synchronous, shift-invariant functions by alternatingly solving a coding and a learning pr...

  3. PATHLOGIC-S: a scalable Boolean framework for modelling cellular signalling.

    Directory of Open Access Journals (Sweden)

    Liam G Fearnley

    Full Text Available Curated databases of signal transduction have grown to describe several thousand reactions, and efficient use of these data requires the development of modelling tools to elucidate and explore system properties. We present PATHLOGIC-S, a Boolean specification for a signalling model, with its associated GPL-licensed implementation using integer programming techniques. The PATHLOGIC-S specification has been designed to function on current desktop workstations, and is capable of providing analyses on some of the largest currently available datasets through use of Boolean modelling techniques to generate predictions of stable and semi-stable network states from data in community file formats. PATHLOGIC-S also addresses major problems associated with the presence and modelling of inhibition in Boolean systems, and reduces logical incoherence due to common inhibitory mechanisms in signalling systems. We apply this approach to signal transduction networks including Reactome and two pathways from the Panther Pathways database, and present the results of computations on each along with a discussion of execution time. A software implementation of the framework and model is freely available under a GPL license.
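
The stable-state computation that such Boolean frameworks perform can be illustrated at toy scale by exhaustive enumeration of synchronous fixed points. The three-node network and its update rules below (including one inhibitory edge) are invented for illustration; PATHLOGIC-S itself uses integer programming to scale far beyond this.

```python
from itertools import product

# Toy 3-node signalling network with an inhibitory edge (C inhibits A).
# Update rules are hypothetical, for illustration only.
rules = {
    "A": lambda s: s["A"] and not s["C"],   # C inhibits A
    "B": lambda s: s["A"],                  # A activates B
    "C": lambda s: s["B"],                  # B activates C
}

def stable_states(rules):
    """Exhaustively enumerate states that are fixed points of the
    synchronous Boolean update (every node keeps its value)."""
    nodes = sorted(rules)
    fixed = []
    for bits in product([False, True], repeat=len(nodes)):
        state = dict(zip(nodes, bits))
        if all(rules[n](state) == state[n] for n in nodes):
            fixed.append(state)
    return fixed

print(stable_states(rules))
```

Here the inhibitory edge leaves the all-off state as the only fixed point, a small example of how inhibition shapes the set of stable network states.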

  4. Multiple Signal Classification Algorithm Based Electric Dipole Source Localization Method in an Underwater Environment

    Directory of Open Access Journals (Sweden)

    Yidong Xu

    2017-10-01

    Full Text Available A novel localization method based on the multiple signal classification (MUSIC) algorithm is proposed for positioning an electric dipole source in a confined underwater environment by using an electric dipole receiving antenna array. In this method, the boundary element method (BEM) is introduced to analyze the boundary of the confined region by use of a matrix equation. The voltage of each dipole pair is used as the spatial-temporal localization data; unlike the conventional field-based localization method, it does not require the field component in each direction, and so it can be easily implemented in practical engineering applications. Then, a global-multiple region-conjugate gradient (CG) hybrid search method is used to reduce the computation burden and to improve the operation speed. Two localization simulation models and a physical experiment are conducted. Both the simulation results and the physical experiment result show accurate positioning performance, which helps to verify the effectiveness of the proposed localization method in underwater environments.
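
For readers unfamiliar with MUSIC itself, a minimal narrowband version for a uniform linear array conveys the subspace idea: project candidate steering vectors onto the noise subspace of the sample covariance and peak-pick the pseudospectrum. This generic sketch omits the paper's BEM boundary treatment and CG hybrid search; the array geometry and noise levels are made up.

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg, d=0.5):
    """MUSIC pseudospectrum for a uniform linear array.
    X: (n_sensors, n_snapshots) complex snapshots; d: spacing in wavelengths."""
    n = X.shape[0]
    R = X @ X.conj().T / X.shape[1]           # sample covariance
    eigval, eigvec = np.linalg.eigh(R)        # ascending eigenvalues
    En = eigvec[:, : n - n_sources]           # noise subspace
    P = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * d * np.arange(n) * np.sin(th))  # steering vector
        P.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(P)

# Simulate one narrowband source at +20 degrees on an 8-element array
rng = np.random.default_rng(1)
n, snaps, true_deg = 8, 200, 20.0
a = np.exp(2j * np.pi * 0.5 * np.arange(n) * np.sin(np.deg2rad(true_deg)))
s = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
X = np.outer(a, s) + 0.1 * (rng.standard_normal((n, snaps))
                            + 1j * rng.standard_normal((n, snaps)))
grid = np.arange(-90, 90.5, 0.5)
est = grid[np.argmax(music_spectrum(X, 1, grid))]
```

The pseudospectrum peaks where the steering vector is nearly orthogonal to the noise subspace, recovering the source direction.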

  5. Semiconductor Modeling For Simulating Signal, Power, and Electromagnetic Integrity

    CERN Document Server

    Leventhal, Roy

    2006-01-01

    Assists engineers in designing high-speed circuits. The emphasis is on semiconductor modeling, with PCB transmission line effects, equipment enclosure effects, and other modeling issues discussed as needed. This text addresses practical considerations, including process variation, model accuracy, validation and verification, and signal integrity.

  6. Wires in the soup: quantitative models of cell signaling

    Science.gov (United States)

    Cheong, Raymond; Levchenko, Andre

    2014-01-01

    Living cells are capable of extracting information from their environments and mounting appropriate responses to a variety of associated challenges. The underlying signal transduction networks enabling this can be quite complex, and their unraveling necessitates sophisticated computational modeling coupled with precise experimentation. Although we are still at the beginning of this process, some recent examples of integrative analysis of cell signaling are very encouraging. This review highlights the case of the NF-κB pathway in order to illustrate how a quantitative model of a signaling pathway can be gradually constructed through continuous experimental validation, and what lessons one might learn from such exercises. PMID:18291655

  7. Quantitative Models of Imperfect Deception in Network Security using Signaling Games with Evidence

    OpenAIRE

    Pawlick, Jeffrey; Zhu, Quanyan

    2017-01-01

    Deception plays a critical role in many interactions in communication and network security. Game-theoretic models called "cheap talk signaling games" capture the dynamic and information asymmetric nature of deceptive interactions. But signaling games inherently model undetectable deception. In this paper, we investigate a model of signaling games in which the receiver can detect deception with some probability. This model nests traditional signaling games and complete information Stackelberg ...

  8. Exhaustively characterizing feasible logic models of a signaling network using Answer Set Programming.

    Science.gov (United States)

    Guziolowski, Carito; Videla, Santiago; Eduati, Federica; Thiele, Sven; Cokelaer, Thomas; Siegel, Anne; Saez-Rodriguez, Julio

    2013-09-15

    Logic modeling is a useful tool to study signal transduction across multiple pathways. Logic models can be generated by training a network containing the prior knowledge to phospho-proteomics data. The training can be performed using stochastic optimization procedures, but these are unable to guarantee a global optimum or to report the complete family of feasible models. This, however, is essential to provide precise insight into the mechanisms underlying signal transduction and to generate reliable predictions. We propose the use of Answer Set Programming to explore exhaustively the space of feasible logic models. Toward this end, we have developed caspo, an open-source Python package that provides a powerful platform to learn and characterize logic models by leveraging the rich modeling language and solving technologies of Answer Set Programming. We illustrate the usefulness of caspo by revisiting a model of pro-growth and inflammatory pathways in liver cells. We show that, if experimental error is taken into account, there are thousands (11 700) of models compatible with the data. Despite the large number, we can extract structural features from the models, such as links that are always (or never) present or modules that appear in a mutually exclusive fashion. To further characterize this family of models, we investigate the input-output behavior of the models. We find 91 behaviors across the 11 700 models and we suggest new experiments to discriminate among them. Our results underscore the importance of characterizing in a global and exhaustive manner the family of feasible models, with important implications for experimental design. caspo is freely available for download (license GPLv3) and as a web service at http://caspo.genouest.org/. Supplementary materials are available at Bioinformatics online. santiago.videla@irisa.fr.
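
The "family of feasible models" idea is easy to demonstrate in miniature: enumerate every candidate logic function and keep all of those within an error tolerance of the data, rather than a single optimum. The two-input example below is invented; caspo does this with Answer Set Programming over full networks.

```python
from itertools import product

# Observed (stimulus1, stimulus2 -> readout) data, with one possibly
# erroneous row; values and tolerance are made up for illustration.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1),
        ((1, 1), 0)]           # last row contradicts AND: measurement error
tolerance = 1                  # allow at most one mismatching observation

def feasible_models(data, tolerance):
    """Exhaustively test all 16 two-input Boolean functions (encoded as
    4-bit truth tables) and keep those within the error tolerance."""
    feasible = []
    for table in product([0, 1], repeat=4):
        mismatches = sum(table[2 * i1 + i2] != out
                         for (i1, i2), out in data)
        if mismatches <= tolerance:
            feasible.append(table)
    return feasible

models = feasible_models(data, tolerance)
```

With the tolerance, both the AND gate and the constant-0 function survive, a tiny analogue of the thousands of data-compatible models the paper reports once experimental error is admitted.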

  9. Constraints on equivalent elastic source models from near-source data

    International Nuclear Information System (INIS)

    Stump, B.

    1993-01-01

    A phenomenologically based seismic source model is important in quantifying the important physical processes that affect the observed seismic radiation in the linear-elastic regime. Representations such as these were used to assess yield effects on seismic waves under a Threshold Test Ban Treaty and to help transport seismic coupling experience at one test site to another. These same characterizations in a non-proliferation environment find applications in understanding the generation of the different types of body and surface waves from nuclear explosions, single chemical explosions, arrays of chemical explosions used in mining, rock bursts and earthquakes. Seismologists typically begin with an equivalent elastic representation of the source which when convolved with the propagation path effects produces a seismogram. The Representation Theorem replaces the true source with an equivalent set of body forces, boundary conditions or initial conditions. An extension of this representation shows the equivalence of the body forces, boundary conditions and initial conditions and replaces the source with a set of force moments, the first degree moment tensor for a point source representation. The difficulty with this formulation, which can completely describe the observed waveforms when the propagation path effects are known, is in the physical interpretation of the actual physical processes acting in the source volume. Observational data from within the source region, where processes are often nonlinear, linked to numerical models of the important physical processes in this region are critical to a unique physical understanding of the equivalent elastic source function.

  10. Eco-physiological role of root-sourced signal in three genotypes of spring wheat cultivars: a cue of evolution

    International Nuclear Information System (INIS)

    Liu, X.; Kong, H.Y.; Sun, G.J.; Cheng, Z.G.; Batool, A.; Jiang, H.M.

    2014-01-01

    Non-hydraulic root-sourced signal (nHRS) is so far affirmed to be a unique and positive early-warning response of plants to drying soil, but its functional role and potential evolutionary implications are little known in dryland wheat. Three spring wheat cultivars, Monkhead (1940-1960s), Dingxi 24 (1970-1980s) and Longchun 8139 (1990-present), with different drought sensitivity were chosen as materials for the research. Physiological and agronomic parameters were measured and analyzed in two relatively separate but closely related trials under environment-controlled conditions. The results showed that the characteristics of nHRS and its eco-physiological effects varied among cultivars. Threshold ranges (TR) of soil moisture at which nHRS was switched on and off were 60.1-51.4% (% of FWC) in Monkhead, 63.8-47.3% in Dingxi 24 and 66.5-44.8% in Longchun 8139, respectively, suggesting that earlier onset of nHRS took place in modern cultivars. Leaf abscisic acid (ABA) concentration was significantly greater and increased more rapidly in the old cultivars, Monkhead and Dingxi 24, than in Longchun 8139 during the operation of nHRS. As a result of nHRS regulation, the maintenance rate of grain yield was 43.4%, 60.8% and 79.3%, and water use efficiency was 1.47, 1.65 and 2.25 g/L in Monkhead, Dingxi 24 and Longchun 8139, respectively. In addition, drought susceptibility indices were 0.8858, 0.6037 and 0.3182 for the three cultivars, respectively. This suggests that an earlier trigger of nHRS led to lower ABA-led signal intensity and better drought adaptability. It can be argued that the advances in yield performance and drought tolerance might have been made by targeted selection for an earlier onset of nHRS. Finally, we attempted to develop a conceptual model regarding root-sourced signal weakening and its evolutionary cue in dryland wheat. (author)

  11. Stochastic Modelling as a Tool for Seismic Signals Segmentation

    Directory of Open Access Journals (Sweden)

    Daniel Kucharczyk

    2016-01-01

    Full Text Available In order to model nonstationary real-world processes one can find an appropriate theoretical model with properties following the analyzed data. However, in this case many trajectories of the analyzed process are required. Alternatively, one can extract parts of the signal that have a homogenous structure via segmentation. Proper segmentation can lead to extraction of important features of the analyzed phenomena that cannot be described without segmentation. There is no one universal method that can be applied to all phenomena; thus novel methods should be invented for specific cases. They might address the specific character of the signal in different domains (time, frequency, time-frequency, etc.). In this paper we propose two novel segmentation methods that take into consideration the stochastic properties of the analyzed signals in the time domain. Our research is motivated by the analysis of vibration signals acquired in an underground mine. In such signals we observe seismic events which appear after mining activity, such as blasting and provoked relaxation of rock, and some unexpected events, such as natural rock bursts. The proposed segmentation procedures allow for extraction of those parts of the analyzed signals which are related to the mentioned events.
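
A minimal time-domain example of variance-driven segmentation, one of the simplest stochastic properties such methods can exploit, is a moving-window variance detector. The signal, window length and threshold factor below are invented, and the paper's actual procedures are more sophisticated.

```python
import numpy as np

def segment_by_variance(x, win, factor):
    """Flag windows where the local (moving-window) variance exceeds
    `factor` times the median window variance."""
    var = np.array([x[i:i + win].var() for i in range(len(x) - win + 1)])
    mask = var > factor * np.median(var)
    return mask, var

# Synthetic record: background noise with a burst ("seismic event") inside
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
x[900:1100] += 6.0 * rng.standard_normal(200)   # high-energy event
mask, var = segment_by_variance(x, win=100, factor=4.0)
onset = int(np.argmax(mask))                     # first flagged window
```

The detector flags the windows overlapping the burst while leaving the stationary background unflagged, segmenting the event for further analysis.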

  12. Signal classification using global dynamical models, Part I: Theory

    International Nuclear Information System (INIS)

    Kadtke, J.; Kremliovsky, M.

    1996-01-01

    Detection and classification of signals is one of the principal areas of signal processing, and the utilization of nonlinear information has long been considered as a way of improving performance beyond standard linear (e.g. spectral) techniques. Here, we develop a method for using global models of chaotic dynamical systems theory to define a signal classification processing chain, which is sensitive to nonlinear correlations in the data. We use it to demonstrate classification in high noise regimes (negative SNR), and argue that classification probabilities can be directly computed from ensemble statistics in the model coefficient space. We also develop a modification for non-stationary signals (i.e. transients) using non-autonomous ODEs. In Part II of this paper, we demonstrate the analysis on actual open ocean acoustic data from marine biologics. copyright 1996 American Institute of Physics
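
The idea of classifying in model-coefficient space can be illustrated with a simple global model: fit dx/dt as a polynomial in x by least squares and use the coefficient vector as the feature. The logistic test signal and cubic basis below are assumptions for illustration, not the paper's chaotic-systems models.

```python
import numpy as np

def global_model_coeffs(x, dt):
    """Fit the global model dx/dt = c0 + c1*x + c2*x^2 + c3*x^3 by least
    squares; the coefficient vector serves as a feature for classification."""
    dxdt = np.gradient(x, dt)
    A = np.column_stack([np.ones_like(x), x, x**2, x**3])
    coeffs, *_ = np.linalg.lstsq(A, dxdt, rcond=None)
    return coeffs

# Logistic signal with dx/dt = r*x*(1-x): expect c1 ~ r, c2 ~ -r, c3 ~ 0
r, dt = 1.5, 0.001
t = np.arange(0, 6, dt)
x = 0.05 / (0.05 + (1 - 0.05) * np.exp(-r * t))  # exact logistic solution
c = global_model_coeffs(x, dt)
```

Signals generated by different dynamics land in different regions of the coefficient space, which is what makes ensemble statistics there usable for classification.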

  13. Analysis and logical modeling of biological signaling transduction networks

    Science.gov (United States)

    Sun, Zhongyao

    The study of network theory and its application spans a multitude of seemingly disparate fields of science and technology: computer science, biology, social science, linguistics, etc. It is the intrinsic similarities embedded in the entities and the way they interact with one another in these systems that link them together. In this dissertation, I present, from both the aspect of theoretical analysis and the aspect of application, three projects, which primarily focus on signal transduction networks in biology. In these projects, I assembled a network model through extensively perusing literature, performed model-based simulations and validation, analyzed network topology, and proposed a novel network measure. The application of network modeling to the system of stomatal opening in plants revealed a fundamental question about the process that has been left unanswered for decades. The novel measure of the redundancy of signal transduction networks with Boolean dynamics, by calculating the maximum node-independent elementary signaling mode set, accurately predicts the effect of single node knockout in such signaling processes. The three projects as an organic whole advance the understanding of a real system as well as the behavior of such network models, giving me an opportunity to take a glimpse at the dazzling facets of the immense world of network science.

  14. Bayesian model selection of template forward models for EEG source reconstruction.

    Science.gov (United States)

    Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan

    2014-06-01

    Several EEG source reconstruction techniques have been proposed to identify the generating neuronal sources of electrical activity measured on the scalp. The solution of these techniques depends directly on the accuracy of the forward model that is inverted. Recently, a parametric empirical Bayesian (PEB) framework for distributed source reconstruction in EEG/MEG was introduced and implemented in the Statistical Parametric Mapping (SPM) software. The framework allows us to compare different forward modeling approaches, using real data, instead of using more traditional simulated data from an assumed true forward model. In the absence of a subject-specific MR image, a 3-layered boundary element method (BEM) template head model is currently used including a scalp, skull and brain compartment. In this study, we introduced volumetric template head models based on the finite difference method (FDM). We constructed a FDM head model equivalent to the BEM model and an extended FDM model including CSF. These models were compared within the context of three different types of source priors related to the type of inversion used in the PEB framework: independent and identically distributed (IID) sources, equivalent to classical minimum norm approaches, coherence (COH) priors similar to methods such as LORETA, and multiple sparse priors (MSP). The resulting models were compared based on ERP data of 20 subjects using Bayesian model selection for group studies. The reconstructed activity was also compared with the findings of previous studies using functional magnetic resonance imaging. We found very strong evidence in favor of the extended FDM head model with CSF and assuming MSP. These results suggest that the use of realistic volumetric forward models can improve PEB EEG source reconstruction. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. A simple statistical signal loss model for deep underground garage

    DEFF Research Database (Denmark)

    Nguyen, Huan Cong; Gimenez, Lucas Chavarria; Kovacs, Istvan

    2016-01-01

    In this paper we address the channel modeling aspects for a deep-indoor scenario with extreme coverage conditions in terms of signal losses, namely underground garage areas. We provide an in-depth analysis in terms of path loss (gain) and large-scale signal shadowing, and propose a simple propagation model which can be used to predict cellular signal levels in similar deep-indoor scenarios. The proposed frequency-independent floor attenuation factor (FAF) is shown to be in the range of 5.2 dB per meter of depth.
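
Interpreting the reported figure, a deep-indoor prediction might combine an outdoor path-loss estimate with a linear-in-depth FAF term. The linear form and the example numbers below are assumptions; only the 5.2 dB/m figure is taken from the abstract.

```python
def deep_indoor_loss_db(outdoor_loss_db, depth_m, faf_db_per_m=5.2):
    """Predicted signal loss for a deep-indoor point: outdoor path loss
    plus a frequency-independent floor attenuation factor (FAF) that is
    assumed to grow linearly with depth, using the ~5.2 dB/m figure."""
    return outdoor_loss_db + faf_db_per_m * depth_m

# Example: a point 6 m below ground with 100 dB outdoor path loss
total = deep_indoor_loss_db(100.0, 6.0)   # 100 + 5.2 * 6 = 131.2 dB
```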

  16. Network modeling reveals prevalent negative regulatory relationships between signaling sectors in Arabidopsis immune signaling.

    Directory of Open Access Journals (Sweden)

    Masanao Sato

    Full Text Available Biological signaling processes may be mediated by complex networks in which network components and network sectors interact with each other in complex ways. Studies of complex networks benefit from approaches in which the roles of individual components are considered in the context of the network. The plant immune signaling network, which controls inducible responses to pathogen attack, is such a complex network. We studied the Arabidopsis immune signaling network upon challenge with a strain of the bacterial pathogen Pseudomonas syringae expressing the effector protein AvrRpt2 (Pto DC3000 AvrRpt2). This bacterial strain feeds multiple inputs into the signaling network, allowing many parts of the network to be activated at once. mRNA profiles for 571 immune response genes of 22 Arabidopsis immunity mutants and wild type were collected 6 hours after inoculation with Pto DC3000 AvrRpt2. The mRNA profiles were analyzed as detailed descriptions of changes in the network state resulting from the genetic perturbations. Regulatory relationships among the genes corresponding to the mutations were inferred by recursively applying a non-linear dimensionality reduction procedure to the mRNA profile data. The resulting static network model accurately predicted 23 of 25 regulatory relationships reported in the literature, suggesting that predictions of novel regulatory relationships are also accurate. The network model revealed two striking features: (i) the components of the network are highly interconnected; and (ii) negative regulatory relationships are common between signaling sectors. Complex regulatory relationships, including a novel negative regulatory relationship between the early microbe-associated molecular pattern-triggered signaling sectors and the salicylic acid sector, were further validated. We propose that prevalent negative regulatory relationships among the signaling sectors make the plant immune signaling network a "sector

  17. Modelling and Analysis of Biochemical Signalling Pathway Cross-talk

    Directory of Open Access Journals (Sweden)

    Robin Donaldson

    2010-02-01

    Full Text Available Signalling pathways are abstractions that help life scientists structure the coordination of cellular activity. Cross-talk between pathways accounts for many of the complex behaviours exhibited by signalling pathways and is often critical in producing the correct signal-response relationship. Formal models of signalling pathways and cross-talk in particular can aid understanding and drive experimentation. We define an approach to modelling based on the concept that a pathway is the (synchronising) parallel composition of instances of generic modules (with internal and external labels). Pathways are then composed by (synchronising) parallel composition and renaming; different types of cross-talk result from different combinations of synchronisation and renaming. We define a number of generic modules in PRISM and five types of cross-talk: signal flow, substrate availability, receptor function, gene expression and intracellular communication. We show that Continuous Stochastic Logic properties can both detect and distinguish the types of cross-talk. The approach is illustrated with small examples and an analysis of the cross-talk between the TGF-β/BMP, WNT and MAPK pathways.

  18. A new method for quantifying the performance of EEG blind source separation algorithms by referencing a simultaneously recorded ECoG signal.

    Science.gov (United States)

    Oosugi, Naoya; Kitajo, Keiichi; Hasegawa, Naomi; Nagasaka, Yasuo; Okanoya, Kazuo; Fujii, Naotaka

    2017-09-01

    Blind source separation (BSS) algorithms extract neural signals from electroencephalography (EEG) data. However, it is difficult to quantify source separation performance because there is no criterion to dissociate neural signals and noise in EEG signals. This study develops a method for evaluating BSS performance. The idea is that neural signals in EEG can be estimated by comparison with simultaneously measured electrocorticography (ECoG), because the ECoG electrodes cover the majority of the lateral cortical surface and should capture most of the original neural sources in the EEG signals. We measured real EEG and ECoG data and developed an algorithm for evaluating BSS performance. First, EEG signals are separated into EEG components using the BSS algorithm. Second, the EEG components are ranked using the correlation coefficients of the ECoG regression, and the components are grouped into subsets based on their ranks. Third, canonical correlation analysis estimates how much information is shared between the subsets of the EEG components and the ECoG signals. We used our algorithm to compare the performance of BSS algorithms (PCA, AMUSE, SOBI, JADE, fastICA) via the EEG and ECoG data of anesthetized nonhuman primates. The results (best case > JADE = fastICA > AMUSE = SOBI ≥ PCA > random separation) were common to the two subjects. To encourage the further development of better BSS algorithms, our EEG and ECoG data are available on our Web site (http://neurotycho.org/) as a common testing platform. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
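
The second step, ranking separated components against a reference recording, reduces in its simplest form to correlation scoring. The sketch below uses plain Pearson correlation with a synthetic reference; the paper instead regresses against multichannel ECoG and then applies canonical correlation analysis.

```python
import numpy as np

def rank_components(components, reference):
    """Rank separated components by |Pearson correlation| with a
    reference signal (standing in for an ECoG regression target)."""
    scores = np.array([abs(np.corrcoef(c, reference)[0, 1])
                       for c in components])
    order = np.argsort(scores)[::-1]          # best first
    return order, scores

# Toy check: a noisy copy of the reference should rank first
rng = np.random.default_rng(0)
ref = np.sin(np.linspace(0, 20 * np.pi, 1000))
comps = np.vstack([rng.standard_normal(1000),              # pure noise
                   ref + 0.3 * rng.standard_normal(1000),  # neural-like
                   rng.standard_normal(1000)])             # pure noise
order, scores = rank_components(comps, ref)
```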

  19. AC Small Signal Modeling of PWM Y-Source Converter by Circuit Averaging and Averaged Switch Modeling Technique

    DEFF Research Database (Denmark)

    Forouzesh, Mojtaba; Siwakoti, Yam Prasad; Blaabjerg, Frede

    2016-01-01

    The magnetically coupled Y-source impedance network is a newly proposed structure with versatile features intended for various power converter applications, e.g. in renewable energy technologies. The voltage gain of the Y-source impedance network rises exponentially as a function of turns ratio, w...

  20. Bispectral pairwise interacting source analysis for identifying systems of cross-frequency interacting brain sources from electroencephalographic or magnetoencephalographic signals

    Science.gov (United States)

    Chella, Federico; Pizzella, Vittorio; Zappasodi, Filippo; Nolte, Guido; Marzetti, Laura

    2016-05-01

    Brain cognitive functions arise through the coordinated activity of several brain regions, which actually form complex dynamical systems operating at multiple frequencies. These systems often consist of interacting subsystems, whose characterization is of importance for a complete understanding of the brain interaction processes. To address this issue, we present a technique, namely the bispectral pairwise interacting source analysis (biPISA), for analyzing systems of cross-frequency interacting brain sources when multichannel electroencephalographic (EEG) or magnetoencephalographic (MEG) data are available. Specifically, the biPISA makes it possible to identify one or many subsystems of cross-frequency interacting sources by decomposing the antisymmetric components of the cross-bispectra between EEG or MEG signals, based on the assumption that interactions are pairwise. Thanks to the properties of the antisymmetric components of the cross-bispectra, biPISA is also robust to spurious interactions arising from mixing artifacts, i.e., volume conduction or field spread, which always affect EEG or MEG functional connectivity estimates. This method is an extension of the pairwise interacting source analysis (PISA), which was originally introduced for investigating interactions at the same frequency, to the study of cross-frequency interactions. The effectiveness of this approach is demonstrated in simulations for up to three interacting source pairs and for real MEG recordings of spontaneous brain activity. Simulations show that the performances of biPISA in estimating the phase difference between the interacting sources are affected by the increasing level of noise rather than by the number of the interacting subsystems. The analysis of real MEG data reveals an interaction between two pairs of sources of central mu and beta rhythms, localizing in the proximity of the left and right central sulci.

  1. Increasing signal-to-noise ratio of swept-source optical coherence tomography by oversampling in k-space

    Science.gov (United States)

    Nagib, Karim; Mezgebo, Biniyam; Thakur, Rahul; Fernando, Namal; Kordi, Behzad; Sherif, Sherif

    2018-03-01

    Optical coherence tomography systems suffer from noise that can reduce the ability to interpret reconstructed images correctly. We describe a method to increase the signal-to-noise ratio of swept-source optical coherence tomography (SS-OCT) using oversampling in k-space. Due to this oversampling, information redundancy is introduced in the measured interferogram that can be used to reduce white noise in the reconstructed A-scan. We applied our novel scaled nonuniform discrete Fourier transform to oversampled SS-OCT interferograms to reconstruct images of a salamander egg. The peak signal-to-noise ratio (PSNR) between the reconstructed images using interferograms sampled at 250 MS/s and 50 MS/s demonstrates that this oversampling increased the signal-to-noise ratio by 25.22 dB.
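
The redundancy argument can be checked numerically: averaging each group of N oversampled points leaves a slowly varying fringe intact while cutting white-noise power by N, for an expected SNR gain of 10·log10(N) dB. The sketch below uses simple group averaging of an ideal fringe, not the paper's scaled nonuniform DFT; all signal parameters are invented.

```python
import numpy as np

def snr_db(signal, noisy):
    """SNR in dB of a noisy copy of a known signal."""
    noise = noisy - signal
    return 10 * np.log10(np.sum(signal**2) / np.sum(noise**2))

rng = np.random.default_rng(0)
N = 5                                        # oversampling factor, e.g. 250/50
k = np.linspace(0, 4 * np.pi, 1000)          # base k-space grid
fringe_hi = np.cos(np.repeat(k, N))          # N-times oversampled interferogram
noisy_hi = fringe_hi + 0.5 * rng.standard_normal(fringe_hi.size)
averaged = noisy_hi.reshape(-1, N).mean(axis=1)  # collapse each group of N
gain_db = snr_db(np.cos(k), averaged) - snr_db(fringe_hi, noisy_hi)
```

With N = 5 the measured gain sits near 10·log10(5) ≈ 7 dB; the larger gain reported in the abstract reflects the paper's full reconstruction pipeline, not bare averaging.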

  2. Sequential Markov chain Monte Carlo filter with simultaneous model selection for electrocardiogram signal modeling.

    Science.gov (United States)

    Edla, Shwetha; Kovvali, Narayan; Papandreou-Suppappola, Antonia

    2012-01-01

    Constructing statistical models of electrocardiogram (ECG) signals, whose parameters can be used for automated disease classification, is of great importance in precluding manual annotation and providing prompt diagnosis of cardiac diseases. ECG signals consist of several segments with different morphologies (namely the P wave, QRS complex and the T wave) in a single heart beat, which can vary across individuals and diseases. Also, existing statistical ECG models exhibit a reliance upon obtaining a priori information from the ECG data by using preprocessing algorithms to initialize the filter parameters, or to define the user-specified model parameters. In this paper, we propose an ECG modeling technique using the sequential Markov chain Monte Carlo (SMCMC) filter that can perform simultaneous model selection, by adaptively choosing from different representations depending upon the nature of the data. Our results demonstrate the ability of the algorithm to track various types of ECG morphologies, including intermittently occurring ECG beats. In addition, we use the estimated model parameters as the feature set to classify between ECG signals with normal sinus rhythm and four different types of arrhythmia.

  3. Modelling Choice of Information Sources

    Directory of Open Access Journals (Sweden)

    Agha Faisal Habib Pathan

    2013-04-01

    Full Text Available This paper addresses the significance of traveller information sources including mono-modal and multimodal websites for travel decisions. The research follows a decision paradigm developed earlier, involving an information acquisition process for travel choices, and identifies the abstract characteristics of new information sources that deserve further investigation (e.g. by incorporating these in models and studying their significance in model estimation). A Stated Preference experiment is developed and the utility functions are formulated by expanding the travellers' choice set to include different combinations of sources of information. In order to study the underlying choice mechanisms, the resulting variables are examined in models based on different behavioural strategies, including utility maximisation and minimising the regret associated with the foregone alternatives. This research confirmed that RRM (Random Regret Minimisation) theory can fruitfully be used and can provide important insights for behavioural studies. The study also analyses the properties of travel planning websites and establishes a link between travel choices and the content, provenance, design, presence of advertisements, and presentation of information. The results indicate that travellers give particular credence to government-owned sources and put more importance on their own previous experiences than on any other single source of information. Information from multimodal websites is more influential than that on train-only websites. This in turn is more influential than information from friends, while information from coach-only websites is the least influential. A website with less search time, specific information on users' own criteria, and real time information is regarded as most attractive.
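
The regret-based choice rule (RRM) referred to above can be written down compactly: each alternative's systematic regret sums, over competing alternatives and attributes, a ln(1+exp(·)) comparison term, and the minimum-regret alternative is chosen. The attribute scores and taste weights below are entirely hypothetical.

```python
import numpy as np

def random_regret(X, beta):
    """Systematic regret of each alternative under RRM, using the
    ln(1+exp(.)) attribute-comparison form of the regret function."""
    n_alt = X.shape[0]
    R = np.zeros(n_alt)
    for i in range(n_alt):
        for j in range(n_alt):
            if j != i:
                # Regret from attributes where alternative j beats i
                R[i] += np.sum(np.log1p(np.exp(beta * (X[j] - X[i]))))
    return R

# Three hypothetical information sources scored on two attributes
# (columns: trustworthiness, information richness); numbers are made up.
X = np.array([[0.9, 0.4],    # government-owned website
              [0.6, 0.9],    # multimodal travel planner
              [0.3, 0.3]])   # friends' advice
beta = np.array([1.0, 1.0])  # attribute taste weights (assumed)
R = random_regret(X, beta)
choice = int(np.argmin(R))   # RRM picks the minimum-regret alternative
```

Unlike utility maximisation, regret depends on comparisons with every foregone alternative, so a balanced option can beat one that excels on a single attribute.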

  4. Modeling borehole microseismic and strain signals measured by a distributed fiber optic sensor

    Science.gov (United States)

    Mellors, R. J.; Sherman, C. S.; Ryerson, F. J.; Morris, J.; Allen, G. S.; Messerly, M. J.; Carr, T.; Kavousi, P.

    2017-12-01

    The advent of distributed fiber optic sensors installed in boreholes provides a new and data-rich perspective on the subsurface environment. This includes the long-term capability for vertical seismic profiles, monitoring of active borehole processes such as well stimulation, and measuring of microseismic signals. The distributed fiber sensor, which measures strain (or strain-rate), is an active sensor with highest sensitivity parallel to the fiber and subject to varying types of noise, both external and internal. We take a systems approach and include the response of the electronics, fiber/cable, and subsurface to improve interpretation of the signals. This aids in understanding noise sources, assessing error bounds on amplitudes, and developing appropriate algorithms for improving the image. Ultimately, a robust understanding will allow identification of areas for future improvement and possible optimization in fiber and cable design. The subsurface signals are simulated in two ways: 1) a massively parallel multi-physics code that is capable of modeling hydraulic stimulation of a heterogeneous reservoir with a pre-existing discrete fracture network, and 2) a parallelized 3D finite difference code for high-frequency seismic signals. Geometry and parameters for the simulations are derived from fiber deployments, including the Marcellus Shale Energy and Environment Laboratory (MSEEL) project in West Virginia. The combination mimics both the low-frequency strain signals generated during the fracture process and the high-frequency signals from microseismic events and perforation shots. Results are compared with available fiber data and demonstrate that quantitative interpretation of the fiber data provides valuable constraints on the fracture geometry and microseismic activity. These constraints appear difficult, if not impossible, to obtain otherwise.

  5. Wavelet modeling of signals for non-destructive testing of concretes

    International Nuclear Information System (INIS)

    Shao, Zhixue; Shi, Lihua; Cai, Jian

    2011-01-01

    In a non-destructive test of concrete structures, ultrasonic pulses are commonly used to detect damage or embedded objects from their reflections. A wavelet modeling method is proposed here to identify the main reflections and to remove the interferences in the detected ultrasonic waves. This method assumes that if the structure is stimulated by a wavelet function with good time–frequency localization ability, the detected signal is a combination of time-delayed and amplitude-attenuated wavelets. Therefore, modeling the detected signal by wavelets gives a straightforward and simple model of the original signal. The central time and amplitude of each wavelet represent the position and amplitude of the reflections in the detected structure. A signal processing method is also proposed to estimate the structure's response to wavelet excitation from its response to a high-voltage pulse with a sharp leading edge. A signal generation card with a compact peripheral component interconnect extension for instrumentation interface is designed to produce this high-voltage pulse. The proposed method is applied to synthetic aperture focusing technique imaging of concrete specimens, and the imaging results are provided.
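
    The delayed-and-attenuated wavelet model described above can be sketched numerically. In this illustration, the Ricker probing wavelet, the synthetic signal and the greedy correlation search are all assumptions standing in for the paper's actual excitation and estimation procedure; the point is only that each reflection's delay and amplitude can be recovered by repeatedly correlating the residual with the known wavelet:

```python
import numpy as np

def ricker(n, fs, f0):
    """Ricker (Mexican-hat) wavelet with n samples at rate fs, centre frequency f0."""
    t = (np.arange(n) - n // 2) / fs
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def matching_pursuit(signal, wavelet, n_reflections):
    """Greedy estimation of reflection delays and amplitudes."""
    residual = signal.copy()
    unit = wavelet / np.linalg.norm(wavelet)
    picks = []
    for _ in range(n_reflections):
        corr = np.correlate(residual, unit, mode="valid")
        k = int(np.argmax(np.abs(corr)))          # delay of strongest reflection
        amp = corr[k] / np.linalg.norm(wavelet)   # its amplitude in wavelet units
        residual[k:k + len(wavelet)] -= amp * wavelet
        picks.append((k, amp))
    return sorted(picks)

# Synthetic "detected" signal: two delayed, attenuated copies of the wavelet
fs, f0 = 1000.0, 50.0
w = ricker(101, fs, f0)
x = np.zeros(600)
x[100:201] += 1.0 * w
x[300:401] += 0.4 * w
picks = matching_pursuit(x, w, 2)
```

    With well-separated reflections, the recovered (delay, amplitude) pairs correspond directly to the positions and attenuations of the reflectors in the model.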

  6. Groundwater Pollution Source Identification using Linked ANN-Optimization Model

    Science.gov (United States)

    Ayaz, Md; Srivastava, Rajesh; Jain, Ashu

    2014-05-01

    Groundwater is the principal source of drinking water in several parts of the world. Contamination of groundwater has become a serious health and environmental problem today. Human activities, including industrial and agricultural activities, are generally responsible for this contamination. Identification of a groundwater pollution source is a major step in groundwater pollution remediation. Complete knowledge of the pollution source, in terms of its source characteristics, is essential to adopt an effective remediation strategy. A groundwater pollution source is said to be identified completely when the source characteristics - location, strength and release period - are known. Identification of an unknown groundwater pollution source is an ill-posed inverse problem. It becomes more difficult for real field conditions, when the lag time between the first reading at an observation well and the time at which the source becomes active is not known. We developed a linked ANN-Optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts: an optimization model and an ANN model. The decision variables of the linked ANN-Optimization model contain the source location and release period of the pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations, and then minimized to identify the pollution source parameters. The formulation of the objective function requires the lag time, which is not known. An ANN model with one hidden layer is trained using the Levenberg-Marquardt algorithm to find the lag time. Different combinations of source locations and release periods are used as inputs and the lag time is obtained as the output. Performance of the proposed model is evaluated for two- and three-dimensional cases with error-free and erroneous data. Erroneous data were generated by adding uniformly distributed random error (error level 0-10%) to the analytically computed concentrations.
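
    The core inverse step, minimizing a concentration-misfit objective over candidate source parameters, can be sketched as follows. The 1-D advection-dispersion slug solution, all parameter values and the brute-force grid search are illustrative assumptions; the paper itself uses a formal optimization model coupled with an ANN-predicted lag time:

```python
import numpy as np

def conc(x, t, x0, t0, m, v=1.0, d=0.5):
    """1-D advection-dispersion solution for an instantaneous slug release
    of mass m at location x0 and (unknown) release time t0."""
    tau = np.asarray(t, dtype=float) - t0
    c = np.zeros_like(tau)
    ok = tau > 0
    c[ok] = m / np.sqrt(4 * np.pi * d * tau[ok]) * np.exp(
        -(x - x0 - v * tau[ok]) ** 2 / (4 * d * tau[ok]))
    return c

# "Observed" breakthrough curve at a well 10 m downstream; the true source
# (x0 = 0, t0 = 2, m = 5) and all units are hypothetical
t_obs = np.linspace(1.0, 30.0, 60)
c_obs = conc(10.0, t_obs, 0.0, 2.0, 5.0)

# Least-squares grid search over source location, release time and mass
best = None
for x0 in np.linspace(-2, 2, 9):
    for t0 in np.linspace(0, 4, 9):
        for m in np.linspace(3, 7, 9):
            err = np.sum((conc(10.0, t_obs, x0, t0, m) - c_obs) ** 2)
            if best is None or err < best[0]:
                best = (err, x0, t0, m)
```

    With noise-free data the misfit vanishes at the true parameter combination; with erroneous data the minimum shifts, which is exactly the robustness question the paper evaluates.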

  7. A Bayesian test for periodic signals in red noise

    Science.gov (United States)

    Vaughan, S.

    2010-02-01

    Many astrophysical sources, especially compact accreting sources, show strong, random brightness fluctuations with broad power spectra in addition to periodic or quasi-periodic oscillations (QPOs) that have narrower spectra. The random nature of the dominant source of variance greatly complicates the process of searching for possible weak periodic signals. We have addressed this problem using the tools of Bayesian statistics; in particular, using Markov Chain Monte Carlo techniques to approximate the posterior distribution of model parameters, and posterior predictive model checking to assess model fits and search for periodogram outliers that may represent periodic signals. The methods developed are applied to two example data sets, both long XMM-Newton observations of highly variable Seyfert 1 galaxies: RE J1034 + 396 and Mrk 766. In both cases, a bend (or break) in the power spectrum is evident. In the case of RE J1034 + 396, the previously reported QPO is found but with somewhat weaker statistical significance than reported in previous analyses. The difference is due partly to the improved continuum modelling and better treatment of nuisance parameters, and partly to different data selection methods.
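
    The posterior predictive outlier test can be sketched in miniature. Here the continuum shape is assumed known and fixed (the paper instead marginalizes over continuum parameters with MCMC), and the test statistic is the largest periodogram-to-continuum ratio, compared against replicated periodograms drawn under the continuum-only model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bending power-law continuum on a grid of Fourier frequencies
f = np.linspace(0.01, 1.0, 200)
s = 1.0 / (1.0 + (f / 0.1) ** 2)          # assumed "true" spectral shape

# Observed periodogram: continuum times chi-squared(2)/2 scatter,
# plus an injected narrow QPO near f = 0.5
i_obs = s * rng.chisquare(2, f.size) / 2.0
i_obs[100] += 0.5

t_obs = np.max(2.0 * i_obs / s)           # test statistic: max of 2I/S

# Posterior predictive distribution of the statistic under the
# continuum-only model, where 2I/S ~ chi-squared(2) at each frequency
t_rep = np.array([np.max(rng.chisquare(2, f.size)) for _ in range(2000)])
p_value = np.mean(t_rep >= t_obs)
```

    A small predictive p-value flags the QPO bin as inconsistent with pure red-noise scatter; marginalizing over continuum parameters, as the paper does, generally weakens the significance relative to a fixed-continuum test like this one.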

  8. Validation of Nonlinear Bipolar Transistor Model by Small-Signal Measurements

    DEFF Research Database (Denmark)

    Vidkjær, Jens; Porra, V.; Zhu, J.

    1992-01-01

    A new method for the validity analysis of nonlinear transistor models is presented, based on DC and small-signal S-parameter measurements and realistic consideration of the measurement and de-embedding errors and singularities of the small-signal equivalent circuit. As an example, some analysis… results for an extended Gummel-Poon model are presented in the case of a UHF bipolar power transistor…

  9. THE SIGNAL APPROACH TO MODELLING THE BALANCE OF PAYMENT CRISIS

    Directory of Open Access Journals (Sweden)

    O. Chernyak

    2016-12-01

    The paper presents a synthesis of theoretical models of balance of payments crises and investigates the most effective ways to model such crises in Ukraine. For the mathematical formalization of balance of payments crises, a comparative analysis of the effectiveness of different calculation methods for the Exchange Market Pressure Index was performed. A set of indicators that signal the growing likelihood of a balance of payments crisis was defined using the signal approach. With the help of a minimization function, threshold values of the indicators were selected, the crossing of which signals an increase in the probability of a balance of payments crisis.
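
    The signal approach can be sketched on a synthetic Exchange Market Pressure (EMP) series. The data, the inverse-volatility weighting (one common Eichengreen-style convention; the paper compares several) and the mean-plus-1.5-sigma threshold are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical monthly series: exchange-rate depreciation (%), percentage
# change in foreign reserves, and change in the interest rate (pp)
de = rng.normal(0.5, 1.0, 120)
dr = rng.normal(0.0, 2.0, 120)
di = rng.normal(0.0, 0.5, 120)
de[100] = 8.0    # a crisis-like episode: sharp depreciation ...
dr[100] = -10.0  # ... accompanied by a large reserve loss

# Inverse-volatility weighting of the components into the EMP index
emp = de - (de.std() / dr.std()) * dr + (de.std() / di.std()) * di

# Signal approach: flag months where the index exceeds mean + 1.5 sigma
threshold = emp.mean() + 1.5 * emp.std()
signals = np.flatnonzero(emp > threshold)
```

    Threshold choice trades off missed crises against false alarms, which is what the minimization function in the paper optimizes over the candidate indicators.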

  10. Gap Acceptance Behavior Model for Non-signalized

    OpenAIRE

    Fajaruddin Bin Mustakim

    2015-01-01

    This paper presents field studies performed to determine the critical gap at non-signalized T-intersections on multiple rural roadways in Malaysia, using the Raff and logit methods. Critical gaps for passenger cars and motorcycles were determined. A fair number of studies have developed gap acceptance behavior models for passenger cars, but research on gap acceptance behavior models for motorcycles is still scarce. Thus, in this paper, logistic regression models were developed to p...

  11. DOA Estimation of Audio Sources in Reverberant Environments

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Nielsen, Jesper Kjær; Heusdens, Richard

    2016-01-01

    Reverberation is well known to have a detrimental impact on many localization methods for audio sources. We address this problem by imposing a model for the early reflections as well as a model for the audio source itself. Using these models, we propose two iterative localization methods… that estimate the direction-of-arrival (DOA) of both the direct path of the audio source and the early reflections. In these methods, the contribution of the early reflections is essentially subtracted from the signal observations before localization of the direct path component, which may reduce the estimation…

  12. X-33 Telemetry Best Source Selection, Processing, Display, and Simulation Model Comparison

    Science.gov (United States)

    Burkes, Darryl A.

    1998-01-01

    The X-33 program requires the use of multiple telemetry ground stations to cover the launch, ascent, transition, descent, and approach phases for the flights from Edwards AFB to landings at Dugway Proving Grounds, UT and Malmstrom AFB, MT. This paper will discuss the X-33 telemetry requirements and design, including information on fixed and mobile telemetry systems, best source selection, and support for Range Safety Officers. A best source selection system will be utilized to automatically determine the best source based on the frame synchronization status of the incoming telemetry streams. These systems will be used to select the best source at the landing sites and at NASA Dryden Flight Research Center to determine the overall best source between the launch site, intermediate sites, and landing site sources. The best source at the landing sites will be decommutated to display critical flight safety parameters for the Range Safety Officers. The overall best source will be sent to Lockheed Martin's Operational Control Center at Edwards AFB for performance monitoring by X-33 program personnel and for monitoring of critical flight safety parameters by the primary Range Safety Officer. The real-time telemetry data (received signal strength, etc.) from each of the primary ground stations will also be compared during each mission with simulation data generated using the Dynamic Ground Station Analysis software program. An overall assessment of the accuracy of the model will occur after each mission. Acknowledgment: The work described in this paper was NASA supported through cooperative agreement NCC8-115 with Lockheed Martin Skunk Works.
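
    Best source selection driven by frame-synchronization status can be sketched as follows. The station names, counters and the highest-sync-ratio rule are hypothetical simplifications; an operational selector would also weigh signal strength, data quality flags and switching hysteresis:

```python
from dataclasses import dataclass

@dataclass
class TelemetryStream:
    station: str
    frames_received: int
    frames_in_sync: int

def best_source(streams):
    """Select the stream with the highest frame-synchronization ratio."""
    def sync_ratio(s):
        return s.frames_in_sync / s.frames_received if s.frames_received else 0.0
    return max(streams, key=sync_ratio)

# Hypothetical one-second status snapshot from three ground stations
streams = [
    TelemetryStream("launch_site", 1000, 920),
    TelemetryStream("intermediate", 1000, 990),
    TelemetryStream("landing_site", 1000, 850),
]
best = best_source(streams)
```

    Running the same rule over the per-site winners yields the overall best source forwarded to the control center.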

  13. A realistic multimodal modeling approach for the evaluation of distributed source analysis: application to sLORETA

    Science.gov (United States)

    Cosandier-Rimélé, D.; Ramantani, G.; Zentner, J.; Schulze-Bonhage, A.; Dümpelmann, M.

    2017-10-01

    Objective. Electrical source localization (ESL) deriving from scalp EEG and, in recent years, from intracranial EEG (iEEG), is an established method in epilepsy surgery workup. We aimed to validate the distributed ESL derived from scalp EEG and iEEG, particularly regarding the spatial extent of the source, using a realistic epileptic spike activity simulator. Approach. ESL was applied to the averaged scalp EEG and iEEG spikes of two patients with drug-resistant structural epilepsy. The ESL results for both patients were used to outline the location and extent of epileptic cortical patches, which served as the basis for designing a spatiotemporal source model. EEG signals for both modalities were then generated for different anatomic locations and spatial extents. ESL was subsequently performed on simulated signals with sLORETA, a commonly used distributed algorithm. ESL accuracy was quantitatively assessed for iEEG and scalp EEG. Main results. The source volume was overestimated by sLORETA at both EEG scales, with the error increasing with source size, particularly for iEEG. For larger sources, ESL accuracy drastically decreased, and reconstruction volumes shifted to the center of the head for iEEG, while remaining stable for scalp EEG. Overall, the mislocalization of the reconstructed source was more pronounced for iEEG. Significance. We present a novel multiscale framework for the evaluation of distributed ESL, based on realistic multiscale EEG simulations. Our findings support that reconstruction results for scalp EEG are often more accurate than for iEEG, owing to the superior 3D coverage of the head. Particularly the iEEG-derived reconstruction results for larger, widespread generators should be treated with caution.

  14. A realistic multimodal modeling approach for the evaluation of distributed source analysis: application to sLORETA.

    Science.gov (United States)

    Cosandier-Rimélé, D; Ramantani, G; Zentner, J; Schulze-Bonhage, A; Dümpelmann, M

    2017-10-01

    Electrical source localization (ESL) deriving from scalp EEG and, in recent years, from intracranial EEG (iEEG), is an established method in epilepsy surgery workup. We aimed to validate the distributed ESL derived from scalp EEG and iEEG, particularly regarding the spatial extent of the source, using a realistic epileptic spike activity simulator. ESL was applied to the averaged scalp EEG and iEEG spikes of two patients with drug-resistant structural epilepsy. The ESL results for both patients were used to outline the location and extent of epileptic cortical patches, which served as the basis for designing a spatiotemporal source model. EEG signals for both modalities were then generated for different anatomic locations and spatial extents. ESL was subsequently performed on simulated signals with sLORETA, a commonly used distributed algorithm. ESL accuracy was quantitatively assessed for iEEG and scalp EEG. The source volume was overestimated by sLORETA at both EEG scales, with the error increasing with source size, particularly for iEEG. For larger sources, ESL accuracy drastically decreased, and reconstruction volumes shifted to the center of the head for iEEG, while remaining stable for scalp EEG. Overall, the mislocalization of the reconstructed source was more pronounced for iEEG. We present a novel multiscale framework for the evaluation of distributed ESL, based on realistic multiscale EEG simulations. Our findings support that reconstruction results for scalp EEG are often more accurate than for iEEG, owing to the superior 3D coverage of the head. Particularly the iEEG-derived reconstruction results for larger, widespread generators should be treated with caution.

  15. Tsunami Source Modeling of the 2015 Volcanic Tsunami Earthquake near Torishima, South of Japan

    Science.gov (United States)

    Sandanbata, O.; Watada, S.; Satake, K.; Fukao, Y.; Sugioka, H.; Ito, A.; Shiobara, H.

    2017-12-01

    An abnormal earthquake occurred at a submarine volcano named Smith Caldera, near Torishima Island on the Izu-Bonin arc, on May 2, 2015. The earthquake, which hereafter we call "the 2015 Torishima earthquake," has a CLVD-type focal mechanism with a moderate seismic magnitude (M5.7) but generated larger tsunami waves with an observed maximum height of 50 cm at Hachijo Island [JMA, 2015], so that the earthquake can be regarded as a "tsunami earthquake." In the region, similar tsunami earthquakes were observed in 1984, 1996 and 2006, but their physical mechanisms are still not well understood. Tsunami waves generated by the 2015 earthquake were recorded by an array of ocean bottom pressure (OBP) gauges located 100 km northeast of the epicenter. The waves initiated with a small downward signal of 0.1 cm and reached the peak amplitude (1.5-2.0 cm) of the leading upward signals, followed by continuous oscillations [Fukao et al., 2016]. For modeling its tsunami source, or sea-surface displacement, we perform tsunami waveform simulations and compare synthetic and observed waveforms at the OBP gauges. The linear Boussinesq equations are adopted with the tsunami simulation code JAGURS [Baba et al., 2015]. We first assume a Gaussian-shaped sea-surface uplift of 1.0 m with a source size comparable to Smith Caldera, 6-7 km in diameter. By shifting the source location around the caldera, we found that the uplift is probably located within the caldera rim, as suggested by Sandanbata et al. [2016]. However, the synthetic waves show no initial downward signal as observed at the OBP gauges. Hence, we add a ring of subsidence surrounding the main uplift, and examine the sizes and amplitudes of the main uplift and the subsidence ring. As a result, the model of a main uplift of around 1.0 m with a radius of 4 km, surrounded by a ring of small subsidence, shows good agreement between synthetic and observed waveforms. The results yield two implications for the deformation process that help us understand…

  16. A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.

    Science.gov (United States)

    Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio

    2017-11-01

    Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
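
    The model structure can be sketched by simulation: Gaussian signals whose variance is itself inverse-gamma distributed, with the variance recovered from squared-and-smoothed windows. The parameter values, window scheme and moment-matching estimator are illustrative assumptions; the paper uses marginal likelihood maximization rather than moment matching:

```python
import numpy as np

rng = np.random.default_rng(2)

# Ground-truth inverse-gamma variance distribution (hypothetical parameters)
alpha, beta = 6.0, 5.0
n_win, win = 2000, 200

# Each window: draw a variance from InvGamma(alpha, beta), then Gaussian "EMG";
# if G ~ Gamma(alpha, scale=1/beta), then 1/G ~ InvGamma(alpha, scale=beta)
sigma2 = 1.0 / rng.gamma(alpha, 1.0 / beta, n_win)
emg = rng.normal(0.0, np.sqrt(sigma2)[:, None], (n_win, win))

# "Rectify and smooth": per-window mean of the squared signal estimates sigma2
v_hat = np.mean(emg ** 2, axis=1)

# Moment matching against mean = beta/(alpha-1), var = mean^2/(alpha-2)
mu, var = v_hat.mean(), v_hat.var()
alpha_hat = mu ** 2 / var + 2.0
beta_hat = mu * (alpha_hat - 1.0)
```

    The smoothing noise biases the recovered shape parameter slightly downward, which hints at why a full marginal-likelihood treatment outperforms naive estimation.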

  17. Small-signal model for the series resonant converter

    Science.gov (United States)

    King, R. J.; Stuart, T. A.

    1985-01-01

    The results of a previous discrete-time model of the series resonant dc-dc converter are reviewed and from these a small signal dynamic model is derived. This model is valid for low frequencies and is based on the modulation of the diode conduction angle for control. The basic converter is modeled separately from its output filter to facilitate the use of these results for design purposes. Experimental results are presented.

  18. Long period seismic source characterization at Popocatépetl volcano, Mexico

    Science.gov (United States)

    Arciniega-Ceballos, Alejandra; Dawson, Phillip; Chouet, Bernard A.

    2012-01-01

    The seismicity of Popocatépetl is dominated by long-period and very-long period signals associated with hydrothermal processes and magmatic degassing. We model the source mechanism of repetitive long-period signals in the 0.4–2 s band from a 15-station broadband network by stacking long-period events with similar waveforms to improve the signal-to-noise ratio. The data are well fitted by a point source located within the summit crater ~250 m below the crater floor and ~200 m from the inferred magma conduit. The inferred source includes a volumetric component that can be modeled as resonance of a horizontal steam-filled crack and a vertical single force component. The long-period events are thought to be related to the interaction between the magmatic system and a perched hydrothermal system. Repetitive injection of fluid into the horizontal fracture and subsequent sudden discharge when a critical pressure threshold is met provides a non-destructive source process.

  19. SignalSpider: Probabilistic pattern discovery on multiple normalized ChIP-Seq signal profiles

    KAUST Repository

    Wong, Kachun

    2014-09-05

    Motivation: Chromatin immunoprecipitation (ChIP) followed by high-throughput sequencing (ChIP-Seq) measures the genome-wide occupancy of transcription factors in vivo. Different combinations of DNA-binding protein occupancies may result in a gene being expressed in different tissues or at different developmental stages. To fully understand the functions of genes, it is essential to develop probabilistic models on multiple ChIP-Seq profiles to decipher the combinatorial regulatory mechanisms by multiple transcription factors. Results: In this work, we describe a probabilistic model (SignalSpider) to decipher the combinatorial binding events of multiple transcription factors. Compared with similar existing methods, we found that SignalSpider performs better in clustering promoter and enhancer regions. Notably, SignalSpider can learn higher-order combinatorial patterns from multiple ChIP-Seq profiles. We have applied SignalSpider on the normalized ChIP-Seq profiles from the ENCODE consortium and learned model instances. We observed different higher-order enrichment and depletion patterns across sets of proteins. Those clustering patterns are supported by Gene Ontology (GO) enrichment, evolutionary conservation and chromatin interaction enrichment, offering biological insights for further focused studies. We also propose a specific enrichment-map visualization method to reveal the genome-wide transcription factor combinatorial patterns from the models built, which extends our existing fine-scale knowledge on gene regulation to a genome-wide level. Availability and implementation: The matrix-algebra-optimized executables and source codes are available at the authors' website: http://www.cs.toronto.edu/∼wkc/SignalSpider. Supplementary information: Supplementary data are available at Bioinformatics online.

  20. Modelling of H.264 MPEG2 TS Traffic Source

    Directory of Open Access Journals (Sweden)

    Stanislav Klucik

    2013-01-01

    This paper deals with IPTV traffic source modelling. Traffic sources are used for simulation, emulation and real network testing. The model is derived from known recorded traffic sources that are analysed and statistically processed. As the results show, when used in a simulated network, the proposed model produces network traffic parameters very similar to those of the known traffic source.
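
    Deriving a generative source model from a recorded trace can be sketched as resampling the trace's empirical distributions. The stand-in packet sizes and inter-arrival times below are assumptions (a real model would be fitted to a captured MPEG2-TS stream, and the paper's statistical processing is more elaborate than plain resampling):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for a recorded MPEG2-TS trace: packet sizes (bytes) and
# inter-arrival times (ms); real traces would be loaded from a capture
sizes = rng.choice([188, 376, 1316], 5000, p=[0.2, 0.3, 0.5])
gaps = rng.exponential(1.5, 5000)

def empirical_sampler(data, n, rng):
    """Generate n values by resampling the empirical distribution of `data`."""
    return rng.choice(data, size=n, replace=True)

# Synthetic traffic generated from the derived model
syn_sizes = empirical_sampler(sizes, 5000, rng)
syn_gaps = empirical_sampler(gaps, 5000, rng)

# The synthetic source reproduces first-order statistics of the original
rate_orig = sizes.sum() / gaps.sum()   # bytes per ms
rate_syn = syn_sizes.sum() / syn_gaps.sum()
```

    Matching higher-order properties (burstiness, autocorrelation) requires modelling the temporal structure as well, not just the marginals.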

  1. Computerized dosimetry of I-125 sources model 6711

    International Nuclear Information System (INIS)

    Isturiz, J.

    2001-01-01

    It covers: the physical presentation of the sources; radiation protection; a mathematical model of the I-125 source model 6711; the data considered for the calculation program; experimental verification of the dose distribution; exposure rate and apparent activity; techniques for the use of the I-125 sources; and the calculation planning systems.

  2. Modeling off-frequency binaural masking for short- and long-duration signals.

    Science.gov (United States)

    Nitschmann, Marc; Yasin, Ifat; Henning, G Bruce; Verhey, Jesko L

    2017-08-01

    Experimental binaural masking-pattern data are presented together with model simulations for 12- and 600-ms signals. The masker was a diotic 11-Hz wide noise centered on 500 Hz. The tonal signal was presented either diotically or dichotically (180° interaural phase difference) with frequencies ranging from 400 to 600 Hz. The results and the modeling agree with previous data and hypotheses; simulations with a binaural model sensitive to monaural modulation cues show that the effect of duration on off-frequency binaural masking-level differences is mainly a result of modulation cues which are only available in the monaural detection of long signals.

  3. Digital signal processing with kernel methods

    CERN Document Server

    Rojo-Alvarez, José Luis; Muñoz-Marí, Jordi; Camps-Valls, Gustavo

    2018-01-01

    A realistic and comprehensive review of joint approaches to machine learning and signal processing algorithms, with application to communications, multimedia, and biomedical engineering systems. Digital Signal Processing with Kernel Methods reviews the milestones in the mixing of classical digital signal processing models and advanced kernel-machine statistical learning tools. It explains the fundamental concepts from both fields of machine learning and signal processing so that readers can quickly get up to speed in order to begin developing the concepts and application software in their own research. Digital Signal Processing with Kernel Methods provides a comprehensive overview of kernel methods in signal processing, without restriction to any application field. It also offers example applications and detailed benchmarking experiments with real and synthetic datasets throughout. Readers can find further worked examples with Matlab source code on a website developed by the authors. * Presents the necess...

  4. Vibration and acoustic frequency spectra for industrial process modeling using selective fusion multi-condition samples and multi-source features

    Science.gov (United States)

    Tang, Jian; Qiao, Junfei; Wu, ZhiWei; Chai, Tianyou; Zhang, Jian; Yu, Wen

    2018-01-01

    Frequency spectral data of mechanical vibration and acoustic signals relate to difficult-to-measure production quality and quantity parameters of complex industrial processes. A selective ensemble (SEN) algorithm can be used to build a soft sensor model of these process parameters by selectively fusing valued information from different perspectives. However, a combination of several optimized ensemble sub-models with SEN cannot guarantee the best prediction model. In this study, we construct a data-driven model of industrial process parameters from mechanical vibration and acoustic frequency spectra, based on the selective fusion of multi-condition samples and multi-source features. A multi-layer SEN (MLSEN) strategy is used to simulate the domain expert's cognitive process. A genetic algorithm and kernel partial least squares are used to construct the inside-layer SEN sub-model based on each mechanical vibration and acoustic frequency spectral feature subset. Branch-and-bound and adaptive weighted fusion algorithms are integrated to select and combine the outputs of the inside-layer SEN sub-models. Then, the outside-layer SEN is constructed. Thus, "sub-sampling training examples"-based and "manipulating input features"-based ensemble construction methods are integrated, thereby realizing the selective information fusion process based on multi-condition history samples and multi-source input features. This novel approach is applied to a laboratory-scale ball mill grinding process. A comparison with other methods indicates that the proposed MLSEN approach effectively models mechanical vibration and acoustic signals.
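
    The select-and-fuse step at the heart of a selective ensemble can be sketched in miniature. The synthetic sub-model predictions, the inverse-MSE weighting and the exhaustive subset search (standing in for branch-and-bound over a small candidate pool) are all illustrative assumptions:

```python
import itertools
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical validation target and predictions of four candidate sub-models
y = rng.normal(0.0, 1.0, 200)
preds = [y + rng.normal(0.0, s, 200) for s in (0.3, 0.4, 0.8, 1.5)]

def fuse(subset):
    """Adaptive weighted fusion: weights inversely proportional to MSE."""
    w = np.array([1.0 / np.mean((preds[i] - y) ** 2) for i in subset])
    w /= w.sum()
    return sum(wi * preds[i] for wi, i in zip(w, subset))

# Exhaustive search over sub-model subsets stands in for branch-and-bound
best_subset, best_err = None, np.inf
for r in range(1, 5):
    for subset in itertools.combinations(range(4), r):
        err = np.mean((fuse(subset) - y) ** 2)
        if err < best_err:
            best_subset, best_err = subset, err
```

    Selecting a subset and weighting it adaptively typically beats both the best single sub-model and the naive all-model average, which is the motivation for the MLSEN layering.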

  5. An open-terrain line source model coupled with street-canyon effects to forecast carbon monoxide at traffic roundabout.

    Science.gov (United States)

    Pandian, Suresh; Gokhale, Sharad; Ghoshal, Aloke Kumar

    2011-02-15

    A double-lane four-arm roundabout, where traffic movement is continuous in opposite directions and at different speeds, produces a zone responsible for the recirculation of emissions within a road section, creating a canyon-type effect. In this zone, the effect of thermally induced turbulence together with vehicle wake dominates over wind-driven turbulence, causing pollutant emissions to recirculate within it and resulting in more or less equal amounts of pollutants upwind and downwind, particularly during low winds. Beyond this region, however, the effect of winds becomes stronger, causing downwind movement of pollutants. Pollutant dispersion caused by such phenomena cannot be described accurately by an open-terrain line source model alone. This is demonstrated by estimating one-minute average carbon monoxide concentrations by coupling an open-terrain line source model with a street canyon model, which captures the combined effect to describe dispersion at a non-signalized roundabout. The results of the coupled model matched the measurements better than those of the line source model alone, and the prediction error was reduced by about 50%. The study further demonstrated this with traffic emissions calculated by field and semi-empirical methods.
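
    The open-terrain half of such a coupling can be sketched by approximating a finite line source as a row of ground-level Gaussian point sources. The geometry, emission rate and dispersion parameters below are hypothetical, and the street-canyon correction the paper couples in is omitted:

```python
import numpy as np

def line_source_conc(x, y, q_per_m, u, half_len, sigma_y, sigma_z, n=200):
    """Ground-level concentration downwind of a finite crosswind line source.

    The line is approximated by n point sources of strength q_per_m * dl;
    sigma_y and sigma_z are taken at the receptor's downwind distance x
    (precomputed here for simplicity, hence x is not used directly).
    """
    ys = np.linspace(-half_len, half_len, n)
    dl = ys[1] - ys[0]
    q = q_per_m * dl
    # ground-level source and receptor: reflection doubles the concentration,
    # giving the standard q / (pi * sy * sz * u) prefactor per point source
    c = q / (np.pi * sigma_y * sigma_z * u) * np.exp(
        -0.5 * ((y - ys) / sigma_y) ** 2)
    return c.sum()

# Hypothetical roundabout arm: 200 m road, emission 0.01 g/(m s), wind 2 m/s
c_centre = line_source_conc(50.0, 0.0, 0.01, 2.0, 100.0, 10.0, 5.0)
c_edge = line_source_conc(50.0, 120.0, 0.01, 2.0, 100.0, 10.0, 5.0)
```

    Inside the recirculation zone this open-terrain estimate underpredicts during low winds, which is precisely where the canyon-model correction takes over in the coupled scheme.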

  6. Nuisance Source Population Modeling for Radiation Detection System Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sokkappa, P; Lange, D; Nelson, K; Wheeler, R

    2009-10-05

    A major challenge facing the prospective deployment of radiation detection systems for homeland security applications is the discrimination of radiological or nuclear 'threat sources' from radioactive, but benign, 'nuisance sources'. Common examples of such nuisance sources include naturally occurring radioactive material (NORM), medical patients who have received radioactive drugs for either diagnostics or treatment, and industrial sources. A sensitive detector that cannot distinguish between 'threat' and 'benign' classes will generate false positives which, if sufficiently frequent, will preclude it from being operationally deployed. In this report, we describe a first-principles physics-based modeling approach that is used to approximate the physical properties and corresponding gamma ray spectral signatures of real nuisance sources. Specific models are proposed for the three nuisance source classes - NORM, medical and industrial. The models can be validated against measured data - that is, energy spectra generated with the model can be compared to actual nuisance source data. We show by example how this is done for NORM and medical sources, using data sets obtained from spectroscopic detector deployments for cargo container screening and urban area traffic screening, respectively. In addition to capturing the range of radioactive signatures of individual nuisance sources, a nuisance source population model must generate sources with a frequency of occurrence consistent with that found in actual movement of goods and people. Measured radiation detection data can indicate these frequencies, but, at present, such data are available only for a very limited set of locations and time periods. In this report, we make more general estimates of frequencies for NORM and medical sources using a range of data sources such as shipping manifests and medical treatment statistics. We also identify potential data sources for industrial

  7. Fast temperature optimization of multi-source hyperthermia applicators with reduced-order modeling of 'virtual sources'

    International Nuclear Information System (INIS)

    Cheng, K-S; Stakhursky, Vadim; Craciunescu, Oana I; Stauffer, Paul; Dewhirst, Mark; Das, Shiva K

    2008-01-01

    The goal of this work is to build the foundation for facilitating real-time magnetic resonance image guided patient treatment for heating systems with a large number of physical sources (e.g. antennas). Achieving this goal requires knowledge of how the temperature distribution will be affected by changing each source individually, which requires time expenditure on the order of the square of the number of sources. To reduce computation time, we propose a model reduction approach that combines a smaller number of predefined source configurations (fewer than the number of actual sources) that are most likely to heat tumor. The source configurations consist of magnitude and phase source excitation values for each actual source and may be computed from a CT scan based plan or a simplified generic model of the corresponding patient anatomy. Each pre-calculated source configuration is considered a 'virtual source'. We assume that the actual best source settings can be represented effectively as weighted combinations of the virtual sources. In the context of optimization, each source configuration is treated equivalently to one physical source. This model reduction approach is tested on a patient upper-leg tumor model (with and without temperature-dependent perfusion), heated using a 140 MHz ten-antenna cylindrical mini-annular phased array. Numerical simulations demonstrate that using only a few pre-defined source configurations can achieve temperature distributions that are comparable to those from full optimizations using all physical sources. The method yields close to optimal temperature distributions when using source configurations determined from a simplified model of the tumor, even when tumor position is erroneously assumed to be ∼2.0 cm away from the actual position as often happens in practical clinical application of pre-treatment planning. The method also appears to be robust under conditions of changing, nonlinear, temperature-dependent perfusion. The
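
    The reduced-order idea, optimizing a few weights over precomputed "virtual source" field patterns instead of all physical antenna excitations, can be sketched as follows. The random stand-in fields, the tumor-to-healthy SAR ratio objective and the random search are illustrative assumptions; the paper optimizes temperature distributions computed from a bioheat model:

```python
import numpy as np

rng = np.random.default_rng(5)

n_vox, n_ant, n_virtual = 500, 10, 3
tumor = np.zeros(n_vox, dtype=bool)
tumor[:50] = True

# Complex E-field of each physical antenna at each voxel (stand-in data);
# a coherent offset inside the tumor makes constructive focusing possible
e_ant = rng.normal(size=(n_ant, n_vox)) + 1j * rng.normal(size=(n_ant, n_vox))
e_ant[:, tumor] += 3.0

# Each virtual source is a predefined excitation vector over the 10 antennas,
# so its field pattern can be precomputed once
v_cfg = rng.normal(size=(n_virtual, n_ant)) + 1j * rng.normal(size=(n_virtual, n_ant))
e_virtual = v_cfg @ e_ant

def tumor_ratio(w, fields):
    """SAR contrast between tumor and healthy tissue for weights w."""
    sar = np.abs(w @ fields) ** 2
    return sar[tumor].mean() / sar[~tumor].mean()

# Random search over 3 complex weights instead of 10 antenna excitations
best_w, best_r = None, -np.inf
for _ in range(3000):
    w = rng.normal(size=n_virtual) + 1j * rng.normal(size=n_virtual)
    r = tumor_ratio(w, e_virtual)
    if r > best_r:
        best_w, best_r = w, r
```

    The search space shrinks from the number of antennas to the number of virtual sources, which is what makes repeated re-optimization during MR-guided treatment tractable.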

  8. Modeling the Pulse Signal by Wave-Shape Function and Analyzing by Synchrosqueezing Transform.

    Science.gov (United States)

    Wu, Hau-Tieng; Wu, Han-Kuei; Wang, Chun-Li; Yang, Yueh-Lung; Wu, Wen-Hsiang; Tsai, Tung-Hu; Chang, Hen-Hong

    2016-01-01

    We apply the recently developed adaptive non-harmonic model based on the wave-shape function, as well as the time-frequency analysis tool called synchrosqueezing transform (SST) to model and analyze oscillatory physiological signals. To demonstrate how the model and algorithm work, we apply them to study the pulse wave signal. By extracting features called the spectral pulse signature, and based on functional regression, we characterize the hemodynamics from the radial pulse wave signals recorded by the sphygmomanometer. Analysis results suggest the potential of the proposed signal processing approach to extract health-related hemodynamics features.

  9. An accurate and simple large signal model of HEMT

    DEFF Research Database (Denmark)

    Liu, Qing

    1989-01-01

    A large-signal model of discrete HEMTs (high-electron-mobility transistors) has been developed. It is simple and suitable for SPICE simulation of hybrid digital ICs. The model parameters are extracted by using computer programs and data provided by the manufacturer. Based on this model, a hybrid...

  10. Electromagnetic modeling method for eddy current signal analysis

    International Nuclear Information System (INIS)

    Lee, D. H.; Jung, H. K.; Cheong, Y. M.; Lee, Y. S.; Huh, H.; Yang, D. J.

    2004-10-01

    An electromagnetic modeling method for eddy current signal analysis is necessary before an experiment is performed. Electromagnetic modeling methods consist of analytical methods and numerical methods. The numerical methods can be divided into the Finite Element Method (FEM), the Boundary Element Method (BEM), and the Volume Integral Method (VIM). Each modeling method has its merits and demerits, so a suitable method can be chosen by considering the characteristics of each. This report explains the principle and application of each modeling method and compares the modeling programs.

  11. A model for generating Surface EMG signal of m. Tibialis Anterior.

    Science.gov (United States)

    Siddiqi, Ariba; Kumar, Dinesh; Arjunan, Sridhar P

    2014-01-01

    A model that simulates the surface electromyogram (sEMG) signal of m. Tibialis Anterior has been developed and tested. It incorporates a firing rate equation based on experimental findings and a recruitment threshold based on an observed statistical distribution. Importantly, it considers both slow and fast motor unit types, distinguished by their conduction velocities. The model assumes that the deeper unipennate half of the muscle does not contribute significantly to the potential induced on the surface of the muscle, and approximates the muscle as having a parallel structure. The model was validated by comparing simulated and experimental sEMG recordings. Experiments were conducted on eight subjects who performed isometric dorsiflexion at 10, 20, 30, 50, 75, and 100% maximal voluntary contraction. The normalized root mean square and median frequency of the experimental and simulated EMG signals were computed, and the slopes of their linear relationships with force were statistically analyzed. The gradients were found to be similar (p>0.05) for experimental and simulated sEMG signals, validating the proposed model.

  12. Electrocardiogram (ECG) Signal Modeling and Noise Reduction Using Hopfield Neural Networks

    Directory of Open Access Journals (Sweden)

    F. Bagheri

    2013-02-01

    Full Text Available The electrocardiogram (ECG) signal is one of the main diagnostic tools for detecting heart disease. In this study the Hopfield Neural Network (HNN) is proposed for ECG signal modeling and noise reduction. The HNN is a recurrent neural network that stores information in a dynamically stable pattern, and retrieves a pattern stored in memory in response to the presentation of an incomplete or noisy version of that pattern. Computer simulation results show that this method can successfully model the ECG signal and remove high-frequency noise.
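The associative-recall principle this record relies on can be sketched in a few lines. The toy example below is a hypothetical illustration, not the paper's actual network: a single "template" is stored via Hebbian outer-product weights, and a corrupted copy is driven back to the stored pattern by sign-threshold updates.

```python
def hebbian_weights(pattern):
    """Outer-product (Hebbian) weights with zero diagonal."""
    n = len(pattern)
    return [[0 if i == j else pattern[i] * pattern[j] for j in range(n)]
            for i in range(n)]

def recall(weights, state, steps=5):
    """Synchronous sign-threshold updates."""
    for _ in range(steps):
        state = [1 if sum(w * s for w, s in zip(row, state)) >= 0 else -1
                 for row in weights]
    return state

stored = [1, -1, 1, 1, -1, -1, 1, -1]   # clean template, +/-1 coded
noisy = list(stored)
noisy[3] = -noisy[3]                     # corrupt one element ("noise")
restored = recall(hebbian_weights(stored), noisy)
print(restored == stored)                # prints True
```

With a single stored pattern, any input whose majority of elements agree with the template settles back onto it after one synchronous update.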

  13. Modeling the frequency of opposing left-turn conflicts at signalized intersections using generalized linear regression models.

    Science.gov (United States)

    Zhang, Xin; Liu, Pan; Chen, Yuguang; Bai, Lu; Wang, Wei

    2014-01-01

    The primary objective of this study was to identify whether the frequency of traffic conflicts at signalized intersections can be modeled. The opposing left-turn conflicts were selected for the development of conflict predictive models. Using data collected at 30 approaches at 20 signalized intersections, the underlying distributions of the conflicts under different traffic conditions were examined. Different conflict-predictive models were developed to relate the frequency of opposing left-turn conflicts to various explanatory variables. The models considered include a linear regression model, a negative binomial model, and separate models developed for four traffic scenarios. The prediction performance of the different models was compared. The frequency of traffic conflicts follows a negative binomial distribution, and the linear regression model is not appropriate for the conflict frequency data. In addition, drivers behaved differently under different traffic conditions; accordingly, the effects of conflicting traffic volumes on conflict frequency vary across traffic conditions. The occurrence of traffic conflicts at signalized intersections can be modeled using generalized linear regression models. The use of conflict predictive models has the potential to expand the use of surrogate safety measures in safety estimation and evaluation.
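The finding that conflict counts are negative binomial rather than Poisson rests on overdispersion (variance exceeding the mean). The sketch below illustrates this with simulated data, using the standard gamma-mixed Poisson construction of the negative binomial; all parameter values are made up for illustration.

```python
import math, random

def poisson(lam, rng):
    """Knuth's Poisson sampler (adequate for small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(42)
mu, shape = 5.0, 2.0       # made-up mean conflict rate and dispersion shape
# Gamma-mixed Poisson = negative binomial, Var = mu + mu^2/shape = 17.5 here
counts = [poisson(rng.gammavariate(shape, mu / shape), rng)
          for _ in range(4000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)
print(var > mean)           # prints True: overdispersion rules out plain Poisson
```

A Poisson model forces variance = mean, so a sample variance well above the sample mean is the usual diagnostic pointing to the negative binomial family.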

  14. Model of multicomponent micro-Doppler signal in environment MatLab

    Directory of Open Access Journals (Sweden)

    Kucheryavenko Alexander

    2017-01-01

    Full Text Available The article addresses the problem of measuring the airframe (glider) velocity component of targets in a pulse-Doppler radar when a turboprop modulation effect is present in the reflected signal; a model of the turboprop signal component and an algorithm for its suppression are proposed.

  15. Asymmetric Joint Source-Channel Coding for Correlated Sources with Blind HMM Estimation at the Receiver

    Directory of Open Access Journals (Sweden)

    Ser Javier Del

    2005-01-01

    Full Text Available We consider the case of two correlated sources whose correlation has memory, modelled by a hidden Markov chain. The paper studies the problem of reliable communication of the information sent by one source over an additive white Gaussian noise (AWGN) channel when the output of the other source is available as side information at the receiver. We assume that the receiver has no a priori knowledge of the correlation statistics between the sources. In particular, we propose the use of a turbo code for joint source-channel coding of the transmitted source. The joint decoder uses an iterative scheme in which the unknown parameters of the correlation model are estimated jointly within the decoding process. It is shown that reliable communication is possible at signal-to-noise ratios close to the theoretical limits set by the combination of the Shannon and Slepian-Wolf theorems.
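The correlation-with-memory structure described above can be illustrated with a toy generator (assumed parameters, not the paper's): a two-state hidden Markov chain switches the flip probability of the channel linking the two binary sources, so agreement between them comes in bursts.

```python
import random

rng = random.Random(1)
flip_prob = {0: 0.05, 1: 0.30}   # assumed "good" / "bad" correlation states
stay = 0.9                        # probability of keeping the current state

state, n, mismatches = 0, 20000, 0
for _ in range(n):
    if rng.random() > stay:       # hidden Markov transition
        state = 1 - state
    x = rng.randint(0, 1)                        # source bit X
    y = x ^ (rng.random() < flip_prob[state])    # correlated side-info bit Y
    mismatches += (x != y)
rate = mismatches / n
print(0.10 < rate < 0.25)   # near the stationary average (0.05 + 0.30)/2 = 0.175
```

A receiver that does not know `flip_prob` or `stay` faces exactly the blind-estimation problem the paper solves inside its iterative decoder.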

  16. Probabilistic forward model for electroencephalography source analysis

    International Nuclear Information System (INIS)

    Plis, Sergey M; George, John S; Jun, Sung C; Ranken, Doug M; Volegov, Petr L; Schmidt, David M

    2007-01-01

    Source localization by electroencephalography (EEG) requires an accurate model of head geometry and tissue conductivity. The estimation of source time courses from EEG, or from EEG in conjunction with magnetoencephalography (MEG), requires a forward model consistent with true activity for the best outcome. Although MRI provides an excellent description of soft tissue anatomy, a high-resolution model of the skull (the dominant resistive component of the head) requires CT, which is not justified for routine physiological studies. Although a number of techniques have been employed to estimate tissue conductivity, no present technique provides the noninvasive 3D tomographic mapping of conductivity that would be desirable. We introduce a formalism for probabilistic forward modeling that allows the propagation of uncertainties in model parameters into possible errors in source localization. We consider uncertainties in the conductivity profile of the skull, but the approach is general and can be extended to other kinds of uncertainties in the forward model. We and others have previously suggested the possibility of extracting the conductivity of the skull from measured electroencephalography data by simultaneously optimizing over dipole parameters and the conductivity values required by the forward model. Using Cramér-Rao bounds, we demonstrate that this approach does not improve localization results, nor does it produce reliable conductivity estimates. We conclude that the conductivity of the skull has to be either accurately measured by an independent technique, or that the uncertainties in the conductivity values should be reflected in uncertainty in the source location estimates.
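The Cramér-Rao argument used in this record can be illustrated on a much simpler estimation problem. The sketch below (a toy linear model with made-up numbers, not the EEG forward model) checks a least-squares amplitude estimate against its Cramér-Rao bound by Monte Carlo.

```python
import random

rng = random.Random(7)
g = [0.5, 1.0, -0.8, 0.3, 1.2]    # known gain ("lead field") vector, made up
a_true, sigma = 2.0, 0.4
gg = sum(v * v for v in g)
crb = sigma ** 2 / gg              # Cramér-Rao lower bound on Var(a_hat)

estimates = []
for _ in range(4000):
    y = [a_true * v + rng.gauss(0.0, sigma) for v in g]    # y = a*g + noise
    estimates.append(sum(u * v for u, v in zip(y, g)) / gg)  # least squares
mean = sum(estimates) / len(estimates)
var = sum((e - mean) ** 2 for e in estimates) / (len(estimates) - 1)
print(0.7 * crb < var < 1.3 * crb)   # prints True: the estimator attains the bound
```

For this linear-Gaussian case least squares is efficient, so its variance sits at the bound; the paper's point is that adding skull conductivity as a free parameter inflates the corresponding bound rather than tightening it.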

  17. Equivalent charge source model based iterative maximum neighbor weight for sparse EEG source localization.

    Science.gov (United States)

    Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong

    2008-12-01

    How to localize neural electric activity within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and an iterative re-weighted strategy, we propose a new maximum-neighbor-weight-based iterative sparse source imaging method, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Unlike the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently across iterations, the new weight for each point in each iteration is determined by the previous iteration's source solution at both the point and its neighbors. With this new weight, the next iteration has a better chance of rectifying local source location bias present in the previous solution. Simulation studies comparing CMOSS with FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimuli experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.

  18. Modeling the Pulse Signal by Wave-Shape Function and Analyzing by Synchrosqueezing Transform.

    Directory of Open Access Journals (Sweden)

    Hau-Tieng Wu

    Full Text Available We apply the recently developed adaptive non-harmonic model based on the wave-shape function, as well as the time-frequency analysis tool called synchrosqueezing transform (SST), to model and analyze oscillatory physiological signals. To demonstrate how the model and algorithm work, we apply them to study the pulse wave signal. By extracting features called the spectral pulse signature, and based on functional regression, we characterize the hemodynamics from the radial pulse wave signals recorded by the sphygmomanometer. Analysis results suggest the potential of the proposed signal processing approach to extract health-related hemodynamics features.

  19. Nonnegative Tensor Factorization Approach Applied to Fission Chamber’s Output Signals Blind Source Separation

    Science.gov (United States)

    Laassiri, M.; Hamzaoui, E.-M.; Cherkaoui El Moursli, R.

    2018-02-01

    Inside nuclear reactors, gamma-rays emitted from nuclei together with the neutrons introduce unwanted backgrounds in neutron spectra. For this reason, powerful extraction methods are needed to separate the useful neutron signal from the recorded mixture and thus obtain a clearer neutron flux spectrum. Several techniques have been developed to discriminate between neutrons and gamma-rays in a mixed radiation field. Most of these techniques rely on analogue discrimination methods; others use organic scintillators to achieve the discrimination task. Recently, systems based on digital signal processors have become commercially available to replace the analogue systems. As an alternative, in this work we verify the feasibility of using Nonnegative Tensor Factorization (NTF) to blindly extract the neutron component from mixture signals recorded at the output of a fission chamber (WL-7657), which was simulated in Geant4 coupled to Garfield++ using a 252Cf neutron source. To obtain the best possible neutron-gamma discrimination, we applied two different NTF algorithms, which were found to be the best-suited methods for analysing this kind of nuclear data.
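The factorization idea can be shown in its simplest, matrix (two-way) form; the NTF used in the paper generalizes this to higher-order tensors. The sketch below runs standard multiplicative-update NMF on made-up nonnegative mixtures and checks that the reconstruction error decreases.

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def frob_err(V, W, H):
    R = matmul(W, H)
    return sum((v - r) ** 2 for rv, rr in zip(V, R) for v, r in zip(rv, rr))

rng = random.Random(3)
W_true = [[1.0, 0.2], [0.3, 1.0], [0.5, 0.5]]   # made-up mixing matrix
H_true = [[1, 0, 2, 1], [0, 2, 1, 0]]            # made-up source activations
V = matmul(W_true, H_true)                       # observed nonnegative mixtures

W = [[rng.random() + 0.1 for _ in range(2)] for _ in range(3)]
H = [[rng.random() + 0.1 for _ in range(4)] for _ in range(2)]
err0 = frob_err(V, W, H)
for _ in range(200):
    # H <- H * (W^T V) / (W^T W H)
    num, den = matmul(transpose(W), V), matmul(matmul(transpose(W), W), H)
    H = [[h * n / (d + 1e-12) for h, n, d in zip(hr, nr, dr)]
         for hr, nr, dr in zip(H, num, den)]
    # W <- W * (V H^T) / (W H H^T)
    num, den = matmul(V, transpose(H)), matmul(W, matmul(H, transpose(H)))
    W = [[w * n / (d + 1e-12) for w, n, d in zip(wr, nr, dr)]
         for wr, nr, dr in zip(W, num, den)]
print(frob_err(V, W, H) < err0)          # prints True: error decreases monotonically
```

Multiplicative updates keep every factor entry nonnegative by construction, which is the property that lets the recovered components be interpreted as physical (neutron/gamma) contributions.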

  20. Ocean angular momentum signals in a climate model and implications for Earth rotation

    Science.gov (United States)

    Ponte, R. M.; Rajamony, J.; Gregory, J. M.

    2002-03-01

    Estimates of ocean angular momentum (OAM) provide an integrated measure of variability in ocean circulation and mass fields and can be directly related to observed changes in Earth rotation. We use output from a climate model to calculate 240 years of 3-monthly OAM values (two equatorial terms L1 and L2, related to polar motion or wobble, and the axial term L3, related to length-of-day variations) representing the period 1860-2100. Control and forced runs permit the study of the effects of natural and anthropogenically forced climate variability on OAM. All OAM components exhibit a clear annual cycle, with large decadal modulations in amplitude, and also longer-period fluctuations, all associated with natural climate variability in the model. Anthropogenically induced signals, inferred from the differences between forced and control runs, include an upward trend in L3, related to inhomogeneous ocean warming and increases in the transport of the Antarctic Circumpolar Current, and a significantly weaker seasonal cycle in L2 in the second half of the record, related primarily to changes in seasonal bottom pressure variability in the Southern Ocean and North Pacific. Variability in mass fields is in general more important to OAM signals than changes in circulation at the seasonal and longer periods analyzed. The relation of OAM signals to changes in surface atmospheric forcing is discussed. The important role of the oceans as an excitation source for the annual, Chandler, and Markowitz wobbles is confirmed. Natural climate variability in OAM and related excitation is likely to measurably affect Earth rotation, but anthropogenically induced effects are comparatively weak.

  1. Foreground Subtraction and Signal reconstruction in redshifted 21cm Global Signal Experiments using Artificial Neural Networks

    Science.gov (United States)

    Choudhury, Madhurima; Datta, Abhirup

    2018-05-01

    Observations of the HI 21cm transition line are a promising probe into the Dark Ages and the Epoch of Reionization. Detection of this redshifted 21cm signal is one of the key science goals for several upcoming low-frequency radio telescopes such as HERA, SKA and DARE. Other global signal experiments include EDGES, LEDA, BIGHORNS, SCI-HI and SARAS. One of the major challenges for the detection of this signal is the accuracy of foreground source removal. Several novel techniques have already been explored to remove bright foregrounds from both interferometric and total-power experiments. Here, we present preliminary results from our investigation of the application of artificial neural networks (ANNs) to detect the 21cm global signal amidst bright galactic foregrounds. Following the formalism of representing the global 21cm signal by a 'tanh' model, this study finds that the global 21cm signal parameters can be accurately determined even in the presence of bright foregrounds represented by a 3rd-order log-polynomial or higher.
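The 'tanh' parameterization mentioned above represents each evolution history as a smooth step in redshift. A minimal sketch of such a step, with illustrative parameter values (not the paper's fitted ones):

```python
import math

def tanh_step(z, z_mid, dz):
    """Smooth step from 1 (low z) to 0 (high z), centered at z_mid.

    Here used as a toy ionization-fraction history; z_mid and dz are the
    kind of parameters the ANN is trained to recover."""
    return 0.5 * (1.0 + math.tanh((z_mid - z) / dz))

x_ion = [tanh_step(z, z_mid=8.0, dz=1.0) for z in (4.0, 8.0, 14.0)]
print(x_ion)   # ~1 at low z (ionized late), 0.5 at the midpoint, ~0 at high z
```

Combining a few such steps for the relevant quantities gives the small parameter set that makes the signal recoverable even under a bright, smooth foreground.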

  2. Using the PLUM procedure of SPSS to fit unequal variance and generalized signal detection models.

    Science.gov (United States)

    DeCarlo, Lawrence T

    2003-02-01

    The recent addition of a procedure (PLUM) in SPSS for the analysis of ordinal regression models offers a simple means for researchers to fit the unequal variance normal signal detection model and other extended signal detection models. The present article shows how to implement the analysis and how to interpret the SPSS output. Examples of fitting the unequal variance normal model and other generalized signal detection models are given. The approach offers a convenient means for applying signal detection theory to a variety of research.
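The extended models in this record build on the basic signal detection computation, which is easy to state directly: d' and the criterion follow from the inverse normal CDF of the hit and false-alarm rates (equal-variance case shown here; the unequal-variance model adds a slope parameter).

```python
from statistics import NormalDist

def sdt_indices(hit_rate, fa_rate):
    """Equal-variance d' and criterion c from hit and false-alarm rates."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

d, c = sdt_indices(0.80, 0.20)
print(round(d, 3))   # prints 1.683; criterion is ~0 (unbiased observer)
```

The ordinal-regression (PLUM) formulation recovers these same quantities as link-function coefficients, which is what makes the extension to unequal variances straightforward.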

  3. Using pulse oximetry to account for high and low frequency physiological artifacts in the BOLD signal.

    Science.gov (United States)

    Verstynen, Timothy D; Deshpande, Vibhas

    2011-04-15

    The BOLD signal not only reflects changes in local neural activity, but also exhibits variability from physiological processes like cardiac rhythms and breathing. We investigated how both of these physiological sources are reflected in the pulse oximetry (PO) signal, a direct measure of blood oxygenation, and how this information can be used to account for different types of noise in the BOLD response. Measures of heart rate, respiration and PO were simultaneously recorded while neurologically healthy participants performed an eye-movement task in a 3T MRI. PO exhibited power in frequencies that matched those found in the independently recorded cardiac and respiration signals. Using the phasic and aphasic properties of these signals as nuisance regressors, we found that the different frequency components of the PO signal could be used to identify different types of physiological artifacts in the BOLD response. A comparison of different physiological noise models found that a simple, down-sampled version of the PO signal improves the estimation of task-relevant statistics nearly as well as more established noise models that may run the risk of over-parameterization. These findings suggest that the PO signal captures multiple sources of physiological noise in the BOLD response and provides a simple and efficient way of modeling these noise sources in subsequent analysis. Copyright © 2011 Elsevier Inc. All rights reserved.
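The nuisance-regressor approach described above can be sketched with a toy linear model (synthetic data and made-up regressors, not the study's actual design): including the physiological regressor alongside the task regressor lets ordinary least squares recover both effects.

```python
import math, random

rng = random.Random(0)
n = 240
task = [1.0 if (i // 20) % 2 else 0.0 for i in range(n)]      # boxcar task regressor
cardiac = [math.sin(2 * math.pi * i / 7.0) for i in range(n)]  # stand-in "PO" regressor
y = [2.0 * t + 1.5 * c + rng.gauss(0.0, 0.3)
     for t, c in zip(task, cardiac)]                           # synthetic BOLD-like series

# Ordinary least squares for the two regressors via the normal equations
S11 = sum(t * t for t in task)
S12 = sum(t * c for t, c in zip(task, cardiac))
S22 = sum(c * c for c in cardiac)
b1 = sum(t * v for t, v in zip(task, y))
b2 = sum(c * v for c, v in zip(cardiac, y))
det = S11 * S22 - S12 * S12
beta_task = (b1 * S22 - b2 * S12) / det
beta_cardiac = (S11 * b2 - S12 * b1) / det
print(round(beta_task, 2), round(beta_cardiac, 2))  # close to the true 2.0 and 1.5
```

In the study's terms, the down-sampled PO signal plays the role of `cardiac` here: a measured regressor that absorbs physiological variance so the task coefficient is estimated more cleanly.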

  4. Creating and analyzing pathway and protein interaction compendia for modelling signal transduction networks

    Directory of Open Access Journals (Sweden)

    Kirouac Daniel C

    2012-05-01

    Full Text Available Abstract Background Understanding the information-processing capabilities of signal transduction networks, how those networks are disrupted in disease, and rationally designing therapies to manipulate diseased states require systematic and accurate reconstruction of network topology. Data on networks central to human physiology, such as the inflammatory signalling networks analyzed here, are found in a multiplicity of on-line resources of pathway and interactome databases (Cancer CellMap, GeneGo, KEGG, NCI-Pathway Interactome Database (NCI-PID, PANTHER, Reactome, I2D, and STRING. We sought to determine whether these databases contain overlapping information and whether they can be used to construct high reliability prior knowledge networks for subsequent modeling of experimental data. Results We have assembled an ensemble network from multiple on-line sources representing a significant portion of all machine-readable and reconcilable human knowledge on proteins and protein interactions involved in inflammation. This ensemble network has many features expected of complex signalling networks assembled from high-throughput data: a power law distribution of both node degree and edge annotations, and topological features of a “bow tie” architecture in which diverse pathways converge on a highly conserved set of enzymatic cascades focused around PI3K/AKT, MAPK/ERK, JAK/STAT, NFκB, and apoptotic signaling. Individual pathways exhibit “fuzzy” modularity that is statistically significant but still involving a majority of “cross-talk” interactions. However, we find that the most widely used pathway databases are highly inconsistent with respect to the actual constituents and interactions in this network. Using a set of growth factor signalling networks as examples (epidermal growth factor, transforming growth factor-beta, tumor necrosis factor, and wingless, we find a multiplicity of network topologies in which receptors couple to downstream

  5. Development of an emissions inventory model for mobile sources

    Energy Technology Data Exchange (ETDEWEB)

    Reynolds, A W; Broderick, B M [Trinity College, Dublin (Ireland). Dept. of Civil, Structural and Environmental Engineering

    2000-07-01

    Traffic represents one of the largest sources of primary air pollutants in urban areas. As a consequence, numerous abatement strategies are being pursued to decrease the ambient concentrations of a wide range of pollutants. A common characteristic of most of these strategies is a requirement for accurate data on both the quantity and the spatial distribution of emissions to air, in the form of an atmospheric emissions inventory database. In the case of traffic pollution, such an inventory must be compiled using activity statistics and emission factors for a wide range of vehicle types. The majority of inventories are compiled using 'passive' data from either surveys or transportation models, and by their very nature tend to be out of date by the time they are compiled. Current trends are towards integrating urban traffic control systems and assessments of the environmental effects of motor vehicles. In this paper, a methodology for estimating emissions from mobile sources using real-time data is described. This methodology is used to calculate emissions of sulphur dioxide (SO2), oxides of nitrogen (NOx), carbon monoxide (CO), volatile organic compounds (VOC), particulate matter less than 10 μm aerodynamic diameter (PM10), 1,3-butadiene (C4H6) and benzene (C6H6) at a test junction in Dublin. Traffic data, which are required on a street-by-street basis, are obtained from induction loops and closed-circuit television (CCTV) as well as from statistical sources. The observed traffic data are compared to simulated data from a travel demand model. As a test case, an emissions inventory is compiled for a heavily trafficked signalized junction in an urban environment using the measured data. So that the model may be validated, the predicted emissions are employed in a dispersion model along with local meteorological conditions and site geometry, and the resultant pollutant concentrations are compared to average ambient kerbside conditions.
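The basic inventory computation implied by this methodology is a sum of vehicle count x link distance x per-vehicle emission factor over vehicle classes. A minimal sketch with made-up factors and counts (not the Dublin values):

```python
# g/km emission factors per vehicle class (illustrative, not measured values)
emission_factors = {"car": {"CO": 2.1, "NOx": 0.45},
                    "bus": {"CO": 5.8, "NOx": 6.2}}
counts = {"car": 1200, "bus": 35}        # hourly counts, e.g. from induction loops
link_km = 0.4                             # street (link) length in km

inventory = {p: sum(counts[v] * emission_factors[v][p] * link_km
                    for v in counts)
             for p in ("CO", "NOx")}
print(inventory)   # grams emitted per hour on this street, per pollutant
```

The real-time element of the methodology amounts to refreshing `counts` from the loop/CCTV feeds each interval instead of relying on survey averages.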

  6. Hierarchical Bayesian Model for Simultaneous EEG Source and Forward Model Reconstruction (SOFOMORE)

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole

    2009-01-01

    In this paper we propose an approach to handle forward model uncertainty for EEG source reconstruction. A stochastic forward model is motivated by the many uncertain contributions that form the forward propagation model including the tissue conductivity distribution, the cortical surface, and ele...

  7. Signal and noise modeling in confocal laser scanning fluorescence microscopy.

    Science.gov (United States)

    Herberich, Gerlind; Windoffer, Reinhard; Leube, Rudolf E; Aach, Til

    2012-01-01

    Fluorescence confocal laser scanning microscopy (CLSM) has revolutionized imaging of subcellular structures in biomedical research by enabling the acquisition of 3D time-series of fluorescently-tagged proteins in living cells, hence forming the basis for an automated quantification of their morphological and dynamic characteristics. Due to the inherently weak fluorescence, CLSM images exhibit a low SNR. We present a novel model for the transfer of signal and noise in CLSM that is both theoretically sound and corroborated by a rigorous analysis of the pixel intensity statistics via measurement of the 3D noise power spectra, signal dependence, and distribution. Our model provides a better fit to the data than previously proposed models. Further, it forms the basis for (i) the simulation of the CLSM imaging process, indispensable for the quantitative evaluation of CLSM image analysis algorithms, (ii) the application of Poisson denoising algorithms, and (iii) the reconstruction of the fluorescence signal.
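A defining feature of the photon-limited noise modeled here is signal dependence: for Poisson-dominated detection the pixel variance tracks the mean, unlike additive Gaussian noise. A toy check (not the paper's full CLSM noise model):

```python
import math, random

def poisson(lam, rng):
    """Knuth's Poisson sampler (adequate for small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(11)
ratios = []
for mean_photons in (5.0, 20.0):          # two made-up intensity levels
    xs = [poisson(mean_photons, rng) for _ in range(5000)]
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    ratios.append(v / m)
print([round(r, 2) for r in ratios])      # both ratios near 1: variance ~ mean
```

This mean-variance coupling is what motivates point (ii) of the abstract: denoising algorithms designed for Poisson statistics rather than for additive Gaussian noise.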

  8. Analysis and modelization of short-duration windows of seismic signals

    International Nuclear Information System (INIS)

    Berriani, B.; Lacoume, J.L.; Martin, N.; Cliet, C.; Dubesset, M.

    1987-01-01

    The spectral analysis of a seismic arrival is of great interest, but unfortunately common Fourier analysis is unserviceable on short-time windows. So, in order to obtain the spectral characteristics of the dominant components of a seismic signal on a short-time interval, the authors study parametric methods. At first, autoregressive methods are able to localize a small number of non-stationary pure frequencies, but amplitude determination is impossible with these methods. The authors therefore develop a combination of AR and Capon's methods: in Capon's method, the amplitude is conserved for a given frequency while the contribution of the other frequencies is minimized. Finally, to characterize completely the different pure-frequency dominant components of the signal, and to be able to reconstruct the signal from these elements, the authors also need the phase and the attenuation; for that, they use Prony's method, in which the signal is represented by a sum of damped sinusoids. This last method is used to model an offset VSP. It is shown that, using four frequencies and their attributes (amplitude, phase, attenuation), it is possible to model the section almost exactly. When reconstructing the signal, if one (or more) frequency is eliminated, an efficient filtering can be applied. The AR methods, and Prony's in particular, are efficient tools for signal component decomposition and information compression.
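Prony's method as described above can be demonstrated in its smallest case, a single damped sinusoid: linear-prediction coefficients are solved from four samples, and the root of the characteristic polynomial yields the damping and frequency exactly. All signal parameters below are assumed for illustration.

```python
import cmath, math

damp, omega = 0.05, 0.6                   # assumed true damping and frequency
x = [math.exp(-damp * n) * math.cos(omega * n) for n in range(4)]

# Linear prediction: x[n] = a1*x[n-1] + a2*x[n-2]; solve the 2x2 system (n = 2, 3)
det = x[1] * x[1] - x[0] * x[2]
a1 = (x[2] * x[1] - x[0] * x[3]) / det
a2 = (x[1] * x[3] - x[2] * x[2]) / det

# The root of z^2 - a1*z - a2 = 0 encodes the pole: z = exp(-damp + i*omega)
z = (a1 + cmath.sqrt(a1 * a1 + 4 * a2)) / 2
print(round(-math.log(abs(z)), 6), round(abs(cmath.phase(z)), 6))   # prints 0.05 0.6
```

With four damped sinusoids (as in the offset VSP example) the same idea becomes an 8th-order linear prediction whose polynomial roots give all four frequency/attenuation pairs, with amplitudes and phases following from a final least-squares fit.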

  9. A New Signal Model for Axion Cavity Searches from N-body Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Lentz, Erik W.; Rosenberg, Leslie J. [Physics Department, University of Washington, Seattle, WA 98195-1580 (United States); Quinn, Thomas R.; Tremmel, Michael J., E-mail: lentze@phys.washington.edu, E-mail: ljrosenberg@phys.washington.edu, E-mail: trq@astro.washington.edu, E-mail: mjt29@astro.washington.edu [Astronomy Department, University of Washington, Seattle, WA 98195-1580 (United States)

    2017-08-20

    Signal estimates for direct axion dark matter (DM) searches have used the isothermal sphere halo model for the last several decades. While insightful, the isothermal model does not capture effects from a halo’s infall history nor the influence of baryonic matter, which has been shown to significantly influence a halo’s inner structure. The high resolution of cavity axion detectors can make use of modern cosmological structure-formation simulations, which begin from realistic initial conditions, incorporate a wide range of baryonic physics, and are capable of resolving detailed structure. This work uses a state-of-the-art cosmological N-body + Smoothed-Particle Hydrodynamics simulation to develop an improved signal model for axion cavity searches. Signal shapes from a class of galaxies encompassing the Milky Way are found to depart significantly from the isothermal sphere. A new signal model for axion detectors is proposed and projected sensitivity bounds on the Axion DM eXperiment (ADMX) data are presented.

  10. The Commercial Open Source Business Model

    Science.gov (United States)

    Riehle, Dirk

    Commercial open source software projects are open source software projects that are owned by a single firm that derives a direct and significant revenue stream from the software. Commercial open source at first glance represents an economic paradox: how can a firm earn money if it is making its product available for free as open source? This paper presents the core properties of commercial open source business models and discusses how they work. Using a commercial open source approach, firms can get to market faster with a superior product at lower cost than is possible for traditional competitors. The paper shows how these benefits accrue from an engaged and self-supporting user community. Lacking any prior comprehensive reference, this paper is based on an analysis of public statements by practitioners of commercial open source. It forges the various anecdotes into a coherent description of revenue generation strategies and relevant business functions.

  11. Road Impedance Model Study under the Control of Intersection Signal

    Directory of Open Access Journals (Sweden)

    Yunlin Luo

    2015-01-01

    Full Text Available The road traffic impedance model is a difficult and critical point in urban traffic assignment and route guidance. This paper takes a signalized intersection as the research object. On the basis of traditional traffic wave theory, including the implementation of the traffic wave model and the analysis of vehicles' gathering and dissipating, the road traffic impedance model is derived by determining the basic travel time and the waiting delay time. Numerical example results show that the proposed model achieves better calculation performance than the existing model, especially in off-peak hours: the mean absolute percentage error (MAPE) and mean absolute deviation (MAD) are reduced by 3.78% and 2.62 s, respectively. This shows that the proposed model is feasible and applicable for road traffic impedance under intersection signal control.
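A concrete impedance form consistent with the description above is free-flow running time plus signal delay. The sketch below uses Webster's uniform-delay formula as an illustrative stand-in for the paper's derived delay term; all numbers are made up.

```python
def webster_uniform_delay(cycle_s, green_ratio, saturation):
    """Average uniform delay per vehicle (s) at an undersaturated signal:
    d = C*(1 - g/C)^2 / (2*(1 - (g/C)*x))  (Webster's first term)."""
    return (cycle_s * (1 - green_ratio) ** 2 /
            (2 * (1 - green_ratio * saturation)))

def link_impedance(length_m, free_speed_ms, cycle_s, green_ratio, saturation):
    """Road impedance = free-flow running time + signal waiting delay."""
    running = length_m / free_speed_ms
    return running + webster_uniform_delay(cycle_s, green_ratio, saturation)

t = link_impedance(length_m=450, free_speed_ms=12.5,
                   cycle_s=90, green_ratio=0.5, saturation=0.8)
print(round(t, 2))   # prints 54.75 (36.0 s running + 18.75 s delay)
```

In an assignment or route-guidance setting, `t` is the link cost fed to the shortest-path search, recomputed as signal timings or demand change.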

  12. Study on non-linear bistable dynamics model based EEG signal discrimination analysis method.

    Science.gov (United States)

    Ying, Xiaoguo; Lin, Han; Hui, Guohua

    2015-01-01

    Electroencephalography (EEG) is the recording of electrical activity along the scalp: EEG measures voltage fluctuations resulting from ionic current flows within the neurons of the brain. The EEG signal is regarded as one of the most important subjects of study for the next 20 years. In this paper, an EEG signal discrimination method based on a non-linear bistable dynamical model is proposed. EEG signals were processed by the non-linear bistable dynamical model, and their features were characterized by a coherence index. Experimental results showed that the proposed method could properly extract the features of different EEG signals.
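The non-linear bistable element referred to above can be illustrated with the classic double-well system dx/dt = x - x^3 + u(t): a weak input decides which stable state (near +1 or -1) the system settles into. A minimal Euler-integration sketch with illustrative parameters, not the paper's actual processing pipeline:

```python
def settle(u, x0=0.01, dt=0.01, steps=5000):
    """Integrate the double-well system dx/dt = x - x^3 + u (forward Euler)."""
    x = x0
    for _ in range(steps):
        x += dt * (x - x ** 3 + u)
    return x

print(round(settle(u=0.2), 3), round(settle(u=-0.2), 3))  # opposite wells, near +/-1.09
```

The discrimination idea is that different input signals drive such an element toward different well-switching behaviour, which can then be summarized by a coherence-type index.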

  13. Unified and Modular Modeling and Functional Verification Framework of Real-Time Image Signal Processors

    Directory of Open Access Journals (Sweden)

    Abhishek Jain

    2016-01-01

    Full Text Available In the VLSI industry, image signal processing algorithms are developed and evaluated using software models before implementation in RTL and firmware. After the finalization of the algorithm, software models are used as a golden reference for image signal processor (ISP) RTL and firmware development. In this paper, we describe a unified and modular modeling framework for image signal processing algorithms used for different purposes: ISP algorithm development, reference for hardware (HW) implementation, reference for firmware (FW) implementation, and bit-true certification. A Universal Verification Methodology (UVM) based functional verification framework for image signal processors using software reference models is described. Further, IP-XACT based tools for automatic generation of functional verification environment files and model map files are described. The proposed framework is developed both with a host interface and with a core using the virtual register interface (VRI) approach. This modeling and functional verification framework is used in real-time image signal processing applications including cellphones, smart cameras, and image compression. The main motivation behind this work is to propose an efficient, reusable, and automated framework for modeling and verification of image signal processor (ISP) designs. The proposed framework shows better results, with significant improvement observed in product verification time, verification cost, and design quality.

  14. Source locations for impulsive electric signals seen in the night ionosphere of Venus

    Science.gov (United States)

    Russell, C. T.; Von Dornum, M.; Scarf, F. L.

    1989-01-01

    A mapping of the rate of occurrence of impulsive VLF noise bursts in Venus' dark low altitude ionosphere, which increases rapidly with decreasing altitude, as a function of latitude and longitude indicates enhanced occurrence rates over Atla. In a 30-sec observing period, there are impulsive signals 70 percent of the time at 160 km in the region of maximum occurrence; the occurrence rates, moreover, increase with decreasing latitude, so that the equatorial rate is of the order of 1.6 times that at 30 deg latitude. These phenomena are in keeping with lightning-generated wave sources.

  15. Modeling and Simulation of Bus Dispatching Policy for Timed Transfers on Signalized Networks

    Science.gov (United States)

    Cho, Hsun-Jung; Lin, Guey-Shii

    2007-12-01

    The major work of this study is to formulate the system cost functions and to integrate the bus dispatching policy with signal control. The integrated model mainly includes a flow dispersion model for links, a signal control model for nodes, and a dispatching control model for transfer terminals. All of these models are inter-related for transfer operations in a one-center transit network. The integrated model, which combines dispatching policies with flexible signal control modes, can be applied to assess the effectiveness of transfer operations. It is found that, if bus arrival information is reliable, an early dispatching decision made at the mean bus arrival time is preferable. The costs for coordinated operations with slack times are relatively low at the optimal common headway when applying adaptive route control. Based on these findings, a threshold function of bus headway is developed for justifying adaptive signal route control under various time values of auto drivers.

  16. Tracking of Multiple Moving Sources Using Recursive EM Algorithm

    Directory of Open Access Journals (Sweden)

    Böhme Johann F

    2005-01-01

    Full Text Available We deal with recursive direction-of-arrival (DOA) estimation of multiple moving sources. Based on the recursive EM algorithm, we develop two recursive procedures to estimate the time-varying DOA parameter for narrowband signals. The first procedure requires no prior knowledge about the source movement. The second procedure assumes that the motion of the moving sources is described by a linear polynomial model; the proposed recursion updates the polynomial coefficients as new data arrive. The suggested approaches have two major advantages: simple implementation and easy extension to wideband signals. Numerical experiments show that both procedures provide excellent results in a slowly changing environment. When the DOA parameter changes quickly or two source directions cross each other, the procedure designed for the linear polynomial model performs better than the general procedure. Compared to the beamforming technique based on the same parameterization, our approach is computationally favorable and has a wider range of applications.
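
The idea of recursively updating the coefficients of a linear polynomial motion model as each new measurement arrives can be illustrated with exponentially weighted recursive least squares. This is not the paper's recursive EM algorithm; the sketch tracks a single scalar DOA drifting linearly in time, and the forgetting factor, noise level, and drift rate are all chosen for demonstration.

```python
import random

def rls_poly_tracker(measurements, dt=1.0, lam=0.98):
    """Recursively fit theta(t) = c0 + c1*t to noisy DOA measurements using
    exponentially weighted recursive least squares (forgetting factor lam)."""
    c = [0.0, 0.0]                       # polynomial coefficients [c0, c1]
    P = [[1e3, 0.0], [0.0, 1e3]]         # inverse information matrix
    estimates = []
    for k, z in enumerate(measurements):
        h = [1.0, k * dt]                # regressor for theta(t) = c0 + c1*t
        Ph = [P[0][0] * h[0] + P[0][1] * h[1],
              P[1][0] * h[0] + P[1][1] * h[1]]
        denom = lam + h[0] * Ph[0] + h[1] * Ph[1]
        K = [Ph[0] / denom, Ph[1] / denom]          # gain
        err = z - (c[0] * h[0] + c[1] * h[1])       # innovation
        c = [c[0] + K[0] * err, c[1] + K[1] * err]
        P = [[(P[i][j] - K[i] * Ph[j]) / lam for j in range(2)]
             for i in range(2)]
        estimates.append(c[0] + c[1] * h[1])
    return estimates, c

random.seed(0)
true = [10.0 + 0.5 * k for k in range(200)]   # DOA drifting 0.5 deg per step
noisy = [d + random.gauss(0.0, 1.0) for d in true]
est, coeffs = rls_poly_tracker(noisy)
```

The forgetting factor trades tracking speed against noise rejection, the same tension the paper addresses for fast-changing or crossing sources.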

  17. Dynamic Bayesian Network Modeling of the Interplay between EGFR and Hedgehog Signaling.

    Science.gov (United States)

    Fröhlich, Holger; Bahamondez, Gloria; Götschel, Frank; Korf, Ulrike

    2015-01-01

    Aberrant activation of sonic Hedgehog (SHH) signaling has been found to disrupt cellular differentiation and to increase proliferation in many human cancers. The SHH pathway is known to cross-talk with EGFR-dependent signaling. Recent studies experimentally addressed this interplay in Daoy cells, which are presumably a model system for medulloblastoma, a highly malignant brain tumor that predominately occurs in children. Several clinical trials for different solid cancers are currently ongoing, designed to validate the clinical benefits of targeting SHH in combination with other pathways. This has motivated us to investigate interactions between EGFR and SHH dependent signaling in greater depth. To our knowledge, there is so far no mathematical model describing the interplay between EGFR and SHH dependent signaling in medulloblastoma. Here we present a fully probabilistic approach using Dynamic Bayesian Networks (DBNs). To build our model, we made use of literature-based knowledge describing SHH and EGFR signaling and integrated gene expression (Illumina) and cellular-location-dependent time series protein expression data (Reverse Phase Protein Arrays). We validated our model by sub-sampling training data and making Bayesian predictions on the left-out test data. Our predictions, focusing on key transcription factors and p70S6K, showed a high level of concordance with experimental data. Furthermore, the stability of our model was tested by a parametric bootstrap approach; stable network features were in agreement with published data. Altogether, we believe that our model improves the understanding of the interplay between two highly oncogenic signaling pathways in Daoy cells. This may open new perspectives for the future therapy of Hedgehog/EGF-dependent solid tumors.

  18. Model-Based Least Squares Reconstruction of Coded Source Neutron Radiographs: Integrating the ORNL HFIR CG1D Source Model

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J [ORNL; Gregor, Jens [University of Tennessee, Knoxville (UTK); Bingham, Philip R [ORNL

    2014-01-01

    At present, neutron sources cannot be fabricated small and powerful enough to achieve high-resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a magnified Coded Source Imaging (CSI) system. A coded mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps around 50 µm. To overcome this challenge, the coded mask and object are magnified by making the distance from the coded mask to the object much smaller than the distance from the object to the detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because it models the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate that model in our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.
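
The model-based least-squares idea (build a forward model A x = b from the system geometry, then invert it iteratively) can be sketched with a toy 1-D coded-source system and a Landweber iteration. This is an illustration only; the mask pattern, dimensions, and solver are hypothetical stand-ins for the actual CSI forward model and source characterization.

```python
def landweber(A, b, iters=2000):
    """Iterative least-squares solve of A x = b (Landweber / gradient
    descent), standing in for the model-based reconstruction step."""
    m, n = len(A), len(A[0])
    step = 1.0 / sum(a * a for row in A for a in row)  # safe step: 1/||A||_F^2
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [x[j] - step * g[j] for j in range(n)]
    return x

# Toy 1-D coded-source system (hypothetical): each detector pixel sees the
# object through a cyclically shifted mask pattern.
mask = [1, 0, 1, 1, 0]
obj = [0.0, 2.0, 1.0, 0.0, 3.0]
A = [[mask[(i - j) % 5] for j in range(5)] for i in range(8)]
b = [sum(A[i][j] * obj[j] for j in range(5)) for i in range(8)]
rec = landweber(A, b)
```

Refining the source model, as the paper proposes, amounts to making the entries of A more faithful to the physical system before this inversion.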

  19. Unpowered wireless transmission of ultrasound signals

    International Nuclear Information System (INIS)

    Huang, H; Paramo, D; Deshmukh, S

    2011-01-01

    This paper presents a wireless ultrasound sensing system that uses frequency conversion to convert the ultrasound signal to a microwave signal and transmit it directly without digitization. Constructed from a few passive microwave components, the sensor is able to sense, modulate, and transmit the full waveform of ultrasound signals wirelessly without requiring any local power source. The principle of operation of the unpowered wireless ultrasound sensor is described first, followed by a detailed description of the implementation of the sensor and the sensor interrogation unit using commercially available antennas and microwave components. Validation of the sensing system using an ultrasound pitch–catch setup and the power analysis model of the system are also presented.

  20. Mathematical modeling of gonadotropin-releasing hormone signaling.

    Science.gov (United States)

    Pratap, Amitesh; Garner, Kathryn L; Voliotis, Margaritis; Tsaneva-Atanasova, Krasimira; McArdle, Craig A

    2017-07-05

    Gonadotropin-releasing hormone (GnRH) acts via G-protein coupled receptors on pituitary gonadotropes to control reproduction. These are Gq-coupled receptors that mediate acute effects of GnRH on the exocytotic secretion of luteinizing hormone (LH) and follicle-stimulating hormone (FSH), as well as the chronic regulation of their synthesis. GnRH is secreted in short pulses, and GnRH effects on its target cells depend upon the dynamics of these pulses. Here we overview GnRH receptors and their signaling network, placing emphasis on pulsatile signaling, and on how mechanistic mathematical models and an information-theoretic approach have helped further this field. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  1. An agent-based model of signal transduction in bacterial chemotaxis.

    Directory of Open Access Journals (Sweden)

    Jameson Miller

    2010-05-01

    Full Text Available We report the application of agent-based modeling to examine the signal transduction network and receptor arrays for chemotaxis in Escherichia coli, which are responsible for regulating swimming behavior in response to environmental stimuli. Agent-based modeling is a stochastic and bottom-up approach, where individual components of the modeled system are explicitly represented, and bulk properties emerge from their movement and interactions. We present the Chemoscape model: a collection of agents representing both fixed membrane-embedded and mobile cytoplasmic proteins, each governed by a set of rules representing knowledge or hypotheses about their function. When the agents were placed in a simulated cellular space and then allowed to move and interact stochastically, the model exhibited many properties similar to the biological system including adaptation, high signal gain, and wide dynamic range. We found the agent based modeling approach to be both powerful and intuitive for testing hypotheses about biological properties such as self-assembly, the non-linear dynamics that occur through cooperative protein interactions, and non-uniform distributions of proteins in the cell. We applied the model to explore the role of receptor type, geometry and cooperativity in the signal gain and dynamic range of the chemotactic response to environmental stimuli. The model provided substantial qualitative evidence that the dynamic range of chemotactic response can be traced to both the heterogeneity of receptor types present, and the modulation of their cooperativity by their methylation state.

  2. An agent-based model of signal transduction in bacterial chemotaxis.

    Science.gov (United States)

    Miller, Jameson; Parker, Miles; Bourret, Robert B; Giddings, Morgan C

    2010-05-13

    We report the application of agent-based modeling to examine the signal transduction network and receptor arrays for chemotaxis in Escherichia coli, which are responsible for regulating swimming behavior in response to environmental stimuli. Agent-based modeling is a stochastic and bottom-up approach, where individual components of the modeled system are explicitly represented, and bulk properties emerge from their movement and interactions. We present the Chemoscape model: a collection of agents representing both fixed membrane-embedded and mobile cytoplasmic proteins, each governed by a set of rules representing knowledge or hypotheses about their function. When the agents were placed in a simulated cellular space and then allowed to move and interact stochastically, the model exhibited many properties similar to the biological system including adaptation, high signal gain, and wide dynamic range. We found the agent based modeling approach to be both powerful and intuitive for testing hypotheses about biological properties such as self-assembly, the non-linear dynamics that occur through cooperative protein interactions, and non-uniform distributions of proteins in the cell. We applied the model to explore the role of receptor type, geometry and cooperativity in the signal gain and dynamic range of the chemotactic response to environmental stimuli. The model provided substantial qualitative evidence that the dynamic range of chemotactic response can be traced to both the heterogeneity of receptor types present, and the modulation of their cooperativity by their methylation state.
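
The bottom-up character of agent-based modeling, in which individual agents follow simple stochastic rules and bulk behavior emerges from their interactions, can be illustrated with a minimal run-and-tumble chemotaxis sketch. This is not the Chemoscape model itself; the tumble probabilities, step size, and linear attractant gradient are arbitrary choices for demonstration.

```python
import random

def run_and_tumble(steps=2000, n_agents=200, seed=1):
    """Minimal run-and-tumble agents on a 1-D attractant gradient: an agent
    tends to keep running while the local concentration improves and tumbles
    (re-randomizes direction) more often when it worsens. All rates are
    illustrative, not taken from the Chemoscape model."""
    rng = random.Random(seed)
    pos = [0.0] * n_agents
    vel = [rng.choice((-1.0, 1.0)) for _ in range(n_agents)]
    last = [0.0] * n_agents                 # last sensed concentration c(x) = x
    for _ in range(steps):
        for i in range(n_agents):
            pos[i] += 0.1 * vel[i]
            c = pos[i]                      # linear attractant gradient
            p_tumble = 0.1 if c > last[i] else 0.6
            if rng.random() < p_tumble:
                vel[i] = rng.choice((-1.0, 1.0))
            last[i] = c
    return sum(pos) / n_agents              # mean drift up the gradient

mean_drift = run_and_tumble()
```

No agent computes a gradient explicitly, yet the population drifts up the attractant profile: an emergent bulk property of the kind the Chemoscape study exploits at far greater biochemical detail.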

  3. Sensitivity of fluvial sediment source apportionment to mixing model assumptions: A Bayesian model comparison.

    Science.gov (United States)

    Cooper, Richard J; Krueger, Tobias; Hiscock, Kevin M; Rawlins, Barry G

    2014-11-01

    Mixing models have become increasingly common tools for apportioning fluvial sediment load to various sediment sources across catchments using a wide variety of Bayesian and frequentist modeling approaches. In this study, we demonstrate how different model setups can impact upon resulting source apportionment estimates in a Bayesian framework via a one-factor-at-a-time (OFAT) sensitivity analysis. We formulate 13 versions of a mixing model, each with different error assumptions and model structural choices, and apply them to sediment geochemistry data from the River Blackwater, Norfolk, UK, to apportion suspended particulate matter (SPM) contributions from three sources (arable topsoils, road verges, and subsurface material) under base flow conditions between August 2012 and August 2013. Whilst all 13 models estimate subsurface sources to be the largest contributor of SPM (median ∼76%), comparison of apportionment estimates reveals varying degrees of sensitivity to changing priors, inclusion of covariance terms, incorporation of time-variant distributions, and methods of proportion characterization. We also demonstrate differences in apportionment results between a full and an empirical Bayesian setup, and between a Bayesian and a frequentist optimization approach. This OFAT sensitivity analysis reveals that mixing model structural choices and error assumptions can significantly impact upon sediment source apportionment results, with estimated median contributions in this study varying by up to 21% between model versions. Users of mixing models are therefore strongly advised to carefully consider and justify their choice of model structure prior to conducting sediment source apportionment investigations. Key points: an OFAT sensitivity analysis of sediment fingerprinting mixing models is conducted; Bayesian models display high sensitivity to error assumptions and structural choices; source apportionment results differ between Bayesian and frequentist approaches.
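
A minimal illustration of Bayesian sediment un-mixing: sample the source proportions on the simplex by Metropolis-Hastings under a Gaussian likelihood and a flat prior. The tracer signatures, noise level, and proposal scale below are hypothetical, and the renormalized proposal makes this only an approximate sampler, far simpler than any of the 13 model variants compared in the study.

```python
import math, random

def mix_model_mh(sources, target, sigma=0.1, n_iter=20000, seed=2):
    """Toy Bayesian un-mixing: Metropolis-Hastings over source proportions on
    the simplex, Gaussian likelihood, flat (Dirichlet(1,...,1)) prior. The
    renormalized proposal is not exactly symmetric, so this is only an
    approximate sampler for illustration."""
    rng = random.Random(seed)
    k, m = len(sources), len(target)

    def log_lik(p):
        sse = sum((sum(p[i] * sources[i][j] for i in range(k)) - target[j]) ** 2
                  for j in range(m))
        return -sse / (2.0 * sigma ** 2)

    p, ll = [1.0 / k] * k, log_lik([1.0 / k] * k)
    kept = []
    for it in range(n_iter):
        q = [max(1e-6, pi + rng.gauss(0.0, 0.05)) for pi in p]
        s = sum(q)
        q = [qi / s for qi in q]
        ll_q = log_lik(q)
        if rng.random() < math.exp(min(0.0, ll_q - ll)):
            p, ll = q, ll_q
        if it >= n_iter // 2:                 # keep the second half as samples
            kept.append(p)
    return [sum(s[i] for s in kept) / len(kept) for i in range(k)]

# Hypothetical tracer signatures (rows: sources; columns: two tracers)
sources = [[10.0, 2.0], [4.0, 8.0], [1.0, 1.0]]
true_p = [0.2, 0.5, 0.3]
target = [sum(true_p[i] * sources[i][j] for i in range(3)) for j in range(2)]
post_mean = mix_model_mh(sources, target)
```

Every structural choice hard-coded here (the error model, the flat prior, the fixed sigma) corresponds to one of the assumptions the study varies in its sensitivity analysis.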

  4. Model for neural signaling leap statistics

    International Nuclear Information System (INIS)

    Chevrollier, Martine; Oria, Marcos

    2011-01-01

    We present a simple model for neural signaling leaps in the brain considering only the thermodynamic (Nernst) potential in neuron cells and brain temperature. We numerically simulated connections between arbitrarily localized neurons and analyzed the frequency distribution of the distances reached. We observed a qualitative change between Normal statistics (T = 37.5 °C, awake regime) and Lévy statistics (T = 35.5 °C, sleeping period), the latter characterized by rare events of long-range connections.

  5. Model for neural signaling leap statistics

    Science.gov (United States)

    Chevrollier, Martine; Oriá, Marcos

    2011-03-01

    We present a simple model for neural signaling leaps in the brain considering only the thermodynamic (Nernst) potential in neuron cells and brain temperature. We numerically simulated connections between arbitrarily localized neurons and analyzed the frequency distribution of the distances reached. We observed a qualitative change between Normal statistics (T = 37.5°C, awake regime) and Lévy statistics (T = 35.5°C, sleeping period), the latter characterized by rare events of long-range connections.
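
The qualitative difference between Normal and Lévy leap statistics, namely the rare long-range events, can be illustrated by comparing tail probabilities of a Gaussian and a heavy-tailed distribution. The distributions and threshold below are illustrative stand-ins, not the paper's temperature-dependent model.

```python
import random

def tail_fraction(samples, threshold):
    """Fraction of leap lengths exceeding `threshold` (rare long-range events)."""
    return sum(1 for s in samples if abs(s) > threshold) / len(samples)

rng = random.Random(3)
# Illustrative stand-ins: Gaussian leaps vs heavy-tailed (Pareto, alpha = 1.5)
# Levy-like leaps of comparable typical scale; not the paper's Nernst model.
normal_leaps = [rng.gauss(0.0, 1.0) for _ in range(100000)]
levy_leaps = [rng.paretovariate(1.5) for _ in range(100000)]
f_normal = tail_fraction(normal_leaps, 8.0)
f_levy = tail_fraction(levy_leaps, 8.0)
```

Under Gaussian statistics an 8-sigma leap essentially never occurs, while the heavy-tailed distribution produces such long-range events at a non-negligible rate, which is the signature distinguishing the two regimes in the paper.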

  6. Induced mitochondrial membrane potential for modeling solitonic conduction of electrotonic signals.

    Directory of Open Access Journals (Sweden)

    R R Poznanski

    Full Text Available A cable model that includes polarization-induced capacitive current is derived for modeling the solitonic conduction of electrotonic potentials in neuronal branchlets with microstructure containing endoplasmic membranes. A solution of the nonlinear cable equation, modified for a fissured intracellular medium with a source term representing charge 'soakage', is used to show how intracellular capacitive effects of bound electrical charges within mitochondrial membranes can influence electrotonic signals expressed as solitary waves. The elastic collision resulting from a head-on collision of two solitary waves results in localized and non-dispersing electrical solitons created by the nonlinearity of the source term. It is shown that solitons in neurons with mitochondrial membrane and quasi-electrostatic interactions of charges held by the microstructure (i.e., charge 'soakage') have a slower velocity of propagation than solitons in neurons with microstructure but without endoplasmic membranes. When the equilibrium potential is a small deviation from rest, the nonohmic conductance acts as a leaky channel and the solitons are small, compared with when the equilibrium potential is large and the outer mitochondrial membrane acts as an amplifier, boosting the amplitude of the endogenously generated solitons. These findings demonstrate a functional role of quasi-electrostatic interactions of bound electrical charges held by microstructure in sustaining solitons with robust self-regulation of their amplitude through changes in the mitochondrial membrane equilibrium potential. Our results indicate that a phenomenological description of ionic current can be successfully modeled with displacement current in Maxwell's equations as a conduction process involving quasi-electrostatic interactions, without the inclusion of diffusive current.
This is the first study in which solitonic conduction of electrotonic potentials are generated by

  7. Disentangling the Complexity of HGF Signaling by Combining Qualitative and Quantitative Modeling.

    Directory of Open Access Journals (Sweden)

    Lorenza A D'Alessandro

    2015-04-01

    Full Text Available Signaling pathways are characterized by crosstalk, feedback and feedforward mechanisms giving rise to highly complex and cell-context-specific signaling networks. Dissecting the underlying relations is crucial to predict the impact of targeted perturbations. However, a major challenge in identifying cell-context-specific signaling networks is the enormous number of possible interactions. Here, we report a novel hybrid mathematical modeling strategy to systematically unravel hepatocyte growth factor (HGF) stimulated phosphoinositide-3-kinase (PI3K) and mitogen activated protein kinase (MAPK) signaling, which critically contribute to liver regeneration. By combining time-resolved quantitative experimental data generated in primary mouse hepatocytes with interaction graph and ordinary differential equation modeling, we identify and experimentally validate a network structure that represents the experimental data best and indicates specific crosstalk mechanisms. Whereas the identified network is robust against single perturbations, combinatorial inhibition strategies are predicted that result in strong reduction of Akt and ERK activation. Thus, by capitalizing on the advantages of the two modeling approaches, we reduce the high combinatorial complexity and identify cell-context-specific signaling networks.

  8. Modeling, estimation and optimal filtration in signal processing

    CERN Document Server

    Najim, Mohamed

    2010-01-01

    The purpose of this book is to provide graduate students and practitioners with traditional methods and more recent results for model-based approaches in signal processing. Firstly, discrete-time linear models such as AR, MA and ARMA models, their properties and their limitations are introduced; in addition, sinusoidal models are addressed. Secondly, estimation approaches based on least squares methods and instrumental variable techniques are presented. Finally, the book deals with optimal filters, i.e. Wiener and Kalman filtering, and adaptive filters such as the RLS, the LMS and the

  9. Wideband Small-Signal Input dq Admittance Modeling of Six-Pulse Diode Rectifiers

    DEFF Research Database (Denmark)

    Yue, Xiaolong; Wang, Xiongfei; Blaabjerg, Frede

    2018-01-01

    This paper studies the wideband small-signal input dq admittance of six-pulse diode rectifiers. Considering the frequency coupling introduced by ripple-frequency harmonics of the d- and q-channel switching functions, the proposed model successfully predicts the small-signal input dq admittance of six-pulse diode rectifiers in high-frequency regions that existing models fail to explain. Simulation and experimental results verify the accuracy of the proposed model.

  10. Si/SiC-based DD hetero-structure IMPATTs as MM-wave power-source: a generalized large-signal analysis

    International Nuclear Information System (INIS)

    Mukherjee, Moumita; Tripathy, P. R.; Pati, S. P.

    2015-01-01

    A full-scale, self-consistent, non-linear, large-signal model of a double-drift hetero-structure IMPATT diode with a general doping profile is derived. This newly developed model has been used, for the first time, to analyze the large-signal characteristics of a hexagonal SiC-based double-drift IMPATT diode. Considering fabrication feasibility, the authors have studied the large-signal characteristics of Si/SiC-based hetero-structure devices. Under small voltage modulation (∼2%, i.e. small-signal conditions), results are in good agreement with calculations done using a linearised small-signal model. The large-signal values of the diode's negative conductance (5 × 10⁶ S/m²), susceptance (10.4 × 10⁷ S/m²), average breakdown voltage (207.6 V), and power generating efficiency (15%, RF power: 25.0 W at 94 GHz) are obtained as a function of oscillation amplitude (50% of the DC breakdown voltage) for a fixed average current density. The large-signal calculations exhibit power and efficiency saturation for large (> 50%) voltage modulation, and thereafter a gradual decrease with further increasing voltage modulation. This generalized large-signal formulation is applicable to all types of IMPATT structures with distributed and narrow avalanche zones. The simulator is made more realistic by incorporating space-charge effects and realistic field- and temperature-dependent material parameters in Si and SiC. The electric field snapshots and the large-signal impedance and admittance of the diode under current excitation are expressed in closed-loop form. This study will act as a guide for researchers to fabricate a high-power Si/SiC-based IMPATT for possible application in high-power MM-wave communication systems. (paper)

  11. Predictive model identifies key network regulators of cardiomyocyte mechano-signaling.

    Directory of Open Access Journals (Sweden)

    Philip M Tan

    2017-11-01

    Full Text Available Mechanical strain is a potent stimulus for growth and remodeling in cells. Although many pathways have been implicated in stretch-induced remodeling, the control structures by which signals from distinct mechano-sensors are integrated to modulate hypertrophy and gene expression in cardiomyocytes remain unclear. Here, we constructed and validated a predictive computational model of the cardiac mechano-signaling network in order to elucidate the mechanisms underlying signal integration. The model identifies calcium, actin, Ras, Raf1, PI3K, and JAK as key regulators of cardiac mechano-signaling and characterizes crosstalk logic imparting differential control of transcription by AT1R, integrins, and calcium channels. We find that while these regulators maintain mostly independent control over distinct groups of transcription factors, synergy between multiple pathways is necessary to activate all the transcription factors necessary for gene transcription and hypertrophy. We also identify a PKG-dependent mechanism by which valsartan/sacubitril, a combination drug recently approved for treating heart failure, inhibits stretch-induced hypertrophy, and predict further efficacious pairs of drug targets in the network through a network-wide combinatorial search.

  12. MATHEMATICAL MODELING OF THE INPUT DEVICES IN AUTOMATIC LOCOMOTIVE SIGNALING SYSTEM

    Directory of Open Access Journals (Sweden)

    O. O. Gololobova

    2014-03-01

    Full Text Available Purpose. To examine the operation of the automatic locomotive signaling system (ALS), to determine the influence of external factors on device operation and on the quality of the coded information derived from the track circuit, and to enable modeling of failures that may appear during operation. Methodology. To achieve this purpose, the main obstacles to ALS operation and the reasons for their occurrence were considered, and the structural principle of the system was researched. A mathematical model for the input equipment of the continuous automatic locomotive signaling system with number coding (ALSN) was developed. It takes into account all types of code signals ("R", "Y", "RY") and an equivalent circuit of the 50 Hz filter. Findings. The operation of ALSN with a signal current frequency of 50 Hz was examined, and an adequate mathematical model of the ALS input equipment at 50 Hz was developed. Originality. A computer model of the ALS input equipment was developed in the MATLAB+Simulink environment. The filter output obtained from the computer model for each type of code combination is given in the article. Practical value. With the developed mathematical model of ALS operation, it is possible to study and determine the behavior of the circuit in normal operation and under failures. It is also possible to develop and evaluate different circuit solutions in the MATLAB+Simulink modeling environment to reduce the influence of obstacles on the functional capability of ALS, and to model possible failure modes.

  13. Earthquake Source Spectral Study beyond the Omega-Square Model

    Science.gov (United States)

    Uchide, T.; Imanishi, K.

    2017-12-01

    Earthquake source spectra have been used to characterize earthquake source processes quantitatively and, at the same time, simply, so that we can analyze the source spectra of many earthquakes, especially small earthquakes, at once and compare them with each other. A standard model for the source spectra is the omega-square model, which has a flat spectrum at low frequencies and a falloff inversely proportional to the square of frequency at high frequencies, the two regimes bordered by a corner frequency. The corner frequency has often been converted to the stress drop under the assumption of circular crack models. However, recent studies claimed the existence of another corner frequency [Denolle and Shearer, 2016; Uchide and Imanishi, 2016], thanks to the recent development of seismic networks. We have found that many earthquakes in areas other than the area studied by Uchide and Imanishi [2016] also have source spectra deviating from the omega-square model. Another part of the earthquake spectra we now focus on is the falloff rate at high frequencies, which will affect seismic energy estimation [e.g., Hirano and Yagi, 2017]. In June 2016, we deployed seven velocity seismometers in the northern Ibaraki prefecture, where shallow crustal seismicity, mainly normal-faulting events, was activated by the 2011 Tohoku-oki earthquake. We have recorded seismograms at 1000 samples per second and at short distances from the sources, so that we can investigate the high-frequency components of the earthquake source spectra. Although we are still in the stage of discovering and confirming the deviation from the standard omega-square model, updating the earthquake source spectrum model will help us systematically extract more information on the earthquake source process.
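
The omega-square model itself is simple enough to state and fit directly: S(f) = Ω₀ / (1 + (f/fc)ⁿ) with n = 2, flat below the corner frequency fc and falling as f⁻² above it. The grid-search fit below is an illustrative sketch on a synthetic noise-free spectrum, not the analysis procedure of the study.

```python
def omega_square(f, omega0, fc, n=2.0):
    """Omega-square source spectrum: flat level omega0 below the corner
    frequency fc, falling off as f**-n above it."""
    return omega0 / (1.0 + (f / fc) ** n)

def fit_corner(freqs, spec):
    """Grid-search least-squares fit of (omega0, fc) for the n = 2 model."""
    best = None
    for i in range(100):
        fc = 0.5 + 0.1 * i
        shape = [omega_square(f, 1.0, fc) for f in freqs]
        # closed-form least-squares amplitude for this trial corner frequency
        omega0 = (sum(s * h for s, h in zip(spec, shape))
                  / sum(h * h for h in shape))
        err = sum((s - omega0 * h) ** 2 for s, h in zip(spec, shape))
        if best is None or err < best[0]:
            best = (err, omega0, fc)
    return best[1], best[2]

# Synthetic spectrum with flat level 5 and corner frequency 3 Hz
freqs = [0.1 * k for k in range(1, 400)]
spec = [omega_square(f, 5.0, 3.0) for f in freqs]
omega0_hat, fc_hat = fit_corner(freqs, spec)
```

Testing for a second corner frequency or a falloff rate other than n = 2, as the study proposes, amounts to enlarging this parameterization and checking whether the fit improves significantly.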

  14. Robust, open-source removal of systematics in Kepler data

    Science.gov (United States)

    Aigrain, S.; Parviainen, H.; Roberts, S.; Reece, S.; Evans, T.

    2017-10-01

    We present ARC2 (Astrophysically Robust Correction 2), an open-source Python-based systematics-correction pipeline for Kepler prime-mission long-cadence light curves. The ARC2 pipeline identifies and corrects any isolated discontinuities in the light curves and then removes trends common to many light curves. These trends are modelled using the publicly available co-trending basis vectors, within an (approximate) Bayesian framework with 'shrinkage' priors to minimize the risk of overfitting and of injecting additional noise into the corrected light curves, while keeping any astrophysical signals intact. We show that the ARC2 pipeline's performance matches that of the standard Kepler PDC-MAP data products using standard noise metrics, and demonstrate its ability to preserve astrophysical signals using injection tests with simulated stellar rotation and planetary transit signals. Although it is not identical, the ARC2 pipeline can thus be used as an open-source alternative to PDC-MAP whenever the ability to model the impact of the systematics-removal process on other kinds of signal is important.
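
The core of such a correction, regressing each light curve on co-trending basis vectors with a shrinkage penalty so that common-mode systematics are removed while compact astrophysical signals such as a transit survive, can be sketched as a ridge regression. This is an illustration with made-up basis vectors and an injected transit, not the ARC2 pipeline's actual Bayesian machinery.

```python
import math

def gauss_solve(A, b):
    """Solve the small linear system A w = b by Gauss-Jordan elimination."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [M[r][c] - f * M[col][c] for c in range(n + 1)]
    return [M[i][n] / M[i][i] for i in range(n)]

def ridge_detrend(flux, basis, lam=1e-3):
    """Remove systematics as a shrinkage-regularized (ridge) combination of
    co-trending basis vectors; returns the mean-subtracted corrected flux."""
    n, k = len(flux), len(basis)
    fm = sum(flux) / n
    y = [f - fm for f in flux]
    B = [[v - sum(bv) / n for v in bv] for bv in basis]   # centered basis
    BtB = [[sum(B[a][t] * B[c][t] for t in range(n)) + (lam if a == c else 0.0)
            for c in range(k)] for a in range(k)]
    Bty = [sum(B[a][t] * y[t] for t in range(n)) for a in range(k)]
    w = gauss_solve(BtB, Bty)
    return [y[t] - sum(w[a] * B[a][t] for a in range(k)) for t in range(n)]

# Made-up example: two basis vectors (a sinusoid and a ramp) plus a 1% transit.
n = 500
cbv1 = [math.sin(2 * math.pi * t / 250) for t in range(n)]
cbv2 = [t / n for t in range(n)]
flux = [1.0 + 0.05 * cbv1[t] + 0.02 * cbv2[t]
        + (-0.01 if 200 <= t < 220 else 0.0) for t in range(n)]
corrected = ridge_detrend(flux, [cbv1, cbv2])
```

The penalty `lam` plays the role of the shrinkage prior: it keeps the basis-vector coefficients small so the fit cannot absorb the short transit dip along with the trends.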

  15. Constraint-based modeling and kinetic analysis of the Smad dependent TGF-beta signaling pathway.

    Directory of Open Access Journals (Sweden)

    Zhike Zi

    Full Text Available BACKGROUND: Investigation of the dynamics and regulation of the TGF-beta signaling pathway is central to the understanding of complex cellular processes such as growth, apoptosis, and differentiation. In this study, we aim to use a systems biology approach to provide a dynamic analysis of this pathway. METHODOLOGY/PRINCIPAL FINDINGS: We propose a constraint-based modeling method to build a comprehensive mathematical model for the Smad-dependent TGF-beta signaling pathway by fitting the experimental data and incorporating qualitative constraints from the experimental analysis. The performance of the model generated by the constraint-based modeling method is significantly improved compared to the model obtained by only fitting the quantitative data. The model agrees well with the experimental analysis of the TGF-beta pathway, such as the time course of nuclear phosphorylated Smad, the subcellular location of Smad, and the signal response of Smad phosphorylation to different doses of TGF-beta. CONCLUSIONS/SIGNIFICANCE: The simulation results indicate that the signal response to TGF-beta is regulated by the balance between clathrin-dependent endocytosis and non-clathrin-mediated endocytosis. This model is a useful foundation to build upon as new, precise experimental data emerge. The constraint-based modeling method can also be applied to quantitative modeling of other signaling pathways.

  16. Model for neural signaling leap statistics

    Energy Technology Data Exchange (ETDEWEB)

    Chevrollier, Martine; Oria, Marcos, E-mail: oria@otica.ufpb.br [Laboratorio de Fisica Atomica e Lasers Departamento de Fisica, Universidade Federal da ParaIba Caixa Postal 5086 58051-900 Joao Pessoa, Paraiba (Brazil)

    2011-03-01

    We present a simple model for neural signaling leaps in the brain considering only the thermodynamic (Nernst) potential in neuron cells and brain temperature. We numerically simulated connections between arbitrarily localized neurons and analyzed the frequency distribution of the distances reached. We observed a qualitative change between Normal statistics (T = 37.5 °C, awake regime) and Lévy statistics (T = 35.5 °C, sleeping period), the latter characterized by rare events of long-range connections.

  17. Model predictive control for Z-source power converter

    DEFF Research Database (Denmark)

    Mo, W.; Loh, P.C.; Blaabjerg, Frede

    2011-01-01

    This paper presents Model Predictive Control (MPC) of an impedance-source (commonly known as Z-source) power converter. Output voltage control and current control for the Z-source inverter are analyzed and simulated. With MPC's ability to regulate multiple system variables, load current and voltage...

  18. An empirical Bayesian approach for model-based inference of cellular signaling networks

    Directory of Open Access Journals (Sweden)

    Klinke David J

    2009-11-01

    Background: A common challenge in systems biology is to infer mechanistic descriptions of a biological process given limited observations of the biological system. Mathematical models are frequently used to represent a belief about the causal relationships among proteins within a signaling network. Bayesian methods provide an attractive framework for inferring the validity of those beliefs in the context of the available data. However, efficient sampling of high-dimensional parameter space and appropriate convergence criteria provide barriers to implementing an empirical Bayesian approach. The objective of this study was to apply an adaptive Markov chain Monte Carlo technique to a typical study of cellular signaling pathways. Results: As an illustrative example, a kinetic model for the early signaling events associated with the epidermal growth factor (EGF) signaling network was calibrated against dynamic measurements observed in primary rat hepatocytes. A convergence criterion, based upon the Gelman-Rubin potential scale reduction factor, was applied to the model predictions. The posterior distributions of the parameters exhibited complicated structure, including significant covariance between specific parameters and a broad range of variance among the parameters. The model predictions, in contrast, were narrowly distributed and were used to identify areas of agreement among a collection of experimental studies. Conclusion: In summary, an empirical Bayesian approach was developed for inferring the confidence that one can place in a particular model that describes signal transduction mechanisms and for inferring inconsistencies in experimental measurements.
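
    The Gelman-Rubin criterion named in the abstract can be sketched in a few lines; the chains below are synthetic draws, not the paper's posterior samples, and the implementation is a minimal textbook version of the potential scale reduction factor:

```python
# Minimal sketch of the Gelman-Rubin potential scale reduction factor
# (PSRF); values near 1 indicate that parallel MCMC chains have converged.
import math
import random
from statistics import mean, variance

def gelman_rubin(chains):
    """PSRF for a list of equally long MCMC chains."""
    m = len(chains)
    n = len(chains[0])
    chain_means = [mean(c) for c in chains]
    grand_mean = mean(chain_means)
    # Between-chain variance B and mean within-chain variance W
    b = n * sum((cm - grand_mean) ** 2 for cm in chain_means) / (m - 1)
    w = mean(variance(c) for c in chains)
    # Pooled estimate of the posterior variance
    var_hat = (n - 1) / n * w + b / n
    return math.sqrt(var_hat / w)

random.seed(0)
# Four well-mixed chains sampling the same distribution -> PSRF near 1
chains = [[random.gauss(0.0, 1.0) for _ in range(5000)] for _ in range(4)]
print(round(gelman_rubin(chains), 2))
```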

  19. The Chandra Source Catalog: Spectral Properties

    Science.gov (United States)

    Doe, Stephen; Siemiginowska, Aneta L.; Refsdal, Brian L.; Evans, Ian N.; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Davis, John E.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G., II; Glotfelty, Kenny J.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; He, Xiang Qun (Helen); Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph B.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Plummer, David A.; Primini, Francis A.; Rots, Arnold H.; Sundheim, Beth A.; Tibbetts, Michael S.; van Stone, David W.; Winkelman, Sherry L.; Zografou, Panagoula

    2009-09-01

    The first release of the Chandra Source Catalog (CSC) contains all sources identified from eight years' worth of publicly accessible observations. The vast majority of these sources have been observed with the ACIS detector and have spectral information in the 0.5-7 keV energy range. Here we describe the methods used to automatically derive spectral properties for each source detected by the standard processing pipeline and included in the final CSC. Hardness ratios were calculated for each source between pairs of energy bands (soft, medium, and hard) using a Bayesian algorithm (BEHR; Park et al. 2006). Sources with a high signal-to-noise ratio (exceeding 150 net counts) were fit in Sherpa (the modeling and fitting application of the Chandra Interactive Analysis of Observations package, developed by the Chandra X-ray Center; see Freeman et al. 2001). Two models were fit to each source: an absorbed power law and blackbody emission. The fitted parameter values for the power-law and blackbody models were included in the catalog along with the calculated flux for each model. The CSC also provides the source energy flux computed from the normalizations of predefined power-law and blackbody models needed to match the observed net X-ray counts. In addition, we provide access to data products for each source: a file with the source spectrum, the background spectrum, and the spectral response of the detector. This work is supported by NASA contract NAS8-03060 (CXC).
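
    As a rough illustration of the hardness-ratio quantity (the catalog itself uses the Bayesian BEHR estimator rather than this simple counts-based form):

```python
# Illustrative, non-Bayesian hardness ratio between two energy bands.
def hardness_ratio(hard_counts, soft_counts):
    """HR in [-1, 1]: positive means the source is spectrally hard."""
    return (hard_counts - soft_counts) / (hard_counts + soft_counts)

print(hardness_ratio(300, 100))  # 0.5: more hard-band than soft-band counts
```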

  20. Open source data assimilation framework for hydrological modeling

    Science.gov (United States)

    Ridler, Marc; Hummel, Stef; van Velzen, Nils; Katrine Falk, Anne; Madsen, Henrik

    2013-04-01

    An open-source data assimilation framework is proposed for hydrological modeling. Data assimilation (DA) in hydrodynamic and hydrological forecasting systems has great potential to improve predictions and model results. The basic principle is to incorporate measurement information into a model with the aim of improving model results by error minimization. Great strides have been made in assimilating traditional in-situ measurements such as discharge, soil moisture, hydraulic head, and snowpack into hydrologic models. More recently, remotely sensed retrievals of soil moisture, snow water equivalent or snow cover area, surface water elevation, terrestrial water storage, and land surface temperature have been successfully assimilated into hydrological models. Assimilation algorithms have become increasingly sophisticated to manage measurement and model bias, non-linear systems, data sparsity (in time and space), and undetermined system uncertainty. It is therefore useful to use a pre-existing DA toolbox such as OpenDA. OpenDA is an open interface standard for (and free implementation of) a set of tools to quickly implement DA and calibration for arbitrary numerical models. The basic design philosophy of OpenDA is to break DA down into a set of building blocks programmed in object-oriented languages. To implement DA, a model must interact with OpenDA to create model instances, propagate the model, get/set variables (or parameters), and free the model once DA is completed. An open-source interface for hydrological models capable of all these tasks already exists: OpenMI. OpenMI is an open standard interface already adopted by key hydrological model providers. It defines a universal approach for interacting with hydrological models during simulation to exchange data at runtime, thus facilitating the interactions between models and data sources. The interface is flexible enough that models can interact even if coded in different languages, represent

  1. Improving traffic signal management and operations : a basic service model.

    Science.gov (United States)

    2009-12-01

    This report provides a guide for achieving a basic service model for traffic signal management and operations. The basic service model is based on simply stated and defensible operational objectives that consider the staffing level, expertise and...

  2. PHARAO laser source flight model: Design and performances

    Energy Technology Data Exchange (ETDEWEB)

    Lévèque, T., E-mail: thomas.leveque@cnes.fr; Faure, B.; Esnault, F. X.; Delaroche, C.; Massonnet, D.; Grosjean, O.; Buffe, F.; Torresi, P. [Centre National d’Etudes Spatiales, 18 avenue Edouard Belin, 31400 Toulouse (France); Bomer, T.; Pichon, A.; Béraud, P.; Lelay, J. P.; Thomin, S. [Sodern, 20 Avenue Descartes, 94451 Limeil-Brévannes (France); Laurent, Ph. [LNE-SYRTE, CNRS, UPMC, Observatoire de Paris, 61 avenue de l’Observatoire, 75014 Paris (France)

    2015-03-15

    In this paper, we describe the design and the main performances of the PHARAO laser source flight model. PHARAO is a laser-cooled cesium clock specially designed for operation in space, and the laser source is one of its main sub-systems. The flight model presented in this work is the first remote-controlled laser system designed for spaceborne cold atom manipulation. The main challenges arise from mechanical compatibility with space constraints, which impose a high level of compactness, low electric power consumption, a wide range of operating temperatures, and a vacuum environment. We describe the main functions of the laser source and give an overview of the main technologies developed for this instrument. We present some results of the qualification process. The characteristics of the laser source flight model, and their impact on the clock performance, have been verified in operational conditions.

  3. Development of a Modified Kernel Regression Model for a Robust Signal Reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed, Ibrahim; Heo, Gyunyoung [Kyung Hee University, Yongin (Korea, Republic of)

    2016-10-15

    The demand for robust and resilient performance has led to the use of online monitoring techniques to monitor process parameters and validate signals. On-line monitoring and signal validation are two important terminologies in process and equipment monitoring. These techniques are automated methods of monitoring instrument performance while the plant is operating. To implement them, several empirical models are used. One of these is the nonparametric regression model, otherwise known as kernel regression (KR). Unlike parametric models, KR is an algorithmic estimation procedure that assumes no significant parameters and needs no retraining after its development when new observations arrive, which is advantageous for a system whose characteristics change due to ageing. Although KR performs excellently when applied to steady-state or normal operating data, it has limitations with time-varying data containing several repetitions of the same signal, especially if those signals are used to infer other signals. Conventional KR cannot correctly estimate the dependent variable when such time-varying data with repeated values are used, especially in signal validation and monitoring. Therefore, we present in this work a modified KR that resolves this issue and is also feasible in the time domain. Data are first transformed prior to the Euclidean distance evaluation, considering their slopes/changes with respect to time. The performance of the developed model is evaluated and compared with that of conventional KR using both lab experimental data and real-time data from the CNS provided by KAERI. The results show that the proposed model, demonstrating higher accuracy than conventional KR, is capable of resolving the identified limitation of conventional KR. We also discovered that there is still need to further
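
    A minimal sketch of conventional Nadaraya-Watson kernel regression, the baseline the abstract starts from; the paper's slope-based data transformation prior to the Euclidean distance is not reproduced here:

```python
# Nadaraya-Watson kernel regression: estimate y at a query point as a
# kernel-weighted average of the training responses.
import math

def kernel_regression(x_train, y_train, x_query, bandwidth=0.05):
    # Gaussian kernel weights from the distance to each training point
    weights = [math.exp(-0.5 * ((x_query - x) / bandwidth) ** 2)
               for x in x_train]
    return sum(w * y for w, y in zip(weights, y_train)) / sum(weights)

x = [i / 49 for i in range(50)]
y = [2.0 * xi for xi in x]  # noiseless linear signal for the sketch
print(round(kernel_regression(x, y, 0.5), 2))  # ~1.0 at x = 0.5
```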

  4. Estimation of effective brain connectivity with dual Kalman filter and EEG source localization methods.

    Science.gov (United States)

    Rajabioun, Mehdi; Nasrabadi, Ali Motie; Shamsollahi, Mohammad Bagher

    2017-09-01

    Effective connectivity is one of the most important considerations in brain functional mapping via EEG. It demonstrates the effects of a particular active brain region on others. In this paper, a new method based on the dual Kalman filter is proposed. In this method, a brain source localization method (standardized low-resolution brain electromagnetic tomography) is first applied to the EEG signal to extract the active regions, and an appropriate temporal model (a multivariate autoregressive model) is fitted to the extracted active sources to evaluate the activity and time dependence between sources. Then, the dual Kalman filter is used to estimate the model parameters, i.e., the effective connectivity between active regions. The advantage of this method is the estimation of the activity of different brain parts simultaneously with the calculation of effective connectivity between active regions. By combining the dual Kalman filter with brain source localization methods, source activity is updated over time in addition to estimating the connectivity between parts. The performance of the proposed method was evaluated first by applying it to simulated EEG signals with simulated interacting connectivity between active parts. Noisy simulated signals with different signal-to-noise ratios were used to evaluate the method's sensitivity to noise and to compare its performance with other methods. The method was then applied to real signals, and the estimation error over a sweeping window was calculated. Across both simulated and real signals, the proposed method gives acceptable results with the least mean square error under noisy or real conditions.
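
    The building block of a dual Kalman scheme, a single scalar predict/update cycle, can be sketched as follows; the noise values and the random-walk state model are illustrative assumptions, not the paper's MVAR formulation (which runs two coupled filters, one for states and one for parameters):

```python
# One scalar Kalman filter predict/update cycle for a random-walk state.
def kalman_step(x, p, z, q=0.01, r=0.5):
    """x: state estimate, p: its variance, z: new measurement,
    q: process noise, r: measurement noise."""
    # Predict: state unchanged, uncertainty grows by process noise q
    p_pred = p + q
    # Update: blend prediction with measurement z via the Kalman gain
    k = p_pred / (p_pred + r)
    x_new = x + k * (z - x)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0
for z in [1.2, 0.9, 1.1, 1.0]:   # noisy measurements of a constant ~1.0
    x, p = kalman_step(x, p, z)
print(round(x, 2))  # estimate pulled toward ~1.0, uncertainty reduced
```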

  5. A speed guidance strategy for multiple signalized intersections based on car-following model

    Science.gov (United States)

    Tang, Tie-Qiao; Yi, Zhi-Yan; Zhang, Jian; Wang, Tao; Leng, Jun-Qiang

    2018-04-01

    Signalized intersections play a major role in urban traffic systems. The signal infrastructure and the driving behavior near an intersection are paramount factors with significant impacts on traffic flow and energy consumption. In this paper, a speed guidance strategy is introduced into a car-following model to study the driving behavior and fuel consumption on a single-lane road with multiple signalized intersections. The numerical results indicate that the proposed model can reduce fuel consumption and the average number of stops. These findings provide insightful guidance for eco-driving strategies near signalized intersections.
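
    A generic optimal-velocity car-following update, sketching the model family the abstract builds on; the parameter values follow the common Helbing-Tilch calibration, and the paper's speed-guidance term and fuel consumption model are not reproduced:

```python
# One time step of an optimal-velocity car-following model: the follower's
# speed relaxes toward an optimal velocity set by the headway.
import math

def ov_step(x_lead, x_follow, v_follow, dt=0.1, sensitivity=0.85):
    headway = x_lead - x_follow
    # Optimal-velocity function of the headway (Helbing-Tilch calibration)
    v_opt = 16.8 * (math.tanh(0.086 * (headway - 25.0)) + 0.913)
    v_new = v_follow + sensitivity * (v_opt - v_follow) * dt
    return x_follow + v_follow * dt, v_new

# A stopped follower 60 m behind a stopped leader accelerates toward it
x_f, v_f = 0.0, 0.0
for _ in range(10):
    x_f, v_f = ov_step(60.0, x_f, v_f)
print(round(x_f, 1), round(v_f, 1))
```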

  6. Model-based design of self-Adapting networked signal processing systems

    NARCIS (Netherlands)

    Oliveira Filho, J.A. de; Papp, Z.; Djapic, R.; Oostveen, J.C.

    2013-01-01

    The paper describes a model-based approach to the architecture design of runtime-reconfigurable, large-scale, networked signal processing applications. A graph-based modeling formalism is introduced to describe all relevant aspects of the design (functional, concurrency, hardware, communication,

  7. Seismic signal simulation and study of underground nuclear sources by moment inversion

    International Nuclear Information System (INIS)

    Crusem, R.

    1986-09-01

    Some problems of underground nuclear explosions are examined from the seismological point of view. In the first part, a model is developed for mean seismic propagation through the lagoon of Mururoa atoll and for the calculation of synthetic seismograms (in the intermediate field: 5 to 20 km) by summation of discrete wave numbers. In the second part, this ground model is used with a linear inversion method of seismic moments to estimate the elastic source terms equivalent to the nuclear source. Only the isotropic part is investigated; solution stability is increased by using spectral smoothing and a minimal-phase hypothesis. Some examples of applications are presented: total energy estimation of a nuclear explosion, and simulation of mechanical effects induced by an underground explosion. [fr]

  8. A signal detection-item response theory model for evaluating neuropsychological measures.

    Science.gov (United States)

    Thomas, Michael L; Brown, Gregory G; Gur, Ruben C; Moore, Tyler M; Patt, Virginie M; Risbrough, Victoria B; Baker, Dewleen G

    2018-02-05

    Models from signal detection theory are commonly used to score neuropsychological test data, especially tests of recognition memory. Here we show that certain item response theory models can be formulated as signal detection theory models, thus linking two complementary but distinct methodologies. We then use the approach to evaluate the validity (construct representation) of commonly used research measures, demonstrate the impact of conditional error on neuropsychological outcomes, and evaluate measurement bias. Signal detection-item response theory (SD-IRT) models were fitted to recognition memory data for words, faces, and objects. The sample consisted of U.S. Infantry Marines and Navy Corpsmen participating in the Marine Resiliency Study. Data comprised item responses to the Penn Face Memory Test (PFMT; N = 1,338), Penn Word Memory Test (PWMT; N = 1,331), and Visual Object Learning Test (VOLT; N = 1,249), and self-report of past head injury with loss of consciousness. SD-IRT models adequately fitted recognition memory item data across all modalities. Error varied systematically with ability estimates, and distributions of residuals from the regression of memory discrimination onto self-report of past head injury were positively skewed towards regions of larger measurement error. Analyses of differential item functioning revealed little evidence of systematic bias by level of education. SD-IRT models benefit from the measurement rigor of item response theory, which permits the modeling of item difficulty and examinee ability, and from signal detection theory, which provides an interpretive framework encompassing the experimentally validated constructs of memory discrimination and response bias. We used this approach to validate the construct representation of commonly used research measures and to demonstrate how nonoptimized item parameters can lead to erroneous conclusions when interpreting neuropsychological test data. Future work might include the
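
    The classical signal detection indices underlying SD-IRT, memory discrimination (d') and response bias (criterion c), can be computed from hit and false-alarm rates; the rates below are made-up values for illustration:

```python
# d' and criterion c from hit and false-alarm rates via the inverse
# standard normal CDF (the z-transform of classical detection theory).
from statistics import NormalDist

def sdt_indices(hit_rate, fa_rate):
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)          # discrimination
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias
    return d_prime, criterion

# Symmetric rates -> good discrimination (~2.0) and no bias (~0.0)
d, c = sdt_indices(0.84, 0.16)
print(round(d, 2), round(c, 2))
```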

  9. Non Linear Programming (NLP) formulation for quantitative modeling of protein signal transduction pathways.

    Science.gov (United States)

    Mitsos, Alexander; Melas, Ioannis N; Morris, Melody K; Saez-Rodriguez, Julio; Lauffenburger, Douglas A; Alexopoulos, Leonidas G

    2012-01-01

    Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical formalisms based on logic are relatively simple but can describe how signals propagate from one protein to the next, and they have led to the construction of models that simulate the cell's response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models against cell-specific data, yielding quantitative pathway models of specific cellular behavior. There are two major issues in this pathway optimization: (i) excessive CPU time requirements and (ii) a loosely constrained optimization problem due to the lack of data relative to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem, and the latter by enhanced algorithms that pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell-type-specific pathways in normal and transformed hepatocytes using medium- and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state-of-the-art optimization algorithms.

  10. Non Linear Programming (NLP) formulation for quantitative modeling of protein signal transduction pathways.

    Directory of Open Access Journals (Sweden)

    Alexander Mitsos

    Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical formalisms based on logic are relatively simple but can describe how signals propagate from one protein to the next, and they have led to the construction of models that simulate the cell's response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models against cell-specific data, yielding quantitative pathway models of specific cellular behavior. There are two major issues in this pathway optimization: (i) excessive CPU time requirements and (ii) a loosely constrained optimization problem due to the lack of data relative to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem, and the latter by enhanced algorithms that pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell-type-specific pathways in normal and transformed hepatocytes using medium- and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state-of-the-art optimization algorithms.

  11. Multiobjective Traffic Signal Control Model for Intersection Based on Dynamic Turning Movements Estimation

    Directory of Open Access Journals (Sweden)

    Pengpeng Jiao

    2014-01-01

    Real-time traffic signal control for an intersection requires dynamic turning movements as basic input data. Dynamic turning movements cannot be detected directly through current traffic surveillance systems, but dynamic origin-destination (O-D) estimation can obtain them. However, combined models of dynamic O-D estimation and real-time traffic signal control are rare in the literature. A framework for a multiobjective traffic signal control model for intersections based on dynamic O-D estimation (MSC-DODE) is presented. A state-space model using Kalman filtering is first formulated to estimate the dynamic turning movements; a revised sequential Kalman filtering algorithm is then designed to solve the model, and the root mean square error and mean percentage error are used to evaluate the accuracy of the estimated dynamic turning proportions. Furthermore, a multiobjective traffic signal control model is put forward to obtain real-time signal control parameters and evaluation indices. Finally, based on practical survey data, the evaluation indices from MSC-DODE are compared with those from the Webster method. The actual and estimated turning movements are further input into MSC-DODE, respectively, and the results are compared. Case studies show that the results of MSC-DODE are better than those of the Webster method and are very close to the actual values, which are otherwise unavailable in practice.

  12. Visualizing spikes in source-space

    DEFF Research Database (Denmark)

    Beniczky, Sándor; Duez, Lene; Scherg, Michael

    2016-01-01

    OBJECTIVE: Reviewing magnetoencephalography (MEG) recordings is time-consuming: signals from the 306 MEG sensors are typically reviewed divided into six arrays of 51 sensors each, thus browsing each recording six times in order to evaluate all signals. A novel method of reconstructing the MEG signals in source-space was developed using a source-montage of 29 brain regions and two spatial components to remove magnetocardiographic (MKG) artefacts. Our objective was to evaluate the accuracy of reviewing MEG in source-space. METHODS: In 60 consecutive patients with epilepsy, we prospectively evaluated the accuracy of reviewing the MEG signals in source-space as compared to the classical method of reviewing them in sensor-space. RESULTS: All 46 spike-clusters identified in sensor-space were also identified in source-space. Two additional spike-clusters were identified in source-space. As 29

  13. Sources of uncertainties in modelling black carbon at the global scale

    Directory of Open Access Journals (Sweden)

    E. Vignati

    2010-03-01

    Our understanding of the global black carbon (BC) cycle is essentially qualitative due to uncertainties in our knowledge of its properties. This work investigates two sources of uncertainty in modelling black carbon: those due to the use of different schemes for BC ageing and its removal rate in the global transport-chemistry model TM5, and those due to uncertainties in the definition and quantification of the observations, which propagate through to both the emission inventories and the measurements used for the model evaluation.

    The schemes for the atmospheric processing of black carbon that have been tested with the model are (i) a simple approach considering BC as bulk aerosol with a simple treatment of removal, in which a fixed 70% of in-cloud black carbon concentrations is scavenged by clouds and removed when rain is present, and (ii) a more complete description of microphysical ageing within an aerosol dynamics model, where removal is coupled to the microphysical properties of the aerosol; this results in a global average of 40% of in-cloud black carbon being scavenged in clouds and subsequently removed by rain, and thus in a longer atmospheric lifetime. This difference is reflected in comparisons between both sets of modelled results and the measurements. Close to the sources, in both anthropogenic and vegetation fire source regions, the model results do not differ significantly, indicating that the emissions are the prevailing mechanism determining the concentrations and the choice of aerosol scheme does not influence the levels. In more remote areas such as oceanic and polar regions, the differences can be orders of magnitude, due to the differences between the two schemes. The more complete description reproduces the seasonal trend of the black carbon observations in those areas, although not always the magnitude of the signal, while the more simplified approach underestimates black carbon concentrations by orders of

  14. Separating astrophysical sources from indirect dark matter signals

    Science.gov (United States)

    Siegal-Gaskins, Jennifer M.

    2015-01-01

    Indirect searches for products of dark matter annihilation and decay face the challenge of identifying an uncertain and subdominant signal in the presence of uncertain backgrounds. Two valuable approaches to this problem are (i) using analysis methods which take advantage of different features in the energy spectrum and angular distribution of the signal and backgrounds and (ii) more accurately characterizing backgrounds, which allows for more robust identification of possible signals. These two approaches are complementary and can be significantly strengthened when used together. I review the status of indirect searches with gamma rays using two promising targets, the Inner Galaxy and the isotropic gamma-ray background. For both targets, uncertainties in the properties of backgrounds are a major limitation to the sensitivity of indirect searches. I then highlight approaches which can enhance the sensitivity of indirect searches using these targets. PMID:25304638

  15. EEG and MEG source localization using recursively applied (RAP) MUSIC

    Energy Technology Data Exchange (ETDEWEB)

    Mosher, J.C. [Los Alamos National Lab., NM (United States); Leahy, R.M. [University of Southern California, Los Angeles, CA (United States). Signal and Image Processing Inst.

    1996-12-31

    The multiple signal classification (MUSIC) algorithm locates multiple asynchronous dipolar sources from electroencephalography (EEG) and magnetoencephalography (MEG) data. A signal subspace is estimated from the data; the algorithm then scans a single-dipole model through a three-dimensional head volume and computes projections onto this subspace. To locate the sources, the user must search the head volume for local peaks in the projection metric. Here we describe a novel extension of this approach which we refer to as RAP (Recursively APplied) MUSIC. This new procedure automatically extracts the locations of the sources through a recursive use of subspace projections, using the metric of principal correlations as a multidimensional form of correlation analysis between the model subspace and the data subspace. The dipolar orientations, a form of 'diverse polarization', are easily extracted using the associated principal vectors.
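
    The projection metric at the heart of MUSIC can be sketched with a one-dimensional signal subspace; the vectors below are toy stand-ins, not a real head model or a full multidimensional principal-correlation computation:

```python
# Correlation between a candidate dipole gain vector and a 1-D signal
# subspace: the score scanned over the head volume in MUSIC.
import math

def subspace_correlation(gain, subspace_vec):
    """|cos angle| between the gain vector and the subspace direction;
    1.0 means the dipole model fully explains that subspace component."""
    dot = sum(g * s for g, s in zip(gain, subspace_vec))
    ng = math.sqrt(sum(g * g for g in gain))
    ns = math.sqrt(sum(s * s for s in subspace_vec))
    return abs(dot) / (ng * ns)

topography = [0.1, 0.4, 0.9, 0.4, 0.1]       # "measured" field pattern
matching_dipole = [0.2, 0.8, 1.8, 0.8, 0.2]  # scaled copy -> perfect score
orthogonal_dipole = [1.0, 0.0, 0.0, 0.0, -1.0]

print(round(subspace_correlation(matching_dipole, topography), 2))   # 1.0
print(round(subspace_correlation(orthogonal_dipole, topography), 2))  # 0.0
```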

  16. Modeling of Nonlinear Beat Signals of TAE's

    Science.gov (United States)

    Zhang, Bo; Berk, Herbert; Breizman, Boris; Zheng, Linjin

    2012-03-01

    Experiments on Alcator C-Mod reveal toroidal Alfven eigenmodes (TAEs) together with signals at various beat frequencies, including those at twice the mode frequency. The beat frequencies are sidebands driven by quadratic nonlinear terms in the MHD equations. These nonlinear sidebands have not yet been quantified by any existing codes. We extend the AEGIS code to capture nonlinear effects by treating the nonlinear terms as a driving source in the linear MHD solver. Our goal is to compute the spatial structure of the sidebands for realistic geometry and q-profile, which can be directly compared with experiment in order to interpret the phase contrast imaging diagnostic measurements and to enable the quantitative determination of the Alfven wave amplitude in the plasma core.

  17. Outer heliospheric radio emissions. II - Foreshock source models

    Science.gov (United States)

    Cairns, Iver H.; Kurth, William S.; Gurnett, Donald A.

    1992-01-01

    Observations of LF radio emissions in the range 2-3 kHz by the Voyager spacecraft during the intervals 1983-1987 and 1989 to the present while at heliocentric distances greater than 11 AU are reported. New analyses of the wave data are presented, and the characteristics of the radiation are reviewed and discussed. Two classes of events are distinguished: transient events with varying starting frequencies that drift upward in frequency and a relatively continuous component that remains near 2 kHz. Evidence for multiple transient sources and for extension of the 2-kHz component above the 2.4-kHz interference signal is presented. The transient emissions are interpreted in terms of radiation generated at multiples of the plasma frequency when solar wind density enhancements enter one or more regions of a foreshock sunward of the inner heliospheric shock. Solar wind density enhancements by factors of 4-10 are observed. Propagation effects, the number of radiation sources, and the time variability, frequency drift, and varying starting frequencies of the transient events are discussed in terms of foreshock sources.

  18. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, S.; Hansen, Lars Kai

    2006-01-01

    The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: 'Are we actually dealing with a convolutive mixture?'. We try to answer this question for EEG data.

  19. Regulation of Wnt signaling by nociceptive input in animal models

    Directory of Open Access Journals (Sweden)

    Shi Yuqiang

    2012-06-01

    Background: Central sensitization-associated synaptic plasticity in the spinal cord dorsal horn (SCDH) critically contributes to the development of chronic pain, but understanding of the underlying molecular pathways is still incomplete. Emerging evidence suggests that Wnt signaling plays a crucial role in the regulation of synaptic plasticity. Little is known about the potential function of the Wnt signaling cascades in chronic pain development. Results: Fluorescent immunostaining results indicate that β-catenin, an essential protein in the canonical Wnt signaling pathway, is expressed in the superficial layers of the mouse SCDH with enrichment at synapses in lamina II. In addition, Wnt3a, a prototypic Wnt ligand that activates the canonical pathway, is also enriched in the superficial layers. Immunoblotting analysis indicates that both Wnt3a and β-catenin are up-regulated in the SCDH of various mouse pain models created by hind-paw injection of capsaicin, intrathecal (i.t.) injection of HIV-gp120 protein, or spinal nerve ligation (SNL). Furthermore, Wnt5a, a prototypic Wnt ligand for non-canonical pathways, and its receptor Ror2 are also up-regulated in the SCDH of these models. Conclusion: Our results suggest that Wnt signaling pathways are regulated by nociceptive input. The activation of Wnt signaling may regulate the expression of spinal central sensitization during the development of acute and chronic pain.

  20. Electron Signal Detection for the Beam-Finder Wire of the Linac Coherent Light Source Undulator

    International Nuclear Information System (INIS)

    Wu, Juhao; Emma, P.; Field, R.C.; SLAC

    2006-01-01

    The Linac Coherent Light Source (LCLS) is a SASE x-ray Free-Electron Laser (FEL) based on the final kilometer of the Stanford Linear Accelerator. The tight tolerances for positioning the electron beam close to the undulator axis call for the introduction of a Beam Finder Wire (BFW) device. A BFW device close to the upstream end of an undulator segment and a quadrupole close to the downstream end of the segment will allow beam-based undulator segment alignment. Based on the scattering of the electrons on the BFW, we can detect the electron signal in the main dump bends after the undulator to find the beam position. We propose to use a threshold Cherenkov counter for this purpose. According to the signal strength at such a Cherenkov counter, we then suggest a choice of material and size for the BFW device in the undulator.

  1. OSeMOSYS: The Open Source Energy Modeling System

    International Nuclear Information System (INIS)

    Howells, Mark; Rogner, Holger; Strachan, Neil; Heaps, Charles; Huntington, Hillard; Kypreos, Socrates; Hughes, Alison; Silveira, Semida; DeCarolis, Joe; Bazillian, Morgan; Roehrl, Alexander

    2011-01-01

    This paper discusses the design and development of the Open Source Energy Modeling System (OSeMOSYS). It describes the model's formulation in terms of a 'plain English' description, an algebraic formulation, an implementation in terms of its full source code, as well as a detailed description of the model inputs, parameters, and outputs. A key feature of the OSeMOSYS implementation is that it is contained in less than five pages of documented, easily accessible code. Other existing energy system models lack this emphasis on compactness and openness, which makes the barrier to entry for new users much higher and makes the addition of innovative new functionality very difficult. The paper begins by describing the rationale for the development of OSeMOSYS and its structure. The current preliminary implementation of the model is then demonstrated for a discrete example. Next, we explain how new development efforts will build on the existing OSeMOSYS codebase. The paper closes with thoughts regarding the organization of the OSeMOSYS community, associated capacity development efforts, and linkages to other open source efforts, including adding functionality to the LEAP model. - Highlights: → OSeMOSYS is a new free and open source energy systems model. → This model is written in a simple, open, flexible and transparent manner to support teaching. → OSeMOSYS is based on free software and optimizes using a free solver. → This model replicates the results of many popular tools, such as MARKAL. → A link between OSeMOSYS and LEAP has been developed.

  2. Logic integer programming models for signaling networks.

    Science.gov (United States)

    Haus, Utz-Uwe; Niermann, Kathrin; Truemper, Klaus; Weismantel, Robert

    2009-05-01

    We propose a static and a dynamic approach to model biological signaling networks, and show how each can be used to answer relevant biological questions. For this, we use the two different mathematical tools of Propositional Logic and Integer Programming. The power of discrete mathematics for handling qualitative as well as quantitative data has so far not been exploited in molecular biology, which is mostly driven by experimental research, relying on first-order or statistical models. The arising logic statements and integer programs are analyzed and can be solved with standard software. For a restricted class of problems the logic models reduce to a satisfiability problem solvable in polynomial time. Additionally, a more dynamic model enables enumeration of possible time resolutions in poly-logarithmic time. Computational experiments are included.
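As a minimal illustration of how a logic statement becomes an integer program (a standard textbook encoding, not necessarily the authors' exact formulation), the snippet below expresses an AND gate over binary variables as linear constraints and verifies against the truth table that the only feasible output is the logical value.

```python
# Standard linear encoding of the logic AND gate y = x1 AND x2 over binaries:
#   y <= x1,  y <= x2,  y >= x1 + x2 - 1
def and_constraints_hold(x1, x2, y):
    """True iff (x1, x2, y) satisfies the linear AND-gate constraints."""
    return y <= x1 and y <= x2 and y >= x1 + x2 - 1

# For each input pair, enumerate which binary y values are feasible.
feasible = {}
for x1 in (0, 1):
    for x2 in (0, 1):
        feasible[(x1, x2)] = [y for y in (0, 1) if and_constraints_hold(x1, x2, y)]
```

A whole signaling network is encoded by conjoining such constraint groups, one per gate, and handing the resulting integer program to a standard solver.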

  3. 21-cm signature of the first sources in the Universe: prospects of detection with SKA

    Science.gov (United States)

    Ghara, Raghunath; Choudhury, T. Roy; Datta, Kanan K.

    2016-07-01

    Currently, several low-frequency experiments are being planned to study the nature of the first stars using the redshifted 21-cm signal from the cosmic dawn and Epoch of Reionization. Using a one-dimensional radiative transfer code, we model the 21-cm signal pattern around the early sources for different source models, i.e. the metal-free Population III (PopIII) stars, primordial galaxies consisting of Population II (PopII) stars, mini-QSOs and high-mass X-ray binaries (HMXBs). We investigate the detectability of these sources by comparing the 21-cm visibility signal with the system noise appropriate for a telescope like the SKA1-low. Upon integrating the visibility around a typical source over all baselines and over a frequency interval of 16 MHz, we find that it will be possible to make a ˜9σ detection of isolated sources like PopII galaxies, mini-QSOs and HMXBs at z ˜ 15 with the SKA1-low in 1000 h. The exact value of the signal-to-noise ratio (SNR) will depend on the source properties, in particular on the mass and age of the source and the escape fraction of ionizing photons. The predicted SNR decreases with increasing redshift. We provide simple scaling laws to estimate the SNR for different values of the parameters which characterize the source and the surrounding medium. We also argue that it will be possible to achieve an SNR ˜9 even in the presence of the astrophysical foregrounds by subtracting out the frequency-independent component of the observed signal. These calculations will be useful in planning 21-cm observations to detect the first sources.

  4. Making Faces - State-Space Models Applied to Multi-Modal Signal Processing

    DEFF Research Database (Denmark)

    Lehn-Schiøler, Tue

    2005-01-01

    The two main focus areas of this thesis are State-Space Models and multi-modal signal processing. The general State-Space Model is investigated and an addition to the class of sequential sampling methods is proposed. This new algorithm is denoted the Parzen Particle Filter. Furthermore … optimizer can be applied to speed up convergence. The linear version of the State-Space Model, the Kalman Filter, is applied to multi-modal signal processing. It is demonstrated how a State-Space Model can be used to map from speech to lip movements. Besides the State-Space Model and the multi-modal … application an information theoretic vector quantizer is also proposed. Based on interactions between particles, it is shown how a quantizing scheme based on an analytic cost function can be derived.

  5. Bayesian Inference for Signal-Based Seismic Monitoring

    Science.gov (United States)

    Moore, D.

    2015-12-01

    Traditional seismic monitoring systems rely on discrete detections produced by station processing software, discarding significant information present in the original recorded signal. SIG-VISA (Signal-based Vertically Integrated Seismic Analysis) is a system for global seismic monitoring through Bayesian inference on seismic signals. By modeling signals directly, our forward model is able to incorporate a rich representation of the physics underlying the signal generation process, including source mechanisms, wave propagation, and station response. This allows inference in the model to recover the qualitative behavior of recent geophysical methods including waveform matching and double-differencing, all as part of a unified Bayesian monitoring system that simultaneously detects and locates events from a global network of stations. We demonstrate recent progress in scaling up SIG-VISA to efficiently process the data stream of global signals recorded by the International Monitoring System (IMS), including comparisons against existing processing methods that show increased sensitivity from our signal-based model and in particular the ability to locate events (including aftershock sequences that can tax analyst processing) precisely from waveform correlation effects. We also provide a Bayesian analysis of an alleged low-magnitude event near the DPRK test site in May 2010 [1] [2], investigating whether such an event could plausibly be detected through automated processing in a signal-based monitoring system. [1] Zhang, Miao and Wen, Lianxing. "Seismological Evidence for a Low-Yield Nuclear Test on 12 May 2010 in North Korea". Seismological Research Letters, January/February 2015. [2] Richards, Paul. "A Seismic Event in North Korea on 12 May 2010". CTBTO SnT 2015 oral presentation, video at https://video-archive.ctbto.org/index.php/kmc/preview/partner_id/103/uiconf_id/4421629/entry_id/0_ymmtpps0/delivery/http

  6. MOTORCYCLE CRASH PREDICTION MODEL FOR NON-SIGNALIZED INTERSECTIONS

    Directory of Open Access Journals (Sweden)

    S. HARNEN

    2003-01-01

    Full Text Available This paper attempts to develop a prediction model for motorcycle crashes at non-signalized intersections on urban roads in Malaysia. The Generalized Linear Modeling approach was used to develop the model. The final model revealed that an increase in motorcycle and non-motorcycle flows entering an intersection is associated with an increase in motorcycle crashes. Non-motorcycle flow on the major road had the greatest effect on the probability of motorcycle crashes. Approach speed, lane width, number of lanes, shoulder width and land use were also found to be significant in explaining motorcycle crashes. The model should assist traffic engineers in deciding on the need for appropriate intersection treatments specifically designed for non-exclusive motorcycle lane facilities.
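The Generalized Linear Modeling approach used above can be sketched generically: crash counts are typically modeled as Poisson with a log link in flow variables. The snippet below fits such a model by Newton-Raphson on synthetic data; the covariate, coefficient values, and sample size are assumptions for illustration, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
log_flow = rng.uniform(5, 9, n)              # log of flow entering the intersection
X = np.column_stack([np.ones(n), log_flow])  # intercept + covariate
beta_true = np.array([-4.0, 0.7])            # assumed "true" coefficients
y = rng.poisson(np.exp(X @ beta_true))       # simulated crash counts

# Fit the Poisson GLM (log link) by Newton-Raphson (equivalently, IRLS)
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)                    # model-implied mean counts
    grad = X.T @ (y - mu)                    # score of the Poisson log-likelihood
    hess = X.T @ (X * mu[:, None])           # Fisher information
    beta = beta + np.linalg.solve(hess, grad)
```

The fitted slope on `log_flow` recovers the simulated elasticity of crashes with respect to entering flow, mirroring the paper's finding that crashes increase with flow.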

  7. Analytic model of the stress waves propagation in thin wall tubes, seeking the location of a harmonic point source in its surface

    International Nuclear Information System (INIS)

    Boaratti, Mario Francisco Guerra

    2006-01-01

    Leaks in pressurized tubes generate acoustic waves that propagate through the walls of these tubes, which can be captured by accelerometers or by acoustic emission sensors. The knowledge of how these walls vibrate, or in other words, how these acoustic waves propagate in this material, is fundamental in the detection and localization of the leak source. In this work an analytic model was implemented, through the motion equations of a cylindrical shell, with the objective of understanding the behavior of the tube surface excited by a point source. Since the cylindrical surface is closed in the circumferential direction, waves that are beginning their trajectory will meet others that have already completed the turn around the cylindrical shell, in the clockwise as well as the counter-clockwise direction, generating constructive and destructive interferences. After sufficient propagation time, peaks and valleys form on the shell surface, which can be visualized through a graphic representation of the analytic solution created. The theoretical results were verified through measurements in an experimental setup composed of a steel tube terminated in sand boxes, simulating the condition of an infinite tube. To determine the location of the point source on the surface, an inverse solution process was adopted: given the signals from the sensors placed on the tube surface, the theoretical model is used to determine where the source that generated these signals can be. (author)

  8. Variables and potential models for the bleaching of luminescence signals in fluvial environments

    Science.gov (United States)

    Gray, Harrison J.; Mahan, Shannon

    2015-01-01

    Luminescence dating of fluvial sediments rests on the assumption that sufficient sunlight is available to remove a previously acquired signal in a process termed bleaching. However, luminescence signals obtained from sediment in the active channels of rivers often contain residual signals. This paper explores and attempts to build theoretical models for the bleaching of luminescence signals in fluvial settings. We present two models, one for sediment transported in an episodic manner, such as flood-driven washes in arid environments, and one for sediment transported in a continuous manner, such as in large continental scale rivers. The episodic flow model assumes that the majority of sediment is bleached while exposed to sunlight at the near surface between flood events and predicts a power-law decay in luminescence signal with downstream transport distance. The continuous flow model is developed by combining the Beer–Lambert law for the attenuation of light through a water column with a general-order kinetics equation to produce an equation with the form of a double negative exponential. The inflection point of this equation is compared with the sediment concentration from a Rouse profile to derive a non-dimensional number capable of assessing the likely extent of bleaching for a given set of luminescence and fluvial parameters. Although these models are theoretically based and not yet necessarily applicable to real-world fluvial systems, we introduce these ideas to stimulate discussion and encourage the development of comprehensive bleaching models with predictive power.
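The continuous-flow construction above can be sketched numerically. The snippet combines Beer-Lambert attenuation with first-order bleaching kinetics (the paper uses general-order kinetics; first order is the simplest special case), yielding the double-negative-exponential depth profile and its inflection point. All parameter values are assumptions for illustration.

```python
import numpy as np

# Beer-Lambert attenuation of light through the water column
alpha = 0.5                   # attenuation coefficient (1/m), assumed
I0 = 1.0                      # normalized light intensity at the surface
z = np.linspace(0, 10, 101)   # depth below the water surface (m)
I = I0 * np.exp(-alpha * z)

# First-order bleaching kinetics: dL/dt = -k * I(z) * L, so after time t
# the residual fraction is L(z) = exp(-k * I0 * t * exp(-alpha * z)),
# a double negative exponential in depth.
k = 2.0                       # bleaching rate per unit light dose, assumed
t = 5.0                       # exposure time, assumed
L = np.exp(-k * I * t)

# Inflection point of L(z): d2L/dz2 = 0 where k * I0 * t * exp(-alpha*z) = 1,
# i.e. z_inflect = ln(k * I0 * t) / alpha, where the residual equals exp(-1)
z_inflect = np.log(k * I0 * t) / alpha
```

Sediment near the surface is almost fully bleached while deep sediment retains nearly all of its signal; it is this inflection depth that the paper compares against a Rouse concentration profile.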

  9. Monte Carlo modelling of large scale NORM sources using MCNP.

    Science.gov (United States)

    Wallace, J D

    2013-12-01

    The representative Monte Carlo modelling of large scale planar sources (for comparison to external environmental radiation fields) is undertaken using substantial-diameter, thin-profile planar cylindrical sources. The relative impact of source extent, soil thickness and sky-shine is investigated to guide decisions relating to representative geometries. In addition, the impact of source-to-detector distance on the nature of the detector response, for a range of source sizes, has been investigated. These investigations, using an MCNP-based model, indicate that a soil cylinder of greater than 20 m diameter and no less than 50 cm depth/height, combined with a 20 m deep sky section above the soil cylinder, is needed to representatively model the semi-infinite plane of uniformly distributed NORM sources. Initial investigation of the effect of detector placement indicates that smaller source sizes may be used to achieve a representative response at shorter source-to-detector distances. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  10. Signal-dependent independent component analysis by tunable mother wavelets

    International Nuclear Information System (INIS)

    Seo, Kyung Ho

    2006-02-01

    The objective of this study is to improve standard independent component analysis when applied to real-world signals. Independent component analysis starts from the assumption that signals from different physical sources are statistically independent. But real-world signals such as EEG, ECG, MEG, and fMRI signals are not perfectly statistically independent. By definition, standard independent component analysis algorithms are not able to estimate statistically dependent sources, that is, when the assumption of independence does not hold. Therefore, a preprocessing stage is needed before independent component analysis. This paper starts from the simple intuition that source signals wavelet-transformed by a 'well-tuned' mother wavelet will be sufficiently simplified, so that the source separation will show better results. The tuning process between the source signal and a tunable mother wavelet was carried out using the correlation coefficient method. The gamma component of a raw EEG signal was set as the target signal, and the wavelet transform was executed with the tuned mother wavelet and with standard mother wavelets. Simulation results obtained with these wavelets are shown.
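A minimal sketch of the correlation-coefficient tuning step: score candidate mother wavelets by their correlation with the target component and keep the best-matching one. The synthetic gamma-band burst, the simplified real-valued Morlet-style wavelet, and all parameters below are assumptions for illustration.

```python
import numpy as np

fs = 256.0                                # sampling rate (Hz), assumed
t = np.arange(0, 1, 1 / fs)
# Synthetic "gamma component": a 40 Hz burst under a Gaussian envelope
target = np.cos(2 * np.pi * 40 * t) * np.exp(-((t - 0.5) ** 2) / (2 * 0.05 ** 2))

def morlet(t, f0, sigma=0.05):
    """Real-valued Morlet-style wavelet centred at t = 0.5 s (illustrative)."""
    return np.cos(2 * np.pi * f0 * (t - 0.5)) * np.exp(-((t - 0.5) ** 2) / (2 * sigma ** 2))

# "Tune" the mother wavelet: pick the centre frequency with the highest
# absolute correlation coefficient against the target component
candidates = [10.0, 20.0, 40.0, 80.0]
corrs = {f0: abs(np.corrcoef(target, morlet(t, f0))[0, 1]) for f0 in candidates}
best_f0 = max(corrs, key=corrs.get)
```

With the candidate grid above the 40 Hz wavelet matches the gamma burst almost perfectly, while off-frequency candidates correlate weakly, which is the selection criterion the tuning process exploits.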

  11. An Empirical Temperature Variance Source Model in Heated Jets

    Science.gov (United States)

    Khavaran, Abbas; Bridges, James

    2012-01-01

    An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is written subsequently using a Green's function method while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determine the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.

  12. Repairing business process models as retrieved from source code

    NARCIS (Netherlands)

    Fernández-Ropero, M.; Reijers, H.A.; Pérez-Castillo, R.; Piattini, M.; Nurcan, S.; Proper, H.A.; Soffer, P.; Krogstie, J.; Schmidt, R.; Halpin, T.; Bider, I.

    2013-01-01

    The static analysis of source code has become a feasible solution to obtain underlying business process models from existing information systems. Due to the fact that not all information can be automatically derived from source code (e.g., consider manual activities), such business process models

  13. Quasi-homogeneous partial coherent source modeling of multimode optical fiber output using the elementary source method

    Science.gov (United States)

    Fathy, Alaa; Sabry, Yasser M.; Khalil, Diaa A.

    2017-10-01

    Multimode fibers (MMF) have many applications in illumination, spectroscopy, sensing and even in optical communication systems. In this work, we present a model for the MMF output field assuming the fiber end as a quasi-homogeneous source. The fiber end is modeled by a group of partially coherent elementary sources, spatially shifted and uncorrelated with each other. The elementary source distribution is derived from the far field intensity measurement, while the weighting function of the sources is derived from the fiber end intensity measurement. The model is compared with practical measurements for fibers with different core/cladding diameters at different propagation distances and for different input excitations: laser, white light and LED. The obtained results show normalized root mean square error less than 8% in the intensity profile in most cases, even when the fiber end surface is not perfectly cleaved. Also, the comparison with the Gaussian-Schell model results shows a better agreement with the measurement. In addition, the complex degree of coherence, derived from the model results, is compared with the theoretical predictions of the modified Van Zernike equation showing very good agreement, which strongly supports the assumption that the large core MMF could be considered as a quasi-homogeneous source.
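A minimal sketch of the elementary-source idea: mutually uncorrelated, spatially shifted elementary fields add in intensity, not in field, so the near-field profile is an incoherent superposition of weighted patches. The Gaussian patch shape, core size, and uniform weighting below are assumptions for illustration, not the measured distributions the paper derives.

```python
import numpy as np

x = np.linspace(-60, 60, 1201)            # transverse coordinate (um)

def elementary_field(x, x0, w=5.0):
    """One coherent Gaussian elementary field patch centred at x0 (assumed width w)."""
    return np.exp(-((x - x0) ** 2) / (2 * w ** 2))

# Elementary-source positions across an assumed 50 um core, with a uniform
# weighting function standing in for the measured fiber-end intensity
shifts = np.linspace(-25, 25, 51)
weights = np.ones_like(shifts)

# Uncorrelated elementary sources: their intensities add, their fields do not
intensity = sum(w0 * elementary_field(x, x0) ** 2 for w0, x0 in zip(weights, shifts))
intensity /= intensity.max()
```

The result is a flat-topped intensity profile across the core that falls off smoothly outside it, which is qualitatively the quasi-homogeneous behaviour the model exploits.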

  14. Numerical modelling of the pump-to-signal relative intensity noise ...

    Indian Academy of Sciences (India)

    An accurate numerical model to investigate the pump-to-signal relative intensity noise (RIN) transfer in two-pump fibre optical parametric amplifiers (2-P FOPAs) for low modulation frequencies is presented. Compared to other models in the field, this model takes into account the fibre loss, pump depletion as well as the gain ...

  15. Multiple logistic regression model of signalling practices of drivers on urban highways

    Science.gov (United States)

    Puan, Othman Che; Ibrahim, Muttaka Na'iya; Zakaria, Rozana

    2015-05-01

    Giving a signal is a way of informing other road users, especially conflicting drivers, of a driver's intention to change his/her movement course. Other users are exposed to hazardous situations and risks of accident if a driver who changes course fails to give a signal as required. This paper describes the application of a logistic regression model to the analysis of drivers' signalling practices on multilane highways, based on possible factors affecting the driver's decision such as the driver's gender, vehicle type, vehicle speed and traffic flow intensity. Data pertaining to the analysis of such factors were collected manually. More than 2000 drivers who performed a lane-changing manoeuvre while driving on two sections of multilane highways were observed. Findings from the study show that a relatively large proportion of drivers failed to give any signal when changing lane. The result of the analysis indicates that although the proportion of drivers who failed to provide a signal prior to a lane-changing manoeuvre is high, the degree of compliance of female drivers is better than that of male drivers. A binary logistic model was developed to represent the probability that a driver provides a signal indication prior to a lane-changing manoeuvre. The model indicates that the driver's gender, the type of vehicle driven, the speed of the vehicle and the traffic volume influence the driver's decision to provide a signal indication prior to a lane-changing manoeuvre on a multilane urban highway. In terms of vehicle types, about 97% of motorcyclists failed to comply with the signal indication requirement. The proportion of non-complying drivers under stable traffic flow conditions is much higher than when the flow is relatively heavy. This is consistent with the data, which indicate a high degree of non-compliance when the average speed of the traffic stream is relatively high.
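The binary logistic model described above can be sketched generically: fit a logit model for the probability of signalling as a function of speed and gender by Newton-Raphson. The covariates, coefficient values, and sample size below are assumptions for illustration, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
speed = rng.uniform(40, 110, n)                 # vehicle speed (km/h), assumed range
female = rng.integers(0, 2, n).astype(float)    # 1 = female driver
X = np.column_stack([np.ones(n), (speed - 75) / 20, female])
beta_true = np.array([0.5, -1.0, 0.8])          # assumed: faster -> less signalling,
p = 1 / (1 + np.exp(-(X @ beta_true)))          #          female -> more compliance
y = (rng.uniform(size=n) < p).astype(float)     # 1 = signal given before lane change

# Newton-Raphson fit of the binary logit model
beta = np.zeros(3)
for _ in range(25):
    p_hat = 1 / (1 + np.exp(-(X @ beta)))
    W = p_hat * (1 - p_hat)                     # IRLS weights
    grad = X.T @ (y - p_hat)
    hess = X.T @ (X * W[:, None])
    beta = beta + np.linalg.solve(hess, grad)
```

The fitted signs (negative on speed, positive on the female indicator) reproduce the qualitative pattern reported in the abstract.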

  16. Revisiting source identification, weathering models, and phase discrimination for Exxon Valdez oil

    International Nuclear Information System (INIS)

    Driskell, W.B.; Payne, J.R.; Shigenaka, G.

    2005-01-01

    A large chemistry data set for polycyclic aromatic hydrocarbon (PAH) and saturated hydrocarbon (SHC) contamination in sediment, water and tissue samples has emerged in the aftermath of the 1989 Exxon Valdez oil spill in Prince William Sound, Alaska. When the oil was fresh, source identification was a primary objective and fairly reliable. However, source identification became problematic as the oil weathered and its signatures changed. In response to concerns regarding when the impacted area will be clean again, this study focused on developing appropriate tools to confirm hydrocarbon source identifications and assess weathering in various matrices. Previous efforts that focused only on the whole or particulate-phase oil are not adequate to track dissolved-phase signal with low total PAH values. For that reason, a particulate signature index (PSI) and dissolved signature index (DSI) screening tool was developed in this study to discriminate between these 2 phases. The screening tool was used to measure the dissolved or water-soluble fraction of crude oil which occurs at much lower levels than the particulate phase, but which is more widely circulated and equally as important as the particulate oil phase. The discrimination methods can also identify normally-discarded, low total PAH samples which can increase the amount of usable data needed to model other effects of oil spills. 37 refs., 3 tabs., 10 figs

  17. Impact of Improper Gaussian Signaling on Hardware Impaired Systems

    KAUST Repository

    Javed, Sidrah; Amin, Osama; Ikki, Salam S.; Alouini, Mohamed-Slim

    2016-12-18

    In this paper, we accurately model the hardware impairments (HWI) as improper Gaussian signaling (IGS) which can characterize the asymmetric characteristics of different HWI sources. The proposed model encourages us to adopt IGS scheme for transmitted signal that represents a general study compared with the conventional scheme, proper Gaussian signaling (PGS). First, we express the achievable rate of HWI systems when both PGS and IGS schemes are used when the aggregate effect of HWI is modeled as IGS. Moreover, we tune the IGS statistical characteristics to maximize the achievable rate. Then, we analyze the outage probability for both schemes and derive closed form expressions. Finally, we validate the analytic expressions through numerical and simulation results. In addition, we quantify through the numerical results the performance degradation in the absence of ideal transceivers and the gain reaped from adopting IGS scheme compared with PGS scheme.
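A small numerical illustration of what distinguishes improper from proper Gaussian signaling: the circularity coefficient |E[x²]| / E[|x|²] is zero for a proper (circularly symmetric) complex signal and nonzero for an improper one. The particular signal constructions below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Proper Gaussian signal: i.i.d. circularly symmetric complex Gaussian, unit power
proper = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

# Improper signal: unequal real/imaginary power breaks circular symmetry
# (real variance 4/5, imaginary variance 1/5, still unit total power)
improper = (2.0 * rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(5)

def circularity_coefficient(x):
    """|pseudo-variance| / variance: 0 for proper, up to 1 for maximally improper."""
    var = np.mean(np.abs(x) ** 2)     # variance E[|x|^2]
    pseudo = np.mean(x ** 2)          # pseudo-variance E[x^2] (note: not |x|^2)
    return abs(pseudo) / var

c_proper = circularity_coefficient(proper)
c_improper = circularity_coefficient(improper)
```

For the improper construction the coefficient is (4/5 - 1/5) / 1 = 0.6; it is this extra statistical degree of freedom that IGS schemes tune to match asymmetric hardware impairments.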

  19. Modeling the NPE with finite sources and empirical Green's functions

    Energy Technology Data Exchange (ETDEWEB)

    Hutchings, L.; Kasameyer, P.; Goldstein, P. [Lawrence Livermore National Lab., CA (United States)] [and others]

    1994-12-31

    In order to better understand the source characteristics of both nuclear and chemical explosions for purposes of discrimination, we have modeled the NPE chemical explosion as a finite source and with empirical Green's functions. Seismograms are synthesized at four sites to test the validity of source models. We use a smaller chemical explosion detonated in the vicinity of the working point to obtain empirical Green's functions. Empirical Green's functions contain all the linear information of the geology along the propagation path and recording site, which are identical for chemical or nuclear explosions, and therefore reduce the variability in modeling the source of the larger event. We further constrain the solution to have the overall source duration obtained from point-source deconvolution results. In modeling the source, we consider both an elastic source on a spherical surface and an inelastic expanding spherical volume source. We found that the spherical volume solution provides better fits to observed seismograms. The potential to identify secondary sources was examined, but the resolution is too poor to be definitive.

  20. Improved signal model for confocal sensors accounting for object depending artifacts.

    Science.gov (United States)

    Mauch, Florian; Lyda, Wolfram; Gronle, Marc; Osten, Wolfgang

    2012-08-27

    The conventional signal model of confocal sensors is well established and has proven to be exceptionally robust, especially when measuring rough surfaces. Its physical derivation, however, is explicitly based on plane surfaces or point-like objects. Here we show experimental results of a confocal point sensor measurement of a surface standard. The results illustrate the rise of severe artifacts when measuring curved surfaces. On this basis, we present a systematic extension of the conventional signal model that is proven to be capable of qualitatively explaining these artifacts.

  1. Hierarchical Colored Petri Nets for Modeling and Analysis of Transit Signal Priority Control Systems

    Directory of Open Access Journals (Sweden)

    Yisheng An

    2018-01-01

    Full Text Available In this paper, we consider the problem of developing a model for traffic signal control with transit priority using Hierarchical Colored Petri nets (HCPN). Petri nets (PN) are useful for state analysis of discrete event systems due to their powerful modeling capability and mathematical formalism. This paper focuses on their use to formalize the transit signal priority (TSP) control model. In a four-phase traffic signal control model, transit detection and two kinds of transit priority strategies are integrated to obtain the HCPN-based TSP control models. One of the advantages of using these models is the clear presentation of traffic light behaviors in terms of the conditions and events that cause the detection of a priority request by a transit vehicle. Another advantage of the resulting models is that the correctness and reliability of the proposed strategies are easily analyzed. After their full reachable states are generated, the boundedness, liveness, and fairness of the proposed models are verified. Experimental results show that the proposed control model provides transit vehicles with better effectiveness at intersections. This work helps advance the state of the art in the design of signal control models related to the intersection of roadways.
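The reachability-based verification mentioned above can be sketched on a tiny uncoloured Petri net, far simpler than the paper's HCPN models: a two-phase signal where a single "turn" token mediates the right-of-way. The net, its transitions, and the initial marking below are assumptions for illustration.

```python
from collections import deque

# Places and transitions of a toy two-phase signal net. Each transition is a
# pair (consume, produce) of dicts mapping place -> token count.
places = ["ns_green", "ew_green", "turn"]
transitions = {
    "give_ew": ({"ns_green": 1, "turn": 1}, {"ew_green": 1, "turn": 1}),
    "give_ns": ({"ew_green": 1, "turn": 1}, {"ns_green": 1, "turn": 1}),
}
m0 = {"ns_green": 1, "ew_green": 0, "turn": 1}   # initial marking: NS has green

def enabled(m, consume):
    return all(m[p] >= k for p, k in consume.items())

def fire(m, consume, produce):
    m = dict(m)
    for p, k in consume.items():
        m[p] -= k
    for p, k in produce.items():
        m[p] = m.get(p, 0) + k
    return m

# Breadth-first generation of the full reachability set
seen = {tuple(m0[p] for p in places)}
queue = deque([m0])
fired = set()
while queue:
    m = queue.popleft()
    for name, (consume, produce) in transitions.items():
        if enabled(m, consume):
            fired.add(name)                       # transition is live somewhere
            m2 = fire(m, consume, produce)
            key = tuple(m2[p] for p in places)
            if key not in seen:
                seen.add(key)
                queue.append(m2)

bounded = all(max(marking) <= 1 for marking in seen)  # 1-bounded (safe) net
```

Only two markings are reachable (green alternates between the phases), every transition fires, and no place ever holds more than one token, which is exactly the kind of boundedness/liveness check the paper performs on its full state space.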

  2. Characteristic of TPF-I Current Signals

    International Nuclear Information System (INIS)

    Kunamaspakorn, T.; Poolyarat, N.; Picha, R.; Promping, J.; Onjun, T.

    2014-01-01

    Thailand Plasma Focus I (TPF-I) is a dense plasma focus device which has been built and developed as a collaborative project among TINT, SIIT, and TU as a radiation source for academic research. This prototype device is powered by a 30 μF capacitor bank charged to 15 kV. In this work, we assembled a Rogowski coil, used for measuring high-speed current pulses, to capture current signals from TPF-I. The signals were then compared with simulation results from the Lee model code and found to be in good agreement. The current development status of the TPF-I will also be presented.

  3. A model for superluminal radio sources

    International Nuclear Information System (INIS)

    Milgrom, M.; Bahcall, J.N.

    1977-01-01

    A geometrical model for superluminal radio sources is described. Six predictions that can be tested by observations are summarized. The results are in agreement with all the available observations. In this model, the Hubble constant is the only numerical parameter that is important in interpreting the observed rates of change of angular separations for small redshifts. The available observations imply that H0 is less than 55 km/s/Mpc if the model is correct. (author)
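The standard kinematic relation behind apparent superluminal motion (a textbook result, not necessarily the specific geometrical model of this paper) can be written down directly: a blob moving at speed βc at angle θ to the line of sight shows an apparent transverse speed that exceeds c when β is close to 1 and θ is small.

```python
import math

def beta_apparent(beta, theta):
    """Apparent transverse speed (in units of c) of a blob moving at speed
    beta*c at angle theta (radians) to the line of sight:
        beta_app = beta * sin(theta) / (1 - beta * cos(theta))
    """
    return beta * math.sin(theta) / (1 - beta * math.cos(theta))

# The maximum over theta occurs at cos(theta) = beta, where beta_app = beta * gamma
beta = 0.98
gamma = 1 / math.sqrt(1 - beta ** 2)
theta_max = math.acos(beta)
```

For β = 0.98 the maximum apparent speed is βγ ≈ 4.9c, illustrating how modest viewing angles produce strongly superluminal angular separation rates.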

  4. Models of signal validation using artificial intelligence techniques applied to a nuclear reactor

    International Nuclear Information System (INIS)

    Oliveira, Mauro V.; Schirru, Roberto

    2000-01-01

    This work presents two models of signal validation in which the analytical redundancy of the monitored signals from a nuclear plant is provided by neural networks. In one model the analytical redundancy is provided by a single neural network, while in the other it is provided by several neural networks, each one working in a specific part of the plant's operation region. Four clustering techniques were tested to separate the entire operation region into several specific regions. Additional information on signal reliability is supplied by a fuzzy inference system. The models were implemented in the C language and tested with signals acquired from the Angra I nuclear power plant, from start-up to 100% of power. (author)

  5. Topographic filtering simulation model for sediment source apportionment

    Science.gov (United States)

    Cho, Se Jong; Wilcock, Peter; Hobbs, Benjamin

    2018-05-01

    We propose a Topographic Filtering simulation model (Topofilter) that can be used to identify those locations that are likely to contribute most of the sediment load delivered from a watershed. The reduced complexity model links spatially distributed estimates of annual soil erosion, high-resolution topography, and observed sediment loading to determine the distribution of sediment delivery ratio across a watershed. The model uses two simple two-parameter topographic transfer functions based on the distance and change in elevation from upland sources to the nearest stream channel and then down the stream network. The approach does not attempt to find a single best-calibrated solution of sediment delivery, but uses a model conditioning approach to develop a large number of possible solutions. For each model run, locations that contribute to 90% of the sediment loading are identified and those locations that appear in this set in most of the 10,000 model runs are identified as the sources that are most likely to contribute to most of the sediment delivered to the watershed outlet. Because the underlying model is quite simple and strongly anchored by reliable information on soil erosion, topography, and sediment load, we believe that the ensemble of simulation outputs provides a useful basis for identifying the dominant sediment sources in the watershed.
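
    The conditioning idea can be sketched as a toy ensemble: sample the transfer-function parameter, route erosion through an exponential delivery filter, and count how often each cell falls in the set supplying 90% of the load. All inputs below are synthetic stand-ins, not the paper's data, and the single-parameter exponential filter is a simplification of Topofilter's two-parameter functions:

```python
import numpy as np

rng = np.random.default_rng(42)
n_cells, n_runs = 200, 1000

# Hypothetical inputs: annual soil erosion per cell and distance from each
# upland cell to the nearest channel -- synthetic stand-ins
erosion = rng.lognormal(mean=1.0, sigma=1.0, size=n_cells)
distance = rng.uniform(10, 2000, size=n_cells)

counts = np.zeros(n_cells)
for _ in range(n_runs):
    lam = rng.uniform(100, 1500)                   # sampled length scale (m)
    delivered = erosion * np.exp(-distance / lam)  # exponential delivery filter
    order = np.argsort(delivered)[::-1]            # largest contributors first
    cum = np.cumsum(delivered[order]) / delivered.sum()
    top = order[: np.searchsorted(cum, 0.9) + 1]   # cells supplying 90% of load
    counts[top] += 1

# Cells flagged in a majority of runs are the "most likely" sediment sources
likely_sources = np.where(counts / n_runs > 0.5)[0]
```

The ensemble counting step, rather than any single calibrated run, is what makes the identification robust to transfer-function uncertainty.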

  6. The influence of signal type on the internal auditory representation of a room

    Science.gov (United States)

    Teret, Elizabeth

    Currently, architectural acousticians make no real distinction between a room impulse response and the auditory system's internal representation of a room. In the absence of a good model for the auditory representation of a room, it is implicitly assumed that our internal representation of a room is independent of the sound source needed to make the room characteristics audible. The extent to which this assumption holds true is examined with perceptual tests. Listeners are presented with various pairs of signals (music, speech, and noise) convolved with synthesized impulse responses of different reverberation times. They are asked to adjust the reverberation of one of the signals to match the other. Analysis of the data shows that the source signal significantly influences perceived reverberance. Listeners are less accurate when matching reverberation times of varied signals than they are with identical signals. Additional testing shows that perception of reverberation can be linked to the existence of transients in the signal.

  7. A morphing technique for signal modelling in a multidimensional space of coupling parameters

    CERN Document Server

    The ATLAS collaboration

    2015-01-01

    This note describes a morphing method that produces signal models for fits to data in which both the affected event yields and kinematic distributions are simultaneously taken into account. The signal model is morphed in a continuous manner through the available multi-dimensional parameter space. Searches for deviations from Standard Model predictions for Higgs boson properties have so far used information either from event yields or kinematic distributions. The combined approach described here is expected to substantially enhance the sensitivity to beyond the Standard Model contributions.

  8. HP Memristor mathematical model for periodic signals and DC

    KAUST Repository

    Radwan, Ahmed G.; Salama, Khaled N.; Zidan, Mohammed A.

    2012-01-01

    the formulas for any general square wave. The limiting conditions for saturation are also provided in case of either DC or periodic signals. The derived equations are compared to the SPICE model of the Memristor showing a perfect match.
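
    The record above is truncated at the start; for context, the HP memristor is commonly described by the linear ion-drift model, dx/dt = (μv·Ron/D²)·i(t) with R(x) = Ron·x + Roff·(1−x). A sketch of the saturation behavior under a periodic drive (the parameter values are typical published numbers, not taken from this paper, and the drive is deliberately strong enough to saturate the state):

```python
import numpy as np

# HP linear ion-drift model: dx/dt = k*i(t), R(x) = Ron*x + Roff*(1-x)
Ron, Roff, D, mu_v = 100.0, 16e3, 10e-9, 1e-14   # typical values (assumed)
k = mu_v * Ron / D**2

dt = 1e-4
t = np.arange(0.0, 1.0, dt)
v = 5.0 * np.sin(2 * np.pi * 1.0 * t)            # 5 V, 1 Hz periodic drive
x, xs = 0.1, []
i = np.empty_like(t)
for n, vn in enumerate(v):
    R = Ron * x + Roff * (1.0 - x)
    i[n] = vn / R
    x = min(max(x + k * i[n] * dt, 0.0), 1.0)    # state saturates at the bounds
    xs.append(x)
xs = np.asarray(xs)
```

The hard clamping of x at 0 and 1 is the saturation condition the abstract refers to; with a weaker or faster drive the device traverses only part of its state range and exhibits the characteristic pinched hysteresis instead.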

  9. Multiple Drug Treatments That Increase cAMP Signaling Restore Long-Term Memory and Aberrant Signaling in Fragile X Syndrome Models

    Science.gov (United States)

    Choi, Catherine H.; Schoenfeld, Brian P.; Bell, Aaron J.; Hinchey, Joseph; Rosenfelt, Cory; Gertner, Michael J.; Campbell, Sean R.; Emerson, Danielle; Hinchey, Paul; Kollaros, Maria; Ferrick, Neal J.; Chambers, Daniel B.; Langer, Steven; Sust, Steven; Malik, Aatika; Terlizzi, Allison M.; Liebelt, David A.; Ferreiro, David; Sharma, Ali; Koenigsberg, Eric; Choi, Richard J.; Louneva, Natalia; Arnold, Steven E.; Featherstone, Robert E.; Siegel, Steven J.; Zukin, R. Suzanne; McDonald, Thomas V.; Bolduc, Francois V.; Jongens, Thomas A.; McBride, Sean M. J.

    2016-01-01

    Fragile X is the most common monogenic disorder associated with intellectual disability (ID) and autism spectrum disorders (ASD). Additionally, many patients are afflicted with executive dysfunction, ADHD, seizure disorder and sleep disturbances. Fragile X is caused by loss of FMRP expression, which is encoded by the FMR1 gene. Both the fly and mouse models of fragile X are also based on having no functional protein expression of their respective FMR1 homologs. The fly model displays well defined cognitive impairments and structural brain defects and the mouse model, although having subtle behavioral defects, has robust electrophysiological phenotypes and provides a tool to do extensive biochemical analysis of select brain regions. Decreased cAMP signaling has been observed in samples from the fly and mouse models of fragile X as well as in samples derived from human patients. Indeed, we have previously demonstrated that strategies that increase cAMP signaling can rescue short term memory in the fly model and restore DHPG induced mGluR mediated long term depression (LTD) in the hippocampus to proper levels in the mouse model (McBride et al., 2005; Choi et al., 2011, 2015). Here, we demonstrate that the same three strategies used previously with the potential to be used clinically, lithium treatment, PDE-4 inhibitor treatment or mGluR antagonist treatment can rescue long term memory in the fly model and alter the cAMP signaling pathway in the hippocampus of the mouse model. PMID:27445731

  10. On the sources of technological change: What do the models assume?

    International Nuclear Information System (INIS)

    Clarke, Leon; Weyant, John; Edmonds, Jae

    2008-01-01

    It is widely acknowledged that technological change can substantially reduce the costs of stabilizing atmospheric concentrations of greenhouse gases. This paper discusses the sources of technological change and the representations of these sources in formal models of energy and the environment. The paper distinguishes between three major sources of technological change (R&D, learning-by-doing, and spillovers) and introduces a conceptual framework for linking modeling approaches to assumptions about these real-world sources. A selective review of modeling approaches, including those employing exogenous technological change, suggests that most formal models have meaningful real-world interpretations that focus on a subset of possible sources of technological change while downplaying the roles of others.

  11. The Unfolding of Value Sources During Online Business Model Transformation

    Directory of Open Access Journals (Sweden)

    Nadja Hoßbach

    2016-12-01

    Full Text Available Purpose: In the magazine publishing industry, viable online business models are still rare to absent. To prepare for the ‘digital future’ and safeguard their long-term survival, many publishers are currently in the process of transforming their online business model. Against this backdrop, this study aims to develop a deeper understanding of (1) how the different building blocks of an online business model are transformed over time and (2) how sources of value creation unfold during this transformation process. Methodology: To answer our research question, we conducted a longitudinal case study with a leading German business magazine publisher (called BIZ). Data was triangulated from multiple sources including interviews, internal documents, and direct observations. Findings: Based on our case study, we find that BIZ used the transformation process to differentiate its online business model from its traditional print business model along several dimensions, and that BIZ’s online business model changed from an efficiency- to a complementarity- to a novelty-based model during this process. Research implications: Our findings suggest that different business model transformation phases relate to different value sources, questioning the appropriateness of value source-based approaches for classifying business models. Practical implications: The results of our case study highlight the need for online-offline business model differentiation and point to the important distinction between service and product differentiation. Originality: Our study contributes to the business model literature by applying a dynamic and holistic perspective on the link between online business model changes and unfolding value sources.

  12. COMBINING SOURCES IN STABLE ISOTOPE MIXING MODELS: ALTERNATIVE METHODS

    Science.gov (United States)

    Stable isotope mixing models are often used to quantify source contributions to a mixture. Examples include pollution source identification; trophic web studies; analysis of water sources for soils, plants, or water bodies; and many others. A common problem is having too many s...

  13. Application of source-receptor models to determine source areas of biological components (pollen and butterflies)

    OpenAIRE

    M. Alarcón; M. Àvila; J. Belmonte; C. Stefanescu; R. Izquierdo

    2010-01-01

    The source-receptor models allow the establishment of relationships between a receptor point (sampling point) and the probable source areas (regions of emission) through the association of concentration values at the receptor point with the corresponding atmospheric back-trajectories, and, together with other techniques, to interpret transport phenomena on a synoptic scale. These models are generally used in air pollution studies to determine the areas of origin of chemical compounds measured...

  14. The eSourcing Capability Model for Service Providers: Knowledge Management across the Sourcing Life-cycle

    OpenAIRE

    Laaksonen, Pekka

    2011-01-01

    Laaksonen, Pekka. The eSourcing Capability Model for Service Providers: Knowledge Management across the Sourcing Life-cycle. Jyväskylä: University of Jyväskylä, 2011, 42 p. Information Systems Science, bachelor's thesis. Supervisor(s): Käkölä, Timo. This bachelor's thesis examined how the practices of the eSourcing Capability Model for Service Providers relate to the four processes of knowledge management: knowledge creation, storage/retrieval, sharing...

  15. The Growth of open source: A look at how companies are utilizing open source software in their business models

    OpenAIRE

    Feare, David

    2009-01-01

    This paper examines how open source software is being incorporated into the business models of companies in the software industry. The goal is to answer the question of whether the open source model can help sustain economic growth. While some companies are able to maintain a "pure" open source approach with their business model, the reality is that most companies are relying on proprietary add-on value in order to generate revenue because open source itself is simply not big business. Ultima...

  16. The differential Howland current source with high signal to noise ratio for bioimpedance measurement system

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Jinzhen; Li, Gang; Lin, Ling, E-mail: linling@tju.edu.cn [State Key Laboratory of Precision Measurement Technology and Instruments, Tianjin University, Tianjin, People's Republic of China, and Tianjin Key Laboratory of Biomedical Detecting Techniques and Instruments, Tianjin University, Tianjin (China); Qiao, Xiaoyan [College of Physics and Electronic Engineering, Shanxi University, Shanxi (China); Wang, Mengjun [School of Information Engineering, Hebei University of Technology, Tianjin (China); Zhang, Weibo [Institute of Acupuncture and Moxibustion China Academy of Chinese Medical Sciences, Beijing (China)

    2014-05-15

    The stability and signal-to-noise ratio (SNR) of the current source circuit are important factors for enhancing the accuracy and sensitivity of a bioimpedance measurement system. In this paper we propose a new differential Howland topology current source and evaluate its output characteristics by simulation and actual measurement. The results show that (1) the output current and impedance at high frequencies are stabilized after compensation; the stability of the output current in the differential current source circuit (DCSC) is 0.2%. (2) The output impedance of both current circuits is above 1 MΩ below 200 kHz and remains about 200 kΩ up to 1 MHz; overall, the output impedance of the DCSC is higher than that of the Howland current source circuit (HCSC). (3) The SNR of the DCSC is 85.64 dB in simulation and 65 dB in actual measurement at 10 kHz, which illustrates that the DCSC effectively eliminates common-mode interference. (4) The maximum load of the DCSC is twice that of the HCSC. Finally, a two-dimensional phantom electrical impedance tomography image is well reconstructed with the proposed circuit. The measured performance shows that the DCSC can significantly improve the output impedance, stability, maximum load, and SNR of the measurement system.
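
    SNR figures like those quoted above follow the usual RMS definition, SNR = 20·log10(signal_rms/noise_rms). A quick sketch with synthetic values (the 10 kHz tone and the noise floor are illustrative, not the paper's measurements):

```python
import numpy as np

def snr_db(signal, noise):
    """SNR in dB from RMS amplitudes: 20*log10(rms_signal / rms_noise)."""
    rms = lambda x: np.sqrt(np.mean(np.asarray(x, dtype=float) ** 2))
    return 20.0 * np.log10(rms(signal) / rms(noise))

fs, f0 = 1_000_000, 10_000                  # 1 MHz sampling, 10 kHz excitation
t = np.arange(0.0, 0.01, 1.0 / fs)
signal = 1.0 * np.sin(2 * np.pi * f0 * t)   # 1 V amplitude test tone
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1e-4, t.size)       # assumed 100 uV RMS noise floor
snr = snr_db(signal, noise)                 # ~77 dB for these values
```

In a real measurement the noise record would be taken with the excitation off, or estimated from the residual after subtracting the fitted tone.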

  17. A Multi-Model Stereo Similarity Function Based on Monogenic Signal Analysis in Poisson Scale Space

    Directory of Open Access Journals (Sweden)

    Jinjun Li

    2011-01-01

    Full Text Available A stereo similarity function based on local multi-model monogenic image feature descriptors (LMFD) is proposed to match interest points and estimate the disparity map for stereo images. Local multi-model monogenic image features include the local orientation and instantaneous phase of the gray monogenic signal, the local color phase of the color monogenic signal, and local mean colors in the multiscale color monogenic signal framework. The gray monogenic signal, which is the extension of the analytic signal to gray-level images using the Dirac operator and Laplace equation, consists of the local amplitude, local orientation, and instantaneous phase of a 2D image signal. The color monogenic signal is the extension of the monogenic signal to color images based on Clifford algebras. The local color phase can be estimated by computing the geometric product between the color monogenic signal and a unit reference vector in RGB color space. Experimental results on synthetic and natural stereo images show the performance of the proposed approach.
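
    The gray monogenic signal referred to above extends a 2D image by its Riesz transform, from which local amplitude, instantaneous phase, and local orientation follow. A compact frequency-domain sketch (the test grating is synthetic; the scale-space prefiltering used in practice is omitted):

```python
import numpy as np

def monogenic(img):
    """Gray monogenic signal via the Riesz transform in the frequency
    domain: returns local amplitude, instantaneous phase, orientation."""
    rows, cols = img.shape
    u = np.fft.fftfreq(cols)[None, :]
    v = np.fft.fftfreq(rows)[:, None]
    radius = np.sqrt(u**2 + v**2)
    radius[0, 0] = 1.0                                  # avoid divide-by-zero at DC
    F = np.fft.fft2(img)
    r1 = np.real(np.fft.ifft2(F * (-1j * u / radius)))  # Riesz x-component
    r2 = np.real(np.fft.ifft2(F * (-1j * v / radius)))  # Riesz y-component
    amplitude = np.sqrt(img**2 + r1**2 + r2**2)
    phase = np.arctan2(np.sqrt(r1**2 + r2**2), img)
    orientation = np.arctan2(r2, r1)
    return amplitude, phase, orientation

# Oriented unit-amplitude grating: local amplitude should be ~1 everywhere
x = np.arange(64)
img = np.sin(2 * np.pi * 8 * x[None, :] / 64) * np.ones((64, 1))
amp, phase, orient = monogenic(img)
```

The amplitude invariance for a pure grating is the property that makes the phase and orientation components useful as contrast-independent matching features.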

  18. Sequential decoding of intramuscular EMG signals via estimation of a Markov model.

    Science.gov (United States)

    Monsifrot, Jonathan; Le Carpentier, Eric; Aoustin, Yannick; Farina, Dario

    2014-09-01

    This paper addresses the sequential decoding of intramuscular single-channel electromyographic (EMG) signals to extract the activity of individual motor neurons. A hidden Markov model is derived from the physiological generation of the EMG signal. The EMG signal is described as a sum of several action potential (wavelet) trains, embedded in noise. For each train, the time interval between wavelets is modeled by a process whose parameters are linked to the muscular activity. The parameters of this process are estimated sequentially by a Bayes filter, along with the firing instants. The method was tested on simulated signals and an experimental one, for which the rates of detection and classification of action potentials were above 95% with respect to the reference decomposition. The method works sequentially in time, and is the first to address the problem of intramuscular EMG decomposition online. It has potential applications for man-machine interfacing based on motor neuron activities.
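
    The sequential Bayes filtering idea can be illustrated with a minimal two-state hidden Markov forward filter (quiescent vs. firing); the transition and emission parameters below are invented for illustration and are much simpler than the paper's wavelet-train model:

```python
import numpy as np

# Two hidden states: 0 = quiescent, 1 = motor unit firing (wavelet present).
# Transition matrix and Gaussian emission parameters are illustrative only.
A = np.array([[0.99, 0.01],
              [0.30, 0.70]])
means, stds = np.array([0.0, 3.0]), np.array([1.0, 1.0])

def forward_filter(obs):
    """Sequential (online) Bayes filter: returns P(state_t | obs_1..t)."""
    belief = np.array([0.5, 0.5])
    out = []
    for y in obs:
        pred = belief @ A                                # time update
        like = np.exp(-0.5 * ((y - means) / stds) ** 2)  # measurement update
        belief = pred * like
        belief /= belief.sum()
        out.append(belief.copy())
    return np.array(out)

obs = np.array([0.1, -0.2, 3.1, 2.8, 0.0, 0.2])
post = forward_filter(obs)   # posterior firing probability rises at samples 3-4
```

The same predict-update recursion underlies the paper's filter; there the state space additionally tracks inter-spike-interval parameters per train.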

  19. HYDROLOGY AND SEDIMENT MODELING USING THE BASINS NON-POINT SOURCE MODEL

    Science.gov (United States)

    The Non-Point Source Model (Hydrologic Simulation Program-Fortran, or HSPF) within the EPA Office of Water's BASINS watershed modeling system was used to simulate streamflow and total suspended solids within Contentnea Creek, North Carolina, which is a tributary of the Neuse Rive...

  20. Influence of physiological sources on the impedance cardiogram analyzed using 4D FEM simulations

    International Nuclear Information System (INIS)

    Ulbrich, Mark; Leonhardt, Steffen; Walter, Marian; Mühlsteff, Jens

    2014-01-01

    Impedance cardiography is a simple and inexpensive method to acquire data on hemodynamic parameters. This study analyzes the influence of four dynamic physiological sources (aortic expansion, heart contraction, lung perfusion and erythrocyte orientation) on the impedance signal using a model of the human thorax with a high temporal resolution (125 Hz) based on human MRI data. Simulations of electromagnetic fields were conducted using the finite element method. The ICG signal caused by these sources shows very good agreement with the measured signals (r = 0.89). Standard algorithms can be used to extract characteristic points to calculate left ventricular ejection time and stroke volume (SV). In the presented model, the calculated SV equals the implemented left ventricular volume change of the heart. It is shown that impedance changes due to lung perfusion and heart contraction compensate each other, and that erythrocyte orientation together with the aortic impedance basically forms the ICG signal while taking its characteristic morphology from the aortic signal. The model is robust to conductivity changes of tissues and organ displacements. In addition, it reflects the multi-frequency behavior of the thoracic impedance. (paper)

  1. Transmembrane signaling in Saccharomyces cerevisiae as a model for signaling in metazoans: state of the art after 25 years.

    Science.gov (United States)

    Engelberg, David; Perlman, Riki; Levitzki, Alexander

    2014-12-01

    In the very first article that appeared in Cellular Signalling, published in its inaugural issue in October 1989, we reviewed signal transduction pathways in Saccharomyces cerevisiae. Although this yeast was already a powerful model organism for the study of cellular processes, it was not yet a valuable instrument for the investigation of signaling cascades. In 1989, therefore, we discussed only two pathways, the Ras/cAMP and the mating (Fus3) signaling cascades. The pivotal findings concerning those pathways undoubtedly contributed to the realization that yeast is a relevant model for understanding signal transduction in higher eukaryotes. Consequently, the last 25 years have witnessed the discovery of many signal transduction pathways in S. cerevisiae, including the high-osmolarity glycerol (Hog1), Slt2/Mpk1 and Smk1 mitogen-activated protein (MAP) kinase pathways, the TOR, AMPK/Snf1, SPS, PLC1 and Pkr/Gcn2 cascades, and systems that sense and respond to various types of stress. For many cascades, orthologous pathways were identified in mammals following their discovery in yeast. Here we review advances in the understanding of signaling in S. cerevisiae over the last 25 years. When all pathways are analyzed together, some prominent themes emerge. First, wiring of signaling cascades may not be identical in all S. cerevisiae strains, but is probably specific to each genetic background. This situation complicates attempts to decipher and generalize these webs of reactions. Secondly, the Ras/cAMP and the TOR cascades are pivotal pathways that affect all processes of the life of the yeast cell, whereas the yeast MAP kinase pathways are not essential. Yeast cells deficient in all MAP kinases proliferate normally. Another theme is the existence of central molecular hubs, either as single proteins (e.g., Msn2/4, Flo11) or as multisubunit complexes (e.g., TORC1/2), which are controlled by numerous pathways and in turn determine the fate of the cell.
It is also apparent that

  2. Skull's acoustic attenuation and dispersion modeling on photoacoustic signal

    Science.gov (United States)

    Mohammadi, Leila; Behnam, Hamid; Tavakkoli, Jahan; Nasiriavanaki, Mohammadreza

    2018-02-01

    Despite the promising results of the recent novel transcranial photoacoustic (PA) brain imaging technology, it has been demonstrated that the presence of the skull severely affects the performance of this imaging modality. We theoretically investigate the effects of acoustic heterogeneity induced by the skull on the PA signals generated from single particles, first developing a mathematical model of this phenomenon and then validating the results experimentally. The model takes into account the frequency-dependent attenuation and dispersion effects that occur with wave reflection, refraction and mode conversion at the skull surfaces. Numerical simulations based on the developed model are performed to calculate the propagation of photoacoustic waves through the skull. The results show strong agreement between simulation and an ex-vivo study. The findings are as follows: the thickness of the skull is the factor that most deteriorates the PA signal, affecting both its amplitude (attenuation) and phase (distortion). We also demonstrate that when the target region is shallow, at a depth comparable to the skull thickness, the skull-induced distortion becomes increasingly severe, and the reconstructed image would be strongly distorted without correcting for these effects. It is anticipated that an accurate quantification and modeling of the skull transmission effects would ultimately allow for aberration correction in transcranial PA brain imaging.
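
    The frequency-dependent attenuation part of such a model can be sketched as a power-law filter applied in the frequency domain. The coefficient, exponent, and thicknesses below are assumed illustrative values, and dispersion, reflection and mode conversion are omitted:

```python
import numpy as np

def through_skull(sig, fs, thickness_m, alpha0=230.0, power=1.0):
    """Attenuate a pulse with a power-law amplitude filter
    |H(f)| = exp(-alpha0 * |f|^power * d); alpha0 in Np/(m*MHz^power)
    is an assumed bone-like value. Phase dispersion is neglected."""
    f = np.fft.rfftfreq(sig.size, 1.0 / fs) / 1e6        # MHz
    H = np.exp(-alpha0 * np.abs(f) ** power * thickness_m)
    return np.fft.irfft(np.fft.rfft(sig) * H, n=sig.size)

fs = 50e6
t = np.arange(0.0, 4e-6, 1.0 / fs)
pulse = np.exp(-((t - 2e-6) ** 2) / (2 * (100e-9) ** 2))  # ~100 ns PA pulse
thin = through_skull(pulse, fs, 0.002)                    # 2 mm skull
thick = through_skull(pulse, fs, 0.007)                   # 7 mm skull
```

As expected from the abstract, increasing the thickness attenuates the high-frequency content more strongly, lowering and broadening the transmitted pulse.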

  3. Faster universal modeling for two source classes

    NARCIS (Netherlands)

    Nowbakht, A.; Willems, F.M.J.; Macq, B.; Quisquater, J.-J.

    2002-01-01

    The Universal Modeling algorithms proposed in [2] for two general classes of finite-context sources are reviewed. The above methods were constructed by viewing a model structure as a partition of the context space and realizing that a partition can be reached through successive splits. Here we start

  4. Possible signals of vacuum dynamics in the Universe

    Science.gov (United States)

    Peracaula, Joan Solà; de Cruz Pérez, Javier; Gómez-Valent, Adrià

    2018-05-01

    We study a generic class of time-evolving vacuum models which can provide a better phenomenological account of the overall cosmological observations than the ΛCDM. Among these models, the running vacuum model (RVM) appears to be the most motivated and favored one, at a confidence level of ~3σ. We further support these results by computing the Akaike and Bayesian information criteria. Our analysis also shows that we can extract fair signals of dynamical dark energy (DDE) by confronting the same set of data with the generic XCDM and CPL parametrizations. In all cases we confirm that the combined triad of modern observations on Baryonic Acoustic Oscillations, Large Scale Structure formation, and the Cosmic Microwave Background provides the bulk of the signal sustaining a possible vacuum dynamics. In the absence of any of these three crucial data sources, the DDE signal cannot be perceived at a significant confidence level. Its possible existence could be a cure for some of the tensions of the ΛCDM when confronted with observations.
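
    The Akaike and Bayesian information criteria mentioned above are straightforward to compute from a fit's maximum log-likelihood. The likelihood values and parameter counts below are made-up numbers purely for illustration, not the paper's fit results:

```python
import numpy as np

def aic(log_likelihood, k):
    """Akaike information criterion: 2k - 2 ln(L_max)."""
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    """Bayesian information criterion: k ln(n) - 2 ln(L_max)."""
    return k * np.log(n) - 2 * log_likelihood

# Hypothetical comparison: baseline model (k=6) vs. one extra parameter (k=7)
lnL_base, lnL_ext, n = -512.0, -507.0, 1000
d_aic = aic(lnL_base, 6) - aic(lnL_ext, 7)      # positive favors extended model
d_bic = bic(lnL_base, 6, n) - bic(lnL_ext, 7, n)
```

BIC penalizes the extra parameter by ln(n) instead of 2, so for the same likelihood gain the BIC preference is weaker than the AIC one.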

  5. Source mechanism of Vulcanian degassing at Popocatépetl Volcano, Mexico, determined from waveform inversions of very long period signals

    Science.gov (United States)

    Chouet, Bernard A.; Dawson, Phillip B.; Arciniega-Ceballos, Alejandra

    2005-01-01

    The source mechanism of very long period (VLP) signals accompanying volcanic degassing bursts at Popocatépetl is analyzed in the 15–70 s band by minimizing the residual error between data and synthetics calculated for a point source embedded in a homogeneous medium. The waveforms of two eruptions (23 April and 23 May 2000) representative of mild Vulcanian activity are well reproduced by our inversion, which takes into account volcano topography. The source centroid is positioned 1500 m below the western perimeter of the summit crater, and the modeled source is composed of a shallow dipping crack (sill with easterly dip of 10°) intersecting a steeply dipping crack (northeast striking dike dipping 83° northwest), whose surface extension bisects the vent. Both cracks undergo a similar sequence of inflation, deflation, and reinflation, reflecting a cycle of pressurization, depressurization, and repressurization within a time interval of 3–5 min. The largest moment release occurs in the sill, showing a maximum volume change of 500–1000 m3, pressure drop of 3–5 MPa, and amplitude of recovered pressure equal to 1.2 times the amplitude of the pressure drop. In contrast, the maximum volume change in the dike is less (200–300 m3), with a corresponding pressure drop of 1–2 MPa and pressure recovery equal to the pressure drop. Accompanying these volumetric sources are single-force components with magnitudes of 108 N, consistent with melt advection in response to pressure transients. The source time histories of the volumetric components of the source indicate that significant mass movement starts within the sill and triggers a mass movement response in the dike within a few seconds. Such source behavior is consistent with the opening of a pathway for escape of pent-up gases from slow pressurization of the sill driven by magma crystallization. The opening of this pathway and associated rapid evacuation of volcanic gases induces the pressure drop. Pressure

  6. Development of repository-wide radionuclide transport model considering the effects of multiple sources

    International Nuclear Information System (INIS)

    Hatanaka, Koichiro; Watari, Shingo; Ijiri, Yuji

    1999-11-01

    Safety assessment of the geological isolation system under the groundwater scenario has traditionally been conducted based on a single canister configuration; the safety of the total system has then been evaluated from dose rates obtained by multiplying the migration rates released from the engineered barrier and/or the natural barrier by dose conversion factors and the total number of canisters disposed in the repository. The dose conversion factors can be obtained from the biosphere analysis. In this study, we focused on the effect of multiple sources due to the disposal of canisters at different positions in the repository. When the effect of multiple sources is taken into consideration, concentration interference in the repository region can take place. Therefore, a radionuclide transport model/code considering the effect of concentration interference due to multiple sources was developed to assess the effect quantitatively. The newly developed model/code was verified through comparison analysis with the existing radionuclide transport analysis code used in the second progress report. In addition, the effect of concentration interference was evaluated by setting up a simple problem using the newly developed analysis code. These results show that the maximum peak value of the migration rates from the repository was about two orders of magnitude lower than that based on the single canister configuration. Since the analysis code was developed by assuming that all canisters disposed of along the one-dimensional groundwater flow contribute to the concentration interference in the repository region, this assumption should be verified by conducting two- or three-dimensional analyses considering heterogeneous geological structure as future work. (author)
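
    Concentration interference between canisters can be illustrated by superposing 1-D advection-dispersion plumes from instantaneous point releases. All parameter values below are arbitrary illustrative choices, not from the report, and real assessments use decaying, retarded, time-extended sources:

```python
import numpy as np

def plume(x, t, x0, mass, v, D):
    """1-D advection-dispersion solution for an instantaneous release of
    `mass` at position x0 (unit cross-section, no decay or sorption)."""
    return mass / np.sqrt(4.0 * np.pi * D * t) * np.exp(
        -((x - x0 - v * t) ** 2) / (4.0 * D * t))

x = np.linspace(0.0, 500.0, 2001)
v, D, t = 1.0, 5.0, 100.0            # m/yr, m^2/yr, yr -- assumed values
sources = [0.0, 20.0, 40.0]          # canister positions along the flow path

single = plume(x, t, sources[0], 1.0, v, D)
combined = sum(plume(x, t, x0, 1.0, v, D) for x0 in sources)
```

Because the overlapping plumes spread the same total mass over a wider region, the combined peak is lower than three times the single-source peak, which is the qualitative effect behind the reduced peak migration rates reported above.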

  7. A Dynamic Traffic Signal Timing Model and its Algorithm for Junction of Urban Road

    DEFF Research Database (Denmark)

    Cai, Yanguang; Cai, Hao

    2012-01-01

    As an important part of an Intelligent Transportation System, scientific traffic signal timing at junctions can improve the efficiency of urban transport. This paper presents a novel dynamic traffic signal timing model. According to the characteristics of the model, a hybrid chaotic quantum evolutionary algorithm is employed to solve it. The proposed model has a simple structure, and only requires that the traffic inflow speed and outflow speed be bounded functions with at most a finite number of discontinuity points. This condition is very loose and better meets the requirements of practical real-time and dynamic signal control of junctions. To obtain the optimal solution of the model by the hybrid chaotic quantum evolutionary algorithm, the model is converted to an easily solvable form. To simplify calculation, we give the expression of the partial derivative and change rate of the objective function
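
    A fluid-queue toy version of such a timing model: inflow and saturation-flow rates bound the queues, and the green split is chosen to minimize accumulated delay. The two-phase layout, the rates, and the exhaustive search over splits are all illustrative simplifications, not the paper's model or algorithm:

```python
import numpy as np

def total_delay(green1, cycle, inflow, satflow, horizon=3600, dt=1.0):
    """Accumulated vehicle-seconds of delay at a two-phase junction for a
    given green split (fixed cycle, deterministic fluid queue model)."""
    q = np.zeros(2)
    green = np.array([green1, cycle - green1])
    delay = 0.0
    for step in range(int(horizon / dt)):
        t_in_cycle = (step * dt) % cycle
        phase = 0 if t_in_cycle < green[0] else 1   # which approach has green
        for i in range(2):
            out = satflow * dt if i == phase else 0.0
            q[i] = max(0.0, q[i] + inflow[i] * dt - out)
        delay += q.sum() * dt
    return delay

# Exhaustive search over candidate green splits for a 60 s cycle
splits = np.arange(10, 51, 5)
delays = [total_delay(g, 60, (0.30, 0.15), 0.5) for g in splits]
best = splits[int(np.argmin(delays))]   # only g=40 keeps both queues stable
```

With these rates, phase 1 needs at least 36 s of green and phase 2 at least 18 s, so 40 s is the only stable candidate on the grid; infeasible splits let one queue grow without bound over the horizon.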

  8. A 1D ion species model for an RF driven negative ion source

    Science.gov (United States)

    Turner, I.; Holmes, A. J. T.

    2017-08-01

    A one-dimensional model for an RF driven negative ion source has been developed based on an inductive discharge. The RF source differs from traditional filament and arc ion sources because there are no primary electrons present, and is simply composed of an antenna region (driver) and a main plasma discharge region. However the model does still make use of the classical plasma transport equations for particle energy and flow, which have previously worked well for modelling DC driven sources. The model has been developed primarily to model the Small Negative Ion Facility (SNIF) ion source at CCFE, but may be easily adapted to model other RF sources. Currently the model considers the hydrogen ion species, and provides a detailed description of the plasma parameters along the source axis, i.e. plasma temperature, density and potential, as well as current densities and species fluxes. The inputs to the model are currently the RF power, the magnetic filter field and the source gas pressure. Results from the model are presented and where possible compared to existing experimental data from SNIF, with varying RF power, source pressure.

  9. Open Source Software Success Model for Iran: End-User Satisfaction Viewpoint

    Directory of Open Access Journals (Sweden)

    Ali Niknafs

    2012-03-01

    Full Text Available Open source software development is a notable option for software companies. In recent years, the many advantages of this type of software have driven a move toward it in Iran. Problems with national security, international restrictions, and software and service costs, among others, have intensified the importance of using this software. Users and their viewpoints are the critical success factor in software plans. However, there is no appropriate model for the open source software case in Iran. This research set out to develop a model for measuring open source software success in Iran. The model was tested using data gathered from open source users through an online survey. The results showed that the components with a positive effect on open source success were user satisfaction, open source community service quality, open source quality, copyright, and security.

  10. Model-based synthesis of locally contingent responses to global market signals

    Science.gov (United States)

    Magliocca, N. R.

    2015-12-01

    Rural livelihoods and the land systems on which they depend are increasingly influenced by distant markets through economic globalization. Place-based analyses of land and livelihood system sustainability must then consider both proximate and distant influences on local decision-making. Thus, advancing land change theory in the context of economic globalization calls for a systematic understanding of the general processes as well as local contingencies shaping local responses to global signals. Synthesis of insights from place-based case studies of land and livelihood change is a path forward for developing such systematic knowledge. This paper introduces a model-based synthesis approach to investigating the influence of local socio-environmental and agent-level factors in mediating land-use and livelihood responses to changing global market signals. A generalized agent-based modeling framework is applied to six case-study sites that differ in environmental conditions, market access and influence, and livelihood settings. The largest modeled land conversions and livelihood transitions to market-oriented production occurred in sites with relatively productive agricultural land and/or with limited livelihood options. Experimental shifts in the distributions of agents' risk tolerances generally acted to attenuate or amplify responses to changes in global market signals. Importantly, however, responses of agents at different points in the risk tolerance distribution varied widely, with the wealth gap growing wider between agents with higher or lower risk tolerance. These results demonstrate model-based synthesis is a promising approach to overcome many of the challenges of current synthesis methods in land change science, and to identify generalized as well as locally contingent responses to global market signals.
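
    The risk-tolerance experiments can be caricatured with a minimal agent-based sketch in which only agents above a tolerance threshold adopt volatile market-oriented production. The threshold, returns, and distributions are invented for illustration and bear no relation to the study's calibrated framework:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
risk_tol = rng.beta(2, 5, n)         # heterogeneous risk tolerance in [0, 1]
wealth = np.ones(n)

for year in range(20):
    price = rng.normal(1.0, 0.4)     # volatile global market price signal
    adopters = risk_tol > 0.35       # high-tolerance agents go to market
    wealth[adopters] += price - 0.8  # market crop: higher mean, high variance
    wealth[~adopters] += 0.1         # subsistence: low but certain return

# Wealth gap between the high- and low-tolerance groups after 20 years
gap = wealth[risk_tol > 0.35].mean() - wealth[risk_tol <= 0.35].mean()
```

Even this caricature reproduces the qualitative point that heterogeneity in risk tolerance, not just the mean, shapes aggregate outcomes: non-adopters end with identical certain wealth while adopters' outcomes disperse with the price draws.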

  11. Pseudo-dynamic source modelling with 1-point and 2-point statistics of earthquake source parameters

    KAUST Repository

    Song, S. G.; Dalguer, L. A.; Mai, Paul Martin

    2013-01-01

    statistical framework that governs the finite-fault rupture process with 1-point and 2-point statistics of source parameters in order to quantify the variability of finite source models for future scenario events. We test this method by extracting 1-point

  12. Model Predictive Control of Z-source Neutral Point Clamped Inverter

    DEFF Research Database (Denmark)

    Mo, Wei; Loh, Poh Chiang; Blaabjerg, Frede

    2011-01-01

This paper presents Model Predictive Control (MPC) of a Z-source Neutral Point Clamped (NPC) inverter. For illustration, current control of a Z-source NPC grid-connected inverter is analyzed and simulated. With MPC’s advantage of easily including system constraints, load current, impedance network...... response are obtained at the same time with a formulated Z-source NPC inverter network model. Steady-state and transient simulation results of the MPC are presented, showing the good reference-tracking ability of this method. It provides a new control method for the Z-source NPC inverter...
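The scheme can be sketched generically as finite-control-set MPC: at each sampling instant, every admissible inverter voltage level is simulated one step ahead and the level minimizing a tracking cost is applied. The single-phase plant, parameter values, and cost function below are illustrative assumptions, not the authors' Z-source model:

```python
import numpy as np

# Hypothetical single-phase plant: L-filter between inverter and grid
L, R, Ts, Vdc = 10e-3, 0.1, 50e-6, 400.0
levels = np.array([-Vdc / 2, 0.0, Vdc / 2])   # NPC output voltage levels

def predict(i, v_inv, v_grid):
    """One-step Euler prediction of the inductor current."""
    return i + Ts / L * (v_inv - v_grid - R * i)

t = np.arange(0.0, 0.04, Ts)
i_ref = 10 * np.sin(2 * np.pi * 50 * t)        # current reference
v_grid = 155 * np.sin(2 * np.pi * 50 * t)      # grid voltage

i, hist = 0.0, []
for k in range(len(t) - 1):
    # Enumerate the finite control set and apply the voltage level with
    # the least predicted tracking error (constraints could enter here).
    cost = np.abs(i_ref[k + 1] - predict(i, levels, v_grid[k]))
    i = predict(i, levels[np.argmin(cost)], v_grid[k])
    hist.append(abs(i - i_ref[k + 1]))

print("mean tracking error [A]:", np.mean(hist))
```

Constraints (current limits, capacitor-voltage balancing for the NPC, shoot-through states for the Z-source network) would be added as extra cost terms, which is the "easily including system constraints" advantage the abstract mentions.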

  13. Cosmogenic photons strongly constrain UHECR source models

    Directory of Open Access Journals (Sweden)

    van Vliet Arjen

    2017-01-01

    Full Text Available With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT’s IGRB, as long as their number density is not strongly peaked at recent times.

  14. A Heckman selection model for the safety analysis of signalized intersections.

    Directory of Open Access Journals (Sweden)

    Xuecai Xu

Full Text Available The objective of this paper is to provide a new method for estimating crash rate and severity simultaneously. This study explores a Heckman selection model of the crash rate and severity simultaneously at different levels, and a two-step procedure is used to investigate the crash rate and severity levels. The first step uses a probit regression model to determine the sample selection process, and the second step develops a multiple regression model to simultaneously evaluate the crash rate and severity for slight injury/killed or serious injury (KSI), respectively. The model uses 555 observations from 262 signalized intersections in the Hong Kong metropolitan area, integrated with information on the traffic flow, geometric road design, road environment, traffic control and any crashes that occurred during two years. The results of the proposed two-step Heckman selection model illustrate the necessity of different crash rates for different crash severity levels. A comparison with the existing approaches suggests that the Heckman selection model offers an efficient and convenient alternative method for evaluating the safety performance at signalized intersections.
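A minimal sketch of the two-step Heckman procedure described above: a probit selection equation in step 1, then an outcome regression augmented with the inverse Mills ratio in step 2. The data-generating process and coefficients are invented for illustration, not the paper's crash data:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 5000

# Synthetic data: selection depends on z, the outcome on x, and the two
# error terms are correlated (rho = 0.5), which is what biases naive OLS.
x = rng.normal(size=n)
z = rng.normal(size=n)
u, e = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], n).T
sel = (0.5 + z + u) > 0                  # which observations are sampled
y = 2.0 + 1.5 * x + e                    # latent outcome

# Step 1: probit selection equation, fit by maximum likelihood
Z = np.column_stack([np.ones(n), z])
def nll(b):
    p = norm.cdf(Z @ b).clip(1e-10, 1 - 1e-10)
    return -(sel * np.log(p) + (~sel) * np.log(1 - p)).sum()
b = minimize(nll, np.zeros(2)).x
imr = norm.pdf(Z @ b) / norm.cdf(Z @ b)  # inverse Mills ratio

# Step 2: OLS on the selected sample, augmented with the Mills ratio
X = np.column_stack([np.ones(sel.sum()), x[sel], imr[sel]])
beta = np.linalg.lstsq(X, y[sel], rcond=None)[0]
print(beta)   # approximately [2.0, 1.5, 0.5]
```

The coefficient on the Mills ratio estimates rho times the outcome-error standard deviation; a value significantly different from zero signals that the selection correction was needed.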

  15. Attending to weak signals: the leader's challenge.

    Science.gov (United States)

    Kerfoot, Karlene

    2005-12-01

Halverson and Isham (2003) quote sources reporting that the accidental death rate from simply being in a hospital is " ... four hundred times more likely than your risk of death from traveling by train, forty times higher than driving a car, and twenty times higher than flying in a commercial aircraft" (p. 13). High-reliability organizations such as nuclear power plants and aircraft carriers have been pioneers in the business of recognizing weak signals. Weick and Sutcliffe (2001) note that high-reliability organizations distinguish themselves from others because of their mindfulness, which enables them to see the significance of weak signals and to respond to weak signals with strong interventions. To act mindfully, these organizations have an underlying mental model of continually updating, anticipating, and focusing on the possibility of failure, using the intelligence that weak signals provide. Much of what happens in health care is unexpected. However, with a culture that continually looks for weak signals, and intervenes and rescues when these signals are detected, the unexpected happens less often. This is the epitome of how leaders can build a culture of safety that focuses on recognizing weak signals to manage the unforeseen.

  16. Detection of Noise in Composite Step Signal Pattern by Visualizing Signal Waveforms

    Directory of Open Access Journals (Sweden)

    Chaman Verma

    2018-03-01

Full Text Available The Composite Step Signal is a combination of vital informative signals that are compressed and coded to produce a predefined test image on a display device. It carries a desired sequence of information from source to destination. This information may be transmitted as a digital signal, video information or a data signal required as input by the destination module. For testing of display panels, Composite Test Signals are the most important attribute of a test signal transmission system. In this paper we present an approach for detecting noise in a Composite Step Signal by analysing its waveforms. The analysis of the signal waveforms reveals the noise-affected components of the signal, and subsequently a noise reduction process is initiated which targets only the noisy signal components. Thus the quality of the signal is not compromised during the noise reduction process.

  17. System level modelling with open source tools

    DEFF Research Database (Denmark)

    Jakobsen, Mikkel Koefoed; Madsen, Jan; Niaki, Seyed Hosein Attarzadeh

, called ForSyDe. ForSyDe is available under the open-source approach, which allows small and medium enterprises (SMEs) to get easy access to advanced modeling capabilities and tools. We give an introduction to the design methodology through the system-level modeling of a simple industrial use case, and we...

  18. Synchronous Modeling of Modular Avionics Architectures using the SIGNAL Language

    OpenAIRE

Gamatié, Abdoulaye; Gautier, Thierry

    2002-01-01

This document presents a study on the modeling of architecture components for avionics applications. We consider the avionics standard ARINC 653 specifications as a basis, as well as the synchronous language SIGNAL to describe the modeling. A library of APEX object models (partition, process, communication and synchronization services, etc.) has been implemented. This should allow distributed real-time applications to be described using POLYCHRONY, so as to access formal tools and techniques for ar...

  19. Rate-control algorithms testing by using video source model

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Turlikov, Andrey; Ukhanova, Anna

    2008-01-01

In this paper a method for testing rate-control algorithms by the use of a video source model is suggested. The proposed method allows algorithm testing to be significantly improved over a large test set.

  20. Data Sources Available for Modeling Environmental Exposures in Older Adults

    Science.gov (United States)

    This report, “Data Sources Available for Modeling Environmental Exposures in Older Adults,” focuses on information sources and data available for modeling environmental exposures in the older U.S. population, defined here to be people 60 years and older, with an emphasis on those...

  1. Retrieving global aerosol sources from satellites using inverse modeling

    Directory of Open Access Journals (Sweden)

    O. Dubovik

    2008-01-01

    Full Text Available Understanding aerosol effects on global climate requires knowing the global distribution of tropospheric aerosols. By accounting for aerosol sources, transports, and removal processes, chemical transport models simulate the global aerosol distribution using archived meteorological fields. We develop an algorithm for retrieving global aerosol sources from satellite observations of aerosol distribution by inverting the GOCART aerosol transport model.

    The inversion is based on a generalized, multi-term least-squares-type fitting, allowing flexible selection and refinement of a priori algorithm constraints. For example, limitations can be placed on retrieved quantity partial derivatives, to constrain global aerosol emission space and time variability in the results. Similarities and differences between commonly used inverse modeling and remote sensing techniques are analyzed. To retain the high space and time resolution of long-period, global observational records, the algorithm is expressed using adjoint operators.

Successful global aerosol emission retrievals at 2°×2.5° resolution were obtained by inverting GOCART aerosol transport model output, assuming constant emissions over the diurnal cycle, and neglecting aerosol compositional differences. In addition, fine and coarse mode aerosol emission sources were inverted separately from MODIS fine and coarse mode aerosol optical thickness data, respectively. These assumptions are justified, based on observational coverage and accuracy limitations, producing valuable aerosol source locations and emission strengths. From two weeks of daily MODIS observations during August 2000, the global placement of fine mode aerosol sources agreed with available independent knowledge, even though the inverse method did not use any a priori information about aerosol sources, and was initialized with a "zero aerosol emission" assumption. Retrieving coarse mode aerosol emissions was less successful
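The multi-term least-squares fitting with an a priori constraint can be illustrated on a toy linear inversion. The "transport" operator, regularization weight, and sources below are invented stand-ins for the GOCART adjoint machinery:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50

# Toy linear "transport" operator: each observation is a smoothed,
# shifted image of the emission field (a stand-in for the CTM adjoint).
K = np.array([[np.exp(-0.5 * ((i - j - 3) / 2.0) ** 2) for j in range(n)]
              for i in range(n)])
x_true = np.zeros(n)
x_true[[10, 30]] = [1.0, 0.6]               # two point-like sources
y = K @ x_true + rng.normal(0, 0.01, n)     # noisy "satellite" data

# Multi-term least squares: data misfit plus a smoothness a priori term
D = np.eye(n) - np.eye(n, k=1)              # first-difference operator
lam = 0.05
x_hat = np.linalg.solve(K.T @ K + lam * D.T @ D, K.T @ y)
print("recovered peak near index", int(np.argmax(x_hat)))
```

The a priori term plays the role the abstract describes: without it the near-singular operator amplifies observation noise, and with it the retrieval starts from effectively "zero emissions" yet still localizes the sources.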

  2. Effects of Source RDP Models and Near-source Propagation: Implication for Seismic Yield Estimation

    Science.gov (United States)

    Saikia, C. K.; Helmberger, D. V.; Stead, R. J.; Woods, B. B.

It has proven difficult to uniquely untangle the source and propagation effects on the observed seismic data from underground nuclear explosions, even when large quantities of near-source, broadband data are available for analysis. This leads to uncertainties in our ability to quantify the nuclear seismic source function and, consequently, the accuracy of seismic yield estimates for underground explosions. Extensive deterministic modeling analyses of the seismic data recorded from underground explosions at a variety of test sites have been conducted over the years and the results of these studies suggest that variations in the seismic source characteristics between test sites may be contributing to the observed differences in the magnitude/yield relations applicable at those sites. This contributes to our uncertainty in the determination of seismic yield estimates for explosions at previously uncalibrated test sites. In this paper we review issues involving the relationship of Nevada Test Site (NTS) source scaling laws to those at other sites. The Joint Verification Experiment (JVE) indicates that a magnitude (mb) bias (δmb) exists between the Semipalatinsk test site (STS) in the former Soviet Union (FSU) and the Nevada test site (NTS) in the United States. Generally this δmb is attributed to differential attenuation in the upper mantle beneath the two test sites. This assumption results in rather large estimates of yield for large mb tunnel shots at Novaya Zemlya. A re-examination of the US testing experiments suggests that this δmb bias can partly be explained by anomalous NTS (Pahute) source characteristics. This interpretation is based on the modeling of US events at a number of test sites. Using a modified Haskell source description, we investigated the influence of the source Reduced Displacement Potential (RDP) parameters ψ∞, K and B by fitting short- and long-period data simultaneously, including the near-field body and surface waves. In general
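One common modified-Haskell parameterization of the RDP in terms of ψ∞, K and B can be evaluated directly; note this particular functional form is an assumption here, as the paper's exact expression may differ:

```python
import numpy as np

def rdp(t, psi_inf=1.0, K=8.0, B=2.0):
    """Modified Haskell reduced displacement potential, in one common
    parameterization (assumed, not taken from the paper):
        psi(t) = psi_inf*[1 - exp(-K t)*(1 + Kt + (Kt)^2/2 - B*(Kt)^3)]
    B > 0 produces the characteristic overshoot before psi settles
    to the static level psi_inf."""
    kt = K * t
    return psi_inf * (1 - np.exp(-kt) * (1 + kt + 0.5 * kt**2 - B * kt**3))

t = np.linspace(0, 3, 3001)
psi = rdp(t)
print("static level :", psi[-1])          # tends to psi_inf
print("overshoot    :", psi.max() / psi[-1])
```

K sets the corner frequency (time scale) of the source and B the overshoot, which is why the two trade off against ψ∞ when fitting short- and long-period data simultaneously.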

  3. Icotinib inhibits EGFR signaling and alleviates psoriasis-like symptoms in animal models.

    Science.gov (United States)

    Tan, Fenlai; Yang, Guiqun; Wang, Yanping; Chen, Haibo; Yu, Bo; Li, He; Guo, Jing; Huang, Xiaoling; Deng, Yifang; Yu, Pengxia; Ding, Lieming

    2018-02-01

To investigate the effects of icotinib hydrochloride and a derivative cream on epidermal growth factor receptor (EGFR) signaling and within animal psoriasis models, respectively. The effect of icotinib on EGFR signaling was examined in HaCaT cells, while its effect on angiogenesis was tested in chick embryo chorioallantoic membranes (CAM). The effectiveness of icotinib in treating psoriasis was tested in three psoriasis models, including diethylstilbestrol-treated mouse vaginal epithelial cells, mouse tail granular cell layer formation, and propranolol-induced psoriasis-like features in guinea pig ear skin. Icotinib treatment blocked EGFR signaling and reduced HaCaT cell viability as well as suppressed CAM angiogenesis. Topical application of icotinib ameliorated psoriasis-like histological characteristics in mouse and guinea pig psoriasis models. Icotinib also significantly inhibited mouse vaginal epithelium mitosis, promoted mouse tail squamous epidermal granular layer formation, and reduced the thickness of the horny layer of the propranolol-treated auricular dorsal surface of guinea pigs. We conclude that icotinib can effectively inhibit psoriasis in animal models. Future clinical studies should be conducted to explore the therapeutic effects of icotinib in humans. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  4. Developing seismogenic source models based on geologic fault data

    Science.gov (United States)

    Haller, Kathleen M.; Basili, Roberto

    2011-01-01

    Calculating seismic hazard usually requires input that includes seismicity associated with known faults, historical earthquake catalogs, geodesy, and models of ground shaking. This paper will address the input generally derived from geologic studies that augment the short historical catalog to predict ground shaking at time scales of tens, hundreds, or thousands of years (e.g., SSHAC 1997). A seismogenic source model, terminology we adopt here for a fault source model, includes explicit three-dimensional faults deemed capable of generating ground motions of engineering significance within a specified time frame of interest. In tectonically active regions of the world, such as near plate boundaries, multiple seismic cycles span a few hundred to a few thousand years. In contrast, in less active regions hundreds of kilometers from the nearest plate boundary, seismic cycles generally are thousands to tens of thousands of years long. Therefore, one should include sources having both longer recurrence intervals and possibly older times of most recent rupture in less active regions of the world rather than restricting the model to include only Holocene faults (i.e., those with evidence of large-magnitude earthquakes in the past 11,500 years) as is the practice in tectonically active regions with high deformation rates. During the past 15 years, our institutions independently developed databases to characterize seismogenic sources based on geologic data at a national scale. Our goal here is to compare the content of these two publicly available seismogenic source models compiled for the primary purpose of supporting seismic hazard calculations by the Istituto Nazionale di Geofisica e Vulcanologia (INGV) and the U.S. Geological Survey (USGS); hereinafter we refer to the two seismogenic source models as INGV and USGS, respectively. 
This comparison is timely because new initiatives are emerging to characterize seismogenic sources at the continental scale (e.g., SHARE in the

  5. New model for gain control of signal intensity to object distance in echolocating bats

    DEFF Research Database (Denmark)

    Nørum, Ulrik; Brinkløv, Signe; Surlykke, Annemarie

    2012-01-01

    Echolocating bats emit ultrasonic calls and listen for the returning echoes to orient and localize prey in darkness. The emitted source level, SL (estimated signal intensity 10 cm from the mouth), is adjusted dynamically from call to call in response to sensory feedback as bats approach objects. ...

  6. Acoustic sources of opportunity in the marine environment - Applied to source localization and ocean sensing

    Science.gov (United States)

    Verlinden, Christopher M.

    Controlled acoustic sources have typically been used for imaging the ocean. These sources can either be used to locate objects or characterize the ocean environment. The processing involves signal extraction in the presence of ambient noise, with shipping being a major component of the latter. With the advent of the Automatic Identification System (AIS) which provides accurate locations of all large commercial vessels, these major noise sources can be converted from nuisance to beacons or sources of opportunity for the purpose of studying the ocean. The source localization method presented here is similar to traditional matched field processing, but differs in that libraries of data-derived measured replicas are used in place of modeled replicas. In order to account for differing source spectra between library and target vessels, cross-correlation functions are compared instead of comparing acoustic signals directly. The library of measured cross-correlation function replicas is extrapolated using waveguide invariant theory to fill gaps between ship tracks, fully populating the search grid with estimated replicas allowing for continuous tracking. In addition to source localization, two ocean sensing techniques are discussed in this dissertation. The feasibility of estimating ocean sound speed and temperature structure, using ship noise across a drifting volumetric array of hydrophones suspended beneath buoys, in a shallow water marine environment is investigated. Using the attenuation of acoustic energy along eigenray paths to invert for ocean properties such as temperature, salinity, and pH is also explored. In each of these cases, the theory is developed, tested using numerical simulations, and validated with data from acoustic field experiments.

  7. MP3 compression of Doppler ultrasound signals.

    Science.gov (United States)

    Poepping, Tamie L; Gill, Jeremy; Fenster, Aaron; Holdsworth, David W

    2003-01-01

The effect of lossy MP3 compression on spectral parameters derived from Doppler ultrasound (US) signals was investigated. Compression was tested on signals acquired from two sources: (1) phase quadrature and (2) stereo audio directional output. A total of 11 10-s acquisitions of Doppler US signal were collected from each source at three sites in a flow phantom. Doppler signals were digitized at 44.1 kHz and compressed using four grades of MP3 compression (in kilobits per second, kbps; compression ratios in brackets): 1400 kbps (uncompressed), 128 kbps (11:1), 64 kbps (22:1) and 32 kbps (44:1). Doppler spectra were characterized by peak velocity, mean velocity, spectral width, integrated power and the ratio of spectral power between negative and positive velocities. The results suggest that MP3 compression of digital Doppler US signals is feasible at 128 kbps, with a resulting 11:1 compression ratio, without compromising clinically relevant information. Higher compression ratios led to significant differences for both signal sources when compared with the uncompressed signals. Copyright 2003 World Federation for Ultrasound in Medicine & Biology.
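Spectral parameters of the kind compared before and after compression (mean frequency/velocity, spectral width, integrated power, peak) can be computed as moments of the power spectrum. The synthetic signal below is a hypothetical stand-in for Doppler audio, not the study's phantom data:

```python
import numpy as np

fs = 44100
t = np.arange(0, 0.05, 1 / fs)
rng = np.random.default_rng(4)

# Synthetic stand-in for Doppler audio: a tone cluster near 2 kHz + noise
sig = sum(np.sin(2 * np.pi * f0 * t) for f0 in (1800, 2000, 2200))
sig = sig + 0.1 * rng.normal(size=t.size)

# Power spectrum and the moment-based spectral parameters
spec = np.abs(np.fft.rfft(sig)) ** 2
freq = np.fft.rfftfreq(sig.size, 1 / fs)
power = spec.sum()                                   # integrated power
mean_f = (freq * spec).sum() / power                 # mean frequency
width = np.sqrt((((freq - mean_f) ** 2) * spec).sum() / power)
peak_f = freq[spec.argmax()]                         # spectral mode
print(round(mean_f), round(width), round(peak_f))
```

Comparing these scalars computed on original versus decoded audio is one way to quantify whether a lossy codec preserved the clinically relevant spectral content.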

  8. Real-time monitoring and massive inversion of source parameters of very long period seismic signals: An application to Stromboli Volcano, Italy

    Science.gov (United States)

    Auger, E.; D'Auria, L.; Martini, M.; Chouet, B.; Dawson, P.

    2006-01-01

    We present a comprehensive processing tool for the real-time analysis of the source mechanism of very long period (VLP) seismic data based on waveform inversions performed in the frequency domain for a point source. A search for the source providing the best-fitting solution is conducted over a three-dimensional grid of assumed source locations, in which the Green's functions associated with each point source are calculated by finite differences using the reciprocal relation between source and receiver. Tests performed on 62 nodes of a Linux cluster indicate that the waveform inversion and search for the best-fitting signal over 100,000 point sources require roughly 30 s of processing time for a 2-min-long record. The procedure is applied to post-processing of a data archive and to continuous automatic inversion of real-time data at Stromboli, providing insights into different modes of degassing at this volcano. Copyright 2006 by the American Geophysical Union.
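The grid search over precomputed Green's functions can be sketched as follows. The random "Green's functions", noise level, and per-frequency least-squares source estimate are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(5)
n_rec, n_freq, n_grid = 8, 16, 100

# Hypothetical precomputed Green's functions: one complex spectrum per
# (candidate source node, receiver, frequency), as if obtained from
# reciprocity-based finite-difference runs.
G = (rng.normal(size=(n_grid, n_rec, n_freq))
     + 1j * rng.normal(size=(n_grid, n_rec, n_freq)))

true_node = 42
src = np.exp(-0.5 * ((np.arange(n_freq) - 5) / 2) ** 2)   # source spectrum
d = G[true_node] * src + 0.05 * rng.normal(size=(n_rec, n_freq))

# For each candidate node: least-squares source spectrum, then the
# residual waveform misfit of that best-fitting solution.
misfit = np.empty(n_grid)
for k in range(n_grid):
    s = (G[k].conj() * d).sum(0) / (np.abs(G[k]) ** 2).sum(0)
    misfit[k] = np.linalg.norm(d - G[k] * s)
print("best-fitting node:", misfit.argmin())
```

Because the Green's functions are precomputed via reciprocity, the per-record cost is only the inner loop above, which is what makes scanning ~100,000 candidate sources in tens of seconds plausible on a cluster.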

  9. Modeling Source Water TOC Using Hydroclimate Variables and Local Polynomial Regression.

    Science.gov (United States)

    Samson, Carleigh C; Rajagopalan, Balaji; Summers, R Scott

    2016-04-19

To control disinfection byproduct (DBP) formation in drinking water, an understanding of the source water total organic carbon (TOC) concentration variability can be critical. Previously, TOC concentrations in water treatment plant source waters have been modeled using streamflow data. However, the lack of streamflow data or unimpaired flow scenarios makes it difficult to model TOC. In addition, TOC variability under climate change further exacerbates the problem. Here we proposed a modeling approach based on local polynomial regression that uses climate (e.g., temperature) and land surface (e.g., soil moisture) variables as predictors of TOC concentration, obviating the need for streamflow. The local polynomial approach has the ability to capture non-Gaussian and nonlinear features that might be present in the relationships. The utility of the methodology is demonstrated using source water quality and climate data in three case study locations with surface source waters including river and reservoir sources. The models show good predictive skill in general at these locations, with lower skill at locations with the most anthropogenic influence in their streams. Source water TOC predictive models can provide water treatment utilities important information for making treatment decisions for DBP regulation compliance under future climate scenarios.
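A minimal local polynomial (kernel-weighted) regression of the kind described can be written directly; the predictor-response relationship below is invented for illustration, not the study's water-quality data:

```python
import numpy as np

def local_poly(xq, x, y, bandwidth, degree=2):
    """Local polynomial estimate of E[y|x] at query points xq using a
    Gaussian kernel; captures nonlinear features a single global
    regression would miss."""
    out = np.empty(len(xq))
    for i, x0 in enumerate(xq):
        w = np.sqrt(np.exp(-0.5 * ((x - x0) / bandwidth) ** 2))
        V = np.vander(x - x0, degree + 1)        # columns ..., (x-x0), 1
        beta = np.linalg.lstsq(w[:, None] * V, w * y, rcond=None)[0]
        out[i] = beta[-1]                        # intercept = fit at x0
    return out

# Hypothetical predictor (e.g. soil moisture) and response (TOC, mg/L)
rng = np.random.default_rng(6)
x = rng.uniform(0, 1, 300)
y = 4 + 3 * np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 300)
est = local_poly(np.array([0.25, 0.75]), x, y, bandwidth=0.1)
print(est)   # near [7, 1]
```

Each query point gets its own weighted fit, so the estimate adapts locally to nonlinear structure, which is the property the abstract highlights over a single global regression.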

  10. Weak signal transmission in complex networks and its application in detecting connectivity.

    Science.gov (United States)

    Liang, Xiaoming; Liu, Zonghua; Li, Baowen

    2009-10-01

We present a network model of coupled oscillators to study how a weak signal is transmitted in complex networks. Through both theoretical analysis and numerical simulations, we find that the response of other nodes to the weak signal decays exponentially with their topological distance to the signal source, and that the coupling strength between two neighboring nodes can be figured out from the responses. This finding can be conveniently used to detect the topology of an unknown network, such as the degree distribution, clustering coefficient and community structure, etc., by repeatedly choosing different nodes as the signal source. Through four typical networks, i.e., the regular one-dimensional, small-world, random, and scale-free networks, we show that the features of a network can be approximately given by investigating many fewer nodes than the network size; thus our approach to detecting the topology of an unknown network may be efficient in practical situations with large network size.
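The exponential decay of the response with topological distance can be reproduced with a linear steady-state sketch on a ring network; the coupling model here is a simplified stand-in for the paper's oscillator dynamics:

```python
import numpy as np

# Ring network of n linearly coupled, overdamped units. A weak constant
# signal enters at node 0; the steady state solves (I - (c/2) A) x = f.
n, c = 30, 0.3
A = np.zeros((n, n))
for i in range(n):
    A[i, (i - 1) % n] = A[i, (i + 1) % n] = 1.0

f = np.zeros(n)
f[0] = 1e-3                                 # weak signal at the source
x = np.linalg.solve(np.eye(n) - (c / 2) * A, f)

dist = np.minimum(np.arange(n), n - np.arange(n))   # hops from source
resp = np.abs(x)
# The response decays exponentially with topological distance, so the
# decay rate (hence the coupling strength) is recoverable from responses.
rate = np.polyfit(dist[1:10], np.log(resp[1:10]), 1)[0]
print("log-response decay per hop:", rate)
```

Repeating this with different source nodes and reading off which nodes respond at which decay level is the topology-probing idea the abstract describes.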

  11. A divide-down RF source generation system for the Advanced Photon Source

    International Nuclear Information System (INIS)

    Horan, D.; Lenkszus, F.; Laird, R.

    1997-01-01

A divide-down rf source system has been designed and built at Argonne National Laboratory to provide harmonically related and phase-locked rf source signals between the APS 352-MHz storage ring and booster synchrotron rf systems and the 9.77-MHz and 117-MHz positron accumulator ring rf systems. The design provides rapid switching capability back to the individual rf synthesizers for each system. The system also contains a digital bucket phase shifter for injection bucket selection. Input 352-MHz rf from a master synthesizer is supplied to a VXI-based ECL divider board which produces 117-MHz and 9.77-MHz square-wave outputs. These outputs are passed through low-pass filters to produce pure signals at the required fundamental frequencies. These signals, plus signals at the same frequencies from independent synthesizers, are fed to an interface chassis where source selection is made via local/remote control of coaxial relays. This chassis also produces buffered outputs at each frequency for monitoring and synchronization of ancillary equipment.

  12. An approach for optimally extending mathematical models of signaling networks using omics data.

    Science.gov (United States)

    Bianconi, Fortunato; Patiti, Federico; Baldelli, Elisa; Crino, Lucio; Valigi, Paolo

    2015-01-01

Mathematical modeling is a key process in Systems Biology, and the use of computational tools such as Cytoscape for omics data processing needs to be integrated into the modeling activity. In this paper we propose a new methodology for modeling signaling networks by combining ordinary differential equation models and a gene recommender system, GeneMANIA. We started from existing models, which are stored in the BioModels database, and we generated a query to use as input for the GeneMANIA algorithm. The output of the recommender system was then led back to kinetic reactions that were finally added to the starting model. We applied the proposed methodology to the EGFR-IGF1R signal transduction network, which plays an important role in translational oncology and cancer therapy of non-small cell lung cancer.

  13. Mathematical modeling of sustainable synaptogenesis by repetitive stimuli suggests signaling mechanisms in vivo.

    Directory of Open Access Journals (Sweden)

    Hiromu Takizawa

Full Text Available The mechanisms of long-term synaptic maintenance are a key component to understanding the mechanism of long-term memory. From biological experiments, a hypothesis arose that repetitive stimuli with appropriate intervals are essential to maintain new synapses for periods of longer than a few days. We successfully reproduce the time-course of relative numbers of synapses with our mathematical model in the same conditions as biological experiments, which used Adenosine-3', 5'-cyclic monophosphorothioate, Sp-isomer (Sp-cAMPS) as external stimuli. We also reproduce synaptic maintenance responsiveness to intervals of Sp-cAMPS treatment accompanied by PKA activation. The model suggests a possible mechanism of sustainable synaptogenesis which consists of two steps. First, the signal transduction from an external stimulus triggers the synthesis of a new signaling protein. Second, the new signaling protein is required for the next signal transduction with the same stimuli. As a result, the network component is modified from the first network, and a different signal is transferred which triggers the synthesis of another new signaling molecule. We refer to this hypothetical mechanism as network succession. We build our model on the basis of two hypotheses: (1) a multi-step network succession induces downregulation of SSH and COFILIN gene expression, which triggers the production of stable F-actin; (2) the formation of a complex of stable F-actin with Drebrin at the PSD is the critical mechanism to achieve long-term synaptic maintenance. Our simulation shows that a three-step network succession is sufficient to reproduce sustainable synapses for a period longer than 14 days. When we change the network structure to a single-step network, the model fails to follow the exact condition of repetitive signals to reproduce a sufficient number of synapses.
Another advantage of the three-step network succession is that this system indicates a greater tolerance of parameter

  14. The source of infrasound associated with long-period events at mount St. Helens

    Science.gov (United States)

    Matoza, R.S.; Garces, M.A.; Chouet, B.A.; D'Auria, L.; Hedlin, M.A.H.; De Groot-Hedlin, C.; Waite, G.P.

    2009-01-01

During the early stages of the 2004-2008 Mount St. Helens eruption, the source process that produced a sustained sequence of repetitive long-period (LP) seismic events also produced impulsive broadband infrasonic signals in the atmosphere. To assess whether the signals could be generated simply by seismic-acoustic coupling from the shallow LP events, we perform finite difference simulation of the seismo-acoustic wavefield using a single numerical scheme for the elastic ground and atmosphere. The effects of topography, velocity structure, wind, and source configuration are considered. The simulations show that a shallow source buried in a homogeneous elastic solid produces a complex wave train in the atmosphere consisting of P/SV and Rayleigh wave energy converted locally along the propagation path, and acoustic energy originating from the source epicenter. Although the horizontal acoustic velocity of the latter is consistent with our data, the modeled amplitude ratios of pressure to vertical seismic velocity are too low in comparison with observations, and the characteristic differences in seismic and acoustic waveforms and spectra cannot be reproduced from a common point source. The observations therefore require a more complex source process in which the infrasonic signals are a record of only the broadband pressure excitation mechanism of the seismic LP events. The observations and numerical results can be explained by a model involving the repeated rapid pressure loss from a hydrothermal crack by venting into a shallow layer of loosely consolidated, highly permeable material. Heating by magmatic activity causes pressure to rise, periodically reaching the pressure threshold for rupture of the "valve" sealing the crack. Sudden opening of the valve generates the broadband infrasonic signal and simultaneously triggers the collapse of the crack, initiating resonance of the remaining fluid. Subtle waveform and amplitude variability of the infrasonic signals as

  15. Systems biology derived source-sink mechanism of BMP gradient formation.

    Science.gov (United States)

    Zinski, Joseph; Bu, Ye; Wang, Xu; Dou, Wei; Umulis, David; Mullins, Mary C

    2017-08-09

    A morphogen gradient of Bone Morphogenetic Protein (BMP) signaling patterns the dorsoventral embryonic axis of vertebrates and invertebrates. The prevailing view in vertebrates for BMP gradient formation is through a counter-gradient of BMP antagonists, often along with ligand shuttling to generate peak signaling levels. To delineate the mechanism in zebrafish, we precisely quantified the BMP activity gradient in wild-type and mutant embryos and combined these data with a mathematical model-based computational screen to test hypotheses for gradient formation. Our analysis ruled out a BMP shuttling mechanism and a bmp transcriptionally-informed gradient mechanism. Surprisingly, rather than supporting a counter-gradient mechanism, our analyses support a fourth model, a source-sink mechanism, which relies on a restricted BMP antagonist distribution acting as a sink that drives BMP flux dorsally and gradient formation. We measured Bmp2 diffusion and found that it supports the source-sink model, suggesting a new mechanism to shape BMP gradients during development.
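The source-sink intuition can be sketched with a 1-D reaction-diffusion toy model: uniform production, diffusion along the axis, and degradation restricted to one end acting as the sink. All rates and lengths are hypothetical, not the paper's fitted parameters:

```python
import numpy as np

# 1-D source-sink sketch: morphogen produced uniformly along the axis,
# diffusing freely, and degraded only where a hypothetical antagonist
# "sink" is localized (dorsal end). All rates are invented.
n, D, dx, dt = 100, 1.0, 1.0, 0.2
k_prod = 0.01                  # uniform production rate
k_sink = np.zeros(n)
k_sink[80:] = 0.5              # absorption restricted to one end

b = np.zeros(n)
for _ in range(100000):        # relax to steady state
    lap = np.roll(b, 1) + np.roll(b, -1) - 2 * b
    lap[0] = b[1] - b[0]       # no-flux boundaries
    lap[-1] = b[-2] - b[-1]
    b = b + dt * (D * lap / dx ** 2 + k_prod - k_sink * b)

# The localized sink drives a net flux toward it, shaping a graded,
# ventral-high profile without a counter-gradient of antagonist.
print("ventral/dorsal ratio:", b[0] / b[-1])
```

Even with spatially uniform production, the restricted sink alone is enough to produce a steep monotonic gradient, which is the qualitative point of the source-sink mechanism.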

  16. An improved large signal model of InP HEMTs

    Science.gov (United States)

    Li, Tianhao; Li, Wenjun; Liu, Jun

    2018-05-01

    An improved large-signal model for InP HEMTs is proposed in this paper. The channel current and charge model equations are constructed based on the Angelov model equations. The equations for both the channel current and gate charge models are continuous and differentiable to high order, and the proposed gate charge model satisfies charge conservation. To account for the strong leakage-induced barrier reduction effect of InP HEMTs, the Angelov current model equations are improved so that the channel current model fits the DC performance of the devices. A 2 × 25 μm × 70 nm InP HEMT device is used to demonstrate the extraction and validation of the model, which accurately predicts the DC I–V and C–V characteristics and the bias-dependent S-parameters. Project supported by the National Natural Science Foundation of China (No. 61331006).
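
For context, the widely published core of the Angelov (Chalmers) drain-current equation that this work builds on can be sketched as follows. The InP-specific leakage correction proposed in the paper is not reproduced here, and all parameter values are illustrative, not extracted from the paper.

```python
import math

# Core Angelov (Chalmers) drain-current equation:
#   Ids = Ipk * (1 + tanh(psi)) * (1 + lambda*Vds) * tanh(alpha*Vds)
# with psi a polynomial in (Vgs - Vpk). Parameter values are hypothetical.
def angelov_ids(vgs, vds, ipk=0.02, vpk=-0.2, p1=4.0, p2=0.5, p3=0.1,
                alpha=2.5, lam=0.05):
    dv = vgs - vpk
    psi = p1 * dv + p2 * dv ** 2 + p3 * dv ** 3
    return ipk * (1 + math.tanh(psi)) * (1 + lam * vds) * math.tanh(alpha * vds)
```

The tanh products keep the current and all its derivatives continuous, which is exactly the smoothness property the abstract emphasizes for harmonic-balance simulation.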

  17. Applying computer modeling to eddy current signal analysis for steam generator and heat exchanger tube inspections

    International Nuclear Information System (INIS)

    Sullivan, S.P.; Cecco, V.S.; Carter, J.R.; Spanner, M.; McElvanney, M.; Krause, T.W.; Tkaczyk, R.

    2000-01-01

    Licensing requirements for eddy current inspections of nuclear steam generators and heat exchangers are becoming increasingly stringent. The traditional industry-standard method of comparing inspection signals with flaw signals from simple in-line calibration standards is proving to be inadequate. A more complete understanding of eddy current and magnetic field interactions with flaws and other anomalies is required for the industry to generate consistently reliable inspections. Computer modeling is a valuable tool for improving the reliability of eddy current signal analysis. Results from computer modeling are helping inspectors to properly discriminate between real flaw signals and false calls, and are improving reliability in flaw sizing. This presentation will discuss complementary eddy current computer modeling techniques such as the Finite Element Method (FEM), the Volume Integral Method (VIM), the Layer Approximation, and other analytic methods. Each of these methods has advantages and limitations. An extension of the Layer Approximation to model eddy current probe responses to ferromagnetic materials will also be presented. Finally, examples will be discussed demonstrating how some significant eddy current signal analysis problems have been resolved using appropriate electromagnetic computer modeling tools.

  18. Toward multimodal signal detection of adverse drug reactions.

    Science.gov (United States)

    Harpaz, Rave; DuMouchel, William; Schuemie, Martijn; Bodenreider, Olivier; Friedman, Carol; Horvitz, Eric; Ripple, Anna; Sorbello, Alfred; White, Ryen W; Winnenburg, Rainer; Shah, Nigam H

    2017-12-01

    Improving mechanisms to detect adverse drug reactions (ADRs) is key to strengthening post-marketing drug safety surveillance. Signal detection is presently unimodal, relying on a single information source. Multimodal signal detection is based on jointly analyzing multiple information sources. Building on, and expanding, the work done in prior studies, the aim of this article is to further research on multimodal signal detection, explore its potential benefits, and propose methods for its construction and evaluation. Four data sources are investigated: FDA's adverse event reporting system, insurance claims, the MEDLINE citation database, and the logs of major Web search engines. Published methods are used to generate and combine signals from each data source. Two distinct reference benchmarks, corresponding to well-established and recently labeled ADRs respectively, are used to evaluate the performance of multimodal signal detection in terms of area under the ROC curve (AUC) and lead time to detection, the latter relative to labeling revision dates. Limited to our reference benchmarks, multimodal signal detection provides AUC improvements ranging from 0.04 to 0.09 based on a widely used evaluation benchmark, and a comparative added lead time of 7-22 months relative to labeling revision dates from a time-indexed benchmark. The results support the notion that utilizing and jointly analyzing multiple data sources may lead to improved signal detection. Given certain data and benchmark limitations, the early stage of development, and the complexity of ADRs, it is currently not possible to make definitive statements about the ultimate utility of the concept. Continued development of multimodal signal detection requires a deeper understanding of the data sources used, additional benchmarks, and further research on methods to generate and synthesize signals. Copyright © 2017 Elsevier Inc. All rights reserved.
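
The combination idea can be sketched with synthetic numbers: z-score each source's signal statistic, average the z-scores, and compare single-source and combined rankings with a rank-based AUC. All scores and labels below are made up for illustration and do not come from the study.

```python
# Toy "multimodal" signal combination: per-source z-scoring, averaging,
# and a rank-based (Mann-Whitney) AUC. Data are synthetic.
def zscores(xs):
    m = sum(xs) / len(xs)
    sd = (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5 or 1.0
    return [(x - m) / sd for x in xs]

def auc(scores, labels):
    """Probability a random positive outranks a random negative (ties = 0.5)."""
    pos = [s for s, l in zip(scores, labels) if l]
    neg = [s for s, l in zip(scores, labels) if not l]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Two hypothetical sources scoring five drug-event pairs; labels mark known ADRs.
faers  = [2.0, 0.5, 0.4, 0.2, 0.9]
claims = [1.1, 0.3, 2.2, 0.8, 0.1]
labels = [1, 0, 1, 0, 0]
combined = [(a + b) / 2 for a, b in zip(zscores(faers), zscores(claims))]
```

In this toy example the first source alone ranks one known ADR poorly (AUC 2/3), while the averaged multimodal score ranks both known ADRs above every negative (AUC 1.0), illustrating the kind of improvement the evaluation measures.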

  19. Comparing the model-simulated global warming signal to observations using empirical estimates of unforced noise

    Science.gov (United States)

    Brown, Patrick T.; Li, Wenhong; Cordero, Eugene C.; Mauget, Steven A.

    2015-01-01

    The comparison of observed global mean surface air temperature (GMT) change to the mean change simulated by climate models has received much public and scientific attention. For a given global warming signal produced by a climate model ensemble, there exists an envelope of GMT values representing the range of possible unforced states of the climate system (the Envelope of Unforced Noise; EUN). Typically, the EUN is derived from climate models themselves, but climate models might not accurately simulate the correct characteristics of unforced GMT variability. Here, we simulate a new, empirical, EUN that is based on instrumental and reconstructed surface temperature records. We compare the forced GMT signal produced by climate models to observations while noting the range of GMT values provided by the empirical EUN. We find that the empirical EUN is wide enough so that the interdecadal variability in the rate of global warming over the 20th century does not necessarily require corresponding variability in the rate-of-increase of the forced signal. The empirical EUN also indicates that the reduced GMT warming over the past decade or so is still consistent with a middle emission scenario's forced signal, but is likely inconsistent with the steepest emission scenario's forced signal. PMID:25898351

  20. Comparing the model-simulated global warming signal to observations using empirical estimates of unforced noise.

    Science.gov (United States)

    Brown, Patrick T; Li, Wenhong; Cordero, Eugene C; Mauget, Steven A

    2015-04-21

    The comparison of observed global mean surface air temperature (GMT) change to the mean change simulated by climate models has received much public and scientific attention. For a given global warming signal produced by a climate model ensemble, there exists an envelope of GMT values representing the range of possible unforced states of the climate system (the Envelope of Unforced Noise; EUN). Typically, the EUN is derived from climate models themselves, but climate models might not accurately simulate the correct characteristics of unforced GMT variability. Here, we simulate a new, empirical, EUN that is based on instrumental and reconstructed surface temperature records. We compare the forced GMT signal produced by climate models to observations while noting the range of GMT values provided by the empirical EUN. We find that the empirical EUN is wide enough so that the interdecadal variability in the rate of global warming over the 20th century does not necessarily require corresponding variability in the rate-of-increase of the forced signal. The empirical EUN also indicates that the reduced GMT warming over the past decade or so is still consistent with a middle emission scenario's forced signal, but is likely inconsistent with the steepest emission scenario's forced signal.

  1. Small- and large-signal modeling of InP HBTs in transferred-substrate technology

    DEFF Research Database (Denmark)

    Johansen, Tom Keinicke; Rudolph, Matthias; Jensen, Thomas

    2014-01-01

    In this paper, the small- and large-signal modeling of InP heterojunction bipolar transistors (HBTs) in transferred substrate (TS) technology is investigated. The small-signal equivalent circuit parameters for TS-HBTs in two-terminal and three-terminal configurations are determined by employing...

  2. Simulation based investigation of source-detector configurations for non-invasive fetal pulse oximetry

    Directory of Open Access Journals (Sweden)

    Böttrich Marcel

    2015-09-01

    Transabdominal fetal pulse oximetry is a method to monitor the oxygen supply of the unborn child non-invasively. Due to the measurement setup, the received signal at the detector is composed of photons coding purely maternal information and photons coding mixed fetal-maternal information. To analyze the wellbeing of the fetus, the fetal signal is extracted from the mixed component. In this paper we assess source-detector configurations such that the mixed fetal-maternal components of the acquired signals are maximized. The Monte Carlo method is used to simulate light propagation and photon distribution in tissue. We use a plane-layer and a spherical-layer geometry to model the abdomen of a pregnant woman. From the simulations we extracted the fluence at the detector for several source-detector distances and analyzed the ratio of the mixed fluence component to the total fluence. Our simulations showed that, as expected, the power of the mixed component depends on the source-detector distance. Further, we were able to visualize hot-spot areas in the spherical-layer model where the mixed fluence ratio reaches its highest level. The results are of high importance for sensor design, considering signal composition and quality for non-invasive fetal pulse oximetry.
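
The simulation idea can be caricatured with a lattice random walk: photons leave a surface source, wander through tissue, and a photon collected at the detector counts as "mixed" if it ever reached the deeper (fetal) layer. Real Monte Carlo photon transport uses scattering phase functions and absorption coefficients; the lattice, depths, and counts below are hypothetical stand-ins.

```python
import random

# 2D lattice random-walk caricature of Monte Carlo photon transport for
# source-detector analysis. All geometry and counts are hypothetical.
def detect(detector_x, fetal_depth=15, n_photons=10000, max_steps=400, seed=1):
    rng = random.Random(seed)
    detected = mixed = 0
    for _ in range(n_photons):
        x, z = 0, 0                     # source at the origin; z is depth
        reached_fetal = False
        for _ in range(max_steps):
            dx, dz = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x, z = x + dx, z + dz
            if z >= fetal_depth:
                reached_fetal = True
            if z < 0:                   # photon re-emerges through the surface
                if x == detector_x:
                    detected += 1
                    mixed += reached_fetal
                break
    return detected, mixed

d_near = detect(detector_x=4)
d_far = detect(detector_x=30)
```

Sweeping `detector_x` and plotting `mixed / detected` reproduces, in caricature, the source-detector-distance dependence of the mixed component that the paper analyzes.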

  3. On stochastic modeling of the modernized global positioning system (GPS) L2C signal

    International Nuclear Information System (INIS)

    Elsobeiey, Mohamed; El-Rabbany, Ahmed

    2010-01-01

    In order to take full advantage of the modernized GPS L2C signal, it is essential that its stochastic characteristics and code bias be rigorously determined. In this paper, long sessions of GPS measurements are used to study the stochastic characteristics of the modernized GPS L2C signal. As a byproduct, the stochastic characteristics of the legacy GPS signals, namely the C/A and P2 codes, are also determined and used to verify the developed stochastic model of the modernized signal. The differential code biases between P2 and C2, DCB(P2-C2), are also estimated using the Bernese GPS software. It is shown that the developed models improved the precise point positioning (PPP) solution and convergence time.

  4. Selection of models to calculate the LLW source term

    International Nuclear Information System (INIS)

    Sullivan, T.M.

    1991-10-01

    Performance assessment of a LLW disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). In turn, many of these physical processes are influenced by the design of the disposal facility (e.g., infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This document provides a brief overview of disposal practices and reviews existing source term models as background for selecting appropriate models for estimating the source term. The selection rationale and the mathematical details of the models are presented. Finally, guidance is presented for combining the inventory data with appropriate mechanisms describing release from the disposal facility. 44 refs., 6 figs., 1 tab

  5. Chloroplasts as source and target of cellular redox regulation: a discussion on chloroplast redox signals in the context of plant physiology.

    Science.gov (United States)

    Baier, Margarete; Dietz, Karl-Josef

    2005-06-01

    During the evolution of plants, chloroplasts have lost the exclusive genetic control over redox regulation and antioxidant gene expression. Together with many other genes, all genes encoding antioxidant enzymes and enzymes involved in the biosynthesis of low molecular weight antioxidants were transferred to the nucleus. On the other hand, photosynthesis bears a high risk for photo-oxidative damage. Concomitantly, an intricate network for mutual regulation by anterograde and retrograde signals has emerged to co-ordinate the activities of the different genetic and metabolic compartments. A major focus of recent research in chloroplast regulation addressed the mechanisms of redox sensing and signal transmission, the identification of regulatory targets, and the understanding of adaptation mechanisms. In addition to redox signals communicated through signalling cascades also used in pathogen and wounding responses, specific chloroplast signals control nuclear gene expression. Signalling pathways are triggered by the redox state of the plastoquinone pool, the thioredoxin system, and the acceptor availability at photosystem I, in addition to control by oxylipins, tetrapyrroles, carbohydrates, and abscisic acid. The signalling function is discussed in the context of regulatory circuitries that control the expression of antioxidant enzymes and redox modulators, demonstrating the principal role of chloroplasts as the source and target of redox regulation.

  6. Modeling Noise Sources and Propagation in External Gear Pumps

    Directory of Open Access Journals (Sweden)

    Sangbeom Woo

    2017-07-01

    As a key component in power transfer, positive displacement machines often represent the major source of noise in hydraulic systems. Thus, investigating the sources of noise and discovering strategies to reduce it is key to improving the performance of current hydraulic systems, as well as to applying fluid power systems to a wider range of applications. The present work aims at developing modeling techniques for the noise generated by external gear pumps in high-pressure applications, which can be useful and effective in investigating the interaction between noise sources and radiated noise and in establishing design guidelines for a quiet pump. In particular, this study classifies the internal noise sources into four types of effective load functions, and in the proposed model these load functions are applied to the corresponding areas of the pump case in a realistic way. Vibration and sound radiation can then be predicted using a combined finite element and boundary element vibro-acoustic model. The radiated sound power and sound pressure for different operating conditions are presented as the main outcomes of the acoustic model. The noise prediction was validated through comparison with experimentally measured sound power levels.

  7. Neuro-fuzzy models applied to signal validation in Angra 1 nuclear power plant

    International Nuclear Information System (INIS)

    Oliveira, Mauro Vitor de

    1999-06-01

    This work develops two models of signal validation in which analytical redundancy of the monitored signals from an industrial plant is provided by neural networks. In one model the analytical redundancy is provided by a single neural network, while in the other it is provided by several neural networks, each one working in a specific part of the plant's entire operating region. Four clustering techniques were tested to separate the entire operating region into several specific regions. Additional information on the reliability of the signals is supplied by a fuzzy inference system. The models were implemented in C language and tested with signals acquired from the Angra 1 nuclear power plant, from start-up to 100% power. (author)

  8. Purkinje Cell Signaling Deficits in Animal Models of Ataxia

    Directory of Open Access Journals (Sweden)

    Eriola Hoxha

    2018-04-01

    Purkinje cell (PC) dysfunction or degeneration is the most frequent finding in animal models with ataxic symptoms. Mutations affecting intrinsic membrane properties can lead to ataxia by altering the firing rate of PCs or their firing pattern. However, the relationship between specific firing alterations and motor symptoms is not yet clear, and in some cases PC dysfunction precedes the onset of ataxic signs. Moreover, a great variety of ionic and synaptic mechanisms can affect PC signaling, resulting in different features of motor dysfunction. Mutations affecting Na+ channels (NaV1.1, NaV1.6, NaVβ4, Fgf14 or Rer1) reduce the firing rate of PCs, mainly via an impairment of the Na+ resurgent current. Mutations that reduce Kv3 currents limit the firing rate frequency range. Mutations of Kv1 channels act mainly on inhibitory interneurons, generating excessive GABAergic signaling onto PCs, resulting in episodic ataxia. Kv4.3 mutations are responsible for a complex syndrome with several neurologic dysfunctions including ataxia. Mutations of either Cav or BK channels have similar consequences, consisting of a disruption of the firing pattern of PCs, with loss of precision, leading to ataxia. Another category of pathogenic mechanisms of ataxia concerns alterations of synaptic signals arriving at the PC. At the parallel fiber (PF)-PC synapse, mutations of glutamate delta-2 (GluD2) or its ligand Cbln1 are responsible for the loss of synaptic contacts, abolishment of long-term depression (LTD) and motor deficits. At the same synapse, correct function of metabotropic glutamate receptor 1 (mGlu1) receptors is necessary to avoid ataxia. Failure of climbing fiber (CF) maturation and establishment of PC mono-innervation occurs in a great number of mutant mice, including mGlu1 and its transduction pathway, GluD2, semaphorins and their receptors. All these models have in common the alteration of PC output signals, due to a variety of mechanisms affecting incoming

  9. The analysis and interpretation of very-long-period seismic signals on volcanoes

    Science.gov (United States)

    Sindija, Dinko; Neuberg, Jurgen; Smith, Patrick

    2017-04-01

    The study of very long period (VLP) seismic signals became possible with the widespread use of broadband instruments. VLP seismic signals are caused by transients of pressure in the volcanic edifice and have periods ranging from several seconds to several minutes. For the VLP events recorded in March 2012 and 2014 at Soufriere Hills Volcano, Montserrat, we model the ground displacement using several source-time functions: a step function based on the Richards growth equation, a Küpper wavelet, and a damped sine wave, to which an instrument response is then applied. In this way we obtain a synthetic velocity seismogram that is directly comparable to the data. After the full vector field of ground displacement is determined, we model the source mechanism to determine the relationship between the source mechanism and the observed VLP waveforms. The emphasis of the research is on how different VLP waveforms are related to the volcanic environment and the instrumentation used, and on the processing steps needed in this low-frequency band to get the most out of broadband instruments.
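
Two of the candidate source-time functions named above are easy to sketch. The logistic curve below stands in for the more general Richards growth step, and all onset times, rates, and frequencies are illustrative assumptions.

```python
import math

# Two illustrative source-time functions: a smooth step (logistic curve, a
# special case of the Richards growth equation) and a damped sine wave.
def smooth_step(t, t0=5.0, k=2.0):
    return 1.0 / (1.0 + math.exp(-k * (t - t0)))

def damped_sine(t, f=0.2, tau=4.0):
    return math.exp(-t / tau) * math.sin(2 * math.pi * f * t)

ts = [i * 0.1 for i in range(200)]            # 0-20 s at a 10 Hz sampling rate
step_stf = [smooth_step(t) for t in ts]
sine_stf = [damped_sine(t) for t in ts]
```

Convolving such a source-time function with the instrument response, as described above, yields a synthetic velocity seismogram that can be compared sample-by-sample with the broadband data.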

  10. Source-based neurofeedback methods using EEG recordings: training altered brain activity in a functional brain source derived from blind source separation

    Science.gov (United States)

    White, David J.; Congedo, Marco; Ciorciari, Joseph

    2014-01-01

    A developing literature explores the use of neurofeedback in the treatment of a range of clinical conditions, particularly ADHD and epilepsy, whilst neurofeedback also provides an experimental tool for studying the functional significance of endogenous brain activity. A critical component of any neurofeedback method is the underlying physiological signal which forms the basis for the feedback. While the past decade has seen the emergence of fMRI-based protocols training spatially confined BOLD activity, traditional neurofeedback has utilized a small number of electrode sites on the scalp. As scalp EEG at a given electrode site reflects a linear mixture of activity from multiple brain sources and artifacts, efforts to successfully acquire some level of control over the signal may be confounded by these extraneous sources. Further, in the event of successful training, these traditional neurofeedback methods are likely influencing multiple brain regions and processes. The present work describes the use of source-based signal processing methods in EEG neurofeedback. The feasibility and potential utility of such methods were explored in an experiment training increased theta oscillatory activity in a source derived from Blind Source Separation (BSS) of EEG data obtained during completion of a complex cognitive task (spatial navigation). Learned increases in theta activity were observed in two of the four participants to complete 20 sessions of neurofeedback targeting this individually defined functional brain source. Source-based EEG neurofeedback methods using BSS may offer important advantages over traditional neurofeedback, by targeting the desired physiological signal in a more functionally and spatially specific manner. Having provided preliminary evidence of the feasibility of these methods, future work may study a range of clinically and experimentally relevant brain processes where individual brain sources may be targeted by source-based EEG neurofeedback. 

  11. Source modelling of train noise - Literature review and some initial measurements

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Xuetao; Jonasson, Hans; Holmberg, Kjell

    2000-07-01

    A literature review of source modelling of railway noise is reported. Measurements on a special test rig at Surahammar and on the new railway line between Arlanda and Stockholm City are reported and analyzed. In the analysis the train is modelled as a number of point sources with or without directivity and each source is combined with analytical sound propagation theory to predict the sound propagation pattern best fitting the measured data. Wheel/rail rolling noise is considered to be the most important noise source. The rolling noise can be modelled as an array of moving point sources, which have a dipole-like horizontal directivity and some kind of vertical directivity. In general it is necessary to distribute the point sources on several heights. Based on our model analysis the source heights for the rolling noise should be below the wheel axles and the most important height is about a quarter of wheel diameter above the railheads. When train speeds are greater than 250 km/h aerodynamic noise will become important and even dominant. It may be important for low frequency components only if the train speed is less than 220 km/h. Little data are available for these cases. It is believed that aerodynamic noise has dipole-like directivity. Its spectrum depends on many factors: speed, railway system, type of train, bogies, wheels, pantograph, presence of barriers and even weather conditions. Other sources such as fans, engine, transmission and carriage bodies are at most second order noise sources, but for trains with a diesel locomotive engine the engine noise will be dominant if train speeds are less than about 100 km/h. The Nord 2000 comprehensive model for sound propagation outdoors, together with the source model that is based on the understandings above, can suitably handle the problems of railway noise propagation in one-third octave bands although there are still problems left to be solved.
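
The point-source picture of rolling noise described above can be sketched for a single frozen instant: a row of point sources at a small height above the railhead, each with a dipole-like directivity, summed incoherently at a receiver. The geometry, source strength, and the sin²-broadside directivity law below are simplifying assumptions, not the Nord 2000 method.

```python
import math

# Frozen-instant point-source model of rolling noise: sources along the track
# (y = 0) at a small height, assumed sin^2 broadside directivity, incoherent
# (energy) summation. All values are illustrative.
def level_at_receiver(rx, ry, rz, n_sources=8, spacing=5.0, src_height=0.25):
    """Relative sound level in dB (arbitrary reference) at receiver (rx, ry, rz)."""
    intensity = 0.0
    for k in range(n_sources):
        sx = k * spacing                      # source positions along the track
        dx, dy, dz = rx - sx, ry, rz - src_height
        horiz = math.hypot(dx, dy)
        direct = (abs(dy) / horiz) ** 2 if horiz > 0 else 0.0  # broadside lobe
        r2 = dx * dx + dy * dy + dz * dz
        intensity += direct / r2              # incoherent energy summation
    return 10 * math.log10(intensity)

near_db = level_at_receiver(17.5, 7.5, 1.5)   # receiver 7.5 m from the track
far_db = level_at_receiver(17.5, 25.0, 1.5)   # receiver 25 m from the track
```

Fitting predictions of this kind to measured propagation patterns is how the source heights and directivities quoted above (e.g. rolling-noise sources below the wheel axles) can be inferred.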

  12. Sensitivity of Earthquake Loss Estimates to Source Modeling Assumptions and Uncertainty

    Science.gov (United States)

    Reasenberg, Paul A.; Shostak, Nan; Terwilliger, Sharon

    2006-01-01

    Introduction: This report explores how uncertainty in an earthquake source model may affect estimates of earthquake economic loss. Specifically, it focuses on the earthquake source model for the San Francisco Bay region (SFBR) created by the Working Group on California Earthquake Probabilities (WGCEP). The loss calculations are made using HAZUS-MH, a publicly available computer program developed by the Federal Emergency Management Agency (FEMA) for calculating future losses from earthquakes, floods and hurricanes within the United States. The database built into HAZUS-MH includes a detailed building inventory, population data, data on transportation corridors, bridges, utility lifelines, etc. Earthquake hazard in the loss calculations is based upon expected (median value) ground motion maps called ShakeMaps calculated for the scenario earthquake sources defined by the WGCEP. The study considers the effect of relaxing certain assumptions in the WG02 model, and explores the effect of hypothetical reductions in epistemic uncertainty in parts of the model. For example, it addresses questions such as: what would happen to the calculated loss distribution if the uncertainty in slip rate in the WG02 model were reduced (say, by obtaining additional geologic data)? What would happen if the geometry or amount of aseismic slip (creep) on the region's faults were better known? And what would be the effect on the calculated loss distribution if the time-dependent earthquake probability were better constrained, either by eliminating certain probability models or by better constraining the inherent randomness in earthquake recurrence? The study does not consider the effect of reducing uncertainty in the hazard introduced through models of attenuation and local site characteristics, although these may have a comparable or greater effect than does source-related uncertainty. Nor does it consider sources of uncertainty in the building inventory, building fragility curves, and other assumptions.

  13. Audio Source Separation in Reverberant Environments Using β-Divergence-Based Nonnegative Factorization

    DEFF Research Database (Denmark)

    Fakhry, Mahmoud; Svaizer, Piergiorgio; Omologo, Maurizio

    2017-01-01

    In Gaussian model-based multichannel audio source separation, the likelihood of observed mixtures of source signals is parametrized by source spectral variances and by associated spatial covariance matrices. These parameters are estimated by maximizing the likelihood through an expectation-maximization algorithm and used to separate the signals by means of multichannel Wiener filtering. We propose to estimate these parameters by applying nonnegative factorization based on prior information on source variances. In the nonnegative factorization, spectral basis matrices can be defined as the prior information. The matrices can be either extracted or indirectly made available through a redundant library that is trained in advance. In a separate step, applying nonnegative tensor factorization, two algorithms are proposed in order to either extract or detect the basis matrices that best represent...

  14. Neurotransmitter signaling in white matter.

    Science.gov (United States)

    Butt, Arthur M; Fern, Robert F; Matute, Carlos

    2014-11-01

    White matter (WM) tracts are bundles of myelinated axons that provide for rapid communication throughout the CNS and integration in grey matter (GM). The main cells in myelinated tracts are oligodendrocytes and astrocytes, with small populations of microglia and oligodendrocyte precursor cells. The prominence of neurotransmitter signaling in WM, which largely excludes neuronal cell bodies, indicates it must have physiological functions other than neuron-to-neuron communication. A surprising aspect is the diversity of neurotransmitter signaling in WM, with evidence for glutamatergic, purinergic (ATP and adenosine), GABAergic, glycinergic, adrenergic, cholinergic, dopaminergic and serotonergic signaling, acting via a wide range of ionotropic and metabotropic receptors. Both axons and glia are potential sources of neurotransmitters and may express the respective receptors. The physiological functions of neurotransmitter signaling in WM are subject to debate, but glutamate- and ATP-mediated signaling have been shown to evoke Ca2+ signals in glia and modulate axonal conduction. Experimental findings support a model of neurotransmitters being released from axons during action potential propagation, acting on glial receptors to regulate the homeostatic functions of astrocytes and myelination by oligodendrocytes. Astrocytes also release neurotransmitters, which act on axonal receptors to strengthen action potential propagation, maintaining signaling along potentially long axon tracts. The co-existence of multiple neurotransmitters in WM tracts suggests they may have diverse functions that are important for information processing. Furthermore, the neurotransmitter signaling phenomena described in WM most likely apply to myelinated axons of the cerebral cortex and GM areas, where they are doubtless important for higher cognitive function. © 2014 Wiley Periodicals, Inc.

  15. Investigations of incorporating source directivity into room acoustics computer models to improve auralizations

    Science.gov (United States)

    Vigeant, Michelle C.

    Room acoustics computer modeling and auralizations are useful tools when designing or modifying acoustically sensitive spaces. In this dissertation, the input parameter of source directivity has been studied in great detail to determine first its effect in room acoustics computer models and secondly how to better incorporate the directional source characteristics into these models to improve auralizations. To increase the accuracy of room acoustics computer models, the source directivity of real sources, such as musical instruments, must be included in the models. The traditional method for incorporating source directivity into room acoustics computer models involves inputting the measured static directivity data taken every 10° in a sphere-shaped pattern around the source. This data can be entered into the room acoustics software to create a directivity balloon, which is used in the ray tracing algorithm to simulate the room impulse response. The first study in this dissertation shows that using directional sources over an omni-directional source in room acoustics computer models produces significant differences both in terms of calculated room acoustics parameters and auralizations. The room acoustics computer model was also validated in terms of accurately incorporating the input source directivity. A recently proposed technique for creating auralizations using a multi-channel source representation has been investigated with numerous subjective studies, applied to both solo instruments and an orchestra. The method of multi-channel auralizations involves obtaining multi-channel anechoic recordings of short melodies from various instruments and creating individual channel auralizations. These auralizations are then combined to create a total multi-channel auralization. Through many subjective studies, this process was shown to be effective in terms of improving the realism and source width of the auralizations in a number of cases, and also modeling different

  16. Real-Time Localization of Moving Dipole Sources for Tracking Multiple Free-Swimming Weakly Electric Fish

    Science.gov (United States)

    Jun, James Jaeyoon; Longtin, André; Maler, Leonard

    2013-01-01

    In order to survive, animals must quickly and accurately locate prey, predators, and conspecifics using the signals they generate. The signal source location can be estimated using multiple detectors and the inverse relationship between the received signal intensity (RSI) and the distance, but difficulty of the source localization increases if there is an additional dependence on the orientation of a signal source. In such cases, the signal source could be approximated as an ideal dipole for simplification. Based on a theoretical model, the RSI can be directly predicted from a known dipole location; but estimating a dipole location from RSIs has no direct analytical solution. Here, we propose an efficient solution to the dipole localization problem by using a lookup table (LUT) to store RSIs predicted by our theoretically derived dipole model at many possible dipole positions and orientations. For a given set of RSIs measured at multiple detectors, our algorithm found a dipole location having the closest matching normalized RSIs from the LUT, and further refined the location at higher resolution. Studying the natural behavior of weakly electric fish (WEF) requires efficiently computing their location and the temporal pattern of their electric signals over extended periods. Our dipole localization method was successfully applied to track single or multiple freely swimming WEF in shallow water in real-time, as each fish could be closely approximated by an ideal current dipole in two dimensions. Our optimized search algorithm found the animal’s positions, orientations, and tail-bending angles quickly and accurately under various conditions, without the need for calibrating individual-specific parameters. Our dipole localization method is directly applicable to studying the role of active sensing during spatial navigation, or social interactions between multiple WEF. Furthermore, our method could be extended to other application areas involving dipole source
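
The lookup-table matching described above can be sketched in a few lines. Everything below is a hedged toy: the cos(θ)/r² dipole field, the six-detector layout, the coarse pose grid, and all function names are illustrative assumptions, not the authors' calibrated setup.

```python
import math

def dipole_rsi(px, py, theta, dx, dy):
    """Signal of an ideal 2-D dipole at (px, py) with orientation theta,
    measured at detector (dx, dy): cosine lobe with 1/r^2 falloff."""
    rx, ry = dx - px, dy - py
    r = math.hypot(rx, ry) or 1e-9
    return math.cos(math.atan2(ry, rx) - theta) / r**2

def normalize(v):
    s = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / s for x in v]

# Hypothetical detector layout around a small tank.
DETECTORS = [(0, 0), (0, 4), (4, 0), (4, 4), (2, 0), (0, 2)]

# Precompute the lookup table over a coarse grid of candidate poses.
LUT = {}
for px in range(1, 4):
    for py in range(1, 4):
        for k in range(8):                     # 8 candidate orientations
            theta = k * math.pi / 4
            LUT[(px, py, theta)] = normalize(
                [dipole_rsi(px, py, theta, dx, dy) for dx, dy in DETECTORS])

def locate(measured):
    """Return the LUT pose whose normalized RSI pattern matches best."""
    m = normalize(measured)
    return min(LUT, key=lambda pose: sum((a - b) ** 2
                                         for a, b in zip(LUT[pose], m)))

true_pose = (2, 3, math.pi / 2)
meas = [dipole_rsi(*true_pose, dx, dy) for dx, dy in DETECTORS]
print(locate(meas))  # recovers the true pose in this noise-free sketch
```

A real tracker would refine around the best LUT entry at higher resolution, as the abstract describes.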

  17. Real-Time Localization of Moving Dipole Sources for Tracking Multiple Free-Swimming Weakly Electric Fish.

    Directory of Open Access Journals (Sweden)

    James Jaeyoon Jun

    Full Text Available In order to survive, animals must quickly and accurately locate prey, predators, and conspecifics using the signals they generate. The signal source location can be estimated using multiple detectors and the inverse relationship between the received signal intensity (RSI) and the distance, but the difficulty of the source localization increases if there is an additional dependence on the orientation of a signal source. In such cases, the signal source could be approximated as an ideal dipole for simplification. Based on a theoretical model, the RSI can be directly predicted from a known dipole location; but estimating a dipole location from RSIs has no direct analytical solution. Here, we propose an efficient solution to the dipole localization problem by using a lookup table (LUT) to store RSIs predicted by our theoretically derived dipole model at many possible dipole positions and orientations. For a given set of RSIs measured at multiple detectors, our algorithm found a dipole location having the closest matching normalized RSIs from the LUT, and further refined the location at higher resolution. Studying the natural behavior of weakly electric fish (WEF) requires efficiently computing their location and the temporal pattern of their electric signals over extended periods. Our dipole localization method was successfully applied to track single or multiple freely swimming WEF in shallow water in real-time, as each fish could be closely approximated by an ideal current dipole in two dimensions. Our optimized search algorithm found the animal's positions, orientations, and tail-bending angles quickly and accurately under various conditions, without the need for calibrating individual-specific parameters. Our dipole localization method is directly applicable to studying the role of active sensing during spatial navigation, or social interactions between multiple WEF. Furthermore, our method could be extended to other application areas involving dipole

  18. Modeling the signaling endosome hypothesis: Why a drive to the nucleus is better than a (random) walk

    Directory of Open Access Journals (Sweden)

    Howe Charles L

    2005-10-01

    Full Text Available Abstract Background Information transfer from the plasma membrane to the nucleus is a universal cell biological property. Such information is generally encoded in the form of post-translationally modified protein messengers. Textbook signaling models typically depend upon the diffusion of molecular signals from the site of initiation at the plasma membrane to the site of effector function within the nucleus. However, such models fail to consider several critical constraints placed upon diffusion by the cellular milieu, including the likelihood of signal termination by dephosphorylation. In contrast, signaling associated with retrogradely transported membrane-bounded organelles such as endosomes provides a dephosphorylation-resistant mechanism for the vectorial transmission of molecular signals. We explore the relative efficiencies of signal diffusion versus retrograde transport of signaling endosomes. Results Using large-scale Monte Carlo simulations of diffusing STAT-3 molecules coupled with probabilistic modeling of dephosphorylation kinetics we found that predicted theoretical measures of STAT-3 diffusion likely overestimate the effective range of this signal. Compared to the inherently nucleus-directed movement of retrogradely transported signaling endosomes, diffusion of STAT-3 becomes less efficient at information transfer in spatial domains greater than 200 nanometers from the plasma membrane. Conclusion Our model suggests that cells might utilize two distinct information transmission paradigms: (1) fast local signaling via diffusion over spatial domains on the order of less than 200 nanometers; (2) long-distance signaling via information packets associated with the cytoskeletal transport apparatus. Our model supports previous observations suggesting that the signaling endosome hypothesis is a subset of a more general hypothesis that the most efficient mechanism for intracellular signaling-at-a-distance involves the association of signaling
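
A back-of-envelope sketch of the trade-off this abstract describes: a freely diffusing signal must survive dephosphorylation over its travel time, whereas endosomal cargo is protected while a motor carries it. The rate constants and speeds below are made-up illustrative values, not the paper's fitted parameters.

```python
import math

D = 10.0          # cytosolic diffusion coefficient, um^2/s (assumed)
v = 1.0           # motor-driven endosome speed, um/s (assumed)
k_dephos = 2.0    # dephosphorylation rate of the free signal, 1/s (assumed)

def diffusion_time(L):           # mean time to diffuse a distance L (1-D)
    return L * L / (2 * D)

def survival(t):                 # chance a diffusing signal is still active
    return math.exp(-k_dephos * t)

for L in (0.1, 1.0, 10.0):       # distances from the membrane, micrometres
    td = diffusion_time(L)
    print(f"L={L:5.1f} um: diffusing signal survives {survival(td):.3f}; "
          f"endosome arrives in {L / v:.1f}s with cargo protected")
```

At sub-micron distances diffusion wins on speed; at cell-scale distances the survival probability of the free signal collapses, which is the qualitative point of the model.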

  19. Impedance-based Analysis of DC Link Control in Voltage Source Rectifiers

    DEFF Research Database (Denmark)

    Lu, Dapeng; Wang, Xiongfei; Blaabjerg, Frede

    2018-01-01

    This paper analyzes the dynamic influence of the outer dc-link control in voltage source rectifiers based on an impedance model. The ac-dc interactions are first presented by means of a full-order small-signal model in the dq frame, which shows that the input voltage and load condition are the two...

  20. MCNP model for the many KE-Basin radiation sources

    International Nuclear Information System (INIS)

    Rittmann, P.D.

    1997-01-01

    This document presents a model for the location and strength of radiation sources in the accessible areas of KE-Basin which agrees well with data taken on a regular grid in September of 1996. This modelling work was requested to support dose rate reduction efforts in KE-Basin. Anticipated fuel removal activities require lower dose rates to minimize annual dose to workers. With this model, the effects of component cleanup or removal can be estimated in advance to evaluate their effectiveness. In addition, the sources contributing most to the radiation fields in a given location can be identified and dealt with

  1. Determining the VLF/ULF source height using phase measurements

    Science.gov (United States)

    Ryabov, A.; Kotik, D. S.

    2012-12-01

    Generation of ULF/VLF waves in the ionosphere using powerful RF facilities has been studied for the last 40 years, both theoretically and experimentally. During this time, several mechanisms have been proposed to explain the experimental results: modulation of ionospheric currents based on thermal nonlinearity, ponderomotive mechanisms for the generation of both VLF and ULF signals, cubic nonlinearity, etc. According to the mechanisms mentioned above, the VLF/ULF signal source could be located in the lower or upper ionosphere. The group velocity of signal propagation in the ionosphere is significantly smaller than the speed of light. As a result, an appreciable time delay of the received signals will occur at the earth's surface. This time delay can be determined by measuring the phase difference between the received and reference signals, which are GPS synchronized. The experiment on determining the time delay of ULF signal propagation from the ionospheric source was carried out at the SURA facility in 2012 and the results are presented in this paper. The comparison with a numerical simulation of the time delay using the adjusted IRI model and ionosonde data shows good agreement with the experimental observations. The work was supported by RFBR grant 11-02-00419-a and the RF Ministry of Education and Science under state contract 16.518.11.7066.
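
The measurement described here reduces to converting a phase lag between the received tone and the GPS-synchronized reference into a time delay at a known frequency. A minimal sketch (the helper name and degree-based interface are assumptions, not the SURA processing chain):

```python
import math

def time_delay(phase_received_deg, phase_reference_deg, f_hz):
    """Delay inferred from the phase lag of a tone at frequency f:
    dt = dphi / (2*pi*f), with the lag wrapped into [0, 360) degrees."""
    dphi = math.radians((phase_received_deg - phase_reference_deg) % 360)
    return dphi / (2 * math.pi * f_hz)

# A 1 kHz tone arriving 90 degrees behind the reference is delayed 0.25 ms.
print(time_delay(90, 0, 1000.0))  # -> 0.00025
```

Note the inherent 1/f ambiguity: delays longer than one period cannot be distinguished from the phase alone.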

  2. Large-Signal DG-MOSFET Modelling for RFID Rectification

    Directory of Open Access Journals (Sweden)

    R. Rodríguez

    2016-01-01

    Full Text Available This paper analyses the capability of undoped DG-MOSFETs for the operation of rectifiers for RFIDs and Wireless Power Transmission (WPT) at microwave frequencies. For this purpose, a large-signal compact model has been developed and implemented in Verilog-A. The model has been numerically validated with a device simulator (Sentaurus). It is found that the number of stages needed to achieve optimal rectifier performance is lower than that required with conventional MOSFETs. In addition, the DC output voltage could be increased with the use of appropriate mid-gap metals for the gate, such as TiN. The minor impact of short channel effects (SCEs) on rectification is also pointed out.

  3. Assessment of the Dominant Path Model and Field Measurements for NLOS DTV Signal Propagation

    Science.gov (United States)

    Adonias, Geoflly L.; Carvalho, Joabson N.

    2018-03-01

    In Brazil, one of the most important telecommunications systems is broadcast television. Such relevance demands extensive analysis in pursuit of technical excellence, in order to offer better digital transmission to the user. Therefore, it is mandatory to evaluate the quality and strength of the digital TV signal through studies of coverage prediction models, allowing stations to be designed so that their respective signals are harmoniously distributed. The purpose of this study is to appraise measurements of digital television signal obtained in the field and to compare them with numerical results from the simulation of the Dominant Path Model. The outcomes indicate possible blocking zones and a low accumulated probability index above the reception threshold, as well as characterise the gain level of the receiving antenna, which would prevent signal blocking.

  4. Model of electron contamination sources for photon beam radiotherapy

    International Nuclear Information System (INIS)

    Gonzalez Infantes, W.; Lallena Rojo, A. M.; Anguiano Millan, M.

    2013-01-01

    A model of virtual electron sources is proposed that allows the contamination sources to be reproduced from the input parameters of the patient representation. Comparing depth dose values and profiles calculated from the full simulation of the treatment heads with those calculated using the source model, it is found that the model is capable of reproducing depth dose distributions and profiles. (Author)

  5. Optimization of Excitation in FDTD Method and Corresponding Source Modeling

    Directory of Open Access Journals (Sweden)

    B. Dimitrijevic

    2015-04-01

    Full Text Available Source and excitation modeling in the FDTD formulation has a significant impact on the method performance and the required simulation time. Since an abrupt source introduction yields intensive numerical variations in the whole computational domain, a generally accepted solution is to introduce the source slowly, using appropriate shaping functions in time. The main goal of the optimization presented in this paper is to find a balance between two opposing demands: minimal required computation time and acceptable degradation of simulation performance. Reducing the time necessary for source activation and deactivation is an important issue, especially in the design of microwave structures, where the simulation is intensively repeated in the process of device parameter optimization. The optimized source models proposed here are realized and tested within an in-house FDTD simulation environment.
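
The "slow source introduction" idea can be illustrated with a raised-cosine turn-on applied to a sinusoidal excitation. The ramp length, frequency, and time step below are arbitrary illustrative choices, not the paper's optimized shaping functions.

```python
import math

def ramped_source(n, n_ramp, f, dt):
    """Sinusoidal source with a raised-cosine turn-on over n_ramp steps:
    the gradual ramp avoids the broadband transient an abrupt start
    would inject into the FDTD grid."""
    ramp = 0.5 * (1 - math.cos(math.pi * n / n_ramp)) if n < n_ramp else 1.0
    return ramp * math.sin(2 * math.pi * f * n * dt)

# 1 GHz excitation, 10 ps time step, 100-step ramp.
samples = [ramped_source(n, 100, 1e9, 1e-11) for n in range(300)]
print(round(samples[0], 3), round(max(samples), 3))
```

Shortening `n_ramp` trades a faster start against residual transient energy, which is exactly the balance the paper optimizes.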

  6. Multiobjective optimization model of intersection signal timing considering emissions based on field data: A case study of Beijing.

    Science.gov (United States)

    Kou, Weibin; Chen, Xumei; Yu, Lei; Gong, Huibo

    2018-04-18

    Most existing signal timing models are aimed at minimizing the total delay and stops at intersections, without considering environmental factors. This paper analyzes the trade-off between vehicle emissions and traffic efficiency on the basis of field data. First, considering the different operating modes of cruising, acceleration, deceleration, and idling, field data of emissions and Global Positioning System (GPS) records are collected to estimate emission rates for heavy-duty and light-duty vehicles. Second, a multiobjective signal timing optimization model is established based on a genetic algorithm to minimize delay, stops, and emissions. Finally, a case study is conducted in Beijing. Nine scenarios are designed considering different weights of emissions and traffic efficiency. The results, compared with those using the Highway Capacity Manual (HCM) 2010, show that signal timing optimized by the proposed model can decrease vehicle delays and emissions more significantly. Vehicle emissions are heavy at signalized intersections in urban areas, and the optimization model can be applied in different cities, providing support for eco-signal design and development.
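
The genetic-algorithm optimization described above can be sketched with a weighted-sum objective over green splits. The three cost functions are crude quadratic/linear stand-ins for the field-calibrated delay, stop, and emission curves, and all names and weights are assumptions for illustration.

```python
import random
random.seed(1)

# Hypothetical per-phase cost models (stand-ins for field-calibrated curves).
def delay(g):     return sum((t - 30) ** 2 for t in g)
def stops(g):     return sum(abs(t - 25) for t in g)
def emissions(g): return sum(0.5 * t + 100.0 / t for t in g)

def fitness(green, w=(0.4, 0.3, 0.3)):
    """Weighted-sum objective over green splits (seconds per phase)."""
    return w[0] * delay(green) + w[1] * stops(green) + w[2] * emissions(green)

def evolve(n_phases=4, pop=30, gens=60, lo=10, hi=60):
    """Tiny elitist GA: keep the best half, mutate it to refill the pool."""
    P = [[random.uniform(lo, hi) for _ in range(n_phases)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness)
        elite = P[:pop // 2]
        P = elite + [[max(lo, min(hi, t + random.gauss(0, 2)))
                      for t in random.choice(elite)]
                     for _ in range(pop - len(elite))]
    return min(P, key=fitness)

best = evolve()
print([round(t, 1) for t in best], round(fitness(best), 1))
```

Changing the weight vector `w` reproduces the paper's idea of scenarios that trade efficiency against emissions.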

  7. A Systems Thinking Model for Open Source Software Development in Social Media

    OpenAIRE

    Mustaquim, Moyen

    2010-01-01

    In this paper a social media model, based on systems thinking methodology, is proposed to understand the behavior of the open source software development community working in social media. The proposed model is focused on relational influences of two different systems: social media and the open source community. This model can be useful for taking decisions which are complicated and where solutions are not apparent. Based on the proposed model, an efficient way of working in open source developm...

  8. On the time delay of the ultrahigh-energy radiation signal from the source Cygnus X-3

    International Nuclear Information System (INIS)

    Arbuzov, B.A.; Razuvaev, E.A.

    1986-01-01

    The time delay of the signal from the source Cygnus X-3 detected by EAS observation with E ≥ 3×10^14 eV, counted off from the maximum of the radio burst in October 1985, is considered. The effect is shown to be explained within the framework of the earlier proposed interpretation of the ultrahigh-energy radiation as free gluons. The agreement of this interpretation with the totality of experimental data is emphasized. The possibility that relict gluons give a significant contribution to the density of the hidden mass in the Universe is discussed

  9. The Role of Skull Modeling in EEG Source Imaging for Patients with Refractory Temporal Lobe Epilepsy.

    Science.gov (United States)

    Montes-Restrepo, Victoria; Carrette, Evelien; Strobbe, Gregor; Gadeyne, Stefanie; Vandenberghe, Stefaan; Boon, Paul; Vonck, Kristl; Mierlo, Pieter van

    2016-07-01

    We investigated the influence of different skull modeling approaches on EEG source imaging (ESI), using data of six patients with refractory temporal lobe epilepsy who later underwent successful epilepsy surgery. Four realistic head models with different skull compartments, based on finite difference methods, were constructed for each patient: (i) Three models had skulls with compact and spongy bone compartments as well as air-filled cavities, segmented from either computed tomography (CT), magnetic resonance imaging (MRI) or a CT-template and (ii) one model included a MRI-based skull with a single compact bone compartment. In all patients we performed ESI of single and averaged spikes marked in the clinical 27-channel EEG by the epileptologist. To analyze at which time point the dipole estimations were closer to the resected zone, ESI was performed at two time instants: the half-rising phase and peak of the spike. The estimated sources for each model were validated against the resected area, as indicated by the postoperative MRI. Our results showed that single spike analysis was highly influenced by the signal-to-noise ratio (SNR), yielding estimations with smaller distances to the resected volume at the peak of the spike. Although averaging reduced the SNR effects, it did not always result in dipole estimations lying closer to the resection. The proposed skull modeling approaches did not lead to significant differences in the localization of the irritative zone from clinical EEG data with low spatial sampling density. Furthermore, we showed that a simple skull model (MRI-based) resulted in similar accuracy in dipole estimation compared to more complex head models (based on CT- or CT-template). Therefore, all the considered head models can be used in the presurgical evaluation of patients with temporal lobe epilepsy to localize the irritative zone from low-density clinical EEG recordings.

  10. Energy models for commercial energy prediction and substitution of renewable energy sources

    International Nuclear Information System (INIS)

    Iniyan, S.; Suganthi, L.; Samuel, Anand A.

    2006-01-01

    In this paper, three models are proposed, namely the Modified Econometric Mathematical (MEM) model, the Mathematical Programming Energy-Economy-Environment (MPEEE) model, and the Optimal Renewable Energy Mathematical (OREM) model. The actual demand for coal, oil and electricity is predicted using the MEM model based on economic, technological and environmental factors. The results were used in the MPEEE model, which determines the optimum allocation of commercial energy sources based on environmental limitations. The gap between the actual energy demand from the MEM model and the optimal energy use from the MPEEE model has to be met by renewable energy sources. The study develops an OREM model that would facilitate effective utilization of renewable energy sources in India, based on cost, efficiency, social acceptance, reliability, potential and demand. The economic variations in solar energy systems and the inclusion of an environmental constraint are also analyzed with the OREM model. The OREM model will help policy makers in the formulation and implementation of strategies concerning renewable energy sources in India for the next two decades

  11. Nonlinear signal processing using neural networks: Prediction and system modelling

    Energy Technology Data Exchange (ETDEWEB)

    Lapedes, A.; Farber, R.

    1987-06-01

    The backpropagation learning algorithm for neural networks is developed into a formalism for nonlinear signal processing. We illustrate the method by selecting two common topics in signal processing, prediction and system modelling, and show that nonlinear applications can be handled extremely well by using neural networks. The formalism is a natural, nonlinear extension of the linear Least Mean Squares algorithm commonly used in adaptive signal processing. Simulations are presented that document the additional performance achieved by using nonlinear neural networks. First, we demonstrate that the formalism may be used to predict points in a highly chaotic time series with orders of magnitude increase in accuracy over conventional methods including the Linear Predictive Method and the Gabor-Volterra-Weiner Polynomial Method. Deterministic chaos is thought to be involved in many physical situations including the onset of turbulence in fluids, chemical reactions and plasma physics. Secondly, we demonstrate the use of the formalism in nonlinear system modelling by providing a graphic example in which it is clear that the neural network has accurately modelled the nonlinear transfer function. It is interesting to note that the formalism provides explicit, analytic, global, approximations to the nonlinear maps underlying the various time series. Furthermore, the neural net seems to be extremely parsimonious in its requirements for data points from the time series. We show that the neural net is able to perform well because it globally approximates the relevant maps by performing a kind of generalized mode decomposition of the maps. 24 refs., 13 figs.
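
The report's central demonstration, backpropagation applied to one-step prediction of a chaotic series, can be sketched with a tiny tanh network trained by stochastic gradient descent. The logistic map stands in for the chaotic data, and the network size and learning rate are arbitrary illustrative choices.

```python
import math, random
random.seed(0)

# Chaotic training series from the logistic map x -> 4x(1-x).
xs = [0.3]
for _ in range(400):
    xs.append(4 * xs[-1] * (1 - xs[-1]))

H = 8                                     # hidden tanh units
w1 = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(H)]
w2 = [random.uniform(-1, 1) for _ in range(H)]

def predict(x):
    h = [math.tanh(a * x + b) for a, b in w1]
    return sum(wo * hi for wo, hi in zip(w2, h)), h

# Plain stochastic backpropagation on the one-step task x_t -> x_{t+1}.
lr = 0.05
for _ in range(200):
    for t in range(len(xs) - 1):
        y, h = predict(xs[t])
        err = y - xs[t + 1]
        grad1 = [(err * wo * (1 - hi * hi) * xs[t],
                  err * wo * (1 - hi * hi)) for wo, hi in zip(w2, h)]
        w2 = [wo - lr * err * hi for wo, hi in zip(w2, h)]
        w1 = [(a - lr * ga, b - lr * gb)
              for (a, b), (ga, gb) in zip(w1, grad1)]

mse = sum((predict(x)[0] - y) ** 2 for x, y in zip(xs, xs[1:])) / (len(xs) - 1)
print("one-step MSE:", round(mse, 4))
```

The network ends up approximating the map's parabola globally, which mirrors the report's point that the net builds an explicit approximation to the underlying map.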

  12. Wireless Energy Harvesting Using Signals from Multiple Fading Channels

    KAUST Repository

    Chen, Yunfei

    2017-08-01

    In this paper, we study the average, the probability density function and the cumulative distribution function of the harvested power. In the study, the signals are transmitted from multiple sources. The channels are assumed to be either Rician fading or Gamma-shadowed Rician fading. The received signals are then harvested by using either a single harvester for simultaneous transmissions or multiple harvesters for transmissions at different frequencies, antennas or time slots. Both linear and nonlinear models for the energy harvester at the receiver are examined. Numerical results are presented to show that, when a large amount of harvested power is required, a single harvester or the linear range of a practical nonlinear harvester is more efficient at avoiding power outage. Further, the power transfer strategy can be optimized for fixed total power. Specifically, for Rayleigh fading, the optimal strategy is to put the total power at the source with the best channel condition and switch off all other sources, while for general Rician fading, the optimum magnitudes and phases of the transmitting waveforms depend on the channel parameters.
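
The linear-versus-nonlinear harvester distinction can be illustrated with the simplest saturating model: output grows linearly with input power until it clips at a saturation level. The efficiency and saturation values below are illustrative assumptions, not the paper's harvester parameters.

```python
def harvested_power(p_in, eta=0.6, p_sat=10e-3):
    """Piecewise-linear harvester model: linear with efficiency eta,
    saturating at p_sat watts (both values are illustrative)."""
    return min(eta * p_in, p_sat)

# Below saturation the harvester is linear; above it, extra input is wasted.
for p in (1e-3, 5e-3, 20e-3, 100e-3):
    print(f"in {p:.3f} W -> out {harvested_power(p):.4f} W")
```

This is why, when a large amount of power is needed, staying in the linear range (or splitting across harvesters) avoids throwing away input power at the saturated end.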

  13. An Analysis/Synthesis System of Audio Signal with Utilization of an SN Model

    Directory of Open Access Journals (Sweden)

    G. Rozinaj

    2004-12-01

    Full Text Available An SN (sinusoids plus noise) model is a spectral model, in which the periodic components of the sound are represented by sinusoids with time-varying frequencies, amplitudes and phases. The remaining non-periodic components are represented by a filtered noise. The sinusoidal model utilizes physical properties of musical instruments and the noise model utilizes the human inability to perceive the exact spectral shape or the phase of stochastic signals. SN modeling can be applied in compression, transformation, separation of sounds, etc. The designed system is based on methods used in SN modeling. We have proposed a model that achieves good results in audio perception. Although many systems do not save the phases of the sinusoids, they are important for better modelling of transients, for the computation of the residual and, last but not least, for stereo signals, too. One of the fundamental properties of the proposed system is the ability of signal reconstruction not only from the amplitude but from the phase point of view, as well.
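
The synthesis side of the SN idea can be sketched directly: sum the sinusoidal partials (keeping their phases, as the abstract stresses) and add the noise residual. The sample rate, partials, and placeholder zero residual are illustrative assumptions, not the authors' analysis chain.

```python
import math

def sn_synthesize(partials, noise, sr=8000, dur=0.01):
    """Sinusoids-plus-noise synthesis: each partial is (freq, amp, phase);
    the non-periodic residual is supplied as pre-filtered noise samples."""
    n = int(sr * dur)
    out = []
    for i in range(n):
        t = i / sr
        s = sum(a * math.sin(2 * math.pi * f * t + ph)
                for f, a, ph in partials)
        out.append(s + noise[i % len(noise)])
    return out

# Two harmonic partials with explicit phases; residual set to silence here.
sig = sn_synthesize([(440.0, 1.0, 0.0), (880.0, 0.5, math.pi / 4)],
                    noise=[0.0] * 16)
print(len(sig), round(max(sig), 3))
```

In a full system the partial parameters vary frame by frame and the residual is shaped noise from the analysis stage.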

  14. Evaluation of dual-source parallel RF excitation for diffusion-weighted whole-body MR imaging with background body signal suppression at 3.0 T.

    Science.gov (United States)

    Mürtz, Petra; Kaschner, Marius; Träber, Frank; Kukuk, Guido M; Büdenbender, Sarah M; Skowasch, Dirk; Gieseke, Jürgen; Schild, Hans H; Willinek, Winfried A

    2012-11-01

    To evaluate the use of dual-source parallel RF excitation (TX) for diffusion-weighted whole-body MRI with background body signal suppression (DWIBS) at 3.0 T. Forty consecutive patients were examined on a clinical 3.0-T MRI system using a diffusion-weighted (DW) spin-echo echo-planar imaging sequence with a combination of short TI inversion recovery and slice-selective gradient reversal fat suppression. DWIBS of the neck (n=5), thorax (n=8), abdomen (n=6) and pelvis (n=21) was performed both with TX (2:56 min) and with standard single-source RF excitation (4:37 min). The quality of DW images and reconstructed inverted maximum intensity projections was visually judged by two readers (blinded to acquisition technique). Signal homogeneity and fat suppression were scored as "improved", "equal", "worse" or "ambiguous". Moreover, the apparent diffusion coefficient (ADC) values were measured in muscles, urinary bladder, lymph nodes and lesions. By the use of TX, signal homogeneity was "improved" in 25/40 and "equal" in 15/40 cases. Fat suppression was "improved" in 17/40 and "equal" in 23/40 cases. These improvements were statistically significant. TX improved DWIBS at 3.0 T with respect to signal homogeneity and fat suppression, reduced scan time by approximately one-third, and did not influence the measured ADC values. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  15. Degree of polarization and source counts of faint radio sources from Stacking Polarized intensity

    International Nuclear Information System (INIS)

    Stil, J. M.; George, S. J.; Keller, B. W.; Taylor, A. R.

    2014-01-01

    We present stacking polarized intensity as a means to study the polarization of sources that are too faint to be detected individually in surveys of polarized radio sources. Stacking offers not only high sensitivity to the median signal of a class of radio sources, but also avoids a detection threshold in polarized intensity, and therefore an arbitrary exclusion of sources with a low percentage of polarization. Correction for polarization bias is done through a Monte Carlo analysis and tested on a simulated survey. We show that the nonlinear relation between the real polarized signal and the detected signal requires knowledge of the shape of the distribution of fractional polarization, which we constrain using the ratio of the upper quartile to the lower quartile of the distribution of stacked polarized intensities. Stacking polarized intensity for NRAO VLA Sky Survey (NVSS) sources down to the detection limit in Stokes I, we find a gradual increase in median fractional polarization that is consistent with a trend that was noticed before for bright NVSS sources, but is much more gradual than found by previous deep surveys of radio polarization. Consequently, the polarized radio source counts derived from our stacking experiment predict fewer polarized radio sources for future surveys with the Square Kilometre Array and its pathfinders.
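
The polarization bias the abstract corrects for can be demonstrated with a small Monte Carlo: polarized intensity is the magnitude of a noisy (Q, U) vector, so even an unpolarized source yields a positive stacked median. All names and parameter values below are illustrative, not the NVSS analysis.

```python
import math, random
random.seed(2)

def rice_sample(p_true, sigma):
    """Observed polarized intensity: |(p_true + noise_Q, noise_U)|."""
    q = p_true + random.gauss(0, sigma)
    u = random.gauss(0, sigma)
    return math.hypot(q, u)

def median(v):
    s = sorted(v)
    return s[len(s) // 2]

# Even with zero true polarization the stacked median is well above zero:
# this is the bias a Monte Carlo correction must remove before the median
# fractional polarization can be interpreted.
biased = median([rice_sample(0.0, 1.0) for _ in range(5001)])
true_half = median([rice_sample(0.5, 1.0) for _ in range(5001)])
print(round(biased, 2), round(true_half, 2))
```

The nonlinearity is visible in how close the two medians are despite very different true signals, which is why the correction needs the shape of the underlying fractional-polarization distribution.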

  16. The Chandra Source Catalog 2.0: Spectral Properties

    Science.gov (United States)

    McCollough, Michael L.; Siemiginowska, Aneta; Burke, Douglas; Nowak, Michael A.; Primini, Francis Anthony; Laurino, Omar; Nguyen, Dan T.; Allen, Christopher E.; Anderson, Craig S.; Budynkiewicz, Jamie A.; Chen, Judy C.; Civano, Francesca Maria; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Ian N.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Graessle, Dale E.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Lee, Nicholas P.; Martínez-Galarza, Juan Rafael; McDowell, Jonathan C.; Miller, Joseph; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Paxson, Charles; Plummer, David A.; Rots, Arnold H.; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula; Chandra Source Catalog Team

    2018-01-01

    The second release of the Chandra Source Catalog (CSC) contains all sources identified from sixteen years' worth of publicly accessible observations. The vast majority of these sources have been observed with the ACIS detector and have spectral information in the 0.5-7 keV energy range. Here we describe the methods used to automatically derive spectral properties for each source detected by the standard processing pipeline and included in the final CSC. The sources with high signal-to-noise ratio (exceeding 150 net counts) were fit in Sherpa (the modeling and fitting application from the Chandra Interactive Analysis of Observations package) using wstat as a fit statistic and the Bayesian draws method to determine errors. Three models were fit to each source: an absorbed power-law, blackbody, and Bremsstrahlung emission. The fitted parameter values for the power-law, blackbody, and Bremsstrahlung models were included in the catalog with the calculated flux for each model. The CSC also provides the source energy fluxes computed from the normalizations of predefined absorbed power-law, blackbody, Bremsstrahlung, and APEC models needed to match the observed net X-ray counts. For sources that have been observed multiple times, a Bayesian Blocks analysis will have been performed (see the Primini et al. poster) and the most significant block will have a joint fit performed for the mentioned spectral models. In addition, we provide access to data products for each source: a file with the source spectrum, the background spectrum, and the spectral response of the detector. Hardness ratios were calculated for each source between pairs of energy bands (soft, medium and hard). This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.
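
The hardness ratios mentioned at the end are conventionally the fractional count difference between two bands. A minimal sketch with hypothetical net counts (the function name and example numbers are assumptions, not CSC pipeline values):

```python
def hardness_ratio(hard_counts, soft_counts):
    """Fractional-difference hardness ratio between two energy bands:
    HR = (H - S) / (H + S), in [-1, 1]; harder spectra give larger HR."""
    total = hard_counts + soft_counts
    return (hard_counts - soft_counts) / total if total else 0.0

# Example with hypothetical net counts in two bands.
print(hardness_ratio(120, 80))  # -> 0.2
```

In the catalog this is computed for each pair of the soft, medium, and hard bands, giving a crude spectral shape even for sources too faint to fit.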

  17. Modeling oscillatory control in NF-κB, p53 and Wnt signaling

    DEFF Research Database (Denmark)

    Mengel, Benedicte; Hunziker, Alexander; Pedersen, Lykke

    2010-01-01

    Oscillations are commonly observed in cellular behavior and span a wide range of timescales, from seconds in calcium signaling to 24 hours in circadian rhythms. In between lie oscillations with time periods of 1-5 hours seen in NF-κB, p53 and Wnt signaling, which play key roles in the immune system, cell growth/death and embryo development, respectively. In the first part of this article, we provide a brief overview of simple deterministic models of oscillations. In particular, we explain the mechanism of saturated degradation that has been used to model oscillations in the NF-κB, p53 and Wnt

  18. Extended nonnegative tensor factorisation models for musical sound source separation.

    Science.gov (United States)

    FitzGerald, Derry; Cranitch, Matt; Coyle, Eugene

    2008-01-01

    Recently, shift-invariant tensor factorisation algorithms have been proposed for the purposes of sound source separation of pitched musical instruments. However, in practice, existing algorithms require the use of log-frequency spectrograms to allow shift invariance in frequency which causes problems when attempting to resynthesise the separated sources. Further, it is difficult to impose harmonicity constraints on the recovered basis functions. This paper proposes a new additive synthesis-based approach which allows the use of linear-frequency spectrograms as well as imposing strict harmonic constraints, resulting in an improved model. Further, these additional constraints allow the addition of a source filter model to the factorisation framework, and an extended model which is capable of separating mixtures of pitched and percussive instruments simultaneously.

  19. Extended Nonnegative Tensor Factorisation Models for Musical Sound Source Separation

    Directory of Open Access Journals (Sweden)

    Derry FitzGerald

    2008-01-01

    Full Text Available Recently, shift-invariant tensor factorisation algorithms have been proposed for the purposes of sound source separation of pitched musical instruments. However, in practice, existing algorithms require the use of log-frequency spectrograms to allow shift invariance in frequency which causes problems when attempting to resynthesise the separated sources. Further, it is difficult to impose harmonicity constraints on the recovered basis functions. This paper proposes a new additive synthesis-based approach which allows the use of linear-frequency spectrograms as well as imposing strict harmonic constraints, resulting in an improved model. Further, these additional constraints allow the addition of a source filter model to the factorisation framework, and an extended model which is capable of separating mixtures of pitched and percussive instruments simultaneously.

  20. GRAVITATIONAL WAVE SIGNAL FROM ASSEMBLING THE LIGHTEST SUPERMASSIVE BLACK HOLES

    International Nuclear Information System (INIS)

    Holley-Bockelmann, Kelly; Micic, Miroslav; Sigurdsson, Steinn; Rubbo, Louis J.

    2010-01-01

    We calculate the gravitational wave signal from the growth of 10^7 M_sun supermassive black holes (SMBHs) from the remnants of Population III stars. The assembly of these lower-mass black holes (BHs) is particularly important because observing SMBHs in this mass range is one of the primary science goals for the Laser Interferometer Space Antenna (LISA), a planned NASA/ESA mission to detect gravitational waves. We use high-resolution cosmological N-body simulations to track the merger history of the host dark matter halos, and model the growth of the SMBHs with a semianalytic approach that combines dynamical friction, gas accretion, and feedback. We find that the most common source in the LISA band from our volume consists of mergers between intermediate-mass BHs and SMBHs at redshifts less than 2. This type of high mass ratio merger has not been widely considered in the gravitational wave community; detection and characterization of this signal will likely require a different technique than is used for SMBH mergers or extreme mass ratio inspirals. We find that the event rate of this new LISA source depends on prescriptions for gas accretion onto the BH as well as an accurate model of the dynamics on a galaxy scale; our best estimate yields ∼40 sources with a signal-to-noise ratio greater than 30 occurring within a volume like the Local Group during SMBH assembly. Extrapolated over the volume of the universe, this yields ∼500 observed events over 10 years, although the accuracy of this rate is affected by cosmic variance.

  1. Evaluation of dual-source parallel RF excitation for diffusion-weighted whole-body MR imaging with background body signal suppression at 3.0 T

    Energy Technology Data Exchange (ETDEWEB)

    Muertz, Petra, E-mail: petra.muertz@ukb.uni-bonn.de [Department of Radiology, University of Bonn (Germany); Kaschner, Marius, E-mail: marius.kaschner@ukb.uni-bonn.de [Department of Radiology, University of Bonn (Germany); Traeber, Frank, E-mail: frank.traeber@ukb.uni-bonn.de [Department of Radiology, University of Bonn (Germany); Kukuk, Guido M., E-mail: guido.kukuk@ukb.uni-bonn.de [Department of Radiology, University of Bonn (Germany); Buedenbender, Sarah M., E-mail: sarah_m_buedenbender@yahoo.de [Department of Radiology, University of Bonn (Germany); Skowasch, Dirk, E-mail: dirk.skowasch@ukb.uni-bonn.de [Department of Medicine, University of Bonn (Germany); Gieseke, Juergen, E-mail: juergen.gieseke@philips.com [Philips Healthcare, Best (Netherlands); Department of Radiology, University of Bonn (Germany); Schild, Hans H., E-mail: hans.schild@ukb.uni-bonn.de [Department of Radiology, University of Bonn (Germany); Willinek, Winfried A., E-mail: winfried.willinek@ukb.uni-bonn.de [Department of Radiology, University of Bonn (Germany)

    2012-11-15

    Purpose: To evaluate the use of dual-source parallel RF excitation (TX) for diffusion-weighted whole-body MRI with background body signal suppression (DWIBS) at 3.0 T. Materials and methods: Forty consecutive patients were examined on a clinical 3.0-T MRI system using a diffusion-weighted (DW) spin-echo echo-planar imaging sequence with a combination of short TI inversion recovery and slice-selective gradient reversal fat suppression. DWIBS of the neck (n = 5), thorax (n = 8), abdomen (n = 6) and pelvis (n = 21) was performed both with TX (2:56 min) and with standard single-source RF excitation (4:37 min). The quality of DW images and reconstructed inverted maximum intensity projections was visually judged by two readers (blinded to acquisition technique). Signal homogeneity and fat suppression were scored as 'improved', 'equal', 'worse' or 'ambiguous'. Moreover, the apparent diffusion coefficient (ADC) values were measured in muscles, urinary bladder, lymph nodes and lesions. Results: By the use of TX, signal homogeneity was 'improved' in 25/40 and 'equal' in 15/40 cases. Fat suppression was 'improved' in 17/40 and 'equal' in 23/40 cases. These improvements were statistically significant (p < 0.001, Wilcoxon signed-rank test). In five patients, fluid-related dielectric shading was present, which improved remarkably. The ADC values did not significantly differ for the two RF excitation methods (p = 0.630 over all data, pairwise Student's t-test). Conclusion: Dual-source parallel RF excitation improved image quality of DWIBS at 3.0 T with respect to signal homogeneity and fat suppression, reduced scan time by approximately one-third, and did not influence the measured ADC values.
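
    The ADC values compared in this study follow from the mono-exponential diffusion model S(b) = S0 * exp(-b * ADC). A minimal two-point sketch is below; the b-values are illustrative defaults, not those used in the study.

```python
import math

def adc(s_low_b, s_high_b, b_low=0.0, b_high=1000.0):
    """Apparent diffusion coefficient (mm^2/s) from signal intensities
    measured at two b-values (s/mm^2), assuming mono-exponential decay:
    ADC = ln(S_low / S_high) / (b_high - b_low)."""
    return math.log(s_low_b / s_high_b) / (b_high - b_low)
```

    For example, tissue whose signal drops from 1000 at b=0 to 1000*e^-1 at b=1000 s/mm^2 has an ADC of 1.0e-3 mm^2/s.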

  2. Parallel Beam Dynamics Simulation Tools for Future Light Source Linac Modeling

    International Nuclear Information System (INIS)

    Qiang, Ji; Pogorelov, Ilya v.; Ryne, Robert D.

    2007-01-01

    Large-scale modeling on parallel computers is playing an increasingly important role in the design of future light sources. Such modeling provides a means to accurately and efficiently explore issues such as limits to beam brightness, emittance preservation, the growth of instabilities, etc. Recently the IMPACT code suite was enhanced to be applicable to future light source design. Simulations with IMPACT-Z were performed using up to one billion simulation particles for the main linac of a future light source to study the microbunching instability. Combined with the time-domain code IMPACT-T, it is now possible to perform large-scale start-to-end linac simulations for future light sources, including the injector, main linac, chicanes, and transfer lines. In this paper we provide an overview of the IMPACT code suite, its key capabilities, and recent enhancements pertinent to accelerator modeling for future linac-based light sources.

  3. Modeling Guidelines for Code Generation in the Railway Signaling Context

    Science.gov (United States)

    Ferrari, Alessio; Bacherini, Stefano; Fantechi, Alessandro; Zingoni, Niccolo

    2009-01-01

    Modeling guidelines constitute one of the fundamental cornerstones for Model Based Development. Their relevance is essential when dealing with code generation in the safety-critical domain. This article presents the experience of a railway signaling systems manufacturer on this issue. Introduction of Model-Based Development (MBD) and code generation in the industrial safety-critical sector created a crucial paradigm shift in the development process of dependable systems. While traditional software development focuses on the code, with MBD practices the focus shifts to model abstractions. The change has fundamental implications for safety-critical systems, which still need to guarantee a high degree of confidence also at code level. Usage of the Simulink/Stateflow platform for modeling, which is a de facto standard in control software development, does not ensure by itself production of high-quality dependable code. This issue has been addressed by companies through the definition of modeling rules imposing restrictions on the usage of design tool components, in order to enable production of qualified code. The MAAB Control Algorithm Modeling Guidelines (MathWorks Automotive Advisory Board) [3] are a well-established set of publicly available rules for modeling with Simulink/Stateflow. This set of recommendations has been developed by a group of OEMs and suppliers of the automotive sector with the objective of enforcing and easing the usage of the MathWorks tools within the automotive industry. The guidelines were published in 2001 and afterwards revised in 2007 in order to integrate some additional rules developed by the Japanese division of MAAB [5]. The scope of the current edition of the guidelines ranges from model maintainability and readability to code generation issues. The rules are conceived as a reference baseline and therefore they need to be tailored to comply with the characteristics of each industrial context. Customization of these

  4. Mathematical model with autoregressive process for electrocardiogram signals

    Science.gov (United States)

    Evaristo, Ronaldo M.; Batista, Antonio M.; Viana, Ricardo L.; Iarosz, Kelly C.; Szezech, José D., Jr.; Godoy, Moacir F. de

    2018-04-01

    The cardiovascular system is composed of the heart, blood and blood vessels. Regarding the heart, cardiac conditions are determined by the electrocardiogram, which is a noninvasive medical procedure. In this work, we propose an autoregressive process in a mathematical model based on coupled differential equations in order to obtain the tachograms and the electrocardiogram signals of young adults with normal heartbeats. Our results are compared with an experimental tachogram by means of Poincaré plots and detrended fluctuation analysis. We verify that the results from the model with the autoregressive process show good agreement with experimental measures from a tachogram generated by the electrical activity of the heartbeat. With the tachogram we build the electrocardiogram by means of coupled differential equations.
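
    The autoregressive ingredient can be sketched in isolation. Below is a generic AR(2) generator of the kind used to add beat-to-beat variability; the coefficients and noise scale are illustrative, not the paper's fitted values.

```python
import random

def ar2(n, a1=0.5, a2=-0.3, sigma=0.05, seed=42):
    """Generate n samples of a (stationary, for these coefficients)
    AR(2) process x_t = a1*x_{t-1} + a2*x_{t-2} + e_t, with Gaussian
    innovations e_t of standard deviation sigma."""
    rng = random.Random(seed)
    x = [0.0, 0.0]                   # zero initial conditions
    for _ in range(n):
        x.append(a1 * x[-1] + a2 * x[-2] + rng.gauss(0.0, sigma))
    return x[2:]
```

    Added to a deterministic RR-interval baseline, such a series produces the irregular tachogram that the model then feeds into the coupled differential equations.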

  5. AutoLens: Automated Modeling of a Strong Lens's Light, Mass and Source

    Science.gov (United States)

    Nightingale, J. W.; Dye, S.; Massey, Richard J.

    2018-05-01

    This work presents AutoLens, the first entirely automated modeling suite for the analysis of galaxy-scale strong gravitational lenses. AutoLens simultaneously models the lens galaxy's light and mass whilst reconstructing the extended source galaxy on an adaptive pixel-grid. The method's approach to source-plane discretization is amorphous, adapting its clustering and regularization to the intrinsic properties of the lensed source. The lens's light is fitted using a superposition of Sersic functions, allowing AutoLens to cleanly deblend its light from the source. Single-component mass models representing the lens's total mass density profile are demonstrated, which in conjunction with light modeling can detect central images using a centrally cored profile. Decomposed mass modeling is also shown, which can fully decouple a lens's light and dark matter and determine whether the two components are geometrically aligned. The complexity of the light and mass models is automatically chosen via Bayesian model comparison. These steps form AutoLens's automated analysis pipeline, such that all results in this work are generated without any user intervention. This is rigorously tested on a large suite of simulated images, assessing its performance on a broad range of lens profiles, source morphologies and lensing geometries. The method's performance is excellent, with accurate light, mass and source profiles inferred for data sets representative of both existing Hubble imaging and future Euclid wide-field observations.
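
    A single Sersic component of the kind superposed in the light model can be sketched as follows, using the common approximation b_n ≈ 2n - 1/3 (reasonable for n above about 0.5); AutoLens itself fits several such components with its own parameterization.

```python
import math

def sersic(r, I_e, R_e, n):
    """Sersic surface-brightness profile
    I(r) = I_e * exp(-b_n * ((r / R_e)**(1/n) - 1)),
    where I_e is the intensity at the effective radius R_e and n is the
    Sersic index (n=1 exponential disk, n=4 de Vaucouleurs bulge)."""
    b_n = 2.0 * n - 1.0 / 3.0        # standard approximation to b_n
    return I_e * math.exp(-b_n * ((r / R_e) ** (1.0 / n) - 1.0))
```

    By construction I(R_e) = I_e for any index, which makes R_e a convenient anchor when deblending the lens light from the source.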

  6. Detecting Large-Scale Brain Networks Using EEG: Impact of Electrode Density, Head Modeling and Source Localization

    Science.gov (United States)

    Liu, Quanying; Ganzetti, Marco; Wenderoth, Nicole; Mantini, Dante

    2018-01-01

    Resting state networks (RSNs) in the human brain were recently detected using high-density electroencephalography (hdEEG). This was done by using an advanced analysis workflow to estimate neural signals in the cortex and to assess functional connectivity (FC) between distant cortical regions. FC analyses were conducted either using temporal (tICA) or spatial independent component analysis (sICA). Notably, EEG-RSNs obtained with sICA were very similar to RSNs retrieved with sICA from functional magnetic resonance imaging data. It still remains to be clarified, however, what technological aspects of hdEEG acquisition and analysis primarily influence this correspondence. Here we examined to what extent the detection of EEG-RSN maps by sICA depends on the electrode density, the accuracy of the head model, and the source localization algorithm employed. Our analyses revealed that the collection of EEG data using a high-density montage is crucial for RSN detection by sICA, but also the use of appropriate methods for head modeling and source localization have a substantial effect on RSN reconstruction. Overall, our results confirm the potential of hdEEG for mapping the functional architecture of the human brain, and highlight at the same time the interplay between acquisition technology and innovative solutions in data analysis. PMID:29551969

  7. Modeling water demand when households have multiple sources of water

    Science.gov (United States)

    Coulibaly, Lassina; Jakus, Paul M.; Keith, John E.

    2014-07-01

    A significant portion of the world's population lives in areas where public water delivery systems are unreliable and/or deliver poor quality water. In response, people have developed important alternatives to publicly supplied water. To date, most water demand research has been based on single-equation models for a single source of water, with very few studies that have examined water demand from two sources of water (where all nonpublic system water sources have been aggregated into a single demand). This modeling approach leads to two outcomes. First, the demand models do not capture the full range of alternatives, so the true economic relationship among the alternatives is obscured. Second, and more seriously, economic theory predicts that demand for a good becomes more price-elastic as the number of close substitutes increases. If researchers artificially limit the number of alternatives studied to something less than the true number, the price elasticity estimate may be biased downward. This paper examines water demand in a region with near universal access to piped water, but where system reliability and quality is such that many alternative sources of water exist. In extending the demand analysis to four sources of water, we are able to (i) demonstrate why households choose the water sources they do, (ii) provide a richer description of the demand relationships among sources, and (iii) calculate own-price elasticity estimates that are more elastic than those generally found in the literature.
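
    The elasticity argument can be made concrete with the standard midpoint (arc) formula for own-price elasticity. The quantities and prices below are purely illustrative, not taken from the study.

```python
def arc_elasticity(q1, q2, p1, p2):
    """Midpoint (arc) own-price elasticity of demand:
    pct change in quantity / pct change in price, each measured
    relative to the midpoint. Values below -1 indicate elastic demand,
    as expected when close substitutes are modeled explicitly."""
    dq = (q2 - q1) / ((q1 + q2) / 2.0)
    dp = (p2 - p1) / ((p1 + p2) / 2.0)
    return dq / dp
```

    For instance, if a price rise from 1.0 to 1.5 cuts consumption from 100 to 80 units, the arc elasticity is about -0.56 (inelastic); omitting substitute sources from the model tends to push estimates toward such low magnitudes.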

  8. Improved moving source photometry with TRIPPy

    Science.gov (United States)

    Alexandersen, Mike; Fraser, Wesley Cristopher

    2017-10-01

    Photometry of moving sources is more complicated than for stationary sources, because moving sources trail their signal out over more pixels than a point source of the same magnitude. Using a circular aperture of the same size as would be appropriate for point sources can cut out a large amount of flux if a moving source moves substantially relative to the size of the aperture during the exposure, resulting in underestimated fluxes. Using a large circular aperture can mitigate this issue, at the cost of a significantly reduced signal-to-noise ratio compared to a point source, as a result of the inclusion of a larger background region within the aperture. Trailed Image Photometry in Python (TRIPPy) solves this problem by using a pill-shaped aperture: the traditional circular aperture is sliced in half perpendicular to the direction of motion and separated by a rectangle as long as the total motion of the source during the exposure. TRIPPy can also calculate the appropriate aperture correction (which will depend on both the radius and the trail length of the pill-shaped aperture), and has features for selecting good PSF stars, creating a PSF model (convolved Moffat profile + lookup table) and selecting a custom sky-background area in order to ensure no other sources contribute to the background estimate. In this poster, we present an overview of the TRIPPy features and demonstrate the improvements resulting from using TRIPPy compared to photometry obtained by other methods, with examples from real projects where TRIPPy has been implemented in order to obtain the best-possible photometric measurements of Solar System objects. While TRIPPy has so far mainly been used for Trans-Neptunian Objects, the improvement from using the pill-shaped aperture increases with source motion, making TRIPPy highly relevant for asteroid and centaur photometry as well.
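
    The geometry of the pill aperture makes its area simple to state: two half-circles joined by a rectangle whose length is the source's motion during the exposure. The sketch below illustrates that geometry only; it is not TRIPPy's actual API.

```python
import math

def pill_area(radius, trail_length):
    """Area of a pill-shaped aperture: a circle of the given radius
    (the two halves of the sliced circular aperture) plus a rectangle
    of width 2*radius and the given trail length."""
    return math.pi * radius**2 + 2.0 * radius * trail_length
```

    With zero trail the pill reduces to the ordinary circular aperture, and the excess area (hence the extra background admitted) grows linearly with the source's motion.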

  9. Explaining neural signals in human visual cortex with an associative learning model.

    Science.gov (United States)

    Jiang, Jiefeng; Schmajuk, Nestor; Egner, Tobias

    2012-08-01

    "Predictive coding" models posit a key role for associative learning in visual cognition, viewing perceptual inference as a process of matching (learned) top-down predictions (or expectations) against bottom-up sensory evidence. At the neural level, these models propose that each region along the visual processing hierarchy entails one set of processing units encoding predictions of bottom-up input, and another set computing mismatches (prediction error or surprise) between predictions and evidence. This contrasts with traditional views of visual neurons operating purely as bottom-up feature detectors. In support of the predictive coding hypothesis, a recent human neuroimaging study (Egner, Monti, & Summerfield, 2010) showed that neural population responses to expected and unexpected face and house stimuli in the "fusiform face area" (FFA) could be well-described as a summation of hypothetical face-expectation and -surprise signals, but not by feature detector responses. Here, we used computer simulations to test whether these imaging data could be formally explained within the broader framework of a mathematical neural network model of associative learning (Schmajuk, Gray, & Lam, 1996). Results show that FFA responses could be fit very closely by model variables coding for conditional predictions (and their violations) of stimuli that unconditionally activate the FFA. These data document that neural population signals in the ventral visual stream that deviate from classic feature detection responses can formally be explained by associative prediction and surprise signals.

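    The core prediction/surprise decomposition can be illustrated with the simplest associative rule. This is a generic Rescorla-Wagner-style delta-rule sketch, not the Schmajuk, Gray, and Lam network simulated in the paper; the learning rate is illustrative.

```python
def delta_rule(outcomes, alpha=0.2):
    """Track a prediction V and a prediction-error (surprise) signal
    for an outcome stream using the delta rule V += alpha*(outcome - V).
    Returns the final prediction and the per-trial surprise values."""
    V = 0.0
    errors = []
    for o in outcomes:
        err = o - V          # surprise: mismatch of evidence and prediction
        V += alpha * err     # prediction moves toward the evidence
        errors.append(err)
    return V, errors
```

    Under repeated expected stimuli the surprise signal decays toward zero while the prediction saturates, which is the qualitative pattern the summation account attributes to the FFA population response.
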
  10. Stability analysis of direct current control in current source rectifier

    DEFF Research Database (Denmark)

    Lu, Dapeng; Wang, Xiongfei; Blaabjerg, Frede

    2017-01-01

    Current source rectifier with high switching frequency has a great potential for improving the power efficiency and power density in ac-dc power conversion. This paper analyzes the stability of direct current control based on the time delay effect. Small signal model including dynamic behaviors...

  11. Drought Signaling in Plants

    Indian Academy of Sciences (India)

    depending upon the source and nature of signaling: (i) hormone signal, (ii) .... plants to regulate the rate of transpiration through minor structural .... cell has to keep spending energy (in the form of ATP) to maintain a .... enzymes and proteins in the regulation of cellular metabolism can be determined by either inactivating.

  12. Non-linear signal response functions and their effects on the statistical and noise cancellation properties of isotope ratio measurements by multi-collector plasma mass spectrometry

    International Nuclear Information System (INIS)

    Doherty, W.

    2013-01-01

    A nebulizer-centric response function model of the analytical inductively coupled argon plasma ion source was used to investigate the statistical frequency distributions and noise reduction factors of simultaneously measured flicker-noise-limited isotope ion signals and their ratios. The response function model was extended by assuming i) a single Gaussian-distributed random noise source (nebulizer gas pressure fluctuations) and ii) that the isotope ion signal response is a parabolic function of the nebulizer gas pressure. Model calculations of ion signal and signal ratio histograms were obtained by applying the statistical method of translation to the non-linear response function model of the plasma. Histograms of Ni, Cu, Pr, Tl and Pb isotope ion signals measured using a multi-collector plasma mass spectrometer were, without exception, negatively skewed. Histograms of the corresponding isotope ratios of Ni, Cu, Tl and Pb were either positively or negatively skewed. There was complete agreement between the measured and model-calculated histogram skew properties. The nebulizer-centric response function model was also used to investigate the effect of non-linear response functions on the effectiveness of noise cancellation by signal division. An alternative noise correction procedure suitable for parabolic signal response functions was derived and applied to measurements of isotope ratios of Cu, Ni, Pb and Tl. The largest noise reduction factors were always obtained when the non-linearity of the response functions was taken into account by the isotope ratio calculation. Possible applications of the nebulizer-centric response function model to other types of analytical instrumentation, large-amplitude signal noise sources (e.g., lasers, pumped nebulizers) and analytical error in isotope ratio measurements by multi-collector plasma mass spectrometry are discussed. - Highlights: ► Isotope ion signal noise is modelled as a parabolic transform of a Gaussian variable. ► Flicker
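
    The skew mechanism can be sketched directly: mapping Gaussian fluctuations through a parabolic response produces a skewed output, with the skew taking the sign of the quadratic coefficient. The response coefficients below are illustrative, not fitted values from the paper.

```python
import random

def parabolic_response_skew(n=100_000, b=1.0, c=-0.2, seed=1):
    """Map Gaussian 'nebulizer pressure' fluctuations x through a
    parabolic response y = b*x + c*x**2 and return the sample skewness
    of y. Negative curvature (c < 0) yields the negative skew reported
    for the measured isotope ion signals."""
    rng = random.Random(seed)
    ys = [b * x + c * x * x for x in (rng.gauss(0.0, 1.0) for _ in range(n))]
    m = sum(ys) / n
    m2 = sum((y - m) ** 2 for y in ys) / n
    m3 = sum((y - m) ** 3 for y in ys) / n
    return m3 / m2 ** 1.5
```

    Flipping the sign of `c` flips the sign of the skewness, which mirrors the mixture of positively and negatively skewed ratio histograms described above.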

  13. Development of a RF large signal MOSFET model, based on an equivalent circuit, and comparison with the BSIM3v3 compact model.

    NARCIS (Netherlands)

    Vandamme, E.P.; Schreurs, D.; Dinther, van C.H.J.; Badenes, G.; Deferm, L.

    2002-01-01

    The improved RF performance of silicon-based technologies over the years and their potential use in telecommunication applications have increased research in the RF modelling of MOS transistors. Especially for analog circuits, accurate RF small-signal and large-signal transistor models are required.

  14. A Segmented Signal Progression Model for the Modern Streetcar System

    Directory of Open Access Journals (Sweden)

    Baojie Wang

    2015-01-01

    Full Text Available This paper develops a segmented signal progression model for the modern streetcar system. The new method is presented with the following features: (1) the control concept is based on the assumption of only one streetcar line operating along an arterial under a constant headway and no bandwidth demand for streetcar system signal progression; (2) the control unit is defined as a coordinated intersection group associated with several streetcar stations, and the control joints must be streetcar stations; (3) the objective function is built to ensure that the two-way streetcar arrival times fall within the available time of the streetcar phase; (4) the available time of the streetcar phase is determined by timing schemes, intersection structures, track locations, streetcar speeds, and vehicular accelerations; (5) the streetcar running speed is constant within each of the upstream and downstream routes; (6) the streetcar dwell time is preset according to the historical data distribution or charging demand. The proposed method is experimentally examined in the Hexi New City Streetcar Project in Nanjing, China. The experimental results show how streetcar system operation and signal progression affect transit and vehicular traffic. The proposed model presents promising outcomes through the design of segmented signal progression for the streetcar system, in terms of ensuring high streetcar system efficiency and minimizing negative impacts on transit and vehicular traffic.

  15. Time-Reversal Study of the Hemet (CA) Tremor Source

    Science.gov (United States)

    Larmat, C. S.; Johnson, P. A.; Guyer, R. A.

    2010-12-01

    Since its first observation by Nadeau & Dolenc (2005) and Gomberg et al. (2008), tremor along the San Andreas fault system has been thought to be a probe into the frictional state of the deep part of the fault (e.g. Shelly et al., 2007). Tremor is associated with slow, otherwise deep, aseismic slip events that may be triggered by faint signals such as passing waves from remote earthquakes or solid Earth tides. Well-resolved tremor source location is key to constraining frictional models of the fault. However, tremor source location is challenging because of the high-frequency and highly scattered nature of the tremor signal, characterized by the lack of isolated phase arrivals. Time Reversal (TR) methods are emerging as a useful tool for location. The unique requirement is a good velocity model, so that the different time-reversed phases arrive coherently at the source point. We present results of location for a tremor source near the town of Hemet, CA, which was triggered by the 2002 M 7.9 Denali Fault earthquake (Gomberg et al., 2008) and by the 2009 M 6.9 Gulf of California earthquake. We performed TR in a volume model of 88 (N-S) x 70 (W-E) x 60 km (Z) using the full-wave 3D wave-propagation package SPECFEM3D (Komatitsch et al., 2002). The results for the 2009 episode indicate a deep source (at about 22 km) that is about 4 km SW of the fault surface scarp. We perform STA/LTA and correlation analysis in order to obtain independent confirmation of the Hemet tremor source. We gratefully acknowledge the support of the U. S. Department of Energy through the LANL/LDRD Program for this work.

  16. An Adaptive Model for Calculating the Correlation Degree of Multiple Adjacent Signalized Intersections

    Directory of Open Access Journals (Sweden)

    Linhong Wang

    2013-01-01

    Full Text Available As an important component of the urban adaptive traffic control system, the subarea partition algorithm divides the road network into small subareas and then determines the optimal signal control mode for each signalized intersection. The correlation model is the core of the subarea partition algorithm because it quantifies the correlation degree of adjacent signalized intersections and decides whether these intersections can be grouped into one subarea. In most cases, there are more than two intersections in one subarea. However, current research focuses only on the correlation model for two adjacent intersections. The objective of this study is to develop a model which can calculate the correlation degree of multiple intersections adaptively. The cycle lengths, link lengths, number of intersections, and path flow between upstream and downstream coordinated phases were selected as the contributing factors of the correlation model. Their joint impacts on the performance of the coordinated control mode relative to the isolated control mode were further studied using numerical experiments. The paper then proposes a correlation index (CI) as an alternative to relative performance. The relationship between CI and the four contributing factors was established in order to predict the correlation, which determines whether adjacent intersections can be partitioned into one subarea. A value of 0 was set as the threshold of CI: if CI is larger than 0, multiple intersections can be partitioned into one subarea; otherwise, they should be separated. Finally, case studies were conducted in a real-life signalized network to evaluate the performance of the model. The results show that the CI simulates the relative performance well and can be a reliable index for subarea partition.

  17. A Framework for Diagnosing the Out-of-Control Signals in Multivariate Process Using Optimized Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Tai-fu Li

    2013-01-01

    Full Text Available Multivariate statistical process control is the continuation and development of univariate statistical process control. Most multivariate statistical quality control charts are used (in manufacturing and service industries) to determine whether a process is performing as intended or whether there are some unnatural causes of variation, based on an overall statistic. Once the control chart detects out-of-control signals, one difficulty encountered with multivariate control charts is the interpretation of an out-of-control signal. That is, we have to determine whether one variable, or more, or a combination of variables is responsible for the abnormal signal. A novel approach for diagnosing the out-of-control signals in the multivariate process is described in this paper. The proposed methodology uses optimized support vector machines (support vector machine classification based on a genetic algorithm) to recognize a set of subclasses of multivariate abnormal patterns and to identify the variable(s) responsible for the occurrence of the abnormal pattern. Multiple sets of experiments are used to verify this model. The performance of the proposed approach demonstrates that this model can accurately classify the source(s) of out-of-control signals and even outperforms the conventional multivariate control scheme.

  18. Motion Artifact Reduction in Functional Near Infrared Spectroscopy Signals by Autoregressive Moving Average Modeling Based Kalman Filtering

    Directory of Open Access Journals (Sweden)

    Mehdi Amian

    2013-10-01

    Full Text Available Functional near infrared spectroscopy (fNIRS) is a technique that is used for noninvasive measurement of the oxyhemoglobin (HbO2) and deoxyhemoglobin (HHb) concentrations in the brain tissue. Since the ratio of the concentrations of these two agents is correlated with neuronal activity, fNIRS can be used for monitoring and quantifying cortical activity. The portability of fNIRS makes it a good candidate for studies involving subject movement. The fNIRS measurements, however, are sensitive to artifacts generated by subject head motion, which makes fNIRS signals less effective in such applications. In this paper, autoregressive moving average (ARMA) modeling of the fNIRS signal is proposed for a state-space representation of the signal, which is then fed to a Kalman filter for estimating the motionless signal from the motion-corrupted signal. Results are compared to the previously studied autoregressive (AR) model-based approach and show that the ARMA models outperform the AR models. We attribute this to the richer structure of ARMA models, which contain more terms than AR models. We show that the signal-to-noise ratio (SNR) is about 2 dB higher for the ARMA-based method.
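
    The Kalman predict/update cycle at the heart of this approach can be sketched in its simplest scalar form. This is a random-walk state model, not the ARMA state-space of the paper; the process and measurement variances below are illustrative.

```python
def kalman_1d(measurements, q=1e-4, r=0.05, x0=0.0, p0=1.0):
    """Scalar Kalman filter with a random-walk state model.
    q: process-noise variance, r: measurement-noise variance,
    x0/p0: initial state estimate and its variance."""
    x, P = x0, p0
    estimates = []
    for z in measurements:
        P += q                    # predict: state unchanged, uncertainty grows
        K = P / (P + r)           # Kalman gain
        x += K * (z - x)          # update toward the measurement z
        P *= (1.0 - K)            # shrink the posterior variance
        estimates.append(x)
    return estimates
```

    In the paper's setting the scalar random walk is replaced by the ARMA state-space model, so the predict step encodes the signal's own temporal structure rather than a constant state.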

  19. A hardware model of the auditory periphery to transduce acoustic signals into neural activity

    Directory of Open Access Journals (Sweden)

    Takashi Tateno

    2013-11-01

    Full Text Available To improve the performance of cochlear implants, we have integrated a microdevice into a model of the auditory periphery with the goal of creating a microprocessor. We constructed an artificial peripheral auditory system using a hybrid model in which polyvinylidene difluoride was used as a piezoelectric sensor to convert mechanical stimuli into electric signals. To produce frequency selectivity, the slit on a stainless steel base plate was designed such that the local resonance frequency of the membrane over the slit reflected the transfer function. In the acoustic sensor, electric signals were generated based on the piezoelectric effect from local stress in the membrane. The electrodes on the resonating plate produced relatively large electric output signals. The signals were fed into a computer model that mimicked some functions of inner hair cells, inner hair cell–auditory nerve synapses, and auditory nerve fibers. In general, the responses of the model to pure-tone burst and complex stimuli accurately represented the discharge rates of high-spontaneous-rate auditory nerve fibers across a range of frequencies greater than 1 kHz and middle to high sound pressure levels. Thus, the model provides a tool to understand information processing in the peripheral auditory system and a basic design for connecting artificial acoustic sensors to the peripheral auditory nervous system. Finally, we discuss the need for stimulus control with an appropriate model of the auditory periphery based on auditory brainstem responses that were electrically evoked by different temporal pulse patterns with the same pulse number.

  20. Time-dependent source model of the Lusi mud volcano

    Science.gov (United States)

    Shirzaei, M.; Rudolph, M. L.; Manga, M.

    2014-12-01

    The Lusi mud eruption, near Sidoarjo, East Java, Indonesia, began in May 2006 and continues to erupt today. Previous analyses of surface deformation data suggested an exponential decay of the pressure in the mud source, but did not constrain the geometry and evolution of the source(s) from which the erupting mud and fluids ascend. To understand the spatiotemporal evolution of the mud and fluid sources, we apply a time-dependent inversion scheme to a densely populated InSAR time series of the surface deformation at Lusi. The SAR data set includes 50 images acquired on 3 overlapping tracks of the ALOS L-band satellite between May 2006 and April 2011. Following multitemporal analysis of this data set, the obtained surface deformation time series is inverted in a time-dependent framework to solve for the volume changes of distributed point sources in the subsurface. The volume change distribution resulting from this modeling scheme shows two zones of high volume change underneath Lusi at 0.5-1.5 km and 4-5.5 km depth, as well as another shallow zone 7 km to the west of Lusi, underneath the Wunut gas field. The cumulative volume change within the shallow source beneath Lusi is ~2-4 times larger than that of the deep source, whilst the ratio of the Lusi shallow source volume change to that of the Wunut gas field is ~1. This observation and model suggest that the Lusi shallow source played a key role in the eruption process and mud supply, but that additional fluids do ascend from depths >4 km on eruptive timescales.
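The building block of such inversions is the surface displacement of a point pressure source in an elastic half-space. A minimal sketch of the vertical displacement for one point source (the Mogi form, with Poisson's ratio and the volume change chosen purely for illustration, not taken from the study):

```python
import numpy as np

def mogi_uz(r, depth, dV, nu=0.25):
    """Vertical surface displacement of a point pressure source (Mogi model).

    r     : horizontal distance from the source axis (m)
    depth : source depth (m)
    dV    : source volume change (m^3); negative for withdrawal
    nu    : Poisson's ratio of the elastic half-space
    """
    return (1.0 - nu) * dV * depth / (np.pi * (depth**2 + r**2) ** 1.5)

# Illustrative numbers: 10^6 m^3 withdrawn from 1 km depth
r = np.linspace(0.0, 5000.0, 6)
uz = mogi_uz(r, depth=1000.0, dV=-1e6)
print(uz)   # subsidence, strongest on-axis and decaying with distance
```

A distributed-source inversion such as the one described here superposes many of these kernels and solves for the individual volume changes at each epoch of the time series.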

  1. Source localization of rhythmic ictal EEG activity: a study of diagnostic accuracy following STARD criteria.

    Science.gov (United States)

    Beniczky, Sándor; Lantz, Göran; Rosenzweig, Ivana; Åkeson, Per; Pedersen, Birthe; Pinborg, Lars H; Ziebell, Morten; Jespersen, Bo; Fuglsang-Frederiksen, Anders

    2013-10-01

    Although precise identification of the seizure-onset zone is an essential element of presurgical evaluation, source localization of ictal electroencephalography (EEG) signals has received little attention. The aim of our study was to estimate the accuracy of source localization of rhythmic ictal EEG activity using a distributed source model. Source localization of rhythmic ictal scalp EEG activity was performed in 42 consecutive cases fulfilling the inclusion criteria. The study was designed according to the recommendations for studies on diagnostic accuracy (STARD). The initial ictal EEG signals were selected using a standardized method, based on frequency analysis and the voltage distribution of the ictal activity. A distributed source model - local autoregressive average (LAURA) - was used for the source localization. Sensitivity, specificity, and measurement of agreement (kappa) were determined against the reference standard: the consensus conclusion of the multidisciplinary epilepsy surgery team. Predictive values were calculated from the surgical outcomes of the operated patients. To estimate the clinical value of the ictal source analysis, we compared the likelihood ratios of concordant and discordant results. Source localization was performed blinded to the clinical data, and before the surgical decision. The reference standard was available for 33 patients. The ictal source localization had a sensitivity of 70% and a specificity of 76%. The measurement of agreement (kappa) was 0.61, corresponding to substantial agreement (95% confidence interval (CI) 0.38-0.84). Twenty patients underwent resective surgery. The positive predictive value (PPV) for seizure freedom was 92% and the negative predictive value (NPV) was 43%. The likelihood ratio was nine times higher for concordant results than for discordant ones. Source localization of rhythmic ictal activity using a distributed source model (LAURA), applied to ictal EEG signals selected with a standardized method, thus proved feasible and showed substantial agreement with the reference standard.
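All of the accuracy measures quoted in this record follow directly from a 2x2 confusion table. A small sketch (the counts below are invented for illustration; they are not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic-accuracy measures from a 2x2 confusion table."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)                 # sensitivity
    spec = tn / (tn + fp)                 # specificity
    ppv = tp / (tp + fp)                  # positive predictive value
    npv = tn / (tn + fn)                  # negative predictive value
    lr_pos = sens / (1.0 - spec)          # positive likelihood ratio
    po = (tp + tn) / n                    # observed agreement
    pe = ((tp + fn) * (tp + fp) + (tn + fp) * (tn + fn)) / n**2  # chance agreement
    kappa = (po - pe) / (1.0 - pe)        # Cohen's kappa
    return sens, spec, ppv, npv, lr_pos, kappa

# Hypothetical counts: 8 true positives, 4 false positives,
# 2 false negatives, 6 true negatives
print(diagnostic_metrics(tp=8, fp=4, fn=2, tn=6))
```

For these counts the function returns sensitivity 0.8, specificity 0.6 and kappa 0.4; plugging in a study's own table reproduces its reported figures.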

  2. Hierarchic stochastic modelling applied to intracellular Ca(2+) signals.

    Directory of Open Access Journals (Sweden)

    Gregor Moenke

    Important biological processes like cell signalling and gene expression have noisy components and are at the same time very complex. Mathematical analysis of such systems has often been limited to the study of isolated subsystems, or approximations are used that are difficult to justify. Here we extend a recently published method (Thurley and Falcke, PNAS 2011) which is formulated in terms of observable system configurations instead of molecular transitions. This reduces the number of system states by several orders of magnitude and avoids fitting of kinetic parameters. The method is applied to Ca(2+) signalling. Ca(2+) is a ubiquitous second messenger transmitting information by stochastic sequences of concentration spikes, which arise by coupling of subcellular Ca(2+) release events (puffs). We derive analytical expressions for a mechanistic Ca(2+) model, based on recent data from live cell imaging, and calculate Ca(2+) spike statistics as functions of cellular parameters like stimulus strength or the number of Ca(2+) channels. The new approach substantiates a generic Ca(2+) model, which is a very convenient way to simulate Ca(2+) spike sequences with correct spiking statistics.
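The kind of spike statistics referred to here can be illustrated with a minimal renewal-process simulation: each interspike interval is an absolute refractory period plus a stochastic waiting time (taken as exponential below), for which the standard deviation grows linearly with the mean. The parameters are illustrative, not fitted to any data:

```python
import numpy as np

rng = np.random.default_rng(42)

t_min = 3.0      # refractory period after each spike (s), assumed
tau = 10.0       # mean stochastic waiting time (s), set by stimulus strength
n_spikes = 20000

# Interspike intervals of the renewal model
isi = t_min + rng.exponential(tau, n_spikes)

mean_isi = isi.mean()
sd_isi = isi.std()
print(mean_isi, sd_isi)   # linear moment relation: sd ~ mean - t_min
```

Such a simulation reproduces the linear relation between interspike-interval mean and standard deviation that is a hallmark of stochastic Ca(2+) spiking; the analytical framework in the record above derives the corresponding statistics from mechanistic channel models rather than assuming the waiting-time distribution.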

  3. Air quality dispersion models from energy sources

    International Nuclear Information System (INIS)

    Lazarevska, Ana

    1996-01-01

    Along with the continuing development of new air quality models covering more complex problems, the Clean Air Act, legislated by the US Congress, encouraged consistency and standardization of air quality model applications. As a result, the Guidelines on Air Quality Models were published, and these are regularly reviewed by the Office of Air Quality Planning and Standards, EPA. The guidelines provide a basis for estimating the air quality concentrations used in assessing control strategies as well as in defining emission limits. This paper presents a review and analysis of recent versions of the models: Simple Terrain Stationary Source Model; Complex Terrain Dispersion Model; Ozone, Carbon Monoxide and Nitrogen Dioxide Models; Long Range Transport Model; and Other Phenomena Models: Fugitive Dust/Fugitive Emissions, Particulate Matter, Lead, Air Pathway Analyses - Air Toxics as well as Hazardous Waste. 8 refs., 4 tabs., 2 ills

  4. Application of hierarchical Bayesian unmixing models in river sediment source apportionment

    Science.gov (United States)

    Blake, Will; Smith, Hugh; Navas, Ana; Bodé, Samuel; Goddard, Rupert; Zou Kuzyk, Zou; Lennard, Amy; Lobb, David; Owens, Phil; Palazon, Leticia; Petticrew, Ellen; Gaspar, Leticia; Stock, Brian; Boeckx, Pacsal; Semmens, Brice

    2016-04-01

    Fingerprinting and unmixing concepts are used widely across environmental disciplines for forensic evaluation of pollutant sources. In aquatic and marine systems, this includes tracking the source of organic and inorganic pollutants in water and linking problem sediment to soil erosion and land use sources. It is, however, the particular complexity of ecological systems that has driven creation of the most sophisticated mixing models, primarily to (i) evaluate diet composition in complex ecological food webs, (ii) inform population structure and (iii) explore animal movement. In the context of the new hierarchical Bayesian unmixing model, MixSIAR, developed to characterise intra-population niche variation in ecological systems, we evaluate the linkage between ecological 'prey' and 'consumer' concepts and river basin sediment 'source' and sediment 'mixture' concepts to exemplify the value of ecological modelling tools to river basin science. Recent studies have outlined advantages presented by Bayesian unmixing approaches in handling complex source and mixture datasets while dealing appropriately with uncertainty in parameter probability distributions. MixSIAR is unique in that it allows individual fixed and random effects associated with mixture hierarchy, i.e. factors that might exert an influence on model outcome for mixture groups, to be explored within the source-receptor framework. This offers new and powerful ways of interpreting river basin apportionment data. In this contribution, key components of the model are evaluated in the context of common experimental designs for sediment fingerprinting studies, namely simple, nested and distributed catchment sampling programmes. Illustrative examples using geochemical and compound specific stable isotope datasets are presented and used to discuss best practice with specific attention to (1) the tracer selection process, (2) incorporation of fixed effects relating to sample timeframe and sediment type in the modelling

  5. Considering a point-source in a regional air pollution model

    Energy Technology Data Exchange (ETDEWEB)

    Lipphardt, M.

    1997-06-19

    This thesis deals with the development and validation of a point-source plume model, with the aim of refining the representation of intensive point-source emissions in regional-scale air quality models. The plume is modelled at four levels of increasing complexity, from a modified Gaussian plume model to the Freiberg and Lusis ring model. Plume elevation is determined by Netterville's plume rise model, using turbulence and atmospheric stability parameters. A model for the effect of fine-scale turbulence on the mean concentrations in the plume is developed and integrated into the ring model. A comparison between results with and without micro-mixing shows the importance of this effect in a chemically reactive plume. The plume model is integrated into the Eulerian transport/chemistry model AIRQUAL through an interface between AIRQUAL and the sub-model, and interactions between the two scales are described. A simulation of an air pollution episode over Paris is carried out, showing that the use of such a sub-scale model improves the accuracy of the air quality model.
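The Gaussian plume at the base of this model hierarchy has a closed form. A minimal textbook sketch with ground reflection follows; it is not the thesis's modified formulation, and the dispersion coefficients, which would normally come from stability-class correlations at a given downwind distance, are placeholders:

```python
import numpy as np

def plume_conc(y, z, Q, u, sig_y, sig_z, H):
    """Gaussian plume concentration with ground reflection.

    y, z         : crosswind and vertical coordinates (m)
    Q            : source strength (g/s)
    u            : wind speed (m/s)
    sig_y, sig_z : dispersion coefficients at the downwind distance (m)
    H            : effective stack height (m)
    """
    lateral = np.exp(-y**2 / (2 * sig_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sig_z**2))
                + np.exp(-(z + H)**2 / (2 * sig_z**2)))   # image-source term
    return Q / (2 * np.pi * u * sig_y * sig_z) * lateral * vertical

# Ground-level concentration under the plume centreline (illustrative values)
c0 = plume_conc(y=0.0, z=0.0, Q=100.0, u=5.0, sig_y=80.0, sig_z=40.0, H=50.0)
print(c0)   # g/m^3
```

The image-source term in `vertical` enforces zero flux through the ground; the higher levels of the hierarchy replace this analytic profile with ring-discretised chemistry.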

  6. ECG Signal Processing, Classification and Interpretation A Comprehensive Framework of Computational Intelligence

    CERN Document Server

    Pedrycz, Witold

    2012-01-01

    Electrocardiogram (ECG) signals are among the most important sources of diagnostic information in healthcare, so improvements in their analysis may also have telling consequences. Both the underlying signal technology and a burgeoning variety of algorithms and systems developments have proved successful targets for recent rapid advances in research. ECG Signal Processing, Classification and Interpretation shows how the various paradigms of Computational Intelligence, employed either singly or in combination, can produce an effective structure for obtaining often vital information from ECG signals. Neural networks do well at capturing the nonlinear nature of the signals, information granules realized as fuzzy sets help to confer interpretability on the data, and evolutionary optimization may be critical in supporting the structural development of ECG classifiers and models of ECG signals. The contributors address concepts, methodology, algorithms, and case studies and applications exploiting the paradigm of Computational Intelligence.

  7. IFN signaling: how a non-canonical model led to the development of IFN mimetics

    Directory of Open Access Journals (Sweden)

    Howard M Johnson

    2013-07-01

    The classical model of cytokine signaling dominates our view of specific gene activation by cytokines such as the interferons (IFNs). The importance of the model extends beyond cytokines and applies to hormones such as growth hormone (GH) and insulin, and growth factors such as epidermal growth factor (EGF) and fibroblast growth factor (FGF). According to this model, ligand activates the cell via interaction with the extracellular domain of the receptor. This results in activation of receptor or receptor-associated tyrosine kinases, primarily of the Janus kinase (JAK) family, and in phosphorylation and dimerization of the STAT transcription factors, which dissociate from the receptor cytoplasmic domain and translocate to the nucleus. This view ascribes no further role to the ligand, JAK kinase, or receptor in either specific gene activation or the associated epigenetic events; the presence of dimeric STATs in the nucleus essentially explains it all. Our studies have resulted in the development of a non-canonical, more complex model of IFNγ signaling that is akin to steroid hormone/steroid receptor signaling. We have shown that ligand, receptor, activated JAKs and STATs are associated with specific gene activation, where the receptor subunit IFNGR1 functions as a co-transcription factor and the JAKs are involved in the associated epigenetic events. We found that the type I IFN system functions similarly. The fact that the GH receptor, insulin receptor, EGF receptor, and FGF receptor undergo nuclear translocation upon ligand binding suggests that they may also function similarly. The steroid hormone/steroid receptor nature of type I and II IFN signaling provides insight into the specificity of signaling by members of cytokine families. The non-canonical model could also provide a better understanding of more complex cytokine families such as those of IL-2 and IL-12, whose members often use the same JAKs and STATs, but also have different functions and

  8. Comparison of receptor models for source apportionment of volatile organic compounds in Beijing, China

    International Nuclear Information System (INIS)

    Song Yu; Dai Wei; Shao Min; Liu Ying; Lu Sihua; Kuster, William; Goldan, Paul

    2008-01-01

    Identifying the sources of volatile organic compounds (VOCs) is key to reducing ground-level ozone and secondary organic aerosols (SOAs). Several receptor models have been developed to apportion sources, but an intercomparison of these models had not been performed for VOCs in China. In the present study, we compared VOC sources based on chemical mass balance (CMB), UNMIX, and positive matrix factorization (PMF) models. Gasoline-related sources, petrochemical production, and liquefied petroleum gas (LPG) were identified by all three models as the major contributors, with UNMIX and PMF producing quite similar results. The contributions of gasoline-related sources and LPG estimated by the CMB model were higher, and petrochemical emissions were lower, than in the UNMIX and PMF results, possibly because the VOC profiles used in the CMB model were for fresh emissions while the profiles extracted from ambient measurements by the two factor analysis models were 'aged'. - VOC sources were similar across the three models, with CMB showing a higher estimate for vehicles

  9. Comparison of receptor models for source apportionment of volatile organic compounds in Beijing, China

    Energy Technology Data Exchange (ETDEWEB)

    Song Yu; Dai Wei [Department of Environmental Sciences, Peking University, Beijing 100871 (China); Shao Min [State Joint Key Laboratory of Environmental Simulation and Pollution Control, Peking University, Beijing 100871 (China)], E-mail: mshao@pku.edu.cn; Liu Ying; Lu Sihua [State Joint Key Laboratory of Environmental Simulation and Pollution Control, Peking University, Beijing 100871 (China); Kuster, William; Goldan, Paul [Chemical Sciences Division, NOAA Earth System Research Laboratory, Boulder, CO 80305 (United States)

    2008-11-15

    Identifying the sources of volatile organic compounds (VOCs) is key to reducing ground-level ozone and secondary organic aerosols (SOAs). Several receptor models have been developed to apportion sources, but an intercomparison of these models had not been performed for VOCs in China. In the present study, we compared VOC sources based on chemical mass balance (CMB), UNMIX, and positive matrix factorization (PMF) models. Gasoline-related sources, petrochemical production, and liquefied petroleum gas (LPG) were identified by all three models as the major contributors, with UNMIX and PMF producing quite similar results. The contributions of gasoline-related sources and LPG estimated by the CMB model were higher, and petrochemical emissions were lower, than in the UNMIX and PMF results, possibly because the VOC profiles used in the CMB model were for fresh emissions while the profiles extracted from ambient measurements by the two factor analysis models were 'aged'. - VOC sources were similar across the three models, with CMB showing a higher estimate for vehicles.
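At its core, the chemical mass balance step in these comparisons is a least-squares inversion of ambient concentrations against fixed source profiles. A toy numpy version (profiles, source labels and contributions are invented for illustration; operational CMB additionally weights by measurement uncertainties):

```python
import numpy as np

# Columns are source profiles (species mass fractions); rows are species.
# All numbers invented for illustration.
profiles = np.array([
    [0.40, 0.05, 0.10],   # species 1
    [0.30, 0.10, 0.05],   # species 2
    [0.10, 0.50, 0.05],   # species 3
    [0.10, 0.25, 0.10],   # species 4
    [0.10, 0.10, 0.70],   # species 5
])  # hypothetical sources: gasoline-related, petrochemical, LPG

true_contrib = np.array([50.0, 30.0, 20.0])   # ug/m3 per source
ambient = profiles @ true_contrib             # synthetic ambient sample

# CMB solution: ordinary least squares (effective-variance weighting omitted)
est, *_ = np.linalg.lstsq(profiles, ambient, rcond=None)
print(est)   # recovers the contributions in this noise-free example
```

UNMIX and PMF differ in that they estimate the profile matrix itself from many ambient samples, which is why 'aged' ambient profiles can pull their apportionment away from a fresh-emission CMB solution.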

  10. Estimating the seismotelluric current required for observable electromagnetic ground signals

    Directory of Open Access Journals (Sweden)

    J. Bortnik

    2010-08-01

    We use a relatively simple model of an underground current source co-located with the earthquake hypocenter to estimate the magnitude of the seismotelluric current required to produce observable ground signatures. The Alum Rock earthquake of 31 October 2007 is used as an archetype of a typical California earthquake, and the effects of varying the ground conductivity and the length of the current element are examined. Results show that for an observed 30 nT pulse at 1 Hz, the expected seismotelluric current magnitudes fall in the range ~10–100 kA. By setting the detectability threshold to 1 pT, we show that even when large values of ground conductivity are assumed, magnetic signals are readily detectable within a range of 30 km from the epicenter. When typical values of ground conductivity are assumed, the minimum current required to produce an observable signal within a 30 km range was found to be ~1 kA, which is a surprisingly low value. Furthermore, we show that deep nulls in signal power develop in the non-cardinal directions relative to the orientation of the source current, indicating that a magnetometer station located in those regions may not observe a signal even though it is well within the detectable range. This result underscores the importance of using a network of magnetometers when searching for preseismic electromagnetic signals.
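An order-of-magnitude cross-check of these current scales can be made with free-space magnetostatics. This deliberately ignores attenuation in the conducting ground, which the record's full model treats and which dominates the current requirement; the element length and distance below are assumed:

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability (T*m/A)

def b_infinite_wire(current, r):
    """Field of an infinitely long line current at distance r (free space)."""
    return MU0 * current / (2 * np.pi * r)

def b_current_element(current, length, r):
    """Far-field magnitude scale of a finite current element, valid for r >> length."""
    return MU0 * current * length / (4 * np.pi * r**2)

I = 1e3       # 1 kA source current
r = 30e3      # 30 km epicentral distance
L = 10e3      # assumed 10 km element length
print(b_infinite_wire(I, r), b_current_element(I, L, r))   # tesla
```

Both free-space estimates (a few nT and ~1 nT respectively) lie far above the 1 pT detection threshold, which illustrates that it is conduction losses in the ground, not source geometry, that push the required current toward the kiloampere scale found in the full model.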

  11. Unique Signatures of Population III Stars in the Global 21-cm Signal

    Science.gov (United States)

    Mirocha, Jordan; Mebane, Richard H.; Furlanetto, Steven R.; Singal, Krishma; Trinh, Donald

    2018-05-01

    We investigate the effects of Population III stars on the sky-averaged 21-cm background radiation, which traces the collective emission from all sources of ultraviolet and X-ray photons before reionization is complete. While UV photons from Pop III stars can in principle shift the onset of radiative coupling of the 21-cm transition - and potentially reionization - to early times, we find that the remnants of Pop III stars are likely to have a more discernible impact on the 21-cm signal than Pop III stars themselves. The X-rays from such sources preferentially heat the IGM at early times, which elongates the epoch of reheating and results in a more gradual transition from an absorption signal to emission. This gradual heating gives rise to broad, asymmetric wings in the absorption signal, which stand in contrast to the relatively sharp, symmetric signals that arise in models treating Pop II sources only. A stronger signature of Pop III, in which the position of the absorption minimum becomes inconsistent with Pop II-only models, requires extreme star-forming events that may not be physically plausible, lending further credence to predictions of relatively high frequency absorption troughs, νmin ˜ 100 MHz. As a result, though the trough location alone may not be enough to indicate the presence of Pop III, the asymmetric wings should arise even if only a few Pop III stars form in each halo before the transition to Pop II star formation occurs, provided that the Pop III IMF is sufficiently top-heavy and at least some Pop III stars form in binaries.

  12. Receptor modeling for source apportionment of polycyclic aromatic hydrocarbons in urban atmosphere.

    Science.gov (United States)

    Singh, Kunwar P; Malik, Amrita; Kumar, Ranjan; Saxena, Puneet; Sinha, Sarita

    2008-01-01

    This study reports source apportionment of polycyclic aromatic hydrocarbons (PAHs) in particulate depositions on vegetation foliage near a highway in the urban environment of Lucknow city (India), using the principal component analysis/absolute principal component scores (PCA/APCS) receptor modeling approach. The multivariate method enables identification of the major PAH sources along with their quantitative contributions to individual PAHs. The PCA identified three major sources of PAHs: combustion, vehicular emissions, and diesel-based activities. The PCA/APCS receptor modeling approach revealed that the combustion sources (natural gas, wood, coal/coke, biomass) contributed 19-97% of the various PAHs, vehicular emissions 0-70%, diesel-based sources 0-81%, and other miscellaneous sources 0-20%. The contributions of major pyrolytic and petrogenic sources to the total PAHs were 56% and 42%, respectively. Further, the combustion-related sources contribute the major fraction of the carcinogenic PAHs in the study area. The high correlation coefficients (R2 > 0.75 for most PAHs) between the measured and predicted concentrations of PAHs support the applicability of the PCA/APCS receptor modeling approach for estimating source contributions to PAHs in particulates.
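The first stage of the PCA/APCS approach, identifying how many major sources underlie the correlation structure of the species data, can be sketched on synthetic three-source data as follows (the subsequent APCS regression stage, which yields the quantitative contributions, is omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(7)

n_samples, n_species, n_sources = 300, 10, 3
contrib = rng.uniform(0.0, 1.0, size=(n_samples, n_sources))   # source strengths
profiles = rng.uniform(0.0, 1.0, size=(n_sources, n_species))  # species profiles
X = contrib @ profiles + rng.normal(0.0, 0.01, (n_samples, n_species))

# Standardize, then eigen-decompose the correlation matrix
Z = (X - X.mean(0)) / X.std(0)
eigvals = np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False))[::-1]   # descending
explained = eigvals / eigvals.sum()
print(np.cumsum(explained)[:4])   # three components capture nearly all variance
```

With only three underlying sources, the first three eigenvalues dominate; in a real application the retained components are then rotated and converted to absolute scores before regressing the measured concentrations on them.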

  13. Source Term Model for Fine Particle Resuspension from Indoor Surfaces

    National Research Council Canada - National Science Library

    Kim, Yoojeong; Gidwani, Ashok; Sippola, Mark; Sohn, Chang W

    2008-01-01

    This Phase I effort developed a source term model for particle resuspension from indoor surfaces to be used as a source term boundary condition for CFD simulation of particle transport and dispersion in a building...

  14. Quorum Quenching Revisited—From Signal Decays to Signalling Confusion

    Directory of Open Access Journals (Sweden)

    Kok-Gan Chan

    2012-04-01

    In a polymicrobial community, while some bacteria are communicating with neighbouring cells (quorum sensing), others are interrupting the communication (quorum quenching), creating a constant arms race in intercellular communication. In the past decade, numerous quorum quenching enzymes have been found and were initially thought to inactivate the signalling molecules. Though this is widely accepted, the actual roles of these quorum quenching enzymes are now being uncovered. Recent evidence extends the role of quorum quenching to detoxification or to the metabolism of signalling molecules as a food and energy source; this includes "signalling confusion", a term coined in this paper to refer to the phenomenon of non-destructive modification of signalling molecules. While quorum quenching has been explored as a novel anti-infective therapy targeting quorum sensing, evidence is beginning to show the development of resistance against quorum quenching.

  15. Open source modeling and optimization tools for planning

    Energy Technology Data Exchange (ETDEWEB)

    Peles, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-02-10

    The existing tools and software used for planning and analysis in California are either expensive, difficult to use, or not generally accessible to a large number of participants. These limitations restrict the availability of participants for larger-scale energy and grid studies in the state. The proposed initiative would build upon federal and state investments in open source software, and create and improve open source tools for use in state planning and analysis activities. Computational analysis and simulation frameworks in development at national labs and universities can be brought forward to complement existing tools. An open source platform would provide a path for novel techniques and strategies to be brought into the larger community and reviewed by a broad set of stakeholders.

  16. Acoustic Localization with Infrasonic Signals

    Science.gov (United States)

    Threatt, Arnesha; Elbing, Brian

    2015-11-01

    Numerous geophysical and anthropogenic events emit infrasonic frequencies (<20 Hz), including volcanoes, hurricanes, wind turbines and tornadoes. These sounds, which cannot be heard by the human ear, can be detected from large distances (in excess of 100 miles) due to low frequency acoustic signals having a very low decay rate in the atmosphere. Thus infrasound could be used for long-range, passive monitoring and detection of these events. An array of microphones separated by known distances can be used to locate a given source, which is known as acoustic localization. However, acoustic localization with infrasound is particularly challenging due to contamination from other signals, sensitivity to wind noise and producing a trusted source for system development. The objective of the current work is to create an infrasonic source using a propane torch wand or a subwoofer and locate the source using multiple infrasonic microphones. This presentation will present preliminary results from various microphone configurations used to locate the source.
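The acoustic localization described in this record is a time-difference-of-arrival (TDOA) problem. A minimal 2-D sketch with a brute-force grid search (microphone layout, source position and a constant speed of sound are all assumed for illustration; real infrasound work must also contend with wind noise and contamination):

```python
import numpy as np

C_SOUND = 343.0   # speed of sound in air (m/s), assumed constant

# Microphone positions (m) and a hypothetical source location
mics = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
src = np.array([37.0, 62.0])

# Simulated arrival times; only their differences are observable in practice
t_arr = np.linalg.norm(mics - src, axis=1) / C_SOUND

def tdoa_misfit(p):
    """Sum of squared TDOA residuals (relative to microphone 0) at point p."""
    d = np.linalg.norm(mics - p, axis=1) / C_SOUND
    return np.sum(((d - d[0]) - (t_arr - t_arr[0])) ** 2)

# Brute-force grid search over the array footprint (0.5 m resolution)
grid = np.linspace(0.0, 100.0, 201)
best = min(((tdoa_misfit(np.array([x, y])), x, y)
            for x in grid for y in grid))
print(best[1], best[2])   # close to the true source position
```

Gradient-based or hyperbolic-intersection solvers replace the grid search in practice, but the misfit function is the same: match the measured arrival-time differences against those predicted from each candidate location.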

  17. An Equivalent Source Method for Modelling the Global Lithospheric Magnetic Field

    DEFF Research Database (Denmark)

    Kother, Livia Kathleen; Hammer, Magnus Danel; Finlay, Chris

    2014-01-01

    We present a new technique for modelling the global lithospheric magnetic field at Earth's surface based on the estimation of equivalent potential field sources. As a demonstration we show an application to magnetic field measurements made by the CHAMP satellite during the period 2009-2010 when...... are also employed to minimize the influence of the ionospheric field. The model for the remaining lithospheric magnetic field consists of magnetic point sources (monopoles) arranged in an icosahedron grid. The corresponding source values are estimated using an iteratively reweighted least squares algorithm...... in the CHAOS-4 and MF7 models using more conventional spherical harmonic based approaches. Advantages of the equivalent source method include its local nature, allowing e.g. for regional grid refinement, and the ease of transforming to spherical harmonics when needed. Future applications will make use of Swarm...
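The iteratively reweighted least squares step mentioned in this record can be sketched on a generic linear source-estimation problem. The design matrix and source values below are random stand-ins, not a geomagnetic forward model; the point is how Huber-style reweighting suppresses outlying observations that would bias a plain least-squares fit:

```python
import numpy as np

rng = np.random.default_rng(1)

n_obs, n_src = 200, 5
A = rng.normal(size=(n_obs, n_src))        # stand-in for the forward operator
m_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
y = A @ m_true + rng.normal(0, 0.1, n_obs)
y[:10] += 20.0                             # gross outliers (e.g. unmodelled signal)

def irls(A, y, delta=0.3, iters=20):
    """Iteratively reweighted least squares with Huber weights."""
    m = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(iters):
        r = y - A @ m
        w = np.where(np.abs(r) <= delta, 1.0, delta / np.abs(r))  # Huber weights
        sw = np.sqrt(w)
        m = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]
    return m

m_ols = np.linalg.lstsq(A, y, rcond=None)[0]
m_irls = irls(A, y)
print(np.linalg.norm(m_ols - m_true), np.linalg.norm(m_irls - m_true))
```

The robust solution sits much closer to the true source values than ordinary least squares, which is dragged toward the contaminated observations; in the lithospheric-field setting the same mechanism downweights data affected by residual external fields.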

  18. Conditional Random Fields for Morphological Analysis of Wireless ECG Signals

    Science.gov (United States)

    Natarajan, Annamalai; Gaiser, Edward; Angarita, Gustavo; Malison, Robert; Ganesan, Deepak; Marlin, Benjamin

    2015-01-01

    Thanks to advances in mobile sensing technologies, it has recently become practical to deploy wireless electrocardiograph sensors for continuous recording of ECG signals. This capability has diverse applications in the study of human health and behavior, but to realize its full potential, new computational tools are required to effectively deal with the uncertainty that results from the noisy and highly non-stationary signals collected using these devices. In this work, we present a novel approach to the problem of extracting the morphological structure of ECG signals based on the use of dynamically structured conditional random field (CRF) models. We apply this framework to the problem of extracting morphological structure from wireless ECG sensor data collected in a lab-based study of habituated cocaine users. Our results show that the proposed CRF-based approach significantly outperforms independent prediction models using the same features, as well as a widely cited open source toolkit. PMID:26726321

  19. The mathematical theory of signal processing and compression-designs

    Science.gov (United States)

    Feria, Erlan H.

    2006-05-01

    The mathematical theory of signal processing, named processor coding, will be shown to arise inherently as the computational-time dual of Shannon's mathematical theory of communication, which is also known as source coding. Source coding is concerned with signal-source memory-space compression, while processor coding deals with signal-processor computational-time compression. Their combination is named compression-designs, referred to as Conde for short. A compelling and pedagogically appealing diagram will be discussed, highlighting Conde's remarkably successful application to real-world knowledge-aided (KA) airborne moving target indicator (AMTI) radar.

  20. Developing a Successful Open Source Training Model

    Directory of Open Access Journals (Sweden)

    Belinda Lopez

    2010-01-01

    Training programs for open source software provide a tangible, and sellable, product. A successful training program not only builds revenue, it also adds to the overall body of knowledge available for the open source project. By gathering best practices and taking advantage of the collective expertise within a community, it may be possible for a business to partner with an open source project to build a curriculum that promotes the project and supports the needs of the company's training customers. This article describes the initial approach used by Canonical, the commercial sponsor of the Ubuntu Linux operating system, to engage the community in the creation of its training offerings. We then discuss alternate curriculum creation models and some of the conditions that are necessary for successful collaboration between creators of existing documentation and commercial training providers.

  1. Custom chipset and compact module design for a 75–110 GHz laboratory signal source

    International Nuclear Information System (INIS)

    Morgan, Matthew A; Boyd, Tod A; Castro, Jason J

    2016-01-01

    We report on the development and characterization of a compact, full-waveguide bandwidth (WR-10) signal source for general-purpose testing of mm-wave components. The monolithic microwave integrated circuit (MMIC) based multichip module is designed for compactness and ease-of-use, especially in size-constrained test sets such as a wafer probe station. It takes as input a cm-wave continuous-wave (CW) reference and provides a factor of three frequency multiplication as well as amplification, output power adjustment, and in situ output power monitoring. It utilizes a number of custom MMIC chips such as a Schottky-diode limiter and a broadband mm-wave detector, both designed explicitly for this module, as well as custom millimeter-wave multipliers and amplifiers reported in previous papers. (paper)

  2. Model of the Sgr B2 radio source

    International Nuclear Information System (INIS)

    Gosachinskij, I.V.; Khersonskij, V.K.

    1981-01-01

    A dynamical model of the gas cloud around the radio source Sagittarius B2 is suggested. The model describes the kinematic features of the gas in this source: contraction of the core and rotation of the envelope. The stability of the cloud at the initial stage is supported by turbulent motion of the gas, whose energy dissipates due to magnetic viscosity. This process occurs more rapidly in the dense core, so the core begins to collapse while the envelope remains stable. The parameters of the primary cloud and some parameters (mass, density and size) of the collapse are calculated. The conditions in the core at the moment of its fragmentation into masses of stellar order are established [ru]

  3. Signal predictions for a proposed fast neutron interrogation method

    International Nuclear Information System (INIS)

    Sale, K.E.

    1992-12-01

    We have applied the Monte Carlo radiation transport code COG to assess the utility of a proposed explosives detection scheme based on neutron emission. In this scheme a pulsed neutron beam is generated by an approximately seven MeV deuteron beam incident on a thick Be target. A scintillation detector operating in current mode measures the neutrons transmitted through the object as a function of time. The flight time of unscattered neutrons from the source to the detector is simply related to the neutron energy. This information, along with neutron cross-section excitation functions, is used to infer the densities of H, C, N and O in the volume sampled. The code we have chosen enables us to create very detailed and realistic models of the geometrical configuration of the system, the neutron source, and the detector response. By calculating the signals that will be observed for several configurations and compositions of the interrogated object, we can investigate and begin to understand how a system that could actually be fielded will perform. Using this modeling capability, many design questions can be addressed early on, with substantial savings in time and cost and with improvements in performance. We will present our signal predictions for simple single-element test cases and for explosive compositions. From these studies it is clear that the interpretation of the signals from such an explosives identification system will pose a substantial challenge
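The flight-time/energy relation that underpins this scheme is simple in the nonrelativistic limit, which is accurate to about a percent at these energies. A small sketch (the 3 m flight path is an assumed illustrative value):

```python
import numpy as np

M_N = 1.674927e-27    # neutron mass (kg)
MEV = 1.602177e-13    # joules per MeV

def tof_from_energy(e_mev, path_m):
    """Flight time (s) of a neutron of kinetic energy e_mev over path_m, nonrelativistic."""
    v = np.sqrt(2.0 * e_mev * MEV / M_N)
    return path_m / v

def energy_from_tof(t_s, path_m):
    """Invert: kinetic energy (MeV) from a measured flight time, nonrelativistic."""
    v = path_m / t_s
    return 0.5 * M_N * v**2 / MEV

t = tof_from_energy(2.0, path_m=3.0)   # roughly 150 ns for a 2 MeV neutron over 3 m
print(t, energy_from_tof(t, 3.0))
```

Binning the detector current by arrival time therefore amounts to binning the unscattered flux by energy, which is what allows the elemental cross-section resonances to be read off the transmission spectrum.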

  4. Open Sourcing Social Change: Inside the Constellation Model

    OpenAIRE

    Tonya Surman; Mark Surman

    2008-01-01

    The constellation model was developed by and for the Canadian Partnership for Children's Health and the Environment. The model offers an innovative approach to organizing collaborative efforts in the social mission sector and shares various elements of the open source model. It emphasizes self-organizing and concrete action within a network of partner organizations working on a common issue. Constellations are self-organizing action teams that operate within the broader strategic vision of a ...

  5. Effects of Host-rock Fracturing on Elastic-deformation Source Models of Volcano Deflation.

    Science.gov (United States)

    Holohan, Eoghan P; Sudhaus, Henriette; Walter, Thomas R; Schöpfer, Martin P J; Walsh, John J

    2017-09-08

    Volcanoes commonly inflate or deflate during episodes of unrest or eruption. Continuum mechanics models that assume linear elastic deformation of the Earth's crust are routinely used to invert the observed ground motions. The source(s) of deformation in such models are generally interpreted in terms of magma bodies or pathways, and thus form a basis for hazard assessment and mitigation. Using discontinuum mechanics models, we show how host-rock fracturing (i.e. non-elastic deformation) during drainage of a magma body can progressively change the shape and depth of an elastic-deformation source. We argue that this effect explains the marked spatio-temporal changes in source model attributes inferred for the March-April 2007 eruption of Piton de la Fournaise volcano, La Reunion. We find that pronounced deflation-related host-rock fracturing can: (1) yield inclined source model geometries for a horizontal magma body; (2) cause significant upward migration of an elastic-deformation source, leading to underestimation of the true magma body depth and potentially to a misinterpretation of ascending magma; and (3) at least partly explain underestimation by elastic-deformation sources of changes in sub-surface magma volume.

  6. Source apportionment of PM2.5 in North India using source-oriented air quality models

    International Nuclear Information System (INIS)

    Guo, Hao; Kota, Sri Harsha; Sahu, Shovan Kumar; Hu, Jianlin; Ying, Qi; Gao, Aifang; Zhang, Hongliang

    2017-01-01

    In recent years, severe pollution events have been observed frequently in India, especially in its capital, New Delhi. However, limited studies have been conducted to understand the sources of high pollutant concentrations for designing effective control strategies. In this work, source-oriented versions of the Community Multi-scale Air Quality (CMAQ) model with the Emissions Database for Global Atmospheric Research (EDGAR) were applied to quantify the contributions of eight source types (energy, industry, residential, on-road, off-road, agriculture, open burning and dust) to fine particulate matter (PM2.5) and its components, including primary PM (PPM) and secondary inorganic aerosol (SIA), i.e. sulfate, nitrate and ammonium ions, in Delhi and three surrounding cities, Chandigarh, Lucknow and Jaipur, in 2015. PPM mass is dominated by industry and residential activities (>60%). The energy (∼39%) and industry (∼45%) sectors contribute significantly to PPM south of Delhi, where it reaches a maximum of 200 μg/m³ during winter. Unlike PPM, SIA concentrations from different sources are more heterogeneous. High SIA concentrations (∼25 μg/m³) in south Delhi and central Uttar Pradesh were mainly attributed to the energy, industry and residential sectors. Agriculture is more important for SIA than for PPM, and the contributions of on-road and open burning to SIA are also higher than to PPM. The residential sector contributes most to total PM2.5 (∼80 μg/m³), followed by industry (∼70 μg/m³) in North India. Energy and agriculture contribute ∼25 μg/m³ and ∼16 μg/m³ to total PM2.5, while SOA contributes <5 μg/m³. In Delhi, industry and residential activities contribute 80% of total PM2.5. - Highlights: • Sources of PM2.5 in North India were quantified by source-oriented CMAQ. • Industrial/residential activities are the dominant sources (60–70%) of PPM. • Energy/agriculture are the most important sources (30–40%) for SIA. • Strong seasonal

  7. Underwater Cylindrical Object Detection Using the Spectral Features of Active Sonar Signals with Logistic Regression Models

    Directory of Open Access Journals (Sweden)

    Yoojeong Seo

    2018-01-01

    Full Text Available The issue of detecting objects lying on the sea floor is significant in various fields, both civilian and military. The objective of this study is to investigate a logistic regression model that discriminates targets from clutter, and to verify the possibility of applying a model trained on simulated data, generated by a mathematical model, to real experimental data, since it is not easy to obtain sufficient data in the underwater field. In the first stage of this study, where the clutter signal energy is so strong that detection of a target is difficult, the logistic regression model is employed to distinguish the strong clutter signal from the target signal. Previous studies have found that larger clutter energy causes false detections even for the various existing detection schemes. For this reason, the discrete Fourier transform (DFT) magnitude spectrum of acoustic signals received by active sonar is used to train the model to distinguish whether the received signal contains a target signal or not. The goodness of fit of the model is verified in terms of the receiver operating characteristic (ROC), the area under the ROC curve (AUC), and the classification table. The detection performance of the proposed model is evaluated in terms of detection rate according to the target-to-clutter ratio (TCR). Furthermore, the real experimental data are employed to test the proposed approach. When using the experimental data to test the model, the logistic regression model is trained on the simulated data that are generated based on the mathematical model for the backscattering of the cylindrical object. The mathematical model is developed according to the size of the cylinder used in the experiment.
Since the information on the experimental environment including the sound speed, the sediment type and such is not available, once simulated data are generated under various conditions, valid simulated data are selected using 70% of the
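
    The core idea of the record above (fitting a logistic regression classifier to DFT magnitude spectra to separate target-bearing returns from clutter) can be sketched with synthetic data; the signal model, sample counts and learning rate below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def spectrum(has_target, n_bins=64):
    """Toy active-sonar return: broadband clutter, plus a narrowband
    component when a target echo is present (an invented signal model)."""
    t = np.arange(256)
    sig = rng.normal(0.0, 1.0, t.size)             # clutter
    if has_target:
        sig += 0.8 * np.sin(2 * np.pi * 0.11 * t)  # target echo line
    return np.abs(np.fft.rfft(sig))[:n_bins]       # DFT magnitude features

X = np.array([spectrum(k % 2 == 1) for k in range(400)])
y = np.array([k % 2 for k in range(400)], dtype=float)
Xs = (X - X.mean(axis=0)) / X.std(axis=0)          # standardize features

# Logistic regression fitted by plain gradient descent on the cross-entropy loss.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-np.clip(Xs @ w + b, -30, 30)))
    w -= 0.1 * Xs.T @ (p - y) / len(y)
    b -= 0.1 * (p - y).mean()

p = 1.0 / (1.0 + np.exp(-np.clip(Xs @ w + b, -30, 30)))
acc = ((p > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

    In the study proper the training spectra come from the cylinder backscattering model rather than an additive sinusoid, but the classifier mechanics are the same.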

  8. Modeling the contribution of point sources and non-point sources to Thachin River water pollution.

    Science.gov (United States)

    Schaffner, Monika; Bader, Hans-Peter; Scheidegger, Ruth

    2009-08-15

    Major rivers in developing and emerging countries suffer increasingly from severe degradation of water quality. The current study uses a mathematical Material Flow Analysis (MMFA) as a complementary approach to address the degradation of river water quality due to nutrient pollution in the Thachin River Basin in Central Thailand. This paper gives an overview of the origins and flow paths of the various point and non-point pollution sources in the Thachin River Basin (in terms of nitrogen and phosphorus) and quantifies their relative importance within the system. The key parameters influencing the main nutrient flows are determined and possible mitigation measures discussed. The results show that aquaculture (as a point source) and rice farming (as a non-point source) are the key nutrient sources in the Thachin River Basin. Other point sources such as pig farms, households and industries, which were previously cited as the most relevant pollution sources in terms of organic pollution, play less significant roles in comparison. This order of importance shifts when considering the model results at the provincial level. Cross-checks with secondary data and field studies confirm the plausibility of our simulations. Specific nutrient loads for the pollution sources are derived; these can be used for a first broad quantification of nutrient pollution in comparable river basins. Based on an identification of the sensitive model parameters, possible mitigation scenarios are determined and their potential to reduce the nutrient load evaluated. A comparison of simulated nutrient loads with measured nutrient concentrations shows that nutrient retention in the river system may be significant. Sedimentation in the slow-flowing surface water network as well as nitrogen emission to the air from the warm, oxygen-deficient waters are certainly partly responsible, but wetlands along the river banks could also play an important role as nutrient sinks.

  9. Advanced LIGO: sources and astrophysics

    International Nuclear Information System (INIS)

    Creighton, Teviet

    2003-01-01

    Second-generation detectors in LIGO will take us from the discovery phase of gravitational-wave observations to the phase of true gravitational-wave astrophysics, with hundreds or thousands of potential sources. This paper surveys the most likely and interesting potential sources for Advanced LIGO and the astrophysical processes that each one will probe. I conclude that binary inspiral signals are expected, while continuous signals from pulsars are plausible but not guaranteed. Other sources, such as core-collapse bursts, cosmic strings and primordial stochastic backgrounds, are speculative sources for Advanced LIGO, but also potentially the most interesting, since they push the limits of our theoretical knowledge.

  10. Raising Awareness and Signaling Quality to Uninformed Consumers: A Price-Advertising Model

    OpenAIRE

    Hao Zhao

    2000-01-01

    The objective of this paper is to investigate the firm's optimal advertising and pricing strategies when introducing a new product. We extend the existing signaling literature on advertising spending and price by constructing a model in which advertising is used both to raise awareness about the product and to signal its quality. By comparing the complete information game and the incomplete information game, we find that the high-quality firm will reduce advertising spending and increase pric...

  11. A cost-efficient expansion of renewable energy sources in the European electricity system. An integrated modelling approach with a particular emphasis on diurnal and seasonal patterns

    Energy Technology Data Exchange (ETDEWEB)

    Golling, Christiane

    2012-11-01

    This thesis determines a cost-efficient expansion of electricity generated by renewable energy sources (RES-E) in the European power generation system. It is an integrated modelling approach with a particular emphasis on the diurnal and seasonal patterns of renewable energy sources (RES). The integrated modelling approach optimizes the overall European electricity system, comprising fossil, nuclear and renewable generation as well as storage capacities, and corresponds to a situation in which renewable generation is subject to electricity price signals. In sensitivity scenarios, cases of the integrated model approach are compared to situations in which renewable generation is granted priority feed-in and is decoupled from electricity price signals. In addition, the role of different flexibility options, such as storage capacities and grid expansion, is scrutinized. The methodology of the thesis consists of two parts. First, it develops an integrative model approach by extending an existing European electricity model that previously comprised only conventional power generating technologies. Second, an appropriate representation of intermittent RES for electricity market models is established by determining corresponding typedays. The typeday modelling takes into account the spatial correlation of RES and the correlation between wind and solar power. Moreover, it captures average dispatch-relevant diurnal and seasonal RES characteristics such as the level, the variance and the gradient. The scenario analysis shows that separate development of renewable and conventional technologies implies several inefficiencies, which increase with higher RES-E penetration. Inefficiencies such as increased wind power curtailment, an augmented capital turnover and a higher cumulative installed power generating capacity are revealed and quantified.
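
    The typeday idea described above (condensing many diurnal RES profiles into a few representative days) can be illustrated with a toy clustering sketch; the two synthetic wind regimes and the plain k-means routine below are stand-ins for illustration, not the thesis's actual method:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 24-hour wind generation profiles from two weather regimes.
windy = 1.0 + 0.3 * rng.normal(size=(50, 24))
calm = 0.2 + 0.1 * rng.normal(size=(50, 24))
days = np.vstack([windy, calm])

def kmeans(X, k, iters=50):
    """Plain k-means; the resulting centroids play the role of 'typedays'."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return centers, labels

typedays, labels = kmeans(days, k=2)
print(np.sort(typedays.mean(axis=1)).round(1))  # mean level of each typeday
```

    A dispatch model would then run over the few recovered typedays, weighted by how many real days fall into each cluster, instead of over every day of the year.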

  12. Detection of GNSS Signal Propagation in Urban Canyons Using 3D City Models

    Directory of Open Access Journals (Sweden)

    Petra Pisova

    2015-01-01

    Full Text Available This paper presents one solution to the problem of multipath propagation and its effects on Global Navigation Satellite System (GNSS) signals in urban canyons. GNSS signals may reach a receiver not only through Line-of-Sight (LOS) paths; they are often blocked, reflected or diffracted by tall buildings, leading to unmodelled GNSS errors in position estimation. Therefore, in order to detect and mitigate the impact of multipath, a new ray-tracing model for the simulation of GNSS signal reception in urban canyons is proposed, based on digital 3D map information, the known positions of GNSS satellites and an assumed position of the receiver. The model is established and validated using experimental as well as real data. It is specially designed for complex environments and situations where positioning with the highest accuracy is required; a typical example is navigation for blind people.
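
    A minimal sketch of the LOS-blockage test at the heart of such a ray-tracing model, using the standard slab ray/box intersection against axis-aligned building boxes; all coordinates below are invented example values:

```python
import numpy as np

def los_blocked(receiver, satellite, boxes):
    """True if the straight receiver-to-satellite ray hits any axis-aligned
    building box (the standard 'slab' ray/AABB intersection test)."""
    r = np.asarray(receiver, dtype=float)
    d = np.asarray(satellite, dtype=float) - r
    for lo, hi in boxes:
        with np.errstate(divide="ignore", invalid="ignore"):
            t1 = (np.asarray(lo, dtype=float) - r) / d
            t2 = (np.asarray(hi, dtype=float) - r) / d
        t_near = np.maximum(np.minimum(t1, t2), 0.0).max()  # entry parameter
        t_far = np.minimum(np.maximum(t1, t2), 1.0).min()   # exit parameter
        if t_near <= t_far:
            return True
    return False

# A 30 m tall building between a street-level receiver and two satellites.
building = ([10.0, -5.0, 0.0], [20.0, 5.0, 30.0])
rx = [0.0, 0.0, 1.5]
sat_low = [1000.0, 0.0, 100.0]     # low-elevation satellite: signal blocked
sat_high = [1000.0, 0.0, 5000.0]   # high-elevation satellite: clear LOS
print(los_blocked(rx, sat_low, [building]), los_blocked(rx, sat_high, [building]))
```

    A full model additionally traces reflected and diffracted paths off the building faces; the blockage test above is only the first classification step.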

  13. Variability of dynamic source parameters inferred from kinematic models of past earthquakes

    KAUST Repository

    Causse, M.

    2013-12-24

    We analyse the scaling and distribution of average dynamic source properties (fracture energy, static, dynamic and apparent stress drops) using 31 kinematic inversion models from 21 crustal earthquakes. Shear-stress histories are computed by solving the elastodynamic equations while imposing the slip velocity of a kinematic source model as a boundary condition on the fault plane. This is achieved using a 3-D finite difference method in which the rupture kinematics are modelled with the staggered-grid-split-node fault representation method of Dalguer & Day. Dynamic parameters are then estimated from the calculated stress-slip curves and averaged over the fault plane. Our results indicate that fracture energy, static, dynamic and apparent stress drops tend to increase with magnitude. The epistemic uncertainty due to uncertainties in kinematic inversions remains small (ϕ ∼ 0.1 in log10 units), showing that kinematic source models provide robust information to analyse the distribution of average dynamic source parameters. The proposed scaling relations may be useful to constrain friction law parameters in spontaneous dynamic rupture calculations for earthquake source studies, and physics-based near-source ground-motion prediction for seismic hazard and risk mitigation.

  14. Consistent modelling of wind turbine noise propagation from source to receiver.

    Science.gov (United States)

    Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong; Dag, Kaya O; Moriarty, Patrick

    2017-11-01

    The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling and the corresponding flow fields are used to simulate sound generation and propagation. The local blade relative velocity, angle of attack, and turbulence characteristics are input to the sound generation model. Time-dependent blade locations and the velocity between the noise source and receiver are considered within a quasi-3D propagation model. Long-range noise propagation of a 5 MW wind turbine is investigated. Sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine.

  15. Lunatic fringe-mediated Notch signaling regulates adult hippocampal neural stem cell maintenance.

    Science.gov (United States)

    Semerci, Fatih; Choi, William Tin-Shing; Bajic, Aleksandar; Thakkar, Aarohi; Encinas, Juan Manuel; Depreux, Frederic; Segil, Neil; Groves, Andrew K; Maletic-Savatic, Mirjana

    2017-07-12

    Hippocampal neural stem cells (NSCs) integrate inputs from multiple sources to balance quiescence and activation. Notch signaling plays a key role during this process. Here, we report that Lunatic fringe (Lfng), a key modifier of the Notch receptor, is selectively expressed in NSCs. Further, Lfng in NSCs and the Notch ligands Delta1 and Jagged1, expressed by their progeny, together influence NSC recruitment, cell cycle duration, and terminal fate. We propose a new model in which Lfng-mediated Notch signaling enables direct communication between an NSC and its descendants, so that progeny can send feedback signals to the 'mother' cell to modify its cell cycle status. Lfng-mediated Notch signaling appears to be a key factor governing NSC quiescence, division, and fate.

  16. Data Sources for NetZero Ft Carson Model

    Data.gov (United States)

    U.S. Environmental Protection Agency — Table of values used to parameterize and evaluate the Ft Carson NetZero integrated Model with published reference sources for each value. This dataset is associated...

  17. Modeling of renewable hybrid energy sources

    Directory of Open Access Journals (Sweden)

    Dumitru Cristian Dragos

    2009-12-01

    Full Text Available Recent developments and trends in electric power consumption indicate an increasing use of renewable energy. Renewable energy technologies offer the promise of clean, abundant energy gathered from self-renewing resources such as the sun, wind, earth and plants. Virtually all regions of the world have renewable resources of one type or another. From this point of view, studies on renewable energy attract more and more attention. The present paper presents different mathematical models for different types of renewable energy sources, such as solar energy and wind energy. The validation and adaptation of such models to hybrid systems working in geographical and meteorological conditions specific to the central part of the Transylvania region is also presented. The conclusions based on the validation of these models are also shown.
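
    As a concrete example of the kind of mathematical model discussed here, a simplified wind turbine power curve (cubic ramp between cut-in and rated wind speed) can be written as follows; all turbine parameters are generic illustrative figures, not those of any specific machine:

```python
def wind_power_kw(v, cut_in=3.0, rated_v=12.0, rated_kw=2000.0, cut_out=25.0):
    """Simplified turbine power output (kW) at hub-height wind speed v (m/s).

    Zero below cut-in and above cut-out; cubic ramp up to the rated speed,
    constant rated power beyond it."""
    if v < cut_in or v >= cut_out:
        return 0.0
    if v >= rated_v:
        return rated_kw
    return rated_kw * (v**3 - cut_in**3) / (rated_v**3 - cut_in**3)

for v in (2, 6, 12, 30):
    print(v, round(wind_power_kw(v), 1))
```

    Feeding measured or simulated wind-speed series through such a curve is the usual first step when coupling a wind model into a hybrid-system simulation.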

  18. Comparison of analytic source models for head scatter factor calculation and planar dose calculation for IMRT

    International Nuclear Information System (INIS)

    Yan Guanghua; Liu, Chihray; Lu Bo; Palta, Jatinder R; Li, Jonathan G

    2008-01-01

    The purpose of this study was to choose an appropriate head scatter source model for the fast and accurate independent planar dose calculation for intensity-modulated radiation therapy (IMRT) with MLC. The performance of three different head scatter source models regarding their ability to model head scatter and facilitate planar dose calculation was evaluated. A three-source model, a two-source model and a single-source model were compared in this study. In the planar dose calculation algorithm, in-air fluence distribution was derived from each of the head scatter source models while considering the combination of Jaw and MLC opening. Fluence perturbations due to tongue-and-groove effect, rounded leaf end and leaf transmission were taken into account explicitly. The dose distribution was calculated by convolving the in-air fluence distribution with an experimentally determined pencil-beam kernel. The results were compared with measurements using a diode array and passing rates with 2%/2 mm and 3%/3 mm criteria were reported. It was found that the two-source model achieved the best agreement on head scatter factor calculation. The three-source model and single-source model underestimated head scatter factors for certain symmetric rectangular fields and asymmetric fields, but similar good agreement could be achieved when monitor back scatter effect was incorporated explicitly. All the three source models resulted in comparable average passing rates (>97%) when the 3%/3 mm criterion was selected. The calculation with the single-source model and two-source model was slightly faster than the three-source model due to their simplicity
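
    The dose-calculation step described above (convolving an in-air fluence map with an experimentally determined pencil-beam kernel) can be sketched as follows; the grid, field size and Gaussian kernel are toy stand-ins for the measured quantities:

```python
import numpy as np

n, px = 128, 0.1                              # grid points, pixel size (cm)
x = (np.arange(n) - n // 2) * px
X, Y = np.meshgrid(x, x)

fluence = ((np.abs(X) < 2.5) & (np.abs(Y) < 2.5)).astype(float)  # 5x5 cm open field
kernel = np.exp(-(X**2 + Y**2) / (2 * 0.3**2))                   # toy pencil beam
kernel /= kernel.sum()                                           # unit total dose

# FFT-based 2-D convolution; the circular wrap-around is harmless here
# because the field sits well inside the grid.
dose = np.real(np.fft.ifft2(np.fft.fft2(fluence) * np.fft.fft2(np.fft.ifftshift(kernel))))
print(round(dose.max(), 3))  # ~1.0 on the central axis; the penumbra is smeared
```

    In the study the fluence additionally carries the MLC perturbations (tongue-and-groove, rounded leaf ends, transmission) before this convolution is applied.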

  19. Comparison of analytic source models for head scatter factor calculation and planar dose calculation for IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Yan Guanghua [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, FL 32611 (United States); Liu, Chihray; Lu Bo; Palta, Jatinder R; Li, Jonathan G [Department of Radiation Oncology, University of Florida, Gainesville, FL 32610-0385 (United States)

    2008-04-21

    The purpose of this study was to choose an appropriate head scatter source model for the fast and accurate independent planar dose calculation for intensity-modulated radiation therapy (IMRT) with MLC. The performance of three different head scatter source models regarding their ability to model head scatter and facilitate planar dose calculation was evaluated. A three-source model, a two-source model and a single-source model were compared in this study. In the planar dose calculation algorithm, in-air fluence distribution was derived from each of the head scatter source models while considering the combination of Jaw and MLC opening. Fluence perturbations due to tongue-and-groove effect, rounded leaf end and leaf transmission were taken into account explicitly. The dose distribution was calculated by convolving the in-air fluence distribution with an experimentally determined pencil-beam kernel. The results were compared with measurements using a diode array and passing rates with 2%/2 mm and 3%/3 mm criteria were reported. It was found that the two-source model achieved the best agreement on head scatter factor calculation. The three-source model and single-source model underestimated head scatter factors for certain symmetric rectangular fields and asymmetric fields, but similar good agreement could be achieved when monitor back scatter effect was incorporated explicitly. All the three source models resulted in comparable average passing rates (>97%) when the 3%/3 mm criterion was selected. The calculation with the single-source model and two-source model was slightly faster than the three-source model due to their simplicity.

  20. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere

    International Nuclear Information System (INIS)

    Ma, Denglong; Zhang, Zaoxiao

    2016-01-01

    Highlights: • Intelligent network models were built to predict contaminant gas concentrations. • Improved network models coupled with the Gaussian dispersion model are presented. • The new model has high efficiency and accuracy for concentration prediction. • The new models were applied to identify the leakage source with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when a contaminant gas leakage occurs. Intelligent network models such as the radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) models can be used for gas dispersion prediction. However, the prediction results from these network models, which take too many inputs based on the original monitoring parameters, are not in good agreement with the experimental data. Therefore, a new series of machine learning algorithm (MLA) models, which combine the classic Gaussian model with MLA algorithms, is presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best, and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, and network models based on the original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method for predicting contaminant gas dispersion, as well as a good forward model for the emission source parameter identification problem.
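
    For reference, the classic Gaussian dispersion model that the Gaussian-MLA approach builds on can be sketched as follows; the linear sigma-growth coefficients and the stack parameters are rough illustrative values, not those used in the study:

```python
import numpy as np

def gaussian_plume(q, u, x, y, z, h, a=0.08, b=0.06):
    """Ground-reflected Gaussian plume concentration (g/m^3).

    q: emission rate (g/s); u: wind speed (m/s); h: effective stack height (m).
    The dispersion widths grow linearly with downwind distance x here; the
    coefficients a and b are crude stand-ins for stability-class formulas."""
    sy, sz = a * x, b * x
    return (q / (2 * np.pi * u * sy * sz)
            * np.exp(-y**2 / (2 * sy**2))
            * (np.exp(-(z - h)**2 / (2 * sz**2)) + np.exp(-(z + h)**2 / (2 * sz**2))))

# Ground-level centreline concentration 500 m downwind of a 10 g/s release at 30 m.
c = gaussian_plume(q=10.0, u=3.0, x=500.0, y=0.0, z=0.0, h=30.0)
print(f"{c:.2e} g/m3")
```

    In the hybrid scheme, this analytic prediction becomes an input feature of the MLA model, which learns a correction rather than the whole concentration field.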

  1. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Denglong [Fuli School of Food Equipment Engineering and Science, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); Zhang, Zaoxiao, E-mail: zhangzx@mail.xjtu.edu.cn [State Key Laboratory of Multiphase Flow in Power Engineering, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); School of Chemical Engineering and Technology, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China)

    2016-07-05

    Highlights: • Intelligent network models were built to predict contaminant gas concentrations. • Improved network models coupled with the Gaussian dispersion model are presented. • The new model has high efficiency and accuracy for concentration prediction. • The new models were applied to identify the leakage source with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when a contaminant gas leakage occurs. Intelligent network models such as the radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) models can be used for gas dispersion prediction. However, the prediction results from these network models, which take too many inputs based on the original monitoring parameters, are not in good agreement with the experimental data. Therefore, a new series of machine learning algorithm (MLA) models, which combine the classic Gaussian model with MLA algorithms, is presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best, and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, and network models based on the original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method for predicting contaminant gas dispersion, as well as a good forward model for the emission source parameter identification problem.

  2. Nonlinear Control of Heartbeat Models

    Directory of Open Access Journals (Sweden)

    Witt Thanom

    2011-02-01

    Full Text Available This paper presents a novel application of nonlinear control theory to heartbeat models. Existing heartbeat models are investigated and modified by incorporating the control input as a pacemaker to provide the control channel. A nonlinear feedback linearization technique is applied to force the output of the systems to generate an artificial electrocardiogram (ECG) signal using discrete data as the reference inputs. The synthetic ECG may serve as a flexible signal source to assess the effectiveness of a diagnostic ECG signal-processing device.
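
    A synthetic ECG reference of the kind mentioned above can be sketched as a sum of Gaussian bumps at the P, Q, R, S and T wave positions; this static template is only an illustration, not the feedback-linearized model itself, and all amplitudes and widths are invented:

```python
import numpy as np

WAVES = {  # wave: (centre time s, amplitude mV, width s) -- invented values
    "P": (0.15, 0.15, 0.025),
    "Q": (0.26, -0.10, 0.010),
    "R": (0.30, 1.00, 0.012),
    "S": (0.34, -0.20, 0.010),
    "T": (0.55, 0.30, 0.040),
}

def ecg_beat(t):
    """One synthetic ECG beat as a sum of Gaussian bumps."""
    return sum(a * np.exp(-((t - c) ** 2) / (2 * w**2)) for c, a, w in WAVES.values())

t = np.linspace(0.0, 0.8, 800)
beat = ecg_beat(t)
print(round(float(t[np.argmax(beat)]), 2))  # the R peak sits near 0.30 s
```

    Samples of such a template would serve as the discrete reference inputs that the feedback-linearized system is driven to track.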

  3. Intestinal tumorigenesis is not affected by progesterone signaling in rodent models.

    Directory of Open Access Journals (Sweden)

    Jarom Heijmans

    Full Text Available Clinical data suggest that progestins have chemopreventive properties in the development of colorectal cancer. We set out to examine a potential protective effect of progestins and progesterone signaling on colon cancer development. In normal and neoplastic intestinal tissue, we found that the progesterone receptor (PR) is not expressed. Expression was confined to sporadic mesenchymal cells. To analyze the influence of systemic progesterone receptor signaling, we crossed mice that lacked the progesterone receptor (PRKO) to the Apc(Min/+) mouse, a model for spontaneous intestinal polyposis. PRKO-Apc(Min/+) mice exhibited no change in polyp number, size or localization compared to Apc(Min/+) mice. To examine effects of progestins on the intestinal epithelium that are independent of the PR, we treated mice with MPA. We found no effects of either progesterone or MPA on gross intestinal morphology or epithelial proliferation. Also, in rats treated with MPA, injection with the carcinogen azoxymethane did not result in a difference in the number or size of aberrant crypt foci, a surrogate end-point for adenoma development. We conclude that expression of the progesterone receptor is limited to cells in the intestinal mesenchyme. We did not observe any effect of progesterone receptor signaling or of progestin treatment in rodent models of intestinal tumorigenesis.

  4. Martian methane plume models for defining Mars rover methane source search strategies

    Science.gov (United States)

    Nicol, Christopher; Ellery, Alex; Lynch, Brian; Cloutis, Ed

    2018-07-01

    The detection of atmospheric methane on Mars implies an active methane source. This introduces the possibility of a biotic source with the implied need to determine whether the methane is indeed biotic in nature or geologically generated. There is a clear need for robotic algorithms which are capable of manoeuvring a rover through a methane plume on Mars to locate its source. We explore aspects of Mars methane plume modelling to reveal complex dynamics characterized by advection and diffusion. A statistical analysis of the plume model has been performed and compared to analyses of terrestrial plume models. Finally, we consider a robotic search strategy to find a methane plume source. We find that gradient-based techniques are ineffective, but that more sophisticated model-based search strategies are unlikely to be available in near-term rover missions.
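
    The claim that gradient-based techniques are ineffective can be illustrated with a toy experiment: gradient-following works on a smooth mean concentration field but degrades badly once plume intermittency is modelled as multiplicative noise. The exponential field and lognormal fluctuation factor below are invented stand-ins for the plume statistics discussed above:

```python
import numpy as np

def run_search(noise_sigma, seed=2, steps=200):
    """Follow a finite-difference gradient estimate of the plume field uphill;
    returns the final distance to the source (placed at the origin)."""
    rng = np.random.default_rng(seed)

    def conc(p):
        mean = np.exp(-np.linalg.norm(p) / 20.0)       # smooth mean field
        return mean * rng.lognormal(0.0, noise_sigma)  # intermittency factor

    p = np.array([40.0, 25.0])
    for _ in range(steps):
        g = np.array([conc(p + 0.5 * e) - conc(p - 0.5 * e) for e in np.eye(2)])
        p = p + g / (np.linalg.norm(g) + 1e-12)        # unit step uphill
    return np.linalg.norm(p)

print(run_search(0.0))  # smooth field: the rover homes in on the source
print(run_search(1.5))  # intermittent plume: the local gradient is misleading
```

    Each instantaneous sample says little about the mean gradient, which is why model-based or averaging strategies are needed in practice.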

  5. Source term model evaluations for the low-level waste facility performance assessment

    Energy Technology Data Exchange (ETDEWEB)

    Yim, M.S.; Su, S.I. [North Carolina State Univ., Raleigh, NC (United States)

    1995-12-31

    The estimation of release of radionuclides from various waste forms to the bottom boundary of the waste disposal facility (source term) is one of the most important aspects of LLW facility performance assessment. In this work, several currently used source term models are comparatively evaluated for the release of carbon-14 based on a test case problem. The models compared include PRESTO-EPA-CPG, IMPACTS, DUST and NEFTRAN-II. Major differences in assumptions and approaches between the models are described and key parameters are identified through sensitivity analysis. The source term results from different models are compared and other concerns or suggestions are discussed.

  6. How Many Separable Sources? Model Selection In Independent Components Analysis

    Science.gov (United States)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988
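
    The inseparability of Gaussian sources follows from the rotation invariance of their higher-order statistics; a quick numerical illustration (not the mixed ICA/PCA algorithm itself) compares the excess kurtosis, a typical ICA contrast, of rotated Gaussian versus Laplacian source mixtures:

```python
import numpy as np

rng = np.random.default_rng(3)

def excess_kurtosis(x):
    """Sample excess kurtosis; zero for Gaussian data."""
    x = x - x.mean()
    return (x**4).mean() / (x**2).mean() ** 2 - 3.0

n = 200_000
gauss = rng.normal(size=(2, n))     # two Gaussian sources
laplace = rng.laplace(size=(2, n))  # two non-Gaussian (Laplacian) sources

theta = 0.7  # an arbitrary mixing rotation
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

k_gauss = excess_kurtosis((R @ gauss)[0])
k_laplace = excess_kurtosis((R @ laplace)[0])
print(round(k_gauss, 2), round(k_laplace, 2))
```

    For the Gaussian pair the contrast stays at zero for every rotation angle, so an ICA algorithm has nothing to optimize; this is the subspace that mixed ICA/PCA hands over to PCA instead.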

  7. Basic Testing of the DUCHAMP Source Finder

    Science.gov (United States)

    Westmeier, T.; Popping, A.; Serra, P.

    2012-01-01

    This paper presents and discusses the results of basic source finding tests in three dimensions (using spectroscopic data cubes) with DUCHAMP, the standard source finder for the Australian Square Kilometre Array Pathfinder. For this purpose, we generated different sets of unresolved and extended Hi model sources. These models were then fed into DUCHAMP, using a range of different parameters and methods provided by the software. The main aim of the tests was to study the performance of DUCHAMP on sources with different parameters and morphologies and assess the accuracy of DUCHAMP's source parametrisation. Overall, we find DUCHAMP to be a powerful source finder capable of reliably detecting sources down to low signal-to-noise ratios and accurately measuring their position and velocity. In the presence of noise in the data, DUCHAMP's measurements of basic source parameters, such as spectral line width and integrated flux, are affected by systematic errors. These errors are a consequence of the effect of noise on the specific algorithms used by DUCHAMP for measuring source parameters in combination with the fact that the software only takes into account pixels above a given flux threshold and hence misses part of the flux. In scientific applications of DUCHAMP these systematic errors would have to be corrected for. Alternatively, DUCHAMP could be used as a source finder only, and source parametrisation could be done in a second step using more sophisticated parametrisation algorithms.
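
    The flux bias described here is easy to reproduce in isolation. A toy 1-D NumPy example (not DUCHAMP itself; the profile width and threshold are arbitrary) sums only the pixels of a Gaussian line profile above a detection threshold and shows the systematic underestimate caused by the missed faint wings:

```python
import numpy as np

# Toy 1-D illustration of the flux bias: summing only pixels above a
# detection threshold misses the faint wings of the line profile.
x = np.arange(-50, 51, dtype=float)
profile = 5.0 * np.exp(-0.5 * (x / 8.0) ** 2)   # Gaussian line, peak SNR 5
true_flux = profile.sum()

threshold = 1.5                                  # cutoff in noise-sigma units
measured_flux = profile[profile > threshold].sum()

ratio = measured_flux / true_flux
print(f"measured/true integrated flux = {ratio:.2f}")
```

    With these illustrative numbers roughly 12% of the integrated flux lies below the cutoff; in real data, noise scattering pixels across the threshold modifies the bias further, which is why a parametrisation-stage correction is needed.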

  8. Recognition of NEMP and LEMP signals based on auto-regression model and artificial neural network

    International Nuclear Information System (INIS)

    Li Peng; Song Lijun; Han Chao; Zheng Yi; Cao Baofeng; Li Xiaoqiang; Zhang Xueqin; Liang Rui

    2010-01-01

    The auto-regression (AR) model, a power spectrum estimation method for stationary random signals, and an artificial neural network were adopted to recognize nuclear and lightning electromagnetic pulses. The self-correlation function and Burg algorithms were used to acquire the AR model coefficients as eigenvalues, and a BP artificial neural network was introduced as the classifier, with different numbers of hidden layers and hidden layer nodes. The results show that the AR model is effective for feature extraction from these signals, and that the Burg algorithm is more effective than the self-correlation function algorithm. (authors)
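
    A minimal sketch of the feature-extraction stage (the BP-network classifier is omitted): Burg's method estimates AR coefficients by minimizing forward and backward prediction errors, and the coefficients then serve as the feature vector. The AR order and the synthetic test signal below are illustrative, not from the paper.

```python
import numpy as np

def burg_ar(x, order):
    """Estimate AR coefficients [1, a1, ..., ap] with Burg's method.

    Convention: x[n] + a1*x[n-1] + ... + ap*x[n-p] = e[n].
    Returns (coefficients, final prediction-error power).
    """
    x = np.asarray(x, dtype=float)
    f = x.copy()                     # forward prediction errors
    b = x.copy()                     # backward prediction errors
    a = np.array([1.0])
    err = np.mean(x**2)
    for m in range(order):
        fm, bm = f[m + 1:], b[m:-1]
        k = -2.0 * (fm @ bm) / (fm @ fm + bm @ bm)   # reflection coefficient
        a = np.r_[a, 0.0] + k * np.r_[a, 0.0][::-1]  # Levinson-style update
        f[m + 1:], b[m + 1:] = fm + k * bm, bm + k * fm
        err *= 1.0 - k * k
    return a, err

# Synthetic AR(2) test signal: x[n] = 0.75 x[n-1] - 0.5 x[n-2] + e[n]
rng = np.random.default_rng(1)
e = rng.standard_normal(5000)
x = np.zeros_like(e)
for n in range(2, len(x)):
    x[n] = 0.75 * x[n - 1] - 0.5 * x[n - 2] + e[n]

a, err = burg_ar(x, order=2)
features = a[1:]   # feature vector that would be fed to the classifier
print(features)    # close to [-0.75, 0.5] in this sign convention
```

    The recovered coefficients match the generating process up to estimation noise; for pulse classification, one such coefficient vector per waveform becomes the network input.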

  9. Numerical model of electron cyclotron resonance ion source

    Directory of Open Access Journals (Sweden)

    V. Mironov

    2015-12-01

    Important features of the electron cyclotron resonance ion source (ECRIS) operation are accurately reproduced with a numerical code. The code uses the particle-in-cell technique to model the dynamics of ions in ECRIS plasma. It is shown that a gas dynamical ion confinement mechanism is sufficient to provide the ion production rates in ECRIS close to the experimentally observed values. Extracted ion currents are calculated and compared to the experiment for a few sources. Changes in the simulated extracted ion currents are obtained by varying the gas flow into the source chamber and the microwave power. Empirical scaling laws for ECRIS design are studied and the underlying physical effects are discussed.

  10. Modeling binaural signal detection

    NARCIS (Netherlands)

    Breebaart, D.J.

    2001-01-01

    With the advent of multimedia technology and powerful signal processing systems, audio processing and reproduction has gained renewed interest. Examples of products that have been developed are audio coding algorithms to efficiently store and transmit music and speech, or audio reproduction systems

  11. Model developments for quantitative estimates of the benefits of the signals on nuclear power plant availability and economics

    International Nuclear Information System (INIS)

    Seong, Poong Hyun

    1993-01-01

    A novel framework for quantitative estimates of the benefits of signals on nuclear power plant availability and economics has been developed in this work. The models developed in this work quantify how the perfect signals affect the human operator's success in restoring the power plant to the desired state when it enters undesirable transients. Also, the models quantify the economic benefits of these perfect signals. The models have been applied to the condensate feedwater system of the nuclear power plant for demonstration. (Author)

  12. The signal-to-noise analysis of the Little-Hopfield model revisited

    International Nuclear Information System (INIS)

    Bolle, D; Blanco, J Busquets; Verbeiren, T

    2004-01-01

    Using the generating functional analysis an exact recursion relation is derived for the time evolution of the effective local field of the fully connected Little-Hopfield model. It is shown that, by leaving out the feedback correlations arising from earlier times in this effective dynamics, one precisely finds the recursion relations usually employed in the signal-to-noise approach. The consequences of this approximation as well as the physics behind it are discussed. In particular, it is pointed out why it is hard to notice the effects, especially for model parameters corresponding to retrieval. Numerical simulations confirm these findings. The signal-to-noise analysis is then extended to include all correlations, making it a full theory for dynamics at the level of the generating functional analysis. The results are applied to the frequently employed extremely diluted (a)symmetric architectures and to sequence processing networks.

  13. Real-time traffic signal optimization model based on average delay time per person

    Directory of Open Access Journals (Sweden)

    Pengpeng Jiao

    2015-10-01

    Real-time traffic signal control is very important for relieving urban traffic congestion. Many existing traffic control models were formulated using an optimization approach, with objective functions minimizing vehicle delay time. To improve people's trip efficiency, this article aims to minimize delay time per person. Based on the time-varying traffic flow data at intersections, the article first fits curves of cumulative arriving and departing vehicles, as well as the corresponding functions. Moreover, this article transfers vehicle delay time to personal delay time using the average passenger loads of cars and buses, employs such time as the objective function, and proposes a signal timing optimization model for intersections to achieve real-time signal parameters, including cycle length and green time. This research further implements a case study based on practical data collected at an intersection in Beijing, China. The average delay time per person and queue length are employed as evaluation indices to show the performance of the model. The results show that the proposed methodology is capable of improving traffic efficiency and is very effective for real-world applications.
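
    The core idea, converting vehicle delay into person delay via average occupancies and then searching over cycle length and green split, can be sketched in a few lines. This toy uses the uniform-delay term of Webster's formula as an assumed delay model; the flows, saturation flows, occupancies and grid-search ranges are all illustrative, not from the case study:

```python
# Toy two-phase signal timing: minimize average delay per PERSON, not per vehicle.
LOST_TIME = 8.0          # total lost time per cycle (s)
SAT_FLOW = 1800.0        # saturation flow per phase (veh/h)

# (vehicle flow veh/h, average persons per vehicle) for the two phases;
# phase B carries buses, so its average occupancy is much higher.
PHASES = [(700.0, 1.3), (500.0, 4.0)]

def uniform_delay(cycle, green, flow):
    """Webster's uniform-delay term (s/veh) for one phase."""
    lam = green / cycle                      # effective green ratio
    x = min(flow / (lam * SAT_FLOW), 0.95)   # degree of saturation, capped
    return 0.5 * cycle * (1.0 - lam) ** 2 / (1.0 - lam * x)

def person_delay(cycle, g1):
    """Average delay per person (s) for a given cycle and first green time."""
    g2 = cycle - LOST_TIME - g1
    if g1 < 5.0 or g2 < 5.0:
        return float("inf")                  # infeasible split
    total_delay = total_people = 0.0
    for (flow, occ), g in zip(PHASES, (g1, g2)):
        d = uniform_delay(cycle, g, flow)
        total_delay += d * flow * occ        # person-seconds per hour
        total_people += flow * occ
    return total_delay / total_people

# Grid search over cycle length and the first phase's green time.
best = min(((person_delay(c, g), c, g)
            for c in range(40, 121, 5)
            for g in range(5, 100)),
           key=lambda t: t[0])
print("min person delay %.1f s at cycle %d s, green1 %d s" % best)
```

    With occupancy weights of 1.3 and 4.0, each vehicle on the bus-carrying phase counts roughly three times as much in the objective, which is what distinguishes the per-person optimum from a plain vehicle-delay optimum.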

  14. A Simulation Study of the Radiation-Induced Bystander Effect: Modeling with Stochastically Defined Signal Reemission

    Directory of Open Access Journals (Sweden)

    Kohei Sasaki

    2012-01-01

    The radiation-induced bystander effect (RIBE) has been experimentally observed for different types of radiation, cell types, and cell culture conditions. However, the behavior of signal transmission between unirradiated and irradiated cells is not well known. In this study, we have developed a new model for RIBE based on the diffusion of soluble factors in cell cultures using a Monte Carlo technique. The model involves the signal emission probability from bystander cells following Poisson statistics. Simulations with this model show that the spatial configuration of the bystander cells agrees well with that of corresponding experiments, where the optimal emission probability is estimated through a large number of simulation runs. It was suggested that the most likely probability falls within 0.63–0.92 for a mean number of emission signals ranging from 1.0 to 2.5.
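
    The quoted probability range is consistent with the Poisson assumption: if the number of signals a cell emits is Poisson with mean λ, the probability of emitting at least one is 1 − e^(−λ), which for λ between 1.0 and 2.5 spans 0.63–0.92, matching the range reported above. A quick check:

```python
import math

def p_emit_at_least_one(mean_signals):
    """P(N >= 1) for N ~ Poisson(mean_signals)."""
    return 1.0 - math.exp(-mean_signals)

for lam in (1.0, 1.5, 2.0, 2.5):
    print(f"mean {lam:.1f}  ->  P(emit) = {p_emit_at_least_one(lam):.2f}")
# mean 1.0 gives ~0.63; mean 2.5 gives ~0.92
```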

  15. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.

    Directory of Open Access Journals (Sweden)

    Wei He

    Evaluating the single-event-effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and an accelerated radiation testing system for a signal processing platform based on a field programmable gate array (FPGA) are presented. Based on experimental results for different ions (O, Si, Cl, Ti) at the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10^-3 (errors/particle/cm^2), while the MTTF is approximately 110.7 h.
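
    A back-of-envelope calculation shows how a fluence-normalized SFER converts into an MTTF. The particle flux below is an illustrative value chosen so the numbers land near the reported 110.7 h; it is not a figure from the paper:

```python
# Converting a cross-section-like SFER into an error rate and MTTF.
# SFER has units of errors per unit particle fluence (errors/(particle/cm^2)).

SFER = 1e-3               # errors per (particle/cm^2), from the abstract
flux = 2.5e-3             # particles/(cm^2 * s) -- ILLUSTRATIVE flux value

error_rate = SFER * flux              # errors per second
mttf_hours = 1.0 / error_rate / 3600.0
print(f"error rate = {error_rate:.2e} /s, MTTF = {mttf_hours:.1f} h")
```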

  16. On Alternate Relaying with Improper Gaussian Signaling

    KAUST Repository

    Gaafar, Mohamed; Amin, Osama; Ikhlef, Aissa; Chaaban, Anas; Alouini, Mohamed-Slim

    2016-01-01

    In this letter, we investigate the potential benefits of adopting improper Gaussian signaling (IGS) in a two-hop alternate relaying (AR) system. Given the known benefits of using IGS in interference-limited networks, we propose to use IGS to relieve the inter-relay interference (IRI) impact on the AR system assuming no channel state information is available at the source. In this regard, we assume that the two relays use IGS and the source uses proper Gaussian signaling (PGS). Then, we optimize the degree of impropriety of the relays signal, measured by the circularity coefficient, to maximize the total achievable rate. Simulation results show that using IGS yields a significant performance improvement over PGS, especially when the first hop is a bottleneck due to weak source-relay channel gains and/or strong IRI.
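
    The quantity being optimized here, the circularity coefficient, measures how "improper" a complex Gaussian signal is: C_x = |E[x²]| / E[|x|²], which is 0 for proper signals (PGS) and 1 for maximally improper ones. A sketch of generating samples with a prescribed coefficient and verifying it empirically (the construction below is a standard one assumed for illustration, not taken from the letter):

```python
import numpy as np

def improper_gaussian(n, circularity, rng):
    """Zero-mean complex Gaussian with unit power and a given circularity coefficient.

    Construction: independent real Gaussians with unequal variances on the real
    and imaginary axes give E[x^2] = circularity and E[|x|^2] = 1.
    """
    re = np.sqrt((1.0 + circularity) / 2.0) * rng.standard_normal(n)
    im = np.sqrt((1.0 - circularity) / 2.0) * rng.standard_normal(n)
    return re + 1j * im

def circularity_coefficient(x):
    """Empirical |E[x^2]| / E[|x|^2]."""
    return abs(np.mean(x**2)) / np.mean(np.abs(x) ** 2)

rng = np.random.default_rng(0)
proper = improper_gaussian(100_000, 0.0, rng)    # PGS: coefficient ~ 0
improper = improper_gaussian(100_000, 0.8, rng)  # IGS: coefficient ~ 0.8
print(circularity_coefficient(proper), circularity_coefficient(improper))
```

    In the letter's setting, this scalar per relay is the optimization variable: sweeping it from 0 (PGS) toward 1 trades self-interference characteristics against the relays' own rate.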

  18. Transmission experiment by the simulated LMFBR model and propagation analysis of acoustic signals

    International Nuclear Information System (INIS)

    Kobayashi, Kenji; Yasuda, Tsutomu; Araki, Hitoshi.

    1981-01-01

    Acoustic transducers to detect sodium boiling may be installed in the upper structure and at the upper position of the reactor vessel wall under constricted conditions. A set of experiments on the transmission of acoustic vibration to various points of the vessel was performed over the frequency range 20 kHz -- 100 kHz, utilizing the half-scale hydraulic flow test facility simulating the reactor vessel. Acoustic signals from a sound source installed in the core were measured at each point by both hydrophones in the vessel and vibration pickups on the vessel wall. In these experiments, the transmission of signals to each detector position was clearly observed above the background noise level. These data have been summarized in terms of transmission loss and are furthermore compared with the background noise level of the flow to estimate the feasibility of detecting the sodium boiling sound. In the experiments with the simulation model, the signal-to-noise ratio was found to be about 13 dB for the hydrophone in the upper structure, 8 dB for the accelerometer and 16 dB for the AE sensor at the upper position on the vessel. Sound waves emanating from sodium boiling that propagate along the vessel wall may also be predicted theoretically. The result of the analysis suggests a capability of detection at the upper position of the reactor vessel wall. Leaky Lamb waves of the first symmetric (L1) and antisymmetric (F1) modes and the shear horizontal (SH) wave have been derived, in light of the attenuation due to coupling to liquid sodium, as the traveling modes over the frequency range 10 kHz -- 100 kHz for vessel wall thicknesses up to 50 mm. The leaky Lamb wave (L1) and SH modes have been proposed theoretically, under some assumptions, as the most suitable for detecting the sodium boiling sound propagating along the vessel wall. (author)

  19. Detection of anomalous signals in temporally correlated data (Invited)

    Science.gov (United States)

    Langbein, J. O.

    2010-12-01

    Detection of transient tectonic signals in data obtained from large geodetic networks requires the ability to detect signals that are both temporally and spatially coherent. In this report I will describe a modification to an existing method that estimates both the coefficients of a temporally correlated noise model and an efficient filter based on that noise model. This filter, when applied to the original time-series, effectively whitens (or flattens) the power spectrum. The filtered data provide the means to calculate running averages, which are then used to detect deviations from the background trends. For large networks, time-series of signal-to-noise ratio (SNR) can be easily constructed since, by filtering, each of the original time-series has been transformed into one that is closer to having a Gaussian distribution with a variance of 1.0. Anomalous intervals may be identified by counting the number of GPS sites for which the SNR exceeds a specified value. For example, during one time interval, if there were 5 out of 20 time-series with SNR>2, this would be considered anomalous; typically, one would expect at 95% confidence that there would be at most 1 out of 20 time-series with an SNR>2. For time intervals with an anomalously large number of high-SNR sites, the spatial distribution of the SNR is mapped to identify the location of the anomalous signal(s) and their degree of spatial clustering. Estimating the filter that should be used to whiten the data requires modification of the existing methods that employ maximum likelihood estimation to determine the temporal covariance of the data. In these methods, it is assumed that the noise components in the data are a combination of white, flicker and random-walk processes and that they are derived from three different and independent sources. Instead, in this new method, the covariance matrix is constructed assuming that only one source is responsible for the noise and that source can be represented as a white
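
    The counting argument can be made concrete: after whitening, each site's SNR is approximately standard normal, so under the null hypothesis the number of sites out of 20 exceeding SNR > 2 is binomial with per-site probability 1 − Φ(2) ≈ 0.023. A sketch assuming independence across sites (a simplification the spatial-coherence step is designed to go beyond):

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def binom_pmf(k, n, p):
    """Binomial probability mass function."""
    return math.comb(n, k) * p**k * (1.0 - p) ** (n - k)

p = 1.0 - phi(2.0)                 # per-site P(SNR > 2) under the null, ~0.023
n = 20
p_ge_1 = 1.0 - binom_pmf(0, n, p)                      # P(at least 1 site)
p_ge_5 = sum(binom_pmf(k, n, p) for k in range(5, n + 1))  # P(at least 5 sites)
print(f"P(>=1 of 20 sites with SNR>2) = {p_ge_1:.2f}")
print(f"P(>=5 of 20 sites with SNR>2) = {p_ge_5:.1e}")
```

    Five or more exceedances out of twenty is orders of magnitude less likely than chance, which is what justifies flagging such intervals as anomalous.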

  20. A pedagogical walkthrough of computational modeling and simulation of Wnt signaling pathway using static causal models in MATLAB.

    Science.gov (United States)

    Sinha, Shriprakash

    2016-12-01

    Simulation studies in systems biology involving computational experiments on Wnt signaling pathways abound in the literature but often lack a pedagogical perspective that might ease the understanding of beginner students and researchers in transition who intend to work on the modeling of the pathway. This paucity might happen due to restrictive business policies which enforce an unwanted embargo on the sharing of important scientific knowledge. A tutorial introduction to computational modeling of the Wnt signaling pathway in a human colorectal cancer dataset using static Bayesian network models is provided. The walkthrough might aid biologists/informaticians in understanding the design of computational experiments and is interleaved with exposition of the Matlab code and causal models from the Bayesian network toolbox. The manuscript elucidates the coding contents of the advance article by Sinha (Integr. Biol. 6:1034-1048, 2014) and takes the reader in a step-by-step process of how (a) the collection and the transformation of the available biological information from literature is done, (b) the integration of the heterogeneous data and prior biological knowledge in the network is achieved, (c) the simulation study is designed, (d) the hypothesis regarding a biological phenomenon is transformed into a computational framework, and (e) results and inferences drawn using d-connectivity/separability are reported. The manuscript finally ends with a programming assignment to help the readers get hands-on experience of a perturbation project. Description of the Matlab files is made available under the GNU GPL v3 license at the Google code project on https://code.google.com/p/static-bn-for-wnt-signaling-pathway and https://sites.google.com/site/shriprakashsinha/shriprakashsinha/projects/static-bn-for-wnt-signaling-pathway. Latest updates can be found at the latter website.

  1. A GIS-based time-dependent seismic source modeling of Northern Iran

    Science.gov (United States)

    Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza

    2017-01-01

    The first step in any seismic hazard study is the definition of seismogenic sources and the estimation of magnitude-frequency relationships for each source. There is as yet no standard methodology for source modeling and many researchers have worked on this topic. This study is an effort to define linear and area seismic sources for Northern Iran. The linear or fault sources are developed based on tectonic features and characteristic earthquakes, while the area sources are developed based on the spatial distribution of small to moderate earthquakes. Time-dependent recurrence relationships are developed for fault sources using a renewal approach, while time-independent frequency-magnitude relationships are proposed for area sources based on a Poisson process. GIS functionalities are used in this study to introduce and incorporate spatial-temporal and geostatistical indices in delineating area seismic sources. The proposed methodology is used to model seismic sources for an area of about 500 km by 400 km around Tehran. Previous research and reports are studied to compile an earthquake/fault catalog that is as complete as possible. All events are transformed to a uniform magnitude scale; duplicate events and dependent shocks are removed. The completeness and time distribution of the compiled catalog are taken into account. The proposed area and linear seismic sources in conjunction with the defined recurrence relationships can be used to develop time-dependent probabilistic seismic hazard analysis of Northern Iran.
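
    For the time-independent area sources, the Poisson assumption turns a Gutenberg–Richter recurrence relation directly into exceedance probabilities. A sketch with illustrative a- and b-values (not the study's estimates):

```python
import math

# Illustrative Gutenberg-Richter parameters for an area source (NOT the
# paper's values): log10 N(>=m) = a - b*m, with N in events per year.
a, b = 4.2, 1.0

def annual_rate(m):
    """Mean annual rate of events with magnitude >= m."""
    return 10.0 ** (a - b * m)

def poisson_exceedance(m, years):
    """P(at least one event >= m within the window), Poisson model."""
    return 1.0 - math.exp(-annual_rate(m) * years)

for m in (5.0, 6.0, 7.0):
    print(f"M>={m}: rate {annual_rate(m):.3f}/yr, "
          f"P(50 yr) = {poisson_exceedance(m, 50.0):.2f}")
```

    The time-dependent fault sources replace the constant annual rate with a renewal density conditioned on the time since the last characteristic event, so the same exceedance question no longer has this closed form.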

  2. Statistical mechanics of learning orthogonal signals for general covariance models

    International Nuclear Information System (INIS)

    Hoyle, David C

    2010-01-01

    Statistical mechanics techniques have proved to be useful tools in quantifying the accuracy with which signal vectors are extracted from experimental data. However, analysis has previously been limited to specific model forms for the population covariance C, which may be inappropriate for real world data sets. In this paper we obtain new statistical mechanical results for a general population covariance matrix C. For data sets consisting of p sample points in R^N we use the replica method to study the accuracy of orthogonal signal vectors estimated from the sample data. In the asymptotic limit of N, p → ∞ at fixed α = p/N, we derive analytical results for the signal direction learning curves. In the asymptotic limit the learning curves follow a single universal form, each displaying a retarded learning transition. An explicit formula for the location of the retarded learning transition is obtained and we find marked variation in the location of the retarded learning transition dependent on the distribution of population covariance eigenvalues. The results of the replica analysis are confirmed against simulations.

  3. Composite mathematical modeling of calcium signaling behind neuronal cell death in Alzheimer's disease.

    Science.gov (United States)

    Ranjan, Bobby; Chong, Ket Hing; Zheng, Jie

    2018-04-11

    Alzheimer's disease (AD) is a progressive neurological disorder, recognized as the most common cause of dementia in people aged 65 and above. AD is characterized by an increase in amyloid metabolism, and by the misfolding and deposition of β-amyloid oligomers in and around neurons in the brain. These processes remodel the calcium signaling mechanism in neurons, leading to cell death via apoptosis. Despite accumulating knowledge about the biological processes underlying AD, mathematical models to date are restricted to depicting only a small portion of the pathology. Here, we integrated multiple mathematical models to analyze and understand the relationship among amyloid depositions, calcium signaling and mitochondrial permeability transition pore (PTP) related cell apoptosis in AD. The model was used to simulate calcium dynamics in the absence and presence of AD. In the absence of AD, i.e. without β-amyloid deposition, the mitochondrial and cytosolic calcium levels remain at the low resting concentration. However, our in silico simulation of the presence of AD with β-amyloid deposition shows an increase in the entry of calcium ions into the cell and dysregulation of Ca2+ channel receptors on the endoplasmic reticulum. This composite model enabled us to simulate behavior that is not possible to measure experimentally. Our mathematical model depicting the mechanisms affecting calcium signaling in neurons can help understand AD at the systems level and has potential for diagnostic and therapeutic applications.
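
    The qualitative behavior described, resting calcium without amyloid versus an elevated steady state with it, can be caricatured with a one-compartment balance equation integrated by the Euler method. This is a deliberately minimal toy, not the composite model; every rate constant below is invented for illustration:

```python
# Toy cytosolic-calcium balance (NOT the paper's composite model):
#   d[Ca]/dt = leak + amyloid_influx - pump_rate * [Ca]
# The point is only the qualitative shift of the steady state when the
# amyloid-dependent influx term is switched on.

def simulate_ca(amyloid_influx, t_end=200.0, dt=0.01):
    leak, pump_rate = 0.05, 1.0     # uM/s and 1/s, illustrative values
    ca = 0.05                       # resting cytosolic Ca (uM)
    for _ in range(int(t_end / dt)):
        ca += dt * (leak + amyloid_influx - pump_rate * ca)
    return ca

resting = simulate_ca(amyloid_influx=0.0)    # settles at leak/pump = 0.05 uM
with_ad = simulate_ca(amyloid_influx=0.15)   # settles at 0.20 uM
print(f"steady state: control {resting:.3f} uM, with amyloid {with_ad:.3f} uM")
```

    The composite model replaces this single linear pump with coupled ER, mitochondrial and PTP fluxes, but the same influx-versus-clearance balance drives the elevated steady state.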

  4. Fetal source extraction from magnetocardiographic recordings by dependent component analysis

    Energy Technology Data Exchange (ETDEWEB)

    Araujo, Draulio B de [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Barros, Allan Kardec [Department of Electrical Engineering, Federal University of Maranhao, Sao Luis, Maranhao (Brazil); Estombelo-Montesco, Carlos [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Zhao, Hui [Department of Medical Physics, University of Wisconsin, Madison, WI (United States); Filho, A C Roque da Silva [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Baffa, Oswaldo [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Wakai, Ronald [Department of Medical Physics, University of Wisconsin, Madison, WI (United States); Ohnishi, Noboru [Department of Information Engineering, Nagoya University (Japan)

    2005-10-07

    Fetal magnetocardiography (fMCG) has been extensively reported in the literature as a non-invasive, prenatal technique that can be used to monitor various functions of the fetal heart. However, fMCG signals often have low signal-to-noise ratio (SNR) and are contaminated by strong interference from the mother's magnetocardiogram signal. A promising, efficient tool for extracting signals, even under low SNR conditions, is blind source separation (BSS), or independent component analysis (ICA). Herein we propose an algorithm based on a variation of ICA, where the signal of interest is extracted using a time delay obtained from an autocorrelation analysis. We model the system using autoregression, and identify the signal component of interest from the poles of the autocorrelation function. We show that the method is effective in removing the maternal signal, and is computationally efficient. We also compare our results to more established ICA methods, such as FastICA.
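
    One way to sketch the delay-based extraction idea (a simplified stand-in, not the authors' exact algorithm): take the delay τ corresponding to the dominant period found in the autocorrelation, then find the unmixing vector whose output changes least over that delay. Minimizing E[(wᵀx(t) − wᵀx(t−τ))²] under unit output power is a generalized eigenvalue problem:

```python
import numpy as np

def extract_periodic_source(x, tau):
    """Extract the source most self-similar at lag tau from mixtures x (m, n).

    Minimizes E[(w.x(t) - w.x(t-tau))^2] subject to unit output power,
    i.e. the smallest generalized eigenvector of (A, C).
    """
    x = x - x.mean(axis=1, keepdims=True)
    d = x[:, tau:] - x[:, :-tau]
    A = d @ d.T / d.shape[1]          # covariance of the lag-tau difference
    C = x @ x.T / x.shape[1]          # covariance of the mixtures
    # Whiten, then use an ordinary symmetric eigendecomposition.
    evals, evecs = np.linalg.eigh(C)
    W = evecs @ np.diag(evals**-0.5) @ evecs.T    # C^{-1/2}
    _, v = np.linalg.eigh(W @ A @ W)
    w = W @ v[:, 0]                   # smallest eigenvalue -> most periodic
    return w @ x

rng = np.random.default_rng(0)
t = np.arange(6000)
s1 = np.sin(2 * np.pi * t / 100)            # target source, period 100 samples
s2 = np.sign(np.sin(2 * np.pi * t / 237))   # interfering source
mix = np.array([[1.0, 0.8], [0.6, -1.0]]) @ np.vstack([s1, s2])
mix += 0.05 * rng.standard_normal(mix.shape)

y = extract_periodic_source(mix, tau=100)
corr = abs(np.corrcoef(y, s1)[0, 1])
print(f"correlation with the period-100 source: {corr:.3f}")
```

    In the fMCG setting, τ would be set from the fetal heart period identified in the autocorrelation, so the strong maternal component, with a different period, is suppressed.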

  5. Modeling Left-Turn Driving Behavior at Signalized Intersections with Mixed Traffic Conditions

    Directory of Open Access Journals (Sweden)

    Hong Li

    2016-01-01

    Full Text Available In many developing countries, mixed traffic is the most common type of urban transportation; traffic of this type faces many major problems in traffic engineering, such as conflicts, inefficiency, and security issues. This paper focuses on the traffic engineering concerns on the driving behavior of left-turning vehicles caused by different degrees of pedestrian violations. The traffic characteristics of left-turning vehicles and pedestrians in the affected region at a signalized intersection were analyzed and a cellular-automata-based “following-conflict” driving behavior model that mainly addresses four basic behavior modes was proposed to study the conflict and behavior mechanisms of left-turning vehicles by mathematic methodologies. Four basic driving behavior modes were reproduced in computer simulations, and a logit model of the behavior mode choice was also developed to analyze the relative share of each behavior mode. Finally, the microscopic characteristics of driving behaviors and the macroscopic parameters of traffic flow in the affected region were all determined. These data are important reference for geometry and capacity design for signalized intersections. The simulation results show that the proposed models are valid and can be used to represent the behavior of left-turning vehicles in the case of conflicts with illegally crossing pedestrians. These results will have potential applications on improving traffic safety and traffic capacity at signalized intersections with mixed traffic conditions.

  6. pyAudioAnalysis: An Open-Source Python Library for Audio Signal Analysis.

    Science.gov (United States)

    Giannakopoulos, Theodoros

    2015-01-01

    Audio information plays a rather important role in the increasing digital content that is available today, resulting in a need for methodologies that automatically analyze such content: audio event recognition for home automations and surveillance systems, speech recognition, music information retrieval, multimodal analysis (e.g. audio-visual analysis of online videos for content-based recommendation), etc. This paper presents pyAudioAnalysis, an open-source Python library that provides a wide range of audio analysis procedures including: feature extraction, classification of audio signals, supervised and unsupervised segmentation and content visualization. pyAudioAnalysis is licensed under the Apache License and is available at GitHub (https://github.com/tyiannak/pyAudioAnalysis/). Here we present the theoretical background behind the wide range of the implemented methodologies, along with evaluation metrics for some of the methods. pyAudioAnalysis has already been used in several audio analysis research applications: smart-home functionalities through audio event detection, speech emotion recognition, depression classification based on audio-visual features, music segmentation, multimodal content-based movie recommendation and health applications (e.g. monitoring eating habits). The feedback provided by all these audio applications has led to practical enhancements of the library.

  7. Conflict anticipation in alcohol dependence - A model-based fMRI study of stop signal task.

    Science.gov (United States)

    Hu, Sien; Ide, Jaime S; Zhang, Sheng; Sinha, Rajita; Li, Chiang-Shan R

    2015-01-01

    Our previous work characterized altered cerebral activations during cognitive control in individuals with alcohol dependence (AD). A hallmark of cognitive control is the ability to anticipate changes and adjust behavior accordingly. Here, we employed a Bayesian model to describe trial-by-trial anticipation of the stop signal and modeled fMRI signals of conflict anticipation in a stop signal task. Our goal is to characterize the neural correlates of conflict anticipation and its relationship to response inhibition and alcohol consumption in AD. Twenty-four AD and 70 age and gender matched healthy control individuals (HC) participated in the study. fMRI data were pre-processed and modeled with SPM8. We modeled fMRI signals at trial onset with individual events parametrically modulated by estimated probability of the stop signal, p(Stop), and compared regional responses to conflict anticipation between AD and HC. To address the link to response inhibition, we regressed whole-brain responses to conflict anticipation against the stop signal reaction time (SSRT). Compared to HC (54/70), fewer AD (11/24) showed a significant sequential effect - a correlation between p(Stop) and RT during go trials - and the magnitude of sequential effect is diminished, suggesting a deficit in proactive control. Parametric analyses showed decreased learning rate and over-estimated prior mean of the stop signal in AD. In fMRI, both HC and AD responded to p(Stop) in bilateral inferior parietal cortex and anterior pre-supplementary motor area, although the magnitude of response increased in AD. In contrast, HC but not AD showed deactivation of the perigenual anterior cingulate cortex (pgACC). Furthermore, deactivation of the pgACC to increasing p(Stop) is positively correlated with the SSRT in HC but not AD. Recent alcohol consumption is correlated with increased activation of the thalamus and cerebellum in AD during conflict anticipation. The current results highlight altered proactive
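
    A minimal sketch of a trial-by-trial estimate of p(Stop) in the spirit of a dynamic belief model: a beta-Bernoulli update with exponential forgetting, so the anticipated conflict decays over runs of go trials and jumps after stop trials. The prior and decay parameters are illustrative, not the fitted values from the study:

```python
def p_stop_sequence(trials, decay=0.9, prior_mean=0.3, prior_weight=2.0):
    """Trial-by-trial P(stop) before each trial, beta-Bernoulli with forgetting.

    trials: sequence of 0 (go) / 1 (stop); decay < 1 discounts old evidence.
    """
    a = prior_mean * prior_weight          # pseudo-counts for "stop"
    b = (1.0 - prior_mean) * prior_weight  # pseudo-counts for "go"
    predictions = []
    for outcome in trials:
        predictions.append(a / (a + b))    # anticipated P(stop) this trial
        a = decay * a + outcome            # forget old evidence, add new
        b = decay * b + (1 - outcome)
    return predictions

# P(stop) decays over a run of go trials, then rebounds after stop trials.
p = p_stop_sequence([0, 0, 0, 0, 1, 1, 0, 0])
print([round(v, 2) for v in p])
```

    A sequential effect like the one tested in the paper corresponds to go-trial RT tracking these predictions; a diminished correlation, as reported for the AD group, indicates weaker proactive adjustment.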

  8. Photoacoustic imaging optimization with raw signal deconvolution and empirical mode decomposition

    Science.gov (United States)

    Guo, Chengwen; Wang, Jing; Qin, Yu; Zhan, Hongchen; Yuan, Jie; Cheng, Qian; Wang, Xueding

    2018-02-01

    The photoacoustic (PA) signal of an ideal optical absorbing particle is a single N-shape wave. PA signals of a complicated biological tissue can be considered as the combination of individual N-shape waves. However, the N-shape wave basis not only complicates the subsequent work, but also results in aliasing between adjacent micro-structures, which deteriorates the quality of the final PA images. In this paper, we propose a method to improve PA image quality through signal processing methods working directly on raw signals, including deconvolution and empirical mode decomposition (EMD). During the deconvolution procedure, the raw PA signals are deconvolved with a system-dependent point spread function (PSF) which is measured in advance. Then, EMD is adopted to adaptively re-shape the PA signals under two constraints, positive polarity and spectrum consistence. With our proposed method, the reconstructed PA images yield more detailed structural information. Micro-structures are clearly separated and revealed. To validate the effectiveness of this method, we present numerical simulations and phantom studies consisting of a densely distributed point-source model and a blood vessel model. In the future, our study might hold potential for clinical PA imaging as it can help to distinguish micro-structures in the optimized images and even measure the size of objects from deconvolved signals.
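
    The deconvolution step can be sketched with a regularized (Wiener-style) frequency-domain division by the measured PSF; a straight inverse 1/H would blow up at frequencies where the PSF response is small. The PSF shape and regularization constant below are illustrative, not the system values from the paper:

```python
import numpy as np

def wiener_deconvolve(measured, psf, noise_to_signal=1e-4):
    """Regularized frequency-domain deconvolution with a known PSF."""
    n = len(measured)
    H = np.fft.rfft(psf, n)
    Y = np.fft.rfft(measured, n)
    # Wiener filter: conj(H)/(|H|^2 + k) instead of the unstable 1/H.
    X = Y * np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
    return np.fft.irfft(X, n)

# Two point absorbers blurred (and shifted) by a Gaussian PSF.
n = 512
t = np.arange(n)
truth = np.zeros(n)
truth[100], truth[130] = 1.0, -0.7               # two point absorbers
psf = np.exp(-0.5 * ((t - 16) / 6.0) ** 2)
psf /= psf.sum()
measured = np.convolve(truth, psf, mode="full")[:n]

restored = wiener_deconvolve(measured, psf)
# Blurring displaces the peaks to indices 116/146; deconvolution restores
# sharp features back at the true positions 100 and 130.
print("blurred peak:", int(np.argmax(measured)),
      " restored peaks:", int(np.argmax(restored)), int(np.argmin(restored)))
```

    In this noiseless toy the restored features return to the true positions; with real noise, the regularization constant trades resolution against amplified high-frequency noise, which is where the subsequent EMD re-shaping step comes in.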

  9. A FRAMEWORK FOR AN OPEN SOURCE GEOSPATIAL CERTIFICATION MODEL

    Directory of Open Access Journals (Sweden)

    T. U. R. Khan

    2016-06-01

    The geospatial industry is forecasted to have enormous growth in the forthcoming years and an extended need for a well-educated workforce. Hence ongoing education and training play an important role in professional life. In parallel, in the geospatial and IT arena as well as in political discussion and legislation, Open Source solutions, open data proliferation, and the use of open standards have an increasing significance. Based on the Memorandum of Understanding between the International Cartographic Association, the OSGeo Foundation, and ISPRS, this development led to the implementation of the ICA-OSGeo-Lab initiative with its mission “Making geospatial education and opportunities accessible to all”. Discussions in this initiative and the growth and maturity of geospatial Open Source software initiated the idea to develop a framework for a worldwide applicable Open Source certification approach. Generic and geospatial certification approaches are already offered by numerous organisations, e.g., the GIS Certification Institute, GeoAcademy, ASPRS, and software vendors, e.g., Esri, Oracle, and RedHat. They focus on different fields of expertise and have different levels and ways of examination, which are offered for a wide range of fees. The development of the certification framework presented here is based on the analysis of diverse bodies of knowledge concepts, i.e., the NCGIA Core Curriculum, the URISA Body Of Knowledge, the USGIF Essential Body Of Knowledge, the “Geographic Information: Need to Know”, currently under development, and the Geospatial Technology Competency Model (GTCM). The latter provides a US American oriented list of the knowledge, skills, and abilities required of workers in the geospatial technology industry and essentially influenced the framework of certification. In addition to the theoretical analysis of existing resources the geospatial community was integrated twofold. An online survey about the relevance of Open Source was performed and

  10. A Framework for an Open Source Geospatial Certification Model

    Science.gov (United States)

    Khan, T. U. R.; Davis, P.; Behr, F.-J.

    2016-06-01

    The geospatial industry is forecast to grow enormously in the forthcoming years, with an extended need for a well-educated workforce. Hence ongoing education and training play an important role in professional life. In parallel, in the geospatial and IT arena as well as in political discussion and legislation, Open Source solutions, open data proliferation, and the use of open standards have an increasing significance. Based on the Memorandum of Understanding between the International Cartographic Association, the OSGeo Foundation, and ISPRS, this development led to the implementation of the ICA-OSGeo-Lab initiative with its mission "Making geospatial education and opportunities accessible to all". Discussions in this initiative and the growth and maturity of geospatial Open Source software initiated the idea to develop a framework for a worldwide applicable Open Source certification approach. Generic and geospatial certification approaches are already offered by numerous organisations, e.g., the GIS Certification Institute, GeoAcademy, ASPRS, and software vendors, e.g., Esri, Oracle, and RedHat. They focus on different fields of expertise and have different levels and ways of examination, which are offered for a wide range of fees. The development of the certification framework presented here is based on the analysis of diverse bodies of knowledge concepts, e.g., the NCGIA Core Curriculum, the URISA Body Of Knowledge, the USGIF Essential Body Of Knowledge, the "Geographic Information: Need to Know" curriculum, currently under development, and the Geospatial Technology Competency Model (GTCM). The latter provides a US-oriented list of the knowledge, skills, and abilities required of workers in the geospatial technology industry and essentially influenced the framework of the certification. In addition to the theoretical analysis of existing resources, the geospatial community was integrated twofold. An online survey about the relevance of Open Source was performed and evaluated with 105

  11. Noise and signal interference in optical fiber transmission systems an optimum design approach

    CERN Document Server

    Bottacchi, Stefano

    2008-01-01

    A comprehensive reference to noise and signal interference in optical fiber communications. Noise and Signal Interference in Optical Fiber Transmission Systems is a compendium on specific topics within optical fiber transmission and the optimization process of the system design. It offers a comprehensive treatment of the noise and intersymbol interference (ISI) components affecting optical fiber communication systems, with coverage of noise from the light source, the fiber, and the receiver. The ISI is modeled with a statistical approach, leading to new useful computational m

  12. Performance analysis of NOAA tropospheric signal delay model

    International Nuclear Information System (INIS)

    Ibrahim, Hassan E; El-Rabbany, Ahmed

    2011-01-01

    Tropospheric delay is one of the dominant global positioning system (GPS) errors, which degrades the positioning accuracy. Recent development in tropospheric modeling relies on implementation of more accurate numerical weather prediction (NWP) models. In North America one of the NWP-based tropospheric correction models is the NOAA Tropospheric Signal Delay Model (NOAATrop), which was developed by the US National Oceanic and Atmospheric Administration (NOAA). Because of its potential to improve the GPS positioning accuracy, the NOAATrop model became the focus of many researchers. In this paper, we analyzed the performance of the NOAATrop model and examined its effect on ionosphere-free-based precise point positioning (PPP) solution. We generated 3-year-long tropospheric zenith total delay (ZTD) data series for the NOAATrop model, the Hopfield model, and the International GNSS Service (IGS) final tropospheric correction product, respectively. These data sets were generated at ten IGS reference stations spanning Canada and the United States. We analyzed the NOAATrop ZTD data series and compared them with those of the Hopfield model. The IGS final tropospheric product was used as a reference. The analysis shows that the performance of the NOAATrop model is a function of both season (time of the year) and geographical location. However, its performance was superior to the Hopfield model in all cases. We further investigated the effect of implementing the NOAATrop model on the ionosphere-free-based PPP solution convergence and accuracy. It is shown that the use of the NOAATrop model improved the PPP solution convergence by 1%, 10% and 15% for the latitude, longitude and height components, respectively
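The record above benchmarks NOAATrop against the classic Hopfield model. As a point of reference, a minimal sketch of the standard Hopfield zenith total delay follows (not the NOAATrop model itself; the refractivity constants are the commonly cited Smith & Weintraub values, and the surface meteorological inputs are illustrative):

```python
import math

def hopfield_ztd(pressure_hpa, temp_k, e_hpa):
    """Zenith total delay [m] from the classic Hopfield model.

    pressure_hpa: total surface pressure [hPa]
    temp_k:       surface temperature [K]
    e_hpa:        water-vapour partial pressure [hPa]
    """
    # Effective heights of the dry and wet atmospheres [m]
    hd = 40136.0 + 148.72 * (temp_k - 273.16)
    hw = 11000.0
    # Surface refractivities (Smith & Weintraub constants)
    nd = 77.6 * pressure_hpa / temp_k
    nw = 3.73e5 * e_hpa / temp_k ** 2
    # Zenith delays from Hopfield's quartic refractivity profile:
    # integral of N * 1e-6 over height gives a factor of 1/5
    zhd = 1e-6 / 5.0 * nd * hd
    zwd = 1e-6 / 5.0 * nw * hw
    return zhd + zwd

# Typical mid-latitude surface values give a ZTD of roughly 2.4 m
print(round(hopfield_ztd(1013.25, 288.15, 11.0), 3))
```

The dry component dominates (about 2.3 m at sea level), which is why even a simple surface-meteorology model captures most of the delay and why NWP-based models compete mainly on the wet component.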

  13. Investigating Irregularly Patterned Deep Brain Stimulation Signal Design Using Biophysical Models

    Directory of Open Access Journals (Sweden)

    Samantha Rose Summerson

    2015-06-01

    Parkinson's disease (PD) is a neurodegenerative disorder which follows from cell loss of dopaminergic neurons in the substantia nigra pars compacta (SNc), a nucleus in the basal ganglia (BG). Deep brain stimulation (DBS) is an electrical therapy that modulates the pathological activity to treat the motor symptoms of PD. Although this therapy is currently used in clinical practice, the sufficient conditions for therapeutic efficacy are unknown. In this work we develop a model of critical motor circuit structures in the brain using biophysical cell models as the base components and then evaluate the performance of different DBS signals in this model in comparative studies of their efficacy. Biological models are an important tool for gaining insights into neural function and, in this case, serve as effective tools for investigating innovative new DBS paradigms. Experiments were performed using the hemi-parkinsonian rodent model to test the same set of signals, verifying that the model obeys physiological trends. We show that antidromic spiking from DBS of the subthalamic nucleus (STN) has a significant impact on cortical neural activity, which is frequency dependent and additionally modulated by the regularity of the stimulus pulse train used. Irregular spacing between stimulus pulses, where the amount of variability added is bounded, is shown to increase diversification of the response of basal ganglia neurons and reduce entropic noise in cortical neurons, which may be fundamentally important to restoration of information flow in the motor circuit.
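The bounded irregular pulse spacing described above can be illustrated with a hypothetical generator that jitters each inter-pulse interval uniformly within a fixed fraction of the nominal period (a sketch of the idea, not the authors' actual stimulus design; rate and jitter values are illustrative):

```python
import random

def irregular_pulse_times(rate_hz, duration_s, jitter_frac, seed=0):
    """Pulse onset times for a DBS-like train whose inter-pulse
    intervals are jittered uniformly within +/- jitter_frac of the
    nominal period, so variability is bounded and the mean rate
    is preserved."""
    rng = random.Random(seed)
    period = 1.0 / rate_hz
    t, times = 0.0, []
    while t < duration_s:
        times.append(t)
        # bounded jitter: each interval stays in [1-j, 1+j] * period
        t += period * (1.0 + rng.uniform(-jitter_frac, jitter_frac))
    return times

times = irregular_pulse_times(130.0, 1.0, 0.3)   # 130 Hz, 30 % jitter
intervals = [b - a for a, b in zip(times, times[1:])]
print(len(times), min(intervals) > 0.69 / 130.0)
```

Because the jitter is zero-mean and bounded, the long-run pulse count matches a regular 130 Hz train while no interval ever falls below 70 % of the nominal period.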

  14. Large-Signal Code TESLA: Improvements in the Implementation and in the Model

    National Research Council Canada - National Science Library

    Chernyavskiy, Igor A; Vlasov, Alexander N; Anderson, Jr., Thomas M; Cooke, Simon J; Levush, Baruch; Nguyen, Khanh T

    2006-01-01

    We describe the latest improvements made in the large-signal code TESLA, which include transformation of the code to a Fortran-90/95 version with dynamical memory allocation and extension of the model...

  15. Modeling skull's acoustic attenuation and dispersion on photoacoustic signal

    Science.gov (United States)

    Mohammadi, L.; Behnam, H.; Nasiriavanaki, M. R.

    2017-03-01

    Despite the promising results of a recent transcranial photoacoustic brain imaging technology, it has been shown that the presence of the skull severely affects the performance of this imaging modality. In this paper, we investigate the effect of the skull on generated photoacoustic signals with a mathematical model. The developed model takes into account the frequency-dependent attenuation and acoustic dispersion effects that occur with wave reflection and refraction at the skull surface. Numerical simulations based on the developed model are performed to calculate the propagation of photoacoustic waves through the skull. From the simulation results, it was found that the skull-induced distortion becomes very important and the reconstructed image would be strongly distorted without correcting these effects. In this regard, it is anticipated that an accurate quantification and modeling of the skull transmission effects would ultimately allow for skull aberration correction in transcranial photoacoustic brain imaging.
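The frequency-dependent attenuation mentioned above can be sketched by applying a power-law amplitude loss to a pulse's spectrum. This is an illustrative assumption, not the paper's model: the attenuation coefficient, exponent, and skull thickness are hypothetical values, and the dispersion and reflection terms are omitted:

```python
import numpy as np

def attenuate_through_skull(signal, fs, thickness_m=0.007,
                            alpha0=90.0, power=1.1):
    """Apply power-law amplitude attenuation alpha(f) = alpha0 * f^power
    (alpha0 in Np/m/MHz^power, illustrative value for bone) to a pulse
    that has crossed `thickness_m` of skull. Dispersion is ignored
    in this sketch."""
    n = len(signal)
    spectrum = np.fft.rfft(signal)
    f_mhz = np.fft.rfftfreq(n, d=1.0 / fs) / 1e6
    spectrum *= np.exp(-alpha0 * f_mhz ** power * thickness_m)
    return np.fft.irfft(spectrum, n)

fs = 50e6                                   # 50 MHz sampling rate
t = np.arange(1024) / fs
# Gaussian-windowed 2 MHz pulse standing in for a photoacoustic signal
pulse = np.sin(2 * np.pi * 2e6 * t) * np.exp(-((t - 5e-6) / 2e-6) ** 2)
out = attenuate_through_skull(pulse, fs)
print(out.max() < pulse.max())   # higher frequencies are damped more
```

Since the loss grows with frequency, the transmitted pulse is both weakened and low-pass filtered, which is exactly the distortion the abstract says must be corrected before image reconstruction.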

  16. Evaluation of different time domain peak models using extreme learning machine-based peak detection for EEG signal.

    Science.gov (United States)

    Adam, Asrul; Ibrahim, Zuwairie; Mokhtar, Norrima; Shapiai, Mohd Ibrahim; Cumming, Paul; Mubin, Marizan

    2016-01-01

    Various peak models have been introduced to detect and analyze peaks in the time domain analysis of electroencephalogram (EEG) signals. In general, a peak model in the time domain analysis consists of a set of signal parameters, such as amplitude, width, and slope. Models including those proposed by Dumpala, Acir, Liu, and Dingle are routinely used to detect peaks in EEG signals acquired in clinical studies of epilepsy or eye blink. The optimal peak model is the one with the most reliable peak detection performance in a particular application. A fair measure of the performance of different models requires a common and unbiased platform. In this study, we evaluate the performance of the four different peak models using the extreme learning machine (ELM)-based peak detection algorithm. We found that the Dingle model gave the best performance, with 72% accuracy in the analysis of real EEG data. Statistical analysis confirmed that the Dingle model afforded significantly better mean testing accuracy than did the Acir and Liu models, which were in the range 37-52%. Meanwhile, the Dingle model showed no significant difference compared to the Dumpala model.
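The amplitude/width/slope parameter set that these peak models share can be sketched as a simple feature extractor over local maxima. This is a hypothetical illustration of the general idea, not a verbatim implementation of any of the four models:

```python
def peak_features(x):
    """For each local maximum in x, return (index, amplitude, width,
    slope) measured against the surrounding local minima -- the kind
    of time-domain parameters peak models are built from."""
    feats = []
    for i in range(1, len(x) - 1):
        if x[i - 1] < x[i] >= x[i + 1]:          # local maximum
            # walk outward to the nearest local minima on each side
            l = i
            while l > 0 and x[l - 1] < x[l]:
                l -= 1
            r = i
            while r < len(x) - 1 and x[r + 1] < x[r]:
                r += 1
            amplitude = x[i] - min(x[l], x[r])
            width = r - l
            slope = (x[i] - x[l]) / max(i - l, 1)  # rising-edge slope
            feats.append((i, amplitude, width, slope))
    return feats

print(peak_features([0, 1, 3, 1, 0, 2, 0]))
# → [(2, 3, 4, 1.5), (5, 2, 2, 2.0)]
```

Feature vectors like these are what an ELM classifier would then score as true peaks or non-peaks, which is the common platform the study uses to compare the models.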

  17. The Design of a Fire Source in Scale-Model Experiments with Smoke Ventilation

    DEFF Research Database (Denmark)

    Nielsen, Peter Vilhelm; Brohus, Henrik; la Cour-Harbo, H.

    2004-01-01

    The paper describes the design of a fire and a smoke source for scale-model experiments with smoke ventilation. It is only possible to work with scale-model experiments where the Reynolds number is reduced compared to full scale, and it is demonstrated that special attention to the fire source...... (heat and smoke source) may improve the possibility of obtaining Reynolds number independent solutions with a fully developed flow. The paper shows scale-model experiments for the Ofenegg tunnel case. Design of a fire source for experiments with smoke ventilation in a large room and smoke movement...
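Reduced-scale fire experiments of this kind commonly rely on Froude scaling, under which the heat release rate scales with the length ratio to the power 5/2 and velocities with its square root. A sketch of those relations follows (the paper's actual design parameters are not reproduced here; the 5 MW tunnel fire is an illustrative example):

```python
def froude_scaled_hrr(q_full_kw, scale):
    """Heat release rate in a reduced-scale model under Froude scaling:
    Q_model = Q_full * (L_model / L_full)^(5/2)."""
    return q_full_kw * scale ** 2.5

def froude_scaled_velocity(v_full, scale):
    """Velocities scale with the square root of the length ratio:
    v_model = v_full * (L_model / L_full)^(1/2)."""
    return v_full * scale ** 0.5

# A hypothetical 5 MW tunnel fire reproduced at 1:20 scale needs
# only a few kW in the model
print(round(froude_scaled_hrr(5000.0, 1 / 20), 2))
```

The steep 5/2 exponent is what makes small-scale smoke experiments practical, but it also reduces the Reynolds number, which is why the abstract stresses designing the fire source so the flow stays fully developed.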

  18. Facilitating open global data use in earthquake source modelling to improve geodetic and seismological approaches

    Science.gov (United States)

    Sudhaus, Henriette; Heimann, Sebastian; Steinberg, Andreas; Isken, Marius; Vasyura-Bathke, Hannes

    2017-04-01

    In the last few years, impressive achievements have been made in improving inferences about earthquake sources by using InSAR (Interferometric Synthetic Aperture Radar) data. Several factors aided these developments. The open data basis of earthquake observations has expanded vastly with the two powerful Sentinel-1 SAR sensors up in space. Increasing computer power allows processing of large data sets for more detailed source models. Moreover, data inversion approaches for earthquake source inferences are becoming more advanced. By now, data error propagation is widely implemented, and the estimation of model uncertainties is a regular feature of reported optimum earthquake source models. Also, InSAR-derived surface displacements and seismological waveforms are combined more regularly, which requires finite rupture models instead of point-source approximations and layered medium models instead of homogeneous half-spaces. In other words, the disciplinary differences in geodetic and seismological earthquake source modelling shrink towards common source-medium descriptions and a source near-field/far-field data point of view. We explore and facilitate the combination of InSAR-derived near-field static surface displacement maps and dynamic far-field seismological waveform data for global earthquake source inferences. We join in the community efforts with the particular goal of improving crustal earthquake source inferences in generally not well instrumented areas, where often only the global backbone observations of earthquakes are available, provided by seismological broadband sensor networks and, since recently, by Sentinel-1 SAR acquisitions. We present our work on modelling standards for the combination of static and dynamic surface displacements in the source's near-field and far-field, e.g. on data and prediction error estimations as well as model uncertainty estimation. Rectangular dislocations and moment-tensor point sources are exchanged by simple planar finite
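The data-error-propagation idea behind combining near-field and far-field observations can be sketched as a covariance-weighted joint misfit, where each data set is weighted by its own covariance so that noisier observations count less. This is an illustrative objective function under assumed toy data, not the authors' exact formulation:

```python
import numpy as np

def joint_misfit(d_insar, m_insar, cov_insar, d_seis, m_seis, cov_seis):
    """Combined weighted least-squares misfit for near-field InSAR
    displacements and far-field seismic waveform samples, each
    residual weighted by its own data covariance matrix."""
    r1 = d_insar - m_insar
    r2 = d_seis - m_seis
    chi1 = r1 @ np.linalg.solve(cov_insar, r1)
    chi2 = r2 @ np.linalg.solve(cov_seis, r2)
    return chi1 + chi2

d_geo = np.array([1.0, 2.0])        # observed static displacements (toy)
m_geo = np.array([1.1, 1.9])        # model prediction (toy)
d_wav = np.array([0.5, -0.5, 0.2])  # observed waveform samples (toy)
cov_geo = 0.01 * np.eye(2)          # data covariances (assumed diagonal)
cov_wav = 0.04 * np.eye(3)
print(joint_misfit(d_geo, m_geo, cov_geo, d_wav, d_wav, cov_wav))
```

Because both terms are normalized by their covariances, the two disciplines' data enter the optimization on a common dimensionless footing, which is the premise of the joint near-field/far-field modelling standard described above.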

  19. HSP v2: Haptic Signal Processing with Extensions for Physical Modeling

    DEFF Research Database (Denmark)

    Overholt, Daniel; Kontogeorgakopoulos, Alexandros; Berdahl, Edgar

    2010-01-01

    The Haptic Signal Processing (HSP) platform aims to enable musicians to easily design and perform with digital haptic musical instruments [1]. In this paper, we present some new objects introduced in version v2 for modeling of musical dynamical systems such as resonators and vibrating strings. To our knowledge, this is the first time that these diverse physical modeling elements have all been made available for a modular, real-time haptics platform.
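A resonator of the kind mentioned above can be sketched as a damped mass-spring system advanced with semi-implicit Euler steps, the generic building block of such physical-modeling objects (a sketch, not the actual HSP implementation; all parameter values are illustrative):

```python
def resonator_step(x, v, f_ext, k=1.0, c=0.01, m=1.0, dt=1.0 / 44100):
    """One semi-implicit Euler step of a damped mass-spring resonator:
    m*a = f_ext - k*x - c*v, updating velocity before position for
    numerical stability at audio rates."""
    a = (f_ext - k * x - c * v) / m
    v = v + a * dt
    x = x + v * dt
    return x, v

# "Pluck" the resonator: initial displacement, no external force
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = resonator_step(x, v, 0.0)
print(abs(x) <= 1.0)   # damped oscillation stays bounded
```

In a haptic instrument the same update would run in the control loop, with `f_ext` driven by the measured force at the haptic device and `-k*x - c*v` fed back as the rendered force.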

  20. Research on a Small Signal Stability Region Boundary Model of the Interconnected Power System with Large-Scale Wind Power

    Directory of Open Access Journals (Sweden)

    Wenying Liu

    2015-03-01

    For the interconnected power system with large-scale wind power, the problem of small signal stability has become the bottleneck restricting the sending-out of wind power as well as the security and stability of the whole power system. Around this issue, this paper establishes a small signal stability region boundary model of the interconnected power system with large-scale wind power based on catastrophe theory, providing a new method for analyzing small signal stability. First, we analyzed the typical characteristics and the mathematical model of the interconnected power system with wind power and pointed out that conventional methods cannot directly identify the topological properties of small signal stability region boundaries. Second, to address this problem we adopted catastrophe theory and established a small signal stability region boundary model of the interconnected power system with large-scale wind power in two-dimensional power injection space, then extended it to multiple dimensions to obtain the boundary model in multidimensional power injection space. Third, we qualitatively analyzed the changes in the topological properties of the small signal stability region boundary caused by large-scale wind power integration. Finally, we built simulation models with the DIgSILENT/PowerFactory software, and the simulation results verified the correctness and effectiveness of the proposed model.
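The criterion underlying any small signal stability region is that the linearized system matrix has all eigenvalues in the left half-plane. A minimal sketch of that check with toy two-state matrices follows (illustrative only, not the paper's catastrophe-theory boundary model):

```python
import numpy as np

def small_signal_stable(a_matrix):
    """A linearized power-system model dx/dt = A x is small-signal
    stable iff every eigenvalue of A has a negative real part."""
    return bool(np.all(np.linalg.eigvals(a_matrix).real < 0))

# Toy 2-state systems: a damped oscillatory mode (stable) and a
# growing oscillatory mode (unstable); entries are illustrative
a_stable = np.array([[-0.5, 10.0],
                     [-10.0, -0.5]])
a_unstable = np.array([[0.5, 10.0],
                       [-10.0, 0.1]])
print(small_signal_stable(a_stable), small_signal_stable(a_unstable))
```

The stability region boundary in power injection space is then the set of operating points where the rightmost eigenvalue crosses the imaginary axis, which is the surface the paper's boundary model characterizes.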