Springer, P.
1993-01-01
This paper discusses how the Cascade-Correlation algorithm was parallelized so that it could be run on the Time Warp Operating System (TWOS). TWOS is a special-purpose operating system designed to run parallel discrete event simulations with maximum efficiency on parallel or distributed computers.
Generalized Canonical Time Warping.
Zhou, Feng; De la Torre, Fernando
2016-02-01
Temporal alignment of human motion has been of recent interest due to its applications in animation, tele-rehabilitation and activity recognition. This paper presents generalized canonical time warping (GCTW), an extension of dynamic time warping (DTW) and canonical correlation analysis (CCA) for temporally aligning multi-modal sequences from multiple subjects performing similar activities. GCTW extends previous work on DTW and CCA in several ways: (1) it combines CCA with DTW to align multi-modal data (e.g., video and motion capture data); (2) it extends DTW by using a linear combination of monotonic functions to represent the warping path, providing a more flexible temporal warp; (3) unlike exact DTW, which has quadratic complexity, GCTW is minimized with a proposed linear-time algorithm; (4) it allows simultaneous alignment of multiple sequences. Experimental results on aligning multi-modal data, facial expressions, motion capture data and video illustrate the benefits of GCTW. The code is available at http://humansensing.cs.cmu.edu/ctw.
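For reference, the quadratic-time DTW baseline that GCTW extends can be sketched in a few lines. This is a minimal illustration with an absolute-difference cost; the function and variable names are ours, not the authors' code.

```python
# Classic dynamic time warping: fill an accumulated-cost table over all
# (i, j) cell pairs, allowing match / insertion / deletion steps.
import numpy as np

def dtw(x, y):
    """Return the DTW distance and accumulated-cost matrix for 1-D sequences."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # Step pattern: match (diagonal), insertion, deletion.
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m], D[1:, 1:]

# A sequence aligned with a time-stretched copy of itself has distance 0.
x = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
y = np.array([0.0, 1.0, 1.0, 2.0, 2.0, 1.0, 0.0])
dist, _ = dtw(x, y)
```

GCTW replaces this exhaustive table with a warping path parametrized by a few monotonic basis functions, which is what brings the cost down from quadratic to linear time.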
Han, Renmin
2017-12-24
Long-reads, point-of-care, and PCR-free are the promises brought by nanopore sequencing. Among various steps in nanopore data analysis, the global mapping between the raw electrical current signal sequence and the expected signal sequence from the pore model serves as the key building block to base calling, reads mapping, variant identification, and methylation detection. However, the ultra-long reads of nanopore sequencing and an order of magnitude difference in the sampling speeds of the two sequences make the classical dynamic time warping (DTW) and its variants infeasible to solve the problem. Here, we propose a novel multi-level DTW algorithm, cwDTW, based on continuous wavelet transforms with different scales of the two signal sequences. Our algorithm starts from low-resolution wavelet transforms of the two sequences, such that the transformed sequences are short and have similar sampling rates. Then the peaks and nadirs of the transformed sequences are extracted to form feature sequences with similar lengths, which can be easily mapped by the original DTW. Our algorithm then recursively projects the warping path from a lower-resolution level to a higher-resolution one by building a context-dependent boundary and enabling a constrained search for the warping path in the latter. Comprehensive experiments on two real nanopore datasets on human and on Pandoraea pnomenusa, as well as two benchmark datasets from previous studies, demonstrate the efficiency and effectiveness of the proposed algorithm. In particular, cwDTW can almost always generate warping paths that are very close to the original DTW, which are remarkably more accurate than the state-of-the-art methods including FastDTW and PrunedDTW. Meanwhile, on the real nanopore datasets, cwDTW is about 440 times faster than FastDTW and 3000 times faster than the original DTW. Our program is available at https://github.com/realbigws/cwDTW.
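The multi-level idea can be illustrated with a toy coarse-to-fine scheme: align block-averaged copies of the sequences first (a crude stand-in for the paper's wavelet approximations), then confine the full-resolution search to a band around the projected coarse path. This is a simplified sketch of the strategy, not the cwDTW implementation.

```python
import numpy as np

def dtw_matrix(x, y, allowed=None):
    """Accumulated-cost DTW; if `allowed` is given, restrict search to those cells."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if allowed is not None and (i - 1, j - 1) not in allowed:
                continue
            D[i, j] = abs(x[i-1] - y[j-1]) + min(D[i-1, j-1], D[i-1, j], D[i, j-1])
    return D

def dtw_path(x, y):
    """Backtrack an optimal warping path from the full cost matrix."""
    D = dtw_matrix(x, y)
    i, j, path = len(x), len(y), []
    while (i, j) != (1, 1):
        path.append((i - 1, j - 1))
        i, j = min([(i-1, j-1), (i-1, j), (i, j-1)], key=lambda s: D[s])
    return (path + [(0, 0)])[::-1]

def coarse_band(x, y, factor=2, radius=2):
    """Align block-averaged copies, then expand the coarse path into a band."""
    cx = x[:len(x)//factor*factor].reshape(-1, factor).mean(axis=1)
    cy = y[:len(y)//factor*factor].reshape(-1, factor).mean(axis=1)
    band = set()
    for ci, cj in dtw_path(cx, cy):
        for di in range(-radius, factor + radius):
            for dj in range(-radius, factor + radius):
                band.add((ci * factor + di, cj * factor + dj))
    return band

x = np.sin(np.linspace(0, 3, 32))
y = np.sin(np.linspace(0, 3, 40) ** 1.1)   # a smoothly warped variant
exact = dtw_matrix(x, y)[-1, -1]
banded = dtw_matrix(x, y, coarse_band(x, y))[-1, -1]
```

Because the band restricts the search space, the banded distance can only be greater than or equal to the exact one; cwDTW's contribution is choosing the scales and context-dependent boundaries so that the two nearly coincide at a small fraction of the cost.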
Time Warp Operating System (TWOS)
Bellenot, Steven F.
1993-01-01
Designed to support parallel discrete-event simulation, TWOS is a complete implementation of the Time Warp mechanism, a distributed protocol for virtual time synchronization based on process rollback and message annihilation.
An, Xinliang; Wong, Willie Wai Yeung
2018-01-01
Many classical results in relativity theory concerning spherically symmetric space-times have easy generalizations to warped product space-times, with a two-dimensional Lorentzian base and arbitrary dimensional Riemannian fibers. We first give a systematic presentation of the main geometric constructions, with emphasis on the Kodama vector field and the Hawking energy; the construction is signature independent. This leads to proofs of general Birkhoff-type theorems for warped product manifolds; our theorems in particular apply to situations where the warped product manifold is not necessarily Einstein, and thus can be applied to solutions with matter content in general relativity. Next we specialize to the Lorentzian case and study the propagation of null expansions under the assumption of the dominant energy condition. We prove several non-existence results relating to the Yamabe class of the fibers, in the spirit of the black-hole topology theorem of Hawking–Galloway–Schoen. Finally we discuss the effect of the warped product ansatz on matter models. In particular we construct several cosmological solutions to the Einstein–Euler equations whose spatial geometry is generally not isotropic.
Time tunnels meet warped passages
Kushner, David
2006-01-01
Just in time for its 40th anniversary, the classic sci-fi television show "The Time Tunnel" is out on DVD. The conceit is something every engineer can relate to: a pulled plug. Scientists in an underground lab are working on a secret government experiment in time travel. (1 page)
Selecting local constraint for alignment of batch process data with dynamic time warping
DEFF Research Database (Denmark)
Spooner, Max Peter; Kold, David; Kulahci, Murat
2017-01-01
…may be interpreted as a progress signature of the batch, which may be appended to the aligned data for further analysis. For the warping function to be a realistic reflection of the progress of a batch, it is necessary to impose some constraints on the dynamic time warping algorithm, to avoid…
Directory of Open Access Journals (Sweden)
Vingron Martin
2011-08-01
Background: Comparing biological time series data across different conditions, or different specimens, is a common but still challenging task. Algorithms aligning two time series represent a valuable tool for such comparisons. While many powerful computation tools for time series alignment have been developed, they do not provide significance estimates for time shift measurements. Results: Here, we present an extended version of the original DTW algorithm that allows us to determine the significance of time shift estimates in time series alignments, the DTW-Significance (DTW-S) algorithm. The DTW-S combines important properties of the original algorithm and other published time series alignment tools: DTW-S calculates the optimal alignment for each time point of each gene, it uses interpolated time points for time shift estimation, and it does not require alignment of the time-series end points. As a new feature, we implement a simulation procedure based on parameters estimated from real time series data, on a series-by-series basis, allowing us to determine the false positive rate (FPR) and the significance of the estimated time shift values. We assess the performance of our method using simulation data and real expression time series from two published primate brain expression datasets. Our results show that this method can provide accurate and robust time shift estimates for each time point on a gene-by-gene basis. Using these estimates, we are able to uncover novel features of the biological processes underlying human brain development and maturation. Conclusions: The DTW-S provides a convenient tool for calculating accurate and robust time shift estimates at each time point for each gene, based on time series data. The estimates can be used to uncover novel biological features of the system being studied. The DTW-S is freely available as an R package TimeShift at http://www.picb.ac.cn/Comparative/data.html.
Yuan, Yuan; Chen, Yi-Ping Phoebe; Ni, Shengyu; Xu, Augix Guohua; Tang, Lin; Vingron, Martin; Somel, Mehmet; Khaitovich, Philipp
2011-08-18
Comparing biological time series data across different conditions, or different specimens, is a common but still challenging task. Algorithms aligning two time series represent a valuable tool for such comparisons. While many powerful computation tools for time series alignment have been developed, they do not provide significance estimates for time shift measurements. Here, we present an extended version of the original DTW algorithm that allows us to determine the significance of time shift estimates in time series alignments, the DTW-Significance (DTW-S) algorithm. The DTW-S combines important properties of the original algorithm and other published time series alignment tools: DTW-S calculates the optimal alignment for each time point of each gene, it uses interpolated time points for time shift estimation, and it does not require alignment of the time-series end points. As a new feature, we implement a simulation procedure based on parameters estimated from real time series data, on a series-by-series basis, allowing us to determine the false positive rate (FPR) and the significance of the estimated time shift values. We assess the performance of our method using simulation data and real expression time series from two published primate brain expression datasets. Our results show that this method can provide accurate and robust time shift estimates for each time point on a gene-by-gene basis. Using these estimates, we are able to uncover novel features of the biological processes underlying human brain development and maturation. The DTW-S provides a convenient tool for calculating accurate and robust time shift estimates at each time point for each gene, based on time series data. The estimates can be used to uncover novel biological features of the system being studied. The DTW-S is freely available as an R package TimeShift at http://www.picb.ac.cn/Comparative/data.html.
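The core quantity DTW-S builds on, a per-time-point shift read off the warping path, can be sketched as follows. This is illustrative only: it is not the TimeShift package API, and the simulation-based significance test described in the abstract is omitted.

```python
import numpy as np

def dtw_path(x, y):
    """Optimal DTW warping path between two 1-D sequences."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = abs(x[i-1] - y[j-1]) + min(D[i-1, j-1], D[i-1, j], D[i, j-1])
    i, j, path = n, m, []
    while (i, j) != (1, 1):
        path.append((i - 1, j - 1))
        i, j = min([(i-1, j-1), (i-1, j), (i, j-1)], key=lambda s: D[s])
    return (path + [(0, 0)])[::-1]

def time_shifts(x, y):
    """Median alignment offset (j - i) at each time point i of x."""
    offsets = {}
    for i, j in dtw_path(x, y):
        offsets.setdefault(i, []).append(j - i)
    return [float(np.median(offsets[i])) for i in range(len(x))]

# y is x delayed by two samples, so interior shift estimates are ~2.
t = np.arange(10)
x = np.sin(t / 2.0)
y = np.concatenate([[x[0], x[0]], x[:-2]])
shifts = time_shifts(x, y)
```

In DTW-S these raw shift estimates are then compared against shifts obtained from simulated series with matched noise parameters, which yields the false positive rate and a significance value per time point.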
Time Warp Operating System, Version 2.5.1
Bellenot, Steven F.; Gieselman, John S.; Hawley, Lawrence R.; Peterson, Judy; Presley, Matthew T.; Reiher, Peter L.; Springer, Paul L.; Tupman, John R.; Wedel, John J., Jr.; Wieland, Frederick P.;
1993-01-01
The Time Warp Operating System, TWOS, is a special-purpose computer program designed to support parallel simulation of discrete events. It is a complete implementation of the Time Warp software mechanism, a distributed protocol for virtual time synchronization based on rollback of processes and annihilation of messages. It supports simulations and other computations in which both virtual time and dynamic load balancing are used. The program utilizes the underlying resources of the operating system. It is written in the C programming language.
A Study on Efficient Robust Speech Recognition with Stochastic Dynamic Time Warping
孫, 喜浩
2014-01-01
In recent years, great progress has been made in automatic speech recognition (ASR) systems. The hidden Markov model (HMM) and dynamic time warping (DTW) are the two main algorithms that have been widely applied to ASR systems. Although the HMM technique achieves higher recognition accuracy in both clean and noisy speech environments, it needs a large word set and its algorithm is more complex to realize. Thus, more and more researchers have focused on DTW-based ASR systems. Dynamic time warping...
Incremental fuzzy C medoids clustering of time series data using dynamic time warping distance
Chen, Jingli; Wu, Shuai; Liu, Zhizhong; Chao, Hao
2018-01-01
Clustering time series data is of great significance since it can extract meaningful statistics and other characteristics. Especially in biomedical engineering, outstanding clustering algorithms for time series may help improve people's health. Considering data scale and time shifts of time series, in this paper we introduce two incremental fuzzy clustering algorithms based on a Dynamic Time Warping (DTW) distance. By adopting the Single-Pass and Online patterns, our algorithms can handle large-scale time series data by splitting it into a set of chunks that are processed sequentially. Besides, our algorithms use DTW to measure the distance between pairs of time series, encouraging higher clustering accuracy, because DTW can determine an optimal match between any two time series by stretching or compressing segments of temporal data. Our new algorithms are compared to some existing prominent incremental fuzzy clustering algorithms on 12 benchmark time series datasets. The experimental results show that the proposed approaches yield high-quality clusters and outperform all the competitors in terms of clustering accuracy. PMID:29795600
Incremental fuzzy C medoids clustering of time series data using dynamic time warping distance.
Liu, Yongli; Chen, Jingli; Wu, Shuai; Liu, Zhizhong; Chao, Hao
2018-01-01
Clustering time series data is of great significance since it can extract meaningful statistics and other characteristics. Especially in biomedical engineering, outstanding clustering algorithms for time series may help improve people's health. Considering data scale and time shifts of time series, in this paper we introduce two incremental fuzzy clustering algorithms based on a Dynamic Time Warping (DTW) distance. By adopting the Single-Pass and Online patterns, our algorithms can handle large-scale time series data by splitting it into a set of chunks that are processed sequentially. Besides, our algorithms use DTW to measure the distance between pairs of time series, encouraging higher clustering accuracy, because DTW can determine an optimal match between any two time series by stretching or compressing segments of temporal data. Our new algorithms are compared to some existing prominent incremental fuzzy clustering algorithms on 12 benchmark time series datasets. The experimental results show that the proposed approaches yield high-quality clusters and outperform all the competitors in terms of clustering accuracy.
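A compact sketch of the Single-Pass structure described here, fuzzy c-medoids over DTW distances applied chunk by chunk with medoids carried forward, might look as follows. The update scheme and parameter names are simplified assumptions, not the authors' exact method, and medoid distinctness is not enforced.

```python
import numpy as np

def dtw(x, y):
    """Classic DTW distance between two 1-D series."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = abs(x[i-1] - y[j-1]) + min(D[i-1, j-1], D[i-1, j], D[i, j-1])
    return D[n, m]

def fuzzy_c_medoids(series, c, fuzzifier=2.0, iters=10):
    """Fuzzy c-medoids on a pairwise DTW distance matrix."""
    d = np.array([[dtw(a, b) for b in series] for a in series])
    medoids = list(range(c))
    for _ in range(iters):
        dm = d[:, medoids] + 1e-12                 # distances to current medoids
        w = dm ** (-2.0 / (fuzzifier - 1.0))
        u = w / w.sum(axis=1, keepdims=True)       # fuzzy memberships
        # Each medoid moves to the series minimizing membership-weighted cost.
        medoids = [int(np.argmin(d @ (u[:, k] ** fuzzifier))) for k in range(c)]
    return [series[i] for i in medoids]

def single_pass(chunks, c):
    """Single-Pass pattern: cluster each chunk together with prior medoids."""
    medoid_series = []
    for chunk in chunks:
        medoid_series = fuzzy_c_medoids(medoid_series + list(chunk), c)
    return medoid_series

# Two chunks, each containing one "low" and one "high" series.
chunks = [[np.zeros(3), np.full(3, 5.0)],
          [np.full(3, 0.1), np.full(3, 5.1)]]
medoids = single_pass(chunks, 2)
```

In the full method each chunk would also carry membership weights for the retained medoids; here they re-enter the next chunk as ordinary series, which keeps the sketch short.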
International Nuclear Information System (INIS)
Labaria, George R.; Warrick, Abbie L.; Celliers, Peter M.; Kalantar, Daniel H.
2015-01-01
The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high-energy-density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced into an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove these nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time, affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for use in a production environment.
Energy Technology Data Exchange (ETDEWEB)
Labaria, George R. [Univ. of California, Santa Cruz, CA (United States); Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Warrick, Abbie L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Celliers, Peter M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kalantar, Daniel H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2015-01-12
The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high-energy-density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced into an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove these nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time, affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for use in a production environment.
TWOS - TIME WARP OPERATING SYSTEM, VERSION 2.5.1
Bellenot, S. F.
1994-01-01
The Time Warp Operating System (TWOS) is a special-purpose operating system designed to support parallel discrete-event simulation. TWOS is a complete implementation of the Time Warp mechanism, a distributed protocol for virtual time synchronization based on process rollback and message annihilation. Version 2.5.1 supports simulations and other computations using both virtual time and dynamic load balancing; it does not support general time-sharing or multi-process jobs using conventional message synchronization and communication. The program utilizes the underlying operating system's resources. TWOS runs a single simulation at a time, executing it concurrently on as many processors of a distributed system as are allocated. The simulation needs only to be decomposed into objects (logical processes) that interact through time-stamped messages. TWOS provides transparent synchronization. The user does not have to add any more special logic to aid in synchronization, nor give any synchronization advice, nor even understand much about how the Time Warp mechanism works. The Time Warp Simulator (TWSIM) subdirectory contains a sequential simulation engine that is interface compatible with TWOS. This means that an application designer and programmer who wish to use TWOS can prototype code on TWSIM on a single processor and/or workstation before having to deal with the complexity of working on a distributed system. TWSIM also provides statistics about the application which may be helpful for determining the correctness of an application and for achieving good performance on TWOS. Version 2.5.1 has an updated interface that is not compatible with 2.0. The program's user manual assists the simulation programmer in the design, coding, and implementation of discrete-event simulations running on TWOS. The manual also includes a practical user's guide to the TWOS application benchmark, Colliding Pucks. TWOS supports simulations written in the C programming language. It is designed
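The rollback behaviour that TWOS makes transparent can be illustrated with a toy logical process: events execute optimistically with state saving, and a straggler message (timestamped in the process's past) forces a rollback and orderly re-execution. Anti-messages and GVT are omitted; this is purely illustrative, not TWOS code.

```python
class LogicalProcess:
    """A single Time-Warp-style logical process with checkpointing."""

    def __init__(self):
        self.lvt = 0               # local virtual time
        self.state = 0             # toy state: running sum of event payloads
        self.saved = [(0, 0)]      # checkpoints: (lvt, state)
        self.processed = []        # executed events, as (timestamp, payload)

    def execute(self, ts, payload):
        self.lvt = ts
        self.state += payload
        self.processed.append((ts, payload))
        self.saved.append((self.lvt, self.state))

    def receive(self, ts, payload):
        if ts < self.lvt:          # straggler: roll back, then re-execute
            keep = [(t, p) for t, p in self.processed if t < ts]
            redo = [(t, p) for t, p in self.processed if t >= ts]
            self.saved = self.saved[:len(keep) + 1]
            self.lvt, self.state = self.saved[-1]
            self.processed = keep
            for event in sorted(redo + [(ts, payload)]):
                self.execute(*event)
        else:
            self.execute(ts, payload)

lp = LogicalProcess()
lp.receive(10, 1)
lp.receive(20, 2)
lp.receive(15, 5)   # straggler: rolls back past t=20, replays in order
```

After the straggler, the process state is exactly what sequential in-order execution would have produced, which is the correctness guarantee Time Warp provides without any user-written synchronization logic.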
A time warping approach to multiple sequence alignment.
Arribas-Gil, Ana; Matias, Catherine
2017-04-25
We propose an approach for multiple sequence alignment (MSA) derived from the dynamic time warping viewpoint and recent techniques of curve synchronization developed in the context of functional data analysis. Starting from pairwise alignments of all the sequences (viewed as paths in a certain space), we construct a median path that represents the MSA we are looking for. We establish a proof of concept that our method could be an interesting ingredient to include in refined MSA techniques. We present a simple synthetic experiment as well as the study of a benchmark dataset, together with comparisons with two widely used MSA software packages.
Time warp operating system version 2.7 internals manual
1992-01-01
The Time Warp Operating System (TWOS) is an implementation of the Time Warp synchronization method proposed by David Jefferson. In addition, it serves as an actual platform for running discrete event simulations. The code comprising TWOS can be divided into several different sections. TWOS typically relies on an existing operating system to furnish some very basic services. This existing operating system is referred to as the Base OS. The existing operating system varies depending on the hardware TWOS is running on. It is Unix on the Sun workstations, Chrysalis or Mach on the Butterfly, and Mercury on the Mark 3 Hypercube. The base OS could be an entirely new operating system, written to meet the special needs of TWOS, but, to this point, existing systems have been used instead. The base OS's used for TWOS on various platforms are not discussed in detail in this manual, as they are well covered in their own manuals. Appendix G discusses the interface between one such OS, Mach, and TWOS.
MyDTW - Dynamic Time Warping program for stratigraphical time series
Kotov, Sergey; Paelike, Heiko
2017-04-01
One of the general tasks in many geological disciplines is matching one time or space signal to another. It can be classical correlation between two cores or cross-sections in sedimentology or marine geology. For example, tuning a paleoclimatic signal to a target curve, driven by variations in the astronomical parameters, is a powerful technique to construct accurate time scales. However, these methods can be rather time-consuming and can take hours of routine work even with the help of special semi-automatic software. Therefore, different approaches to automate the process have been developed during the last decades. Some of them are based on classical statistical cross-correlations, such as the 'Correlator' after Olea [1]. Others use modern ideas of dynamic programming. A good example is the algorithm developed by Lisiecki and Lisiecki [2] or the dynamic time warping based algorithm after Pälike [3]. We introduce here an algorithm and computer program which also stem from the dynamic time warping algorithm class. Unlike the algorithm of Lisiecki and Lisiecki, MyDTW does not lean on a set of penalties to follow geological logic, but on a special internal structure and specific constraints. It differs also from [3] in the basic ideas of implementation and constraint design. The algorithm is implemented as a computer program with a graphical user interface using Free Pascal and the Lazarus IDE, and is available for Windows, Mac OS, and Linux. Examples with synthetic and real data are demonstrated. The program is available for free download at http://www.marum.de/Sergey_Kotov.html . References: 1. Olea, R.A. Expert systems for automated correlation and interpretation of wireline logs // Math Geol (1994) 26: 879. doi:10.1007/BF02083420 2. Lisiecki L. and Lisiecki P. Application of dynamic programming to the correlation of paleoclimate records // Paleoceanography (2002), Volume 17, Issue 4, pp. 1-1, CiteID 1049, doi: 10.1029/2001PA000733 3. Pälike, H. Extending the
Cough Recognition Based on Mel Frequency Cepstral Coefficients and Dynamic Time Warping
Zhu, Chunmei; Liu, Baojun; Li, Ping
Cough recognition provides important clinical information for the treatment of many respiratory diseases, but the assessment of cough frequency over a long period of time remains unsatisfactory for either clinical or research purposes. In this paper, given the advantages of dynamic time warping (DTW) and the characteristics of cough recognition, an attempt is made to adopt DTW as the recognition algorithm for cough recognition. The process of cough recognition based on mel frequency cepstral coefficients (MFCC) and DTW is introduced. Experimental results on testing samples from 3 subjects show that acceptable cough recognition performance is obtained by DTW with a small training set.
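The recognition step described, matching an input feature sequence against labeled templates by DTW, can be sketched as follows. Toy frames stand in for MFCC vectors here; MFCC extraction itself is omitted, and all names are illustrative.

```python
import numpy as np

def dtw(x, y):
    """DTW over sequences of feature vectors, with Euclidean frame distance."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]

def classify(sample, templates):
    """Assign the label of the DTW-nearest template."""
    return min(templates, key=lambda t: dtw(sample, t[1]))[0]

# Toy "feature frames" standing in for MFCC vectors.
templates = [("cough",  np.array([[0.0], [1.0], [2.0]])),
             ("speech", np.array([[0.0], [0.0], [0.0]]))]
sample = np.array([[0.0], [0.5], [1.0], [2.0]])   # a stretched cough-like event
label = classify(sample, templates)
```

Because DTW absorbs the stretching and compression of the sample relative to the template, only a small set of labeled templates is needed, which matches the small-training-set result reported above.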
A rotating and warping projector/backprojector for fan-beam and cone-beam iterative algorithm
International Nuclear Information System (INIS)
Zeng, G.L.; Hsieh, Y.L.; Gullberg, G.T.
1994-01-01
A rotating-and-warping projector/backprojector is proposed for iterative algorithms used to reconstruct fan-beam and cone-beam single photon emission computed tomography (SPECT) data. The development of a new projector/backprojector for implementing attenuation, geometric point response, and scatter models is motivated by the need to reduce the computation time yet preserve the fidelity of the corrected reconstruction. At each projection angle, the projector/backprojector first rotates the image volume so that the pixelized cube remains parallel to the detector, and then warps the image volume so that the fan-beam and cone-beam rays are converted into parallel rays. In the authors' implementation, these two steps are combined so that the interpolation of voxel values is performed only once. The projection operation is achieved by a simple weighted summation, and the backprojection operation is achieved by copying weighted projection array values to the image volume. An advantage of this projector/backprojector is that the system point response function can be deconvolved via the Fast Fourier Transform using the shift-invariant property of the point response when the voxel-to-detector distance is constant. The fan-beam and cone-beam rotating-and-warping projector/backprojector is applied to SPECT data, showing improved resolution.
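The rotate-then-project idea can be shown in a toy 2-D version: rotate the image so the rays run parallel to an axis, then project by summation along that axis. The fan/cone-to-parallel warping step and the attenuation/point-response weights are omitted, and nearest-neighbour sampling stands in for the paper's combined rotate-and-warp interpolation; this is an illustration of the structure, not the authors' implementation.

```python
import numpy as np

def rotate_nn(img, theta):
    """Rotate a 2-D image about its center by nearest-neighbour resampling."""
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse-rotate output coordinates back into the input image.
    c, s = np.cos(theta), np.sin(theta)
    yi = np.round(c * (ys - cy) - s * (xs - cx) + cy).astype(int)
    xi = np.round(s * (ys - cy) + c * (xs - cx) + cx).astype(int)
    out = np.zeros_like(img)
    ok = (yi >= 0) & (yi < h) & (xi >= 0) & (xi < w)
    out[ok] = img[yi[ok], xi[ok]]
    return out

def project(img, theta):
    """Rotate so rays are axis-parallel, then project by column summation
    (the paper's weighted summation, here with unit weights)."""
    return rotate_nn(img, theta).sum(axis=0)

img = np.zeros((5, 5))
img[1, 2] = 1.0   # a single unit impulse
```

Backprojection in this scheme is the transpose: copy each (weighted) projection value back along its column, then rotate the volume back by -theta.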
Energy Technology Data Exchange (ETDEWEB)
Veiga, Catarina, E-mail: catarina.veiga.11@ucl.ac.uk; Royle, Gary [Radiation Physics Group, Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT (United Kingdom); Lourenço, Ana Mónica [Radiation Physics Group, Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, United Kingdom and Acoustics and Ionizing Radiation Team, National Physical Laboratory, Teddington TW11 0LW (United Kingdom); Mouinuddin, Syed [Department of Radiotherapy, University College London Hospital, London NW1 2BU (United Kingdom); Herk, Marcel van [Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam 1066 CX (Netherlands); Modat, Marc; Ourselin, Sébastien; McClelland, Jamie R. [Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT (United Kingdom)
2015-02-15
Purpose: The aims of this work were to evaluate the performance of several deformable image registration (DIR) algorithms implemented in our in-house software (NiftyReg) and the uncertainties inherent to using different algorithms for dose warping. Methods: The authors describe a DIR based adaptive radiotherapy workflow, using CT and cone-beam CT (CBCT) imaging. The transformations that mapped the anatomy between the two time points were obtained using four different DIR approaches available in NiftyReg. These included a standard unidirectional algorithm and more sophisticated bidirectional ones that encourage or ensure inverse consistency. The forward (CT-to-CBCT) deformation vector fields (DVFs) were used to propagate the CT Hounsfield units and structures to the daily geometry for “dose of the day” calculations, while the backward (CBCT-to-CT) DVFs were used to remap the dose of the day onto the planning CT (pCT). Data from five head and neck patients were used to evaluate the performance of each implementation based on geometrical matching, physical properties of the DVFs, and similarity between warped dose distributions. Geometrical matching was verified in terms of dice similarity coefficient (DSC), distance transform, false positives, and false negatives. The physical properties of the DVFs were assessed calculating the harmonic energy, determinant of the Jacobian, and inverse consistency error of the transformations. Dose distributions were displayed on the pCT dose space and compared using dose difference (DD), distance to dose difference, and dose volume histograms. Results: All the DIR algorithms gave similar results in terms of geometrical matching, with an average DSC of 0.85 ± 0.08, but the underlying properties of the DVFs varied in terms of smoothness and inverse consistency. When comparing the doses warped by different algorithms, we found a root mean square DD of 1.9% ± 0.8% of the prescribed dose (pD) and that an average of 9% ± 4% of
Overcoming the Educational Time Warp: Anticipating a Different Future
Directory of Open Access Journals (Sweden)
Garry Jacobs
2015-10-01
Education abridges the time required for individual and social progress by preserving and propagating the essence of human experience. It delivers to youth the accumulated knowledge of countless past generations in an organized and abridged form, so that future generations can start off with all the capacities acquired by their predecessors. However, today education confronts a serious dilemma. We are living in an educational time warp. There is a growing gap between contemporary human experience and what is taught in our educational system, and that gap is widening rapidly with each passing year. Today humanity confronts challenges of unprecedented scope, magnitude and intensity. The incremental development of educational content and pedagogy in recent decades has not kept up with the ever-accelerating pace of technological and social evolution. Education is also subject to a generational time warp resulting from the fact that many of today's teachers were educated decades ago, during very different times and based on different values and perspectives. The challenge of preparing youth for the future is exacerbated by the fact that the future for which we are educating youth does not yet exist and to a large extent is unknown or unknowable. The resulting gap between the content of education and societal needs inhibits our capacity to anticipate and effectively respond to social problems. All these factors argue for a major reorientation of educational content and pedagogy from transmission of acquired knowledge based on past experience to development of the knowledge, skills and capacities of personality needed in a future we cannot clearly envision. We may not be able to anticipate the precise nature of the future, but we can provide an education based on the understanding that it will be very different from the present. In terms of content, the emphasis needs to shift from facts regarding the actual state of affairs in the past, present and
Evaluation of oil biodegradation using time warping and PCA
International Nuclear Information System (INIS)
Christensen, J.H.; Hansen, A.B.; Andersen, O.
2005-01-01
The effects of biodegradation on the composition of stranded oil after the Baltic Carrier oil spill in March 2001 were evaluated using a newly developed multivariate statistical methodology. Gas chromatography and mass spectrometry provided data on the oil compounds, and oil biodegradation was determined by applying weighted least squares principal component analysis to the preprocessed chromatograms of methylphenanthrenes and methyldibenzothiophenes. One principal component explained 46 per cent of the variation in the complete data set. Samples collected immediately after the spill and 2.5 months after the spill did not exhibit changes in isomer composition. However, the isomer patterns changed in samples collected between 6.5 and 16.5 months after the spill. Samples collected after 8.5 months were the most greatly affected. An evaluation of the degradation patterns suggests that time warping and multivariate statistical methods can successfully identify links between spill samples and can determine how chemical composition will respond to biodegradation processes. 27 refs., 1 tab., 3 figs.
Time-warp invariant pattern detection with bursting neurons
International Nuclear Information System (INIS)
Gollisch, Tim
2008-01-01
Sound patterns are defined by the temporal relations of their constituents, individual acoustic cues. Auditory systems need to extract these temporal relations to detect or classify sounds. In various cases, ranging from human speech to communication signals of grasshoppers, this pattern detection has been found to display invariance to temporal stretching or compression of the sound signal ('linear time-warp invariance'). In this work, a four-neuron network model is introduced, designed to solve such a detection task for the example of grasshopper courtship songs. As an essential ingredient, the network contains neurons with intrinsic bursting dynamics, which allow them to encode durations between acoustic events in short, rapid sequences of spikes. As shown by analytical calculations and computer simulations, these neuronal dynamics result in a powerful mechanism for temporal integration. Finally, the network reads out the encoded temporal information by detecting equal activity of two such bursting neurons. This leads to the recognition of rhythmic patterns independent of temporal stretching or compression
Dynamic time warping and machine learning for signal quality assessment of pulsatile signals
International Nuclear Information System (INIS)
Li, Q; Clifford, G D
2012-01-01
In this work, we describe a beat-by-beat method for assessing the clinical utility of pulsatile waveforms, primarily recorded from cardiovascular blood volume or pressure changes, concentrating on the photoplethysmogram (PPG). Physiological blood flow is nonstationary, with pulses changing in height, width and morphology due to changes in heart rate, cardiac output, sensor type and hardware or software pre-processing requirements. Moreover, considerable inter-individual and sensor-location variability exists. Simple template matching methods are therefore inappropriate, and a patient-specific adaptive initialization is required. We introduce dynamic time warping to stretch each beat to match a running template and combine it with several other features related to signal quality, including correlation and the percentage of the beat that appeared to be clipped. The features were then presented to a multi-layer perceptron neural network to learn the relationships between the parameters in the presence of good- and bad-quality pulses. An expert-labeled database of 1055 segments of PPG, each 6 s long, recorded from 104 separate critical care admissions during both normal and verified arrhythmic events, was used to train and test our algorithms. An accuracy of 97.5% on the training set and 95.2% on the test set was found. The algorithm could be deployed as a stand-alone signal quality assessment algorithm for vetting the clinical utility of PPG traces or any similar quasi-periodic signal. (paper)
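Many of the methods collected here build on the same textbook DTW recurrence. As a point of reference, here is a minimal, self-contained sketch (a generic illustration, not any author's code; the function name and absolute-difference local cost are choices made here):

```python
def dtw_distance(x, y):
    """Textbook dynamic-time-warping distance between two 1-D sequences.

    D[i][j] is the minimum cumulative cost of aligning x[:i] with y[:j];
    each cell extends the warping path by a diagonal match, an insertion,
    or a deletion, so the series may stretch or compress in time."""
    n, m = len(x), len(y)
    inf = float("inf")
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])          # local distance
            D[i][j] = cost + min(D[i - 1][j],        # x stretches
                                 D[i][j - 1],        # y stretches
                                 D[i - 1][j - 1])    # match
    return D[n][m]

# A time-warped copy of a pulse aligns with zero cost:
print(dtw_distance([0, 1, 2, 3], [0, 0, 1, 2, 2, 3]))  # → 0.0
```

The quadratic time and memory of this recurrence is exactly what motivates the pruned, coarse-to-fine, and multi-level variants (FastDTW, PrunedDTW, cwDTW) discussed in these records.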
Time-dependent gravitating solitons in five dimensional warped space-times
Giovannini, Massimo
2007-01-01
Time-dependent soliton solutions are explicitly derived in a five-dimensional theory endowed with one (warped) extra dimension. Some of the obtained geometries, everywhere well defined and technically regular, smoothly interpolate between two five-dimensional anti-de Sitter space-times for a fixed value of the conformal time coordinate. Time-dependent solutions containing both topological and non-topological sectors are also obtained. Supplementary degrees of freedom can also be included and, in this case, the resulting multi-soliton solutions may describe time-dependent kink-antikink systems.
Rai, Shesh N; Trainor, Patrick J; Khosravi, Farhad; Kloecker, Goetz; Panchapakesan, Balaji
2016-01-01
The development of biosensors that produce time series data will facilitate improvements in biomedical diagnostics and in personalized medicine. The time series produced by these devices often contains characteristic features arising from biochemical interactions between the sample and the sensor. To use such characteristic features for determining sample class, similarity-based classifiers can be utilized. However, the construction of such classifiers is complicated by the variability in the time domains of such series that renders the traditional distance metrics such as Euclidean distance ineffective in distinguishing between biological variance and time domain variance. The dynamic time warping (DTW) algorithm is a sequence alignment algorithm that can be used to align two or more series to facilitate quantifying similarity. In this article, we evaluated the performance of DTW distance-based similarity classifiers for classifying time series that mimic electrical signals produced by nanotube biosensors. Simulation studies demonstrated the positive performance of such classifiers in discriminating between time series containing characteristic features that are obscured by noise in the intensity and time domains. We then applied a DTW distance-based k-nearest neighbors classifier to distinguish the presence/absence of a mesenchymal biomarker in cancer cells in buffy coats in a blinded test. Using a train-test approach, we found that the classifier had high sensitivity (90.9%) and specificity (81.8%) in differentiating between EpCAM-positive MCF7 cells spiked in buffy coats and those in plain buffy coats.
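A DTW-distance k-nearest-neighbour classifier of the kind described above can be sketched as follows (a generic illustration under simplifying assumptions: 1-D series, absolute-difference cost, majority vote; not the authors' implementation):

```python
from collections import Counter

def dtw(x, y):
    """Quadratic-time cumulative-cost DTW between two 1-D sequences."""
    inf = float("inf")
    n, m = len(x), len(y)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = abs(x[i - 1] - y[j - 1]) + min(
                D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def knn_classify(query, train, k=1):
    """Label a series by majority vote over its k DTW-nearest neighbours.

    `train` is a list of (series, label) pairs; DTW absorbs the
    time-domain variance so the vote reflects shape similarity rather
    than sampling or timing jitter."""
    dists = sorted((dtw(query, s), label) for s, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

A time-stretched pulse is still matched to the pulse template because the warping path absorbs the stretch, which is precisely why Euclidean distance is replaced by DTW here.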
Time travel and warp drives a scientific guide to shortcuts through time and space
Everett, Allen
2012-01-01
Sci-fi makes it look so easy. Receive a distress call from Alpha Centauri? No problem: punch the warp drive and you're there in minutes. Facing a catastrophe that can't be averted? Just pop back in the timestream and stop it before it starts. But for those of us not lucky enough to live in a science-fictional universe, are these ideas merely flights of fancy—or could it really be possible to travel through time or take shortcuts between stars? Cutting-edge physics may not be able to answer those questions yet, but it does offer up some tantalizing possibilities. In Time Travel and W
A Dynamic Time Warping Approach to Real-Time Activity Recognition for Food Preparation
Pham, Cuong; Plötz, Thomas; Olivier, Patrick
We present a dynamic time warping based activity recognition system for the analysis of low-level food preparation activities. Accelerometers embedded into kitchen utensils provide continuous sensor data streams while people are using them for cooking. The recognition framework analyzes frames of contiguous sensor readings in real-time with low latency. It thereby adapts to the idiosyncrasies of utensil use by automatically maintaining a template database. We demonstrate the effectiveness of the classification approach by a number of real-world practical experiments on a publicly available dataset. The adaptive system shows superior performance compared to a static recognizer. Furthermore, we demonstrate the generalization capabilities of the system by gradually reducing the amount of training samples. The system achieves excellent classification results even if only a small number of training samples is available, which is especially relevant for real-world scenarios.
Yang, Licai; Shen, Jun; Bao, Shudi; Wei, Shoushui
2013-10-01
To address the problems of identification performance and algorithm complexity, we proposed a piecewise linear representation and dynamic time warping (PLR-DTW) method for ECG biometric identification. First we detected R peaks to obtain the heartbeats after denoising preprocessing. Then we used the PLR method to keep the important information of an ECG signal segment while reducing the data dimension at the same time. The improved DTW method was used for similarity measurements between the test data and the templates. The performance evaluation was carried out on two ECG databases: PTB and MIT-BIH. The analysis results showed that, compared to the discrete wavelet transform method, the proposed PLR-DTW method achieved an accuracy rate that was nearly 8% higher and saved about 30% of the operation time, demonstrating that the proposed method provides better performance.
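The idea behind PLR, shrinking each heartbeat before the DTW comparison, can be illustrated with a deliberately naive uniform-split version (the paper's PLR is adaptive; this equal-width stand-in is hypothetical and for illustration only):

```python
def plr(x, n_segments):
    """Equal-width piecewise linear representation: keep only the segment
    breakpoints as (index, value) pairs, so the later DTW comparison runs
    on a much shorter series. (A real PLR for ECG places breakpoints
    adaptively at salient points; this uniform split is a stand-in.)"""
    n = len(x)
    idx = sorted({round(k * (n - 1) / n_segments)
                  for k in range(n_segments + 1)})
    return [(i, x[i]) for i in idx]
```

Because DTW's cost grows with the product of the two series lengths, reducing each beat to a handful of breakpoints is what buys the kind of operation-time saving the abstract reports.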
Dynamic Time Warping Distance Method for Similarity Test of Multipoint Ground Motion Field
Directory of Open Access Journals (Sweden)
Yingmin Li
2010-01-01
Full Text Available The reasonableness of artificial multi-point ground motions and the identification of abnormal records in seismic array observations are two important issues in the application and analysis of multi-point ground motion fields. Based on the dynamic time warping (DTW) distance method, this paper discusses the application of similarity measurement in the similarity analysis of simulated multi-point ground motions and actual seismic array records. Analysis results show that the DTW distance method not only quantitatively reflects the similarity of the simulated ground motion field, but also offers advantages in clustering analysis and singularity recognition of actual multi-point ground motion fields.
Sistem Gesture Accelerometer dengan Metode Fast Dynamic Time Warping (FastDTW
Directory of Open Access Journals (Sweden)
Sam Farisa Chaerul Haviana
2016-01-01
Full Text Available In the modern environment, the interaction between humans and computers requires a more natural form of interaction. Therefore, it is important to be able to build a system that can meet these demands, such as a hand gesture recognition system that creates a more natural form of interaction. This study aims to design a smartphone accelerometer gesture system as a human-computer interaction interface using FastDTW (Fast Dynamic Time Warping). The result of this study is a form of gesture interaction, implemented in a system that recognizes human hand movements from a smartphone accelerometer and generates commands to run media player application functions as a case study. FastDTW, a development of the Dynamic Time Warping (DTW) method, computes faster than DTW with accuracy approaching that of DTW. In the tests, FastDTW showed a fairly high accuracy of 86% and a better computing speed than DTW. Keywords: Human and Computer Interaction, Accelerometer-based gesture, FastDTW, Media player application function
Nguyen, An Hung; Guillemette, Thomas; Lambert, Andrew J.; Pickering, Mark R.; Garratt, Matthew A.
2017-09-01
Image registration is a fundamental image processing technique. It is used to spatially align two or more images that have been captured at different times, from different sensors, or from different viewpoints. There have been many algorithms proposed for this task. The most common of these being the well-known Lucas-Kanade (LK) and Horn-Schunck approaches. However, the main limitation of these approaches is the computational complexity required to implement the large number of iterations necessary for successful alignment of the images. Previously, a multi-pass image interpolation algorithm (MP-I2A) was developed to considerably reduce the number of iterations required for successful registration compared with the LK algorithm. This paper develops a kernel-warping algorithm (KWA), a modified version of the MP-I2A, which requires fewer iterations to successfully register two images and less memory space for the field-programmable gate array (FPGA) implementation than the MP-I2A. These reductions increase feasibility of the implementation of the proposed algorithm on FPGAs with very limited memory space and other hardware resources. A two-FPGA system rather than single FPGA system is successfully developed to implement the KWA in order to compensate insufficiency of hardware resources supported by one FPGA, and increase parallel processing ability and scalability of the system.
Directory of Open Access Journals (Sweden)
Mohammad Iqbal
2015-11-01
Full Text Available ABSTRACT The Dynamic Time Warping (DTW) algorithm is widely used in research, including in the field of sign language. DTW is a template-matching algorithm for measuring the similarity of two sequential (time series) data that differ in timing and speed. This study presents an implementation of the DTW algorithm for offline recognition of Indonesian sign language (Sistem Isyarat Bahasa Indonesia, SIBI). The dataset used in this study comprises 900 samples across 50 sign-word classes, with 3 samples per class used as templates and 15 samples per class used for testing. The test results show a recognition rate (accuracy) of 89.73%. The average time required to recognize one test sample was 654.59 milliseconds, using 3 templates per class (150 templates in total). Keywords: recognition, offline, SIBI, Indonesian sign language, android.
Prediction of regulatory gene pairs using dynamic time warping and gene ontology.
Yang, Andy C; Hsu, Hui-Huang; Lu, Ming-Da; Tseng, Vincent S; Shih, Timothy K
2014-01-01
Selecting informative genes is the most important task for data analysis on microarray gene expression data. In this work, we aim at identifying regulatory gene pairs from microarray gene expression data. However, microarray data often contain multiple missing expression values. Missing value imputation is thus needed before further processing for regulatory gene pairs becomes possible. We develop a novel approach to first impute missing values in microarray time series data by combining k-Nearest Neighbour (KNN), Dynamic Time Warping (DTW) and Gene Ontology (GO). After missing values are imputed, we then perform gene regulation prediction based on our proposed DTW-GO distance measurement of gene pairs. Experimental results show that our approach is more accurate when compared with existing missing value imputation methods on real microarray data sets. Furthermore, our approach can also discover more regulatory gene pairs that are known in the literature than other methods.
Le, Long N; Jones, Douglas L
2018-03-01
Audio classification techniques often depend on the availability of a large labeled training dataset for successful performance. However, in many application domains of audio classification (e.g., wildlife monitoring), obtaining labeled data is still a costly and laborious process. Motivated by this observation, a technique is proposed to efficiently learn a clean template from a few labeled, but likely corrupted (by noise and interferences), data samples. This learning can be done efficiently via tensorial dynamic time warping on the articulation index-based time-frequency representations of audio data. The learned template can then be used in audio classification following the standard template-based approach. Experimental results show that the proposed approach outperforms both (1) the recurrent neural network approach and (2) the state-of-the-art in the template-based approach on a wildlife detection application with few training samples.
Solid waste bin detection and classification using Dynamic Time Warping and MLP classifier
Energy Technology Data Exchange (ETDEWEB)
Islam, Md. Shafiqul, E-mail: shafique@eng.ukm.my [Dept. of Electrical, Electronic and Systems Engineering, Universiti Kebangsaan Malaysia, Bangi 43600, Selangore (Malaysia); Hannan, M.A., E-mail: hannan@eng.ukm.my [Dept. of Electrical, Electronic and Systems Engineering, Universiti Kebangsaan Malaysia, Bangi 43600, Selangore (Malaysia); Basri, Hassan [Dept. of Civil and Structural Engineering, Universiti Kebangsaan Malaysia, Bangi 43600, Selangore (Malaysia); Hussain, Aini; Arebey, Maher [Dept. of Electrical, Electronic and Systems Engineering, Universiti Kebangsaan Malaysia, Bangi 43600, Selangore (Malaysia)
2014-02-15
Highlights: • Solid waste bin level detection using Dynamic Time Warping (DTW). • Gabor wavelet filter is used to extract the solid waste image features. • Multi-Layer Perceptron classifier network is used for bin image classification. • The classification performance evaluated by ROC curve analysis. - Abstract: The increasing requirement for Solid Waste Management (SWM) has become a significant challenge for municipal authorities. A number of integrated systems and methods have been introduced to overcome this challenge. Many researchers have aimed to develop an ideal SWM system, including approaches involving software-based routing, Geographic Information Systems (GIS), Radio-frequency Identification (RFID), or sensor intelligent bins. Image processing solutions for Solid Waste (SW) collection have also been developed; however, when capturing the bin image, it is challenging to position the camera so that the bin area is centered in the image. As yet, there is no ideal system which can correctly estimate the amount of SW. This paper briefly discusses an efficient image processing solution to overcome these problems. Dynamic Time Warping (DTW) was used for detecting and cropping the bin area and Gabor wavelet (GW) was introduced for feature extraction of the waste bin image. Image features were used to train the classifier. A Multi-Layer Perceptron (MLP) classifier was used to classify the waste bin level and estimate the amount of waste inside the bin. The area under the Receiver Operating Characteristic (ROC) curve was used to statistically evaluate classifier performance. The results of this developed system are comparable to those of previous image-processing-based systems. The system demonstration using DTW with GW for feature extraction and an MLP classifier led to promising results with respect to the accuracy of waste level estimation (98.50%). The application can be used to optimize the routing of waste collection based on the estimated bin level.
LittleQuickWarp: an ultrafast image warping tool.
Qu, Lei; Peng, Hanchuan
2015-02-01
Warping images into a standard coordinate space is critical for many image computing related tasks. However, for multi-dimensional and high-resolution images, an accurate warping operation itself is often very expensive in terms of computer memory and computational time. For high-throughput image analysis studies such as brain mapping projects, it is desirable to have high performance image warping tools that are compatible with common image analysis pipelines. In this article, we present LittleQuickWarp, a swift and memory efficient tool that boosts 3D image warping performance dramatically and at the same time has high warping quality similar to the widely used thin plate spline (TPS) warping. Compared to the TPS, LittleQuickWarp can improve the warping speed 2-5 times and reduce the memory consumption 6-20 times. We have implemented LittleQuickWarp as an Open Source plug-in program on top of the Vaa3D system (http://vaa3d.org). The source code and a brief tutorial can be found in the Vaa3D plugin source code repository. Copyright © 2014 Elsevier Inc. All rights reserved.
Fault Diagnosis for Compensating Capacitors of Jointless Track Circuit Based on Dynamic Time Warping
Directory of Open Access Journals (Sweden)
Wei Dong
2014-01-01
Full Text Available Aiming at the problem of online fault diagnosis for compensating capacitors of jointless track circuits, a dynamic time warping (DTW) based diagnosis method is proposed in this paper. Different from existing related works, this method uses only the ground indoor monitoring signals of the track circuit to locate the faulty compensating capacitor, rather than depending on the shunt current of an inspection train, which is an indispensable condition for existing methods. It can therefore be used for online diagnosis of compensating capacitors, which has not yet been realized by existing methods. To overcome the key problem that the track circuit cannot obtain the precise position of the train, the DTW method is used for the first time in this situation to recover the functional relationship between the receiver's peak voltage and the shunt position. The necessity, rationale, and procedure of the method are described in detail. Besides the classical DTW based method, two improved methods for improving classification quality and reducing computational complexity are proposed. Finally, diagnosis experiments based on a simulation model of the track circuit show the effectiveness of the proposed methods.
An HMM-Like Dynamic Time Warping Scheme for Automatic Speech Recognition
Directory of Open Access Journals (Sweden)
Ing-Jr Ding
2014-01-01
Full Text Available In the past, the kernel of automatic speech recognition (ASR) was dynamic time warping (DTW), a feature-based template-matching technique in the category of dynamic programming (DP). Although DTW is an early ASR technique, it remains popular in many applications and now plays an important role in Kinect-based gesture recognition. This paper proposes an intelligent speech recognition system using an improved DTW approach for multimedia and home automation services. The improved DTW presented in this work, called HMM-like DTW, is essentially a hidden Markov model- (HMM-) like method in which the concept of the typical HMM statistical model is brought into the design of DTW. The developed HMM-like DTW method, transforming feature-based DTW recognition into model-based DTW recognition, behaves like the HMM recognition technique and can therefore further perform model adaptation (also known as speaker adaptation). A series of experimental results in home automation-based multimedia access service environments demonstrated the superiority and effectiveness of the developed smart speech recognition system.
Time warping of evolutionary distant temporal gene expression data based on noise suppression
Directory of Open Access Journals (Sweden)
Papatsenko Dmitri
2009-10-01
Full Text Available Abstract Background Comparative analysis of genome-wide temporal gene expression data has a broad potential area of application, including evolutionary biology, developmental biology, and medicine. However, at large evolutionary distances, the construction of global alignments and the consequent comparison of the time-series data are difficult. The main reason is the accumulation of variability in the expression profiles of orthologous genes in the course of evolution. Results We applied Pearson distance matrices, in combination with other noise-suppression techniques and data filtering, to improve alignments. This novel framework enhanced the capacity to capture the similarities between temporal gene expression datasets separated by large evolutionary distances. We aligned and compared the temporal gene expression data in budding (Saccharomyces cerevisiae) and fission (Schizosaccharomyces pombe) yeast, which are separated by more than ~400 Myr of evolution. We found that the global alignment (time warping) properly matched the duration of cell cycle phases in these distant organisms, which was measured in prior studies. At the same time, when applied to individual ortholog pairs, this alignment procedure revealed groups of genes with distinct alignments, different from the global alignment. Conclusion Our alignment-based predictions of differences in the cell cycle phases between the two yeast species were in good agreement with the existing data, thus supporting the computational strategy adopted in this study. We propose that the existence of the alternative alignments, specific to distinct groups of genes, suggests the presence of different synchronization modes between the two organisms and possible functional decoupling of particular physiological gene networks in the course of evolution.
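Global alignment by time warping amounts to recovering the full warping path, not just a distance. A minimal sketch of DTW with path backtracking follows (a generic illustration; the Pearson-distance matrices and noise suppression used in the study are omitted):

```python
def dtw_path(x, y):
    """DTW that also recovers the warping path.

    Returns (cost, path) where path is the list of aligned index pairs
    (i, j), i.e. the global alignment of the two time series."""
    inf = float("inf")
    n, m = len(x), len(y)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = abs(x[i - 1] - y[j - 1]) + min(
                D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
    # Backtrack from the end, always stepping to the cheapest predecessor
    # (ties broken in favour of the diagonal step).
    i, j, path = n, m, []
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        i, j = min(((i - 1, j - 1), (i - 1, j), (i, j - 1)),
                   key=lambda p: D[p[0]][p[1]])
    path.reverse()
    return D[n][m], path
```

The recovered (i, j) pairs are what let one read off, for example, which phase of one cell cycle corresponds to which phase of the other.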
White, Harold
2011-01-01
This paper begins with a short review of the Alcubierre warp drive metric and describes how the phenomenon might work based on the original paper. The canonical form of the metric was developed and published in [6], which provided key insight into the field potential and boost for the field and remedied a critical paradox in the original Alcubierre concept of operations. A modified concept of operations based on the canonical form of the metric that remedies the paradox is presented and discussed. The idea of a warp drive in higher dimensional space-time (manifold) is then briefly considered by comparing the null-like geodesics of the Alcubierre metric to the Chung-Freese metric to illustrate the mathematical role of hyperspace coordinates. The net effect of using a warp drive technology coupled with conventional propulsion systems on an exploration mission is discussed using the nomenclature of early mission planning. Finally, an overview of the warp field interferometer test bed being implemented in the Advanced Propulsion Physics Laboratory: Eagleworks (APPL:E) at the Johnson Space Center is detailed. While warp field mechanics has not had a Chicago Pile moment, the tools necessary to detect a modest instance of the phenomenon are near at hand.
Warped products and black holes
International Nuclear Information System (INIS)
Hong, Soon-Tae
2005-01-01
We apply the warped product space-time scheme to the Banados-Teitelboim-Zanelli black holes and the Reissner-Nordstroem-anti-de Sitter black hole to investigate their interior solutions in terms of warped products. It is shown that there exist no discontinuities of the Ricci and Einstein curvatures across event horizons of these black holes
International Nuclear Information System (INIS)
Bergmann, Ryan M.; Vujić, Jasmina L.
2015-01-01
Highlights: • WARP, a GPU-accelerated Monte Carlo neutron transport code, has been developed. • The NVIDIA OptiX high-performance ray tracing library is used to process geometric data. • The unionized cross section representation is modified for higher performance. • Reference remapping is used to keep the GPU busy as neutron batch population reduces. • Reference remapping is done using a key-value radix sort on neutron reaction type. - Abstract: In recent supercomputers, general purpose graphics processing units (GPGPUs) are a significant fraction of the supercomputer’s total computational power. GPGPUs have different architectures compared to central processing units (CPUs), and for Monte Carlo neutron transport codes used in nuclear engineering to take advantage of these coprocessor cards, transport algorithms must be changed to execute efficiently on them. WARP is a continuous energy Monte Carlo neutron transport code that has been written to do this. The main thrust of WARP is to adapt previous event-based transport algorithms to the new GPU hardware; the algorithmic choices for all parts of which are presented in this paper. It is found that remapping history data references increases the GPU processing rate when histories start to complete. The main reason for this is that completed data are eliminated from the address space, threads are kept busy, and memory bandwidth is not wasted on checking completed data. Remapping also allows the interaction kernels to be launched concurrently, improving efficiency. The OptiX ray tracing framework and CUDPP library are used for geometry representation and parallel dataset-side operations, ensuring high performance and reliability.
Claure, Yuri Navarro; Matsubara, Edson Takashi; Padovani, Carlos; Prati, Ronaldo Cristiano
2018-03-01
Traditional methods for estimating timing parameters in hydrological science require a rigorous study of the relations of flow resistance, slope, flow regime, watershed size, water velocity, and other local variables. These studies are mostly based on empirical observations, where the timing parameter is estimated using empirically derived formulas. The application of these studies to other locations is not always direct. The locations in which equations are used should have comparable characteristics to the locations from which such equations have been derived. To overcome this barrier, in this work, we developed a data-driven approach to estimate timing parameters such as travel time. Our proposal estimates timing parameters using historical data of the location without the need to adapt or use empirical formulas from other locations. The proposal only uses one variable measured at two different locations on the same river (for instance, two river-level measurements, one upstream and the other downstream on the same river). The recorded data from each location generates two time series. Our method aligns these two time series using derivative dynamic time warping (DDTW) and perceptually important points (PIP). Using data from timing parameters, a polynomial function generalizes the data by inducing a polynomial water travel time estimator, called PolyWaTT. To evaluate the potential of our proposal, we applied PolyWaTT to three different watersheds: a floodplain ecosystem located in the part of Brazil known as the Pantanal, the world's largest tropical wetland area; and the Missouri River and the Pearl River, in the United States of America. We compared our proposal with empirical formulas and a data-driven state-of-the-art method. The experimental results demonstrate that PolyWaTT showed a lower mean absolute error than all other methods tested in this study, and for longer distances the mean absolute error achieved by PolyWaTT is three times smaller than empirical
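Derivative dynamic time warping (DDTW), which this method builds on, aligns series by local trend rather than absolute level. A minimal sketch using the usual Keogh-Pazzani derivative estimate follows (a generic illustration; the PIP selection and polynomial fitting of PolyWaTT are not shown):

```python
def derivative(x):
    """Keogh-Pazzani derivative estimate used by DDTW: the average of the
    left slope and the centred half-slope; endpoints copy their
    neighbour's value. Requires len(x) >= 3."""
    d = [((x[i] - x[i - 1]) + (x[i + 1] - x[i - 1]) / 2.0) / 2.0
         for i in range(1, len(x) - 1)]
    return [d[0]] + d + [d[-1]]

def ddtw_distance(x, y):
    """DTW computed on derivative estimates, so two curves are matched by
    local trend (shape) rather than absolute level: a vertically shifted
    copy of a curve sits at distance zero."""
    dx, dy = derivative(x), derivative(y)
    inf = float("inf")
    n, m = len(dx), len(dy)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = abs(dx[i - 1] - dy[j - 1]) + min(
                D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Matching on trend is what makes the alignment robust to a constant offset between the upstream and downstream river-level gauges.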
Energy Technology Data Exchange (ETDEWEB)
2016-10-25
Sirepo is an open source framework for cloud computing. The graphical user interface (GUI) for Sirepo, also known as the client, executes in any HTML5 compliant web browser on any computing platform, including tablets. The client is built in JavaScript, making use of the following open source libraries: Bootstrap, which is fundamental for cross-platform web applications; AngularJS, which provides a model–view–controller (MVC) architecture and GUI components; and D3.js, which provides interactive plots and data-driven transformations. The Sirepo server is built on the following Python technologies: Flask, which is a lightweight framework for web development; Jinja, which is a secure and widely used templating language; and Werkzeug, a utility library that is compliant with the WSGI standard. We use Nginx as the HTTP server and proxy, which provides a scalable event-driven architecture. The physics codes supported by Sirepo execute inside a Docker container. One of the codes supported by Sirepo is Warp. Warp is a particle-in-cell (PIC) code designed to simulate high-intensity charged particle beams and plasmas in both the electrostatic and electromagnetic regimes, with a wide variety of integrated physics models and diagnostics. At present, Sirepo supports a small subset of Warp’s capabilities. Warp is open source and is part of the Berkeley Lab Accelerator Simulation Toolkit.
Tao, Laifa; Lu, Chen; Noktehdan, Azadeh
2015-10-01
Battery capacity estimation is a significant recent challenge given the complex physical and chemical processes that occur within batteries and the restrictions on the accessibility of capacity degradation data. In this study, we describe an approach called dynamic spatial time warping, which is used to determine the similarity of two arbitrary curves. Unlike classical dynamic time warping methods, this approach maintains the invariance of curve similarity to rotations and translations of the curves, which is vital in curve similarity search. Moreover, it utilizes online charging or discharging data that are easily collected and do not require special assumptions. The accuracy of this approach is verified using NASA battery datasets. Results suggest that the proposed approach provides a highly accurate means of estimating battery capacity, at a lower time cost than traditional dynamic time warping methods, for different individuals and under various operating conditions.
Semiclassical instability of warp drives
Energy Technology Data Exchange (ETDEWEB)
Barcelo, C [Instituto de Astrofisica de Andalucia, IAA-CSIC, Glorieta de la Astronomia s/n, 18008 Granada (Spain); Finazzi, S; Liberati, S, E-mail: carlos@iaa.e, E-mail: finazzi@sissa.i, E-mail: liberati@sissa.i
2010-05-01
Warp drives, at least theoretically, provide a way to travel at superluminal speeds. However, even if one succeeded in providing the necessary exotic matter to construct them, it would still be necessary to check whether they would survive the switching on of quantum effects. In this contribution we report on the behaviour of the Renormalized Stress-Energy Tensor (RSET) in the spacetimes associated with superluminal warp drives. We find that the RSET grows exponentially in time close to the front wall of the superluminal bubble, strongly supporting the conclusion that warp-drive geometries are unstable against semiclassical back-reaction.
Van Beeck, Kristof; Goedemé, Toon; Tuytelaars, Tinne
2012-01-01
Van Beeck K., Goedemé T., Tuytelaars T., ''A warping window approach to real-time vision-based pedestrian detection in a truck’s blind spot zone'', Proceedings 9th international conference on informatics in control, automation and robotics - ICINCO 2012, vol. 2, pp. 561-568, July 28-31, 2012, Rome, Italy.
Van Beeck, Kristof; Goedemé, Toon; Tuytelaars, Tinne
2014-01-01
Van Beeck K., Goedemé G., Tuytelaars T., ''Real-time vision-based pedestrian detection in a truck’s blind spot zone using a warping window approach'', Informatics in control, automation and robotics - lecture notes in electrical engineering, vol. 283, pp. 251-264, Ferrier J.-L., Bernard A., Gusikhin O. and Madani K., eds., 2014.
Universal algorithm of time sharing
International Nuclear Information System (INIS)
Silin, I.N.; Fedyun'kin, E.D.
1979-01-01
A time-sharing algorithm is proposed for a wide class of one- and multiprocessor computer configurations. The dynamic priority is a piecewise-constant function of the channel characteristic and the system time quantum. The interactive job quantum has variable length. A recurrence formula for the characteristic is derived. The concept of a background job is introduced: a background job loads the processor when high-priority jobs are inactive. A background quality function is defined on the basis of statistical data collected during time-sharing operation. The algorithm includes an optimal procedure for swapping jobs in and out of memory. Sharing of system time in proportion to the external priorities is guaranteed for all sufficiently active computing channels (background included). A fast response is guaranteed for interactive jobs that use little time and memory. External priority control is reserved for the high-level scheduler. Experience with the implementation of the algorithm on the BESM-6 computer at JINR is discussed
Superluminal warp drive and dark energy
Energy Technology Data Exchange (ETDEWEB)
Gonzalez-Diaz, Pedro F. [Colina de los Chopos, Centro de Fisica 'Miguel A. Catalan', Instituto de Matematicas y Fisica Fundamental, Consejo Superior de Investigaciones Cientificas, Serrano 121, 28006 Madrid (Spain)], E-mail: p.gonzalezdiaz@imaff.cfmac.csic.es
2007-11-29
In this Letter we consider a warp drive spacetime where the spaceship can only travel faster than light. Restricting to the two-dimensional case, we find that if the warp drive is placed in an accelerating universe, the warp bubble size increases comovingly with the expansion of the universe in which it is immersed. We also show that the apparent velocity of the ship steadily increases with time as phantom energy is accreted onto it.
Directory of Open Access Journals (Sweden)
Xianglilan Zhang
Full Text Available Considering personal privacy and the difficulty of obtaining training material for many seldom-used English words and (often non-English) names, language-independent (LI) with lightweight speaker-dependent (SD) automatic speech recognition (ASR) is a promising option to solve the problem. The dynamic time warping (DTW) algorithm is the state-of-the-art algorithm for small-footprint SD ASR applications with limited storage space and small vocabulary, such as voice dialing on mobile devices, menu-driven recognition, and voice control on vehicles and robotics. Even though we have successfully developed two fast and accurate DTW variations for clean speech data, speech recognition for adverse conditions is still a big challenge. In order to improve recognition accuracy in noisy environments and bad recording conditions such as too high or too low volume, we introduce a novel one-against-all weighted DTW (OAWDTW). This method defines a one-against-all index (OAI) for each time frame of training data and applies the OAIs to the core DTW process. Given two speech signals, OAWDTW tunes their final alignment score by using the OAI in the DTW process. Our method achieves better accuracies than DTW and merge-weighted DTW (MWDTW): a 6.97% relative reduction of error rate (RRER) compared with DTW and a 15.91% RRER compared with MWDTW are observed in our extensive experiments on one representative SD dataset of four speakers' recordings. To the best of our knowledge, the OAWDTW approach is the first weighted DTW specially designed for speech data in adverse conditions.
Yu, Z.; Bedig, A.; Quigley, M.; Montalto, F. A.
2017-12-01
In-situ field monitoring can help to improve the design and management of decentralized Green Infrastructure (GI) systems in urban areas. Because of the vast quantity of continuous data generated from multi-site sensor systems, cost-effective post-construction opportunities for real-time control are limited, and the physical processes that influence the observed phenomena (e.g. soil moisture) are hard to track and control. To derive knowledge efficiently from real-time monitoring data, there is currently a need for more efficient approaches to data quality control. In this paper, we employ the dynamic time warping method to compare the similarity of two soil moisture patterns without ignoring the inherent autocorrelation. We also use a rule-based machine learning method to investigate the feasibility of detecting anomalous responses from soil moisture probes. The data were generated from both individual probes and clusters of probes deployed at a GI site in Milwaukee, WI. In contrast to traditional QAQC methods, which seek to detect outliers at individual time steps, the new method presented here converts the continuous time series into event-based symbolic sequences from which unusual response patterns can be detected. Different matching rules are developed for different physical characteristics and different seasons. The results suggest that this method can be used to detect sensor failure, to identify extreme events, and to call out abnormal change patterns, relative to intra-probe and inter-probe historical observations. Though this algorithm was developed for soil moisture probes, the same approach could easily be extended to advance QAQC efficiency for any continuous environmental dataset.
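The conversion from a continuous series to an event-based symbolic sequence, as described above, can be sketched in a toy form. The symbol alphabet ('R'/'F'/'S') and the threshold are invented for illustration; they are not the authors' rule set:

```python
def symbolize(series, eps=0.02):
    """Convert a continuous series into an event-based symbol sequence:
    'R' = rise, 'F' = fall, 'S' = steady, with consecutive repeats collapsed
    so the result describes events rather than individual time steps."""
    syms = []
    for prev, cur in zip(series, series[1:]):
        d = cur - prev
        s = 'R' if d > eps else 'F' if d < -eps else 'S'
        if not syms or syms[-1] != s:   # collapse runs of the same symbol
            syms.append(s)
    return ''.join(syms)

# A hypothetical soil-moisture response to a storm: dry, wet-up, plateau, drain.
print(symbolize([0, 0, 1, 2, 2, 1, 0]))  # → SRSF
```

Season- or probe-specific matching rules can then be written against these strings (e.g. "a rain event should produce a rise followed by a fall"), and sequences violating the expected pattern flagged for inspection.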
Evaluation of short-term physical weathering of a heavy fuel oil by use of time warping and PCA
International Nuclear Information System (INIS)
Malmquist, L.M.V.; Olsen, R.R.; Christensen, J.H.; Andersen, O.
2005-01-01
An estimated 1,140 billion tons of oil was accidentally spilled to the environment during the 1990s. These spills present an ecotoxicologic risk due to the presence of toxic and mutagenic compounds in the oil. Oil is affected by short-term and long-term weathering processes such as evaporation, dissolution, dispersion, emulsification, photodegradation and biodegradation. Physical weathering processes change the composition of the oil but do not alter the oil components. Gas chromatography and mass spectrometry can characterize the compositional changes resulting from evaporation. However, the process depends on subjective analysis because it is based on manual interpretation of results and visual inspection. This paper presents a rapid and objective method to compare oil sample compositions. The method is based on automated data preprocessing involving baseline removal, alignment of chromatograms using correlation optimized warping (COW) and normalization. Preprocessed data are analyzed by principal component analysis (PCA) based on the total chromatograms. The method successfully resolved the effects of evaporation and dissolution processes and showed a clear dependence on time, but it did not completely resolve the effect of weathering from the analytical variability because better-quality data are required. 21 refs., 3 figs
Evaluation of short-term physical weathering of a heavy fuel oil by use of time warping and PCA
Energy Technology Data Exchange (ETDEWEB)
Malmquist, L.M.V.; Olsen, R.R. [Roskilde Univ., Roskilde (Denmark). Dept. of Life Sciences and Chemistry]|[National Environmental Research Inst., Roskilde (Denmark). Dept. of Environmental Chemistry and Microbiology; Christensen, J.H. [Royal Veterinary and Agricultural Univ., Thorvaldsensvej (Denmark). Dept. of Natural Sciences; Andersen, O. [Roskilde Univ., Roskilde (Denmark). Dept. of Life Sciences and Chemistry
2005-07-01
An estimated 1,140 billion tons of oil was accidentally spilled to the environment during the 1990s. These spills present an ecotoxicologic risk due to the presence of toxic and mutagenic compounds in the oil. Oil is affected by short-term and long-term weathering processes such as evaporation, dissolution, dispersion, emulsification, photodegradation and biodegradation. Physical weathering processes change the composition of the oil but do not alter the oil components. Gas chromatography and mass spectrometry can characterize the compositional changes resulting from evaporation. However, the process depends on subjective analysis because it is based on manual interpretation of results and visual inspection. This paper presents a rapid and objective method to compare oil sample compositions. The method is based on automated data preprocessing involving baseline removal, alignment of chromatograms using correlation optimized warping (COW) and normalization. Preprocessed data are analyzed by principal component analysis (PCA) based on the total chromatograms. The method successfully resolved the effects of evaporation and dissolution processes and showed a clear dependence on time, but it did not completely resolve the effect of weathering from the analytical variability because better-quality data are required. 21 refs., 3 figs.
Wang, Gang-Jin; Xie, Chi; Han, Feng; Sun, Bo
2012-08-01
In this study, we employ a dynamic time warping method to study the topology of similarity networks among 35 major currencies in international foreign exchange (FX) markets, measured by the minimal spanning tree (MST) approach, which is expected to overcome the synchronous restriction of the Pearson correlation coefficient. In the empirical process, firstly, we subdivide the analysis period from June 2005 to May 2011 into three sub-periods: before, during, and after the US sub-prime crisis. Secondly, we choose NZD (New Zealand dollar) as the numeraire and then, analyze the topology evolution of FX markets in terms of the structure changes of MSTs during the above periods. We also present the hierarchical tree associated with the MST to study the currency clusters in each sub-period. Our results confirm that USD and EUR are the predominant world currencies. But USD gradually loses the most central position while EUR acts as a stable center in the MST passing through the crisis. Furthermore, an interesting finding is that, after the crisis, SGD (Singapore dollar) becomes a new center currency for the network.
Wireless Augmented Reality Prototype (WARP)
Devereaux, A. S.
1999-01-01
Initiated in January 1997 under NASA's Office of Life and Microgravity Sciences and Applications, the Wireless Augmented Reality Prototype (WARP) is a means to leverage recent advances in communications, displays, imaging sensors, biosensors, voice recognition and microelectronics to develop a hands-free, tetherless system capable of real-time personal display and control of computer system resources. Using WARP, an astronaut may efficiently operate and monitor any computer-controllable activity inside or outside the vehicle or station. The WARP concept is a lightweight, unobtrusive heads-up display with a wireless wearable control unit. Connectivity to the external system is achieved through a high-rate radio link from the WARP personal unit to a base station unit installed into any system PC. The radio link has been specially engineered to operate within the high-interference, high-multipath environment of a space shuttle or space station module. Through this virtual terminal, the astronaut will be able to view and manipulate imagery, text or video, using voice commands to control the terminal operations. WARP's hands-free access to computer-based instruction texts, diagrams and checklists replaces juggling manuals and clipboards, and tetherless computer system access allows free motion throughout a cabin while monitoring and operating equipment.
Hyperspace a scientific odyssey through parallel universes, time warps, and the tenth dimension
Kaku, Michio
1994-01-01
Already thoroughly familiar to the seasoned science fiction fan, Hyperspace is that realm which enables a spaceship captain to take his ship on a physics-defying shortcut (or "wormhole") to the outer shores of the Galaxy in less time than it takes a 747 to fly from New York to Tokyo. But in the past few years, physicists on the cutting edge of science have found that a 10-dimensional Hyperspace may actually exist, albeit at a scale almost too small to comprehend, smaller even than a quark; and that in spite of its tiny size, it may be the basis on which all the forces of nature will be united.
Swarup, Bob
2008-01-01
Warp drives are a staple of science fiction, transporting the heroes of shows like Star Trek between galaxies in a matter of hours. Now, increasing numbers of cosmologists are wondering whether this technology might eventually become science fact. Dozens of scientific papers on warp drives have appeared since 1994 when Miguel Alcubierre - a theoretical physicist then at the University of Wales in Cardiff - first argued that a warp drive was theoretically possible (Class. Quantum Grav. 11 L73)
Energy Technology Data Exchange (ETDEWEB)
Gonzalez-Diaz, Pedro F. [Colina de los Chopos, Centro de Fisica 'Miguel A. Catalan', Instituto de Matematicas y Fisica Fundamental, Consejo Superior de Investigaciones Cientificas, Serrano 121, 28006 Madrid (Spain)], E-mail: p.gonzalezdiaz@imaff.cfmac.csic.es
2007-09-20
In this Letter we consider a warp drive spacetime, resulting from that suggested by Alcubierre, in which the spaceship can only travel faster than light. Restricting to the two dimensions that retain most of the physics, we derive the thermodynamic properties of the warp drive and show that the temperature of the spaceship rises as its apparent velocity increases. We also find that the warp drive spacetime can be exhibited in a manifestly cosmological form.
Seamless warping of diffusion tensor fields
DEFF Research Database (Denmark)
Xu, Dongrong; Hao, Xuejun; Bansal, Ravi
2008-01-01
To warp diffusion tensor fields accurately, tensors must be reoriented in the space to which the tensors are warped based on both the local deformation field and the orientation of the underlying fibers in the original image. Existing algorithms for warping tensors typically use forward mapping...... of seams, including voxels in which the deformation is extensive. Backward mapping, however, cannot reorient tensors in the template space because information about the directional orientation of fiber tracts is contained in the original, unwarped imaging space only, and backward mapping alone cannot...... transfer that information to the template space. To combine the advantages of forward and backward mapping, we propose a novel method for the spatial normalization of diffusion tensor (DT) fields that uses a bijection (a bidirectional mapping with one-to-one correspondences between image spaces) to warp DT...
RELAXATION OF WARPED DISKS: THE CASE OF PURE HYDRODYNAMICS
Energy Technology Data Exchange (ETDEWEB)
Sorathia, Kareem A.; Krolik, Julian H. [Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD 21218 (United States); Hawley, John F. [Department of Astronomy, University of Virginia, Charlottesville, VA 22904 (United States)
2013-05-10
Orbiting disks may exhibit bends due to a misalignment between the angular momentum of the inner and outer regions of the disk. We begin a systematic simulational inquiry into the physics of warped disks with the simplest case: the relaxation of an unforced warp under pure fluid dynamics, i.e., with no internal stresses other than Reynolds stress. We focus on the nonlinear regime in which the bend rate is large compared to the disk aspect ratio. When warps are nonlinear, strong radial pressure gradients drive transonic radial motions along the disk's top and bottom surfaces that efficiently mix angular momentum. The resulting nonlinear decay rate of the warp increases with the warp rate and the warp width, but, at least in the parameter regime studied here, is independent of the sound speed. The characteristic magnitude of the associated angular momentum fluxes likewise increases with both the local warp rate and the radial range over which the warp extends; it also increases with increasing sound speed, but more slowly than linearly. The angular momentum fluxes respond to the warp rate after a delay that scales with the square root of the time for sound waves to cross the radial extent of the warp. These behaviors are at variance with a number of the assumptions commonly used in analytic models to describe linear warp dynamics.
González Martínez, Jose María; Ferrer Riquelme, Alberto José; Westerhuis, Johan A.
2011-01-01
This paper addresses the real-time monitoring of batch processes with multiple different local time trajectories of variables measured during the process run. For Unfold Principal Component Analysis (U-PCA)- or Unfold Partial Least Squares (U-PLS)-based on-line monitoring of batch processes, batch runs need to be synchronized, not only to have the same time length, but also such that key events happen at the same time. An adaptation of Kassidas et al.'s approach [1] will be introduced to ach...
Doǧan, S.; Nixon, C. J.; King, A. R.; Pringle, J. E.
2018-05-01
Accretion discs are generally warped. If a warp in a disc is too large, the disc can `break' apart into two or more distinct planes, with only tenuous connections between them. Further, if an initially planar disc is subject to a strong differential precession, then it can be torn apart into discrete annuli that precess effectively independently. In previous investigations, torque-balance formulae have been used to predict where and when the disc breaks into distinct parts. In this work, focusing on discs with Keplerian rotation and where the shearing motions driving the radial communication of the warp are damped locally by turbulence (the `diffusive' regime), we investigate the stability of warped discs to determine the precise criterion for an isolated warped disc to break. We find and solve the dispersion relation, which, in general, yields three roots. We provide a comprehensive analysis of this viscous-warp instability and the emergent growth rates and their dependence on disc parameters. The physics of the instability can be understood as a combination of (1) a term that would generally encapsulate the classical Lightman-Eardley instability in planar discs (given by ∂(νΣ)/∂Σ < 0) but is here modified by the warp to include ∂(ν1|ψ|)/∂|ψ| < 0, and (2) a similar condition acting on the diffusion of the warp amplitude given in simplified form by ∂(ν2|ψ|)/∂|ψ| < 0. We discuss our findings in the context of discs with an imposed precession, and comment on the implications for different astrophysical systems.
Algorithm for Compressing Time-Series Data
Hawkins, S. Edward, III; Darlington, Edward Hugo
2012-01-01
An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
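The fitting-interval idea described above can be sketched in pure Python: fit a short Chebyshev series to a block of samples, keep only the coefficients, and reconstruct with the Clenshaw recurrence. This is a generic Chebyshev fit/reconstruct cycle under the abstract's framing, not the spacecraft algorithm itself (its coefficient selection and quantization details are not reproduced), and all function names are illustrative:

```python
import math

def chebyshev_fit(block, order):
    """Compress one block: return order+1 Chebyshev coefficients.

    The block of N uniform samples is mapped onto [-1, 1]; values at the
    Chebyshev nodes are obtained by linear interpolation, and the coefficients
    follow from the discrete orthogonality of T_k at those nodes."""
    n = order + 1
    N = len(block)

    def sample(x):  # linear interpolation of the block at x in [-1, 1]
        t = (x + 1.0) / 2.0 * (N - 1)
        i = min(int(t), N - 2)
        frac = t - i
        return block[i] * (1.0 - frac) + block[i + 1] * frac

    nodes = [math.cos(math.pi * (k + 0.5) / n) for k in range(n)]
    vals = [sample(x) for x in nodes]
    coeffs = []
    for j in range(n):
        c = 2.0 / n * sum(vals[k] * math.cos(math.pi * j * (k + 0.5) / n)
                          for k in range(n))
        coeffs.append(c)
    coeffs[0] /= 2.0          # standard halving of the zeroth coefficient
    return coeffs

def chebyshev_eval(coeffs, x):
    """Evaluate the Chebyshev series at x in [-1, 1] (Clenshaw recurrence)."""
    b1 = b2 = 0.0
    for c in reversed(coeffs[1:]):
        b1, b2 = 2.0 * x * b1 - b2 + c, b1
    return x * b1 - b2 + coeffs[0]
```

Storing `order + 1` coefficients in place of an N-sample block gives a compression factor of roughly N/(order + 1); for smooth telemetry the residuals are spread nearly uniformly over the fitting interval, the "equal error property" the abstract mentions.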
Distributed Algorithms for Time Optimal Reachability Analysis
DEFF Research Database (Denmark)
Zhang, Zhengkui; Nielsen, Brian; Larsen, Kim Guldstrand
2016-01-01
Time optimal reachability analysis is a novel model based technique for solving scheduling and planning problems. After modeling them as reachability problems using timed automata, a real-time model checker can compute the fastest trace to the goal states, which constitutes a time optimal schedule. We propose distributed computing to accelerate time optimal reachability analysis. We develop five distributed state exploration algorithms and implement them in UPPAAL, enabling it to exploit the compute resources of a dedicated model-checking cluster. We experimentally evaluate the implemented algorithms with four models in terms of their ability to compute near- or proven-optimal solutions, their scalability, time and memory consumption, and communication overhead. Our results show that distributed algorithms work much faster than sequential algorithms and have good speedup in general.
Directory of Open Access Journals (Sweden)
ThienLuan Ho
Full Text Available Approximate string matching with k-differences has a number of practical applications, ranging from pattern recognition to computational biology. This paper proposes an efficient memory-access algorithm for parallel approximate string matching with k-differences on Graphics Processing Units (GPUs). In the proposed algorithm, all threads in the same GPU warp share data using warp-shuffle operations instead of accessing the shared memory. Moreover, we implement the proposed algorithm by exploiting the memory structure of GPUs to optimize its performance. Experimental results on real DNA packages revealed that the proposed algorithm and its implementation achieved speedups of up to 122.64 and 1.53 times over a sequential CPU algorithm and a previous parallel approximate string matching algorithm on GPUs, respectively.
Phi-s correlation and dynamic time warping - Two methods for tracking ice floes in SAR images
Mcconnell, Ross; Kober, Wolfgang; Kwok, Ronald; Curlander, John C.; Pang, Shirley S.
1991-01-01
The authors present two algorithms for performing shape matching on ice floe boundaries in SAR (synthetic aperture radar) images. These algorithms quickly produce a set of ice motion and rotation vectors that can be used to guide a pixel value correlator. The algorithms match a shape descriptor known as the Phi-s curve. The first algorithm uses normalized correlation to match the Phi-s curves, while the second uses dynamic programming to compute an elastic match that better accommodates ice floe deformation. Some empirical data on the performance of the algorithms on Seasat SAR images are presented.
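The Phi-s descriptor the authors match can be sketched as the boundary tangent angle sampled as a function of normalized arc length, compared by normalized correlation over circular shifts (the first of the two algorithms above). This toy version does not unwrap the angle at the ±π branch cut, which a real implementation must handle, and all names and data are illustrative:

```python
import math

def phi_s(points, n_samples=64):
    """Phi-s shape descriptor: tangent angle as a function of normalized
    arc length along a closed polygonal boundary."""
    segs, total, m = [], 0.0, len(points)
    for i in range(m):
        (x0, y0), (x1, y1) = points[i], points[(i + 1) % m]
        length = math.hypot(x1 - x0, y1 - y0)
        # (arc-length start, segment length, tangent angle)
        segs.append((total, length, math.atan2(y1 - y0, x1 - x0)))
        total += length
    out, k = [], 0
    for j in range(n_samples):          # resample at equal arc-length steps
        s = total * j / n_samples
        while k + 1 < m and segs[k + 1][0] <= s:
            k += 1
        out.append(segs[k][2])
    return out

def best_correlation(p, q):
    """Max normalized correlation over circular shifts: mean-centering absorbs
    a rigid rotation of the shape, the shift absorbs the unknown start point."""
    n = len(p)
    def norm(v):
        mu = sum(v) / n
        c = [x - mu for x in v]
        s = math.sqrt(sum(x * x for x in c)) or 1.0
        return [x / s for x in c]
    p, q = norm(p), norm(q)
    return max(sum(p[i] * q[(i + d) % n] for i in range(n)) for d in range(n))
```

Two boundaries of the same floe observed in consecutive images should score near 1.0; the maximizing shift also indicates the floe's rotation, which is the motion/rotation vector output the abstract describes. The second (dynamic programming) algorithm would replace `best_correlation` with an elastic DTW-style match.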
Two-pass imputation algorithm for missing value estimation in gene expression time series.
Tsiporkova, Elena; Boeva, Veselka
2007-10-01
Gene expression microarray experiments frequently generate datasets with multiple values missing. However, most of the analysis, mining, and classification methods for gene expression data require a complete matrix of gene array values. Therefore, the accurate estimation of missing values in such datasets has been recognized as an important issue, and several imputation algorithms have already been proposed to the biological community. Most of these approaches, however, are not particularly suitable for time series expression profiles. In view of this, we propose a novel imputation algorithm, which is specially suited for the estimation of missing values in gene expression time series data. The algorithm utilizes Dynamic Time Warping (DTW) distance in order to measure the similarity between time expression profiles, and subsequently selects for each gene expression profile with missing values a dedicated set of candidate profiles for estimation. Three different DTW-based imputation (DTWimpute) algorithms have been considered: position-wise, neighborhood-wise, and two-pass imputation. These have initially been prototyped in Perl, and their accuracy has been evaluated on yeast expression time series data using several different parameter settings. The experiments have shown that the two-pass algorithm consistently outperforms, in particular for datasets with a higher level of missing entries, the neighborhood-wise and the position-wise algorithms. The performance of the two-pass DTWimpute algorithm has further been benchmarked against the weighted K-Nearest Neighbors algorithm, which is widely used in the biological community; the former algorithm has appeared superior to the latter one. Motivated by these findings, indicating clearly the added value of the DTW techniques for missing value estimation in time series data, we have built an optimized C++ implementation of the two-pass DTWimpute algorithm. The software also provides for a choice between three different
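The candidate-selection idea behind DTWimpute can be sketched in a highly simplified, position-wise form: pick the candidate profile closest in DTW distance over the observed positions, then copy its values into the missing slots. This is not the paper's two-pass algorithm, only an illustration of using DTW distance to choose donors; names and data are hypothetical:

```python
def dtw_cost(a, b):
    """Standard DTW cumulative-cost recursion (distance only, no path)."""
    INF = float("inf")
    n, m = len(a), len(b)
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost[i][j] = abs(a[i - 1] - b[j - 1]) + min(
                cost[i - 1][j - 1], cost[i - 1][j], cost[i][j - 1])
    return cost[n][m]

def impute(target, candidates):
    """Fill None entries of `target` from the complete candidate profile
    that is closest (by DTW over the observed positions)."""
    obs_idx = [i for i, v in enumerate(target) if v is not None]
    obs = [target[i] for i in obs_idx]
    best = min(candidates, key=lambda c: dtw_cost(obs, [c[i] for i in obs_idx]))
    return [best[i] if v is None else v for i, v in enumerate(target)]

# A profile with one missing expression value and two complete candidates.
print(impute([1.0, 2.0, None, 4.0],
             [[1.0, 2.0, 3.0, 4.0], [9.0, 9.0, 9.0, 9.0]]))  # → [1.0, 2.0, 3.0, 4.0]
```

The paper's neighborhood-wise and two-pass variants refine this by weighting several candidates and by re-imputing with already-filled profiles in a second pass.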
Zhang, Dongliang
2014-08-05
The quality of migration images depends on the accuracy of the velocity model. For large velocity errors, the migration image is strongly distorted, which unflattens events in the common image gathers and consequently leads to a blurring in the stacked migration image. To mitigate this problem, we propose dynamic image warping to flatten the common image gathers before stacking and to enhance the signal-to-noise ratio of the migration image. Numerical tests on the Marmousi model and GOM data show that image warping of the prestack images followed by stacking leads to much better resolved reflectors than the original migration image. The problem, however, is that the reflector locations have increased uncertainty because the wrong velocity model is still used.
Zhang, Dongliang; Wang, Xin; Huang, Yunsong; Schuster, Gerard T.
2014-01-01
The quality of migration images depends on the accuracy of the velocity model. For large velocity errors, the migration image is strongly distorted, which unflattens events in the common image gathers and consequently leads to a blurring in the stacked migration image. To mitigate this problem, we propose dynamic image warping to flatten the common image gathers before stacking and to enhance the signal-to-noise ratio of the migration image. Numerical tests on the Marmousi model and GOM data show that image warping of the prestack images followed by stacking leads to much better resolved reflectors than the original migration image. The problem, however, is that the reflector locations have increased uncertainty because the wrong velocity model is still used.
Geodesic congruences in warped spacetimes
International Nuclear Information System (INIS)
Ghosh, Suman; Dasgupta, Anirvan; Kar, Sayan
2011-01-01
In this article, we explore the kinematics of timelike geodesic congruences in warped five-dimensional bulk spacetimes, with and without thick or thin branes. Beginning with geodesic flows in the Randall-Sundrum anti-de Sitter geometry without and with branes, we find analytical expressions for the expansion scalar and comment on the effects of including thin branes on its evolution. Later, we move on to congruences in more general warped bulk geometries with a cosmological thick brane and a time-dependent extra dimensional scale. Using analytical expressions for the velocity field, we interpret the expansion, shear and rotation (ESR) along the flows, as functions of the extra dimensional coordinate. The evolution of a cross-sectional area orthogonal to the congruence, as seen from a local observer's point of view, is also shown graphically. Finally, the Raychaudhuri and geodesic equations in backgrounds with a thick brane are solved numerically in order to figure out the role of initial conditions (prescribed on the ESR) and spacetime curvature on the evolution of the ESR.
Warp drive with zero expansion
Energy Technology Data Exchange (ETDEWEB)
Natario, Jose [Department of Mathematics, Instituto Superior Tecnico (Portugal)]
2002-03-21
It is commonly believed that Alcubierre's warp drive works by contracting space in front of the warp bubble and expanding the space behind it. We show that this contraction/expansion is but a marginal consequence of the choice made by Alcubierre and explicitly construct a similar spacetime where no contraction/expansion occurs. Global and optical properties of warp-drive spacetimes are also discussed.
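For reference, the Alcubierre metric that this construction generalizes can be written in its standard form, with bubble velocity v_s(t), distance r_s from the bubble center, and a top-hat-like shape function f; the expansion of the associated congruence of Eulerian observers is:

```latex
ds^{2} = -dt^{2} + \bigl(dx - v_{s}\,f(r_{s})\,dt\bigr)^{2} + dy^{2} + dz^{2},
\qquad
\theta = v_{s}\,\frac{x_{s}}{r_{s}}\,\frac{df}{dr_{s}}
```

The expansion \theta is negative ahead of the bubble and positive behind it, which is the contraction/expansion picture that the paper shows to be an artifact of Alcubierre's particular choice of metric.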
Directory of Open Access Journals (Sweden)
Martin Dinov
2016-05-01
Full Text Available Dynamic time warping, or DTW, is a powerful and domain-general sequence alignment method for computing a similarity measure. Dynamic programming-based techniques like DTW are now the backbone and driver of most bioinformatics methods and discoveries. In neuroscience it has had far less use, though this has begun to change. We wanted to explore new ways of applying DTW, not simply as a measure with which to cluster or compare similarity between features, but in a conceptually different way. We have used DTW to provide a more interpretable spectral description of the data, compared to standard approaches such as the Fourier and related transforms. The DTW approach and the standard discrete Fourier transform (DFT) are assessed against benchmark measures of neural dynamics. These include EEG microstates, EEG avalanches, the sum squared error (SSE) from a multilayer perceptron (MLP) prediction of the EEG time series, and the simultaneously acquired fMRI BOLD signal. We explored the relationships between these variables of interest in an EEG-fMRI dataset acquired during a standard cognitive task, which allowed us to explore how DTW performs across different task settings. We found that despite strong correlations between DTW- and DFT-spectra, DTW was a better predictor for almost every measure of brain dynamics. Using these DTW measures, we show that predictability is almost always higher in task than in rest states, which is consistent with other theoretical and empirical findings, providing additional evidence for the utility of the DTW approach.
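For readers unfamiliar with the method, the DTW similarity measure referred to throughout these entries can be sketched in a few lines. This is a generic textbook implementation (absolute-difference local cost, unit step sizes), not the EEG pipeline of the paper above:

```python
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = cost of the best warping path aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch a
                                 D[i][j - 1],      # stretch b
                                 D[i - 1][j - 1])  # match both
    return D[n][m]

# A locally time-stretched copy of a signal stays close under DTW
x = [0, 1, 2, 3, 2, 1, 0]
y = [0, 0, 1, 2, 2, 3, 3, 2, 1, 0]   # same shape, some samples repeated
print(dtw_distance(x, y))   # 0.0: every sample can be matched exactly
```

Unlike the Euclidean distance, which would not even be defined for these two lengths, DTW absorbs the local stretching through the warping path.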
McManamay, R.; Allen, M. R.; Piburn, J.; Sanyal, J.; Stewart, R.; Bhaduri, B. L.
2017-12-01
Characterizing interdependencies among land-energy-water sectors, their vulnerabilities, and tipping points is challenging, especially if all sectors are considered simultaneously. Because such holistic system behavior is uncertain, largely unmodeled, and in need of testable hypotheses of system drivers, these dynamics are conducive to exploratory analytics of spatiotemporal patterns, powered by tools such as Dynamic Time Warping (DTW). Here, we conduct a retrospective analysis (1950-2010) of temporal trends in land use, energy use, and water use within US counties to identify commonalities in resource consumption and adaptation strategies to resource limitations. We combine existing and derived data from statistical downscaling to synthesize a temporally comprehensive land-energy-water dataset at the US county level and apply DTW and subsequent hierarchical clustering to examine similar temporal trends in resource typologies for the land, energy, and water sectors. As expected, we observed tradeoffs among water uses (e.g., public supply vs. irrigation) and land uses (e.g., urban vs. agricultural). Strong associations between clusters across sectors reveal tight system interdependencies, whereas weak associations suggest unique behaviors and potential for human adaptations towards disruptive technologies and less resource-dependent population growth. Our framework is useful for exploring complex human-environmental system dynamics and generating hypotheses to guide subsequent energy-water-nexus research.
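The DTW-plus-hierarchical-clustering workflow described above can be illustrated with a minimal sketch. The county "trends" below are invented toy data, and the single-linkage merge rule is one simple choice among many:

```python
def dtw(a, b):
    INF = float("inf")
    D = [[INF] * (len(b) + 1) for _ in range(len(a) + 1)]
    D[0][0] = 0.0
    for i, av in enumerate(a, 1):
        for j, bv in enumerate(b, 1):
            D[i][j] = abs(av - bv) + min(D[i-1][j], D[i][j-1], D[i-1][j-1])
    return D[-1][-1]

def single_linkage(series, n_clusters):
    """Naive agglomerative clustering on a pairwise DTW distance matrix."""
    clusters = [{i} for i in range(len(series))]
    d = {(i, j): dtw(series[i], series[j])
         for i in range(len(series)) for j in range(i + 1, len(series))}
    while len(clusters) > n_clusters:
        # merge the pair of clusters at the smallest single-linkage distance
        a, b = min(
            ((x, y) for x in range(len(clusters))
                    for y in range(x + 1, len(clusters))),
            key=lambda p: min(d[tuple(sorted((i, j)))]
                              for i in clusters[p[0]] for j in clusters[p[1]]))
        clusters[a] |= clusters[b]
        del clusters[b]
    return clusters

# Hypothetical decadal resource-use trends (values invented for illustration)
trends = [
    [1, 2, 3, 5, 8, 12],   # county A: accelerating growth
    [1, 2, 4, 5, 9, 11],   # county B: similar shape to A
    [9, 8, 6, 4, 2, 1],    # county C: steady decline
]
print(single_linkage(trends, 2))  # A and B cluster together: [{0, 1}, {2}]
```

In practice one would precompute the condensed DTW distance matrix and hand it to a library clustering routine; the point here is only the shape of the pipeline.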
Liu, Ya-Juan; André, Silvère; Saint Cristau, Lydia; Lagresle, Sylvain; Hannas, Zahia; Calvosa, Éric; Devos, Olivier; Duponchel, Ludovic
2017-02-01
Multivariate statistical process control (MSPC) is increasingly popular given the challenge posed by large multivariate datasets from analytical instruments, such as Raman spectroscopy, used for monitoring complex cell cultures in the biopharmaceutical industry. However, Raman spectroscopy for in-line monitoring often produces unsynchronized data sets, resulting in time-varying batches. Moreover, unsynchronized data sets are common in cell culture monitoring because spectroscopic measurements are generally recorded in an alternating fashion, with more than one optical probe connected in parallel to the same spectrometer. Synchronized batches are a prerequisite for the application of multivariate analysis, such as multi-way principal component analysis (MPCA), for MSPC monitoring. Correlation optimized warping (COW) is a popular method for data alignment with satisfactory performance; however, it had not previously been applied to synchronize the acquisition times of spectroscopic datasets in an MSPC application. In this paper we propose, for the first time, to use COW to synchronize batches of varying duration analyzed with Raman spectroscopy. In a second step, we developed MPCA models at different time intervals based on the normal operating condition (NOC) batches synchronized by COW. New batches are finally projected onto the corresponding MPCA model. We monitored the evolution of the batches using two multivariate control charts based on Hotelling's T² and Q. As the results illustrate, the MSPC model was able to identify abnormal operating conditions, including contaminated batches, which is of prime importance in cell culture monitoring. We showed that Raman-based MSPC monitoring can be used to diagnose batches deviating from the normal condition with higher efficacy than traditional diagnosis, which would save time and money in the biopharmaceutical industry.
Special Issue on Time Scale Algorithms
2008-01-01
This special issue of Metrologia (vol. 45, 2008, doi:10.1088/0026-1394/45/6/E01) presents selected papers from the Fifth International Time Scale Algorithm Symposium (VITSAS), hosted by the Real Instituto y Observatorio de la Armada (ROA) in San Fernando, Spain.
Yet one more dwell time algorithm
Haberl, Alexander; Rascher, Rolf
2017-06-01
The current demand for ever more powerful and efficient microprocessors, e.g. for deep learning, has led to an ongoing trend of reducing the feature size of integrated circuits. These processors are patterned with EUV lithography, which enables 7 nm chips [1]. Producing mirrors which satisfy the required specifications is a challenging task. Not only increasing requirements on the imaging properties, but also new lens shapes, such as aspheres or lenses with free-form surfaces, require innovative production processes. These lenses need new deterministic sub-aperture polishing methods, which have been established in the past few years. Such polishing methods are characterized by an empirically determined tool influence function (TIF) and local stock removal. One such deterministic polishing method is ion beam figuring (IBF). The beam profile of an ion beam is adjusted to a nearly ideal Gaussian shape by various parameters. With the known removal function, a dwell time profile can be generated for each measured error profile. Such a profile is always generated pixel-accurately from the predetermined error profile, with the aim of minimizing the existing surface structures up to the cut-off frequency of the tool used [2]. The success of a correction-polishing run depends decisively on the accuracy of the previously computed dwell-time profile, so the algorithm used to calculate the dwell time has to reflect reality accurately. Furthermore, the machine operator should have no influence on the dwell-time calculation; consequently, there must not be any free parameters that affect the calculation result. Lastly, it should take a minimum of machining time to reach a minimum of remaining error structures. Unfortunately, current dwell time calculations are divergent, user-dependent, tend to create long processing times, and need several parameters to be set. This paper describes a realistic, convergent and user-independent dwell time algorithm.
Fast algorithms for computing phylogenetic divergence time.
Crosby, Ralph W; Williams, Tiffani L
2017-12-06
The inference of species divergence time is a key step in most phylogenetic studies. Methods have been available for the last ten years to perform the inference, but their performance does not yet scale well to studies with hundreds of taxa and thousands of DNA base pairs. For example, a study of 349 primate taxa was estimated to require over 9 months of processing time. In this work, we present a new algorithm, AncestralAge, that significantly improves the performance of the divergence time process. As part of AncestralAge, we demonstrate a new method for the computation of phylogenetic likelihood, and our experiments show a 90% improvement in likelihood computation time on the aforementioned dataset of 349 primate taxa with over 60,000 DNA base pairs. Additionally, we show that our new method for the computation of the Bayesian prior on node ages reduces the running time for this computation on the 349-taxon dataset by 99%. Through the use of these new algorithms we open up the ability to perform divergence time inference on large phylogenetic studies.
Stannard, Warren B.
2018-05-01
Einstein’s two theories of relativity were introduced over 100 years ago. High school science students are seldom exposed to these revolutionary ideas as they are often perceived to be too difficult conceptually and mathematically. This paper brings together the two theories of relativity in a way that is logical and consistent and enables the teaching of relativity as a single subject. This paper introduces new models and analogies which are suitable for the teaching of Einstein’s relativity at a high school level, exposing students to our best understanding of time, space, matter and energy.
Is it sensible to “deform” dose? 3D experimental validation of dose-warping
International Nuclear Information System (INIS)
Yeo, U. J.; Taylor, M. L.; Supple, J. R.; Smith, R. L.; Dunn, L.; Kron, T.; Franich, R. D.
2012-01-01
Purpose: Strategies for dose accumulation in deforming anatomy are of interest in radiotherapy. Algorithms exist for the deformation of dose based on patient image sets, though these are sometimes contentious because not all such image calculations are constrained by physical laws. While tumor and organ motion has been a key area of study for a considerable amount of time, deformation is of increasing interest. In this work, we demonstrate a full 3D experimental validation of results from a range of dose deformation algorithms available in the public domain. Methods: We recently developed the first tissue-equivalent, full 3D deformable dosimetric phantom, "DEFGEL". To assess the accuracy of dose-warping based on deformable image registration (DIR), we measured doses in undeformed and deformed states of the DEFGEL dosimeter and compared these to planned doses and warped doses. In this way we directly evaluated the accuracy of dose-warping calculations for 11 different algorithms, for a range of stereotactic irradiation schemes and types and magnitudes of deformation. Results: The original Horn and Schunck algorithm is shown to be the best performing of the 11 algorithms trialled. Comparing measured and dose-warped calculations for this method, it is found that for a 10 × 10 mm² square field, γ(3%/3 mm) = 99.9%; for a 20 × 20 mm² cross-shaped field, γ(3%/3 mm) = 99.1%; and for a multiple dynamic arc treatment (0.413 cm³ PTV) adapted from a patient treatment plan, γ(3%/3 mm) = 95%. In each case, the agreement is comparable to, though consistently about 1% lower than, the agreement between measured and calculated (planned) dose distributions in the absence of deformation. The magnitude of the deformation, as measured by the largest displacement experienced by any voxel in the volume, has the greatest influence on the accuracy of the warped dose distribution. Considering the square field case, the smallest deformation (∼9 mm) yields
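The γ(3%/3 mm) figures quoted above come from gamma analysis, which scores each reference dose point by its closest match in a combined dose-difference/distance-to-agreement space. A minimal brute-force 1D version (global dose normalization; the dose profiles below are idealized toy data, not measurements):

```python
import math

def gamma_pass_rate(ref_pos, ref_dose, eval_pos, eval_dose,
                    dose_tol=0.03, dist_tol=3.0):
    """Brute-force 1D global gamma analysis.
    dose_tol is a fraction of the maximum reference dose (global criterion);
    dist_tol is the distance-to-agreement criterion in the units of *_pos."""
    d_norm = dose_tol * max(ref_dose)
    passed = 0
    for xr, dr in zip(ref_pos, ref_dose):
        # gamma = minimum combined distance over all evaluated points
        gamma = min(math.hypot((xr - xe) / dist_tol, (dr - de) / d_norm)
                    for xe, de in zip(eval_pos, eval_dose))
        passed += gamma <= 1.0
    return passed / len(ref_pos)

# Identical profiles pass everywhere; shifts beyond the DTA criterion fail
pos = [float(i) for i in range(50)]                        # hypothetical mm grid
dose = [1.0 if 15 <= i < 35 else 0.0 for i in range(50)]   # idealized 20 mm field
print(gamma_pass_rate(pos, dose, pos, dose))               # 1.0
```

A profile shifted by 2 mm still passes everywhere under 3%/3 mm (the distance term stays below one), while a 5 mm shift fails at the field edges, which is the behavior the quoted pass rates summarize.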
EDITORIAL: Special issue on time scale algorithms
Matsakis, Demetrios; Tavella, Patrizia
2008-12-01
This special issue of Metrologia presents selected papers from the Fifth International Time Scale Algorithm Symposium (VITSAS), including some of the tutorials presented on the first day. The symposium was attended by 76 persons, from every continent except Antarctica, by students as well as senior scientists, and hosted by the Real Instituto y Observatorio de la Armada (ROA) in San Fernando, Spain, whose staff further enhanced their nation's high reputation for hospitality. Although a timescale can be simply defined as a weighted average of clocks, whose purpose is to measure time better than any individual clock, timescale theory has long been and continues to be a vibrant field of research that has both followed and helped to create advances in the art of timekeeping. There is no perfect timescale algorithm, because every one embodies a compromise involving user needs. Some users wish to generate a constant frequency, perhaps not necessarily one that is well-defined with respect to the definition of a second. Other users might want a clock which is as close to UTC or a particular reference clock as possible, or perhaps wish to minimize the maximum variation from that standard. In contrast to the steered timescales that would be required by those users, other users may need free-running timescales, which are independent of external information. While no algorithm can meet all these needs, every algorithm can benefit from some form of tuning. The optimal tuning, and even the optimal algorithm, can depend on the noise characteristics of the frequency standards, or of their comparison systems, the most precise and accurate of which are currently Two Way Satellite Time and Frequency Transfer (TWSTFT) and GPS carrier phase time transfer. The interest in time scale algorithms and its associated statistical methodology began around 40 years ago when the Allan variance appeared and when the metrological institutions started realizing ensemble atomic time using more than
The WARP Code: Modeling High Intensity Ion Beams
International Nuclear Information System (INIS)
Grote, D P; Friedman, A; Vay, J L; Haber, I
2004-01-01
The Warp code, developed for heavy-ion driven inertial fusion energy studies, is used to model high intensity ion (and electron) beams. Significant capability has been incorporated in Warp, allowing nearly all sections of an accelerator to be modeled, beginning with the source. Warp has as its core an explicit, three-dimensional, particle-in-cell model. Alongside this is a rich set of tools for describing the applied fields of the accelerator lattice, and embedded conducting surfaces (which are captured at sub-grid resolution). Also incorporated are models with reduced dimensionality: an axisymmetric model and a transverse ''slice'' model. The code takes advantage of modern programming techniques, including object orientation, parallelism, and scripting (via Python). It is at the forefront in the use of the computational technique of adaptive mesh refinement, which has been particularly successful in the area of diode and injector modeling, both steady-state and time-dependent. In the presentation, some of the major aspects of Warp will be overviewed, especially those that could be useful in modeling ECR sources. Warp has been benchmarked against both theory and experiment. Recent results will be presented showing good agreement of Warp with experimental results from the STS500 injector test stand. Additional information can be found on the web page http://hif.lbl.gov/theory/WARP( ) summary.html
The calculation of warping spools of warp-knitting machines
Directory of Open Access Journals (Sweden)
Vitaliy V. Chaban
2014-12-01
The paper is devoted to the development of the scientific basis of knitting machine design, in particular the calculation of warping spools of warp-knitting machines. A method for calculating the operating parameters of warping spools and the winding mode is offered. The formula obtained defines the relationship between the parameters of the threads wound on a warping spool, their tension, the structural dimensions of the spool barrel, and the winding diameter. For a given spool design and a given permissible tension of the barrel material, the formula allows one to determine the maximum tension of the threads as they are wound onto the spool. From the same formula, the safe winding diameter can be calculated for a given thread tension during winding.
On constructing optimistic simulation algorithms for the discrete event system specification
International Nuclear Information System (INIS)
Nutaro, James J.
2008-01-01
This article describes a Time Warp simulation algorithm for discrete event models that are described in terms of the Discrete Event System Specification (DEVS). The article shows how the total state transition and total output function of a DEVS atomic model can be transformed into an event processing procedure for a logical process. A specific Time Warp algorithm is constructed around this logical process, and it is shown that the algorithm correctly simulates a DEVS coupled model that consists entirely of interacting atomic models. The simulation algorithm is presented abstractly; it is intended to provide a basis for implementing efficient and scalable parallel algorithms that correctly simulate DEVS models.
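The essence of a Time Warp logical process (optimistic execution, state saving, rollback on stragglers) can be sketched in miniature. This toy omits anti-messages, global virtual time, and fossil collection, and is not Nutaro's DEVS construction itself:

```python
import copy

class LogicalProcess:
    """Toy Time Warp logical process: processes timestamped events
    optimistically and rolls back when a straggler arrives."""
    def __init__(self, state):
        self.state = state
        self.lvt = 0.0            # local virtual time
        self.log = []             # (timestamp, event, state before the event)

    def handler(self, state, event):
        return state + [event]    # demo transition: append the event

    def receive(self, t, event):
        if t < self.lvt:          # straggler: roll back past time t
            undone = []
            while self.log and self.log[-1][0] >= t:
                ts, ev, saved = self.log.pop()
                self.state = saved
                undone.append((ts, ev))
            # re-execute the straggler plus the undone events in timestamp order
            for ts, ev in sorted([(t, event)] + undone):
                self._execute(ts, ev)
        else:
            self._execute(t, event)

    def _execute(self, t, event):
        self.log.append((t, event, copy.deepcopy(self.state)))
        self.state = self.handler(self.state, event)
        self.lvt = t

lp = LogicalProcess([])
for t, ev in [(1.0, "a"), (3.0, "c"), (2.0, "b")]:  # "b" arrives late
    lp.receive(t, ev)
print(lp.state)  # ['a', 'b', 'c']: same result as in-timestamp-order execution
```

A real Time Warp kernel must also cancel any output messages sent during the rolled-back interval (anti-messages), which is where the paper's careful treatment of the DEVS total transition function comes in.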
Quantum effects in warp drives
Directory of Open Access Journals (Sweden)
Finazzi Stefano
2013-09-01
Warp drives are interesting configurations that, at least theoretically, provide a way to travel at superluminal speed. Unfortunately, several issues seem to forbid their realization. First, a huge amount of exotic matter is required to build them. Second, the presence of quantum fields propagating in superluminal warp-drive geometries makes them semiclassically unstable. Indeed, a Hawking-like high-temperature flux of particles is generated inside the warp-drive bubble, which causes an exponential growth of the energy density measured at the front wall of the bubble by freely falling observers. Moreover, superluminal warp drives remain unstable even if the Lorentz symmetry is broken by the introduction of regulating higher order terms in the Lagrangian of the quantum field. If the dispersion relation of the quantum field is subluminal, a black-hole laser phenomenon yields an exponential amplification of the emitted flux. If it is superluminal, infrared effects cause a linear growth of this flux.
Han, Renmin; Li, Yu; Wang, Sheng; Gao, Xin
2017-01-01
Long-reads, point-of-care, and PCR-free are the promises brought by nanopore sequencing. Among various steps in nanopore data analysis, the global mapping between the raw electrical current signal sequence and the expected signal sequence from
Conformal boundaries of warped products
DEFF Research Database (Denmark)
Kokkendorff, Simon Lyngby
2006-01-01
In this note we prove a result on how to determine the conformal boundary of a type of warped product of two length spaces in terms of the individual conformal boundaries. In the situation that we treat, the warping and conformal distortion functions are functions of distance to a base point. The result is applied to produce examples of CAT(0)-spaces, where the conformal and ideal boundaries differ in interesting ways.
Closed timelike curves in asymmetrically warped brane universes
Päs, Heinrich; Pakvasa, Sandip; Dent, James; Weiler, Thomas J.
2009-08-01
In asymmetrically-warped spacetimes different warp factors are assigned to space and to time. We discuss causality properties of these warped brane universes and argue that scenarios with two extra dimensions may allow for timelike curves which can be closed via paths in the extra-dimensional bulk. In particular, necessary and sufficient conditions on the metric for the existence of closed timelike curves are presented. We find a six-dimensional warped metric which satisfies the CTC conditions, and where the null, weak and dominant energy conditions are satisfied on the brane (although only the former remains satisfied in the bulk). Such scenarios are interesting, since they open the possibility of experimentally testing the chronology protection conjecture by manipulating on our brane initial conditions of gravitons or hypothetical gauge-singlet fermions (“sterile neutrinos”) which then propagate in the extra dimensions.
Namaste (counterbalancing) technique: Overcoming warping in costal cartilage.
Agrawal, Kapil S; Bachhav, Manoj; Shrotriya, Raghav
2015-01-01
Indian noses are broader and lack projection as compared to other populations, hence very often need augmentation, that too by large volume. Costal cartilage remains the material of choice in large volume augmentations and repair of complex primary and secondary nasal deformities. One major disadvantage of costal cartilage grafts (CCG) which offsets all other advantages is the tendency to warp and become distorted over a period of time. We propose a simple technique to overcome this menace of warping. We present the data of 51 patients of rhinoplasty done using CCG with counterbalancing technique over a period of 4 years. No evidence of warping was found in any patient up to a maximum follow-up period of 4 years. Counterbalancing is a useful technique to overcome the problem of warping. It gives liberty to utilize even unbalanced cartilage safely to provide desired shape and use the cartilage without any wastage.
Correlation functions of warped CFT
Song, Wei; Xu, Jianfei
2018-04-01
Warped conformal field theory (WCFT) is a two dimensional quantum field theory whose local symmetry algebra consists of a Virasoro algebra and a U(1) Kac-Moody algebra. In this paper, we study correlation functions for primary operators in WCFT. Similar to conformal symmetry, warped conformal symmetry is very constraining. The form of the two and three point functions is determined by the global warped conformal symmetry, while the four point functions can be determined up to an arbitrary function of the cross ratio. The warped conformal bootstrap equations are constructed by formulating the notion of crossing symmetry. In the large central charge limit, four point functions can be decomposed into global warped conformal blocks, which can be solved exactly. Furthermore, we revisit the scattering problem in warped AdS spacetime (WAdS), and give a prescription on how to match the bulk result to a WCFT retarded Green's function. Our result is consistent with the conjectured holographic dualities between WCFT and WAdS.
International Nuclear Information System (INIS)
Csaki, Csaba; Grossman, Yuval; Tanedo, Philip; Tsai, Yuhsin
2011-01-01
We present an analysis of the loop-induced magnetic dipole operator in the Randall-Sundrum model of a warped extra dimension with anarchic bulk fermions and an IR brane-localized Higgs. These operators are finite at one-loop order and we explicitly calculate the branching ratio for μ→eγ using the mixed position/momentum space formalism. The particular bound on the anarchic Yukawa and Kaluza-Klein (KK) scales can depend on the flavor structure of the anarchic matrices. It is possible for a generic model to either be ruled out or unaffected by these bounds without any fine-tuning. We quantify how these models realize this surprising behavior. We also review tree-level lepton flavor bounds in these models and show that these are on the verge of tension with the μ→eγ bounds from typical models with a 3 TeV Kaluza-Klein scale. Further, we illuminate the nature of the one-loop finiteness of these diagrams and show how to accurately determine the degree of divergence of a five-dimensional loop diagram using both the five-dimensional and KK formalism. This power counting can be obfuscated in the four-dimensional Kaluza-Klein formalism and we explicitly point out subtleties that ensure that the two formalisms agree. Finally, we remark on the existence of a perturbative regime in which these one-loop results give the dominant contribution.
Wormholes, warp drives and energy conditions
2017-01-01
Top researchers in the field of gravitation present the state-of-the-art topics outlined in this book, ranging from the stability of rotating wormholes solutions supported by ghost scalar fields, modified gravity applied to wormholes, the study of novel semi-classical and nonlinear energy conditions, to the applications of quantum effects and the superluminal version of the warp drive in modified spacetime. Based on Einstein's field equations, this cutting-edge research area explores the more far-fetched theoretical outcomes of General Relativity and relates them to quantum field theory. This includes quantum energy inequalities, flux energy conditions, and wormhole curvature, and sheds light on not just the theoretical physics but also on the possible applications to warp drives and time travel. This book extensively explores the physical properties and characteristics of these 'exotic spacetimes,' describing in detail the general relativistic geometries that generate closed timelike curves.
The WARP Code: Modeling High Intensity Ion Beams
International Nuclear Information System (INIS)
Grote, David P.; Friedman, Alex; Vay, Jean-Luc; Haber, Irving
2005-01-01
The Warp code, developed for heavy-ion driven inertial fusion energy studies, is used to model high intensity ion (and electron) beams. Significant capability has been incorporated in Warp, allowing nearly all sections of an accelerator to be modeled, beginning with the source. Warp has as its core an explicit, three-dimensional, particle-in-cell model. Alongside this is a rich set of tools for describing the applied fields of the accelerator lattice, and embedded conducting surfaces (which are captured at sub-grid resolution). Also incorporated are models with reduced dimensionality: an axisymmetric model and a transverse ''slice'' model. The code takes advantage of modern programming techniques, including object orientation, parallelism, and scripting (via Python). It is at the forefront in the use of the computational technique of adaptive mesh refinement, which has been particularly successful in the area of diode and injector modeling, both steady-state and time-dependent. In the presentation, some of the major aspects of Warp will be overviewed, especially those that could be useful in modeling ECR sources. Warp has been benchmarked against both theory and experiment. Recent results will be presented showing good agreement of Warp with experimental results from the STS500 injector test stand
An explicit multi-time-stepping algorithm for aerodynamic flows
Niemann-Tuitman, B.E.; Veldman, A.E.P.
1997-01-01
An explicit multi-time-stepping algorithm with applications to aerodynamic flows is presented. In the algorithm, in different parts of the computational domain different time steps are taken, and the flow is synchronized at the so-called synchronization levels. The algorithm is validated for
Continuous Time Dynamic Contraflow Models and Algorithms
Directory of Open Access Journals (Sweden)
Urmila Pyakurel
2016-01-01
Research on the evacuation planning problem is driven by the very challenging emergency issues arising from large-scale natural or man-made disasters. Evacuation planning is the process of shifting the maximum number of evacuees from the disastrous areas to safe destinations as quickly and efficiently as possible. Contraflow is a widely accepted model for good solutions of the evacuation planning problem: it increases the outbound road capacity by reversing the direction of roads towards the safe destination. The continuous dynamic contraflow problem sends the maximum flow, as a flow rate, from the source to the sink at every moment of time. We propose a mathematical model for the continuous dynamic contraflow problem. We present efficient algorithms to solve the maximum continuous dynamic contraflow and quickest continuous contraflow problems on single-source single-sink arbitrary networks, and the continuous earliest arrival contraflow problem on single-source single-sink series-parallel networks with undefined supply and demand. We also introduce an approximate solution for the continuous earliest arrival contraflow problem on two-terminal arbitrary networks.
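The effect of contraflow on network capacity can be illustrated by comparing maximum flow before and after lane reversal. The sketch below uses a plain Edmonds-Karp max-flow routine on an invented two-road network; the paper's continuous dynamic-flow models are considerably richer:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow; cap is a dict-of-dicts of edge capacities."""
    nodes = set(cap) | {v for u in cap for v in cap[u]}
    flow = {u: {v: 0 for v in nodes} for u in nodes}
    c = {u: {v: cap.get(u, {}).get(v, 0) for v in nodes} for u in nodes}
    total = 0
    while True:
        parent = {s: None}                 # BFS for a shortest augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in nodes:
                if v not in parent and c[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total
        path, v = [], t                    # reconstruct the path and augment
        while parent[v] is not None:
            path.append((parent[v], v)); v = parent[v]
        aug = min(c[u][v] - flow[u][v] for u, v in path)
        for u, v in path:
            flow[u][v] += aug; flow[v][u] -= aug
        total += aug

# Hypothetical road network: capacities in vehicles/hour (invented numbers)
roads = {"src": {"a": 10}, "a": {"sink": 5}, "sink": {"a": 5}}
print(max_flow(roads, "src", "sink"))  # 5: only one outbound lane

# Contraflow: reverse the inbound road, adding its capacity to the outbound arc
contra = {"src": {"a": 10}, "a": {"sink": 10}}
print(max_flow(contra, "src", "sink"))  # 10: both lanes now head to safety
```

This static comparison captures the core idea; the continuous dynamic versions in the paper additionally track how the flow rate evolves at every moment of time.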
Superluminal warp drives are semiclassically unstable
Energy Technology Data Exchange (ETDEWEB)
Finazzi, S; Liberati, S [SISSA, via Beirut 2-4, Trieste 34151, Italy and INFN sezione di Trieste (Italy); Barcelo, C, E-mail: finazzi@sissa.i, E-mail: liberati@sissa.i, E-mail: carlos@iaa.e [Instituto de Astrofisica de AndalucIa, CSIC, Camino Bajo de Huetor 50, 18008 Granada (Spain)
2010-04-01
Warp drives are very interesting configurations of General Relativity: they provide a way to travel at superluminal speeds, albeit at the cost of requiring exotic matter to build them. Even if one succeeded in providing the necessary exotic matter, it would still be necessary to check whether they would survive the switching on of quantum effects. Semiclassical corrections to warp-drive geometries created out of an initially flat spacetime have been analyzed in a previous work by the present authors at special locations, close to the wall of the bubble and in its center. Here, we present an exact numerical analysis of the renormalized stress-energy tensor (RSET) in the whole bubble. We find that the RSET grows exponentially in time close to the front wall of the superluminal bubble, after some transient terms have disappeared, hence strongly supporting our previous conclusion that warp-drive geometries are unstable against semiclassical back-reaction. This result seems to implement the chronology protection conjecture, forbidding the set-up of a structure potentially dangerous for causality.
Sorting on STAR. [CDC computer algorithm timing comparison
Stone, H. S.
1978-01-01
Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)-squared as compared with a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
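Batcher's method mentioned above is built from a fixed network of compare-exchange operations, which is what makes it vectorize well despite its O(N (log N)²) comparison count. A serial sketch of odd-even mergesort for power-of-two input lengths (on a vector machine like STAR, each stride-r pass would map to vector operations):

```python
def compare_exchange(a, i, j):
    if a[i] > a[j]:
        a[i], a[j] = a[j], a[i]

def oddeven_merge(a, lo, n, r):
    """Merge two sorted halves of a[lo:lo+n] with stride-r compare-exchanges."""
    step = r * 2
    if step < n:
        oddeven_merge(a, lo, n, step)          # merge the even subsequence
        oddeven_merge(a, lo + r, n, step)      # merge the odd subsequence
        for i in range(lo + r, lo + n - r, step):
            compare_exchange(a, i, i + r)      # one data-independent pass
    else:
        compare_exchange(a, lo, lo + r)

def batcher_sort(a, lo=0, n=None):
    """Batcher's odd-even mergesort; n must be a power of two."""
    if n is None:
        n = len(a)
    if n > 1:
        m = n // 2
        batcher_sort(a, lo, m)
        batcher_sort(a, lo + m, m)
        oddeven_merge(a, lo, n, 1)

data = [7, 3, 15, 1, 9, 12, 0, 6, 2, 14, 8, 11, 5, 13, 10, 4]
batcher_sort(data)
print(data == sorted(data))  # True
```

Because the comparison pattern is fixed in advance and independent of the data, the inner loop has no branches that depend on element values, which is exactly the property the timing study exploits.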
Dynamic Programming Algorithms in Speech Recognition
Directory of Open Access Journals (Sweden)
Titus Felix FURTUNA
2008-01-01
Full Text Available In an isolated-word speech recognition system, recognition requires comparing the input word signal against the various words of the dictionary. The problem can be solved efficiently by a dynamic comparison algorithm whose goal is to put the temporal scales of the two words into optimal correspondence. One algorithm of this type is Dynamic Time Warping. This paper presents two implementation alternatives of the algorithm designed for recognition of isolated words.
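The dynamic comparison described above is classical dynamic time warping. A minimal textbook sketch (not the paper's implementation) for two 1-D sequences:

```python
import numpy as np

def dtw_distance(x, y):
    """Classical dynamic time warping between two 1-D sequences.

    Returns the minimal cumulative alignment cost; O(len(x) * len(y))."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # best of diagonal match, insertion, and deletion moves
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]

# Two versions of the same "word" on different time scales
a = [0, 1, 2, 1, 0]
b = [0, 0, 1, 2, 2, 1, 0]
print(dtw_distance(a, b))  # 0.0: the warp aligns them exactly
```

Because the warping path may repeat samples, the stretched sequence `b` aligns with `a` at zero cost even though the lengths differ.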
Algorithms for Brownian first-passage-time estimation
Adib, Artur B.
2009-09-01
A class of algorithms in discrete space and continuous time for Brownian first-passage-time estimation is considered. A simple algorithm is derived that yields exact mean first-passage times (MFPTs) for linear potentials in one dimension, regardless of the lattice spacing. When applied to nonlinear potentials and/or higher spatial dimensions, numerical evidence suggests that this algorithm yields MFPT estimates that either outperform or rival Langevin-based (discrete time and continuous space) estimates.
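The discrete-space, continuous-time setting can be illustrated with a generic example. The sketch below is not the paper's algorithm: it is a 1-D continuous-time random walk (reflecting at 0, absorbing at N, hop rate k per direction, flat potential), whose Monte Carlo MFPT estimate can be checked against an exact tridiagonal solve:

```python
import random
import numpy as np

def mfpt_exact(n_sites, k=1.0):
    """Exact MFPT from site 0 to site n_sites (n_sites >= 2) for a
    continuous-time walk on 0..n_sites, reflecting at 0, rate k per hop."""
    N = n_sites
    A = np.zeros((N, N))
    b = np.zeros(N)
    # T_0 - T_1 = 1/k  (only a rightward hop at the reflecting wall)
    A[0, 0], A[0, 1], b[0] = 1.0, -1.0, 1.0 / k
    # T_x - (T_{x-1} + T_{x+1})/2 = 1/(2k) for interior sites; T_N = 0
    for x in range(1, N):
        A[x, x] = 1.0
        A[x, x - 1] = -0.5
        if x + 1 < N:
            A[x, x + 1] = -0.5
        b[x] = 1.0 / (2 * k)
    return np.linalg.solve(A, b)[0]

def mfpt_mc(n_sites, k=1.0, trials=4000, seed=1):
    """Monte Carlo MFPT estimate for the same walk."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x, t = 0, 0.0
        while x < n_sites:
            out_rate = k if x == 0 else 2 * k
            t += rng.expovariate(out_rate)   # exponential waiting time
            x += 1 if (x == 0 or rng.random() < 0.5) else -1
        total += t
    return total / trials
```

For `n_sites=4`, `k=1`, the exact MFPT is 10, and the Monte Carlo estimate converges to it.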
Dynamics of warped flux compactifications
International Nuclear Information System (INIS)
Shiu, Gary; Underwood, Bret; Torroba, Gonzalo; Douglas, Michael R.
2008-01-01
We discuss the four dimensional effective action for type IIB flux compactifications, and obtain the quadratic terms taking warp effects into account. The analysis includes both the 4-d zero modes and their KK excitations, which become light at large warping. We identify an 'axial' type gauge for the supergravity fluctuations, which makes the four dimensional degrees of freedom manifest. The other key ingredient is the existence of constraints coming from the ten dimensional equations of motion. Applying these conditions leads to considerable simplifications, enabling us to obtain the low energy lagrangian explicitly. In particular, the warped Kaehler potential for metric moduli is computed and it is shown that there are no mixings with the KK fluctuations and the result differs from previous proposals. The four dimensional potential contains a generalization of the Gukov-Vafa-Witten term, plus usual mass terms for KK modes.
An algorithm for learning real-time automata
Verwer, S.E.; De Weerdt, M.M.; Witteveen, C.
2007-01-01
We describe an algorithm for learning simple timed automata, known as real-time automata. The transitions of real-time automata can have a temporal constraint on the time of occurrence of the current symbol relative to the previous symbol. The learning algorithm is similar to the red-blue fringe
Modeling laser-driven electron acceleration using WARP with Fourier decomposition
Energy Technology Data Exchange (ETDEWEB)
Lee, P., E-mail: patrick.lee@u-psud.fr [LPGP, CNRS, Univ. Paris-Sud, Université Paris-Saclay, 91405 Orsay (France); Audet, T.L. [LPGP, CNRS, Univ. Paris-Sud, Université Paris-Saclay, 91405 Orsay (France); Lehe, R.; Vay, J.-L. [Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Maynard, G.; Cros, B. [LPGP, CNRS, Univ. Paris-Sud, Université Paris-Saclay, 91405 Orsay (France)
2016-09-01
WARP is used with the recent implementation of the Fourier decomposition algorithm to model laser-driven electron acceleration in plasmas. Simulations were carried out to analyze the experimental results obtained on ionization-induced injection in a gas cell. The simulated results are in good agreement with the experimental ones, confirming the ability of the code to take into account the physics of electron injection and reduce calculation time. We present a detailed analysis of the laser propagation, the plasma wave generation and the electron beam dynamics.
Saving time in a space-efficient simulation algorithm
Markovski, J.
2011-01-01
We present an efficient algorithm for computing the simulation preorder and equivalence for labeled transition systems. The algorithm improves an existing space-efficient algorithm and improves its time complexity by employing a variant of the stability condition and exploiting properties of the
A Dynamic Fuzzy Cluster Algorithm for Time Series
Directory of Open Access Journals (Sweden)
Min Ji
2013-01-01
Full Text Available This paper proposes a dynamic fuzzy cluster algorithm for clustering time series by introducing the definition of key points and improving the FCM algorithm. The proposed algorithm works by determining those time series whose class labels are vague and further partitioning them into different clusters over time. The main advantage of this approach compared with other existing algorithms is that the property of some time series belonging to different clusters over time can be partially revealed. Results from simulation-based experiments on geographical data demonstrate excellent performance, and the desired results have been obtained. The proposed algorithm can be applied to solve other clustering problems in data mining.
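As background to the improved FCM variant, here is a plain fuzzy c-means sketch (the baseline algorithm only, not the paper's key-point extension). Each row of the membership matrix U gives a point's graded membership across clusters, which is what lets a series belong partially to several clusters:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means.  X: (n, d) data, c: clusters, m: fuzzifier.
    Returns (centers, U) where U[i, j] is membership of point i in cluster j."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)            # rows sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))            # standard FCM membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U
```

On two well-separated groups the centers converge to the group means, while points near the boundary receive intermediate memberships.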
An explicit multi-time-stepping algorithm for aerodynamic flows
Niemann-Tuitman, B.E.; Veldman, A.E.P.
1997-01-01
An explicit multi-time-stepping algorithm with applications to aerodynamic flows is presented. In the algorithm, different time steps are taken in different parts of the computational domain, and the flow is synchronized at the so-called synchronization levels. The algorithm is validated for aerodynamic turbulent flows. For two-dimensional flows, speedups on the order of five with respect to single time stepping are obtained.
Effectiveness of firefly algorithm based neural network in time series ...
African Journals Online (AJOL)
Effectiveness of firefly algorithm based neural network in time series forecasting. ... In the experiments, three well known time series were used to evaluate the performance. Results obtained were compared with ... Keywords: Time series, Artificial Neural Network, Firefly Algorithm, Particle Swarm Optimization, Overfitting ...
International Nuclear Information System (INIS)
Wang Shijun; Yao Jianhua; Liu Jiamin; Petrick, Nicholas; Van Uitert, Robert L.; Periaswamy, Senthil; Summers, Ronald M.
2009-01-01
Purpose: In computed tomographic colonography (CTC), a patient will be scanned twice--once supine and once prone--to improve the sensitivity for polyp detection. To assist radiologists in CTC reading, in this paper we propose an automated method for colon registration from supine and prone CTC scans. Methods: We propose a new colon centerline registration method for prone and supine CTC scans using correlation optimized warping (COW) and canonical correlation analysis (CCA) based on the anatomical structure of the colon. Four anatomical salient points on the colon are first automatically distinguished. Then correlation optimized warping is applied to the segments defined by the anatomical landmarks to improve the global registration based on local correlation of segments. The COW method was modified by embedding canonical correlation analysis to allow multiple features along the colon centerline to be used in our implementation. Results: We tested the COW algorithm on a CTC data set of 39 patients with 39 polyps (19 training and 20 test cases) to verify the effectiveness of the proposed COW registration method. Experimental results on the test set show that the COW method significantly reduces the average estimation error in polyp location between supine and prone scans by 67.6%, from 46.27±52.97 mm to 14.98±11.41 mm, compared to the normalized distance along the colon centerline algorithm (p<0.01). Conclusions: The proposed COW algorithm is more accurate for colon centerline registration than the normalized distance along the colon centerline method and the dynamic time warping method. Comparison results showed that the feature combination of z-coordinate and curvature achieved the lowest registration error compared to the other feature combinations used by COW. The proposed method is tolerant to centerline errors because anatomical landmarks help prevent the propagation of errors across the entire colon centerline.
Alcubierre's warp drive: Problems and prospects
International Nuclear Information System (INIS)
Broeck, Chris van den
2000-01-01
Alcubierre's warp drive geometry seemingly represents the ultimate dream for interstellar travel: there is no speed limit, the passengers are weightless whatever the acceleration, and there is no time dilation. However, in its original form, the proposal suffers from several fatal flaws, such as unreasonably high energies, energy moving in a locally spacelike direction, and a violation of the energy conditions of classical Einstein gravity. I present a possible solution for one of these problems, and I suggest ways to at least soften the others.
Modulus stabilization in a non-flat warped braneworld scenario
Energy Technology Data Exchange (ETDEWEB)
Banerjee, Indrani [S.N. Bose National Centre for Basic Sciences, Department of Astrophysics and Cosmology, Kolkata (India); SenGupta, Soumitra [Indian Association for the Cultivation of Science, Department of Theoretical Physics, Kolkata (India)
2017-05-15
The stability of the modular field in a warped brane world scenario has been a subject of interest for a long time. Goldberger and Wise (GW) proposed a mechanism to achieve this by invoking a massive scalar field in the bulk space-time neglecting the back-reaction. In this work, we examine the possibility of stabilizing the modulus without bringing about any external scalar field. We show that instead of flat 3-branes as considered in Randall-Sundrum (RS) warped braneworld model, if one considers a more generalized version of warped geometry with de Sitter 3-brane, then the brane vacuum energy automatically leads to a modulus potential with a metastable minimum. Our result further reveals that in this scenario the gauge hierarchy problem can also be resolved for an appropriate choice of the brane's cosmological constant. (orig.)
Constraining the age of the NGC 4565 H I disk WARP: Determining the origin of gas WARPS
Energy Technology Data Exchange (ETDEWEB)
Radburn-Smith, David J.; Dalcanton, Julianne J.; Stilp, Adrienne M. [Department of Astronomy, University of Washington, Seattle, WA 98195 (United States); De Jong, Roelof S.; Streich, David [Leibniz-Institut für Astrophysik Potsdam, D-14482 Potsdam (Germany); Bell, Eric F.; Monachesi, Antonela [Department of Astronomy, University of Michigan, Ann Arbor, MI 48109 (United States); Dolphin, Andrew E. [Raytheon, 1151 East Hermans Road, Tucson, AZ 85756 (United States); Holwerda, Benne W. [European Space Agency, ESTEC, 2200 AG Noordwijk (Netherlands); Bailin, Jeremy [Department of Physics and Astronomy, University of Alabama, Tuscaloosa, AL 35487 (United States)
2014-01-01
We have mapped the distribution of young and old stars in the gaseous H I warp of NGC 4565. We find a clear correlation of young stars (<600 Myr) with the warp but no coincident old stars (>1 Gyr), which places an upper limit on the age of the structure. The formation rate of the young stars, which increased ∼300 Myr ago relative to the surrounding regions, is (6.3 +2.5/−1.5) × 10^−5 M_☉ yr^−1 kpc^−2. This implies a ∼60 ± 20 Gyr depletion time of the H I warp, similar to the timescales calculated for the outer H I disks of nearby spiral galaxies. While some stars associated with the warp fall into the asymptotic giant branch (AGB) region of the color-magnitude diagram, where stars could be as old as 1 Gyr, further investigation suggests that they may be interlopers rather than real AGB stars. We discuss the implications of these age constraints for the formation of H I warps and the gas fueling of disk galaxies.
Distributed Time Synchronization Algorithms and Opinion Dynamics
Manita, Anatoly; Manita, Larisa
2018-01-01
We propose new deterministic and stochastic models for synchronization of clocks in nodes of distributed networks. An external accurate time server is used to ensure convergence of the node clocks to the exact time. These systems have much in common with mathematical models of opinion formation in multiagent systems. There is a direct analogy between the time server/node clocks pair in asynchronous networks and the leader/follower pair in the context of social network models.
Human low vision image warping - Channel matching considerations
Juday, Richard D.; Smith, Alan T.; Loshin, David S.
1992-01-01
We are investigating the possibility that a video image may productively be warped prior to presentation to a low vision patient. This could form part of a prosthesis for certain field defects. We have done preliminary quantitative studies on some notions that may be valid in calculating the image warpings. We hope the results will help make the best use of the time to be spent with human subjects, by guiding the selection of parameters and the ranges to be investigated. We liken a warping optimization to opening the largest number of spatial channels between the pixels of an input imager and resolution cells in the visual system. Some important effects that are not quantified will require human evaluation, such as local 'squashing' of the image, taken as the ratio of eigenvalues of the Jacobian of the transformation. The results indicate that the method shows quantitative promise. These results have identified some geometric transformations to evaluate further with human subjects.
Time-Delay System Identification Using Genetic Algorithm
DEFF Research Database (Denmark)
Yang, Zhenyu; Seested, Glen Thane
2013-01-01
Due to the unknown dead-time coefficient, time-delay system identification turns out to be a non-convex optimization problem. This paper investigates the identification of a simple time-delay system, named First-Order-Plus-Dead-Time (FOPDT), by using the Genetic Algorithm (GA) technique.
Energy conservation in Newmark based time integration algorithms
DEFF Research Database (Denmark)
Krenk, Steen
2006-01-01
Energy balance equations are established for the Newmark time integration algorithm, and for the derived algorithms with algorithmic damping introduced via averaging, the so-called α-methods. The energy balance equations form a sequence applicable to: Newmark integration of the undamped equations of motion, an extended form including structural damping, and finally the generalized form including structural as well as algorithmic damping. In all three cases the expression for energy, appearing in the balance equation, is the mechanical energy plus some additional terms generated by the discretization.
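The undamped case discussed above can be illustrated with a minimal sketch (not the paper's formulation). With the average-acceleration parameters β = 1/4, γ = 1/2, the Newmark scheme conserves the mechanical energy E = (m v² + k u²)/2 of a linear oscillator exactly, up to roundoff:

```python
def newmark_sdof(m, k, u0, v0, dt, nsteps, beta=0.25, gamma=0.5):
    """Newmark integration of the undamped, unforced SDOF system
    m*u'' + k*u = 0.  Returns the final displacement and velocity."""
    u, v = u0, v0
    a = -k * u / m                                   # initial acceleration
    for _ in range(nsteps):
        # predictor step
        u_pred = u + dt * v + dt * dt * (0.5 - beta) * a
        v_pred = v + dt * (1.0 - gamma) * a
        # solve m*a_new + k*u_new = 0 with u_new = u_pred + beta*dt^2*a_new
        a = -k * u_pred / (m + k * beta * dt * dt)
        u = u_pred + beta * dt * dt * a
        v = v_pred + gamma * dt * a
    return u, v
```

With algorithmic damping (γ > 1/2) the same loop dissipates energy, which is the "additional terms" effect the balance equations quantify.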
Atlas warping for brain morphometry
Machado, Alexei M. C.; Gee, James C.
1998-06-01
In this work, we describe an automated approach to morphometry based on spatial normalizations of the data, and demonstrate its application to the analysis of gender differences in the human corpus callosum. The purpose is to describe a population by a reduced and representative set of variables, from which a prior model can be constructed. Our approach is rooted in the assumption that individual anatomies can be considered as quantitative variations on a common underlying qualitative plane. We can therefore imagine that a given individual's anatomy is a warped version of some referential anatomy, also known as an atlas. The spatial warps which transform a labeled atlas into anatomic alignment with a population yield immediate knowledge about organ size and shape in the group. Furthermore, variation within the set of spatial warps is directly related to the anatomic variation among the subjects. Specifically, the shape statistics--mean and variance of the mappings--for the population can be calculated in a special basis, and an eigendecomposition of the variance performed to identify the most significant modes of shape variation. The results obtained with the corpus callosum study confirm the existence of substantial anatomical differences between males and females, as reported in previous experimental work.
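The "mean and variance of the mappings" step can be illustrated generically: treating each subject's warp as a flattened displacement vector, an eigendecomposition of the sample covariance yields the principal modes of shape variation. A hypothetical sketch under that assumption, not the authors' special-basis construction:

```python
import numpy as np

def shape_modes(warps):
    """PCA of a set of warp (displacement) fields flattened to vectors.
    warps: (n_subjects, n_params).  Returns (mean, modes, variances),
    where modes[i] is the i-th principal mode of shape variation."""
    W = np.asarray(warps, dtype=float)
    mean = W.mean(axis=0)
    # SVD of the centered data gives the eigendecomposition of the covariance
    U, s, Vt = np.linalg.svd(W - mean, full_matrices=False)
    var = s ** 2 / (len(W) - 1)      # variance captured by each mode
    return mean, Vt, var
```

A population whose warps vary along one direction yields a single dominant mode; the leading modes summarize the most significant anatomical variation in the group.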
Vehicle routing problem with time windows using natural inspired algorithms
Pratiwi, A. B.; Pratama, A.; Sa’diyah, I.; Suprajitno, H.
2018-03-01
The distribution of goods needs a strategy that minimizes the total cost of operational activities, subject to several constraints: the capacity of the vehicles and the service time windows of the customers. This Vehicle Routing Problem with Time Windows (VRPTW) is a complex constrained problem. This paper proposes nature-inspired algorithms for dealing with the constraints of VRPTW, involving the Bat Algorithm and Cat Swarm Optimization. The Bat Algorithm is hybridized with Simulated Annealing: the worst solution of the Bat Algorithm is replaced by the solution from Simulated Annealing. Cat Swarm Optimization, an algorithm based on the behavior of cats, is improved using the Crow Search Algorithm for simpler and faster convergence. From the computational results, these algorithms give good performance in finding the minimal total distance. A higher population size yields better computational performance. The improved Cat Swarm Optimization with Crow Search gives better performance than the hybridization of Bat Algorithm and Simulated Annealing when dealing with big data.
Warped Extra-Dimensional Opportunities and Signatures (1/3)
CERN. Geneva
2008-01-01
I plan to discuss ways of searching for warped geometry and other extra-dimensional scenarios, with emphasis on the general lessons for search strategies. We will consider RS geometry on the brane and in the bulk, as well as possible black hole or quantum gravity signatures. If time permits, we will also consider fermion masses and/or precision Higgs measurements.
Warped Extra-Dimensional Opportunities and Signatures (3/3)
CERN. Geneva
2008-01-01
I plan to discuss ways of searching for warped geometry and other extra-dimensional scenarios, with emphasis on the general lessons for search strategies. We will consider RS geometry on the brane and in the bulk, as well as possible black hole or quantum gravity signatures. If time permits, we will also consider fermion masses and/or precision Higgs measurements.
Warped Extra-Dimensional Opportunities and Signatures (2/3)
CERN. Geneva
2008-01-01
I plan to discuss ways of searching for warped geometry and other extra-dimensional scenarios, with emphasis on the general lessons for search strategies. We will consider RS geometry on the brane and in the bulk, as well as possible black hole or quantum gravity signatures. If time permits, we will also consider fermion masses and/or precision Higgs measurements.
Cosmic radiation algorithm utilizing flight time tables
International Nuclear Information System (INIS)
Katja Kojo, M.Sc.; Mika Helminen, M.Sc.; Anssi Auvinen, M.D., Ph.D.; Gerhard Leuthold, D.Sc.
2006-01-01
Cosmic radiation is considerably higher at the cruising altitudes used in aviation than at ground level. Exposure to cosmic radiation may increase cancer risk among pilots and cabin crew. The International Commission on Radiological Protection (ICRP) has recommended that air crew be classified as radiation workers. Quantification of cosmic radiation doses is necessary for assessing the potential health effects of such occupational exposure. For Finnair cabin crew (cabin attendants and stewards), flight history is not available for years prior to 1991; therefore, other sources of information on the number and type of flights have to be used. The lack of systematically recorded information is a problem for dose estimation for the personnel of many other flight companies as well. Several cosmic radiation dose estimations for cabin crew have been performed using different methods (e.g. 2-5), but they have suffered from various shortcomings. Retrospective exposure estimation is not possible with personal portable dosimeters. Methods that employ survey data for occupational dose assessment are prone to non-differential measurement error, i.e. cabin attendants do not correctly remember the number or types of past flights. Assessment procedures that use surrogate measures, i.e. the duration of employment, lack precision. The aim of the present study was to develop an assessment method for individual occupational exposure to cosmic radiation based on flight time tables. Our method does not require survey data or systematic recording of flight history, and so avoids these sources of error; it is rather quick, inexpensive, and possible to carry out for any other flight company whose past time tables exist.
Time reversibility, computer simulation, algorithms, chaos
Hoover, William Graham
2012-01-01
A small army of physicists, chemists, mathematicians, and engineers has joined forces to attack a classic problem, the "reversibility paradox", with modern tools. This book describes their work from the perspective of computer simulation, emphasizing the author's approach to the problem of understanding the compatibility, and even inevitability, of the irreversible second law of thermodynamics with an underlying time-reversible mechanics. Computer simulation has made it possible to probe reversibility from a variety of directions and "chaos theory" or "nonlinear dynamics" has supplied a useful vocabulary and a set of concepts, which allow a fuller explanation of irreversibility than that available to Boltzmann or to Green, Kubo and Onsager. Clear illustration of concepts is emphasized throughout, and reinforced with a glossary of technical terms from the specialized fields which have been combined here to focus on a common theme. The book begins with a discussion, contrasting the idealized reversibility of ba...
A real time sorting algorithm to time sort any deterministic time disordered data stream
Saini, J.; Mandal, S.; Chakrabarti, A.; Chattopadhyay, S.
2017-12-01
In new-generation high intensity high energy physics experiments, millions of free-streaming high rate data sources are to be read out. Free-streaming data with associated time-stamps can only be controlled by thresholds, as there is no trigger information available for the readout. Therefore, these readouts are prone to collecting a large amount of noise and unwanted data, and such experiments can have an output data rate several orders of magnitude higher than the useful signal data rate. It is therefore necessary to perform online processing of the data to extract useful information from the full data set. Without trigger information, pre-processing of the free-streaming data can only be done through time-based correlation among the data set. Multiple data sources have different path delays and bandwidth utilizations, and therefore the unsorted merged data requires significant computational effort to sort in real time before analysis. The present work reports a new high speed, scalable data stream sorting algorithm with its architectural design, verified through Field Programmable Gate Array (FPGA) based hardware simulation. Realistic time-based simulated data, likely to be collected in a high energy physics experiment, have been used to study the performance of the algorithm. The proposed algorithm uses parallel read-write blocks with added memory management and zero suppression features to make it efficient for high rate data streams. The algorithm is best suited for online data streams with deterministic time disorder on FPGA-like hardware.
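On general-purpose hardware, the core idea of time-sorting a stream with bounded, deterministic disorder can be sketched with a small min-heap buffer. This is an illustrative software analogue, not the paper's FPGA architecture: if every item arrives at most `max_disorder` positions away from its sorted position (as with fixed per-source path delays), a buffer of that size suffices to emit the stream fully sorted:

```python
import heapq

def time_sort(stream, max_disorder):
    """Re-emit time-stamped items in sorted order, assuming each item
    arrives at most `max_disorder` positions from its sorted position."""
    heap = []
    for item in stream:
        heapq.heappush(heap, item)
        if len(heap) > max_disorder:
            yield heapq.heappop(heap)   # safe: nothing smaller can still arrive
    while heap:                          # drain the buffer at end of stream
        yield heapq.heappop(heap)
```

Each item costs O(log k) for buffer size k, independent of stream length, which is why bounded-disorder sorting is amenable to streaming hardware with fixed memory.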
Fundamental limitations on 'warp drive' spacetimes
International Nuclear Information System (INIS)
Lobo, Francisco S N; Visser, Matt
2004-01-01
'Warp drive' spacetimes are useful as 'gedanken-experiments' that force us to confront the foundations of general relativity and, among other things, to precisely formulate the notion of 'superluminal' communication. After carefully formulating the Alcubierre and Natario warp drive spacetimes, and verifying their non-perturbative violation of the classical energy conditions, we consider a more modest question and apply linearized gravity to the weak-field warp drive, testing the energy conditions to first and second orders of the warp-bubble velocity, v. Since we take the warp-bubble velocity to be non-relativistic, v << c, we are not primarily interested in the 'superluminal' features of the warp drive. Instead we focus on a secondary feature of the warp drive that has not previously been remarked upon: the warp drive (if it could be built) would be an example of a 'reaction-less drive'. For both the Alcubierre and Natario warp drives we find that the occurrence of significant energy condition violations is not just a high-speed effect; the violations persist even at arbitrarily low speeds. A particularly interesting feature of this construction is that it is now meaningful to think of placing a finite mass spaceship at the centre of the warp bubble, and then seeing how the energy in the warp field compares with the mass-energy of the spaceship. There is no hope of doing this in Alcubierre's original version of the warp field, since by definition the point at the centre of the warp bubble moves on a geodesic and is 'massless'. That is, in Alcubierre's original formalism and in the Natario formalism the spaceship is always treated as a test particle, while in the linearized theory we can treat the spaceship as a finite mass object. For both the Alcubierre and Natario warp drives we find that even at low speeds the net (negative) energy stored in the warp fields must be a significant fraction of the mass of the spaceship.
Warped models in string theory
International Nuclear Information System (INIS)
Acharya, B.S.; Benini, F.; Valandro, R.
2006-12-01
Warped models, originating with the ideas of Randall and Sundrum, provide a fascinating extension of the standard model with interesting consequences for the LHC. We investigate in detail how string theory realises such models, with emphasis on fermion localisation and the computation of Yukawa couplings. We find, in contrast to the 5d models, that fermions can be localised anywhere in the extra dimension, and that there are new mechanisms to generate exponential hierarchies amongst the Yukawa couplings. We also suggest a way to distinguish these string theory models with data from the LHC. (author)
A distributed scheduling algorithm for heterogeneous real-time systems
Zeineldine, Osman; El-Toweissy, Mohamed; Mukkamala, Ravi
1991-01-01
Much of the previous work on load balancing and scheduling in distributed environments was concerned with homogeneous systems and homogeneous loads. Several of the results indicated that random policies are as effective as other more complex load allocation policies. The effects of heterogeneity on scheduling algorithms for hard real time systems is examined. A distributed scheduler specifically to handle heterogeneities in both nodes and node traffic is proposed. The performance of the algorithm is measured in terms of the percentage of jobs discarded. While a random task allocation is very sensitive to heterogeneities, the algorithm is shown to be robust to such non-uniformities in system components and load.
Linear Time Local Approximation Algorithm for Maximum Stable Marriage
Directory of Open Access Journals (Sweden)
Zoltán Király
2013-08-01
Full Text Available We consider a two-sided market under incomplete preference lists with ties, where the goal is to find a maximum size stable matching. The problem is APX-hard, and a 3/2-approximation was given by McDermid [1]. That algorithm has a non-linear running time and, more importantly, needs global knowledge of all preference lists. We present a very natural, economically reasonable, local, linear time algorithm with the same ratio, using some ideas of Paluch [2]. In this algorithm every person makes decisions using only their own preference list and some information asked from members of that list (as in the case of the famous algorithm of Gale and Shapley). Some consequences for the Hospitals/Residents problem are also discussed.
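The local propose-and-reject scheme referenced above goes back to Gale and Shapley. As background, here is a minimal sketch of the classic Gale-Shapley algorithm for complete strict preference lists (the baseline only, not McDermid's or the authors' 3/2-approximation for ties; all names are illustrative):

```python
def gale_shapley(men_prefs, women_prefs):
    """Classic Gale-Shapley stable matching for complete, strict lists.
    men_prefs/women_prefs: dict mapping a name to a preference list,
    most preferred first.  Returns a stable man -> woman matching."""
    # Precompute each woman's ranking table for O(1) comparisons
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    free = list(men_prefs)              # men not yet engaged
    next_prop = {m: 0 for m in men_prefs}
    engaged = {}                        # woman -> man
    while free:
        m = free.pop()
        w = men_prefs[m][next_prop[m]]  # m's best not-yet-tried woman
        next_prop[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])     # w trades up; old partner is free again
            engaged[w] = m
        else:
            free.append(m)              # w rejects m
    return {m: w for w, m in engaged.items()}
```

Each person acts only on their own list plus the answers to proposals, which is the locality property the abstract's algorithm preserves while handling ties and incomplete lists.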
ALGORITHMIC CONSTRUCTION SCHEDULES IN CONDITIONS OF TIMING CONSTRAINTS
Directory of Open Access Journals (Sweden)
Alexey S. Dobrynin
2014-01-01
Full Text Available Tasks of time-schedule construction (JSSP) in various fields of human activity have important theoretical and practical significance. The main feature of these tasks is timing constraints, which describe the allowed planning time periods and the periods of downtime. This article describes implementation variants of the work scheduling algorithm under timing constraints for the tasks of industrial time-schedule construction and service activities.
International Nuclear Information System (INIS)
Anninos, Dionysios; Li Wei; Padi, Megha; Song Wei; Strominger, Andrew
2009-01-01
Three-dimensional topologically massive gravity (TMG) with a negative cosmological constant -l^-2 and positive Newton constant G admits an AdS_3 vacuum solution for any value of the graviton mass μ. These are all known to be perturbatively unstable except at the recently explored chiral point μl = 1. However, we show herein that for every value of μl ≠ 3 there are two other (potentially stable) vacuum solutions given by SL(2,R) x U(1)-invariant warped AdS_3 geometries, with a timelike or spacelike U(1) isometry. Critical behavior occurs at μl = 3, where the warping transitions from a stretching to a squashing, and there is a pair of warped solutions with a null U(1) isometry. For μl > 3, there are known warped black hole solutions which are asymptotic to warped AdS_3. We show that these black holes are discrete quotients of warped AdS_3, just as BTZ black holes are discrete quotients of ordinary AdS_3. Moreover, new solutions of this type, relevant to any theory with warped AdS_3 solutions, are exhibited. Finally, we note that the black hole thermodynamics is consistent with the hypothesis that, for μl > 3, the warped AdS_3 ground state of TMG is holographically dual to a 2D boundary CFT with central charges c_R and c_L.
The geometry of warped product singularities
Stoica, Ovidiu Cristinel
In this article, the degenerate warped products of singular semi-Riemannian manifolds are studied. They were used recently by the author to handle singularities occurring in General Relativity, in black holes and at the big-bang. One main result presented here is that a degenerate warped product of semi-regular semi-Riemannian manifolds with the warping function satisfying a certain condition is a semi-regular semi-Riemannian manifold. The connection and the Riemann curvature of the warped product are expressed in terms of those of the factor manifolds. Examples of singular semi-Riemannian manifolds which are semi-regular are constructed as warped products. Applications include cosmological models and black holes solutions with semi-regular singularities. Such singularities are compatible with a certain reformulation of the Einstein equation, which in addition holds at semi-regular singularities too.
FPGA Implementation of the Coupled Filtering Method and the Affine Warping Method.
Zhang, Chen; Liang, Tianzhu; Mok, Philip K T; Yu, Weichuan
2017-07-01
In ultrasound image analysis, the speckle tracking methods are widely applied to study the elasticity of body tissue. However, "feature-motion decorrelation" still remains as a challenge for the speckle tracking methods. Recently, a coupled filtering method and an affine warping method were proposed to accurately estimate strain values, when the tissue deformation is large. The major drawback of these methods is the high computational complexity. Even the graphics processing unit (GPU)-based program requires a long time to finish the analysis. In this paper, we propose field-programmable gate array (FPGA)-based implementations of both methods for further acceleration. The capability of FPGAs on handling different image processing components in these methods is discussed. A fast and memory-saving image warping approach is proposed. The algorithms are reformulated to build a highly efficient pipeline on FPGA. The final implementations on a Xilinx Virtex-7 FPGA are at least 13 times faster than the GPU implementation on the NVIDIA graphic card (GeForce GTX 580).
Magnetotelluric inversion via reverse time migration algorithm of seismic data
International Nuclear Information System (INIS)
Ha, Taeyoung; Shin, Changsoo
2007-01-01
We propose a new algorithm for two-dimensional magnetotelluric (MT) inversion. Our algorithm is an MT inversion based on the steepest descent method, borrowed from the backpropagation technique of seismic inversion or reverse time migration introduced in the mid-1980s by Lailly and Tarantola. The steepest descent direction can be calculated efficiently by using the symmetry of the numerical Green's function derived from a mixed finite element method proposed by Nedelec for Maxwell's equations, without calculating the Jacobian matrix explicitly. We construct three different objective functions by taking the logarithm of the complex apparent resistivity, as introduced in the recent waveform inversion algorithm by Shin and Min. These objective functions can be naturally separated into amplitude inversion, phase inversion and simultaneous inversion. We demonstrate our algorithm by showing three inversion results for synthetic data.
A Harmony Search Algorithm approach for optimizing traffic signal timings
Directory of Open Access Journals (Sweden)
Mauro Dell'Orco
2013-07-01
In this study, a bi-level formulation is presented for solving the Equilibrium Network Design Problem (ENDP). The optimisation of the signal timing has been carried out at the upper level using the Harmony Search Algorithm (HSA), whilst the traffic assignment has been carried out through the Path Flow Estimator (PFE) at the lower level. The results of HSA have been first compared with those obtained using the Genetic Algorithm and Hill Climbing on a two-junction network for a fixed set of link flows. Secondly, the HSA with PFE has been applied to a medium-sized network to show the applicability of the proposed algorithm in solving the ENDP. Additionally, in order to test the sensitivity to perceived travel time error, we have used the HSA with PFE with various levels of perceived travel time. The results showed that the proposed method is quite simple and efficient in solving the ENDP.
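The upper-level optimiser described in the abstract above can be sketched generically. The following is a minimal, illustrative Harmony Search in Python; the cost function, bounds, and parameter values are assumptions standing in for the paper's actual ENDP formulation, with `hmcr` and `par` being the usual memory-consideration and pitch-adjustment rates.

```python
import random

def harmony_search(cost, lower, upper, hms=10, hmcr=0.9, par=0.3,
                   bandwidth=0.1, iterations=2000, seed=0):
    """Minimize `cost` over the box [lower, upper]^d with a basic
    Harmony Search: hms is the harmony memory size, hmcr the harmony
    memory considering rate, par the pitch adjusting rate."""
    rng = random.Random(seed)
    dim = len(lower)
    # Initialise the harmony memory with random solutions.
    memory = [[rng.uniform(lower[i], upper[i]) for i in range(dim)]
              for _ in range(hms)]
    scores = [cost(h) for h in memory]
    for _ in range(iterations):
        new = []
        for i in range(dim):
            if rng.random() < hmcr:
                # Memory consideration: reuse a stored component...
                x = rng.choice(memory)[i]
                if rng.random() < par:
                    # ...optionally pitch-adjusted within a bandwidth.
                    x += rng.uniform(-bandwidth, bandwidth)
            else:
                # Random selection from the allowed range.
                x = rng.uniform(lower[i], upper[i])
            new.append(min(max(x, lower[i]), upper[i]))
        new_score = cost(new)
        # Replace the worst harmony if the new one improves on it.
        worst = max(range(hms), key=lambda k: scores[k])
        if new_score < scores[worst]:
            memory[worst], scores[worst] = new, new_score
    best = min(range(hms), key=lambda k: scores[k])
    return memory[best], scores[best]
```

For example, minimizing the sphere function `sum(v*v for v in x)` over [-5, 5]² drives the best score close to zero within a few thousand improvisations.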
Meyer, C R; Boes, J L; Kim, B; Bland, P H; Zasadny, K R; Kison, P V; Koral, K; Frey, K A; Wahl, R L
1997-04-01
This paper applies and evaluates an automatic mutual information-based registration algorithm across a broad spectrum of multimodal volume data sets. The algorithm requires little or no pre-processing, minimal user input and easily implements either affine (i.e., linear) or thin-plate spline (TPS) warped registrations. We have evaluated the algorithm in phantom studies as well as in selected cases where few other algorithms could perform as well, if at all, to demonstrate the value of this new method. Pairs of multimodal gray-scale volume data sets were registered by iteratively changing registration parameters to maximize mutual information. Quantitative registration errors were assessed in registrations of a thorax phantom using PET/CT and in the National Library of Medicine's Visible Male using MRI T2-/T1-weighted acquisitions. Registrations of diverse clinical data sets were demonstrated including rotate-translate mapping of PET/MRI brain scans with significant missing data, full affine mapping of thoracic PET/CT and rotate-translate mapping of abdominal SPECT/CT. A five-point TPS warped registration of thoracic PET/CT is also demonstrated. The registration algorithm converged in times ranging between 3.5 and 31 min for affine clinical registrations and 57 min for TPS warping. Mean error vector lengths for rotate-translate registrations were measured to be subvoxel in phantoms. More importantly the rotate-translate algorithm performs well even with missing data. The demonstrated clinical fusions are qualitatively excellent at all levels. We conclude that such automatic, rapid, robust algorithms significantly increase the likelihood that multimodality registrations will be routinely used to aid clinical diagnoses and post-therapeutic assessment in the near future.
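The registration criterion above, maximizing mutual information between two gray-scale volumes, can be illustrated with a joint-histogram estimator. This is a minimal sketch, not the authors' implementation; the bin count and the use of nats are arbitrary choices here.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (in nats) between two equally sized gray-scale
    images/volumes, estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability
    px = pxy.sum(axis=1, keepdims=True)       # marginal of a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of b
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

An optimizer would then perturb the registration parameters (rotation, translation, spline coefficients) and keep changes that increase this quantity; identical images give a high value, while statistically independent ones give a value near zero.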
Computing return times or return periods with rare event algorithms
Lestang, Thibault; Ragone, Francesco; Bréhier, Charles-Edouard; Herbert, Corentin; Bouchet, Freddy
2018-04-01
The average time between two occurrences of the same event, referred to as its return time (or return period), is a useful statistical concept for practical applications. For instance, insurers or public agencies may be interested in the return time of a 10 m flood of the Seine river in Paris. However, due to their scarcity, reliably estimating return times for rare events is very difficult using either observational data or direct numerical simulations. For rare events, an estimator for return times can be built from the extrema of the observable on trajectory blocks. Here, we show that this estimator can be improved to remain accurate for return times of the order of the block size. More importantly, we show that this approach can be generalised to estimate return times from numerical algorithms specifically designed to sample rare events. So far those algorithms often compute probabilities, rather than return times. The approach we propose provides a computationally extremely efficient way to estimate numerically the return times of rare events for a dynamical system, gaining several orders of magnitude in computational cost. We illustrate the method on two kinds of observables, instantaneous and time-averaged, using two different rare event algorithms, for a simple stochastic process, the Ornstein–Uhlenbeck process. As an example of realistic applications to complex systems, we finally discuss extreme values of the drag on an object in a turbulent flow.
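The block-maximum estimator discussed above can be sketched as follows. A sketch under stated assumptions: blocks are treated as roughly independent, and the form r(a) = -Δ/ln(1-p) is used, which stays accurate when the return time is of the order of the block duration Δ; names are illustrative.

```python
import math

def return_time(signal, dt, block_len, threshold):
    """Estimate the return time of `signal` exceeding `threshold`.

    The trajectory is cut into blocks of duration block_len*dt; p is the
    fraction of blocks whose maximum exceeds the threshold, and the
    return time is estimated as r = -Delta / ln(1 - p)."""
    blocks = [signal[i:i + block_len]
              for i in range(0, len(signal) - block_len + 1, block_len)]
    p = sum(max(b) > threshold for b in blocks) / len(blocks)
    delta = block_len * dt
    if p == 0:
        return math.inf          # event never observed
    if p == 1:
        return delta             # event in every block: r <= Delta
    return -delta / math.log(1 - p)
```

For p well below 1 this reduces to the naive Δ/p, while for p approaching 1 it corrects for multiple exceedances per block.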
HMC algorithm with multiple time scale integration and mass preconditioning
Urbach, C.; Jansen, K.; Shindler, A.; Wenger, U.
2006-01-01
We present a variant of the HMC algorithm with mass preconditioning (Hasenbusch acceleration) and multiple time scale integration. We have tested this variant for standard Wilson fermions at β=5.6 and at pion masses ranging from 380 to 680 MeV. We show that in this situation its performance is comparable to the recently proposed HMC variant with domain decomposition as preconditioner. We give an update of the "Berlin Wall" figure, comparing the performance of our variant of the HMC algorithm to other published performance data. Advantages of the HMC algorithm with mass preconditioning and multiple time scale integration are that it is straightforward to implement and can be used in combination with a wide variety of lattice Dirac operators.
Detecting structural breaks in time series via genetic algorithms
DEFF Research Database (Denmark)
Doerr, Benjamin; Fischer, Paul; Hilbert, Astrid
2016-01-01
...of the time series under consideration is available. Therefore, a black-box optimization approach is our method of choice for detecting structural breaks. We describe a genetic algorithm framework which easily adapts to a large number of statistical settings. To evaluate the usefulness of different crossover and mutation operations for this problem, we conduct extensive experiments to determine good choices for the parameters and operators of the genetic algorithm. One surprising observation is that use of uniform and one-point crossover together gave significantly better results than using either crossover operator alone. Moreover, we present a specific fitness function which exploits the sparse structure of the break points and which can be evaluated particularly efficiently. The experiments on artificial and real-world time series show that the resulting algorithm detects break points with high precision...
Seesaw mechanism in warped geometry
International Nuclear Information System (INIS)
Huber, S.J.; Shafi, Q.
2003-09-01
We show how the seesaw mechanism for neutrino masses can be realized within a five-dimensional (5D) warped geometry framework. Intermediate scale standard model (SM) singlet neutrino masses, needed to explain the atmospheric and solar neutrino oscillations, are shown to be proportional to M_Pl exp((2c-1)πkR), where c denotes the coefficient of the 5D Dirac mass term for the singlet neutrino, which also has a Planck scale Majorana mass localized on the Planck-brane, and kR∼11 in order to resolve the gauge hierarchy problem. The case with a bulk 5D Majorana mass term for the singlet neutrino is briefly discussed. (orig.)
Seesaw mechanism in warped geometry
International Nuclear Information System (INIS)
Huber, Stephan J.; Shafi, Qaisar
2004-01-01
We show how the seesaw mechanism for neutrino masses can be realized within a five-dimensional (5D) warped geometry framework. Intermediate scale standard model (SM) singlet neutrino masses, needed to explain the atmospheric and solar neutrino oscillations, are shown to be proportional to M_Pl exp((2c-1)πkR), where c denotes the coefficient of the 5D Dirac mass term for the singlet neutrino, which also has a Planck scale Majorana mass localized on the Planck-brane, and kR∼11 in order to resolve the gauge hierarchy problem. The case with a bulk 5D Majorana mass term for the singlet neutrino is briefly discussed.
Evolutionary algorithms for the Vehicle Routing Problem with Time Windows
Bräysy, Olli; Dullaert, Wout; Gendreau, Michel
2004-01-01
This paper surveys the research on evolutionary algorithms for the Vehicle Routing Problem with Time Windows (VRPTW). The VRPTW can be described as the problem of designing least cost routes from a single depot to a set of geographically scattered points. The routes must be designed in such a way
An Expectation Maximization Algorithm to Model Failure Times by Continuous-Time Markov Chains
Directory of Open Access Journals (Sweden)
Qihong Duan
2010-01-01
In many applications, the failure rate function may present a bathtub-shaped curve. In this paper, an expectation maximization algorithm is proposed to construct a suitable continuous-time Markov chain which models the failure time data by the first time of reaching the absorbing state. Assume that a system is described by methods of supplementary variables, the device of stage, and so on. Given a data set, the maximum likelihood estimators of the initial distribution and the infinitesimal transition rates of the Markov chain can be obtained by our novel algorithm. Suppose that there are m transient states in the system and that there are n failure time data. The devised algorithm only needs to compute the exponential of m×m upper triangular matrices O(nm²) times in each iteration. Finally, the algorithm is applied to two real data sets, which indicates the practicality and efficiency of our algorithm.
Real coded genetic algorithm for fuzzy time series prediction
Jain, Shilpa; Bisht, Dinesh C. S.; Singh, Phool; Mathpal, Prakash C.
2017-10-01
Genetic Algorithms (GA) form a subset of evolutionary computing, a rapidly growing area of Artificial Intelligence (A.I.). Some variants of GA are binary GA, real GA, messy GA, micro GA, sawtooth GA and differential evolution GA. This research article presents a real coded GA for predicting enrollments of the University of Alabama, whose enrollment data form a fuzzy time series. Here, fuzzy logic is used to predict enrollments and a genetic algorithm optimizes the fuzzy intervals. Results are compared to the works of other eminent authors and found satisfactory, indicating that real coded GAs are fast and accurate.
Algorithmic Approach to Abstracting Linear Systems by Timed Automata
DEFF Research Database (Denmark)
Sloth, Christoffer; Wisniewski, Rafael
2011-01-01
This paper proposes an LMI-based algorithm for abstracting dynamical systems by timed automata, which enables automatic formal verification of linear systems. The proposed abstraction is based on partitioning the state space of the system using positive invariant sets, generated by Lyapunov functions. This partitioning ensures that the vector field of the dynamical system is transversal to all facets of the cells, which induces some desirable properties of the abstraction. The algorithm is based on identifying intersections of level sets of quadratic Lyapunov functions, and determining...
Reducing the time requirement of k-means algorithm.
Osamor, Victor Chukwudi; Adebiyi, Ezekiel Femi; Oyelade, Jelilli Olarenwaju; Doumbia, Seydou
2012-01-01
Traditional k-means and most k-means variants are still computationally expensive for large datasets, such as microarray data, which have large dimension size d. In k-means clustering, we are given a set of n data points in d-dimensional space R^d and an integer k. The problem is to determine a set of k points in R^d, called centers, so as to minimize the mean squared distance from each data point to its nearest center. In this work, we develop a novel k-means algorithm, which is simple but more efficient than the traditional k-means and the recent enhanced k-means. Our new algorithm is based on the recently established relationship between principal component analysis and k-means clustering. We provide a correctness proof for this algorithm. Results obtained from testing the algorithm on three biological and six non-biological datasets (three of these are real, while the other three are simulated) also indicate that our algorithm is empirically faster than other known k-means algorithms. We assessed the quality of our algorithm's clusters against clusters of known structure using the Hubert-Arabie Adjusted Rand index (ARI_HA). We found that when k is close to d, the quality is good (ARI_HA > 0.8), and when k is not close to d, the quality of our new k-means algorithm is excellent (ARI_HA > 0.9). In this paper, the emphasis is on reducing the time requirement of the k-means algorithm and its application to microarray data, driven by the desire to create a tool for clustering and malaria research. However, the new clustering algorithm can be used for other clustering needs as long as an appropriate measure of distance between the centroids and the members is used. This has been demonstrated in this work on six non-biological datasets.
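For reference, the baseline that the abstract's PCA-based variant improves upon is Lloyd's classical k-means iteration. A minimal sketch of that baseline (not the authors' accelerated algorithm), with points represented as tuples:

```python
import random

def kmeans(points, k, iterations=100, seed=0):
    """Lloyd's algorithm: alternate between assigning each point to its
    nearest center and moving each center to its cluster's mean."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # random distinct points as seeds
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        new_centers = []
        for j, cl in enumerate(clusters):
            if cl:
                d = len(cl[0])
                new_centers.append(tuple(sum(p[i] for p in cl) / len(cl)
                                         for i in range(d)))
            else:
                new_centers.append(centers[j])  # keep an empty cluster's center
        if new_centers == centers:              # converged
            break
        centers = new_centers
    return centers
```

On two well-separated point clouds this recovers the two cluster means; the quadratic per-iteration cost in n and k is exactly what motivates the faster variants surveyed in the abstract.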
Method for adjusting warp measurements to a different board dimension
William T. Simpson; John R. Shelly
2000-01-01
Warp in lumber is a common problem that occurs while lumber is being dried. In research or other testing programs, it is sometimes necessary to compare warp of different species or warp caused by different process variables. If lumber dimensions are not the same, then direct comparisons are not possible, and adjusting warp to a common dimension would be desirable so...
Conformal Vector Fields on Doubly Warped Product Manifolds and Applications
Directory of Open Access Journals (Sweden)
H. K. El-Sayied
2016-01-01
This article aims to study and explore conformal vector fields on doubly warped product manifolds as well as on doubly warped spacetime. We then derive sufficient conditions for matter and Ricci collineations on doubly warped product manifolds. Special attention is paid to concurrent vector fields. Finally, Ricci solitons on doubly warped product spacetime admitting conformal vector fields are considered.
Time-advance algorithms based on Hamilton's principle
International Nuclear Information System (INIS)
Lewis, H.R.; Kostelec, P.J.
1993-01-01
Time-advance algorithms based on Hamilton's variational principle are being developed for application to problems in plasma physics and other areas. Hamilton's principle was applied previously to derive a system of ordinary differential equations in time whose solution provides an approximation to the evolution of a plasma described by the Vlasov-Maxwell equations. However, the variational principle was not used to obtain an algorithm for solving the ordinary differential equations numerically. The present research addresses the numerical solution of systems of ordinary differential equations via Hamilton's principle. The basic idea is first to choose a class of functions for approximating the solution of the ordinary differential equations over a specific time interval. Then the parameters in the approximating function are determined by applying Hamilton's principle exactly within the class of approximating functions. For example, if an approximate solution is desired between time t and time t + Δt, the class of approximating functions could be polynomials in time up to some degree. The issue of how to choose time-advance algorithms is very important for achieving efficient, physically meaningful computer simulations. The objective is to reliably simulate those characteristics of an evolving system that are scientifically most relevant. Preliminary numerical results are presented, including comparisons with other computational methods.
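A concrete instance of the idea above, offered as an illustration rather than the authors' plasma-physics application: choosing a rectangle-rule discrete Lagrangian for the harmonic oscillator q'' = -ω²q and making the discrete action stationary yields the update q_{k+1} = 2q_k - q_{k-1} - h²ω²q_k, i.e. the Störmer-Verlet method, a classic variational integrator.

```python
import math

def verlet_oscillator(q0, v0, omega, h, steps):
    """Integrate q'' = -omega^2 q with the variational (Stormer-Verlet)
    scheme obtained from the discrete Euler-Lagrange equations:
        q_{k+1} = 2 q_k - q_{k-1} - h^2 omega^2 q_k
    Returns the trajectory [q_0, q_1, ..., q_steps]."""
    q_prev = q0
    # Second-order Taylor start-up step for q_1.
    q = q0 + h * v0 - 0.5 * h * h * omega ** 2 * q0
    traj = [q_prev, q]
    for _ in range(steps - 1):
        q_next = 2 * q - q_prev - h * h * omega ** 2 * q
        q_prev, q = q, q_next
        traj.append(q)
    return traj
```

Because the scheme descends from a variational principle it is symplectic, so the oscillation energy stays bounded over long runs instead of drifting as it does for naive explicit schemes.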
Feature Selection Criteria for Real Time EKF-SLAM Algorithm
Directory of Open Access Journals (Sweden)
Fernando Auat Cheein
2010-02-01
This paper presents a selection procedure for environment features for the correction stage of a SLAM (Simultaneous Localization and Mapping) algorithm based on an Extended Kalman Filter (EKF). This approach decreases the computational time of the correction stage, which allows for real- and constant-time implementations of the SLAM. The selection procedure consists of choosing the features to which the SLAM system state covariance is most sensitive. The entire system is implemented on a mobile robot equipped with a laser range sensor. The features extracted from the environment correspond to lines and corners. Experimental results of the real-time SLAM algorithm and an analysis of the processing time consumed by the SLAM with the proposed feature selection procedure are shown. A comparison between the proposed feature selection approach and the classical sequential EKF-SLAM, along with an entropy-based feature selection approach, is also performed.
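One generic way to realize such a sensitivity-based selection, offered as a sketch and not the authors' exact criterion, is to rank candidate features by the covariance-trace reduction an EKF update with that feature would produce: for measurement Jacobian H and noise R, the update shrinks trace(P) by trace(KHP) with K = PHᵀ(HPHᵀ+R)⁻¹.

```python
import numpy as np

def select_features(P, H_list, R_list, k):
    """Return indices of the k candidate features whose (hypothetical)
    EKF update would reduce trace(P) the most.

    P      : state covariance (n x n)
    H_list : measurement Jacobians, one (m x n) matrix per feature
    R_list : measurement noise covariances, one (m x m) matrix per feature
    """
    gains = []
    for H, R in zip(H_list, R_list):
        S = H @ P @ H.T + R              # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
        gains.append(np.trace(K @ H @ P))  # trace(P) - trace(P_updated)
    return sorted(range(len(gains)), key=lambda i: -gains[i])[:k]
```

Intuitively, a feature observing a poorly known part of the state (large entries of P along its Jacobian) wins over one observing an already well-estimated part, which matches the "most sensitive" criterion described in the abstract.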
Distributed Scheduling in Time Dependent Environments: Algorithms and Analysis
Shmuel, Ori; Cohen, Asaf; Gurewitz, Omer
2017-01-01
Consider the problem of a multiple access channel in a time dependent environment with a large number of users. In such a system, mostly due to practical constraints (e.g., decoding complexity), not all users can be scheduled together, and usually only one user may transmit at any given time. Assuming a distributed, opportunistic scheduling algorithm, we analyse the system's properties, such as delay, QoS and capacity scaling laws. Specifically, we start by analyzing the performance while...
Transformation Algorithm of Dielectric Response in Time-Frequency Domain
Directory of Open Access Journals (Sweden)
Ji Liu
2014-01-01
A transformation algorithm of dielectric response from the time domain to the frequency domain is presented. In order to shorten the measuring time of low- or ultralow-frequency dielectric response characteristics, the transformation algorithm is used in this paper to transform the time-domain relaxation current into a frequency-domain current for calculating the low-frequency dielectric dissipation factor. In addition, comparing the calculation results with actual test data shows that the two coincide over a wide range of low frequencies. Meanwhile, the time-domain test data of depolarization currents in dry and moist pressboards are converted into frequency-domain results on the basis of the transformation. The frequency-domain curves of complex capacitance and dielectric dissipation factor in the low-frequency range are obtained. Test results of polarization and depolarization current (PDC) in pressboards are also given at different voltages and polarization times. The experimental results demonstrate that polarization and depolarization currents are affected significantly by the moisture content of the test pressboards, and that the transformation algorithm is effective down to the ultralow frequency of 10⁻³ Hz. Data analysis and interpretation of the test results conclude that time-frequency domain dielectric response analysis can be used for assessing the insulation system in power transformers.
A decentralized scheduling algorithm for time synchronized channel hopping
Directory of Open Access Journals (Sweden)
Andrew Tinka
2011-09-01
Time Synchronized Channel Hopping (TSCH) is an existing Medium Access Control scheme which enables robust communication through channel hopping and high data rates through synchronization. It is based on a time-slotted architecture, and its correct functioning depends on a schedule which is typically computed by a central node. This paper presents, to our knowledge, the first scheduling algorithm for TSCH networks which is both distributed and able to cope with mobile nodes. Two variations on scheduling algorithms are presented. Aloha-based scheduling allocates one channel for broadcasting advertisements for new neighbors. Reservation-based scheduling augments Aloha-based scheduling with a dedicated timeslot for targeted advertisements based on gossip information. A mobile ad hoc motorized sensor network with frequent connectivity changes is studied, and the performance of the two proposed algorithms is assessed. This performance analysis uses both simulation results and the results of a field deployment of floating wireless sensors in an estuarial canal environment. Reservation-based scheduling performs significantly better than Aloha-based scheduling, suggesting that the improved network reactivity is worth the increased algorithmic complexity and resource consumption.
Efficient Algorithms for Segmentation of Item-Set Time Series
Chundi, Parvathi; Rosenkrantz, Daniel J.
We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
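The dynamic-programming scheme outlined above can be sketched generically. Here `seg_cost` plays the role of the segment difference; this naive O(pn²) version (times the cost of evaluating `seg_cost`) assumes segment costs are cheap to query, whereas the paper's contribution includes computing those differences efficiently for the various measure functions.

```python
def optimal_segmentation(n, p, seg_cost):
    """Optimal segmentation of time points 0..n-1 into p contiguous
    segments.  seg_cost(i, j) is the cost (e.g. the segment difference)
    of a segment covering points i..j inclusive.
    Returns (total cost, list of (start, end) segments)."""
    INF = float('inf')
    # best[k][j]: minimum cost of splitting points 0..j into k+1 segments.
    best = [[INF] * n for _ in range(p)]
    back = [[-1] * n for _ in range(p)]
    for j in range(n):
        best[0][j] = seg_cost(0, j)
    for k in range(1, p):
        for j in range(k, n):
            for s in range(k - 1, j):          # last segment is s+1..j
                c = best[k - 1][s] + seg_cost(s + 1, j)
                if c < best[k][j]:
                    best[k][j], back[k][j] = c, s
    # Recover the segment boundaries by walking the back-pointers.
    segs, j = [], n - 1
    for k in range(p - 1, 0, -1):
        s = back[k][j]
        segs.append((s + 1, j))
        j = s
    segs.append((0, j))
    return best[p - 1][n - 1], segs[::-1]
```

With a sum-of-squared-deviations cost on the series [1,1,1,5,5,5] and p = 2, the optimum is the zero-cost split [(0,2), (3,5)]; an item-set measure function slots in the same way.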
Bouncing cosmology from warped extra dimensional scenario
Energy Technology Data Exchange (ETDEWEB)
Das, Ashmita; Maity, Debaprasad [Indian Institute of Technology, Department of Physics, Guwahati, Assam (India); Paul, Tanmoy; SenGupta, Soumitra [Indian Association for the Cultivation of Science, Department of Theoretical Physics, Kolkata (India)
2017-12-15
From the perspective of the four-dimensional effective theory on a two-brane warped geometry model, we examine the possibility of "bouncing phenomena" on our visible brane. Our results reveal that the presence of a warped extra dimension leads to a non-singular bounce in the brane scale factor and hence can remove the "big-bang singularity". We also examine the possible parametric regions for which this bouncing is possible. (orig.)
Bouncing cosmology from warped extra dimensional scenario
Das, Ashmita; Maity, Debaprasad; Paul, Tanmoy; SenGupta, Soumitra
2017-12-01
From the perspective of the four-dimensional effective theory on a two-brane warped geometry model, we examine the possibility of "bouncing phenomena" on our visible brane. Our results reveal that the presence of a warped extra dimension leads to a non-singular bounce in the brane scale factor and hence can remove the "big-bang singularity". We also examine the possible parametric regions for which this bouncing is possible.
Namaste (counterbalancing) technique: Overcoming warping in costal cartilage
Kapil S Agrawal; Manoj Bachhav; Raghav Shrotriya
2015-01-01
Background: Indian noses are broader and lack projection as compared to other populations, and hence very often need augmentation, often by a large volume. Costal cartilage remains the material of choice in large-volume augmentations and in the repair of complex primary and secondary nasal deformities. One major disadvantage of costal cartilage grafts (CCG), which offsets all other advantages, is the tendency to warp and become distorted over a period of time. We propose a simple technique to overcome th...
Efficient quantum algorithm for computing n-time correlation functions.
Pedernales, J S; Di Candia, R; Egusquiza, I L; Casanova, J; Solano, E
2014-07-11
We propose a method for computing n-time correlation functions of arbitrary spinorial, fermionic, and bosonic operators, consisting of an efficient quantum algorithm that encodes these correlations in an initially added ancillary qubit for probe and control tasks. For spinorial and fermionic systems, the reconstruction of arbitrary n-time correlation functions requires the measurement of two ancilla observables, while for bosonic variables time derivatives of the same observables are needed. Finally, we provide examples applicable to different quantum platforms in the frame of the linear response theory.
A Replacement Algorithm for Capital Items that Depreciate with Time
International Nuclear Information System (INIS)
Wweru, R.M
1999-01-01
The replacement algorithm is centred on the prediction of the replacement cost and the determination of the most economical replacement policy. For items whose efficiency depreciates over their life spans, e.g. machine tools, vehicles, etc., the prediction of costs involves those factors which contribute to increased operating cost: forced idle time, increased scrap, increased repair cost, etc. The alternative to the increased cost of operating aging equipment is the cost of replacing the old equipment with a new one. There is some age at which replacing the old equipment is more economical than continuing with it at the increased operating cost (Johnson R D, Siskin B R, 1989). This algorithm uses certain cost relationships that are vital in the minimization of total costs and is focused on capital equipment that depreciates with time, as opposed to items with a probabilistic life span.
A RECURSIVE ALGORITHM SUITABLE FOR REAL-TIME MEASUREMENT
Directory of Open Access Journals (Sweden)
Giovanni Bucci
1995-12-01
This paper deals with a recursive algorithm suitable for real-time measurement applications, based on an indirect technique, useful in those applications where the required quantities cannot be measured in a straightforward way. To cope with time constraints, a parallel formulation suitable for implementation on multiprocessor systems is presented. The adopted concurrent implementation is based on factorization techniques. Some experimental results related to the application of the system for carrying out measurements on synchronous motors are included.
FPGA implementation of image dehazing algorithm for real time applications
Kumar, Rahul; Kaushik, Brajesh Kumar; Balasubramanian, R.
2017-09-01
Weather degradation such as haze, fog, mist, etc. severely reduces the effective range of visual surveillance. This degradation is a spatially varying phenomenon, which makes the problem nontrivial. Dehazing is an essential preprocessing stage in applications such as long-range imaging, border security, intelligent transportation systems, etc. However, these applications require low latency of the preprocessing block. In this work, the single-image dark channel prior algorithm is modified and implemented for fast processing with comparable visual quality of the restored image/video. Although the conventional single-image dark channel prior algorithm is computationally expensive, it yields impressive results. Moreover, a two-stage image dehazing architecture is introduced, wherein dark channel and airlight are estimated in the first stage, and the transmission map and intensity restoration are computed in the next stages. The algorithm is implemented using Xilinx Vivado software and validated using a Xilinx zc702 development board, which contains an Artix-7 equivalent Field Programmable Gate Array (FPGA) and an ARM Cortex-A9 dual-core processor. Additionally, a high-definition multimedia interface (HDMI) has been incorporated for video feed and display purposes. The results show that the dehazing algorithm attains 29 frames per second for an image resolution of 1920×1080, which is suitable for real-time applications. The design utilizes 9 18K_BRAM, 97 DSP_48, 6508 FFs and 8159 LUTs.
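The core of the dark channel prior stage mentioned above is simple to state: the dark channel is a patch-wise minimum over the colour channels, and the transmission estimate follows from it. A minimal software sketch of that standard computation (the FPGA pipeline in the paper restructures it heavily; ω = 0.95 and the 15×15 patch are the conventional defaults, assumed here):

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over RGB, then min-filtered over a patch window."""
    mins = img.min(axis=2)                    # channel-wise minimum
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    h, w = mins.shape
    out = np.empty_like(mins)
    for y in range(h):                        # naive sliding-window min
        for x in range(w):
            out[y, x] = padded[y:y + patch, x:x + patch].min()
    return out

def estimate_transmission(img, airlight, omega=0.95, patch=15):
    """Standard DCP transmission estimate: t = 1 - omega*dark(I/A)."""
    return 1.0 - omega * dark_channel(img / airlight, patch)
```

A haze-free region (dark channel near zero) gives t ≈ 1, while a uniformly bright, haze-like region gives t near 1-ω; the restored radiance is then J = (I - A)/max(t, t0) + A, clamping t to avoid division blow-up.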
Continuous-time quantum algorithms for unstructured problems
International Nuclear Information System (INIS)
Hen, Itay
2014-01-01
We consider a family of unstructured optimization problems, for which we propose a method for constructing analogue, continuous-time (not necessarily adiabatic) quantum algorithms that are faster than their classical counterparts. In this family of problems, which we refer to as ‘scrambled input’ problems, one has to find a minimum-cost configuration of a given integer-valued n-bit black-box function whose input values have been scrambled in some unknown way. Special cases within this set of problems are Grover’s search problem of finding a marked item in an unstructured database, certain random energy models, and the functions of the Deutsch–Jozsa problem. We consider a couple of examples in detail. In the first, we provide an O(1) deterministic analogue quantum algorithm to solve the seminal problem of Deutsch and Jozsa, in which one has to determine whether an n-bit Boolean function is constant (gives 0 on all inputs or 1 on all inputs) or balanced (returns 0 on half the input states and 1 on the other half). We also study one variant of the random energy model, and show that, as one might expect, its minimum energy configuration can be found quadratically faster with a quantum adiabatic algorithm than with classical algorithms. (paper)
Parallel pipeline algorithm of real time star map preprocessing
Wang, Hai-yong; Qin, Tian-mu; Liu, Jia-qi; Li, Zhi-feng; Li, Jian-hua
2016-03-01
To improve the preprocessing speed of star maps and reduce the resource consumption of the embedded system of a star tracker, a parallel pipeline real-time preprocessing algorithm is presented. Two characteristics, the mean and the noise standard deviation of the background gray level of a star map, are first obtained dynamically, with the influence of the star images themselves on the background statistics removed in advance. A criterion for whether subsequent noise filtering is needed is established, and the extraction threshold is assigned according to the level of background noise, so that the centroiding accuracy is guaranteed. In the processing algorithm, as few as two lines of pixel data are buffered, and only 100 shift registers are used to record connected-domain labels, which solves the problems of resource waste and connected-domain overflow. The simulation results show that the necessary data of the selected bright stars can be accessed with a delay as short as 10 µs after the pipeline processing of a 496×496 star map at 50 Mb/s is finished, and the needed memory and register resources total less than 80 Kb. To verify the accuracy of the proposed algorithm, different levels of background noise are added to the processed ideal star map; the statistical centroiding error is smaller than 1/23 pixel when the signal-to-noise ratio is greater than 1. The parallel pipeline algorithm for real-time star map preprocessing helps to increase the data output speed and the anti-dynamic performance of the star tracker.
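The thresholding-plus-centroiding step that the pipeline above accelerates can be sketched as follows. A simplification for illustration: the background statistics here are taken over the whole frame, whereas the paper estimates them dynamically with star pixels excluded.

```python
import numpy as np

def extract_stars(image, k_sigma=5.0):
    """Boolean mask of candidate star pixels: intensity above the
    background mean plus k_sigma noise standard deviations."""
    mean, sigma = image.mean(), image.std()
    return image > mean + k_sigma * sigma

def centroid(window):
    """Intensity-weighted centroid of a background-subtracted star
    window, giving a sub-pixel (row, col) position."""
    w = np.clip(window, 0, None)
    total = w.sum()
    ys, xs = np.indices(w.shape)
    return (ys * w).sum() / total, (xs * w).sum() / total
```

Sub-pixel accuracy (the 1/23-pixel figure in the abstract) comes from the weighted mean: a star's point-spread function spreads light over several pixels, so the centroid interpolates between them.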
Time Optimized Algorithm for Web Document Presentation Adaptation
DEFF Research Database (Denmark)
Pan, Rong; Dolog, Peter
2010-01-01
Currently, information on the web is accessed through different devices. Each device has its own properties such as resolution, size, and capabilities to display information in different formats. This calls for adaptation of information presentation for such platforms. This paper proposes content-optimized and time-optimized algorithms for information presentation adaptation for different devices based on a hierarchical document model. The model is formalized in order to experiment with different algorithms.
Non-linear dynamics in galactic disks: the spiral-warps connection
International Nuclear Information System (INIS)
Masset, Frederic
1997-01-01
After a review of warp theories and warp waves, this research thesis reports a linear study of warp waves, assessing the role of gas compressibility when the galactic disk thickness is taken into account. The author then reports an analytical study of the non-linear coupling between warp waves and density waves, in order to calculate the coupling efficiency, to identify the regions of the galactic disk in which it is efficient, and to discuss competing physical processes (such as Landau absorption) and the validity of the assumptions made in the calculations. The next part reports numerical simulations performed to check the coupling mechanism. The author notably comments on the modifications made to existing codes, and finally presents the three-dimensional version of the developed code and discusses the choices made for it (presence of gas, choice of hydrodynamics algorithms and of gas mesh geometry, and so on). Numerical results are then presented and discussed: they show the existence of a coupling between density waves and warp waves.
Directory of Open Access Journals (Sweden)
Jonny Karlsson
2013-05-01
Traversal time and hop count analysis (TTHCA) is a recent wormhole detection algorithm for mobile ad hoc networks (MANETs) which provides enhanced detection performance against all wormhole attack variants and network types. TTHCA involves each node measuring the processing time of routing packets during the route discovery process and then delivering the measurements to the source node. In a participation mode (PM) wormhole, where malicious nodes appear in the routing tables as legitimate nodes, the time measurements can potentially be altered, preventing TTHCA from successfully detecting the wormhole. This paper analyses the conditions under which time tampering attacks succeed for PM wormholes, before introducing an extension to the TTHCA detection algorithm, called ∆T Vector, which is designed to identify time tampering while preserving low false positive rates. Simulation results confirm that the ∆T Vector extension is able to effectively detect time tampering attacks, thereby providing an important security enhancement to the TTHCA algorithm.
Real time tracking by LOPF algorithm with mixture model
Meng, Bo; Zhu, Ming; Han, Guangliang; Wu, Zhiguo
2007-11-01
A new particle filter, the Local Optimum Particle Filter (LOPF), is presented for tracking objects accurately and stably in visual sequences in real time, a challenging task in computer vision. To use the particles efficiently, we first apply the Sobel operator to extract the profile of the object. Then we employ a new local optimum algorithm to auto-initialize a certain number of particles centered on these edge points. The main advantage of doing this, instead of selecting particles randomly as in the conventional particle filter, is that we can concentrate on the more important optimum candidates and avoid unnecessary computation on negligible ones; in addition, we can mitigate the conventional degeneracy phenomenon and decrease computational costs. Because the threshold is a key factor that strongly affects the results, we adopt an adaptive threshold selection method to obtain the optimal Sobel result. The dissimilarities between the target model and the target candidates are expressed by a metric derived from the Bhattacharyya coefficient. We use both the contour cue to select particles and the color cue to describe targets as a mixture target model. The effectiveness of our scheme is demonstrated by real visual tracking experiments. Results from simulations and experiments with real video data show the improved performance of the proposed algorithm compared with the standard particle filter. The superior performance is evident when the target encounters occlusion in real video, where the standard particle filter usually fails.
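The similarity metric mentioned above is standard and can be written out directly: for two normalized histograms p and q (e.g., target model and candidate color models), the Bhattacharyya coefficient is BC = Σ√(pᵢqᵢ) and the derived distance is d = √(1 − BC). A minimal sketch (the function name is ours):

```python
import math

def bhattacharyya_distance(p, q):
    """Distance derived from the Bhattacharyya coefficient between two
    normalized histograms p and q; 0 for identical distributions,
    1 for distributions with disjoint support."""
    bc = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
    # Clamp guards against tiny negative values from floating-point error.
    return math.sqrt(max(0.0, 1.0 - bc))
```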
A class of kernel based real-time elastography algorithms.
Kibria, Md Golam; Hasan, Md Kamrul
2015-08-01
In this paper, a novel real-time kernel-based and gradient-based Phase Root Seeking (PRS) algorithm for ultrasound elastography is proposed. The signal-to-noise ratio of the strain image resulting from this method is improved by minimizing the cross-correlation discrepancy between the pre- and post-compression radio frequency signals with an adaptive temporal stretching method and by employing built-in smoothing through an exponentially weighted neighborhood kernel in the displacement calculation. Unlike conventional PRS algorithms, displacement due to tissue compression is estimated from the root of the weighted average of the zero-lag cross-correlation phases of the pairs of corresponding analytic pre- and post-compression windows in the neighborhood kernel. In addition to the proposed algorithm, the other time- and frequency-domain elastography algorithms proposed by our group (Ara et al., 2013; Hussain et al., 2012; Hasan et al., 2012) are also implemented in real time in Java, where the computations are executed serially or in parallel on multiple processors with efficient memory management. Simulation results using a finite element modeling phantom show that the proposed method significantly improves strain image quality in terms of elastographic signal-to-noise ratio (SNRe), elastographic contrast-to-noise ratio (CNRe) and mean structural similarity (MSSIM) for strains as high as 4%, compared with other techniques reported in the literature. Strain images obtained for an experimental phantom as well as in vivo breast data of malignant and benign masses also show the efficacy of the proposed method over the other reported techniques. Copyright © 2015 Elsevier B.V. All rights reserved.
Inflationary scenario from higher curvature warped spacetime
International Nuclear Information System (INIS)
Banerjee, Narayan; Paul, Tanmoy
2017-01-01
We consider a five dimensional warped spacetime, in the presence of a higher curvature term like F(R) = R + αR^2 in the bulk, in the context of the two-brane model. Our universe is identified with the TeV scale brane and emerges as a four dimensional effective theory. From the perspective of this effective theory, we examine the possibility of an 'inflationary scenario' by taking the on-brane metric ansatz to be FRW. Our results reveal that the higher curvature term in the five dimensional bulk spacetime generates a potential for the radion field. Due to the presence of the radion potential, the very early universe undergoes a stage of accelerated expansion and, moreover, the accelerating period terminates in a finite time. We also find the spectral index of curvature perturbation (n_s) and the tensor to scalar ratio (r) in the present context, which match the observational results of Planck (Astron. Astrophys. 594, A20, 2016). (orig.)
Flavor structure of warped extra dimension models
International Nuclear Information System (INIS)
Agashe, Kaustubh; Perez, Gilad; Soni, Amarjit
2005-01-01
We recently showed that warped extra-dimensional models with bulk custodial symmetry and few-TeV Kaluza-Klein (KK) masses lead to striking signals at B factories. In this paper, using a spurion analysis, we systematically study the flavor structure of models that belong to the above class. In particular we find that the profiles of the zero modes, which are similar in all these models, essentially control the underlying flavor structure. This implies that our results are robust and model independent in this class of models. We discuss in detail the origin of the signals in B physics. We also briefly study other new physics signatures that arise in rare K decays (K→πνν), in rare top decays [t→cγ(Z, gluon)], and the possibility of CP asymmetries in D^0 decays to CP eigenstates such as K_S π^0 and others. Finally we demonstrate that with light KK masses, ∼3 TeV, the above class of models with anarchic 5D Yukawas has a 'CP problem', since contributions to the neutron electric dipole moment are roughly 20 times larger than the current experimental bound. Using the AdS/CFT correspondence, these extra-dimensional models are dual to a purely 4D strongly coupled conformal Higgs sector, thus enhancing their appeal.
Directory of Open Access Journals (Sweden)
Tanti Octavia
2003-01-01
A modified Giffler and Thompson algorithm combined with dynamic slack time is used to allocate machine resources in a dynamic environment. It was compared with a Real Time Order Promising (RTP) algorithm. The performance of the modified Giffler and Thompson and RTP algorithms is measured by mean tardiness. The results show that the modified Giffler and Thompson algorithm combined with dynamic slack time provides significantly better results than the RTP algorithm in terms of mean tardiness.
Seamless warping of diffusion tensor fields
DEFF Research Database (Denmark)
Xu, Dongrong; Hao, Xuejun; Bansal, Ravi
2008-01-01
deformations in an attempt to ensure that the local deformations in the warped image remains true to the orientation of the underlying fibers; forward mapping, however, can also create "seams" or gaps and consequently artifacts in the warped image by failing to define accurately the voxels in the template...... space where the magnitude of the deformation is large (e.g., |Jacobian| > 1). Backward mapping, in contrast, defines voxels in the template space by mapping them back to locations in the original imaging space. Backward mapping allows every voxel in the template space to be defined without the creation...
Indian Academy of Sciences (India)
polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language Is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.
Arbitrary Phase Vocoders by means of Warping
Directory of Open Access Journals (Sweden)
Gianpaolo Evangelista
2013-08-01
The Phase Vocoder plays a central role in sound analysis and synthesis, allowing us to represent a sound signal in both time and frequency, similar to a music score – but possibly at much finer time and frequency scales – describing the evolution of sound events. According to the uncertainty principle, time and frequency are not independent variables, so any time-frequency representation is the result of a compromise between time and frequency resolutions, the product of which cannot be smaller than a given constant. Therefore, finer frequency resolution can only be achieved with coarser time resolution and, similarly, finer time resolution results in coarser frequency resolution. While most of the conventional methods for time-frequency representations are based on uniform time and uniform frequency resolutions, perception and physical characteristics of sound signals suggest the need for nonuniform analysis and synthesis. As the results of psycho-acoustic research show, human hearing is naturally organized in nonuniform frequency bands. On the physical side, the sounds of percussive instruments, as well as piano in the low register, show partials whose frequencies are not uniformly spaced, as opposed to the uniformly spaced partial frequencies found in harmonic sounds. Moreover, the different characteristics of sound signals at onset transients with respect to stationary segments suggest the need for nonuniform time resolution. In the effort to exploit the time-frequency resolution compromise at its best, a tight time-frequency suit should be tailored to snugly fit the sound body. In this paper we overview flexible design methods for phase vocoders with nonuniform resolutions. The methods are based on remapping the time or the frequency axis, or both, by employing suitable functions acting as warping maps, which locally change the characteristics of the time-frequency plane. As a result, the sliding windows may have time dependent
A time domain phase-gradient based ISAR autofocus algorithm
CSIR Research Space (South Africa)
Nel, W
2011-10-01
Results on simulated and measured data show that the algorithm performs well. Unlike many other ISAR autofocus techniques, the algorithm does not make use of several computationally intensive iterations between the data and image domains as part...
A Feedback Optimal Control Algorithm with Optimal Measurement Time Points
Directory of Open Access Journals (Sweden)
Felix Jost
2017-02-01
Nonlinear model predictive control has been established over the last decades as a powerful methodology for providing feedback for dynamic processes. In practice it is usually combined with parameter and state estimation techniques, which makes it possible to cope with uncertainty on many levels. To reduce the uncertainty, it has also been suggested to include optimal experimental design in the sequential process of estimation and control calculation. Most of the focus so far has been on dual control approaches, i.e., on using the controls to simultaneously excite the system dynamics (learning) as well as minimize a given objective (performing). We propose a new algorithm, which sequentially solves robust optimal control, optimal experimental design, and state and parameter estimation problems. Thus, we decouple the control and experimental design problems. This has the advantage that we can analyze the impact of measurement timing (sampling) independently, and it is practically relevant for applications with either an ethical limitation on system excitation (e.g., chemotherapy treatment) or the need for fast feedback. The algorithm shows promising results, with a 36% reduction of parameter uncertainties for the Lotka-Volterra fishing benchmark example.
Chaos Time Series Prediction Based on Membrane Optimization Algorithms
Directory of Open Access Journals (Sweden)
Meng Li
2015-01-01
This paper puts forward a prediction model for chaotic time series based on a membrane computing optimization algorithm; the model simultaneously optimizes the parameters of the phase space reconstruction (τ, m) and of the least squares support vector machine (LS-SVM) (γ, σ) using the membrane computing optimization algorithm. Accurately predicting the trend of parameters in the electromagnetic environment is an important basis for spectrum management and can help decision makers adopt an optimal action. The model presented in this paper is used to forecast the band occupancy rate of the frequency modulation (FM) broadcasting band and the interphone band. To show the applicability and superiority of the proposed model, this paper compares it with conventional similar models. The experimental results show that, for both single-step and multistep prediction, the proposed model performs best on three error measures, namely, normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE).
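The three error measures used for evaluation can be written out directly (a minimal sketch; note that NMSE conventions vary, and here it is taken as MSE normalized by the variance of the observed series):

```python
import math

def nmse(actual, pred):
    # Normalized mean square error: MSE divided by the variance of the data.
    mean = sum(actual) / len(actual)
    mse = sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual)
    var = sum((a - mean) ** 2 for a in actual) / len(actual)
    return mse / var

def rmse(actual, pred):
    # Root mean square error, in the same units as the data.
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual))

def mape(actual, pred):
    # Mean absolute percentage error; actual values must be nonzero.
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)
```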
Statistics of galaxy warps in the HDF North and South
Reshetnikov; Battaner, E; Combes, F; Jimenez-Vicente, J
We present a statistical study of the presence of galaxy warps in the Hubble deep fields. Among a complete sample of 45 edge-on galaxies above a diameter of 1.″3, we find 5 galaxies to be certainly warped and 6 galaxies as good candidates. In addition, 4 galaxies reveal a characteristic U-warp.
Overlay improvements using a real time machine learning algorithm
Schmitt-Weaver, Emil; Kubis, Michael; Henke, Wolfgang; Slotboom, Daan; Hoogenboom, Tom; Mulkens, Jan; Coogans, Martyn; ten Berge, Peter; Verkleij, Dick; van de Mast, Frank
2014-04-01
While semiconductor manufacturing is moving towards the 14nm node using immersion lithography, overlay requirements are tightened to below 5nm. Next to improvements in the immersion scanner platform, enhancements in overlay optimization and process control are needed to enable these low overlay numbers. Whereas conventional overlay control methods address wafer and lot variation autonomously with wafer pre-exposure alignment metrology and post-exposure overlay metrology, we see a need to reduce these variations by correlating more of the TWINSCAN system's sensor data directly to the post-exposure YieldStar metrology in time. In this paper we present the results of a study on applying a real-time control algorithm based on machine learning technology. Machine learning methods use context and TWINSCAN system sensor data paired with post-exposure YieldStar metrology to recognize generic behavior and train the control system to anticipate this generic behavior. Specific to this study, the data concern immersion scanner context, sensor data and on-wafer measured overlay data. By making the link between the scanner data and the wafer data we are able to establish a real-time relationship. The result is an inline controller that accounts for small changes in scanner hardware performance in time while picking up subtle lot-to-lot and wafer-to-wafer deviations introduced by wafer processing.
Real-Time Demand Side Management Algorithm Using Stochastic Optimization
Directory of Open Access Journals (Sweden)
Moses Amoasi Acquah
2018-05-01
Full Text Available A demand side management technique is deployed along with battery energy-storage systems (BESS to lower the electricity cost by mitigating the peak load of a building. Most of the existing methods rely on manual operation of the BESS, or even an elaborate building energy-management system resorting to a deterministic method that is susceptible to unforeseen growth in demand. In this study, we propose a real-time optimal operating strategy for BESS based on density demand forecast and stochastic optimization. This method takes into consideration uncertainties in demand when accounting for an optimal BESS schedule, making it robust compared to the deterministic case. The proposed method is verified and tested against existing algorithms. Data obtained from a real site in South Korea is used for verification and testing. The results show that the proposed method is effective, even for the cases where the forecasted demand deviates from the observed demand.
Teshima, Tara Lynn; Cheng, Homan; Pakdel, Amir; Kiss, Alex; Fialkov, Jeffrey A
2016-01-01
Costal cartilage is an important reconstructive tissue for correcting nasal deformities. Warping of costal cartilage, a recognized complication, can lead to significant functional and aesthetic problems. The authors present a technique to prevent warping that involves transverse slicing of the sixth-seventh costal cartilaginous junction, which, when sliced perpendicular to the long axis of the rib, provides multiple long, narrow, clinically useful grafts with balanced cross-sections. The aim was to measure differences in cartilage warp between this technique (TJS) and traditional carving techniques. Costal cartilage was obtained from human subjects and cut to clinically relevant dimensions using a custom cutting jig. The sixth-seventh costal cartilaginous junction was sliced transversely, leaving the outer surface intact. The adjacent sixth rib cartilage was carved concentrically and eccentrically. The samples were incubated, and standardized serial photography was performed over time up to 4 weeks. Warp was quantified by measuring the nonlinearity of the grafts using least-squares regression and compared between carving techniques. TJS grafts (n = 10) resulted in significantly less warp than both eccentrically (n = 3) and concentrically carved grafts (n = 3) (P < 0.0001). Warp was significantly higher with eccentric carving compared with concentric carving (P < 0.0001). Warp increased significantly with time for both eccentric (P = 0.0002) and concentric (P = 0.0007) techniques, while TJS warp did not (P = 0.56). The technique of transverse slicing costal cartilage from the sixth-seventh junction minimizes warp compared with traditional carving methods, providing ample grafts of adequate length and versatility for reconstructive requirements.
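The least-squares warp quantification can be illustrated as follows (a hypothetical reimplementation: the function fits a straight line to digitized graft centerline points and reports the RMS residual as the nonlinearity measure; the authors' exact metric may differ):

```python
def warp_index(points):
    """Nonlinearity of a graft centerline, given as (x, y) points.

    Fit y = slope * x + intercept by ordinary least squares, then
    return the RMS residual: a perfectly straight graft scores 0,
    and a bowed (warped) graft scores higher.
    """
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    ss = sum((y - (slope * x + intercept)) ** 2 for x, y in points)
    return (ss / n) ** 0.5
```

Serial photographs taken over the 4-week incubation would each yield one such index, giving the warp-versus-time curves compared in the study.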
Needle bar for warp knitting machines
Hagel, Adolf; Thumling, Manfred
1979-01-01
Needle bar for warp knitting machines with a number of needles individually set into slits of the bar and having shafts cranked to such an extent that the head section of each needle is in alignment with the shaft section accommodated by the slit. Slackening of the needles will thus not influence the needle spacing.
Parareal algorithms with local time-integrators for time fractional differential equations
Wu, Shu-Lin; Zhou, Tao
2018-04-01
It is challenging to design parareal algorithms for time-fractional differential equations due to the historical effect of the fractional operator. A direct extension of the classical parareal method to such equations leads to unbalanced computational time in each process. In this work, we present an efficient parareal iteration scheme that overcomes this issue by adopting two recently developed local time-integrators for time fractional operators. In both approaches, one introduces auxiliary variables to localize the fractional operator. To this end, we propose a new strategy to perform the coarse grid correction so that the auxiliary variables and the solution variable are corrected separately in a mixed pattern. It is shown that the proposed parareal algorithm admits a robust rate of convergence. Numerical examples are presented to support our conclusions.
A novel time-domain signal processing algorithm for real time ventricular fibrillation detection
International Nuclear Information System (INIS)
Monte, G E; Scarone, N C; Liscovsky, P O; Rotter, P
2011-01-01
This paper presents an application of a novel algorithm for real time detection of ECG pathologies, especially ventricular fibrillation. It is based on a segmentation and labeling process applied to an oversampled signal. After this treatment, the sequence of segments is analyzed to obtain global signal behaviours, much as a human observer would. The entire process can be seen as morphological filtering after a smart data sampling. The algorithm does not require any pre-processing of the digital ECG signal, and its computational cost is low, so it can be embedded into sensors for wearable and permanent applications. The proposed algorithm could provide the input signal description to expert systems or artificial intelligence software in order to detect other pathologies.
Space-time spectral collocation algorithm for solving time-fractional Tricomi-type equations
Directory of Open Access Journals (Sweden)
Abdelkawy M.A.
2016-01-01
We introduce a new numerical algorithm for solving one-dimensional time-fractional Tricomi-type equations (T-FTTEs). We use shifted Jacobi polynomials as basis functions, and the fractional derivatives are evaluated using the Caputo definition. The shifted Jacobi Gauss-Lobatto algorithm is used for the spatial discretization, while the shifted Jacobi Gauss-Radau algorithm is applied for the temporal approximation. Substituting these approximations into the problem leads to a system of algebraic equations that greatly simplifies the problem. The proposed algorithm is successfully extended to solve the two-dimensional T-FTTEs. Extensive numerical tests illustrate the capability and high accuracy of the proposed methodologies.
Real time equilibrium reconstruction algorithm in EAST tokamak
International Nuclear Information System (INIS)
Wang Huazhong; Luo Jiarong; Huang Qinchao
2004-01-01
The EAST (HT-7U) superconducting tokamak is a national fusion research project of China, with a capability of long-pulse (∼1000 s) operation. In order to realize long-duration steady-state operation of EAST, significant real-time control capability is required, and it is crucial to obtain the current profile parameters and the plasma shape in real time through a flexible control system. As these discharge parameters cannot be directly measured, a current profile consistent with the magnetohydrodynamic equilibrium is evaluated from external magnetic measurements, based on a linearized iterative least squares method, which can meet the requirements of the measurements. The algorithm, for which the EFIT (equilibrium fitting) code is used as a reference, is given in this paper, and the computational effort is reduced by parameterizing the current profile linearly in terms of a number of physical parameters. To introduce this reconstruction algorithm clearly, the main hardware design is also described. (authors)
Cable Damage Detection System and Algorithms Using Time Domain Reflectometry
Energy Technology Data Exchange (ETDEWEB)
Clark, G A; Robbins, C L; Wade, K A; Souza, P R
2009-03-24
This report describes the hardware system and the set of algorithms we have developed for detecting damage in cables for the Advanced Development and Process Technologies (ADAPT) Program. This program is part of the W80 Life Extension Program (LEP). The system could be generalized for application to other systems in the future. Critical cables can undergo various types of damage (e.g. short circuits, open circuits, punctures, compression) that manifest as changes in the dielectric/impedance properties of the cables. For our specific problem, only one end of the cable is accessible, and no exemplars of actual damage are available. This work addresses the detection of dielectric/impedance anomalies in transient time domain reflectometry (TDR) measurements on the cables. The approach is to interrogate the cable using TDR techniques, in which a known pulse is inserted into the cable, and reflections from the cable are measured. The key operating principle is that any important cable damage will manifest itself as an electrical impedance discontinuity that can be measured in the TDR response signal. Machine learning classification algorithms are effectively eliminated from consideration, because only a small number of cables is available for testing, so a sufficient sample size is not attainable. Nonetheless, a key requirement is to achieve very high probability of detection and very low probability of false alarm. The approach is to compare TDR signals from possibly damaged cables to signals or an empirical model derived from reference cables that are known to be undamaged. This requires that the TDR signals are reasonably repeatable from test to test on the same cable, and from cable to cable. Empirical studies show that repeatability is the 'long pole in the tent' for damage detection, because it has been difficult to achieve reasonable repeatability. This one factor dominated the project. The two-step model
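The core comparison step, flagging impedance discontinuities as deviations of a measured TDR trace from an undamaged reference trace, can be sketched as follows (an assumed thresholding scheme for illustration; the report's actual detection statistics may differ, and repeatability noise would be estimated from repeat measurements):

```python
def detect_impedance_anomalies(reference, measured, noise_std, k=4.0):
    """Flag sample indices where the measured TDR trace deviates from the
    undamaged-cable reference by more than k standard deviations of the
    test-to-test repeatability noise (k trades detection probability
    against false-alarm rate)."""
    residual = [m - r for m, r in zip(measured, reference)]
    return [i for i, r in enumerate(residual) if abs(r) > k * noise_std]
```

This makes the report's repeatability concern concrete: if `noise_std` is large because traces vary between tests, the threshold `k * noise_std` must rise, and small damage signatures become undetectable.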
A general algorithm for computing distance transforms in linear time
Meijster, A.; Roerdink, J.B.T.M.; Hesselink, W.H.; Goutsias, J; Vincent, L; Bloomberg, DS
2000-01-01
A new general algorithm for computing distance transforms of digital images is presented. The algorithm consists of two phases, each consisting of two scans, a forward and a backward scan. The first phase scans the image column-wise, while the second phase scans the image row-wise. Since the
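The two-phase, two-scan structure can be illustrated for the simplest case, the city-block (L1) metric (a sketch only; the paper's general algorithm also covers Euclidean and chessboard metrics in linear time):

```python
def distance_transform(image):
    """City-block distance transform with the two-phase, two-scan layout:
    image[y][x] is truthy for feature pixels; the result gives each
    pixel's L1 distance to the nearest feature pixel."""
    rows, cols = len(image), len(image[0])
    INF = rows + cols  # larger than any attainable L1 distance

    # Phase 1: column-wise scans (top-down, then bottom-up) give each
    # pixel its vertical distance to the nearest feature in its column.
    g = [[0 if image[y][x] else INF for x in range(cols)] for y in range(rows)]
    for x in range(cols):
        for y in range(1, rows):                      # forward scan
            g[y][x] = min(g[y][x], g[y - 1][x] + 1)
        for y in range(rows - 2, -1, -1):             # backward scan
            g[y][x] = min(g[y][x], g[y + 1][x] + 1)

    # Phase 2: row-wise scans (left-right, then right-left) combine the
    # column results horizontally, completing the L1 distance.
    for y in range(rows):
        for x in range(1, cols):                      # forward scan
            g[y][x] = min(g[y][x], g[y][x - 1] + 1)
        for x in range(cols - 2, -1, -1):             # backward scan
            g[y][x] = min(g[y][x], g[y][x + 1] + 1)
    return g
```

Each pixel is visited a constant number of times, so the run time is linear in the number of pixels, which is the property the title refers to.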
Indian Academy of Sciences (India)
to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...
Efficient On-the-fly Algorithms for the Analysis of Timed Games
DEFF Research Database (Denmark)
Cassez, Franck; David, Alexandre; Fleury, Emmanuel
2005-01-01
In this paper, we propose the first efficient on-the-fly algorithm for solving games based on timed game automata with respect to reachability and safety properties. The algorithm we propose is a symbolic extension of the on-the-fly algorithm suggested by Liu & Smolka [15] for linear-time model-checking. Various optimizations of the basic symbolic algorithm are proposed, as well as methods for obtaining time-optimal winning strategies (for reachability games). Extensive evaluation of an experimental implementation of the algorithm yields very encouraging performance results.
An Empirical Derivation of the Run Time of the Bubble Sort Algorithm.
Gonzales, Michael G.
1984-01-01
Suggests a moving pictorial tool to help teach principles of the bubble sort algorithm. Develops such a tool applied to an unsorted list of numbers and describes a method to derive the run time of the algorithm. The method can be modified to derive the run times of various other algorithms. (JN)
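The empirical run-time derivation described can be reproduced in a few lines: instrument bubble sort with a comparison counter and observe the quadratic growth. A minimal sketch (not the article's pictorial tool):

```python
import random

def bubble_sort(items):
    """Unoptimized bubble sort; returns (sorted list, comparison count)."""
    a = list(items)
    comparisons = 0
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comparisons

# The comparison count is exactly n*(n-1)/2 for this variant, so
# doubling n roughly quadruples the empirical run time.
for n in (100, 200, 400):
    _, c = bubble_sort([random.random() for _ in range(n)])
    print(n, c)
```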
Sadygov, Rovshan G; Maroto, Fernando Martin; Hühmer, Andreas F R
2006-12-15
We present an algorithmic approach to align three-dimensional chromatographic surfaces of LC-MS data of complex mixture samples. The approach consists of two steps. In the first step, we prealign chromatographic profiles: two-dimensional projections of chromatographic surfaces. This is accomplished by correlation analysis using fast Fourier transforms. In this step, a temporal offset that maximizes the overlap and dot product between two chromatographic profiles is determined. In the second step, the algorithm generates correlation matrix elements between full mass scans of the reference and sample chromatographic surfaces. The temporal offset from the first step indicates a range of the mass scans that are possibly correlated; the correlation matrix is then calculated only for these mass scans. The correlation matrix carries information on highly correlated scans, but it does not itself determine the scan or time alignment. Alignment is determined as a path in the correlation matrix that maximizes the sum of the correlation matrix elements. The computational complexity of the optimal path generation problem is reduced by the use of dynamic programming. The program produces time-aligned surfaces. The use of the temporal offset from the first step in the second step reduces the computation time for generating the correlation matrix and speeds up the process. The algorithm has been implemented in a program, ChromAlign, developed in C++ for the .NET 2 environment on Windows XP. In this work, we demonstrate the application of ChromAlign to the alignment of LC-MS surfaces of several datasets: a mixture of known proteins, samples from digests of surface proteins of T-cells, and samples prepared from digests of cerebrospinal fluid. ChromAlign accurately aligns the LC-MS surfaces we studied. In these examples, we discuss various aspects of the alignment by ChromAlign, such as constant time axis shifts and warping of chromatographic surfaces.
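The first (pre-alignment) step can be sketched as a search for the temporal offset that maximizes the dot product of two chromatographic profiles. ChromAlign computes this correlation via fast Fourier transforms for speed; the dependency-free sketch below uses a direct search and is illustrative only, with the function name and `max_shift` parameter being our own:

```python
def best_offset(reference, sample, max_shift):
    """Return the shift of `sample` (in scans) that maximizes its dot
    product with `reference`. A direct O(n * max_shift) search is used
    here instead of the FFT-based correlation for clarity."""
    def score(shift):
        s = 0.0
        for i, r in enumerate(reference):
            j = i + shift
            if 0 <= j < len(sample):
                s += r * sample[j]
        return s
    return max(range(-max_shift, max_shift + 1), key=score)
```

The returned offset then restricts the range of mass-scan pairs for which the full correlation matrix of the second step needs to be computed.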
International Nuclear Information System (INIS)
Zhou Shumin; Sun Yamin; Tang Bin
2007-01-01
In order to enhance the time synchronization quality of a distributed system, a time synchronization algorithm for distributed systems based on server time-revise and workstation self-adjust is proposed. The time-revise cycle and self-adjust process are introduced in the paper. The algorithm reduces network traffic effectively and enhances the quality of clock synchronization. (authors)
Warped Kähler potentials and fluxes
Energy Technology Data Exchange (ETDEWEB)
Martucci, Luca [Dipartimento di Fisica ed Astronomia “Galileo Galilei”, Università di Padova & I.N.F.N. Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy)]
2017-01-13
The four-dimensional effective theory for type IIB warped flux compactifications proposed in https://www.doi.org/10.1007/JHEP03(2015)067 is completed by taking into account the backreaction of the Kähler moduli on the three-form fluxes. The only required modification consists in a flux-dependent contribution to the chiral fields parametrising the Kähler moduli. The resulting supersymmetric effective theory satisfies the no-scale condition and consistently combines previous partial results present in the literature. Similar results hold for M-theory warped compactifications on Calabi-Yau fourfolds, whose effective field theory and Kähler potential are also discussed.
Flavor universal resonances and warped gravity
Energy Technology Data Exchange (ETDEWEB)
Agashe, Kaustubh; Du, Peizhi; Hong, Sungwoo; Sundrum, Raman [Maryland Center for Fundamental Physics, Department of Physics,University of Maryland, College Park, MD 20742 (United States)
2017-01-04
Warped higher-dimensional compactifications with “bulk” standard model, or their AdS/CFT dual as the purely 4D scenario of Higgs compositeness and partial compositeness, offer an elegant approach to resolving the electroweak hierarchy problem as well as the origins of flavor structure. However, low-energy electroweak/flavor/CP constraints and the absence of non-standard physics at LHC Run 1 suggest that a “little hierarchy problem” remains, and that the new physics underlying naturalness may lie out of LHC reach. Assuming this to be the case, we show that there is a simple and natural extension of the minimal warped model in the Randall-Sundrum framework, in which matter, gauge and gravitational fields propagate modestly different degrees into the IR of the warped dimension, resulting in rich and striking consequences for the LHC (and beyond). The LHC-accessible part of the new physics is AdS/CFT dual to the mechanism of “vectorlike confinement”, with TeV-scale Kaluza-Klein excitations of the gauge and gravitational fields dual to spin-0,1,2 composites. Unlike the minimal warped model, these low-lying excitations have predominantly flavor-blind and flavor/CP-safe interactions with the standard model. Remarkably, this scenario also predicts small deviations from flavor-blindness originating from virtual effects of Higgs/top compositeness at ∼O(10) TeV, with subdominant resonance decays into Higgs/top-rich final states, giving the LHC an early “preview” of the nature of the resolution of the hierarchy problem. Discoveries of this type at LHC Run 2 would thereby anticipate (and set a target for) even more explicit explorations of Higgs compositeness at a 100 TeV collider, or for next-generation flavor tests.
Lorentz Violation in Warped Extra Dimensions
International Nuclear Information System (INIS)
Rizzo, Thomas G.
2011-01-01
Higher dimensional theories which address some of the problematic issues of the Standard Model (SM) naturally involve some form of D = 4 + n-dimensional Lorentz invariance violation (LIV). In such models the fundamental physics which leads to, e.g., field localization, orbifolding, the existence of brane terms and the compactification process all can introduce LIV in the higher dimensional theory while still preserving 4-d Lorentz invariance. In this paper, attempting to capture some of this physics, we extend our previous analysis of LIV in 5-d UED-type models to those with 5-d warped extra dimensions. To be specific, we employ the 5-d analog of the SM Extension of Kostelecky et al., which incorporates a complete set of operators arising from spontaneous LIV. We show that while the response of the bulk scalar, fermion and gauge fields to the addition of LIV operators in warped models is qualitatively similar to what happens in the flat 5-d UED case, the gravity sector of these models reacts very differently than in flat space. Specifically, we show that LIV in this warped case leads to a non-zero bulk mass for the 5-d graviton and so the would-be zero mode, which we identify as the usual 4-d graviton, must necessarily become massive. The origin of this mass term is the simultaneous existence of the constant non-zero AdS₅ curvature and the loss of general co-ordinate invariance via LIV in the 5-d theory. Thus warped 5-d models with LIV in the gravity sector are not phenomenologically viable.
Application of dynamical system methods to galactic dynamics : from warps to double bars
Sánchez Martín, Patricia
2015-01-01
Most galaxies have a warped shape when they are seen from an edge-on point of view. In this work we apply dynamical system methods to find an explanation of this phenomenon that agrees with its abundance among galaxies, its persistence in time and the angular size of observed warps. Starting from a simple, but realistic, 3D galaxy model formed by a bar and a flat disc, we study the effect produced by a small misalignment between the angular momentum of the system and its angular velocity. ...
Probabilistic wind power forecasting with online model selection and warped gaussian process
International Nuclear Information System (INIS)
Kou, Peng; Liang, Deliang; Gao, Feng; Gao, Lin
2014-01-01
Highlights: • A new online ensemble model for the probabilistic wind power forecasting. • Quantifying the non-Gaussian uncertainties in wind power. • Online model selection that tracks the time-varying characteristic of wind generation. • Dynamically altering the input features. • Recursive update of base models. - Abstract: Based on the online model selection and the warped Gaussian process (WGP), this paper presents an ensemble model for the probabilistic wind power forecasting. This model provides the non-Gaussian predictive distributions, which quantify the non-Gaussian uncertainties associated with wind power. In order to follow the time-varying characteristics of wind generation, multiple time dependent base forecasting models and an online model selection strategy are established, thus adaptively selecting the most probable base model for each prediction. WGP is employed as the base model, which handles the non-Gaussian uncertainties in wind power series. Furthermore, a regime switch strategy is designed to modify the input feature set dynamically, thereby enhancing the adaptiveness of the model. In an online learning framework, the base models should also be time adaptive. To achieve this, a recursive algorithm is introduced, thus permitting the online updating of WGP base models. The proposed model has been tested on the actual data collected from both single and aggregated wind farms
Time scale algorithm: Definition of ensemble time and possible uses of the Kalman filter
Tavella, Patrizia; Thomas, Claudine
1990-01-01
The comparative study of two time scale algorithms, devised to satisfy different but related requirements, is presented. They are ALGOS(BIPM), producing the international reference TAI at the Bureau International des Poids et Mesures, and AT1(NIST), generating the real-time time scale AT1 at the National Institute of Standards and Technology. In each case, the time scale is a weighted average of clock readings, but the weight determination and the frequency prediction are different because they are adapted to different purposes. The possibility of using a mathematical tool, such as the Kalman filter, together with the definition of the time scale as a weighted average, is also analyzed. Results obtained by simulation are presented.
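The common core of both algorithms, the time scale as a weighted average of clock readings, can be written down directly. The sketch below is a hedged illustration, not ALGOS or AT1 themselves (the two differ precisely in how the weights and frequency predictions are chosen); the function name and the reference convention are our own:

```python
def ensemble_time(offsets, weights):
    """Ensemble time as a weighted average of individual clock readings.
    `offsets` are each clock's reading relative to an arbitrary common
    reference; `weights` reflect each clock's estimated stability."""
    total = sum(weights)
    return sum(x * w for x, w in zip(offsets, weights)) / total
```

A clock that is down-weighted (for instance after a detected frequency step) then contributes proportionally less to the ensemble, which is how both time scales limit the damage a misbehaving clock can do.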
Real time algorithms for sharp wave ripple detection.
Sethi, Ankit; Kemere, Caleb
2014-01-01
Neural activity during sharp wave ripples (SWR), short bursts of coordinated oscillatory activity in the CA1 region of the rodent hippocampus, is implicated in a variety of memory functions from consolidation to recall. Detection of these events in an algorithmic framework has thus far relied on simple thresholding techniques with heuristically derived parameters. This study investigates testing and improving the current methods for detection of SWR events in neural recordings. We propose and profile methods to reduce latency in ripple detection. The proposed algorithms are tested on simulated ripple data. The findings show that simple real-time algorithms can improve upon existing power thresholding methods and can detect ripple activity with latencies in the range of 10-20 ms.
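The baseline power-thresholding approach the study builds on can be sketched as follows. The input is assumed to be an already band-pass filtered, rectified/smoothed instantaneous power trace, and the parameter values are illustrative, not the study's:

```python
import statistics

def detect_ripples(power, n_std=3.0, min_len=5):
    """Flag sharp-wave-ripple candidates as runs of samples where the
    ripple-band power exceeds mean + n_std * std. Returns a list of
    (start, end) sample-index pairs; runs shorter than min_len samples
    are discarded as noise."""
    mu = statistics.fmean(power)
    sigma = statistics.pstdev(power)
    thresh = mu + n_std * sigma
    events, start = [], None
    for i, p in enumerate(power):
        if p > thresh and start is None:
            start = i
        elif p <= thresh and start is not None:
            if i - start >= min_len:
                events.append((start, i))
            start = None
    if start is not None and len(power) - start >= min_len:
        events.append((start, len(power)))
    return events
```

Because an offline detector like this only reports an event after the power falls back below threshold, the latency reductions the study targets come from deciding earlier, during the rising phase of the burst.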
Directory of Open Access Journals (Sweden)
Madeira Sara C
2009-06-01
Background The ability to monitor the change in expression patterns over time, and to observe the emergence of coherent temporal responses using gene expression time series, obtained from microarray experiments, is critical to advance our understanding of complex biological processes. In this context, biclustering algorithms have been recognized as an important tool for the discovery of local expression patterns, which are crucial to unravel potential regulatory mechanisms. Although most formulations of the biclustering problem are NP-hard, when working with time series expression data the interesting biclusters can be restricted to those with contiguous columns. This restriction leads to a tractable problem and enables the design of efficient biclustering algorithms able to identify all maximal contiguous column coherent biclusters. Methods In this work, we propose e-CCC-Biclustering, a biclustering algorithm that finds and reports all maximal contiguous column coherent biclusters with approximate expression patterns in time polynomial in the size of the time series gene expression matrix. This polynomial time complexity is achieved by manipulating a discretized version of the original matrix using efficient string processing techniques. We also propose extensions to deal with missing values, discover anticorrelated and scaled expression patterns, and different ways to compute the errors allowed in the expression patterns. We propose a scoring criterion combining the statistical significance of expression patterns with a similarity measure between overlapping biclusters. Results We present results in real data showing the effectiveness of e-CCC-Biclustering and its relevance in the discovery of regulatory modules describing the transcriptomic expression patterns occurring in Saccharomyces cerevisiae in response to heat stress. In particular, the results show the advantage of considering approximate patterns when compared to state of
Linear-time general decoding algorithm for the surface code
Darmawan, Andrew S.; Poulin, David
2018-05-01
A quantum error correcting protocol can be substantially improved by taking into account features of the physical noise process. We present an efficient decoder for the surface code which can account for general noise features, including coherences and correlations. We demonstrate that the decoder significantly outperforms the conventional matching algorithm on a variety of noise models, including non-Pauli noise and spatially correlated noise. The algorithm is based on an approximate calculation of the logical channel using a tensor-network description of the noisy state.
Directory of Open Access Journals (Sweden)
Mingjian Sun
2015-01-01
Photoacoustic imaging is an innovative technique for imaging biomedical tissues. The time reversal reconstruction algorithm, in which a numerical model of the acoustic forward problem is run backwards in time, is widely used. In this paper, a time reversal reconstruction algorithm based on particle swarm optimization (PSO) optimized support vector machine (SVM) interpolation is proposed for photoacoustic imaging. Numerical results show that the reconstructed images of the proposed algorithm are more accurate than those of the nearest neighbor interpolation, linear interpolation, and cubic convolution interpolation based time reversal algorithms; the proposed method can provide higher imaging quality while using significantly fewer measurement positions or scanning times.
Four-flux and warped heterotic M-theory compactifications
International Nuclear Information System (INIS)
Curio, Gottfried; Krause, Axel
2001-01-01
In the framework of heterotic M-theory compactified on a Calabi-Yau threefold 'times' an interval, the relation between geometry and four-flux is derived beyond first order. Besides the case with general flux which cannot be described by a warped geometry, one is naturally led to consider two special types of four-flux in detail. One choice shows how the M-theory relation between warped geometry and flux reproduces the analogous one of the weakly coupled heterotic string with torsion. The other one leads to a quadratic dependence of the Calabi-Yau volume with respect to the orbifold direction which avoids the problem with negative volume of the first order approximation. As in the first order analysis we still find that Newton's constant is bounded from below at just the phenomenologically relevant value. However, the bound does not require an ad hoc truncation of the orbifold size any longer. Finally we demonstrate explicitly that to leading order in κ^(2/3) no cosmological constant is induced in the four-dimensional low-energy action. This is in accord with what one can expect from supersymmetry.
Warped conformal field theory as lower spin gravity
Hofman, Diego M.; Rollier, Blaise
2015-08-01
Two dimensional Warped Conformal Field Theories (WCFTs) may represent the simplest examples of field theories without Lorentz invariance that can be described holographically. As such they constitute a natural window into holography in non-AdS space-times, including the near horizon geometry of generic extremal black holes. It is shown in this paper that WCFTs possess a type of boost symmetry. Using this insight, we discuss how to couple these theories to background geometry. This geometry is not Riemannian. We call it Warped Geometry and it turns out to be a variant of a Newton-Cartan structure with additional scaling symmetries. With this formalism the equivalent of Weyl invariance in these theories is presented and we write two explicit examples of WCFTs. These are free fermionic theories. Lastly we present a systematic description of the holographic duals of WCFTs. It is argued that the minimal setup is not Einstein gravity but an SL(2, R) × U(1) Chern-Simons Theory, which we call Lower Spin Gravity. This point of view makes manifest the definition of boundary for these non-AdS geometries. This case represents the first step towards understanding a fully invariant formalism for WN field theories and their holographic duals.
A Note on "A polynomial-time algorithm for global value numbering"
Nabeezath, Saleena; Paleri, Vineeth
2013-01-01
Global Value Numbering (GVN) is a popular method for detecting redundant computations. A polynomial-time algorithm for GVN is presented by Gulwani and Necula (2006). Here we present two limitations of this GVN algorithm due to which detection of certain kinds of redundancies cannot be done using this algorithm. The first concerns the use of this algorithm in detecting some instances of the classical global common subexpressions, and the second concerns its use in the detection of...
A linear-time algorithm for Euclidean feature transform sets
Hesselink, Wim H.
2007-01-01
The Euclidean distance transform of a binary image is the function that assigns to every pixel the Euclidean distance to the background. The Euclidean feature transform is the function that assigns to every pixel the set of background pixels at this distance. We present an algorithm to compute the
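The definition can be pinned down with a brute-force sketch. This is emphatically not the paper's linear-time algorithm, only a quadratic-per-pixel reference implementation of what the feature transform computes; the function name is our own:

```python
def feature_transform(image):
    """Brute-force Euclidean feature transform of a binary image
    (0 = background, 1 = foreground): for every pixel, the set of
    background pixels at minimal Euclidean distance. Assumes at least
    one background pixel exists."""
    pts = [(y, x) for y, row in enumerate(image)
           for x, v in enumerate(row) if v == 0]
    result = {}
    for y, row in enumerate(image):
        for x, _ in enumerate(row):
            # Squared distances avoid floating point entirely.
            d2 = {p: (p[0] - y) ** 2 + (p[1] - x) ** 2 for p in pts}
            m = min(d2.values())
            result[(y, x)] = {p for p, v in d2.items() if v == m}
    return result
```

Note that unlike the distance transform, the feature transform is set-valued: a pixel equidistant from two background pixels gets both.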
A linear time layout algorithm for business process models
Gschwind, T.; Pinggera, J.; Zugal, S.; Reijers, H.A.; Weber, B.
2014-01-01
The layout of a business process model influences how easily it can be understood. Existing layout features in process modeling tools often rely on graph representations, but do not take the specific properties of business process models into account. In this paper, we propose an algorithm that is
An improved exponential-time algorithm for k-SAT
Czech Academy of Sciences Publication Activity Database
Pudlák, Pavel
2005-01-01
Roč. 52, č. 3 (2005), s. 337-364 ISSN 0004-5411 R&D Projects: GA AV ČR(CZ) IAA1019901 Institutional research plan: CEZ:AV0Z10190503 Keywords: CNF satisfiability * randomized algorithms Subject RIV: BA - General Mathematics Impact factor: 2.197, year: 2005
Time-Delay System Identification Using Genetic Algorithm
DEFF Research Database (Denmark)
Yang, Zhenyu; Seested, Glen Thane
2013-01-01
problem through an identification approach using the real coded Genetic Algorithm (GA). The desired FOPDT/SOPDT model is directly identified based on the measured system's input and output data. In order to evaluate the quality and performance of this GA-based approach, the proposed method is compared...
Low-Energy Real-Time OS Using Voltage Scheduling Algorithm for Variable Voltage Processors
Okuma, Takanori; Yasuura, Hiroto
2001-01-01
This paper presents a real-time OS based on µITRON using a proposed voltage scheduling algorithm for variable voltage processors which can vary supply voltage dynamically. The proposed voltage scheduling algorithms assign a voltage level to each task dynamically in order to minimize energy consumption under timing constraints. Using the presented real-time OS, running tasks with low supply voltage leads to drastic energy reduction. In addition, the presented voltage scheduling algorithm is ...
Zhu, Dechao; Deng, Zhongmin; Wang, Xingwei
2001-08-01
In the present paper, a series of hierarchical warping functions is developed to analyze the static and dynamic problems of thin walled composite laminated helicopter rotors composed of several layers with a single closed cell. This method is a development and extension of the traditional constrained warping theory of thin walled metallic beams, which has proved very successful since the 1940s. The warping distribution along the perimeter of each layer is expanded into a series of successively corrective warping functions, with the traditional warping function caused by free torsion or free bending as the first term, and is assumed to be piecewise linear along the thickness direction of the layers. The governing equations are derived based upon the variational principle of minimum potential energy for static analysis and the Rayleigh quotient for free vibration analysis. Then the hierarchical finite element method is introduced to form a numerical algorithm. Both static and natural vibration problems of sample box beams are analyzed with the present method to show the main mechanical behavior of the thin walled composite laminated helicopter rotor.
A Comparison of Evolutionary Algorithms for Tracking Time-Varying Recursive Systems
Directory of Open Access Journals (Sweden)
White Michael S
2003-01-01
A comparison is made of the behaviour of some evolutionary algorithms in time-varying adaptive recursive filter systems. Simulations show that an algorithm including random immigrants outperforms a more conventional algorithm using the breeder genetic algorithm as the mutation operator when the time variation is discontinuous, but neither algorithm performs well when the time variation is rapid but smooth. To meet this deficit, a new hybrid algorithm which uses a hill climber as an additional genetic operator, applied for several steps at each generation, is introduced. A comparison is made of the effect of applying the hill climbing operator a few times to all members of the population or a larger number of times solely to the best individual; it is found that applying it to the whole population yields the better results, substantially improved compared with those obtained using earlier methods.
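The "random immigrants" idea is easy to sketch: a plain generational GA in which a fixed fraction of each new population is replaced by freshly randomized individuals, restoring diversity after a discontinuous change in the environment. Everything below (real-valued representation, one-point crossover, truncation selection, all parameter values) is an illustrative assumption, not the paper's exact setup:

```python
import random

def evolve(fitness, n_pop=30, n_genes=8, n_gen=50, imm_rate=0.2):
    """Minimal generational GA with random immigrants.
    `fitness` maps a gene list to a score to maximize."""
    pop = [[random.random() for _ in range(n_genes)] for _ in range(n_pop)]
    for _ in range(n_gen):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: n_pop // 2]           # truncation selection
        children = []
        while len(survivors) + len(children) < n_pop:
            a, b = random.sample(survivors, 2)  # one-point crossover
            cut = random.randrange(1, n_genes)
            children.append(a[:cut] + b[cut:])
        pop = survivors + children
        # Random immigrants: overwrite the tail of the population with
        # fresh random individuals to maintain diversity.
        for i in range(int(imm_rate * n_pop)):
            pop[-(i + 1)] = [random.random() for _ in range(n_genes)]
    return max(pop, key=fitness)
```

When the fitness function changes abruptly between generations, the immigrant fraction supplies candidate solutions that are uncorrelated with the now-stale population, which is the effect the paper exploits.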
Indian Academy of Sciences (India)
ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...
Fundamental limitations on 'warp drive' spacetimes
Energy Technology Data Exchange (ETDEWEB)
Lobo, Francisco S N [Centro de Astronomia e AstrofIsica da Universidade de Lisboa, Campo Grande, Ed. C8 1749-016 Lisbon (Portugal); Visser, Matt [School of Mathematical and Computing Sciences, Victoria University of Wellington, PO Box 600, Wellington (New Zealand)
2004-12-21
'Warp drive' spacetimes are useful as 'gedanken-experiments' that force us to confront the foundations of general relativity, and among other things, to precisely formulate the notion of 'superluminal' communication. After carefully formulating the Alcubierre and Natario warp drive spacetimes, and verifying their non-perturbative violation of the classical energy conditions, we consider a more modest question and apply linearized gravity to the weak-field warp drive, testing the energy conditions to first and second orders of the warp-bubble velocity, v. Since we take the warp-bubble velocity to be non-relativistic, v << c, we are not primarily interested in the 'superluminal' features of the warp drive. Instead we focus on a secondary feature of the warp drive that has not previously been remarked upon: the warp drive (if it could be built) would be an example of a 'reaction-less drive'. For both the Alcubierre and Natario warp drives we find that the occurrence of significant energy condition violations is not just a high-speed effect, but that the violations persist even at arbitrarily low speeds. A particularly interesting feature of this construction is that it is now meaningful to think of placing a finite mass spaceship at the centre of the warp bubble, and then see how the energy in the warp field compares with the mass-energy of the spaceship. There is no hope of doing this in Alcubierre's original version of the warp field, since by definition the point at the centre of the warp bubble moves on a geodesic and is 'massless'. That is, in Alcubierre's original formalism and in the Natario formalism the spaceship is always treated as a test particle, while in the linearized theory we can treat the spaceship as a finite mass object. For both the Alcubierre and Natario warp drives we find that even at low speeds the net (negative) energy stored in the warp fields must be a significant fraction
Bae, Kyung-hoon; Park, Changhan; Kim, Eun-soo
2008-03-01
In this paper, intermediate view reconstruction (IVR) using an adaptive disparity search algorithm (ASDA) is proposed for real-time 3-dimensional (3D) processing. The proposed algorithm can reduce the processing time of disparity estimation by selecting an adaptive disparity search range, and can also increase the quality of 3D imaging. That is, by adaptively predicting the mutual correlation between a stereo image pair using the proposed algorithm, the bandwidth of the stereo input image pair can be compressed to the level of a conventional 2D image, and a predicted image can be effectively reconstructed using a reference image and disparity vectors. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm improves the PSNR of a reconstructed image by about 4.8 dB compared with conventional algorithms, and reduces the synthesizing time of a reconstructed image by about 7.02 sec compared with conventional algorithms.
Real-time algorithm for acoustic imaging with a microphone array.
Huang, Xun
2009-05-01
Acoustic phased arrays have become an important testing tool in aeroacoustic research, where the conventional beamforming algorithm has been adopted as a classical processing technique. The computation, however, has to be performed off-line due to its expensive cost. An innovative algorithm with real-time capability is proposed in this work. The algorithm is similar to a classical observer in the time domain, extended to the frequency domain for array processing. The observer-based algorithm is beneficial mainly for its capability of operating over sampling blocks recursively. Expensive experimental time can therefore be reduced extensively, since any defect in a test can be corrected instantaneously.
Directory of Open Access Journals (Sweden)
Jiří Fejfar
2012-01-01
We present a comparison of three artificial intelligence algorithms for the classification of time series derived from musical excerpts. The algorithms were chosen to represent different principles of classification: a statistical approach, neural networks, and competitive learning. The first algorithm is the classical k-Nearest Neighbours algorithm, the second is the Multilayer Perceptron (MLP), an example of an artificial neural network, and the third is a Learning Vector Quantization (LVQ) algorithm, the supervised counterpart to the unsupervised Self-Organizing Map (SOM). After our own earlier experiments with unlabelled data we moved on to utilizing data labels, which generally led to better classification accuracy. As we need a huge data set of labelled time series (a priori knowledge of the correct class to which each time series instance belongs), we used, following good experience in former studies, musical excerpts as a source of real-world time series. We use the standard deviation of the sound signal as a descriptor of a musical excerpt's volume level. We briefly describe the principle of each algorithm as well as its implementation, giving links for further research. The classification results of each algorithm are presented in a confusion matrix showing the numbers of misclassifications and allowing the overall accuracy of the algorithm to be evaluated. Results are compared and particular misclassifications are discussed for each algorithm. Finally the best solution is chosen and further research goals are given.
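Of the three classifiers, k-NN is the simplest to sketch over such descriptor series. The feature vectors and labels below are made-up stand-ins for the paper's per-window standard-deviation descriptors, and the function name is our own:

```python
import math

def knn_classify(query, train, k=3):
    """Plain k-nearest-neighbours over equal-length feature series.
    `train` is a list of (series, label) pairs; the label of the query
    is decided by majority vote among the k nearest training series
    under Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    neighbours = sorted(train, key=lambda item: dist(query, item[0]))[:k]
    labels = [label for _, label in neighbours]
    return max(set(labels), key=labels.count)  # majority vote
```

Unlike the MLP and LVQ classifiers compared in the paper, k-NN has no training phase at all, which makes it a natural baseline in such comparisons.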
Micro-Doppler Signal Time-Frequency Algorithm Based on STFRFT
Directory of Open Access Journals (Sweden)
Cunsuo Pang
2016-09-01
Full Text Available This paper proposes a time-frequency algorithm based on the short-time fractional order Fourier transform (STFRFT) for identification of complicated-movement targets. This algorithm, consisting of an STFRFT order-changing and quick selection method, is effective in reducing the computational load. A multi-order STFRFT time-frequency algorithm is also developed that makes use of the time-frequency feature of each micro-Doppler component signal. This algorithm improves the estimation accuracy of time-frequency curve fitting through multi-order matching. Finally, experimental data were used to demonstrate STFRFT's performance in micro-Doppler time-frequency analysis. The results validated the higher estimation accuracy of the proposed algorithm. It may be applied to an LFM (linear frequency modulated) pulse radar, SAR (synthetic aperture radar), or ISAR (inverse synthetic aperture radar) for improving the probability of target recognition.
Warping methods for spectroscopic and chromatographic signal alignment: A tutorial
Energy Technology Data Exchange (ETDEWEB)
Bloemberg, Tom G., E-mail: T.Bloemberg@science.ru.nl [Radboud University Nijmegen, Institute for Molecules and Materials, Heyendaalseweg 135, 6525 AJ Nijmegen (Netherlands); Radboud University Nijmegen, Education Institute for Molecular Sciences, Heyendaalseweg 135, 6525 AJ Nijmegen (Netherlands); Gerretzen, Jan; Lunshof, Anton [Radboud University Nijmegen, Institute for Molecules and Materials, Heyendaalseweg 135, 6525 AJ Nijmegen (Netherlands); Wehrens, Ron [Centre for Research and Innovation, Fondazione Edmund Mach, Via E. Mach, 1, 38010 San Michele all’Adige, TN (Italy); Buydens, Lutgarde M.C. [Radboud University Nijmegen, Institute for Molecules and Materials, Heyendaalseweg 135, 6525 AJ Nijmegen (Netherlands)
2013-06-05
Highlights: •The concepts of warping and alignment are introduced. •The most important warping methods are critically reviewed and explained. •Reference selection, evaluation and the place of warping in preprocessing are discussed. •Some pitfalls, especially for LC–MS and similar data, are addressed. •Examples are provided, together with programming scripts to rework and extend them. -- Abstract: Warping methods are an important class of methods that can correct for misalignments in (among others) chemical measurements. Their use in preprocessing of chromatographic, spectroscopic and spectrometric data has grown rapidly over the last decade. This tutorial review aims to give a critical introduction to the most important warping methods, the place of warping in preprocessing, and current views on the related matters of reference selection, optimization, and evaluation. Some pitfalls in warping, notably for liquid chromatography–mass spectrometry (LC–MS) data and similar, will be discussed. Examples will be given of the application of a number of freely available warping methods to a nuclear magnetic resonance (NMR) spectroscopic dataset and a chromatographic dataset. As part of the Supporting Information, we provide a number of programming scripts in Matlab and R, allowing the reader to work through the extended examples in detail and to reproduce the figures in this paper.
Lower bounds on the run time of the univariate marginal distribution algorithm on OneMax
DEFF Research Database (Denmark)
Krejca, Martin S.; Witt, Carsten
2017-01-01
The Univariate Marginal Distribution Algorithm (UMDA), a popular estimation of distribution algorithm, is studied from a run time perspective. On the classical OneMax benchmark function, a lower bound of Ω(μ√n + n log n), where μ is the population size, on its expected run time is proved. This is the first direct lower bound on the run time of the UMDA. It is stronger than the bounds that follow from general black-box complexity theory and is matched by the run time of many evolutionary algorithms. The results are obtained through advanced analyses of the stochastic change of the frequencies of bit values maintained by the algorithm, including carefully designed potential functions. These techniques may prove useful in advancing the field of run time analysis for estimation of distribution algorithms in general.
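As context for the bound, a minimal UMDA loop on OneMax can be sketched as follows. This is a generic textbook version, not the setup used in the authors' analysis; all parameter values are illustrative.

```python
import random

def umda_onemax(n=20, mu=10, lam=50, max_gens=200, seed=1):
    """Minimal UMDA on OneMax: sample lam bitstrings from a product
    distribution, keep the mu best, and re-estimate the per-bit
    frequencies. Frequencies are clamped to [1/n, 1 - 1/n], as is
    standard. Returns the generation at which the optimum was found
    (or max_gens if it was not)."""
    rng = random.Random(seed)
    p = [0.5] * n  # marginal probability of sampling a 1 at each position
    for gen in range(1, max_gens + 1):
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n)]
               for _ in range(lam)]
        pop.sort(key=sum, reverse=True)   # OneMax fitness = number of ones
        if sum(pop[0]) == n:
            return gen                    # all-ones string found
        selected = pop[:mu]
        for i in range(n):
            freq = sum(ind[i] for ind in selected) / mu
            p[i] = min(max(freq, 1.0 / n), 1.0 - 1.0 / n)  # margin clamp
    return max_gens

gens = umda_onemax()
print("generations to optimum:", gens)
```

The frequency clamp is the mechanism whose stochastic drift the lower-bound analysis tracks via potential functions.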
Accuracy evaluation of a new real-time continuous glucose monitoring algorithm in hypoglycemia
DEFF Research Database (Denmark)
Mahmoudi, Zeinab; Jensen, Morten Hasselstrøm; Johansen, Mette Dencker
2014-01-01
UNLABELLED: Background: The purpose of this study was to evaluate the performance of a new continuous glucose monitoring (CGM) calibration algorithm and to compare it with the Guardian(®) REAL-Time (RT) (Medtronic Diabetes, Northridge, CA) calibration algorithm in hypoglycemia. SUBJECTS AND METHODS: CGM data were obtained from 10 type 1 diabetes patients undergoing insulin-induced hypoglycemia. Data were obtained in two separate sessions using the Guardian RT CGM device. Data from the same CGM sensor were calibrated by two different algorithms: the Guardian RT algorithm and a new calibration algorithm. The accuracy of the two algorithms was compared using four performance metrics. RESULTS: The median (mean) of absolute relative deviation in the whole range of plasma glucose was 20.2% (32.1%) for the Guardian RT calibration and 17.4% (25.9%) for the new calibration algorithm. The mean (SD…
Perceived Speech Quality Estimation Using DTW Algorithm
Directory of Open Access Journals (Sweden)
S. Arsenovski
2009-06-01
Full Text Available In this paper, a method for speech quality estimation is evaluated by simulating the transfer of speech over packet-switched and mobile networks. The proposed system uses the Dynamic Time Warping algorithm to compare the test and received speech. Several tests have been made on a test speech sample of a single speaker, with simulated packet (frame) loss effects on the perceived speech. The achieved results have been compared with measured PESQ values on the used transmission channel, and their correlation has been observed.
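The pairwise comparison underlying such a system can be illustrated with a textbook DTW implementation. This is a sketch, not the paper's actual code; the absolute-difference local cost is an assumption.

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic time warping between two 1-D sequences.

    Returns the cumulative alignment cost under an absolute-difference
    local distance; O(len(x) * len(y)) time and memory.
    """
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

# Identical sequences align at zero cost, and a time-stretched copy
# stays cheap because the warp absorbs the temporal distortion.
a = [0.0, 1.0, 2.0, 1.0, 0.0]
b = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0]
print(dtw_distance(a, a))  # 0.0
print(dtw_distance(a, b))  # 0.0 (the shift is absorbed by the warp)
```

In a speech-quality setting the scalar samples would be replaced by per-frame feature vectors and the local cost by a vector distance.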
Efficient algorithms for approximate time separation of events
Indian Academy of Sciences (India)
An Experimental Evaluation of Real-Time DVFS Scheduling Algorithms
Saha, Sonal
2011-01-01
Dynamic voltage and frequency scaling (DVFS) is an extensively studied energy management technique, which aims to reduce the energy consumption of computing platforms by dynamically scaling the CPU frequency. Real-Time DVFS (RT-DVFS) is a branch of DVFS which reduces CPU energy consumption through DVFS while at the same time ensuring that task time constraints are satisfied by constructing appropriate real-time task schedules. The literature presents numerous RT-DVFS scheduling…
A real-time MTFC algorithm of space remote-sensing camera based on FPGA
Zhao, Liting; Huang, Gang; Lin, Zhe
2018-01-01
A real-time MTFC algorithm for a space remote-sensing camera based on an FPGA was designed. The algorithm provides real-time image processing to enhance image clarity while the remote-sensing camera is running on-orbit. The image restoration algorithm adopts a modular design. The on-orbit MTF measurement module computes the edge spread function, the line spread function, the ESF difference operation, the normalized MTF, and the MTFC parameters. The MTFC image filtering and noise suppression module implements the filtering algorithm and effectively suppresses noise. System Generator was used to design the image processing algorithms, simplifying the system design structure and the redesign process. Image gray gradient, dot sharpness, edge contrast, and mid-to-high frequencies were enhanced. The SNR of the restored image decreased by less than 1 dB compared to the original image. The image restoration system can be widely used in various fields.
Nonlinear Gravitational Waves as Dark Energy in Warped Spacetimes
Directory of Open Access Journals (Sweden)
Reinoud Jan Slagter
2017-02-01
Full Text Available We find an azimuthal-angle-dependent approximate wave-like solution to second order on a warped five-dimensional manifold with a self-gravitating U(1) scalar gauge field (cosmic string) on the brane, using the multiple-scale method. The spectra of the several orders of approximation show maxima of the energy distribution dependent on the azimuthal angle and the winding numbers of the subsequent orders of the scalar field. This breakup of the quantized flux quanta does not lead to instability of the asymptotic wave-like solution, due to the suppression of the n-dependency in the energy-momentum tensor components by the warp factor. This effect is triggered by the contribution of the five-dimensional Weyl tensor on the brane. This contribution can be understood as dark energy and can trigger the self-acceleration of the universe without the need of a cosmological constant. There is a striking relation between the symmetry breaking of the Higgs field described by the winding number and the SO(2) breaking of the axially symmetric configuration into a discrete subgroup of rotations about 180°. The discrete sequence of non-axially-symmetric deviations, cancelled by the emission of gravitational waves in order to restore the SO(2) symmetry, triggers the pressure T_zz for discrete values of the azimuthal angle. There could be a possible relation between the recently discovered angle preferences of polarization axes of quasars on large scales and our theoretically predicted angle dependency, and this could be evidence for the existence of cosmic strings. Careful comparison of this spectrum of extremal values of the first- and second-order φ-dependency with the distribution of the alignment of the quasar polarizations is necessary. This can be accomplished when more observational data become available. It turns out that, for late time, the vacuum 5D spacetime is conformally invariant if the warp factor fulfils the equation of a vibrating…
A real-time ECG data compression and transmission algorithm for an e-health device.
Lee, SangJoon; Kim, Jungkuk; Lee, Myoungho
2011-09-01
This paper introduces a real-time data compression and transmission algorithm between e-health terminals for a periodic ECG signal. The proposed algorithm consists of five compression procedures and four reconstruction procedures. In order to evaluate the performance of the proposed algorithm, it was applied to all 48 recordings of the MIT-BIH arrhythmia database, and the compression ratio (CR), percent root mean square difference (PRD), percent root mean square difference normalized (PRDN), rms, SNR, and quality score (QS) values were obtained. The results showed that the CR was 27.9:1 and the PRD was 2.93 on average for all 48 data instances with a 15% window size. In addition, the performance of the algorithm was compared to those of similar algorithms introduced recently by others. The proposed algorithm showed clearly superior performance in all 48 data instances at compression ratios lower than 15:1, whereas it showed similar or slightly inferior PRD performance at compression ratios higher than 20:1. In light of the fact that similarity with the original data becomes meaningless when the PRD is higher than 2, the proposed algorithm performs significantly better than the other algorithms. Moreover, because the algorithm can compress and transmit data in real time, it can serve as an optimal biosignal data transmission method for limited-bandwidth communication between e-health devices.
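The CR and PRD figures quoted above are standard figures of merit for ECG compression; a sketch of how they are computed follows. The toy signal and bit counts are made up for illustration (the bit counts are chosen so CR comes out as 27.9:1, matching the paper's headline number).

```python
import math

def compression_metrics(original, reconstructed, original_bits, compressed_bits):
    """Compression ratio (CR) and percent root-mean-square difference (PRD).

    CR  = size of the raw signal over the size of the compressed stream.
    PRD = 100 * sqrt(sum((orig - recon)^2) / sum(orig^2)), a percentage
          measure of reconstruction error relative to signal energy.
    """
    cr = original_bits / compressed_bits
    num = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    den = sum(o ** 2 for o in original)
    prd = 100.0 * math.sqrt(num / den)
    return cr, prd

# Toy example with made-up numbers.
orig = [1.0, 2.0, 3.0, 2.0]
recon = [1.0, 2.1, 2.9, 2.0]
cr, prd = compression_metrics(orig, recon, original_bits=27900, compressed_bits=1000)
print(cr)              # 27.9
print(round(prd, 2))   # 3.33
```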
Khomich, A; Kugel, A; Männer, R; Müller, M; Baines, J T M
2003-01-01
Some track reconstruction algorithms, which are common to all B-physics channels and standard RoI processing, have been tested for execution time and assessed for suitability for speed-up using an FPGA coprocessor. The studies presented in this note were performed in the C/C++ framework CTrig, which was the fullest set of algorithms available at the time of the study. For investigation of possible speed-up, the most time-consuming parts of TRT-LUT were implemented in VHDL for running on the FPGA coprocessor board MPRACE. MPRACE (Reconfigurable Accelerator / Computing Engine) is an FPGA coprocessor based on a Xilinx Virtex-2 FPGA, implemented as a 64-bit/66 MHz PCI card developed at the University of Mannheim. Timing measurement results for a TRT Full Scan algorithm executed on the MPRACE are presented here as well. The measurement results show a speed-up factor of ~2 for this algorithm.
Implementation of Real-Time Feedback Flow Control Algorithms on a Canonical Testbed
Tian, Ye; Song, Qi; Cattafesta, Louis
2005-01-01
This report summarizes the activities on "Implementation of Real-Time Feedback Flow Control Algorithms on a Canonical Testbed." The work summarized consists primarily of two parts. The first part summarizes our previous work and the extensions to adaptive ID and control algorithms. The second part concentrates on the validation of adaptive algorithms by applying them to a vibration beam test bed. Extensions to flow control problems are discussed.
A fast readout algorithm for Cluster Counting/Timing drift chambers on a FPGA board
Energy Technology Data Exchange (ETDEWEB)
Cappelli, L. [Università di Cassino e del Lazio Meridionale (Italy); Creti, P.; Grancagnolo, F. [Istituto Nazionale di Fisica Nucleare, Lecce (Italy); Pepino, A., E-mail: Aurora.Pepino@le.infn.it [Istituto Nazionale di Fisica Nucleare, Lecce (Italy); Tassielli, G. [Istituto Nazionale di Fisica Nucleare, Lecce (Italy); Fermilab, Batavia, IL (United States); Università Marconi, Roma (Italy)
2013-08-01
A fast readout algorithm for Cluster Counting and Timing purposes has been implemented and tested on a Virtex 6 core FPGA board. The algorithm analyses and stores data coming from a helium-based drift tube instrumented with a 1 GSPS fADC, and represents the outcome of balancing cluster identification efficiency against high-speed performance. The algorithm can be implemented in electronics boards serving multiple fADC channels as an online preprocessing stage for drift chamber signals.
Niazmardi, S.; Safari, A.; Homayouni, S.
2017-09-01
Crop mapping through classification of Satellite Image Time-Series (SITS) data can provide very valuable information for several agricultural applications, such as crop monitoring, yield estimation, and crop inventory. However, SITS data classification is not straightforward, because different images of a SITS dataset carry different levels of information relevant to the classification problem. Moreover, SITS data are four-dimensional and cannot be classified using conventional classification algorithms. To address these issues, in this paper we present a classification strategy based on Multiple Kernel Learning (MKL) algorithms for SITS data classification. In this strategy, different kernels are first constructed from the different images of the SITS data and then combined into a composite kernel using the MKL algorithms. The composite kernel, once constructed, can be used for classification of the data using kernel-based classification algorithms. We compared the computational time and the classification performance of the proposed strategy using different MKL algorithms for the purpose of crop mapping. The considered MKL algorithms are MKL-Sum, SimpleMKL, LPMKL, and Group-Lasso MKL. Experimental tests of the proposed strategy on two SITS data sets, acquired by SPOT satellite sensors, showed that the strategy provides better performance than the standard classification algorithm. The results also showed that the optimization method of the chosen MKL algorithm affects both the computational time and the classification accuracy of the strategy.
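The composite-kernel construction can be sketched as follows. This is a hypothetical fixed-weight convex combination of RBF base kernels, one per acquisition date; the MKL algorithms cited in the abstract learn these weights rather than fixing them, and all data and parameters here are invented.

```python
import numpy as np

def rbf_kernel(X, gamma):
    """RBF (Gaussian) kernel matrix for the samples of one image/date."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # squared distances
    return np.exp(-gamma * d2)

def composite_kernel(kernels, weights):
    """Convex combination of base kernels: K = sum_i w_i K_i,
    with w_i >= 0 normalized to sum to 1."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * K for wi, K in zip(w, kernels))

rng = np.random.default_rng(0)
images = [rng.normal(size=(5, 3)) for _ in range(4)]  # 4 dates, 5 pixels, 3 bands
Ks = [rbf_kernel(X, gamma=0.5) for X in images]
K = composite_kernel(Ks, weights=[1, 2, 1, 1])        # fixed illustrative weights
print(K.shape)  # (5, 5)
```

The resulting K can be handed to any kernel-based classifier (e.g., an SVM via a precomputed-kernel interface).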
Hoeksema, F.W.; Srinivasan, R.; Schiphorst, Roelof; Slump, Cornelis H.
2004-01-01
In joint timing and carrier offset estimation algorithms for Time Division Duplexing (TDD) OFDM systems, different timing metrics are proposed to determine the beginning of a burst or symbol. In this contribution we investigated the different timing metrics in order to establish their impact on the…
DigiWarp: a method for deformable mouse atlas warping to surface topographic data
Energy Technology Data Exchange (ETDEWEB)
Joshi, Anand A; Shattuck, David W; Toga, Arthur W [Laboratory of Neuro Imaging, UCLA School of Medicine, Los Angeles, CA 90095 (United States); Chaudhari, Abhijit J [Department of Radiology, UC Davis School of Medicine, Sacramento, CA 95817 (United States); Li Changqing; Cherry, Simon R [Department of Biomedical Engineering, University of California-Davis, Davis, CA 95616 (United States); Dutta, Joyita; Leahy, Richard M, E-mail: anand.joshi@loni.ucla.ed, E-mail: leahy@sipi.usc.ed [Signal and Image Processing Institute, University of Southern California, Los Angeles, CA 90089 (United States)
2010-10-21
For pre-clinical bioluminescence or fluorescence optical tomography, the animal's surface topography and internal anatomy need to be estimated for improving the quantitative accuracy of reconstructed images. The animal's surface profile can be measured by all-optical systems, but estimation of the internal anatomy using optical techniques is non-trivial. A 3D anatomical mouse atlas may be warped to the estimated surface. However, fitting an atlas to surface topography data is challenging because of variations in the posture and morphology of imaged mice. In addition, acquisition of partial data (for example, from limited views or with limited sampling) can make the warping problem ill-conditioned. Here, we present a method for fitting a deformable mouse atlas to surface topographic range data acquired by an optical system. As an initialization procedure, we match the posture of the atlas to the posture of the mouse being imaged using landmark constraints. The asymmetric L² pseudo-distance between the atlas surface and the mouse surface is then minimized in order to register two data sets. A Laplacian prior is used to ensure smoothness of the surface warping field. Once the atlas surface is normalized to match the range data, the internal anatomy is transformed using elastic energy minimization. We present results from performance evaluation studies of our method where we have measured the volumetric overlap between the internal organs delineated directly from MRI or CT and those estimated by our proposed warping scheme. Computed Dice coefficients indicate excellent overlap in the brain and the heart, with fair agreement in the kidneys and the bladder.
A polynomial time algorithm for checking regularity of totally normed process algebra
Yang, F.; Huang, H.
2015-01-01
A polynomial algorithm for the regularity problem of weak and branching bisimilarity on totally normed process algebra (PA) processes is given. Its time complexity is O(n³ + mn), where n is the number of transition rules and m is the maximal length of the rules. The algorithm works for…
Genetic algorithms for adaptive real-time control in space systems
Vanderzijp, J.; Choudry, A.
1988-01-01
Genetic algorithms used for learning, as one way to control the combinatorial explosion associated with the generation of new rules, are discussed. The genetic algorithm approach tends to work best when it can be applied to a domain-independent knowledge representation. Applications to real-time control in space systems are discussed.
Formulations and exact algorithms for the vehicle routing problem with time windows
DEFF Research Database (Denmark)
Kallehauge, Brian
2008-01-01
In this paper we review the exact algorithms proposed in the last three decades for the solution of the vehicle routing problem with time windows (VRPTW). The exact algorithms for the VRPTW are in many aspects inherited from work on the traveling salesman problem (TSP). In recognition of this fact...
Algorithm Development for a Real-Time Military Noise Monitor
National Research Council Canada - National Science Library
Vipperman, Jeffrey S; Bucci, Brian
2006-01-01
The long-range goal of this 1-year SERDP Exploratory Development (SEED) project was to create an improved real-time, high-energy military impulse noise monitoring system that can detect events with peak levels (Lpk...
Soft hairy warped black hole entropy
Grumiller, Daniel; Hacker, Philip; Merbis, Wout
2018-02-01
We reconsider warped black hole solutions in topologically massive gravity and find novel boundary conditions that allow for soft hairy excitations on the horizon. To compute the associated symmetry algebra we develop a general framework to compute asymptotic symmetries in any Chern-Simons-like theory of gravity. We use this to show that the near-horizon symmetry algebra consists of two u(1) current algebras and recover the surprisingly simple entropy formula S = 2π(J0+ + J0-), where J0± are the zero-mode charges of the current algebras. This provides the first example of a locally non-maximally-symmetric configuration exhibiting this entropy law and thus non-trivial evidence for its universality.
Warped unification, proton stability, and dark matter.
Agashe, Kaustubh; Servant, Géraldine
2004-12-03
We show that solving the problem of baryon-number violation in nonsupersymmetric grand unified theories (GUTs) in warped higher-dimensional spacetime can lead to a stable Kaluza-Klein particle. This exotic particle has the gauge quantum numbers of a right-handed neutrino, but carries fractional baryon number and is related to the top quark within the higher-dimensional GUT. A combination of baryon number and SU(3) color ensures its stability. Its relic density can easily be of the right value for masses in the 10 GeV to few-TeV range. An exciting aspect of these models is that the entire parameter space will be tested at near-future dark matter direct detection experiments. Other exotic GUT partners of the top quark are also light and can be produced at high energy colliders with distinctive signatures.
Language comprehension warps the mirror neuron system
Directory of Open Access Journals (Sweden)
Noah eZarr
2013-12-01
Full Text Available Is the mirror neuron system (MNS) used in language understanding? According to embodied accounts of language comprehension, understanding sentences describing actions makes use of neural mechanisms of action control, including the MNS. Consequently, repeatedly comprehending sentences describing similar actions should induce adaptation of the MNS, thereby warping its use in other cognitive processes such as action recognition and prediction. To test this prediction, participants read blocks of multiple sentences where each sentence in the block described transfer of objects in a direction away from or toward the reader. Following each block, adaptation was measured by having participants predict the end-point of videotaped actions. The adapting sentences disrupted prediction of actions in the same direction, but (a) only for videos of biological motion, and (b) only when the effector implied by the language (e.g., the hand) matched the videos. These findings are signatures of the mirror neuron system.
Gravity on a little warped space
International Nuclear Information System (INIS)
George, Damien P.; McDonald, Kristian L.
2011-01-01
We investigate the consistent inclusion of 4D Einstein gravity on a truncated slice of AdS5 whose bulk-gravity and UV scales are much less than the 4D Planck scale, M_Pl. Such 'Little Warped Spaces' have found phenomenological utility and can be motivated by string realizations of the Randall-Sundrum framework. Using the interval approach to brane-world gravity, we show that the inclusion of a large UV-localized Einstein-Hilbert term allows one to consistently incorporate 4D Einstein gravity into the low-energy theory. We detail the spectrum of Kaluza-Klein metric fluctuations and, in particular, examine the coupling of the little radion to matter. Furthermore, we show that Goldberger-Wise stabilization can be successfully implemented on such spaces. Our results demonstrate that realistic low-energy effective theories can be constructed on these spaces, and have relevance for existing models in the literature.
A sub-cubic time algorithm for computing the quartet distance between two general trees
DEFF Research Database (Denmark)
Nielsen, Jesper; Kristensen, Anders Kabell; Mailund, Thomas
2011-01-01
Background: When inferring phylogenetic trees, different algorithms may give different trees. To study such effects a measure for the distance between two trees is useful. Quartet distance is one such measure, and is the number of quartet topologies that differ between two trees. Results: We have derived a new algorithm for computing the quartet distance between a pair of general trees, i.e. trees where inner nodes can have any degree ≥ 3. The time and space complexity of our algorithm is sub-cubic in the number of leaves and does not depend on the degree of the inner nodes. This makes it the fastest algorithm so far for computing the quartet distance between general trees independent of the degree of the inner nodes. Conclusions: We have implemented our algorithm and two of the best competitors. Our new algorithm is significantly faster than the competition and seems to run in close…
Adnan, F. A.; Romlay, F. R. M.; Shafiq, M.
2018-04-01
Owing to the advent of Industry 4.0, the need for further evaluation of the processes applied in additive manufacturing, particularly the computational process of slicing, is non-trivial. This paper evaluates a real-time algorithm for slicing an STL-formatted computer-aided design (CAD) model. A line-plane intersection equation is applied to perform the slicing procedure at any given height. This algorithm has been found to provide better computational time regardless of the number of facets in the STL model. Its performance is evaluated by comparing the computational times for different geometries.
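The line-plane intersection step can be sketched as follows. This is an illustrative implementation, not the authors' code; degenerate cases, such as a vertex lying exactly on the slicing plane, are ignored for brevity.

```python
def slice_triangle(v0, v1, v2, z):
    """Intersect one STL facet (triangle) with the horizontal plane at height z.

    Returns the segment (pair of points) where the plane cuts the triangle,
    or None if the plane misses it. Each edge is tested with the parametric
    line-plane intersection p = a + t*(b - a), with t = (z - a_z)/(b_z - a_z).
    Edges touching the plane only at a vertex are skipped (degenerate case).
    """
    pts = []
    for a, b in ((v0, v1), (v1, v2), (v2, v0)):
        az, bz = a[2], b[2]
        if (az - z) * (bz - z) < 0:          # edge strictly crosses the plane
            t = (z - az) / (bz - az)
            pts.append((a[0] + t * (b[0] - a[0]),
                        a[1] + t * (b[1] - a[1]),
                        z))
    return tuple(pts) if len(pts) == 2 else None

# A triangle spanning z in [0, 2], sliced at z = 1.
seg = slice_triangle((0, 0, 0), (2, 0, 0), (0, 0, 2), 1.0)
print(seg)  # ((1.0, 0.0, 1.0), (0.0, 0.0, 1.0))
```

Running this per facet and per layer height, then chaining the resulting segments into closed contours, yields the slice outlines.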
Performances of the New Real Time Tsunami Detection Algorithm applied to tide gauges data
Chierici, F.; Embriaco, D.; Morucci, S.
2017-12-01
Real-time tsunami detection algorithms play a key role in any Tsunami Early Warning System. We have developed a new algorithm for tsunami detection (TDA) based on real-time tide removal and real-time band-pass filtering of seabed pressure time series acquired by Bottom Pressure Recorders. The TDA algorithm greatly increases the tsunami detection probability, shortens the detection delay, and enhances detection reliability with respect to the most widely used tsunami detection algorithm, while containing the computational cost. The algorithm is designed to be used also in autonomous early warning systems, with a set of input parameters and procedures that can be reconfigured in real time. We have also developed a methodology based on Monte Carlo simulations to test tsunami detection algorithms. The algorithm performance is estimated by defining and evaluating statistical parameters, namely the detection probability and the detection delay, which are functions of the tsunami amplitude and wavelength, and the rate of false alarms. In this work we present the performance of the TDA algorithm applied to tide gauge data. We have adapted the new tsunami detection algorithm and the Monte Carlo test methodology to tide gauges. Sea level data acquired by coastal tide gauges in different locations and environmental conditions have been used in order to consider real working scenarios in the test. We also present an application of the algorithm to the tsunami generated by the Tohoku earthquake of 11 March 2011, using data recorded by several tide gauges scattered all over the Pacific area.
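The tide-removal plus band-pass idea can be sketched with simple moving averages. This is illustrative only: the actual TDA uses purpose-built real-time filters, and all window lengths, thresholds, and test-signal parameters below are invented.

```python
import numpy as np

def detect_tsunami(level, fs, tide_win_s=3600, smooth_win_s=60, threshold=0.05):
    """Toy TDA-style detector on a sea-level series (metres).

    Tide removal: subtract a long moving average (the slowly varying tide).
    Band-pass: a short moving average then suppresses high-frequency noise,
    leaving the tsunami band. An alarm fires where the residual exceeds
    the threshold. Returns the band-passed residual and the alarm indices.
    """
    def moving_avg(x, n):
        n = max(int(n), 1)
        return np.convolve(x, np.ones(n) / n, mode="same")

    tide = moving_avg(level, tide_win_s * fs)       # low-frequency component
    residual = level - tide                          # tide removed
    band = moving_avg(residual, smooth_win_s * fs)   # high-freq noise removed
    alarms = np.flatnonzero(np.abs(band) > threshold)
    return band, alarms

# Synthetic check: a 1 Hz gauge recording a semidiurnal tide plus a
# 0.3 m Gaussian tsunami pulse centred at the 3-hour mark.
fs = 1
t = np.arange(0, 6 * 3600, 1.0 / fs)
tide = 1.0 * np.sin(2 * np.pi * t / (12.42 * 3600))
pulse = 0.3 * np.exp(-((t - 3 * 3600) ** 2) / (2 * 300.0**2))
band, alarms = detect_tsunami(tide + pulse, fs)
print(alarms.size > 0)  # the pulse trips the detector
```

The one-metre tide is almost entirely removed by the long moving average, so only the short-period pulse survives in the detection band.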
Hardware Algorithms For Tile-Based Real-Time Rendering
Crisu, D.
2012-01-01
In this dissertation, we present the GRAphics AcceLerator (GRAAL) framework for developing embedded tile-based rasterization hardware for mobile devices, meant to accelerate real-time 3-D graphics (OpenGL compliant) applications. The goal of the framework is a low-cost, low-power, high-performance
Outlier detection algorithms for least squares time series regression
DEFF Research Database (Denmark)
Johansen, Søren; Nielsen, Bent
We review recent asymptotic results on some robust methods for multiple regression. The regressors include stationary and non-stationary time series as well as polynomial terms. The methods include the Huber-skip M-estimator, 1-step Huber-skip M-estimators, in particular the Impulse Indicator Sat...
An algorithm for learning real-time automata (extended abstract)
Verwer, S.E.; De Weerdt, M.M.; Witteveen, C.
2007-01-01
A common model for discrete event systems is a deterministic finite automaton (DFA). An advantage of this model is that it can be interpreted by domain experts. When observing a real-world system, however, there often is more information than just the sequence of discrete events: the time at which
Improving real-time train dispatching : Models, algorithms and applications
D'Ariano, A.
2008-01-01
Traffic controllers monitor railway traffic, sequencing train movements and setting routes with the aim of ensuring smooth train behaviour and limiting existing delays as much as possible. Due to the strict time limit available for computing a new timetable during operations, which so far is rather infeasible
Algorithmic power management - Energy minimisation under real-time constraints
Gerards, Marco Egbertus Theodorus
2014-01-01
Energy consumption is a major concern for designers of embedded devices. Especially for battery operated systems (like many embedded systems), the energy consumption limits the time for which a device can be active, and the amount of processing that can take place. In this thesis we study how the
Approximation algorithms for replenishment problems with fixed turnover times
T. Bosman (Thomas); M. van Ee (Martijn); Y. Jiao (Yang); A. Marchetti Spaccamela (Alberto); R. Ravi; L. Stougie (Leen)
2018-01-01
We introduce and study a class of optimization problems we coin replenishment problems with fixed turnover times: a very natural model that has received little attention in the literature. Nodes with capacity for storing a certain commodity are located at various places; at each node the
Some examples of image warping for low vision prosthesis
Juday, Richard D.; Loshin, David S.
1988-01-01
NASA has developed an image processor, the Programmable Remapper, for certain functions in machine vision. The Remapper performs a highly arbitrary geometric warping of an image at video rate. It might ultimately be shrunk to a size and cost that could allow its use in a low-vision prosthesis. Coordinate warpings have been developed for retinitis pigmentosa (tunnel vision) and for maculopathy (loss of central field) that are intended to make best use of the patient's remaining viable retina. The rationales and mathematics are presented for some warpings that we will try in clinical studies using the Remapper's prototype.
Implicit time-dependent finite difference algorithm for quench simulation
International Nuclear Information System (INIS)
Koizumi, Norikiyo; Takahashi, Yoshikazu; Tsuji, Hiroshi
1994-12-01
A magnet in a fusion machine faces many difficulties in its application because of the requirements of a large operating current, high operating field and high breakdown voltage. A cable-in-conduit (CIC) conductor is the best candidate to overcome these difficulties. However, uncertainty remained about quench events in cable-in-conduit conductors because of the difficulty of analyzing the fluid dynamics equation. Several scientists therefore developed numerical codes for quench simulation. However, most of them were based on an explicit time-dependent finite difference scheme, in which the discrete time increment is strictly restricted by the CFL (Courant-Friedrichs-Lewy) condition. Therefore, long CPU times were consumed by quench simulation. The authors then developed a new quench simulation code, POCHI1, which is based on an implicit time-dependent scheme. In POCHI1, the fluid dynamics equation is linearized according to a procedure applied by Beam and Warming, yielding a tridiagonal system; no iteration is necessary to solve the fluid dynamics equation, which leads to a great reduction in CPU time. POCHI1 can also cope with non-linear boundary conditions. In this study, a comparison with experimental results was carried out. The normal zone propagation behavior was investigated in two samples of CIC conductors with different hydraulic diameters. The measured and simulated normal zone propagation lengths showed relatively good agreement, although the behavior of the normal voltage shows slight disagreement. These results indicate the need to improve the treatment of the heat transfer coefficient in the turbulent flow region and of the electric resistivity of the copper stabilizer in the high-temperature, high-field region. (author)
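The key computational point, that an implicit scheme reduces each time step to a tridiagonal solve requiring no iteration, can be illustrated with the standard Thomas algorithm (a generic O(n) sketch, not the POCHI1 source; the example system is a made-up implicit diffusion step):

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal,
    d = right-hand side. Classic Thomas algorithm, O(n) -- the reason an
    implicit time step needs no iteration."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]           # eliminate the sub-diagonal
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Toy implicit 1-D diffusion step: (I - Laplacian) x = d on three nodes.
x = thomas_solve(a=[0, -1, -1], b=[3, 3, 3], c=[-1, -1, 0], d=[1, 1, 1])
```

One forward sweep and one back substitution replace the many small explicit steps that the CFL condition would otherwise force.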
International Nuclear Information System (INIS)
Pyragas, V.; Pyragas, K.
2011-01-01
We propose a simple adaptive delayed feedback control algorithm for stabilization of unstable periodic orbits with unknown periods. The state-dependent time delay is varied continuously towards the period of the controlled orbit according to a gradient-descent method realized through three simple ordinary differential equations. We demonstrate the efficiency of the algorithm with the Roessler and Mackey-Glass chaotic systems. The stability of the controlled orbits is proven by computation of the Lyapunov exponents of the linearized equations. -- Highlights: → A simple adaptive modification of the delayed feedback control algorithm is proposed. → It enables the control of unstable periodic orbits with unknown periods. → The delay time is varied continuously according to a gradient-descent method. → The algorithm is embodied by three simple ordinary differential equations. → The validity of the algorithm is proven by computation of the Lyapunov exponents.
An Improved Phase Gradient Autofocus Algorithm Used in Real-time Processing
Directory of Open Access Journals (Sweden)
Qing Ji-ming
2015-10-01
Full Text Available The Phase Gradient Autofocus (PGA) algorithm can remove high-order phase error effectively, which is of great significance for obtaining high-resolution images in real-time processing. However, PGA usually requires iteration, which takes a long time, and its performance is not stable across different scenes. This severely constrains the application of PGA in real-time processing. Isolated scatterer selection and windowing are two important steps of the PGA algorithm. Therefore, this paper presents an isolated scatterer selection method based on the sample mean and a windowing method based on the pulse envelope. These two methods are highly adaptable to the data, giving the algorithm better stability and requiring fewer iterations. The adaptability of the improved PGA is demonstrated with experimental results on real radar data.
Real time implementation of the parametric imaging correlation algorithms
Energy Technology Data Exchange (ETDEWEB)
Bogorodski, Piotr; Wolek, Tomasz; Wasielewski, Jaroslaw; Piatkowski, Adam [Medical and Nuclear Electronics Division, Institute of Radioelectronics, Warsaw University of Technology, 00-665 Warsaw, Nowowiejska 15/19 (Poland)
1999-12-31
A novel method for functional image evaluation from image sets obtained in contrast-aided Ultrafast Computed Tomography and Magnetic Resonance Imaging is presented. The method converts a temporal set of images of the first-pass transit of injected contrast into a single parametric image. The main difference between the proposed procedure and other widely accepted methods is that our method applies correlation and discrimination analysis to each concentration-time curve, instead of fitting the curves to an a priori tracer kinetics model. Stress is put on execution speed (i.e., shortening the time required to obtain a perfusion-relevant image) and on an easy user interface allowing the physician to use the system without technical assistance. Both the execution speed and the user interface should satisfy the requirements of interventional procedures. (authors)
Parallel algorithms for simulating continuous time Markov chains
Nicol, David M.; Heidelberger, Philip
1992-01-01
We have previously shown that the mathematical technique of uniformization can serve as the basis of synchronization for the parallel simulation of continuous-time Markov chains. This paper reviews the basic method and compares five different methods based on uniformization, evaluating their strengths and weaknesses as a function of problem characteristics. The methods vary in their use of optimism, logical aggregation, communication management, and adaptivity. Performance evaluation is conducted on the Intel Touchstone Delta multiprocessor, using up to 256 processors.
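Uniformization itself, the mathematical technique underlying the compared methods, can be sketched for a small chain (plain Python, illustrative only; the parallel synchronization machinery of the paper is not modelled):

```python
import math

def uniformized_transient(Q, p0, t, eps=1e-12):
    """Transient distribution of a CTMC via uniformization:
    build the DTMC P = I + Q/lam with lam >= max exit rate, then
    p(t) = sum_k Poisson(lam*t, k) * (p0 @ P^k). Sketch for small chains."""
    n = len(Q)
    lam = max(-Q[i][i] for i in range(n)) or 1.0   # uniformization rate
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / lam for j in range(n)]
         for i in range(n)]
    pk = list(p0)                                  # p0 @ P^k for k = 0, 1, ...
    out = [0.0] * n
    k, weight = 0, math.exp(-lam * t)              # Poisson weight at k = 0
    while weight > eps or k < lam * t:
        for j in range(n):
            out[j] += weight * pk[j]
        pk = [sum(pk[i] * P[i][j] for i in range(n)) for j in range(n)]
        k += 1
        weight *= lam * t / k
    return out

# Two-state chain: leave either state at rate 1; start in state 0.
p = uniformized_transient([[-1.0, 1.0], [1.0, -1.0]], [1.0, 0.0], t=1.0)
```

Because the embedded DTMC steps occur at the Poisson times of a single global rate, every processor can agree on synchronization points in advance, which is what makes the technique attractive for parallel simulation.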
False-nearest-neighbors algorithm and noise-corrupted time series
International Nuclear Information System (INIS)
Rhodes, C.; Morari, M.
1997-01-01
The false-nearest-neighbors (FNN) algorithm was originally developed to determine the embedding dimension for autonomous time series. For noise-free computer-generated time series, the algorithm does a good job in predicting the embedding dimension. However, the problem of predicting the embedding dimension when the time-series data are corrupted by noise was not fully examined in the original studies of the FNN algorithm. Here it is shown that with large data sets, even small amounts of noise can lead to incorrect prediction of the embedding dimension. Surprisingly, as the length of the time series analyzed by FNN grows larger, the cause of incorrect prediction becomes more pronounced. An analysis of the effect of noise on the FNN algorithm and a solution for dealing with the effects of noise are given here. Some results on the theoretically correct choice of the FNN threshold are also presented. copyright 1997 The American Physical Society
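For reference, the basic FNN test that the paper analyzes can be sketched as follows (a brute-force, noise-free version with an assumed ratio threshold; the paper's noise analysis and corrected threshold choice are not included):

```python
import math

def fnn_fraction(series, dim, tau=1, rtol=10.0):
    """Fraction of false nearest neighbours at embedding dimension `dim`.
    A neighbour is 'false' when adding one more delay coordinate moves it
    away by more than `rtol` times the original distance. Minimal sketch
    of the classic FNN test with no noise handling."""
    n = len(series) - dim * tau
    vecs = [tuple(series[i + j * tau] for j in range(dim)) for i in range(n)]
    false = 0
    for i in range(n):
        # nearest neighbour in dim dimensions (brute force, O(n^2) total)
        j = min((k for k in range(n) if k != i),
                key=lambda k: sum((a - b) ** 2 for a, b in zip(vecs[i], vecs[k])))
        d2 = sum((a - b) ** 2 for a, b in zip(vecs[i], vecs[j]))
        extra = abs(series[i + dim * tau] - series[j + dim * tau])
        if d2 == 0 or extra * extra > (rtol * rtol) * d2:
            false += 1
    return false / n

series = [math.sin(0.5 * i) for i in range(60)]   # toy noise-free signal
f1 = fnn_fraction(series, dim=1)
```

The paper's point is that on long, noisy series the `d2 == 0`-like degenerate cases and noise-inflated `extra` terms distort this fraction, so the threshold `rtol` must be chosen with the noise level in mind.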
Computing Fault-Containment Times of Self-Stabilizing Algorithms Using Lumped Markov Chains
Directory of Open Access Journals (Sweden)
Volker Turau
2018-05-01
Full Text Available The analysis of self-stabilizing algorithms is often limited to the worst case stabilization time starting from an arbitrary state, i.e., a state resulting from a sequence of faults. Considering the fact that these algorithms are intended to provide fault tolerance in the long run, this is not the most relevant metric. A common situation is that a running system is in a legitimate state when hit by a single fault. This event has a much higher probability than multiple concurrent faults. Therefore, the worst case time to recover from a single fault is more relevant than the recovery time from a large number of faults. This paper presents techniques to derive upper bounds for the mean time to recover from a single fault for self-stabilizing algorithms based on Markov chains in combination with lumping. To illustrate the applicability of the techniques they are applied to a new self-stabilizing coloring algorithm.
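The underlying Markov-chain computation, the mean number of steps until the chain re-enters a legitimate state, can be sketched for a toy chain (a generic absorption-time calculation; the paper's lumping construction and upper-bound techniques are not reproduced, and the example chain is made up):

```python
def mean_steps_to_absorption(P, transient):
    """Expected number of steps from each transient state until the chain
    enters the absorbing (legitimate) class, via t = (I - Q)^{-1} 1,
    solved by Gauss-Jordan elimination on the transient block Q."""
    n = len(transient)
    # Augmented system (I - Q | 1) restricted to the transient states.
    A = [[(1.0 if r == c else 0.0) - P[transient[r]][transient[c]]
          for c in range(n)] + [1.0] for r in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]       # partial pivoting
        for r in range(n):
            if r != col and A[r][col] != 0.0:
                f = A[r][col] / A[col][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return [A[r][n] / A[r][r] for r in range(n)]

# States: 0 = legitimate (absorbing), 1 and 2 = single-fault configurations.
P = [[1.0, 0.0, 0.0],
     [0.5, 0.0, 0.5],
     [0.5, 0.5, 0.0]]
t = mean_steps_to_absorption(P, transient=[1, 2])
```

Lumping matters because for real algorithms the state space explodes; collapsing symmetric states first keeps this linear solve tractable.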
A Scalable GVT Estimation Algorithm for PDES: Using Lower Bound of Event-Bulk-Time
Directory of Open Access Journals (Sweden)
Yong Peng
2015-01-01
Full Text Available Global Virtual Time computation of Parallel Discrete Event Simulation is crucial for conducting fossil collection and detecting the termination of simulation. The triggering condition of GVT computation in typical approaches is generally based on the wall-clock time or logical time intervals. However, the GVT value depends on the timestamps of events rather than the wall-clock time or logical time intervals. Therefore, it is difficult for the existing approaches to select appropriate time intervals to compute the GVT value. In this study, we propose a scalable GVT estimation algorithm based on Lower Bound of Event-Bulk-Time, which triggers the computation of the GVT value according to the number of processed events. In order to calculate the number of transient messages, our algorithm employs Event-Bulk to record the messages sent and received by Logical Processes. To eliminate the performance bottleneck, we adopt an overlapping computation approach to distribute the workload of GVT computation to all worker-threads. We compare our algorithm with the fast asynchronous GVT algorithm using PHOLD benchmark on the shared memory machine. Experimental results indicate that our algorithm has a light overhead and shows higher speedup and accuracy of GVT computation than the fast asynchronous GVT algorithm.
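The GVT definition on which the algorithm rests can be stated in a few lines (a definitional sketch only; the Event-Bulk bookkeeping and the overlapping, multi-threaded computation described above are omitted):

```python
def estimate_gvt(local_clocks, transient_msgs):
    """Global Virtual Time is a lower bound on any future event timestamp:
    the minimum over every LP's local virtual time and the timestamp of
    every message still in transit. Definitional sketch -- the paper's
    contribution is *when* and how cheaply to trigger this computation."""
    candidates = list(local_clocks) + [ts for ts, _ in transient_msgs]
    return min(candidates)

# Three LPs; one message with timestamp 12.5 is still in flight,
# so fossil collection may only reclaim events strictly before 12.5.
gvt = estimate_gvt([20.0, 15.0, 31.0], [(12.5, "payload")])
```

The transient-message term is exactly why the paper records sent/received messages in Event-Bulks: a message in flight can carry a timestamp lower than every local clock.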
Rizvi, Syed S.; Shah, Dipali; Riasat, Aasia
The Time Warp algorithm [3] offers a run-time recovery mechanism that deals with causality errors. This recovery mechanism consists of rollback, anti-message, and Global Virtual Time (GVT) techniques. For rollback, there is a need to compute GVT, which is used in discrete-event simulation to reclaim memory, commit output, detect termination, and handle errors. However, the computation of GVT requires dealing with the transient message problem and the simultaneous reporting problem. These problems can be dealt with efficiently by Samadi's algorithm [8], which works fine in the presence of causality errors. However, the performance of both the Time Warp and Samadi's algorithms depends on the latency involved in GVT computation, and both give poor latency for large simulation systems, especially in the presence of causality errors. To improve the latency and reduce processor idle time, we implement tree and butterfly barriers with the optimistic algorithm. Our analysis shows that the use of synchronous barriers such as tree and butterfly with the optimistic algorithm not only minimizes the GVT latency but also minimizes the processor idle time.
Development of real-time plasma analysis and control algorithms for the TCV tokamak using SIMULINK
International Nuclear Information System (INIS)
Felici, F.; Le, H.B.; Paley, J.I.; Duval, B.P.; Coda, S.; Moret, J.-M.; Bortolon, A.; Federspiel, L.; Goodman, T.P.; Hommen, G.; Karpushov, A.; Piras, F.; Pitzschke, A.; Romero, J.; Sevillano, G.; Sauter, O.; Vijvers, W.
2014-01-01
Highlights: • A new digital control system for the TCV tokamak has been commissioned. • The system is entirely programmable by SIMULINK, allowing rapid algorithm development. • Different control system nodes can run different algorithms at varying sampling times. • The previous control system functions have been emulated and improved. • New capabilities include MHD control, profile control, equilibrium reconstruction. - Abstract: One of the key features of the new digital plasma control system installed on the TCV tokamak is the possibility to rapidly design, test and deploy real-time algorithms. With this flexibility the new control system has been used for a large number of new experiments which exploit TCV's powerful actuators consisting of 16 individually controllable poloidal field coils and 7 real-time steerable electron cyclotron (EC) launchers. The system has been used for various applications, ranging from event-based real-time MHD control to real-time current diffusion simulations. These advances have propelled real-time control to one of the cornerstones of the TCV experimental program. Use of the SIMULINK graphical programming language to directly program the control system has greatly facilitated algorithm development and allowed a multitude of different algorithms to be deployed in a short time. This paper will give an overview of the developed algorithms and their application in physics experiments
RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.
Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na
2015-09-03
Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of the continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm: the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, the state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two sets of variables are estimated using a Kalman filter and a particle filter, respectively, which improves the computational efficiency compared to using the particle filter alone. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of clock offset and skew, achieving time synchronization. The time synchronization performance of the algorithm is validated by computer simulations and experimental measurements. The results show that the proposed algorithm has higher time synchronization precision than traditional time synchronization algorithms.
Performance Evaluation of New Joint EDF-RM Scheduling Algorithm for Real Time Distributed System
Directory of Open Access Journals (Sweden)
Rashmi Sharma
2014-01-01
Full Text Available In real-time systems, meeting deadlines is the main target of every scheduling algorithm. Earliest Deadline First (EDF), Rate Monotonic (RM), and Least Laxity First are some renowned algorithms that work well in their own contexts. EDF suffers from the well-known domino effect generated under overload (EDF does not perform well in overload situations), while the performance of RM degrades under underload; the two algorithms are thus complementary. Deadline misses in both cases stem from their utilization-bound strategies. Therefore, in this paper we propose a new scheduling algorithm that overcomes the drawbacks of both existing algorithms. The joint EDF-RM scheduling algorithm is implemented in a global scheduler that permits task migration between processors in the system. To verify the improved behaviour of the proposed algorithm we perform simulations. Results are evaluated in terms of Success Ratio (SR), Average CPU Utilization (ECU), Failure Ratio (FR), and Maximum Tardiness, and are compared with the existing EDF, RM, and D_R_EDF algorithms. The proposed algorithm is shown to perform better in overload as well as underload conditions.
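The joint selection rule can be caricatured in a few lines (a hypothetical reading of the policy: EDF in normal operation, RM under overload; the paper's migration mechanism and its actual overload test are not modelled, and the task set is invented):

```python
def pick_next(tasks, overloaded):
    """Choose the next task to run: EDF (earliest absolute deadline) in
    normal operation, RM (shortest period = highest static priority) when
    the system is overloaded. Selection rule only; per-processor queues
    and task migration are not modelled.
    Each task is a dict with 'name', 'deadline', and 'period'."""
    key = (lambda t: t["period"]) if overloaded else (lambda t: t["deadline"])
    return min(tasks, key=key)["name"]

tasks = [{"name": "A", "deadline": 14.0, "period": 10.0},
         {"name": "B", "deadline": 12.0, "period": 20.0}]
normal_choice = pick_next(tasks, overloaded=False)    # EDF picks B
overload_choice = pick_next(tasks, overloaded=True)   # RM picks A
```

Switching to fixed priorities under overload is what avoids the domino effect: a single late task can no longer drag every other deadline down with it.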
Efficient Geo-Computational Algorithms for Constructing Space-Time Prisms in Road Networks
Directory of Open Access Journals (Sweden)
Hui-Ping Chen
2016-11-01
Full Text Available The space-time prism (STP) is a key concept in time geography for analyzing human activity-travel behavior under various space-time constraints. Most existing time-geographic studies use a straightforward algorithm to construct STPs in road networks by means of two one-to-all shortest path searches. However, this straightforward algorithm can introduce considerable computational overhead, given that the accessible links in an STP are generally a small portion of the whole network. To address this issue, an efficient geo-computational algorithm, called NTP-A*, is proposed. The NTP-A* algorithm employs the A* and branch-and-bound techniques to discard inaccessible links during the two shortest path searches, thereby improving STP construction performance. Comprehensive computational experiments are carried out to demonstrate the computational advantage of the proposed algorithm. Several implementation techniques, including the label-correcting technique and the hybrid link-node labeling technique, are discussed and analyzed. Experimental results show that the proposed NTP-A* algorithm can improve STP construction performance in large-scale road networks by a factor of 100 compared with existing algorithms.
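The A* search at the core of NTP-A* can be sketched generically (standard A* over an adjacency-list network; the prism-specific branch-and-bound pruning is only hinted at in the comments, and the toy graph and zero heuristic are assumptions):

```python
import heapq

def a_star(graph, start, goal, h):
    """A* search over a road network given as {node: [(neighbour, cost)]}.
    The heuristic h(n) must never overestimate the remaining cost; in STP
    construction one would additionally discard links whose best-case
    total travel time exceeds the prism's time budget (branch-and-bound)."""
    frontier = [(h(start), 0.0, start)]
    best = {start: 0.0}
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        if g > best.get(node, float("inf")):
            continue                          # stale queue entry
        for nbr, cost in graph.get(node, []):
            ng = g + cost
            if ng < best.get(nbr, float("inf")):
                best[nbr] = ng
                heapq.heappush(frontier, (ng + h(nbr), ng, nbr))
    return float("inf")

graph = {"a": [("b", 1.0), ("c", 4.0)], "b": [("c", 2.0)], "c": []}
dist = a_star(graph, "a", "c", h=lambda n: 0.0)   # zero heuristic = Dijkstra
```

With a good lower-bound heuristic and the time-budget cut, the search touches only the small accessible fraction of the network, which is where the reported speed-up comes from.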
A polynomial time algorithm for solving the maximum flow problem in directed networks
International Nuclear Information System (INIS)
Tlas, M.
2015-01-01
An efficient polynomial time algorithm for solving maximum flow problems is proposed in this paper. The algorithm is based on the binary representation of capacities; it solves the maximum flow problem as a sequence of O(m) shortest path problems on residual networks with n nodes and m arcs. It runs in O(m^2 r) time, where r is the smallest integer greater than or equal to log_2 B, and B is the largest arc capacity of the network. A numerical example is illustrated using the proposed algorithm. (author)
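For contrast, the classic way to realize "max flow as a sequence of shortest path problems" is shortest augmenting paths (Edmonds-Karp), sketched below; note this is a generic illustration, not the paper's capacity-scaling algorithm, and the example network is made up:

```python
from collections import deque

def max_flow(cap, s, t):
    """Max flow by repeatedly augmenting along a shortest (fewest-arc)
    path in the residual network (Edmonds-Karp). The paper's algorithm
    instead scales capacities via their binary representation."""
    res = {u: dict(vs) for u, vs in cap.items()}        # residual capacities
    for u, vs in cap.items():
        for v in vs:
            res.setdefault(v, {}).setdefault(u, 0)      # reverse arcs
    flow = 0
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:                    # BFS = shortest path
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t                                 # recover s -> t path
        while parent[v] is not None:
            path.append((parent[v], v)); v = parent[v]
        push = min(res[u][v] for u, v in path)          # bottleneck capacity
        for u, v in path:
            res[u][v] -= push
            res[v][u] += push
        flow += push

f = max_flow({"s": {"a": 3, "b": 2}, "a": {"t": 2}, "b": {"t": 3}, "t": {}},
             "s", "t")
```

Each augmentation is one shortest path computation on the residual network, exactly the building block the abstract's O(m) shortest-path bound counts.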
Real time algorithm temperature compensation in tunable laser / VCSEL based WDM-PON system
DEFF Research Database (Denmark)
Iglesias Olmedo, Miguel; Rodes Lopez, Roberto; Pham, Tien Thang
2012-01-01
We report on a real time experimental validation of a centralized algorithm for temperature compensation of tunable laser/VCSEL at ONU and OLT, respectively. Locking to a chosen WDM channel is shown for temperature changes over 40°C.
CAT-PUMA: CME Arrival Time Prediction Using Machine learning Algorithms
Liu, Jiajia; Ye, Yudong; Shen, Chenglong; Wang, Yuming; Erdélyi, Robert
2018-04-01
CAT-PUMA (CME Arrival Time Prediction Using Machine learning Algorithms) quickly and accurately predicts the arrival time of Coronal Mass Ejections (CMEs). The software was trained via detailed analysis of CME features and solar wind parameters for 182 previously observed geo-effective partial-/full-halo CMEs, and it uses a Support Vector Machine (SVM) to make its predictions, which can be obtained within minutes of providing the necessary input parameters of a CME.
A Gaussian Process Based Online Change Detection Algorithm for Monitoring Periodic Time Series
Energy Technology Data Exchange (ETDEWEB)
Chandola, Varun [ORNL; Vatsavai, Raju [ORNL
2011-01-01
Online time series change detection is a critical component of many monitoring systems, such as space and airborne remote sensing instruments, cardiac monitors, and network traffic profilers, which continuously analyze observations recorded by sensors. Data collected by such sensors typically has a periodic (seasonal) component. Most existing time series change detection methods are not directly applicable to such data, either because they are not designed to handle periodic time series or because they cannot operate in an online mode. We propose an online change detection algorithm which can handle periodic time series. The algorithm uses a Gaussian process based non-parametric time series prediction model and monitors the difference between the predictions and actual observations within a statistically principled control chart framework to identify changes. A key challenge in using a Gaussian process in an online mode is the need to solve a large system of equations involving the associated covariance matrix, which grows with every time step. The proposed algorithm exploits the special structure of the covariance matrix and can analyze a time series of length T in O(T^2) time while maintaining an O(T) memory footprint, compared to the O(T^4) time and O(T^2) memory requirement of standard matrix manipulation methods. We experimentally demonstrate the superiority of the proposed algorithm over several existing time series change detection algorithms on a set of synthetic and real time series. Finally, we illustrate the effectiveness of the proposed algorithm for identifying land use land cover changes using Normalized Difference Vegetation Index (NDVI) data collected for an agricultural region in Iowa state, USA. Our algorithm is able to detect different types of changes in an NDVI validation data set (with ~80% accuracy) which occur due to crop type changes as well as disruptive changes (e.g., natural disasters).
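The control chart side of the method can be sketched independently of the Gaussian process (any predictive model can supply the predictions; the residual band width k = 3 and the warm-up length below are assumptions, not the paper's settings):

```python
import math

def control_chart_alarms(observations, predictions, k=3.0, warmup=8):
    """Flag a change whenever the residual (observation - prediction)
    leaves the mean +/- k*sigma band of the residuals seen so far.
    Stand-in for the paper's GP predictor + control chart combination."""
    alarms, resid = [], []
    for i, (y, yhat) in enumerate(zip(observations, predictions)):
        r = y - yhat
        if len(resid) >= warmup:
            mu = sum(resid) / len(resid)
            sd = math.sqrt(sum((x - mu) ** 2 for x in resid) / len(resid))
            if sd > 0 and abs(r - mu) > k * sd:
                alarms.append(i)            # residual outside control limits
        resid.append(r)
    return alarms

# Perfect predictions except for a level shift at t = 20.
obs = [0.1 * ((-1) ** i) for i in range(20)] + [5.0] * 5
pred = [0.0] * 25
alarms = control_chart_alarms(obs, pred)
```

The paper's contribution is making the predictor itself cheap online: exploiting the periodic covariance structure so each new prediction costs O(T) rather than a fresh O(T^3) solve.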
Pyramid Algorithm Framework for Real-Time Image Effects
DEFF Research Database (Denmark)
Sangüesa, Adriá Arbués; Ene, Andreea-Daniela; Jørgensen, Nicolai Krogh
2016-01-01
Pyramid methods are useful for certain image processing techniques due to their linear time complexity. Implementing them using compute shaders provides a basis for rendering image effects with reduced impact on performance compared to conventional methods. Although pyramid methods are used in the game industry, they are not easily accessible to all developers because many game engines do not include built-in support. We present a framework for a popular game engine that allows users to take advantage of pyramid methods for developing image effects. In order to evaluate the performance and to demonstrate the framework, a few image effects were implemented. These effects were compared to built-in effects of the same game engine. The results showed that the built-in image effects performed slightly better. The performance of our framework could potentially be improved through optimisation, mainly...
International Nuclear Information System (INIS)
Grote, D.P.; Friedman, A.; Haber, I.
1993-01-01
The multi-dimensional particle simulation code WARP is used to study the transport and acceleration of space-charge dominated ion beams in present-day and near-term experiments, and in fusion drivers. The algorithms employed in the 3d package and a number of applications have recently been described. In this paper the authors review the general features and major applications of the code. They then present recent developments in both code capabilities and applications. Most notable is modeling of the planned ESQ injector for ILSE, which uses the code's newest features, including subgrid-scale placement of internal conductor boundaries
PRESEE: an MDL/MML algorithm to time-series stream segmenting.
Xu, Kaikuo; Jiang, Yexi; Tang, Mingjie; Yuan, Changan; Tang, Changjie
2013-01-01
Time-series stream is one of the most common data types in the data mining field. It is prevalent in fields such as stock markets, ecology, and medical care. Segmentation is a key step to accelerate the processing speed of time-series stream mining. Previous segmentation algorithms mainly focused on improving precision rather than efficiency. Moreover, the performance of these algorithms depends heavily on parameters, which are hard for users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmenting. PRESEE is based on both the MDL (minimum description length) and MML (minimum message length) methods, which segment the data automatically. To evaluate the performance of PRESEE, we conduct several experiments on time-series streams of different types and compare it with the state-of-the-art algorithm. The empirical results show that PRESEE is very efficient for real-time stream datasets, improving segmenting speed by nearly ten times. The novelty of this algorithm is further demonstrated by the application of PRESEE to segmenting real-time stream datasets from the ChinaFLUX sensor network.
Mechanical properties of 3D printed warped membranes
Kosmrlj, Andrej; Xiao, Kechao; Weaver, James C.; Vlassak, Joost J.; Nelson, David R.
2015-03-01
We explore how a frozen background metric affects the mechanical properties of solid planar membranes. Our focus is a special class of ``warped membranes'' with a preferred random height profile characterized by random Gaussian variables h(q) in Fourier space with zero mean and variance proportional to q^{-m}. It has been shown theoretically that in the linear response regime, this quenched random disorder increases the effective bending rigidity, while the Young's and shear moduli are reduced. Compared to flat plates of the same thickness t, the bending rigidity of warped membranes is increased by a factor h_v/t, while the in-plane elastic moduli are reduced by t/h_v, where h_v = √⟨h²⟩ describes the frozen height fluctuations. Interestingly, h_v is system-size dependent for warped membranes characterized by m > 2. We present experimental tests of these predictions, using warped membranes prepared via high-resolution 3D printing.
Mode Identification of Guided Ultrasonic Wave using Time- Frequency Algorithm
International Nuclear Information System (INIS)
Yoon, Byung Sik; Yang, Seung Han; Cho, Yong Sang; Kim, Yong Sik; Lee, Hee Jong
2007-01-01
Ultrasonic guided waves are waves whose propagation characteristics depend on structural thickness and shape, such as those in plates, tubes, rods, and embedded layers. If the angle of incidence or the frequency of sound is adjusted properly, the reflected and refracted energy within the structure will constructively interfere, thereby launching the guided wave. Because these waves penetrate the entire thickness of the tube and propagate parallel to the surface, a large portion of the material can be examined from a single transducer location. Guided ultrasonic waves have these merits, but various modes propagate through the thickness simultaneously, so it is not known which mode is received; most applications are limited by mode selection and mode identification. Mode identification is therefore a very important process for guided ultrasonic inspection. In this study, various time-frequency analysis methodologies are developed and compared as mode identification tools for guided ultrasonic signals. For this study, a high-power tone-burst ultrasonic system was set up for the generation and reception of guided waves, and artificial notches were fabricated on an aluminum plate for the mode identification experiment.
Night-Time Vehicle Detection Algorithm Based on Visual Saliency and Deep Learning
Directory of Open Access Journals (Sweden)
Yingfeng Cai
2016-01-01
Full Text Available Night vision systems are receiving more and more attention in the field of automotive active safety. In this area, a number of researchers have proposed far-infrared sensor based night-time vehicle detection algorithms. However, existing algorithms perform poorly on indicators such as detection rate and processing time. To solve this problem, we propose a far-infrared image vehicle detection algorithm based on visual saliency and deep learning. First, most non-vehicle pixels are removed with visual saliency computation. Then, vehicle candidates are generated using prior information such as camera parameters and vehicle size. Finally, a classifier trained with deep belief networks is applied to verify the candidates generated in the last step. The proposed algorithm is tested on around 6000 images and achieves a detection rate of 92.3% and a processing speed of 25 Hz, which is better than existing methods.
Directory of Open Access Journals (Sweden)
Weizhe Zhang
2014-01-01
Full Text Available Energy consumption in computer systems has become a more and more important issue. High energy consumption has already damaged the environment to some extent, especially in heterogeneous multiprocessors. In this paper, we first formulate and describe the energy-aware real-time task scheduling problem in heterogeneous multiprocessors. Then we propose a particle swarm optimization (PSO based algorithm, which can successfully reduce the energy cost and the time for searching feasible solutions. Experimental results show that the PSO-based energy-aware metaheuristic uses 40%–50% less energy than the GA-based and SFLA-based algorithms and spends 10% less time than the SFLA-based algorithm in finding the solutions. Besides, it can also find 19% more feasible solutions than the SFLA-based algorithm.
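The PSO-based task-to-processor assignment described above can be sketched as follows. The energy matrix, swarm size, and inertia/acceleration coefficients are illustrative assumptions, not values from the paper; a discrete assignment is obtained by rounding a continuous particle position.

```python
import random

# Hypothetical per-task energy cost on each heterogeneous processor (made-up data).
ENERGY = [[4.0, 2.5, 3.1], [1.2, 2.0, 0.9], [3.3, 1.1, 2.2], [2.0, 2.8, 1.5]]
N_TASKS, N_PROCS = len(ENERGY), len(ENERGY[0])

def energy(assignment):
    """Total energy of assigning task t to processor assignment[t]."""
    return sum(ENERGY[t][p] for t, p in enumerate(assignment))

def pso(n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    # Continuous positions in [0, N_PROCS); truncation yields a discrete assignment.
    pos = [[rng.uniform(0, N_PROCS) for _ in range(N_TASKS)] for _ in range(n_particles)]
    vel = [[0.0] * N_TASKS for _ in range(n_particles)]
    decode = lambda x: [min(int(v), N_PROCS - 1) for v in x]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda x: energy(decode(x)))[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(N_TASKS):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d] + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], 0.0), N_PROCS - 1e-9)
            if energy(decode(pos[i])) < energy(decode(pbest[i])):
                pbest[i] = pos[i][:]
                if energy(decode(pbest[i])) < energy(decode(gbest)):
                    gbest = pbest[i][:]
    return decode(gbest), energy(decode(gbest))
```

The real problem also carries deadline constraints; a penalty term on the fitness would be the usual way to fold those in.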
Directory of Open Access Journals (Sweden)
Farahmand-Mehr Mohammad
2014-01-01
Full Text Available In this paper, a hybrid flow shop scheduling problem with a new approach considering time lags and sequence-dependent setup times in realistic situations is presented. Since few works have been implemented in this field, the necessity of finding better solutions is a motivation to extend heuristic or meta-heuristic algorithms. This type of production system is found in industries such as food processing, chemical, textile, metallurgical, printed circuit board, and automobile manufacturing. A mixed integer linear programming (MILP) model is proposed to minimize the makespan. Since this problem is known to be NP-hard, a meta-heuristic algorithm, namely a Genetic Algorithm (GA), and three heuristic algorithms (Johnson, SPTCH and Palmer) are proposed. Numerical experiments of different sizes are implemented to evaluate the performance of the presented mathematical programming model and the designed GA in comparison to the heuristic algorithms and a benchmark algorithm. Computational results indicate that the designed GA can produce near-optimal solutions in a short computational time for problems of different sizes.
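Of the three heuristics named, Johnson's rule is the classic makespan-optimal ordering for the plain two-machine flow shop (without time lags or setups). A minimal sketch with made-up processing times:

```python
def johnson(jobs):
    """Johnson's rule for the two-machine flow shop.
    jobs: list of (p1, p2) processing times; returns a makespan-optimal job order."""
    # Jobs faster on machine 1 go first, ascending by p1;
    # jobs faster on machine 2 go last, descending by p2.
    first = sorted((j for j in range(len(jobs)) if jobs[j][0] <= jobs[j][1]),
                   key=lambda j: jobs[j][0])
    last = sorted((j for j in range(len(jobs)) if jobs[j][0] > jobs[j][1]),
                  key=lambda j: jobs[j][1], reverse=True)
    return first + last

def makespan(jobs, order):
    """Completion time of the last job on machine 2 for the given order."""
    t1 = t2 = 0
    for j in order:
        t1 += jobs[j][0]              # machine 1 works without idle time
        t2 = max(t2, t1) + jobs[j][1]  # machine 2 waits for machine 1 if needed
    return t2
```

For the hybrid problem with time lags and sequence-dependent setups this rule is only a constructive seed; the GA described in the abstract would refine such an order.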
International Nuclear Information System (INIS)
Lee, Jung Uk; Sun, Ju Young; Won, Mooncheol
2013-01-01
In this paper, we propose a real-time algorithm for estimating the relative position of a person with respect to a robot (camera) using a monocular camera. The algorithm detects the head and shoulder regions of a person using HOG (Histogram of Oriented Gradient) feature vectors and an SVM (Support Vector Machine) classifier. The size and location of the detected area are used for calculating the relative distance and angle between the person and the camera on a robot. To increase the speed of the algorithm, we use a GPU and NVIDIA's CUDA library; the resulting algorithm speed is ∼ 15 Hz. The accuracy of the algorithm is compared with the output of a SICK laser scanner.
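The distance-and-angle computation from the size and location of the detected head-shoulder region can be sketched with a pinhole camera model. The focal length, principal point, and assumed real-world region width below are hypothetical, not the paper's calibration values:

```python
import math

# Hypothetical camera calibration; the paper's actual parameters are not given.
FOCAL_PX = 800.0         # focal length in pixels
HEAD_SHOULDER_M = 0.45   # assumed real-world width of the head-shoulder region (m)
IMAGE_CX = 320.0         # principal point x (image center), pixels

def relative_position(bbox_x, bbox_w):
    """Estimate distance and bearing of a detected person from a monocular bbox.
    bbox_x: left edge of the detection (px); bbox_w: its width (px)."""
    # Similar triangles: pixel width / focal length = real width / distance.
    distance = FOCAL_PX * HEAD_SHOULDER_M / bbox_w
    center_x = bbox_x + bbox_w / 2.0
    # Bearing of the detection center relative to the optical axis, in radians.
    angle = math.atan((center_x - IMAGE_CX) / FOCAL_PX)
    return distance, angle
```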
Yan, Zheping; Li, Jiyun; Zhang, Gengshi; Wu, Yi
2018-02-02
A novel real-time reaction obstacle avoidance algorithm (RRA) is proposed for autonomous underwater vehicles (AUVs) that must adapt to unknown complex terrains, based on forward looking sonar (FLS). To accomplish this algorithm, obstacle avoidance rules are planned, and the RRA process is split into five steps so that AUVs can rapidly respond to various environmental obstacles. The largest polar angle algorithm (LPAA) is designed to convert a detected obstacle's irregular outline into a convex polygon, which simplifies the obstacle avoidance process. A solution based on an outline memory algorithm is designed to solve the trapping problem that arises in U-shaped obstacle avoidance. Finally, simulations in three unknown obstacle scenes are carried out to demonstrate the performance of this algorithm; the obtained obstacle avoidance trajectories are safe, smooth and near-optimal.
Chang, Chein-I
2017-01-01
This book explores recursive architectures in designing progressive hyperspectral imaging algorithms. In particular, it makes progressive imaging algorithms recursive by introducing the concept of Kalman filtering in algorithm design, so that hyperspectral imagery can be processed not only progressively, sample by sample or band by band, but also recursively via recursive equations. This book can be considered a companion to the author's book Real-Time Progressive Hyperspectral Image Processing, published by Springer in 2016. It explores recursive structures in algorithm architecture; implements algorithmic recursive architecture in conjunction with progressive sample and band processing; derives Recursive Hyperspectral Sample Processing (RHSP) techniques according to the Band-Interleaved Sample/Pixel (BIS/BIP) acquisition format; and develops Recursive Hyperspectral Band Processing (RHBP) techniques according to the Band SeQuential (BSQ) acquisition format for hyperspectral data.
Energy Technology Data Exchange (ETDEWEB)
Lee, Jung Uk [Samsung Electroics, Suwon (Korea, Republic of); Sun, Ju Young; Won, Mooncheol [Chungnam Nat' l Univ., Daejeon (Korea, Republic of)
2013-12-15
In this paper, we propose a real-time algorithm for estimating the relative position of a person with respect to a robot (camera) using a monocular camera. The algorithm detects the head and shoulder regions of a person using HOG (Histogram of Oriented Gradient) feature vectors and an SVM (Support Vector Machine) classifier. The size and location of the detected area are used for calculating the relative distance and angle between the person and the camera on a robot. To increase the speed of the algorithm, we use a GPU and NVIDIA's CUDA library; the resulting algorithm speed is ∼ 15 Hz. The accuracy of the algorithm is compared with the output of a SICK laser scanner.
Exponential-Time Algorithms and Complexity of NP-Hard Graph Problems
DEFF Research Database (Denmark)
Taslaman, Nina Sofia
NP-hard problems are deemed highly unlikely to be solvable in polynomial time. Still, one can often find algorithms that are substantially faster than brute force solutions. This thesis concerns such algorithms for problems from graph theory: techniques for constructing and improving this type of algorithms, as well as investigations into how far such improvements can get under reasonable assumptions. The first part is concerned with detection of cycles in graphs, especially parameterized generalizations of Hamiltonian cycles. A remarkably simple Monte Carlo algorithm is presented, and with high probability any found solution is shortest possible. Moreover, the algorithm can be used to find a cycle of given parity through the specified elements. The second part concerns the hardness of problems encoded as evaluations of the Tutte polynomial at some fixed point in the rational plane...
Yiannakou, Marinos; Trimikliniotis, Michael; Yiallouras, Christos; Damianou, Christakis
2016-02-01
Due to heating in the pre-focal field, the delay between successive movements in high intensity focused ultrasound (HIFU) is sometimes as long as 60 s, resulting in treatment times on the order of 2-3 h. Because there is generally a requirement to reduce treatment time, we were motivated to explore alternative transducer motion algorithms in order to reduce pre-focal heating and treatment time. A 1 MHz single-element transducer with 4 cm diameter and 10 cm focal length was used. A simulation model was developed that estimates the temperature, thermal dose and lesion development in the pre-focal field. The simulated temperature history combined with the motion algorithms produced thermal maps in the pre-focal region. A polyacrylamide gel phantom was used to evaluate the induced pre-focal heating for each motion algorithm and to assess the accuracy of the simulation model. Three out of the six algorithms, having successive steps close to each other, exhibited severe heating in the pre-focal field. Minimal heating was produced with the algorithms having successive steps apart from each other (square, square spiral and random). The last three algorithms were improved further (with a small cost in time), thus eliminating the pre-focal heating completely and reducing the treatment time substantially compared to traditional algorithms. Out of the six algorithms, three were successful in eliminating the pre-focal heating completely. Because these three algorithms required no delay between successive movements (except in the last part of the motion), the treatment time was reduced by 93%. Therefore, it will be possible in the future to achieve treatment times of focused ultrasound therapies shorter than 30 min. The rate of ablated volume achieved with one of the proposed algorithms was 71 cm(3)/h. The intention of this pilot study was to demonstrate that the navigation algorithms play the most important role in reducing pre-focal heating. By evaluating in
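The central point above, that keeping successive sonication points far apart reduces pre-focal heating, can be illustrated by comparing the minimum successive-step distance of a raster scan against an interleaved order on a grid. This is a purely illustrative sketch, not the authors' actual motion algorithms:

```python
import math

def raster(n):
    """Boustrophedon raster scan over an n x n grid: successive steps are adjacent."""
    return [(r, c if r % 2 == 0 else n - 1 - c) for r in range(n) for c in range(n)]

def spaced(n):
    """Interleaved order (every other cell, then the rest); for even n this keeps
    successive sonication points at least 2 cells apart."""
    cells = [(r, c) for r in range(n) for c in range(n)]
    return cells[0::2] + cells[1::2]

def min_step_gap(order):
    """Smallest Euclidean distance between consecutive sonication points."""
    return min(math.dist(a, b) for a, b in zip(order, order[1:]))
```

A larger minimum step gap gives each pre-focal region more time to cool before an overlapping beam path returns, which is why the square, square-spiral and random patterns outperform raster-like scans.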
Two-step flash light sintering of copper nanoparticle ink to remove substrate warping
Energy Technology Data Exchange (ETDEWEB)
Ryu, Chung-Hyeon; Joo, Sung-Jun [Department of Mechanical Convergence Engineering, Hanyang University, Haengdang-dong, Seongdong-gu, Seoul 133-791 (Korea, Republic of); Kim, Hak-Sung, E-mail: kima@hanyang.ac.kr [Department of Mechanical Convergence Engineering, Hanyang University, Haengdang-dong, Seongdong-gu, Seoul 133-791 (Korea, Republic of); Institute of Nano Science and Technology, Hanyang University, Seoul, 133-791 (Korea, Republic of)
2016-10-30
Highlights: • We performed two-step flash light sintering of copper nanoparticle ink to remove substrate warping. • 12 J/cm² of preheating and 7 J/cm² of main sintering energies were determined as the optimum conditions to sinter the copper nanoparticle ink. • The resistivity of the two-step sintered copper nanoparticle ink was 3.81 μΩ cm, 2.3 times that of bulk copper, with a 5B adhesion level. • The two-step sintered case showed high conductivity without any substrate warping. - Abstract: A two-step flash light sintering process was devised to reduce the warping of polymer substrates during the sintering of copper nanoparticle ink. To determine the optimum sintering conditions of the copper nanoparticle ink, the flash light irradiation conditions (pulse power, pulse number, on-time, and off-time) were varied and optimized. In order to monitor the flash light sintering process, in situ resistance and temperature monitoring of the copper nanoink was conducted during the flash light sintering process. Also, a transient heat transfer analysis was performed using the finite-element program ABAQUS to predict the temperature changes of the copper nanoink and polymer substrate. The microstructures of the sintered copper nanoink films were analyzed by scanning electron microscopy. Additionally, X-ray diffraction and Fourier transform infrared spectroscopy were used to characterize the crystal phase change of the sintered copper nanoparticles. The resulting two-step flash light sintered copper nanoink films exhibited a low resistivity (3.81 μΩ cm, 2.3 times that of bulk copper) and a 5B level of adhesion strength without warping of the polymer substrate.
Directory of Open Access Journals (Sweden)
Qi Hu
2013-04-01
Full Text Available State-of-the-art heuristic algorithms to solve the vehicle routing problem with time windows (VRPTW) usually present slow speeds during the early iterations and easily fall into local optimal solutions. Focusing on these problems, this paper analyzes the particle encoding and decoding strategy of the particle swarm optimization algorithm, the construction of the vehicle route, and the judgment of the local optimal solution. Based on these, a hybrid chaos-particle swarm optimization algorithm (HPSO) is proposed to solve the VRPTW. The chaos algorithm is employed to re-initialize the particle swarm. An efficient insertion heuristic algorithm is also proposed to build the valid vehicle routes in the particle decoding process. A particle swarm premature convergence judgment mechanism is formulated and combined with the chaos algorithm and Gaussian mutation into HPSO when the particle swarm falls into local convergence. Extensive experiments are carried out to test the parameter settings in the insertion heuristic algorithm and to verify that they correspond to the real distribution of the data in the concrete problem. It is also shown that HPSO achieves a better performance than the other state-of-the-art algorithms on solving the VRPTW.
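Any insertion heuristic for the VRPTW must respect time windows when building routes. A minimal sketch of the feasibility check and a cheapest-insertion step, with made-up travel times, windows, and service times (not the paper's HPSO encoding):

```python
def route_feasible(route, travel, ready, due, service, depot=0):
    """Simulate a route starting at the depot. Waiting before a window opens is
    allowed; arriving after the due time makes the route infeasible."""
    t, prev = 0.0, depot
    for c in route:
        t += travel[prev][c]
        t = max(t, ready[c])   # wait for the time window to open
        if t > due[c]:
            return False
        t += service[c]
        prev = c
    return True

def best_insertion(route, customer, travel, ready, due, service):
    """Try every position; return the cheapest feasible (position, added distance),
    or None if no feasible insertion exists. Position 0 means right after the depot."""
    best = None
    for i in range(len(route) + 1):
        cand = route[:i] + [customer] + route[i:]
        if route_feasible(cand, travel, ready, due, service):
            a = route[i - 1] if i > 0 else 0
            b = route[i] if i < len(route) else 0
            delta = travel[a][customer] + travel[customer][b] - travel[a][b]
            if best is None or delta < best[1]:
                best = (i, delta)
    return best
```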
Comparison of Co-Temporal Modeling Algorithms on Sparse Experimental Time Series Data Sets.
Allen, Edward E; Norris, James L; John, David J; Thomas, Stan J; Turkett, William H; Fetrow, Jacquelyn S
2010-01-01
Multiple approaches for reverse-engineering biological networks from time-series data have been proposed in the computational biology literature. These approaches can be classified by their underlying mathematical algorithms, such as Bayesian or algebraic techniques, as well as by their time paradigm, which includes next-state and co-temporal modeling. The types of biological relationships, such as parent-child or siblings, discovered by these algorithms are quite varied. It is important to understand the strengths and weaknesses of the various algorithms and time paradigms on actual experimental data. We assess how well the co-temporal implementations of three algorithms, continuous Bayesian, discrete Bayesian, and computational algebraic, can 1) identify two types of entity relationships, parent and sibling, between biological entities, 2) deal with experimental sparse time course data, and 3) handle experimental noise seen in replicate data sets. These algorithms are evaluated, using the shuffle index metric, for how well the resulting models match literature models in terms of siblings and parent relationships. Results indicate that all three co-temporal algorithms perform well, at a statistically significant level, at finding sibling relationships, but perform relatively poorly in finding parent relationships.
Fischer, Christoph; Domer, Benno; Wibmer, Thomas; Penzel, Thomas
2017-03-01
Photoplethysmography has been used in a wide range of medical devices for measuring oxygen saturation, cardiac output, assessing autonomic function, and detecting peripheral vascular disease. Artifacts can render the photoplethysmogram (PPG) useless. Thus, algorithms capable of identifying artifacts are critically important. However, the published PPG algorithms are limited in algorithm and study design. Therefore, the authors developed a novel embedded algorithm for real-time pulse waveform (PWF) segmentation and artifact detection based on a contour analysis in the time domain. This paper provides an overview about PWF and artifact classifications, presents the developed PWF analysis, and demonstrates the implementation on a 32-bit ARM core microcontroller. The PWF analysis was validated with data records from 63 subjects acquired in a sleep laboratory, ergometry laboratory, and intensive care unit in equal parts. The output of the algorithm was compared with harmonized experts' annotations of the PPG with a total duration of 31.5 h. The algorithm achieved a beat-to-beat comparison sensitivity of 99.6%, specificity of 90.5%, precision of 98.5%, and accuracy of 98.3%. The interrater agreement expressed as Cohen's kappa coefficient was 0.927 and as F-measure was 0.990. In conclusion, the PWF analysis seems to be a suitable method for PPG signal quality determination, real-time annotation, data compression, and calculation of additional pulse wave metrics such as amplitude, duration, and rise time.
A Linear Time Algorithm for the k Maximal Sums Problem
DEFF Research Database (Denmark)
Brodal, Gerth Stølting; Jørgensen, Allan Grønlund
2007-01-01
Finding the sub-vector with the largest sum in a sequence of n numbers is known as the maximum sum problem. Finding the k sub-vectors with the largest sums is a natural extension of this, and is known as the k maximal sums problem. In this paper we design an optimal O(n + k) time algorithm for the k maximal sums problem. We use this algorithm to obtain algorithms solving the two-dimensional k maximal sums problem in O(m^2·n + k) time, where the input is an m × n matrix with m ≤ n. We generalize this algorithm to solve the d-dimensional problem in O(n^(2d−1) + k) time. The space usage of all the algorithms can be reduced to O(n^(d−1) + k). This leads to the first algorithm for the k maximal sums problem in one dimension using O(n + k) time and O(k) space.
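For reference, a straightforward (non-optimal) O(n² log k) implementation of the k maximal sums problem using prefix sums and a min-heap; this is not the paper's O(n + k) algorithm, only a baseline that makes the problem statement concrete:

```python
import heapq

def k_maximal_sums(a, k):
    """Return the k largest sub-vector (contiguous) sums of a, in descending order.
    Enumerates all O(n^2) sub-vectors via prefix sums, keeping the k best in a heap."""
    prefix = [0]
    for x in a:
        prefix.append(prefix[-1] + x)
    heap = []  # min-heap holding the k largest sums seen so far
    n = len(a)
    for i in range(n):
        for j in range(i + 1, n + 1):
            s = prefix[j] - prefix[i]   # sum of a[i:j]
            if len(heap) < k:
                heapq.heappush(heap, s)
            elif s > heap[0]:
                heapq.heapreplace(heap, s)
    return sorted(heap, reverse=True)
```

The paper's algorithm avoids the quadratic enumeration entirely; this sketch is only meant to pin down the input/output behavior.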
From Massively Parallel Algorithms and Fluctuating Time Horizons to Nonequilibrium Surface Growth
International Nuclear Information System (INIS)
Korniss, G.; Toroczkai, Z.; Novotny, M. A.; Rikvold, P. A.
2000-01-01
We study the asymptotic scaling properties of a massively parallel algorithm for discrete-event simulations where the discrete events are Poisson arrivals. The evolution of the simulated time horizon is analogous to a nonequilibrium surface. Monte Carlo simulations and a coarse-grained approximation indicate that the macroscopic landscape in the steady state is governed by the Edwards-Wilkinson Hamiltonian. Since the efficiency of the algorithm corresponds to the density of local minima in the associated surface, our results imply that the algorithm is asymptotically scalable. (c) 2000 The American Physical Society
Hitting times of local and global optima in genetic algorithms with very high selection pressure
Directory of Open Access Journals (Sweden)
Eremeev Anton V.
2017-01-01
Full Text Available The paper is devoted to upper bounds on the expected first hitting times of the sets of local or global optima for non-elitist genetic algorithms with very high selection pressure. The results of this paper extend the range of situations where the upper bounds on the expected runtime are known for genetic algorithms and apply, in particular, to the Canonical Genetic Algorithm. The obtained bounds do not require the probability of fitness-decreasing mutation to be bounded by a constant which is less than one.
TaDb: A time-aware diffusion-based recommender algorithm
Li, Wen-Jun; Xu, Yuan-Yuan; Dong, Qiang; Zhou, Jun-Lin; Fu, Yan
2015-02-01
Traditional recommender algorithms usually employ the early and recent records indiscriminately, which overlooks the change of user interests over time. In this paper, we show that the interests of a user remain stable in a short-term interval and drift during a long-term period. Based on this observation, we propose a time-aware diffusion-based (TaDb) recommender algorithm, which assigns different temporal weights to the leading links existing before the target user's collection and the following links appearing after that in the diffusion process. Experiments on four real datasets, Netflix, MovieLens, FriendFeed and Delicious show that TaDb algorithm significantly improves the prediction accuracy compared with the algorithms not considering temporal effects.
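The idea of down-weighting older links can be sketched with a temporally weighted mass-diffusion (ProbS-style) step on the user-item graph. The exponential decay form and its parameter below are illustrative assumptions, not TaDb's exact weighting scheme:

```python
import math

def diffuse(ratings, times, target, t_now, lam=0.1):
    """Temporally weighted mass diffusion sketch.
    ratings: {user: set(items)}; times: {(user, item): timestamp}.
    Older links get weight exp(-lam * age), mimicking interest drift."""
    w = lambda u, i: math.exp(-lam * (t_now - times[(u, i)]))
    # Step 1: items collected by the target spread resource to their holders.
    user_res = {}
    for i in ratings[target]:
        holders = [u for u in ratings if i in ratings[u]]
        for u in holders:
            user_res[u] = user_res.get(u, 0.0) + w(u, i) / len(holders)
    # Step 2: users spread the received resource back onto their items.
    scores = {}
    for u, r in user_res.items():
        for i in ratings[u]:
            scores[i] = scores.get(i, 0.0) + r * w(u, i) / len(ratings[u])
    # Recommend uncollected items, highest score first.
    return sorted((i for i in scores if i not in ratings[target]),
                  key=lambda i: -scores[i])
```

TaDb additionally distinguishes links created before the target user's collection from those created after it; this sketch applies a single decay to all links for brevity.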
Hard Real-Time Task Scheduling in Cloud Computing Using an Adaptive Genetic Algorithm
Directory of Open Access Journals (Sweden)
Amjad Mahmood
2017-04-01
Full Text Available In the Infrastructure-as-a-Service cloud computing model, virtualized computing resources in the form of virtual machines are provided over the Internet. A user can rent an arbitrary number of computing resources to meet their requirements, making cloud computing an attractive choice for executing real-time tasks. Economical task allocation and scheduling on a set of leased virtual machines is an important problem in the cloud computing environment. This paper proposes a greedy and a genetic algorithm with an adaptive selection of suitable crossover and mutation operations (named AGA) to allocate and schedule real-time tasks with precedence constraints on heterogeneous virtual machines. A comprehensive simulation study has been done to evaluate the performance of the proposed algorithms in terms of their solution quality and efficiency. The simulation results show that AGA outperforms the greedy algorithm and a non-adaptive genetic algorithm in terms of solution quality.
CERN LHC signals from warped extra dimensions
International Nuclear Information System (INIS)
Agashe, Kaustubh; Belyaev, Alexander; Krupovnickas, Tadas; Perez, Gilad; Virzi, Joseph
2008-01-01
We study production of Kaluza-Klein (KK) gluons at the Large Hadron Collider (LHC) in the framework of a warped extra dimension with the standard model fields propagating in the bulk. We show that the detection of the KK gluon is challenging since its production is suppressed by small couplings to the proton's constituents. Moreover, the KK gluon decays mostly to top pairs due to an enhanced coupling and hence is broad. Nevertheless, we demonstrate that for M_KKG ≲ 4 TeV, 100 fb-1 of data at the LHC can provide discovery of the KK gluon. We utilize a sizable left-right polarization asymmetry from the KK gluon resonance to maximize the signal significance, and we explore the novel feature of extremely highly energetic 'top-jets'. We briefly discuss how the detection of electroweak gauge KK states (Z/W) faces a similar challenge since their leptonic decays (golden modes) are suppressed. Our analysis suggests that other frameworks, for example, little Higgs, which rely on UV completion via strong dynamics might face similar challenges, namely, (1) suppressed production rates for the new particles (such as Z'), due to their 'light-fermion-phobic' nature, and (2) difficulties in detection since the new particles are broad and decay predominantly to third generation quarks and longitudinal gauge bosons.
LHC Signals from Warped Extra Dimensions
International Nuclear Information System (INIS)
Agashe, K.; Belyaev, A.; Krupovnickas, T.; Perez, G.; Virzi, J.
2006-01-01
We study production of Kaluza-Klein gluons (KKG) at the Large Hadron Collider (LHC) in the framework of a warped extra dimension with the Standard Model (SM) fields propagating in the bulk. We show that the detection of KK gluon is challenging since its production is suppressed by small couplings to the proton's constituents. Moreover, the KK gluon decays mostly to top pairs due to an enhanced coupling and hence is broad. Nevertheless, we demonstrate that for MKKG < 4 TeV, 100 fb-1 of data at the LHC can provide discovery of the KK gluon. We utilize a sizeable left-right polarization asymmetry from the KK gluon resonance to maximize the signal significance, and we explore the novel feature of extremely highly energetic 'top-jets'. We briefly discuss how the detection of electroweak gauge KK states (Z/W) faces a similar challenge since their leptonic decays ('golden' modes) are suppressed. Our analysis suggests that other frameworks, for example little Higgs, which rely on UV completion via strong dynamics might face similar challenges, namely (1) Suppressed production rates for the new particles (such as Z'), due to their 'light fermion-phobic' nature, and (2) Difficulties in detection since the new particles are broad and decay predominantly to third generation quarks and longitudinal gauge bosons
Unified flavor symmetry from warped dimensions
Energy Technology Data Exchange (ETDEWEB)
Frank, Mariana, E-mail: mariana.frank@concordia.ca [Department of Physics, Concordia University, 7141 Sherbrooke St. West, Montreal, Quebec, H4B 1R6 (Canada); Hamzaoui, Cherif, E-mail: hamzaoui.cherif@uqam.ca [Groupe de Physique Théorique des Particules, Département des Sciences de la Terre et de L' Atmosphère, Université du Québec à Montréal, Case Postale 8888, Succ. Centre-Ville, Montréal, Québec, H3C 3P8 (Canada); Pourtolami, Nima, E-mail: n_pour@live.concordia.ca [Department of Physics, Concordia University, 7141 Sherbrooke St. West, Montreal, Quebec, H4B 1R6 (Canada); Toharia, Manuel, E-mail: mtoharia@physics.concordia.ca [Department of Physics, Concordia University, 7141 Sherbrooke St. West, Montreal, Quebec, H4B 1R6 (Canada)
2015-03-06
In a model of warped extra dimensions with all matter fields in the bulk, we propose a scenario which explains all the masses and mixings of the SM fermions. In this scenario, the same flavor symmetric structure is imposed on all the fermions of the Standard Model (SM), including neutrinos. Due to the exponential sensitivity to bulk fermion masses, a small breaking of this symmetry can be greatly enhanced and produce seemingly un-symmetric hierarchical masses and small mixing angles among the charged fermion zero-modes (SM quarks and charged leptons), thus washing out visible effects of the symmetry. If the Dirac neutrinos are sufficiently localized towards the UV boundary and the Higgs field leaks into the bulk, the neutrino mass hierarchy and flavor structure will still largely be dominated by, and reflect, the fundamental flavor structure, whereas the localization of the quark sector reflects the effects of the flavor symmetry breaking sector. We explore these features in an example in which a family permutation symmetry is imposed in both the quark and lepton sectors.
LHC Signals from Warped Extra Dimensions
Energy Technology Data Exchange (ETDEWEB)
Agashe, K.; Belyaev, A.; Krupovnickas, T.; Perez, G.; Virzi, J.
2006-12-06
We study production of Kaluza-Klein gluons (KKG) at the Large Hadron Collider (LHC) in the framework of a warped extra dimension with the Standard Model (SM) fields propagating in the bulk. We show that the detection of the KK gluon is challenging since its production is suppressed by small couplings to the proton's constituents. Moreover, the KK gluon decays mostly to top pairs due to an enhanced coupling and hence is broad. Nevertheless, we demonstrate that for MKKG ≲ 4 TeV, 100 fb-1 of data at the LHC can provide discovery of the KK gluon. We utilize a sizeable left-right polarization asymmetry from the KK gluon resonance to maximize the signal significance, and we explore the novel feature of extremely highly energetic "top-jets". We briefly discuss how the detection of electroweak gauge KK states (Z/W) faces a similar challenge since their leptonic decays ("golden" modes) are suppressed. Our analysis suggests that other frameworks, for example little Higgs, which rely on UV completion via strong dynamics might face similar challenges, namely (1) suppressed production rates for the new particles (such as Z'), due to their "light-fermion-phobic" nature, and (2) difficulties in detection since the new particles are broad and decay predominantly to third generation quarks and longitudinal gauge bosons.
Extraordinary phenomenology from warped flavor triviality
International Nuclear Information System (INIS)
Delaunay, Cedric; Gedalia, Oram; Lee, Seung J.; Perez, Gilad; Ponton, Eduardo
2011-01-01
Anarchic warped extra dimensional models provide a solution to the hierarchy problem. They can also account for the observed flavor hierarchies, but only at the expense of little hierarchy and CP problems, which naturally require a Kaluza-Klein (KK) scale beyond the LHC reach. We have recently shown that when flavor issues are decoupled, and assumed to be solved by UV physics, the framework's parameter space greatly opens. Given the possibility of a lower KK scale and composite light quarks, this class of flavor triviality models enjoys a rather exceptional phenomenology, which is the focus of this Letter. We also revisit the anarchic RS EDM problem, which requires m_KK ≥ 12 TeV, and show that it is solved within flavor triviality models. Interestingly, our framework can induce a sizable differential tt-bar forward-backward asymmetry, and leads to an excess of massive boosted di-jet events, which may be linked to the recent findings of the CDF Collaboration. This feature may be observed by looking at the corresponding planar flow distribution, which is presented here. Finally we point out that the celebrated standard model preference towards a light Higgs is significantly reduced within our framework.
STRONG FIELD EFFECTS ON EMISSION LINE PROFILES: KERR BLACK HOLES AND WARPED ACCRETION DISKS
International Nuclear Information System (INIS)
Wang Yan; Li Xiangdong
2012-01-01
If an accretion disk around a black hole is illuminated by hard X-rays from non-thermal coronae, fluorescent iron lines will be emitted from the inner region of the accretion disk. The emission line profiles will show a variety of strong field effects, which may be used as a probe of the spin parameter of the black hole and the structure of the accretion disk. In this paper, we generalize the previous relativistic line profile models by including both the black hole spinning effects and the non-axisymmetries of warped accretion disks. Our results show different features from the conventional calculations for either a flat disk around a Kerr black hole or a warped disk around a Schwarzschild black hole by presenting, at the same time, multiple peaks, rather long red tails, and time variations of line profiles with the precession of the disk. We show disk images as seen by a distant observer, which are distorted by the strong gravity. Although we are primarily concerned with the iron K-shell lines in this paper, the calculation is general and is valid for any emission lines produced from a warped accretion disk around a black hole.
Ningrum, R. W.; Surarso, B.; Farikhin; Safarudin, Y. M.
2018-03-01
This paper proposes the combination of the Firefly Algorithm (FA) and Chen fuzzy time series forecasting. Most existing fuzzy forecasting methods based on fuzzy time series use a static interval length. Therefore, we apply an artificial intelligence technique, the Firefly Algorithm (FA), to set a non-stationary interval length for each cluster in Chen's method. The method is evaluated by applying it to the Jakarta Composite Index (IHSG) and comparing it with classical Chen fuzzy time series forecasting. Its performance is verified through simulation using Matlab.
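Chen's baseline method with static equal-width intervals, which the Firefly Algorithm is used to improve, can be sketched as follows (toy data, not the IHSG series):

```python
def chen_forecast(series, n_intervals=3):
    """One-step Chen fuzzy time series forecast with static equal-width intervals.
    The paper replaces this fixed partition with FA-tuned interval lengths."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / n_intervals
    mid = [lo + (i + 0.5) * width for i in range(n_intervals)]
    # Fuzzify: map each observation to the index of the interval containing it.
    fuzz = [min(int((x - lo) / width), n_intervals - 1) for x in series]
    # Fuzzy logical relationship groups: state -> set of observed successor states.
    groups = {}
    for a, b in zip(fuzz, fuzz[1:]):
        groups.setdefault(a, set()).add(b)
    last = fuzz[-1]
    nxt = groups.get(last, {last})
    # Defuzzify: average of the midpoints of the successor intervals.
    return sum(mid[s] for s in nxt) / len(nxt)
```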
A Real-Time evaluation system for a state-of-charge indication algorithm
Pop, V.; Bergveld, H.J.; Notten, P.H.L.; Regtien, Paulus P.L.
2005-01-01
The known methods of State-of-Charge (SoC) indication in portable applications are not accurate enough under all practical conditions. This paper describes a real-time evaluation LabVIEW system for an SoC algorithm, that calculates the SoC in [%] and also the remaining run-time available under the
A real-time evaluation system for a state-of-charge indication algorithm
Pop, V.; Bergveld, H.J.; Notten, P.H.L.; Regtien, P.P.L.
2005-01-01
The known methods of State-of-Charge (SoC) indication in portable applications are not accurate enough under all practical conditions. This paper describes a real-time evaluation LabVIEW system for an SoC algorithm, that calculates the SoC in [%] and also the remaining run-time available under the
An Efficient Algorithm for the Optimal Market Timing over Two Stocks
Institute of Scientific and Technical Information of China (English)
Hui Li; Hong-zhi An; Guo-fu Wu
2004-01-01
In this paper, the optimal trading strategy for timing the market by switching between two stocks is given. In order to deal with a large sample size with a fast turnaround computation time, we propose a class of recursive algorithms. A simulation is given to verify the effectiveness of our method.
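One simple recursive (dynamic-programming) formulation of switching between two stocks, a sketch of the general idea rather than the authors' specific algorithm: track the best achievable wealth while holding each stock and update both in a single pass over the return series.

```python
def optimal_switch(r1, r2, cost=0.0):
    """Optimal market timing between two stocks, starting with wealth 1.
    r1, r2: per-period gross returns of each stock; at each period the investor
    holds exactly one stock and may switch at a proportional cost beforehand."""
    w1, w2 = r1[0], r2[0]  # best wealth if holding stock 1 / stock 2 after period 0
    for g1, g2 in zip(r1[1:], r2[1:]):
        # Either stay in the stock, or switch in from the other (paying the cost),
        # then earn this period's return; the recursion is O(1) per period.
        w1, w2 = max(w1, w2 * (1 - cost)) * g1, max(w2, w1 * (1 - cost)) * g2
    return max(w1, w2)
```

With zero switching cost this reduces to taking the better return each period; a positive cost makes the trade-off between staying and switching nontrivial.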
Effective Iterated Greedy Algorithm for Flow-Shop Scheduling Problems with Time lags
ZHAO, Ning; YE, Song; LI, Kaidian; CHEN, Siyu
2017-05-01
Flow shop scheduling with time lags is a practical scheduling problem and attracts many studies. The permutation problem (PFSP with time lags) has received much attention, but the non-permutation problem (non-PFSP with time lags) seems to be neglected. With the aim of minimizing the makespan and satisfying time lag constraints, efficient algorithms corresponding to the PFSP and non-PFSP problems are proposed, consisting of an iterated greedy algorithm for the permutation case (IGTLP) and an iterated greedy algorithm for the non-permutation case (IGTLNP). The proposed algorithms are verified using well-known simple and complex instances of permutation and non-permutation problems with various time lag ranges. The permutation results indicate that the proposed IGTLP can reach a near-optimal solution within nearly 11% of the computational time of a traditional GA approach. The non-permutation results indicate that the proposed IG can reach nearly the same solution within less than 1% of the computational time compared with a traditional GA approach. The proposed research combines the PFSP and non-PFSP together with minimal and maximal time lag considerations, which provides an interesting viewpoint for industrial implementation.
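The iterated greedy skeleton (destroy a few jobs, greedily re-insert each at its best position, accept on improvement) can be sketched for the plain permutation flow shop; time lag constraints and the paper's exact parameters are omitted from this illustration:

```python
import random

def makespan(perm, p):
    """Completion time of the last job on the last machine; p[j][m] = processing time."""
    m = len(p[0])
    c = [0.0] * m
    for j in perm:
        c[0] += p[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def iterated_greedy(p, iters=200, d=2, seed=0):
    """Classic IG loop: remove d random jobs, greedily re-insert each at its best
    position, keep the new permutation if the makespan does not worsen."""
    rng = random.Random(seed)
    cur = list(range(len(p)))
    best = cur[:]
    for _ in range(iters):
        partial = cur[:]
        removed = [partial.pop(rng.randrange(len(partial))) for _ in range(d)]
        for j in removed:
            cands = [partial[:i] + [j] + partial[i:] for i in range(len(partial) + 1)]
            partial = min(cands, key=lambda s: makespan(s, p))
        if makespan(partial, p) <= makespan(cur, p):
            cur = partial
            if makespan(cur, p) < makespan(best, p):
                best = cur[:]
    return best, makespan(best, p)
```

Adding minimal/maximal time lags would change only the `makespan` evaluation (delaying or invalidating starts), leaving the destroy/reconstruct loop intact, which is the appeal of the IG template.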
New time-saving predictor algorithm for multiple breath washout in adolescents
DEFF Research Database (Denmark)
Grønbæk, Jonathan; Hallas, Henrik Wegener; Arianto, Lambang
2016-01-01
BACKGROUND: Multiple breath washout (MBW) is an informative but time-consuming test. This study evaluates the uncertainty of a time-saving predictor algorithm in adolescents. METHODS: Adolescents were recruited from the Copenhagen Prospective Study on Asthma in Childhood (COPSAC2000) birth cohort...
A branch-and-cut algorithm for the Time Window Assignment Vehicle Routing Problem
K. Dalmeijer (Kevin); R. Spliet (Remy)
2016-01-01
This paper presents a branch-and-cut algorithm for the Time Window Assignment Vehicle Routing Problem (TWAVRP), the problem of assigning time windows for delivery before demand volume becomes known. A novel set of valid inequalities, the precedence inequalities, is introduced and
A wavelet-based PWTD algorithm-accelerated time domain surface integral equation solver
Liu, Yang; Yucel, Abdulkadir C.; Gilbert, Anna C.; Bagci, Hakan; Michielssen, Eric
2015-01-01
© 2015 IEEE. The multilevel plane-wave time-domain (PWTD) algorithm allows for fast and accurate analysis of transient scattering from, and radiation by, electrically large and complex structures. When used in tandem with marching-on-in-time (MOT
A Scheduling Algorithm for Time Bounded Delivery of Packets on the Internet
I. Vaishnavi (Ishan)
2008-01-01
This thesis aims to provide a better scheduling algorithm for real-time delivery of packets. A number of emerging applications such as VoIP, tele-immersive environments, distributed media viewing and distributed gaming require real-time delivery of packets. Currently the scheduling
Gong, Chunye; Bao, Weimin; Tang, Guojian; Jiang, Yuewen; Liu, Jie
2014-01-01
It is very time consuming to solve fractional differential equations. The computational complexity of solving a two-dimensional time-fractional differential equation (2D-TFDE) with an iterative implicit finite difference method is O(M_x M_y N^2). In this paper, we present a parallel algorithm for the 2D-TFDE and give an in-depth discussion of this algorithm. A task distribution model and a data layout with virtual boundaries are designed for this parallel algorithm. The experimental results show that the parallel algorithm compares well with the exact solution. The parallel algorithm on a single Intel Xeon X5540 CPU runs 3.16-4.17 times faster than the serial algorithm on a single CPU core. The parallel efficiency with 81 processes is up to 88.24% compared with 9 processes on a distributed-memory cluster system. We believe that parallel computing technology will become a basic method for computationally intensive fractional applications in the near future.
Zhou, Hui; Ji, Ning; Samuel, Oluwarotimi Williams; Cao, Yafei; Zhao, Zheyi; Chen, Shixiong; Li, Guanglin
2016-10-01
Real-time detection of gait events can serve as a reliable input for controlling drop-foot correction devices and lower-limb prostheses. Among the different sensors used to acquire walking signals for gait event detection, the accelerometer is considered preferable due to its convenience of use, small size, low cost, reliability, and low power consumption. Based on acceleration signals, different algorithms have been proposed in previous studies to detect the toe-off (TO) and heel-strike (HS) gait events. While these algorithms achieve relatively reasonable performance in gait event detection, they suffer from limitations such as poor real-time performance and reduced reliability on up-stair and down-stair terrains. In this study, a new algorithm is proposed to detect gait events on three walking terrains in real time, based on the analysis of acceleration jerk signals with a time-frequency method to obtain gait parameters, followed by determination of the peaks of the jerk signals using peak heuristics. The performance of the newly proposed algorithm was evaluated with eight healthy subjects walking on level ground, up stairs, and down stairs. Our experimental results showed that the mean F1 scores of the proposed algorithm were above 0.98 for HS event detection and 0.95 for TO event detection on the three terrains. This indicates that the algorithm is robust and accurate for gait event detection on different terrains. Findings from the current study suggest that the proposed method may be a preferable option in applications such as drop-foot correction devices and leg prostheses.
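The jerk-plus-peak-picking idea can be sketched as follows. The paper's actual time-frequency analysis and peak heuristics are not detailed in the abstract, so the threshold and refractory period used here are illustrative assumptions:

```python
def jerk_peaks(accel, fs, threshold, refractory_s=0.25):
    """Detect candidate gait events as prominent peaks of the jerk signal.

    accel: 1-D acceleration samples; fs: sampling rate in Hz.
    Returns sample indices where the jerk (first difference of the
    acceleration) is a local maximum above `threshold`, with at least
    `refractory_s` seconds between detections (a simple stand-in for
    the paper's peak heuristics).
    """
    jerk = [(accel[i + 1] - accel[i]) * fs for i in range(len(accel) - 1)]
    refractory = int(refractory_s * fs)
    peaks, last = [], -refractory
    for i in range(1, len(jerk) - 1):
        if (jerk[i] >= threshold and jerk[i] >= jerk[i - 1]
                and jerk[i] > jerk[i + 1] and i - last >= refractory):
            peaks.append(i)
            last = i
    return peaks
```

In a real detector the threshold and refractory period would be tuned per terrain, which is what the paper's gait-parameter estimation step provides.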
A Method Based on Dial's Algorithm for Multi-time Dynamic Traffic Assignment
Directory of Open Access Journals (Sweden)
Rongjie Kuang
2014-03-01
Because static traffic assignment reflects actual conditions poorly and dynamic traffic assignment may incur excessive computational cost, multi-time dynamic traffic assignment, which combines static and dynamic traffic assignment, balances precision and cost effectively. A method based on Dial's logit algorithm is proposed in this article to solve the dynamic stochastic user equilibrium problem in dynamic traffic assignment. Before that, a fitting function that approximately reflects the overloaded traffic condition of a link is proposed and used in the corresponding model. A numerical example is given to illustrate the heuristic procedure of the method and to compare its results with those of the same example solved by another algorithm from the literature. The results show that the method based on Dial's algorithm is preferable to the others.
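The route-choice model underlying Dial's algorithm is a logit split over path costs. Dial's STOCH procedure computes this without enumerating paths; the sketch below only illustrates the logit model itself for an enumerated candidate set, with an assumed dispersion parameter theta:

```python
import math

def logit_path_probabilities(costs, theta=1.0):
    """Logit route-choice probabilities, as used in Dial's algorithm.

    costs: travel costs of the candidate paths; theta: dispersion
    parameter (larger theta concentrates flow on cheaper paths).
    Subtracting the minimum cost keeps exp() numerically stable.
    """
    c_min = min(costs)
    weights = [math.exp(-theta * (c - c_min)) for c in costs]
    total = sum(weights)
    return [w / total for w in weights]
```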
International Nuclear Information System (INIS)
Bur'yan, V.I.; Kozlova, L.V.; Kuzhil', A.S.; Shikalov, V.F.
2005-01-01
The development of algorithms for correction of self-powered neutron detector (SPND) inertia is motivated by the need to increase the response speed of in-core instrumentation systems (ICIS). Faster ICIS response will permit real-time monitoring of fast transient processes in the core and, in perspective, the use of rhodium SPND signals for emergency protection functions based on local parameters. In this paper it is proposed to use a mathematical model of neutron flux measurement by SPND, in integral form, to construct the correction algorithms. This approach is, in this case, the most convenient for constructing recurrent algorithms for flux estimation. Results are presented comparing estimates of neutron flux and reactivity obtained from ionization chamber readings and from SPND signals corrected by the proposed algorithms.
Xu, Sheng-Hua; Liu, Ji-Ping; Zhang, Fu-Hao; Wang, Liang; Sun, Li-Jian
2015-08-27
A combination of a genetic algorithm and particle swarm optimization (PSO) for the vehicle routing problem with time windows (VRPTW) is proposed in this paper. The improvements in the proposed algorithm include: using a real-number particle encoding method to decode the route and alleviate the computational burden, applying a linearly decreasing function of the iteration count to balance global and local exploration abilities, and integrating the crossover operator of the genetic algorithm to avoid premature convergence to local minima. The experimental results show that the proposed algorithm is not only efficient and competitive with other published results but can also obtain more optimal solutions for the VRPTW. One new best-known solution for this benchmark problem is also reported.
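The linearly decreasing function mentioned above is commonly applied to the PSO inertia weight. Below is a minimal sketch of that mechanism on a toy continuous objective, not the paper's VRPTW encoding; all parameter values are illustrative:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100,
                 w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, seed=1):
    """Minimal PSO with a linearly decreasing inertia weight.

    The weight falls from w_max to w_min over the iterations, shifting
    the swarm from global exploration toward local exploitation, which
    is the balancing idea described in the paper.
    """
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / (iters - 1)   # linear decrease
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

The hybrid in the paper additionally applies a GA crossover to the particles; that step is omitted here.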
Comparison of SAR calculation algorithms for the finite-difference time-domain method
International Nuclear Information System (INIS)
Laakso, Ilkka; Uusitupa, Tero; Ilvonen, Sami
2010-01-01
Finite-difference time-domain (FDTD) simulations of specific-absorption rate (SAR) have several uncertainty factors. For example, significantly varying SAR values may result from the use of different algorithms for determining the SAR from the FDTD electric field. The objective of this paper is to rigorously study the divergence of SAR values due to different SAR calculation algorithms and to examine if some SAR calculation algorithm should be preferred over others. For this purpose, numerical FDTD results are compared to analytical solutions in a one-dimensional layered model and a three-dimensional spherical object. Additionally, the implications of SAR calculation algorithms for dosimetry of anatomically realistic whole-body models are studied. The results show that the trapezium algorithm (based on the trapezium integration rule) is always conservative compared to the analytic solution, making it a good choice for worst-case exposure assessment. In contrast, the mid-ordinate algorithm (named after the mid-ordinate integration rule) usually underestimates the analytic SAR. The linear algorithm, which is approximately a weighted average of the two, seems to be the most accurate choice overall, typically giving the best fit with the shape of the analytic SAR distribution. For anatomically realistic models, the whole-body SAR difference between different algorithms is relatively independent of the used body model, incident direction and polarization of the plane wave. The main factors affecting the difference are cell size and frequency. The choice of the SAR calculation algorithm is an important simulation parameter in high-frequency FDTD SAR calculations, and it should be explained to allow intercomparison of the results between different studies. (note)
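The ordering reported above (trapezium conservative, mid-ordinate underestimating) mirrors a classical property of the two quadrature rules on convex integrands, which can be checked directly. This sketch is generic numerical integration, not the paper's FDTD SAR computation:

```python
import math

def trapezium(f, a, b, n):
    """Composite trapezium rule on n sub-intervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

def midpoint(f, a, b, n):
    """Composite mid-ordinate (midpoint) rule on n sub-intervals."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))
```

For a convex integrand such as exp(-x), the trapezium rule brackets the true integral from above and the midpoint rule from below, matching the conservative/underestimating behavior the paper observes for SAR.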
Self-accelerated brane Universe with warped extra dimension
Gorbunov, D S
2008-01-01
We propose a cosmological model which exhibits the phenomenon of self-acceleration: the Universe is attracted to the phase of accelerated expansion at late times even in the absence of the cosmological constant. The self-acceleration is inevitable in the sense that it cannot be neutralized by any negative explicit cosmological constant. The model is formulated in the framework of brane-world theories with a warped extra dimension. The key ingredient of the model is the brane-bulk energy transfer which is carried by bulk vector fields with a sigma-model-like boundary condition on the brane. We explicitly find the 5-dimensional metric corresponding to the late-time de Sitter expansion on the brane; this metric describes an AdS_5 black hole with growing mass. The present value of the Hubble parameter implies the scale of new physics of order 1 TeV, where the proposed model has to be replaced by putative UV-completion. The mechanism leading to the self-acceleration has AdS/CFT interpretation as occurring due to s...
Real Time Optima Tracking Using Harvesting Models of the Genetic Algorithm
Baskaran, Subbiah; Noever, D.
1999-01-01
Tracking optima in real-time propulsion control, particularly for non-stationary optimization problems, is a challenging task. Several approaches have been put forward for such a study, including the numerical method called the genetic algorithm. In brief, this approach is built upon Darwinian-style competition between numerical alternatives encoded as binary strings, or by analogy 'pseudogenes'. Breeding of improved solutions is an often-cited parallel to natural selection in evolutionary or soft computing. In this report we present our results of applying a novel harvesting model of the genetic algorithm to tracking optima in propulsion engineering and in real-time control. We specialize the algorithm to mission profiling and planning optimizations, both to reduce propulsion needs through trajectory planning and to explore time- or fuel-conservation strategies.
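The binary-string machinery described above can be sketched with a generic elitist GA. The paper's harvesting model adds its own selection scheme, which the abstract does not specify, so this is only the baseline mechanism with illustrative parameters:

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=20, generations=50,
                      p_mut=0.02, seed=7):
    """Minimal binary-string GA with elitism (generic sketch)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        nxt = [scored[0][:]]                          # elitism: keep the best
        while len(nxt) < pop_size:
            a, b = rng.sample(scored[:pop_size // 2], 2)  # truncation selection
            cut = rng.randrange(1, n_bits)                # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - bit if rng.random() < p_mut else bit
                     for bit in child]                    # bit-flip mutation
            nxt.append(child)
        pop = nxt
        best = max(pop + [best], key=fitness)
    return best
```

For non-stationary problems of the kind the paper targets, the fitness function changes over time and the population is re-evaluated each generation; that outer loop is omitted here.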
An Optimal Scheduling Algorithm with a Competitive Factor for Real-Time Systems
1991-07-29
real-time systems in which the value of a task is proportional to its computation time. The system obtains the value of a given task if the task completes by its deadline; otherwise, the system obtains no value for the task. When such a system is underloaded (i.e., there exists a schedule for which all tasks meet their deadlines), Dertouzos [6] showed that the earliest-deadline-first algorithm will achieve 100% of the possible value. We consider the case of a possibly overloaded system and present an algorithm which: 1. behaves like the earliest deadline first
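The value model and the EDF baseline described above can be sketched with a unit-time-step simulator. This is a simplified illustration of the model, not the paper's competitive algorithm; a task that can no longer meet its deadline is simply discarded:

```python
import heapq

def edf_value(tasks, horizon):
    """Unit-step preemptive EDF under the value model above.

    tasks: list of (release, computation, deadline) tuples; a task
    earns value equal to its computation time only if it finishes by
    its deadline. Returns the total value obtained over [0, horizon).
    """
    ready, value = [], 0
    for t in range(horizon):
        for (r, c, d) in tasks:
            if r == t:
                heapq.heappush(ready, [d, c, c])   # [deadline, remaining, value]
        while ready and ready[0][0] <= t:          # can no longer meet deadline
            heapq.heappop(ready)
        if ready:
            ready[0][1] -= 1                       # run earliest-deadline task
            if ready[0][1] == 0:
                _, _, c = heapq.heappop(ready)
                value += c                         # completed by its deadline
    return value
```

Under overload, plain EDF can obtain far less than the optimal value (e.g., two identical tasks contending for the same window yield only half the total), which is exactly the regime the paper's competitive algorithm addresses.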
A Study on the Enhanced Best Performance Algorithm for the Just-in-Time Scheduling Problem
Directory of Open Access Journals (Sweden)
Sivashan Chetty
2015-01-01
The Just-In-Time (JIT) scheduling problem is an important subject of study. It essentially constitutes the problem of scheduling critical business resources in an attempt to optimize given business objectives. This problem is NP-hard in nature, hence requiring efficient solution techniques. To solve the JIT scheduling problem presented in this study, a new local search metaheuristic algorithm, namely the enhanced Best Performance Algorithm (eBPA), is introduced. This is part of the initial study of the algorithm for scheduling problems. The current problem setting is the allocation of a large number of jobs to be scheduled on multiple identical machines running in parallel. The due date of a job is characterized by a window of time rather than a specific point in time. The performance of the eBPA is compared against Tabu Search (TS) and Simulated Annealing (SA), two well-known local search metaheuristic algorithms. The results show the potential of the eBPA as a metaheuristic algorithm.
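The eBPA itself is not specified in the abstract, but the SA baseline it is compared against can be sketched on a toy single-machine JIT objective with due windows. The cost function, swap neighborhood, and all parameters here are illustrative assumptions:

```python
import math
import random

def sa_jit(jobs, iters=2000, t0=10.0, cooling=0.995, seed=3):
    """Simulated-annealing baseline for a toy JIT sequencing problem.

    jobs: list of (duration, window_start, window_end); the cost
    penalizes a job completing outside its due window (earliness plus
    tardiness), the hallmark of JIT objectives.
    """
    rng = random.Random(seed)

    def cost(order):
        t, total = 0, 0
        for j in order:
            dur, ws, we = jobs[j]
            t += dur
            total += max(0, ws - t) + max(0, t - we)  # earliness + tardiness
        return total

    cur = list(range(len(jobs)))
    cur_cost = cost(cur)
    best, best_cost, temp = cur[:], cur_cost, t0
    for _ in range(iters):
        i, k = rng.sample(range(len(jobs)), 2)
        cand = cur[:]
        cand[i], cand[k] = cand[k], cand[i]           # swap two positions
        c = cost(cand)
        if c < cur_cost or rng.random() < math.exp((cur_cost - c) / temp):
            cur, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = cand[:], c
        temp *= cooling                               # geometric cooling
    return best, best_cost
```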
Zhu, Zhe
2017-08-01
The free and open access to all archived Landsat images in 2008 has completely changed the way Landsat data are used. Many novel change detection algorithms based on Landsat time series have been developed. We present a comprehensive review of four important aspects of change detection studies based on Landsat time series: frequencies, preprocessing, algorithms, and applications. We observed the trend that the more recent the study, the higher the frequency of the Landsat time series used. We reviewed a series of image preprocessing steps, including atmospheric correction, cloud and cloud shadow detection, and composite/fusion/metrics techniques. We divided all change detection algorithms into six categories: thresholding, differencing, segmentation, trajectory classification, statistical boundary, and regression. Within each category, six major characteristics of the different algorithms were analyzed, such as frequency, change index, univariate/multivariate, online/offline, abrupt/gradual change, and sub-pixel/pixel/spatial scale. Moreover, some of the widely used change detection algorithms were also discussed. Finally, we reviewed different change detection applications, divided into change target and change agent detection.
Galactic warps and the shape of heavy halos
International Nuclear Information System (INIS)
Sparke, L.S.
1984-01-01
The outer disks of many spiral galaxies are bent away from the plane of the inner disk; the abundance of these warps suggests that they are long-lived. Isolated galactic disks have long been thought to have no discrete modes of vertical oscillation under their own gravity, and so to be incapable of sustaining persistent warps. However, the visible disk contains only a fraction of the galactic mass; an invisible galactic halo makes up the rest. This paper presents an investigation of vertical warping modes in self-gravitating disks, in the imposed potential due to an axisymmetric unseen massive halo. If the halo matter is distributed so that the free precession rate of a test particle decreases with radius near the edge of the disk, then the disk has a discrete mode of vibration; oblate halos which become rapidly more flattened at large radii, and uniformly prolate halos, satisfy this requirement. Otherwise, the disk has no discrete modes and so cannot maintain a long-lived warp, unless the edge is sharply truncated. Computed mode shapes which resemble the observed warps can be found for halo masses consistent with those inferred from galactic rotation curves
Comparison of turbulence mitigation algorithms
Kozacik, Stephen T.; Paolini, Aaron; Sherman, Ariel; Bonnett, James; Kelmelis, Eric
2017-07-01
When capturing imagery over long distances, atmospheric turbulence often degrades the data, especially when observation paths are close to the ground or in hot environments. These issues manifest as time-varying scintillation and warping effects that decrease the effective resolution of the sensor and reduce actionable intelligence. In recent years, several image processing approaches to turbulence mitigation have shown promise. Each of these algorithms has different computational requirements, usability demands, and degrees of independence from camera sensors. They also produce different degrees of enhancement when applied to turbulent imagery. Additionally, some of these algorithms are applicable to real-time operational scenarios while others may only be suitable for postprocessing workflows. EM Photonics has been developing image-processing-based turbulence mitigation technology since 2005. We will compare techniques from the literature with our commercially available, real-time, GPU-accelerated turbulence mitigation software. These comparisons will be made using real (not synthetic), experimentally obtained data for a variety of conditions, including varying optical hardware, imaging range, subjects, and turbulence conditions. Comparison metrics will include image quality, video latency, computational complexity, and potential for real-time operation. Additionally, we will present a technique for quantitatively comparing turbulence mitigation algorithms using real images of radial resolution targets.
Park, Jihong; Kim, Ki-Hyung; Kim, Kangseok
2017-04-19
The IPv6 Routing Protocol for Low Power and Lossy Networks (RPL) was proposed for various applications of IPv6 low power wireless networks. While RPL supports various routing metrics and is designed to be suitable for wireless sensor network environments, it does not consider the mobility of nodes. Therefore, there is a need for a method that is energy efficient and that provides stable and reliable data transmission by considering the mobility of nodes in RPL networks. This paper proposes an algorithm to support node mobility in RPL in an energy-efficient manner and describes its operating principle based on different scenarios. The proposed algorithm supports the mobility of nodes by dynamically adjusting the transmission interval of the messages that request the route based on the speed and direction of the motion of mobile nodes, as well as the costs between neighboring nodes. The performance of the proposed algorithm and previous algorithms for supporting node mobility were examined experimentally. From the experiment, it was observed that the proposed algorithm requires fewer messages per unit time for selecting a new parent node following the movement of a mobile node. Since fewer messages are used to select a parent node, the energy consumption is also less than that of previous algorithms.
Directory of Open Access Journals (Sweden)
Benjamin M. Cowan
2013-04-01
We describe a modification to the finite-difference time-domain algorithm for electromagnetics on a Cartesian grid which eliminates numerical dispersion error in vacuum for waves propagating along a grid axis. We provide details of the algorithm, which generalizes previous work by allowing 3D operation with a wide choice of aspect ratio, and give conditions to eliminate dispersive errors along one or more of the coordinate axes. We discuss the algorithm in the context of laser-plasma acceleration simulation, showing a significant reduction of the dispersion error of a linear laser pulse in a plasma channel (up to a factor of 280 at a plasma density of 10^{23} m^{-3}). We then compare the new algorithm with the standard electromagnetic update for laser-plasma accelerator stage simulations, demonstrating that by controlling numerical dispersion, the new algorithm allows more accurate simulation than is otherwise obtained. We also show that the algorithm can be used to overcome the critical but difficult challenge of consistent initialization of a relativistic particle beam and its fields in an accelerator simulation.
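The underlying principle is visible already in one dimension, where the standard FDTD scheme run at Courant number S = c*dt/dx = 1 (the "magic time step") is exactly dispersionless: a pulse advances precisely one cell per step with no distortion. The paper's contribution is achieving this along a grid axis in 3D; the sketch below only demonstrates the classical 1D case, in normalized units with periodic boundaries:

```python
def fdtd_1d(ez0, steps):
    """1-D vacuum FDTD (normalized units) at Courant number S = 1.

    ez0: initial E field samples. The H field is initialized so the
    pulse is purely right-going (one possible choice). At S = 1 the
    leapfrog update is exact: the field pattern shifts one cell per
    time step with zero numerical dispersion.
    """
    n = len(ez0)
    ez = ez0[:]
    hy = [-ez[(i + 1) % n] for i in range(n)]   # launch a right-going wave
    for _ in range(steps):
        for i in range(n):
            hy[i] += ez[(i + 1) % n] - ez[i]    # update H from curl of E
        for i in range(n):
            ez[i] += hy[i] - hy[i - 1]          # update E from curl of H (wraps)
    return ez
```

Off the magic time step (S < 1), the same update smears the pulse, which is the numerical dispersion the paper's modified 3D scheme suppresses along an axis.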
Li, Xinya; Deng, Zhiqun Daniel; Rauchenstein, Lynn T.; Carlson, Thomas J.
2016-04-01
Locating the position of fixed or mobile sources (i.e., transmitters) based on measurements obtained from sensors (i.e., receivers) is an important research area that is attracting much interest. In this paper, we review several representative localization algorithms that use time of arrivals (TOAs) and time difference of arrivals (TDOAs) to achieve high signal source position estimation accuracy when a transmitter is in the line-of-sight of a receiver. Circular (TOA) and hyperbolic (TDOA) position estimation approaches both use nonlinear equations that relate the known locations of receivers and unknown locations of transmitters. Estimation of the location of transmitters using the standard nonlinear equations may not be very accurate because of receiver location errors, receiver measurement errors, and computational efficiency challenges that result in high computational burdens. Least squares and maximum likelihood based algorithms have become the most popular computational approaches to transmitter location estimation. In this paper, we summarize the computational characteristics and position estimation accuracies of various positioning algorithms. By improving methods for estimating the time-of-arrival of transmissions at receivers and transmitter location estimation algorithms, transmitter location estimation may be applied across a range of applications and technologies such as radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.
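The nonlinear TOA equations mentioned above are commonly handled by a standard linearization: subtracting the range equation of a reference receiver cancels the quadratic term in the unknown position, leaving a linear least-squares problem. A 2-D, noise-free sketch with a hypothetical receiver layout:

```python
def toa_locate(receivers, ranges):
    """Linearized least-squares TOA localization in 2-D.

    From ||x - r_i||^2 = d_i^2, subtracting the first receiver's
    equation gives, for each i > 1:
        2*(r_1 - r_i) . x = (d_i^2 - d_1^2) - (|r_i|^2 - |r_1|^2),
    a linear system solved here via the 2x2 normal equations.
    """
    (x1, y1), d1 = receivers[0], ranges[0]
    A, b = [], []
    for (xi, yi), di in zip(receivers[1:], ranges[1:]):
        A.append((2 * (x1 - xi), 2 * (y1 - yi)))
        b.append((di ** 2 - d1 ** 2) - (xi ** 2 + yi ** 2 - x1 ** 2 - y1 ** 2))
    # normal equations: (A^T A) x = A^T b
    s11 = sum(a[0] * a[0] for a in A)
    s12 = sum(a[0] * a[1] for a in A)
    s22 = sum(a[1] * a[1] for a in A)
    t1 = sum(a[0] * bi for a, bi in zip(A, b))
    t2 = sum(a[1] * bi for a, bi in zip(A, b))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)
```

With noisy measurements the same system is solved in a weighted or iterative (maximum-likelihood) fashion, which is where the accuracy differences surveyed in the paper arise.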
On Closed Timelike Curves and Warped Brane World Models
Directory of Open Access Journals (Sweden)
Slagter Reinoud Jan
2013-09-01
At first glance, it seems possible to construct causality-violating solutions in general relativity. The most striking one is the Gott spacetime: two cosmic strings approaching each other at high velocity could produce closed timelike curves. It was quickly recognized that this solution violates physical boundary conditions; the effective one-particle generator becomes hyperbolic, so the center of mass is tachyonic. On a 5-dimensional warped spacetime, it seems possible to obtain an elliptic generator, so no obstruction is encountered and the velocity of the center of mass of the effective particle overlaps with the Gott region. So a CTC could, in principle, be constructed. However, from the effective 4D field equations on the brane, which are influenced by the projection of the bulk Weyl tensor on the brane, it follows that no asymptotically conical spacetime is found, and hence no angle deficit as in the 4D counterpart model. This could also explain why we do not observe cosmic strings.
Simulation of biochemical reactions with time-dependent rates by the rejection-based algorithm
Energy Technology Data Exchange (ETDEWEB)
Thanh, Vo Hong, E-mail: vo@cosbi.eu [The Microsoft Research - University of Trento Centre for Computational and Systems Biology, Piazza Manifattura 1, Rovereto 38068 (Italy); Priami, Corrado, E-mail: priami@cosbi.eu [The Microsoft Research - University of Trento Centre for Computational and Systems Biology, Piazza Manifattura 1, Rovereto 38068 (Italy); Department of Mathematics, University of Trento, Trento (Italy)
2015-08-07
We address the problem of simulating biochemical reaction networks with time-dependent rates and propose a new algorithm based on our rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)]. The computation for selecting next reaction firings by our time-dependent RSSA (tRSSA) is computationally efficient. Furthermore, the generated trajectory is exact by exploiting the rejection-based mechanism. We benchmark tRSSA on different biological systems with varying forms of reaction rates to demonstrate its applicability and efficiency. We reveal that for nontrivial cases, the selection of reaction firings in existing algorithms introduces approximations because the integration of reaction rates is very computationally demanding and simplifying assumptions are introduced. The selection of the next reaction firing by our approach is easier while preserving the exactness.
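The rejection step at the heart of tRSSA, for a single reaction with a time-dependent rate bounded from above, reduces to the classical thinning scheme for nonhomogeneous Poisson processes: propose candidate firings from the bounding rate, then accept each with probability rate(t)/bound. This sketch shows only that one-reaction analogue, not the full multi-reaction RSSA bookkeeping:

```python
import random

def thinning_events(rate, rate_bound, t_end, seed=11):
    """Rejection (thinning) sampling of firing times on [0, t_end]
    for one reaction with time-dependent rate `rate(t)`, where
    rate(t) <= rate_bound must hold for all t. The accepted times are
    exact samples of the nonhomogeneous process, with no integration
    of rate(t) required, which is the efficiency argument of tRSSA.
    """
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(rate_bound)           # candidate from bounding process
        if t >= t_end:
            return events
        if rng.random() < rate(t) / rate_bound:    # accept with ratio a(t)/a_max
            events.append(t)
```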
Real time implementation of a linear predictive coding algorithm on digital signal processor DSP32C
International Nuclear Information System (INIS)
Sheikh, N.M.; Usman, S.R.; Fatima, S.
2002-01-01
Pulse Code Modulation (PCM) has been widely used in speech coding. However, due to its high bit rate, PCM has severe limitations in applications where high spectral efficiency is desired, for example, in mobile communication, CD-quality broadcasting systems, etc. These limitations have motivated research into bit rate reduction techniques. Linear predictive coding (LPC) is one of the most powerful, though complex, techniques for bit rate reduction. With the introduction of powerful digital signal processors (DSPs) it is possible to implement the complex LPC algorithm in real time. In this paper we present a real-time implementation of the LPC algorithm on AT&T's DSP32C at a sampling frequency of 8192 Hz. Application of the LPC algorithm to two speech signals is discussed. Using this implementation, a bit rate reduction of 1:3 is achieved for better-than-toll-quality speech, while a reduction of 1:16 is possible for the speech quality required in military applications. (author)
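The core of an LPC analyzer is estimating the predictor coefficients per frame, classically via the autocorrelation method and the Levinson-Durbin recursion. This is a generic sketch of that standard computation, not the paper's DSP32C implementation:

```python
def lpc_coefficients(signal, order):
    """LPC predictor coefficients via the autocorrelation method and
    the Levinson-Durbin recursion.

    Returns a such that x[n] is predicted as sum_j a[j] * x[n-1-j];
    in a vocoder these coefficients (plus gain and voicing) replace
    the raw samples, giving the bit rate reduction described above.
    """
    n = len(signal)
    r = [sum(signal[i] * signal[i + k] for i in range(n - k))
         for k in range(order + 1)]                  # autocorrelation lags
    a, err = [0.0] * order, r[0]
    for m in range(order):
        # reflection coefficient for order m+1
        k = (r[m + 1] - sum(a[j] * r[m - j] for j in range(m))) / err
        new_a = a[:]
        new_a[m] = k
        for j in range(m):
            new_a[j] = a[j] - k * a[m - 1 - j]       # update lower orders
        a, err = new_a, err * (1 - k * k)            # shrink prediction error
    return a
```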
Zeng, Nianyin; Wang, Zidong; Li, Yurong; Du, Min; Cao, Jie; Liu, Xiaohui
2013-12-01
In this paper, the expectation maximization (EM) algorithm is applied to the modeling of the nano-gold immunochromatographic assay (nano-GICA) via available time series of the measured signal intensities of the test and control lines. The model for the nano-GICA is developed as the stochastic dynamic model that consists of a first-order autoregressive stochastic dynamic process and a noisy measurement. By using the EM algorithm, the model parameters, the actual signal intensities of the test and control lines, as well as the noise intensity can be identified simultaneously. Three different time series data sets concerning the target concentrations are employed to demonstrate the effectiveness of the introduced algorithm. Several indices are also proposed to evaluate the inferred models. It is shown that the model fits the data very well.
Real Time Search Algorithm for Observation Outliers During Monitoring Engineering Constructions
Latos, Dorota; Kolanowski, Bogdan; Pachelski, Wojciech; Sołoducha, Ryszard
2017-12-01
Real-time monitoring of engineering structures in case of an emergency or disaster requires collection of a large amount of data to be processed by specific analytical techniques. A quick and accurate assessment of the state of the object is crucial for a possible rescue action. One of the more significant methods for evaluating large sets of data, whether collected over a specified interval of time or permanently, is time series analysis. This paper presents a search algorithm for those time series elements which deviate from the values expected during monitoring. Quick and proper detection of observations indicating anomalous behavior of the structure allows a variety of preventive actions to be taken. In the algorithm, the mathematical formulae used provide maximal sensitivity to detect even minimal changes in the object's behavior. Sensitivity analyses were conducted for the moving average algorithm as well as for the Douglas-Peucker algorithm used for generalization of linear objects in GIS. In addition to determining the size of deviations from the average, the so-called Hausdorff distance was used. The simulations and verification carried out on laboratory survey data showed that the approach provides sufficient sensitivity for automatic real-time analysis of large amounts of data obtained from different sensors (total stations, leveling, cameras, radar).
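The moving-average screening described above can be sketched as flagging observations that deviate from the mean of a trailing window. The window length and deviation threshold here are illustrative; the paper additionally applies Douglas-Peucker generalization and the Hausdorff distance, which are omitted:

```python
def flag_outliers(series, window=5, threshold=3.0):
    """Flag observations deviating from a trailing moving average.

    Returns the indices whose absolute deviation from the mean of the
    preceding `window` samples exceeds `threshold` (a simplified
    version of the moving-average screening in the paper).
    """
    flagged = []
    for i in range(window, len(series)):
        mean = sum(series[i - window:i]) / window
        if abs(series[i] - mean) > threshold:
            flagged.append(i)
    return flagged
```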
Development of real time diagnostics and feedback algorithms for JET in view of the next step
International Nuclear Information System (INIS)
Murari, A.; Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F.; Murari, A.; Barana, O.; Albanese, R.; Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D.; Arena, P.; Bruno, M.; Ambrosino, G.; Ariola, M.; Crisanti, F.; Luna, E. de la; Sanchez, J.
2004-01-01
Real-time control of many plasma parameters will be an essential aspect in the development of reliable high-performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top-quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real-time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with internal transport barriers. Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The adopted real-time hardware and software architectures are also described, with particular attention to their relevance to ITER. (authors)
Development of real time diagnostics and feedback algorithms for JET in view of the next step
International Nuclear Information System (INIS)
Murari, A.; Barana, O.; Murari, A.; Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F.; Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D.; Albanese, R.; Arena, P.; Bruno, M.; Ambrosino, G.; Ariola, M.; Crisanti, F.; Luna, E. de la; Sanchez, J.
2004-01-01
Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configuration with ITBs (internal thermal barriers). Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The real time hardware and software adopted architectures are also described with particular attention to their relevance to ITER. (authors)
Development of real time diagnostics and feedback algorithms for JET in view of the next step
Energy Technology Data Exchange (ETDEWEB)
Murari, A.; Barana, O. [Consorzio RFX Associazione EURATOM ENEA per la Fusione, Corso Stati Uniti 4, Padua (Italy); Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F. [Euratom/UKAEA Fusion Assoc., Culham Science Centre, Abingdon, Oxon (United Kingdom); Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D. [Association EURATOM-CEA, CEA Cadarache, 13 - Saint-Paul-lez-Durance (France); Albanese, R. [Assoc. Euratom-ENEA-CREATE, Univ. Mediterranea RC (Italy); Arena, P.; Bruno, M. [Assoc. Euratom-ENEA-CREATE, Univ.di Catania (Italy); Ambrosino, G.; Ariola, M. [Assoc. Euratom-ENEA-CREATE, Univ. Napoli Federico Napoli (Italy); Crisanti, F. [Associazone EURATOM ENEA sulla Fusione, C.R. Frascati (Italy); Luna, E. de la; Sanchez, J. [Associacion EURATOM CIEMAT para Fusion, Madrid (Spain)
2004-07-01
Real-time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real-time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with ITBs (internal transport barriers). Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The adopted real-time hardware and software architectures are also described, with particular attention to their relevance to ITER. (authors)
Real Time Search Algorithm for Observation Outliers During Monitoring Engineering Constructions
Directory of Open Access Journals (Sweden)
Latos Dorota
2017-12-01
Real-time monitoring of engineering structures in case of an emergency or disaster requires the collection of a large amount of data to be processed by specific analytical techniques. A quick and accurate assessment of the state of the object is crucial for a possible rescue action. One of the more significant methods for evaluating large sets of data, collected either during a specified interval of time or permanently, is time series analysis. This paper presents a search algorithm for those time series elements which deviate from the values expected during monitoring. Quick and proper detection of observations indicating anomalous behavior of the structure allows a variety of preventive actions to be taken. The mathematical formulae used in the algorithm provide maximal sensitivity for detecting even minimal changes in the object's behavior. Sensitivity analyses were conducted for the moving-average algorithm as well as for the Douglas-Peucker algorithm used for the generalization of linear objects in GIS. In addition to determining the size of deviations from the average, the so-called Hausdorff distance was used. The simulations and verification of laboratory survey data that were carried out showed that the approach provides sufficient sensitivity for automatic real-time analysis of the large amounts of data obtained from different and various sensors (total stations, leveling, cameras, radar).
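The moving-average deviation test mentioned in the abstract can be sketched as follows; the window length, threshold factor, and function name are illustrative assumptions, not the paper's actual formulae:

```python
import statistics

def flag_outliers(series, window=5, k=3.0):
    """Flag observations deviating from the moving average of the
    preceding window by more than k moving standard deviations
    (illustrative thresholding, not the paper's formulae)."""
    flags = []
    for i, x in enumerate(series):
        ref = series[max(0, i - window):i] or [x]  # preceding window
        mean = statistics.fmean(ref)
        sd = statistics.pstdev(ref) or 1e-9        # avoid zero division
        flags.append(abs(x - mean) > k * sd)
    return flags
```

A sudden spike in an otherwise stable series is flagged, while values within the recent noise band are not.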
How Similar Are Forest Disturbance Maps Derived from Different Landsat Time Series Algorithms?
Directory of Open Access Journals (Sweden)
Warren B. Cohen
2017-03-01
Disturbance is a critical ecological process in forested systems, and disturbance maps are important for understanding forest dynamics. Landsat data are a key remote sensing dataset for monitoring forest disturbance, and there has recently been major growth in the development of disturbance mapping algorithms. Many of these algorithms take advantage of the high temporal data volume to mine subtle signals in Landsat time series, but as those signals become subtler, they are more likely to be mixed with noise in Landsat data. This study examines the similarity among seven different algorithms in their ability to map the full range of magnitudes of forest disturbance over six different Landsat scenes distributed across the conterminous US. The maps agreed very well in terms of the amount of undisturbed forest over time; however, for the ~30% of forest mapped as disturbed in a given year by at least one algorithm, there was little agreement about which pixels were affected. Algorithms that targeted higher-magnitude disturbances exhibited higher omission errors but lower commission errors than those targeting a broader range of disturbance magnitudes. These results suggest that a user of any given forest disturbance map should understand the map's strengths and weaknesses (in terms of omission and commission error rates) with respect to the disturbance targets of interest.
A Space-Time Signal Decomposition Algorithm for Downlink MIMO DS-CDMA Receivers
Wang, Yung-Yi; Fang, Wen-Hsien; Chen, Jiunn-Tsair
We propose a dimension reduction algorithm for the receiver of the downlink of direct-sequence code-division multiple access (DS-CDMA) systems in which both the transmitters and the receivers employ antenna arrays of multiple elements. To estimate the high order channel parameters, we develop a layered architecture using dimension-reduced parameter estimation algorithms to estimate the frequency-selective multipath channels. In the proposed architecture, to exploit the space-time geometric characteristics of multipath channels, spatial beamformers and constrained (or unconstrained) temporal filters are adopted for clustered-multipath grouping and path isolation. In conjunction with the multiple access interference (MAI) suppression techniques, the proposed architecture jointly estimates the directions of arrival, propagation delays, and fading amplitudes of the downlink fading multipaths. With the outputs of the proposed architecture, the signals of interest can then be naturally detected by using path-wise maximum ratio combining. Compared to traditional techniques, such as the Joint-Angle-and-Delay-Estimation (JADE) algorithm for DOA-delay joint estimation and the space-time minimum mean square error (ST-MMSE) algorithm for signal detection, computer simulations show that the proposed algorithm substantially mitigates computational complexity at the expense of only slight performance degradation.
A fuzzy logic algorithm to assign confidence levels to heart and respiratory rate time series
International Nuclear Information System (INIS)
Liu, J; McKenna, T M; Gribok, A; Reifman, J; Beidleman, B A; Tharion, W J
2008-01-01
We have developed a fuzzy logic-based algorithm to qualify the reliability of heart rate (HR) and respiratory rate (RR) vital-sign time-series data by assigning a confidence level to the data points while they are measured as a continuous data stream. The algorithm's membership functions are derived from physiology-based performance limits and mass-assignment-based, data-driven characteristics of the signals. The assigned confidence levels are based on the reliability of each HR and RR measurement as well as the relationship between them. The algorithm was tested on HR and RR data collected from subjects undertaking a range of physical activities, and it showed acceptable performance in detecting four types of faults that result in low-confidence data points (receiver operating characteristic areas under the curve ranged from 0.67 (SD 0.04) to 0.83 (SD 0.03), mean and standard deviation (SD) over all faults). The algorithm is sensitive to noise in the raw HR and RR data and will flag many data points as low confidence if the data are noisy; prior processing of the data to reduce noise allows identification of only the most substantial faults. Depending on how HR and RR data are processed, the algorithm can be applied as a tool to evaluate sensor performance or to qualify HR and RR time-series data in terms of their reliability before use in automated decision-assist systems.
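A minimal sketch of how physiology-based membership functions could map a vital-sign sample to a confidence level; the trapezoid bounds and names below are illustrative placeholders, not the limits used by the authors:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 outside (a, d), 1 on [b, c],
    linear ramps in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def confidence(hr, rr):
    """Combine per-signal memberships with a conservative min,
    mirroring the idea of qualifying HR and RR jointly."""
    hr_conf = trapezoid(hr, 25, 40, 180, 220)  # beats per minute (assumed bounds)
    rr_conf = trapezoid(rr, 2, 5, 40, 60)      # breaths per minute (assumed bounds)
    return min(hr_conf, rr_conf)
```

Samples well inside the plausible ranges get full confidence; physiologically impossible values get zero, with a graded ramp in between.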
IMPLEMENTATION OF A REAL-TIME STACKING ALGORITHM IN A PHOTOGRAMMETRIC DIGITAL CAMERA FOR UAVS
Directory of Open Access Journals (Sweden)
A. Audi
2017-08-01
In recent years, unmanned aerial vehicles (UAVs) have become an interesting tool in aerial photography and photogrammetry activities. In this context, some applications (like cloudy sky surveys, narrow-spectral imagery and night-vision imagery) need a long exposure time, where one of the main problems is the motion blur caused by erratic camera movements during image acquisition. This paper describes an automatic real-time stacking algorithm which produces a final composite image of high photogrammetric quality with an equivalent long exposure time, using several images acquired with short exposure times. Our method is inspired by feature-based image registration techniques. The algorithm is implemented on the light-weight IGN camera, which has an IMU sensor and a SoC/FPGA. To obtain the correct parameters for the resampling of images, the presented method accurately estimates the geometrical relation between the first and the Nth image, taking into account the internal parameters and the distortion of the camera. Features are detected in the first image by the FAST detector, then homologous points on the other images are obtained by template matching aided by the IMU sensors. The SoC/FPGA in the camera is used to speed up time-consuming parts of the algorithm, such as feature detection and image resampling, in order to achieve real-time performance, as we want to write only the resulting final image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, a resource usage summary, the resulting processing time, resulting images, as well as block diagrams of the described architecture. The resulting stacked images obtained on real surveys show no visible impairment. Timing results demonstrate that our algorithm can be used in real time, since its processing time is less than the writing time of an image to the storage device. An interesting by-product of this algorithm is the 3D rotation
Aspects of warped AdS3/CFT2 correspondence
Chen, Bin; Zhang, Jia-Ju; Zhang, Jian-Dong; Zhong, De-Liang
2013-04-01
In this paper we apply the thermodynamics method to investigate the holographic pictures for the BTZ black hole and the spacelike and null warped black holes in three-dimensional topologically massive gravity (TMG) and new massive gravity (NMG). Even though there are higher derivative terms in these theories, the thermodynamics method is still effective. It gives results consistent with the ones obtained by using asymptotical symmetry group (ASG) analysis. In doing the ASG analysis we develop a brute-force realization of the Barnich-Brandt-Compere formalism with Mathematica code, which also allows us to calculate the masses and the angular momenta of the black holes. In particular, we propose the warped AdS3/CFT2 correspondence in the new massive gravity, which states that quantum gravity in the warped spacetime could be holographically dual to a two-dimensional CFT with $c_R = c_L = \frac{24}{Gm\beta^2\sqrt{2(21-4\beta^2)}}$.
Design of a reading test for low vision image warping
Loshin, David S.; Wensveen, Janice; Juday, Richard D.; Barton, R. S.
1993-01-01
NASA and the University of Houston College of Optometry are examining the efficacy of image warping as a possible prosthesis for at least two forms of low vision - maculopathy and retinitis pigmentosa. Before incurring the expense of reducing the concept to practice, one would wish to have confidence that a worthwhile improvement in visual function would result. NASA's Programmable Remapper (PR) can warp an input image onto arbitrary geometric coordinate systems at full video rate, and it has recently been upgraded to accept computer-generated video text. We have integrated the Remapper with an SRI eye tracker to simulate visual malfunction in normal observers. A reading performance test has been developed to determine if the proposed warpings yield an increase in visual function; i.e., reading speed. We will describe the preliminary experimental results of this reading test with a simulated central field defect with and without remapped images.
DEFF Research Database (Denmark)
Nielsen, Martin Bjerre; Krenk, Steen
2012-01-01
A conservative time integration algorithm for rigid body rotations is presented in a purely algebraic form in terms of the four quaternion components and the four conjugate momentum variables via Hamilton's equations. The introduction of an extended mass matrix leads to a symmetric set of eight...
An algorithm to provide real time neutral beam substitution in the DIII-D tokamak
International Nuclear Information System (INIS)
Phillips, J.C.; Greene, K.L.; Hyatt, A.W.; McHarg, B.B. Jr.; Penaflor, B.G.
1999-06-01
A key component of the DIII-D tokamak fusion experiment is a flexible and easy-to-expand digital control system which actively controls a large number of parameters in real time. These include plasma shape, position, density, and total stored energy. This system, known as the PCS (plasma control system), also has the ability to directly control auxiliary plasma heating systems, such as the 20 MW of neutral beams routinely used on DIII-D. This paper describes the implementation of a real-time algorithm allowing substitution of power from one neutral beam for another, given a fault in the originally scheduled beam. Previously, in the event of a fault in one of the neutral beams, the actual power profile for the shot might be deficient, resulting in a less useful or wasted shot. Using this new real-time algorithm, a standby neutral beam may substitute within milliseconds for one which has faulted. Since single shots can have substantial value, this is an important advance for DIII-D's capabilities and utilization. Detailed results are presented, along with a description not only of the algorithm but also of the simulation setup required to prove the algorithm without the costs normally associated with using physics operations time.
Application of the Region-Time-Length algorithm to study of ...
Indian Academy of Sciences (India)
analyzed using the Region-Time-Length (RTL) algorithm based statistical technique. The utilized earthquake data were obtained from the International Seismological Centre. Thereafter, the homogeneity and completeness of the catalogue were improved. After performing iterative tests with different values of the r0 and t0 ...
Tataw, Oben Moses
2013-01-01
Interdisciplinary research in computer science requires the development of computational techniques for practical application in different domains. This usually requires careful integration of different areas of technical expertise. This dissertation presents image and time series analysis algorithms, with practical interdisciplinary applications…
Marufuzzaman, M; Reaz, M B I; Ali, M A M; Rahman, L F
2015-01-01
The goal of smart homes is to create an intelligent environment that adapts to the inhabitants' needs and assists persons who need special care and safety in their daily lives. This can be achieved by collecting ADL (activities of daily living) data and analyzing them within existing computing elements. In this research, a recent algorithm named sequence prediction via enhanced episode discovery (SPEED) is modified to include a time component in order to improve accuracy. The modified SPEED, or M-SPEED, is a sequence prediction algorithm that extends the previous SPEED algorithm by using the time duration of an appliance's ON-OFF states to decide the next state. M-SPEED discovers periodic episodes of inhabitant behavior, trains on the learned episodes, and makes decisions based on the obtained knowledge. The results showed that M-SPEED achieves 96.8% prediction accuracy, which is better than other time prediction algorithms such as PUBS, ALZ with temporal rules, and the previous SPEED. Since human behavior shows natural temporal patterns, duration times can be used to predict future events more accurately. This inhabitant activity prediction system will certainly improve smart homes by ensuring safety and better care for elderly and handicapped people.
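The idea of conditioning the next-state prediction on both the current state and its duration can be illustrated with a simple transition-counting model; the event encoding, duration binning, and function names are assumptions for illustration, not the published M-SPEED procedure:

```python
from collections import Counter, defaultdict

def train(events):
    """events: list of (state, duration_minutes) pairs in time order.
    Count observed transitions keyed by (state, coarse duration bin)."""
    model = defaultdict(Counter)
    for (state, dur), (nxt, _) in zip(events, events[1:]):
        model[(state, dur // 10)][nxt] += 1
    return model

def predict(model, state, dur):
    """Return the most frequently observed successor, or None if unseen."""
    hist = model.get((state, dur // 10))
    return hist.most_common(1)[0][0] if hist else None
```

With a few observed ON-OFF episodes, the model predicts the habitual next appliance state for a similar duration.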
Genetic algorithm for project time-cost optimization in fuzzy environment
Directory of Open Access Journals (Sweden)
Khan Md. Ariful Haque
2012-12-01
Purpose: The aim of this research is to develop a more realistic approach to solving the project time-cost optimization problem under uncertain conditions, with fuzzy time periods. Design/methodology/approach: Deterministic models for time-cost optimization are never efficient when various uncertainty factors are considered. To make such problems realistic, triangular fuzzy numbers and the concept of the α-cut method in fuzzy logic theory are employed to model the problem. Because of the NP-hard nature of the project scheduling problem, a Genetic Algorithm (GA) has been used as a searching tool. Finally, Dev-C++ 4.9.9.2 has been used to code this solver. Findings: The solution has been performed under different combinations of GA parameters, and after analysis of the results, optimum values of those parameters have been found for the best solution. Research limitations/implications: To demonstrate the application of the developed algorithm, a government-financed project launching a new product (a pre-paid electric meter) has been chosen as a real case. The algorithm is developed under some assumptions. Practical implications: The proposed model leads decision makers to choose the desired solution under different risk levels. Originality/value: Reports reveal that project optimization problems have never been solved under multiple uncertainty conditions. Here, the function has been optimized using the Genetic Algorithm search technique, with varied levels of risk and fuzzy time periods.
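A toy sketch of the GA component on a crisp (non-fuzzy) instance: each gene picks a duration/cost option for an activity, and lateness beyond a deadline is penalized. The activity data, penalty weight, and operator choices are invented for illustration; the fuzzy α-cut modeling is omitted:

```python
import random

# Hypothetical activities: each has (duration, cost) crashing options.
OPTIONS = [[(4, 100), (3, 150), (2, 240)],
           [(5, 120), (4, 170), (3, 260)],
           [(6, 90),  (5, 140), (4, 230)]]
DEADLINE = 12  # activities assumed serial, for simplicity

def fitness(chrom):
    dur = sum(OPTIONS[i][g][0] for i, g in enumerate(chrom))
    cost = sum(OPTIONS[i][g][1] for i, g in enumerate(chrom))
    return cost + 1000 * max(0, dur - DEADLINE)  # penalize lateness

def ga(pop_size=30, gens=60, pm=0.2, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randrange(len(o)) for o in OPTIONS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        nxt = pop[:2]                        # elitism: keep two best
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:10], 2)   # parents from the best decile
            cut = rng.randrange(1, len(OPTIONS))
            child = a[:cut] + b[cut:]        # one-point crossover
            if rng.random() < pm:            # mutation
                i = rng.randrange(len(OPTIONS))
                child[i] = rng.randrange(len(OPTIONS[i]))
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)
```

Run as `best = ga()`; the penalty term steers the search toward schedules that meet the deadline at low cost.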
Fire behavior simulation in Mediterranean forests using the minimum travel time algorithm
Kostas Kalabokidis; Palaiologos Palaiologou; Mark A. Finney
2014-01-01
Recent large wildfires in Greece exemplify the need for pre-fire burn probability assessment and possible landscape fire flow estimation to enhance fire planning and resource allocation. The Minimum Travel Time (MTT) algorithm, incorporated as a module in version five of FlamMap, provides valuable fire behavior functions while enabling multi-core utilization for the...
Energy Technology Data Exchange (ETDEWEB)
Li, Xinya [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland, Washington 99352, USA; Deng, Zhiqun Daniel [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland, Washington 99352, USA; Rauchenstein, Lynn T. [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland, Washington 99352, USA; Carlson, Thomas J. [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland, Washington 99352, USA
2016-04-01
Locating the position of fixed or mobile sources (i.e., transmitters) based on received measurements from sensors is an important research area that is attracting much research interest. In this paper, we present localization algorithms using time of arrivals (TOA) and time difference of arrivals (TDOA) to achieve high accuracy under line-of-sight conditions. The circular (TOA) and hyperbolic (TDOA) location systems both use nonlinear equations that relate the locations of the sensors and tracked objects. These nonlinear equations can develop accuracy challenges because of the existence of measurement errors and efficiency challenges that lead to high computational burdens. Least squares-based and maximum likelihood-based algorithms have become the most popular categories of location estimators. We also summarize the advantages and disadvantages of various positioning algorithms. By improving measurement techniques and localization algorithms, localization applications can be extended into the signal-processing-related domains of radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.
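As an illustration of the least-squares family of TOA estimators mentioned above, the classic linearization subtracts the first sensor's circle equation from the others, yielding a linear system solved via normal equations. This 2-D, noise-free sketch is a generic textbook method, not code from the paper:

```python
def toa_least_squares(anchors, ranges):
    """Linearized TOA localization: subtracting the first anchor's
    circle equation gives A p = b, solved by normal equations (2-D)."""
    (x1, y1), r1 = anchors[0], ranges[0]
    A, b = [], []
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        A.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(r1**2 - ri**2 + xi**2 - x1**2 + yi**2 - y1**2)
    # normal equations: (A^T A) p = A^T b, solved by Cramer's rule
    ata = [[sum(a[i] * a[j] for a in A) for j in range(2)] for i in range(2)]
    atb = [sum(a[i] * bi for a, bi in zip(A, b)) for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    x = (atb[0] * ata[1][1] - atb[1] * ata[0][1]) / det
    y = (ata[0][0] * atb[1] - ata[1][0] * atb[0]) / det
    return x, y
```

With noisy ranges and more than three sensors, the same normal equations give the least-squares estimate rather than an exact solution.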
Stochastic time-dependent vehicle routing problem: Mathematical models and ant colony algorithm
Directory of Open Access Journals (Sweden)
Zhengyu Duan
2015-11-01
This article addresses the stochastic time-dependent vehicle routing problem. Two mathematical models, named the robust optimal schedule time model and the minimum expected schedule time model, are proposed for the stochastic time-dependent vehicle routing problem; both can guarantee delivery within the time windows of customers. The robust optimal schedule time model only requires the variation range of link travel time, which can be conveniently derived from historical traffic data. In addition, the robust optimal schedule time model, based on the robust optimization method, can be converted into a time-dependent vehicle routing problem. Moreover, an ant colony optimization algorithm is designed to solve the stochastic time-dependent vehicle routing problem. Owing to improvements in the initial solution and the transition probability, the ant colony optimization algorithm converges well. Through computational instances and Monte Carlo simulation tests, the robust optimal schedule time model is shown to be better than the minimum expected schedule time model in computational efficiency and in coping with travel time fluctuations. Therefore, the robust optimal schedule time model is applicable in real road networks.
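The construct/evaporate/deposit loop of ant colony optimization can be sketched on a deterministic toy tour problem (the stochastic time-dependent travel times and customer time windows of the paper are omitted); all parameter values and names are illustrative:

```python
import math
import random

def aco_tour(points, n_ants=10, iters=50, rho=0.5, alpha=1.0, beta=2.0, seed=0):
    """Minimal ant colony optimization for a round trip through all points."""
    rng = random.Random(seed)
    n = len(points)
    dist = [[math.dist(p, q) or 1e-9 for q in points] for p in points]
    tau = [[1.0] * n for _ in range(n)]               # pheromone levels
    best_len, best_tour = float("inf"), None
    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            tour, unvisited = [0], set(range(1, n))
            while unvisited:                          # probabilistic construction
                i, cand = tour[-1], list(unvisited)
                w = [tau[i][j] ** alpha / dist[i][j] ** beta for j in cand]
                nxt = rng.choices(cand, weights=w)[0]
                tour.append(nxt)
                unvisited.remove(nxt)
            length = sum(dist[a][b] for a, b in zip(tour, tour[1:] + [0]))
            tours.append((length, tour))
            if length < best_len:
                best_len, best_tour = length, tour
        tau = [[t * (1 - rho) for t in row] for row in tau]   # evaporation
        for length, tour in tours:                            # deposit
            for a, b in zip(tour, tour[1:] + [0]):
                tau[a][b] += 1.0 / length
                tau[b][a] += 1.0 / length
    return best_len, best_tour
```

On a unit square the optimal round trip has length 4; shorter tours deposit more pheromone, biasing later ants toward them.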
A parallel algorithm for switch-level timing simulation on a hypercube multiprocessor
Rao, Hariprasad Nannapaneni
1989-01-01
The parallel approach to speeding up simulation is studied, specifically the simulation of digital LSI MOS circuitry on the Intel iPSC/2 hypercube. The simulation algorithm is based on RSIM, an event driven switch-level simulator that incorporates a linear transistor model for simulating digital MOS circuits. Parallel processing techniques based on the concepts of Virtual Time and rollback are utilized so that portions of the circuit may be simulated on separate processors, in parallel for as large an increase in speed as possible. A partitioning algorithm is also developed in order to subdivide the circuit for parallel processing.
Directory of Open Access Journals (Sweden)
Žigić Aleksandar D.
2005-01-01
Experimental verifications of two optimized adaptive digital signal processing algorithms implemented in two preset-time count rate meters were performed according to appropriate standards. A random pulse generator, realized using a personal computer, was used as an artificial radiation source for preliminary system tests and performance evaluations of the proposed algorithms. Then measurement results for background radiation levels were obtained. Finally, measurements with a natural radiation source, the radioisotope 90Sr-90Y, were carried out. Measurement results, conducted without and with radioisotopes for the specified errors of 10% and 5%, showed good agreement with theoretical predictions.
Majorana neutrinos in a warped 5D standard model
International Nuclear Information System (INIS)
Huber, S.J.; Shafi, Q.
2002-05-01
We consider neutrino oscillations and neutrinoless double beta decay in a five-dimensional standard model with warped geometry. Although the see-saw mechanism in its simplest form cannot be implemented because of the warped geometry, the bulk standard model neutrinos can acquire the desired (Majorana) masses from dimension-five interactions. We discuss how large mixings can arise, why the large mixing angle MSW solution for solar neutrinos is favored, and provide estimates for the mixing angle U_{e3}. Implications for neutrinoless double beta decay are also discussed. (orig.)
Thermodynamic stability of warped AdS3 black holes
International Nuclear Information System (INIS)
Birmingham, Danny; Mokhtari, Susan
2011-01-01
We study the thermodynamic stability of warped black holes in three-dimensional topologically massive gravity. The spacelike stretched black hole is parametrized by its mass and angular momentum. We determine the local and global stability properties in the canonical and grand canonical ensembles. The presence of a Hawking-Page type transition is established, and the critical temperature is determined. The thermodynamic metric of Ruppeiner is computed, and the curvature is shown to diverge in the extremal limit. The consequences of these results for the classical stability properties of warped black holes are discussed within the context of the correlated stability conjecture.
A Heuristic Scheduling Algorithm for Minimizing Makespan and Idle Time in a Nagare Cell
Directory of Open Access Journals (Sweden)
M. Muthukumaran
2012-01-01
Adopting a focused factory is a powerful approach for today's manufacturing enterprises. This paper introduces the basic manufacturing concept for a struggling manufacturer with limited conventional resources, providing an alternative solution to cell scheduling by implementing the Nagare cell technique. The Nagare cell is a Japanese concept with more objectives than the cellular manufacturing system. It is a combination of manual and semiautomatic machine layouts as cells, which gives maximum output flexibility for all kinds of low-to-medium- and medium-to-high-volume production. The solution adopted is to create a dedicated group of conventional machines, all but one of which are already available on the shop floor. This paper focuses on the step-by-step development of a heuristic scheduling algorithm. In the algorithm, the summation of the processing times of all products on each machine is calculated first, and then the sums of processing times are sorted by the shortest-processing-time rule to obtain the assignment schedule. Based on the assignment schedule, the Nagare cell layout is arranged for processing the products. In addition, the algorithm provides steps to determine the product ready time, machine idle time, and product idle time. The Gantt chart, the experimental analysis, and the comparative results are illustrated with five (1×8 to 5×8) scheduling problems. Finally, the objective of minimizing makespan and idle time with greater customer satisfaction is studied.
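The first steps of the heuristic, totaling processing times and ordering products by the shortest-processing-time (SPT) rule, can be sketched as follows; the data layout and function name are assumptions based on one reading of the abstract:

```python
def spt_order(ptime):
    """ptime[i][j] = processing time of product i on machine j.
    Return product indices ordered by total processing time (SPT rule)."""
    totals = [(sum(row), i) for i, row in enumerate(ptime)]
    return [i for _, i in sorted(totals)]
```

The resulting sequence is then used to lay out the Nagare cell and to derive ready and idle times for each product and machine.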
Real-time algorithms for JET hard X-ray and gamma-ray profile monitor
International Nuclear Information System (INIS)
Fernandes, A.; Pereira, R.C.; Valcárcel, D.F.; Alves, D.; Carvalho, B.B.; Sousa, J.; Kiptily, V.; Correia, C.M.B.A.; Gonçalves, B.
2014-01-01
Highlights: • Real-time tools and mechanisms are required for data handling and machine control. • A new DAQ system, ATCA based, with embedded FPGAs, was installed at JET. • Different real-time algorithms were developed for FPGAs and MARTe application. • MARTe provides the interface to CODAS and to the JET real-time network. • The new DAQ system is capable to process and deliver data in real-time. - Abstract: The steady state operation with high energy content foreseen for future generation of fusion devices will necessarily demand dedicated real-time tools and mechanisms for data handling and machine control. Consequently, the real-time systems for those devices should be carefully selected and their capabilities previously established. The Joint European Torus (JET) is undertaking an enhancement program, which includes tests of relevant real-time tools for the International Thermonuclear Experimental Reactor (ITER), a key experiment for future fusion devices. In these enhancements a new Data AcQuisition (DAQ) system is included, with real-time processing capabilities, for the JET hard X-ray and gamma-ray profile monitor. The DAQ system is composed of dedicated digitizer modules with embedded Field Programmable Gate Array (FPGA) devices. The interface between the DAQ system, the JET control and data acquisition system and the JET real-time data network is provided by the Multithreaded Application Real-Time executor (MARTe). This paper describes the real-time algorithms, developed for both digitizers’ FPGAs and MARTe application, capable of meeting the DAQ real-time requirements. The new DAQ system, including the embedded real-time features, was commissioned during the 2012 experiments. Results achieved with these real-time algorithms during experiments are presented.
A time reversal algorithm in acoustic media with Dirac measure approximations
Bretin, Élie; Lucas, Carine; Privat, Yannick
2018-04-01
This article is devoted to the study of a photoacoustic tomography model, where one is led to consider the solution of the acoustic wave equation with a source term written as a separated-variables function in time and space, whose temporal component is in some sense close to the derivative of the Dirac distribution at t = 0. This models a continuous-wave laser illumination performed during a short interval of time. We introduce an algorithm for reconstructing the space component of the source term from the measurements of the solution recorded by sensors during a time T all along the boundary of a connected bounded domain. It is based on introducing an auxiliary equivalent Cauchy problem, which allows an explicit reconstruction formula to be derived, and then on the use of a deconvolution procedure. Numerical simulations illustrate our approach. Finally, this algorithm is also extended to elastic wave systems.
The Effect of Knitting Parameter and Finishing on Elastic Property of PET/PBT Warp Knitted Fabric
Directory of Open Access Journals (Sweden)
Chen Qing
2017-12-01
This study investigated the elastic elongation and elastic recovery of elastic warp knitted fabric made of PET (polyethylene terephthalate) and PBT (polybutylene terephthalate) filament. Using 50/24F PET and 50D/24F PBT in two threading bars, tricot, locknit and satin warp knitted fabrics were produced on an E28 tricot warp knitting machine. The knitting parameters influencing the elastic elongation under 100 N were analyzed in terms of fabric structure, yarn run-in speed and drawing density set on the machine. Besides, the dyeing temperature and heat setting temperature/time were also examined in order to retain proper elastic elongation and elastic recovery. The relationships between elastic elongation and the knitting and finishing parameters were analyzed. Finally, the elastic recovery of the PET/PBT warp knitted fabric was examined to demonstrate the elastic property of the final finished fabric. This study could help to further exploit the use of PET/PBT warp knitted fabric in the development of elastic garments in the future.
A First-order Prediction-Correction Algorithm for Time-varying (Constrained) Optimization: Preprint
Energy Technology Data Exchange (ETDEWEB)
Dall-Anese, Emiliano [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Simonetto, Andrea [Universite catholique de Louvain]
2017-07-25
This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function is computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function (and does not require the computation of its inverse). Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the algorithms.
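As a rough sketch of the prediction-correction idea (for a scalar, unconstrained cost with a known drift, not the paper's constrained first-order method), one can predict how the optimizer moves from the time variation of the cost and then correct with a gradient step; all functions and step sizes below are illustrative assumptions:

```python
import numpy as np

def track_minimizer(a, da, x0, t0, dt, steps, alpha=0.5):
    """Track argmin_x f(x, t) = 0.5 * (x - a(t))**2 over time.

    Prediction: compensate the drift of the optimum using the known
    time derivative a'(t). Correction: one gradient-descent step on
    the cost at the new time instant.
    """
    x, t = x0, t0
    traj = []
    for _ in range(steps):
        x = x + dt * da(t)           # prediction: follow the moving optimum
        t += dt
        x = x - alpha * (x - a(t))   # correction: gradient step on f(., t)
        traj.append(x)
    return np.array(traj)

a, da = np.sin, np.cos               # optimum drifts as sin(t)
xs = track_minimizer(a, da, x0=0.0, t0=0.0, dt=0.01, steps=1000)
# the tracking error stays small because prediction follows the drift
print(abs(xs[-1] - np.sin(10.0)))
```

Without the prediction step, the same gradient correction lags the moving optimum by an error proportional to the drift speed; the prediction shrinks that lag to second order in the sampling interval.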
Modified SURF Algorithm Implementation on FPGA For Real-Time Object Tracking
Directory of Open Access Journals (Sweden)
Tomyslav Sledevič
2013-05-01
Full Text Available The paper describes the FPGA-based implementation of the modified speeded-up robust features (SURF) algorithm. An FPGA was selected for the parallel implementation, using VHDL, to ensure feature extraction in real time. A sliding 84×84 window was used to store integral pixels and accelerate the Hessian determinant calculation, orientation assignment and descriptor estimation. Local extremum search was used to find points of interest across 8 scales. The simplified descriptor and orientation vector were calculated in parallel across 6 scales. The algorithm was evaluated by tracking a marker and drawing a plane or cube. All parts of the algorithm ran on a 25 MHz clock. The video stream was generated by a 60 fps, 640×480 pixel camera. Article in Lithuanian.
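The speed of SURF-style box filtering rests on the integral image, which turns any rectangular sum into a constant-time lookup; the FPGA's sliding window stores exactly these integral pixels. A minimal software sketch of the data structure:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[y, x] = sum of img[:y, :x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, y, x, h, w):
    """Sum of the h-by-w box with top-left corner (y, x), in O(1):
    four table lookups regardless of the box size."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
print(box_sum(ii, 1, 1, 2, 2))  # → 30, the sum of img[1:3, 1:3]
```

The box filters that approximate the Gaussian second derivatives in the Hessian determinant are built from a handful of such box sums, which is why the computation parallelizes so well in hardware.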
Efficient Fourier-based algorithms for time-periodic unsteady problems
Gopinath, Arathi Kamath
2007-12-01
This dissertation work proposes two algorithms for the simulation of time-periodic unsteady problems via the solution of Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations. These algorithms use a Fourier representation in time and hence solve for the periodic state directly without resolving transients (which consume most of the resources in a time-accurate scheme). In contrast to conventional Fourier-based techniques which solve the governing equations in frequency space, the new algorithms perform all the calculations in the time domain, and hence require minimal modifications to an existing solver. The complete space-time solution is obtained by iterating in a fifth pseudo-time dimension. Various time-periodic problems such as helicopter rotors, wind turbines, turbomachinery and flapping-wings can be simulated using the Time Spectral method. The algorithm is first validated using pitching airfoil/wing test cases. The method is further extended to turbomachinery problems, and computational results verified by comparison with a time-accurate calculation. The technique can be very memory intensive for large problems, since the solution is computed (and hence stored) simultaneously at all time levels. Often, the blade counts of a turbomachine are rescaled such that a periodic fraction of the annulus can be solved. This approximation enables the solution to be obtained at a fraction of the cost of a full-scale time-accurate solution. For a viscous computation over a three-dimensional single-stage rescaled compressor, an order of magnitude savings is achieved. The second algorithm, the reduced-order Harmonic Balance method is applicable only to turbomachinery flows, and offers even larger computational savings than the Time Spectral method. It simulates the true geometry of the turbomachine using only one blade passage per blade row as the computational domain. In each blade row of the turbomachine, only the dominant frequencies are resolved, namely
Algorithm for determining two-periodic steady-states in AC machines directly in time domain
Directory of Open Access Journals (Sweden)
Sobczyk Tadeusz J.
2016-09-01
Full Text Available This paper describes an algorithm for finding steady states in AC machines in cases where they are two-periodic in nature. The algorithm enables the steady-state solution to be identified directly in the time domain, despite the fact that two-periodic waveforms do not repeat over any finite time interval. The basis for the algorithm is a discrete differential operator that specifies the instantaneous values of the derivative of a two-periodic function at a selected set of points on the basis of the values of that function at the same set of points. This allows algebraic equations to be developed that define the steady-state solution on a chosen point set for the nonlinear differential equations describing AC machines, when the electrical and mechanical equations must be solved together. That set of values determines the steady-state solution at any time instant up to infinity. The algorithm described in this paper is competitive with the approach known in the literature based on the harmonic balance method, which operates in the frequency domain.
Oh, Cheolhwan; Huang, Xiaodong; Regnier, Fred E; Buck, Charles; Zhang, Xiang
2008-02-01
We report a novel peak sorting method for the two-dimensional gas chromatography/time-of-flight mass spectrometry (GC x GC/TOF-MS) system. The objective of peak sorting is to recognize peaks from the same metabolite occurring in different samples from thousands of peaks detected in the analytical procedure. The developed algorithm is based on the fact that the chromatographic peaks for a given analyte have similar retention times in all of the chromatograms. Raw instrument data are first processed by ChromaTOF (Leco) software to provide the peak tables. Our algorithm achieves peak sorting by utilizing the first- and second-dimension retention times in the peak tables and the mass spectra generated during the process of electron impact ionization. The algorithm searches the peak tables for the peaks generated by the same type of metabolite using several search criteria. Our software also includes options to eliminate non-target peaks from the sorting results, e.g., peaks of contaminants. The developed software package has been tested using a mixture of standard metabolites and another mixture of standard metabolites spiked into human serum. Manual validation demonstrates high accuracy of peak sorting with this algorithm.
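The matching rule described above can be sketched as follows; the tolerances, the greedy group assignment, and the cosine-similarity scoring of mass spectra are illustrative assumptions, not the published parameters of the algorithm:

```python
import numpy as np

def sort_peaks(peak_tables, rt1_tol=5.0, rt2_tol=0.05, spec_sim=0.9):
    """Group peaks of the same metabolite across samples.

    Each table is a list of (rt1, rt2, spectrum) tuples, one table per
    sample. A peak joins an existing group when both retention times are
    within tolerance of the group's first peak and the cosine similarity
    of the mass spectra exceeds `spec_sim`; otherwise it starts a group.
    """
    groups = []  # each group: list of (sample_idx, peak)
    for s, table in enumerate(peak_tables):
        for peak in table:
            rt1, rt2, spec = peak
            placed = False
            for g in groups:
                r1, r2, ref = g[0][1]
                cos = np.dot(spec, ref) / (np.linalg.norm(spec) * np.linalg.norm(ref))
                if abs(rt1 - r1) <= rt1_tol and abs(rt2 - r2) <= rt2_tol and cos >= spec_sim:
                    g.append((s, peak))
                    placed = True
                    break
            if not placed:
                groups.append([(s, peak)])
    return groups

spec_a = np.array([1.0, 0.2, 0.0, 0.5])
spec_b = np.array([0.9, 0.25, 0.0, 0.45])   # same metabolite, slight drift
tables = [[(100.0, 1.20, spec_a)], [(102.0, 1.22, spec_b)]]
print(len(sort_peaks(tables)))  # → 1: the two peaks fall into one group
```

Removing contaminant peaks, as the abstract mentions, would amount to an extra filter on the resulting groups.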
An Efficient Randomized Algorithm for Real-Time Process Scheduling in PicOS Operating System
Helmy, Tarek; Fatai, Anifowose; Sallam, El-Sayed
PicOS is an event-driven operating environment designed for use with embedded networked sensors. More specifically, it is designed to support the concurrency of intensive operations required by networked sensors with minimal hardware requirements. The existing process scheduling algorithms of PicOS, a commercial tiny, low-footprint, real-time operating system, have their associated drawbacks. An efficient alternative algorithm, based on a randomized selection policy, has been proposed, demonstrated, confirmed to be efficient and fair on average, and recommended for implementation in PicOS. Simulations were carried out, and performance measures such as Average Waiting Time (AWT) and Average Turn-around Time (ATT) were used to assess the efficiency of the proposed randomized version over the existing ones. The results show that the randomized algorithm is the most attractive for implementation in PicOS, since it is the fairest and has the lowest AWT and ATT on average among the non-preemptive scheduling algorithms implemented in this paper.
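A toy version of this comparison can be sketched as follows; the burst times, the AWT formula for a non-preemptive schedule, and the single random draw (a real study would average over many runs) are all illustrative, not PicOS code:

```python
import random

def average_waiting_time(bursts, order):
    """Average waiting time for a non-preemptive schedule: each process
    waits for the total burst time of everything scheduled before it."""
    elapsed, waits = 0, []
    for i in order:
        waits.append(elapsed)
        elapsed += bursts[i]
    return sum(waits) / len(waits)

random.seed(0)
bursts = [8, 3, 12, 4, 6]                       # hypothetical CPU bursts
fcfs = average_waiting_time(bursts, range(len(bursts)))
rand = average_waiting_time(bursts,
                            random.sample(range(len(bursts)), len(bursts)))
print(fcfs, rand)
```

ATT follows the same bookkeeping with each process's own burst added to its wait; averaging the randomized policy over many shuffles is what the abstract's simulation does at scale.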
Directory of Open Access Journals (Sweden)
Juan Pardo
2015-04-01
Full Text Available Time series forecasting is an important predictive methodology which can be applied to a wide range of problems. In particular, forecasting the indoor temperature permits improved utilization of the HVAC (Heating, Ventilating and Air Conditioning) systems in a home and thus better energy efficiency. With this purpose, the paper describes how to implement an Artificial Neural Network (ANN) algorithm on a low-cost system-on-chip to develop an autonomous intelligent wireless sensor network. The present paper uses a Wireless Sensor Network (WSN) to monitor and forecast the indoor temperature in a smart home, based on low-resource, low-cost microcontroller technology such as the 8051MCU. An on-line learning approach, based on the Back-Propagation (BP) algorithm for ANNs, has been developed for real-time time series learning. It trains the model with every new data point that arrives at the system, without saving enormous quantities of data to create a historical database as usual, i.e., without previous knowledge. To validate the approach, a simulation study against a Bayesian baseline model was carried out on a database from a real application, in order to assess performance and accuracy. The core of the paper is a new algorithm, based on the BP one, which is described in detail; the challenge was how to implement a computationally demanding algorithm on a simple architecture with very few hardware resources.
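The on-line flavor of BP (train on each arriving sample, keep no history) can be sketched with a tiny network; the layer sizes, learning rate, and synthetic data stream below are illustrative assumptions, not the paper's 8051 implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 1-4-1 tanh network trained one sample at a time.
w1 = rng.normal(0, 0.5, (4, 1)); b1 = np.zeros((4, 1))
w2 = rng.normal(0, 0.5, (1, 4)); b2 = np.zeros((1, 1))
lr = 0.05

def step(x, y):
    """One on-line BP update on a single (x, y) pair; returns the loss."""
    global w1, b1, w2, b2
    h = np.tanh(w1 @ x + b1)              # forward pass
    out = w2 @ h + b2
    err = out - y                         # backward pass
    gh = (w2.T @ err) * (1 - h ** 2)
    w2 -= lr * err @ h.T; b2 -= lr * err
    w1 -= lr * gh @ x.T;  b1 -= lr * gh
    return float(err ** 2)

# stream of samples from a smooth target; nothing is stored afterwards
losses = [step(np.array([[x]]), np.array([[np.sin(x)]]))
          for x in rng.uniform(-2, 2, 3000)]
print(sum(losses[:100]) / 100, sum(losses[-100:]) / 100)
```

The squared error on late samples drops well below that on early ones, which is the whole point: the model adapts continuously without ever holding a historical database.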
Directory of Open Access Journals (Sweden)
Helio Yochihiro Fuchigami
2014-08-01
Full Text Available This article addresses the problem of minimizing makespan on two parallel flow shops with proportional processing and setup times. The setup times are separated and sequence-independent. The parallel flow shop scheduling problem is a specific case of the well-known hybrid flow shop, characterized by a multistage production system with more than one machine working in parallel at each stage. This situation is very common in various kinds of companies, such as the chemical, electronics, automotive, pharmaceutical and food industries. This work proposes six simulated annealing algorithms, their perturbation schemes and an algorithm for initial sequence generation. The study can be classified as applied research in its nature, exploratory in its objectives and experimental in its procedures, with a quantitative approach. The proposed algorithms were effective regarding solution quality and computationally efficient. Results of an analysis of variance (ANOVA) revealed no significant difference between the schemes in terms of makespan. The PS4 scheme, which moves a subsequence of jobs, is suggested, as it provides the best percentage of success. It was also found that there is a significant difference between the results of the algorithms for each value of the proportionality factor of the processing and setup times of the flow shops.
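A minimal simulated annealing sketch for a permutation flow shop illustrates the general scheme; it uses a single flow shop with a plain swap perturbation, omitting the setup times, parallel shops and the six variants studied in the article, and all numbers are illustrative:

```python
import math
import random

def makespan(seq, p):
    """Completion time of a permutation flow shop: p[j][m] is the
    processing time of job j on machine m."""
    m = len(p[0])
    finish = [0.0] * m
    for j in seq:
        for k in range(m):
            start = max(finish[k], finish[k - 1] if k else 0.0)
            finish[k] = start + p[j][k]
    return finish[-1]

def anneal(p, temp=50.0, cool=0.995, iters=2000, seed=1):
    """Simulated annealing with a swap move and geometric cooling."""
    rng = random.Random(seed)
    seq = list(range(len(p)))
    best = cur = makespan(seq, p)
    best_seq = seq[:]
    for _ in range(iters):
        i, j = rng.sample(range(len(seq)), 2)
        seq[i], seq[j] = seq[j], seq[i]
        cand = makespan(seq, p)
        if cand <= cur or rng.random() < math.exp((cur - cand) / temp):
            cur = cand
            if cur < best:
                best, best_seq = cur, seq[:]
        else:
            seq[i], seq[j] = seq[j], seq[i]   # undo the rejected move
        temp *= cool
    return best, best_seq

p = [[3, 6], [5, 2], [1, 2], [6, 7], [7, 1]]  # 5 jobs, 2 machines
best, seq = anneal(p)
print(best)  # → 23.0, which matches the Johnson-rule optimum for this instance
```

The article's perturbation schemes replace the swap move above; PS4, for example, relocates a whole subsequence of jobs rather than exchanging two.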
Algorithms and programming tools for image processing on the MPP:3
Reeves, Anthony P.
1987-01-01
This is the third and final report on the work done for NASA Grant 5-403 on Algorithms and Programming Tools for Image Processing on the MPP:3. All the work done for this grant is summarized in the introduction. Work done since August 1986 is reported in detail. Research for this grant falls under the following headings: (1) fundamental algorithms for the MPP; (2) programming utilities for the MPP; (3) the Parallel Pascal Development System; and (4) performance analysis. In this report, the results of two efforts are reported: region growing, and performance analysis of important characteristic algorithms. In each case, timing results from MPP implementations are included. A paper is included in which parallel algorithms for region growing on the MPP are discussed. These algorithms permit different-sized regions to be merged in parallel. Details on the implementation and performance of several important MPP algorithms are given. These include a number of standard permutations, the FFT, convolution, arbitrary data mappings, image warping, and pyramid operations, all of which have been implemented on the MPP. The permutation and image warping functions have been included in the standard development system library.
A Time-Domain Filtering Scheme for the Modified Root-MUSIC Algorithm
Yamada, Hiroyoshi; Yamaguchi, Yoshio; Sengoku, Masakazu
1996-01-01
A new superresolution technique is proposed for high-resolution estimation in scattering analysis. For a complicated multipath propagation environment, it is not enough to estimate only the delay times of the signals; some other information is required to identify the signal path. The proposed method can estimate the frequency characteristic of each signal in addition to its delay time. One method, called the modified (Root-)MUSIC algorithm, is known as a technique that can treat both of t...
International Nuclear Information System (INIS)
Haug, E.; Rouvray, A.L. de; Nguyen, Q.S.
1977-01-01
This study proposes a general nonlinear algorithm stability criterion; it introduces a nonlinear algorithm, easily implemented in existing incremental/iterative codes, and it applies the new scheme beneficially to problems of linear elastic dynamic snap buckling. Based on the concept of energy conservation, the paper outlines an algorithm which degenerates into the trapezoidal rule if applied to linear systems. The new algorithm conserves energy in systems having elastic potentials up to fourth order in the displacements. This is true in the important case of nonlinear total Lagrangian formulations where linear elastic material properties are substituted. The scheme is easily implemented in existing incremental-iterative codes with provisions for stiffness reformation and containing the basic Newmark scheme. Numerical analyses of dynamic stability can be dramatically sensitive to amplitude errors, because damping algorithms may mask, and overestimating schemes may numerically trigger, the physical instability. The newly proposed scheme has been applied with larger time steps and less cost to the dynamic snap buckling of simple one- and multi-degree-of-freedom structures for various initial conditions.
Temporal Gillespie Algorithm: Fast Simulation of Contagion Processes on Time-Varying Networks.
Vestergaard, Christian L; Génois, Mathieu
2015-10-01
Stochastic simulations are one of the cornerstones of the analysis of dynamical processes on complex networks, and are often the only accessible way to explore their behavior. The development of fast algorithms is paramount to allow large-scale simulations. The Gillespie algorithm can be used for fast simulation of stochastic processes, and variants of it have been applied to simulate dynamical processes on static networks. However, its adaptation to temporal networks remains non-trivial. We here present a temporal Gillespie algorithm that solves this problem. Our method is applicable to general Poisson (constant-rate) processes on temporal networks, stochastically exact, and up to multiple orders of magnitude faster than traditional simulation schemes based on rejection sampling. We also show how it can be extended to simulate non-Markovian processes. The algorithm is easily applicable in practice, and as an illustration we detail how to simulate both Poissonian and non-Markovian models of epidemic spreading. Namely, we provide pseudocode and its implementation in C++ for simulating the paradigmatic Susceptible-Infected-Susceptible and Susceptible-Infected-Recovered models and a Susceptible-Infected-Recovered model with non-constant recovery rates. For empirical networks, the temporal Gillespie algorithm is here typically from 10 to 100 times faster than rejection sampling.
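The rate bookkeeping of a temporal Gillespie step can be sketched for SIS dynamics on a sequence of unit-length network snapshots; the snapshot representation and the parameters below are illustrative assumptions, and this is far simpler than the paper's optimized C++ implementation:

```python
import random

def temporal_gillespie_sis(snapshots, beta, mu, infected, seed=0):
    """SIS on a temporal network given as unit-length edge-list snapshots.

    One exponential waiting 'budget' tau is drawn at a time; rate mass
    (total rate x elapsed time) is consumed across snapshots until tau
    is exhausted, at which point an event fires and tau is redrawn.
    Infection rate beta per S-I contact, recovery rate mu per infected.
    """
    rng = random.Random(seed)
    infected = set(infected)
    tau = rng.expovariate(1.0)
    for edges in snapshots:
        left = 1.0                       # time remaining in this snapshot
        while left > 0:
            si = [(u, v) for u, v in edges
                  if (u in infected) != (v in infected)]
            rate = beta * len(si) + mu * len(infected)
            if rate * left < tau:        # no event before the snapshot ends
                tau -= rate * left
                break
            left -= tau / rate           # event fires after tau/rate time
            if rng.random() * rate < beta * len(si):
                u, v = rng.choice(si)    # infection along a random S-I edge
                infected.add(u if v in infected else v)
            else:
                infected.discard(rng.choice(sorted(infected)))  # recovery
            tau = rng.expovariate(1.0)
    return infected

snaps = [[(0, 1)], [(1, 2)]]             # node 0 meets 1, then 1 meets 2
print(temporal_gillespie_sis(snaps, beta=100.0, mu=0.0, infected=[0]))
# → {0, 1, 2}: with beta this large the infection crosses both contacts
```

The key trick, consuming a single exponential variate against a piecewise-constant total rate, is what removes the rejection steps of naive schemes.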
A Modular Low-Complexity ECG Delineation Algorithm for Real-Time Embedded Systems.
Bote, Jose Manuel; Recas, Joaquin; Rincon, Francisco; Atienza, David; Hermida, Roman
2018-03-01
This work presents a new modular and low-complexity algorithm for the delineation of the different ECG waves (QRS, P and T peaks, onsets, and ends). Involving a reduced number of operations per second and having a small memory footprint, this algorithm is intended to perform real-time delineation on resource-constrained embedded systems. The modular design allows the algorithm to automatically adjust the delineation quality at runtime across a wide range of modes and sampling rates, from an ultralow-power mode when no arrhythmia is detected, in which the ECG is sampled at low frequency, to a complete high-accuracy delineation mode in the case of arrhythmia, in which the ECG is sampled at high frequency and all the ECG fiducial points are detected. The delineation algorithm has been tuned using the QT database, providing very high sensitivity and positive predictivity, and validated with the MIT database. The errors in the delineation of all the fiducial points are below the tolerances given by the Common Standards for Electrocardiography Committee in the high-accuracy mode, except for the P-wave onset, for which the algorithm exceeds the agreed tolerance by only a fraction of the sample duration. The computational load on the ultralow-power 8-MHz TI MSP430 series microcontroller ranges from 0.2% to 8.5% according to the mode used.
Research on the time optimization model algorithm of Customer Collaborative Product Innovation
Directory of Open Access Journals (Sweden)
Guodong Yu
2014-01-01
Full Text Available Purpose: To improve the efficiency of information sharing among the innovation agents of customer collaborative product innovation and to shorten the product design cycle, an improved genetic annealing algorithm for time optimization is presented. Design/methodology/approach: Based on an analysis of the objective relationships between the design tasks, the paper adopts the job shop scheduling problem as the machining model and proposes an improved genetic algorithm, based on niche technology, to solve it; a better collaborative product innovation design time schedule is thus obtained, improving efficiency. Finally, through the collaborative innovation design of a certain type of mobile phone, the proposed model and method were verified to be correct and effective. Findings and Originality/value: An algorithm with obvious advantages in searching capability and optimization efficiency for customer collaborative product innovation is proposed. To address the defects of the traditional genetic annealing algorithm, a niche genetic annealing algorithm is presented. Firstly, it avoids the deletion of effective genes at the early search stage and guarantees the diversity of solutions. Secondly, adaptive double-point crossover and swap mutation strategies are introduced to overcome the long solving process and easy convergence to local minima caused by fixed crossover and mutation probabilities. Thirdly, an elite reservation strategy is adopted, so that loss of the optimal solution is effectively avoided and evolution is accelerated. Originality/value: Firstly, the improved genetic simulated annealing algorithm overcomes defects such as effective genes being easily lost in the early search. It helps to shorten the calculation process and improve the accuracy of the convergence value. Moreover, it speeds up evolution and ensures the reliability of the optimal solution. Meanwhile, it has obvious advantages in efficiency of
Embedded algorithms within an FPGA-based system to process nonlinear time series data
Jones, Jonathan D.; Pei, Jin-Song; Tull, Monte P.
2008-03-01
This paper presents some preliminary results of an ongoing project. A pattern classification algorithm is being developed and embedded into a Field-Programmable Gate Array (FPGA) and microprocessor-based data processing core in this project. The goal is to enable and optimize the functionality of onboard data processing of nonlinear, nonstationary data for smart wireless sensing in structural health monitoring. Compared with traditional microprocessor-based systems, fast growing FPGA technology offers a more powerful, efficient, and flexible hardware platform including on-site (field-programmable) reconfiguration capability of hardware. An existing nonlinear identification algorithm is used as the baseline in this study. The implementation within a hardware-based system is presented in this paper, detailing the design requirements, validation, tradeoffs, optimization, and challenges in embedding this algorithm. An off-the-shelf high-level abstraction tool along with the Matlab/Simulink environment is utilized to program the FPGA, rather than coding the hardware description language (HDL) manually. The implementation is validated by comparing the simulation results with those from Matlab. In particular, the Hilbert Transform is embedded into the FPGA hardware and applied to the baseline algorithm as the centerpiece in processing nonlinear time histories and extracting instantaneous features of nonstationary dynamic data. The selection of proper numerical methods for the hardware execution of the selected identification algorithm and consideration of the fixed-point representation are elaborated. Other challenges include the issues of the timing in the hardware execution cycle of the design, resource consumption, approximation accuracy, and user flexibility of input data types limited by the simplicity of this preliminary design. Future work includes making an FPGA and microprocessor operate together to embed a further developed algorithm that yields better
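The FFT-based analytic signal is the software analogue of the Hilbert transform block embedded in the FPGA; a sketch extracting the instantaneous amplitude of an amplitude-modulated signal (the test signal is illustrative):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT: zero the negative frequencies and double
    the positive ones, so the imaginary part is the Hilbert transform."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)

t = np.linspace(0, 1, 1024, endpoint=False)
# 50 Hz carrier, amplitude-modulated at 2 Hz
x = (1 + 0.5 * np.sin(2 * np.pi * 2 * t)) * np.cos(2 * np.pi * 50 * t)
envelope = np.abs(analytic_signal(x))   # instantaneous amplitude
# error vs. the true envelope is near machine precision here, since
# both components are exactly periodic on this grid
print(np.max(np.abs(envelope - (1 + 0.5 * np.sin(2 * np.pi * 2 * t)))))
```

The magnitude and phase of the analytic signal are the instantaneous features (amplitude, frequency) that the FPGA design extracts from nonstationary time histories.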
Phellan, Renzo; Forkert, Nils D
2017-11-01
Vessel enhancement algorithms are often used as a preprocessing step for vessel segmentation in medical images to improve the overall segmentation accuracy. Each algorithm uses different characteristics to enhance vessels, such that the most suitable algorithm may vary for different applications. This paper presents a comparative analysis of the accuracy gains in vessel segmentation generated by the use of nine vessel enhancement algorithms: Multiscale vesselness using the formulas described by Erdt (MSE), Frangi (MSF), and Sato (MSS), optimally oriented flux (OOF), ranking orientations responses path operator (RORPO), the regularized Perona-Malik approach (RPM), vessel enhanced diffusion (VED), hybrid diffusion with continuous switch (HDCS), and the white top hat algorithm (WTH). The filters were evaluated and compared based on time-of-flight MRA datasets and corresponding manual segmentations from 5 healthy subjects and 10 patients with an arteriovenous malformation. Additionally, five synthetic angiographic datasets with corresponding ground truth segmentation were generated with three different noise levels (low, medium, and high) and also used for comparison. The parameters for each algorithm and subsequent segmentation were optimized using leave-one-out cross evaluation. The Dice coefficient, Matthews correlation coefficient, area under the ROC curve, number of connected components, and true positives were used for comparison. The results of this study suggest that vessel enhancement algorithms do not always lead to more accurate segmentation results compared to segmenting nonenhanced images directly. Multiscale vesselness algorithms, such as MSE, MSF, and MSS proved to be robust to noise, while diffusion-based filters, such as RPM, VED, and HDCS ranked in the top of the list in scenarios with medium or no noise. Filters that assume tubular-shapes, such as MSE, MSF, MSS, OOF, RORPO, and VED show a decrease in accuracy when considering patients with an AVM
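Two of the comparison metrics used in the study are easy to state precisely; a sketch on synthetic binary masks (the mask shapes are illustrative, not the study's data):

```python
import numpy as np

def dice(seg, ref):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    inter = np.logical_and(seg, ref).sum()
    return 2.0 * inter / (seg.sum() + ref.sum())

def matthews(seg, ref):
    """Matthews correlation coefficient for binary masks."""
    tp = np.logical_and(seg, ref).sum()
    tn = np.logical_and(~seg, ~ref).sum()
    fp = np.logical_and(seg, ~ref).sum()
    fn = np.logical_and(~seg, ref).sum()
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return (tp * tn - fp * fn) / denom if denom else 0.0

ref = np.zeros((8, 8), dtype=bool); ref[2:6, 2:6] = True   # ground truth
seg = np.zeros((8, 8), dtype=bool); seg[3:6, 2:6] = True   # undersegmented
print(dice(seg, ref))  # → ~0.857
```

Unlike Dice, the MCC also rewards true negatives, which matters for vessels because the foreground occupies only a small fraction of the image.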
A Dynamic Traffic Signal Timing Model and its Algorithm for Junction of Urban Road
DEFF Research Database (Denmark)
Cai, Yanguang; Cai, Hao
2012-01-01
As an important part of Intelligent Transportation Systems, scientific traffic signal timing at junctions can improve the efficiency of urban transport. This paper presents a novel dynamic traffic signal timing model. According to the characteristics of the model, a hybrid chaotic quantum evolutionary algorithm is employed to solve it. The proposed model has a simple structure and only requires that the traffic inflow speed and outflow speed be bounded functions with at most a finite number of discontinuity points. This condition is very loose and better meets the requirements of practical real-time and dynamic signal control of junctions. To obtain the optimal solution of the model by the hybrid chaotic quantum evolutionary algorithm, the model is converted to an easily solvable form. To simplify calculation, we give the expression of the partial derivative and change rate of the objective function...
Iris unwrapping using the Bresenham circle algorithm for real-time iris recognition
Carothers, Matthew T.; Ngo, Hau T.; Rakvic, Ryan N.; Broussard, Randy P.
2015-02-01
An efficient parallel architecture design for the iris unwrapping process in a real-time iris recognition system using the Bresenham circle algorithm is presented in this paper. Based on the characteristics of the model parameters, this algorithm was chosen over the widely used polar conversion technique as the iris unwrapping model. The architecture design is parallelized to increase the throughput of the system and is suitable for processing an input image of 320 × 240 pixels in real time using Field-Programmable Gate Array (FPGA) technology. Quartus software is used to implement, verify, and analyze the design's performance using the VHSIC Hardware Description Language. The system's predicted processing time is faster than that of the modern iris unwrapping techniques used today.
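The Bresenham (midpoint) circle algorithm generates circle pixels with integer additions and comparisons only, which is what makes it attractive for FPGA implementation; a software sketch of the classic formulation:

```python
def bresenham_circle(cx, cy, r):
    """Pixels on a circle of radius r centred at (cx, cy), computed in
    one octant with the integer midpoint decision variable and then
    reflected into the remaining seven octants."""
    pts = set()
    x, y, d = 0, r, 3 - 2 * r
    while x <= y:
        for dx, dy in ((x, y), (y, x)):       # octant pair
            pts.update({(cx + dx, cy + dy), (cx - dx, cy + dy),
                        (cx + dx, cy - dy), (cx - dx, cy - dy)})
        if d < 0:
            d += 4 * x + 6                    # midpoint inside: keep y
        else:
            d += 4 * (x - y) + 10             # midpoint outside: step y in
            y -= 1
        x += 1
    return pts

pts = bresenham_circle(0, 0, 8)
# every generated pixel lies within half a unit of the true circle
print(max(abs((x * x + y * y) ** 0.5 - 8) for x, y in pts))
```

For iris unwrapping, the generated pixel coordinates on concentric circles are read out row by row to form the rectangular (unwrapped) iris strip, replacing the trigonometric evaluations of the polar-conversion approach.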
Yurtkuran, Alkın
2014-01-01
The traveling salesman problem with time windows (TSPTW) is a variant of the traveling salesman problem in which each customer should be visited within a given time window. In this paper, we propose an electromagnetism-like algorithm (EMA) that uses a new constraint handling technique to minimize the travel cost in TSPTW problems. The EMA utilizes the attraction-repulsion mechanism between charged particles in a multidimensional space for global optimization. This paper investigates the problem-specific constraint handling capability of the EMA framework using a new variable bounding strategy, in which real-coded particle's boundary constraints associated with the corresponding time windows of customers, is introduced and combined with the penalty approach to eliminate infeasibilities regarding time window violations. The performance of the proposed algorithm and the effectiveness of the constraint handling technique have been studied extensively, comparing it to that of state-of-the-art metaheuristics using several sets of benchmark problems reported in the literature. The results of the numerical experiments show that the EMA generates feasible and near-optimal results within shorter computational times compared to the test algorithms. PMID:24723834
A Fast Density-Based Clustering Algorithm for Real-Time Internet of Things Stream
Ying Wah, Teh
2014-01-01
Data streams are continuously generated over time from Internet of Things (IoT) devices. The faster all of this data is analyzed, its hidden trends and patterns discovered, and new strategies created, the faster action can be taken, creating greater value for organizations. Density-based method is a prominent class in clustering data streams. It has the ability to detect arbitrary shape clusters, to handle outlier, and it does not need the number of clusters in advance. Therefore, density-based clustering algorithm is a proper choice for clustering IoT streams. Recently, several density-based algorithms have been proposed for clustering data streams. However, density-based clustering in limited time is still a challenging issue. In this paper, we propose a density-based clustering algorithm for IoT streams. The method has fast processing time to be applicable in real-time application of IoT devices. Experimental results show that the proposed approach obtains high quality results with low computation time on real and synthetic datasets. PMID:25110753
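The density-based idea is easiest to see in a minimal batch DBSCAN; the streaming variants discussed in the abstract add online summarization structures on top of this core notion of density-connected points, and the thresholds and points below are illustrative:

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN over 2-D point tuples: clusters grow from core
    points whose eps-ball holds at least min_pts points (self included);
    points reachable from no core point are labeled noise (-1)."""
    labels = {}

    def neighbors(p):
        return [q for q in points
                if (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= eps * eps]

    cid = 0
    for p in points:
        if p in labels:
            continue
        seeds = neighbors(p)
        if len(seeds) < min_pts:
            labels[p] = -1          # noise, unless a cluster later claims it
            continue
        labels[p] = cid
        queue = list(seeds)
        while queue:
            q = queue.pop()
            if labels.get(q, -1) == -1:
                labels[q] = cid     # border point or reclaimed noise
                nq = neighbors(q)
                if len(nq) >= min_pts:          # q is itself a core point
                    queue.extend(n for n in nq
                                 if n not in labels or labels[n] == -1)
        cid += 1
    return labels

pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1), (9, 0)]
labels = dbscan(pts, eps=0.5, min_pts=3)
print(sorted(set(labels.values())))  # → [-1, 0, 1]: two clusters plus noise
```

This illustrates the three properties the abstract highlights: arbitrary cluster shapes, explicit outlier handling, and no need to fix the number of clusters in advance.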
An Adaptive and Time-Efficient ECG R-Peak Detection Algorithm.
Qin, Qin; Li, Jianqing; Yue, Yinggao; Liu, Chengyu
2017-01-01
R-peak detection is crucial in electrocardiogram (ECG) signal analysis. This study proposed an adaptive and time-efficient R-peak detection algorithm for ECG processing. First, wavelet multiresolution analysis was applied to enhance the ECG signal representation. Then, ECG was mirrored to convert large negative R-peaks to positive ones. After that, local maximums were calculated by the first-order forward differential approach and were truncated by the amplitude and time interval thresholds to locate the R-peaks. The algorithm performances, including detection accuracy and time consumption, were tested on the MIT-BIH arrhythmia database and the QT database. Experimental results showed that the proposed algorithm achieved mean sensitivity of 99.39%, positive predictivity of 99.49%, and accuracy of 98.89% on the MIT-BIH arrhythmia database and 99.83%, 99.90%, and 99.73%, respectively, on the QT database. By processing one ECG record, the mean time consumptions were 0.872 s and 0.763 s for the MIT-BIH arrhythmia database and QT database, respectively, yielding 30.6% and 32.9% of time reduction compared to the traditional Pan-Tompkins method.
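As an illustration of the detection pipeline described above (mirroring, first-order forward differencing for local maxima, then amplitude and time-interval thresholds), here is a minimal sketch in Python; the parameter names and threshold choices are assumptions for illustration, not taken from the paper, and the wavelet multiresolution step is omitted:

```python
import numpy as np

def detect_r_peaks(ecg, fs, amp_frac=0.5, min_rr_s=0.3):
    # Mirror step: flip the record when large negative R-peaks dominate.
    x = np.asarray(ecg, dtype=float)
    x = x - np.median(x)
    if abs(x.min()) > abs(x.max()):
        x = -x
    # Local maxima from the first-order forward difference.
    d = np.diff(x)
    maxima = np.where((d[:-1] > 0) & (d[1:] <= 0))[0] + 1
    if maxima.size == 0:
        return np.array([], dtype=int)
    # Amplitude threshold: a fraction of the largest candidate peak.
    cand = maxima[x[maxima] >= amp_frac * x[maxima].max()]
    # Time-interval threshold: enforce a refractory period between peaks,
    # keeping the taller of two candidates that fall too close together.
    peaks = []
    for i in cand:
        if peaks and (i - peaks[-1]) / fs < min_rr_s:
            if x[i] > x[peaks[-1]]:
                peaks[-1] = i
        else:
            peaks.append(i)
    return np.array(peaks, dtype=int)
```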
Directory of Open Access Journals (Sweden)
Alkın Yurtkuran
2014-01-01
Full Text Available The traveling salesman problem with time windows (TSPTW) is a variant of the traveling salesman problem in which each customer must be visited within a given time window. In this paper, we propose an electromagnetism-like algorithm (EMA) that uses a new constraint handling technique to minimize the travel cost in TSPTW problems. The EMA utilizes the attraction-repulsion mechanism between charged particles in a multidimensional space for global optimization. This paper investigates the problem-specific constraint handling capability of the EMA framework: a new variable bounding strategy, in which a real-coded particle's boundary constraints are associated with the corresponding time windows of customers, is introduced and combined with the penalty approach to eliminate infeasibilities arising from time window violations. The performance of the proposed algorithm and the effectiveness of the constraint handling technique have been studied extensively by comparison with state-of-the-art metaheuristics on several sets of benchmark problems reported in the literature. The results of the numerical experiments show that the EMA generates feasible and near-optimal results within shorter computational times than the test algorithms.
Real time processing of neutron monitor data using the edge editor algorithm
Directory of Open Access Journals (Sweden)
Mavromichalaki Helen
2012-09-01
Full Text Available The nucleonic component of the secondary cosmic rays is measured by the worldwide network of neutron monitors (NMs). In most cases, an NM station publishes the measured data on a real-time basis so that they are available for instant use by the scientific community. The space weather centers and online applications such as the ground level enhancement (GLE) alert make use of the online data and are highly dependent on their quality. However, the primary data are in some cases distorted by unpredictable instrument variations. For this reason, real-time processing of a station's primary data is necessary. The general operational principle of the correction algorithms is the comparison between the different channels of an NM, taking advantage of the fact that a station hosts a number of identical detectors. The Median editor, Median editor plus, and Super editor are some of the correction algorithms that are being used with satisfactory results. In this work an alternative algorithm is proposed and analyzed. The new algorithm uses a statistical approach to define the distribution of the measurements and introduces an error index which is used for the correction of the measurements that deviate from this distribution.
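A minimal sketch of the channel-comparison idea behind editors of this family — flag and replace measurements that deviate from the distribution across the station's identical channels — might look as follows. The MAD-based threshold rule here is an assumption for illustration, not the proposed algorithm's exact error index:

```python
import numpy as np

def median_editor(counts, k=3.0):
    # counts has shape (time, channels): one count per channel per interval.
    counts = np.asarray(counts, dtype=float)
    med = np.median(counts, axis=1, keepdims=True)
    # Robust spread estimate from the median absolute deviation (MAD).
    mad = np.median(np.abs(counts - med), axis=1, keepdims=True)
    sigma = 1.4826 * np.maximum(mad, 1e-12)
    # Flag channels deviating from the per-interval median by > k sigma
    # and replace them with that median.
    bad = np.abs(counts - med) > k * sigma
    corrected = np.where(bad, med, counts)
    return corrected, bad
```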
A hybrid algorithm for flexible job-shop scheduling problem with setup times
Directory of Open Access Journals (Sweden)
Ameni Azzouz
2017-01-01
Full Text Available The job-shop scheduling problem is one of the most important fields in manufacturing optimization, where a set of n jobs must be processed on a set of m specified machines. Each job consists of a specific set of operations, which have to be processed according to a given order. The Flexible Job Shop problem (FJSP) is a generalization of the above-mentioned problem, where each operation can be processed by a set of resources and has a processing time depending on the resource used. FJSP problems cover two difficulties, namely the machine assignment problem and the operation sequencing problem. This paper addresses the flexible job-shop scheduling problem with sequence-dependent setup times to minimize two kinds of objective functions: makespan and a bi-criteria objective function. For that, we propose a hybrid algorithm based on a genetic algorithm (GA) and variable neighbourhood search (VNS) to solve this problem. To evaluate the performance of our algorithm, we compare our results with those of other methods in the literature. All the results show the superiority of our algorithm over the available ones in terms of solution quality.
Heuristic algorithms for the minmax regret flow-shop problem with interval processing times.
Ćwik, Michał; Józefczyk, Jerzy
2018-01-01
An uncertain version of the permutation flow-shop with unlimited buffers and the makespan as a criterion is considered. The investigated parametric uncertainty is represented by given interval-valued processing times. The maximum regret is used for the evaluation of uncertainty. Consequently, the minmax regret discrete optimization problem is solved. Due to its high complexity, two relaxations are applied to simplify the optimization procedure. First of all, a greedy procedure is used for calculating the criterion's value, as such a calculation is an NP-hard problem itself. Moreover, the lower bound is used instead of solving the internal deterministic flow-shop. A constructive heuristic algorithm is applied to the relaxed optimization problem. The algorithm is compared with previously elaborated heuristic algorithms based on the evolutionary and middle-interval approaches. The conducted computational experiments showed the advantage of the constructive heuristic algorithm with regard to both the criterion and the time of computations. The Wilcoxon signed-rank test for paired samples confirmed this conclusion.
Positioning performance analysis of the time sum of arrival algorithm with error features
Gong, Feng-xun; Ma, Yan-qiu
2018-03-01
The theoretical positioning accuracy of multilateration (MLAT) with the time difference of arrival (TDOA) algorithm is very high. However, there are some problems in practical applications. Here we analyze the localization performance of the time sum of arrival (TSOA) algorithm in terms of the root mean square error (RMSE) and geometric dilution of precision (GDOP) in an additive white Gaussian noise (AWGN) environment. The TSOA localization model is constructed, and from it the distribution of the location ambiguity region is presented for four base stations. The performance analysis then starts from the four-base-station case by calculating the variation of RMSE and GDOP. Subsequently, as the location parameters are changed (number of base stations, base station layout, and so on), the performance patterns of the TSOA location algorithm are shown, revealing the TSOA location characteristics and performance. The trends of the RMSE and GDOP demonstrate the anti-noise performance and robustness of the TSOA localization algorithm. The TSOA anti-noise performance will be used for reducing the blind zone and the false location rate of MLAT systems.
Sinuous oscillations and steady warps of polytropic disks
International Nuclear Information System (INIS)
Balmforth, N.J.; Spiegel, E.A.
1995-05-01
In an asymptotic development of the equations governing the equilibria and linear stability of rapidly rotating polytropes we employed the slender aspect of these objects to reduce the three-dimensional partial differential equations to a somewhat simpler, ordinary integro-differential form. The earlier calculations dealt with isolated objects that were in centrifugal balance, that is, the centrifugal acceleration of the configuration was balanced largely by self-gravity with small contributions from the pressure gradient. Another interesting situation is that in which the polytrope rotates subject to externally imposed gravitational fields. In astrophysics, this is common in the theory of galactic dynamics because disks are unlikely to be isolated objects. The dark halos associated with disks also provide one possible explanation of the apparent warping of many galaxies. If the axis of the highly flattened disk is not aligned with that of the much less flattened halo, then the resultant torque of the halo gravity on the disk might provide a nonaxisymmetric distortion or disk warp. Motivated by these possibilities we shall here build models of polytropic disks of small but finite thickness which are subjected to prescribed, external gravitational fields. First we estimate how a symmetrical potential distorts the structure of the disk, then we examine its sinuous oscillations to confirm that they freely decay, hence suggesting that a warp must be externally forced. Finally, we consider steady warps of the disk plane when the axis of the disk does not coincide with that of the halo.
Acoustic analysis of warp potential of green ponderosa pine lumber
Xiping Wang; William T. Simpson
2005-01-01
This study evaluated the potential of acoustic analysis as presorting criteria to identify warp-prone boards before kiln drying. Dimension lumber, 38 by 89 mm (nominal 2 by 4 in.) and 2.44 m (8 ft) long, sawn from open-grown small-diameter ponderosa pine trees, was acoustically tested lengthwise at green condition. Three acoustic properties (acoustic speed, rate of...
WARP: a double phase argon programme for dark matter detection
International Nuclear Information System (INIS)
Ferrari, N
2006-01-01
WARP (Wimp ARgon Programme) is a double-phase argon detector for Dark Matter search under construction at Laboratori Nazionali del Gran Sasso. We present recent results obtained by operating a prototype with a sensitive mass of 2.3 litres deep underground.
A controlled method to flatten warped wooden panels
Van Gerven, G.; Ankersmit, B.; van Duin, P.H.J.C.; Jorissen, A.J.M.; Schellen, H.L.
2016-01-01
This article describes the research and subsequent treatment to flatten the warped wooden doors of a seventeenth-century cabinet. The aim was to flatten the veneered panels, in very strict climatic conditions and without lifting any veneer or damaging the surface finish of the exterior. In this
Interactions between massive dark halos and warped disks
Kuijken, K; Persic, M; Salucci, P
1997-01-01
The normal mode theory for warping of galaxy disks, in which disks are assumed to be tilted with respect to the equator of a massive, flattened dark halo, assumes a rigid, fixed halo. However, consideration of the back-reaction by a misaligned disk on a massive particle halo shows there to be strong
Scales of Time Where the Quantum Discord Allows an Efficient Execution of the DQC1 Algorithm
Directory of Open Access Journals (Sweden)
M. Ávila
2014-01-01
Full Text Available The power of one qubit deterministic quantum processor (DQC1) (Knill and Laflamme, 1998) generates a nonclassical correlation known as quantum discord. The DQC1 algorithm executes in an efficient way with a characteristic time given by τ = Tr[U_n]/2^n, where U_n is an n-qubit unitary gate. For pure states, quantum discord means entanglement, while for mixed states such a quantity is more than entanglement. Quantum discord can be thought of as the mutual information between two systems. Within the quantum discord approach, the role of time in an efficient evaluation of τ is discussed. It is found that the smaller the value of t/T, where t is the time of execution of the DQC1 algorithm and T is the scale of time where the nonclassical correlations prevail, the more efficient the calculation of τ is. A Mössbauer nucleus might be a good processor of the DQC1 algorithm, while a nuclear spin chain would not be efficient for the calculation of τ.
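The quantity τ = Tr[U_n]/2^n can be checked directly by simulating the DQC1 circuit for small n: a control qubit in |+⟩, a maximally mixed n-qubit register, one controlled-U, and Pauli readout on the control. The function name is hypothetical, and the sketch assumes the convention in which ⟨σx⟩ + i⟨σy⟩ on the control equals τ (sign conventions for the σy readout vary in the literature):

```python
import numpy as np

def dqc1_estimate(u):
    # Density-matrix simulation of the DQC1 circuit for an n-qubit unitary u.
    dim = u.shape[0]
    plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # |+><+|
    rho0 = np.kron(plus, np.eye(dim) / dim)                    # control (x) I/2^n
    p0 = np.diag([1.0, 0.0]).astype(complex)
    p1 = np.diag([0.0, 1.0]).astype(complex)
    cu = np.kron(p0, np.eye(dim)) + np.kron(p1, u)             # controlled-U
    rho = cu @ rho0 @ cu.conj().T
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    ex = np.trace(rho @ np.kron(sx, np.eye(dim))).real         # Re(tau)
    ey = np.trace(rho @ np.kron(sy, np.eye(dim))).real         # Im(tau)
    return ex + 1j * ey
```

For the 2-qubit CNOT gate (trace 2, dimension 4) this yields τ = 0.5.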
International Nuclear Information System (INIS)
Lin, Chang Sheng; Tseng, Tse Chuan
2014-01-01
Modal identification from response data only is studied for structural systems under nonstationary ambient vibration. The topic of this paper is the estimation of modal parameters from nonstationary ambient vibration data by applying the random decrement algorithm with a time-varying threshold level. In the conventional random decrement algorithm, the threshold level for evaluating random dec signatures is defined as the standard deviation of the response data of the reference channel. In practice, however, random dec signatures may be distorted by noise in the original response data. To improve the accuracy of identification, a modification of the sampling procedure in the random decrement algorithm is proposed for modal-parameter identification from nonstationary ambient response data. A time-varying threshold level is presented for the acquisition of available sample time histories for averaging analysis, and is defined as the temporal root-mean-square function of the structural response, which can appropriately describe a wide variety of nonstationary behaviors in reality, such as the time-varying amplitude (variance) of a nonstationary process in a seismic record. Numerical simulations confirm the validity and robustness of the proposed modal-identification method from nonstationary ambient response data under noisy conditions.
New algorithms for processing time-series big EEG data within mobile health monitoring systems.
Serhani, Mohamed Adel; Menshawy, Mohamed El; Benharref, Abdelghani; Harous, Saad; Navaz, Alramzana Nujum
2017-10-01
Recent advances in miniature biomedical sensors, mobile smartphones, wireless communications, and distributed computing technologies provide promising techniques for developing mobile health systems. Such systems are capable of reliably monitoring epileptic seizures, which are classified as a chronic disease. Three challenging issues are raised in this context with regard to the transformation, compression, storage, and visualization of the big data that results from continuous recording of epileptic seizures using mobile devices. In this paper, we address the above challenges by developing three new algorithms to process and analyze big electroencephalography data in a rigorous and efficient manner. The first algorithm transforms the standard European Data Format (EDF) into the standard JavaScript Object Notation (JSON) and compresses the transformed JSON data to decrease the size and transfer time and to increase the network transfer rate. The second algorithm focuses on collecting and storing the compressed files generated by the transformation and compression algorithm. The collection process is performed with respect to the on-the-fly technique after decompressing files. The third algorithm provides relevant real-time interaction with signal data by prospective users. It particularly features the following capabilities: visualization of single or multiple signal channels on a smartphone device and querying of data segments. We tested and evaluated the effectiveness of our approach through a software architecture model implementing a mobile health system to monitor epileptic seizures. The experimental findings from 45 experiments are promising and efficiently satisfy the approach's objectives at the price of linearity. Moreover, the size of compressed JSON files and transfer times are reduced by 10% and 20%, respectively, while the average total time is remarkably reduced by 67% through all performed experiments. Our approach
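The first algorithm's transform-and-compress step can be sketched as a simple serialize/compress roundtrip; the JSON field names below are assumptions for illustration, and a real EDF parser is omitted:

```python
import gzip
import json

def compress_segment(channel_labels, samples, fs):
    # Serialize a multi-channel signal segment as JSON (the paper's
    # EDF-to-JSON transformation would feed this), then gzip-compress it
    # to shrink the size transferred over the network.
    record = {
        "fs": fs,
        "channels": {lab: sig for lab, sig in zip(channel_labels, samples)},
    }
    raw = json.dumps(record).encode("utf-8")
    return gzip.compress(raw)

def decompress_segment(blob):
    # Inverse step used before on-the-fly collection and visualization.
    return json.loads(gzip.decompress(blob).decode("utf-8"))
```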
Time series segmentation: a new approach based on Genetic Algorithm and Hidden Markov Model
Toreti, A.; Kuglitsch, F. G.; Xoplaki, E.; Luterbacher, J.
2009-04-01
The subdivision of a time series into homogeneous segments has been performed using various methods applied to different disciplines. In climatology, for example, it is accompanied by the well-known homogenization problem and the detection of artificial change points. In this context, we present a new method (GAMM) based on a Hidden Markov Model (HMM) and a Genetic Algorithm (GA), applicable to series of independent observations (and easily adaptable to autoregressive processes). A left-to-right hidden Markov model was applied, estimating the parameters and the best-state sequence with the Baum-Welch and Viterbi algorithms, respectively. In order to avoid the well-known dependence of the Baum-Welch algorithm on the initial condition, a Genetic Algorithm was developed. This algorithm is characterized by mutation, elitism and a crossover procedure implemented with some restrictive rules. Moreover, the function to be minimized was derived following the approach of Kehagias (2004), i.e. it is the so-called complete log-likelihood. The number of states was determined by applying a two-fold cross-validation procedure (Celeux and Durand, 2008). Since this last issue is complex and influences the whole analysis, a Multi Response Permutation Procedure (MRPP; Mielke et al., 1981) was inserted: it tests the model with K+1 states (where K is the state number of the best model) if its likelihood is close to that of the K-state model. Finally, an evaluation of the GAMM performance, applied as a break detection method in the field of climate time series homogenization, is shown. 1. G. Celeux and J.B. Durand, Comput Stat 2008. 2. A. Kehagias, Stoch Envir Res 2004. 3. P.W. Mielke, K.J. Berry, G.W. Brier, Monthly Wea Rev 1981.
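As an illustration of the segmentation machinery (not the GAMM method itself), a left-to-right HMM with Gaussian emissions can be decoded with the Viterbi algorithm to split a series into homogeneous segments; the transition structure and parameters below are illustrative assumptions:

```python
import numpy as np

def viterbi_segment(x, means, sigma=1.0, stay=0.95):
    # Left-to-right Gaussian HMM: one state per segment mean; from state k
    # the chain either stays in k or advances to k+1.  Returns the
    # most likely state (segment) label for each observation.
    K, n = len(means), len(x)
    loglik = -0.5 * ((x[None, :] - np.array(means)[:, None]) / sigma) ** 2
    logA = np.full((K, K), -np.inf)
    for k in range(K):
        logA[k, k] = np.log(stay)
        if k + 1 < K:
            logA[k, k + 1] = np.log(1 - stay)
    V = np.full((K, n), -np.inf)
    V[0, 0] = loglik[0, 0]                      # must start in the first state
    back = np.zeros((K, n), dtype=int)
    for t in range(1, n):
        for k in range(K):
            prev = V[:, t - 1] + logA[:, k]
            back[k, t] = prev.argmax()
            V[k, t] = prev.max() + loglik[k, t]
    path = [int(np.argmax(V[:, -1]))]
    for t in range(n - 1, 0, -1):               # backtrack the best path
        path.append(back[path[-1], t])
    return path[::-1]
```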
A parallelizable real-time motion tracking algorithm with applications to ultrasonic strain imaging
International Nuclear Information System (INIS)
Jiang, J; Hall, T J
2007-01-01
Ultrasound-based mechanical strain imaging systems utilize signals from conventional diagnostic ultrasound systems to image tissue elasticity contrast, which provides new diagnostically valuable information. Previous works (Hall et al 2003 Ultrasound Med. Biol. 29 427, Zhu and Hall 2002 Ultrason. Imaging 24 161) demonstrated that uniaxial deformation with minimal elevation motion is preferred for breast strain imaging and that real-time strain image feedback to operators is important to accomplish this goal. The work reported here enhances the real-time speckle tracking algorithm with two significant modifications. One fundamental change is that the proposed algorithm is a column-based algorithm (a column is defined by a line of data parallel to the ultrasound beam direction, i.e. an A-line), as opposed to a row-based algorithm (a row is defined by a line of data perpendicular to the ultrasound beam direction). Displacement estimates from adjacent columns then provide good guidance for motion tracking in a significantly reduced search region, reducing computational cost. Consequently, the process of displacement estimation can be naturally split into at least two separate tasks, computed in parallel, propagating outward from the center of the region of interest (ROI). The proposed algorithm has been implemented and optimized in a Windows® system as a stand-alone ANSI C++ program. Results of preliminary tests, using numerical and tissue-mimicking phantoms, and in vivo tissue data, suggest that high contrast strain images can be consistently obtained at frame rates (10 frames s⁻¹) that exceed those of our previous methods.
GPU-accelerated algorithms for many-particle continuous-time quantum walks
Piccinini, Enrico; Benedetti, Claudia; Siloi, Ilaria; Paris, Matteo G. A.; Bordone, Paolo
2017-06-01
Many-particle continuous-time quantum walks (CTQWs) represent a resource for several tasks in quantum technology, including quantum search algorithms and universal quantum computation. In order to design and implement CTQWs in a realistic scenario, one needs effective simulation tools for Hamiltonians that take into account static noise and fluctuations in the lattice, i.e. Hamiltonians containing stochastic terms. To this aim, we suggest a parallel algorithm based on the Taylor series expansion of the evolution operator, and compare its performance with that of algorithms based on the exact diagonalization of the Hamiltonian or a fourth-order Runge-Kutta integration. We prove that both the Taylor-series expansion and Runge-Kutta algorithms are reliable and have a low computational cost, the Taylor-series expansion showing the additional advantage of a memory allocation that does not depend on the precision of calculation. Both algorithms are also highly parallelizable within the SIMT paradigm, and are thus suitable for GPGPU computing. We have benchmarked 4 NVIDIA GPUs and 3 quad-core Intel CPUs for a 2-particle system over lattices of increasing dimension, showing that the speedup provided by GPU computing, with respect to the OPENMP parallelization, lies in the range between 8x and (more than) 20x, depending on the frequency of post-processing. GPU-accelerated codes thus allow one to overcome concerns about the execution time, and make simulations with many interacting particles on large lattices possible, with the only limit being the memory available on the device.
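The Taylor-series propagation scheme can be sketched in a few lines: the state is advanced by accumulating the terms of exp(-iHt)ψ one by one, so memory use does not grow with the requested order. This is an illustrative NumPy sketch of the idea, not the authors' GPU code:

```python
import numpy as np

def evolve_taylor(H, psi0, t, order=30):
    # psi(t) = exp(-i H t) psi0, accumulated term by term:
    # term_k = (-i t / k) * H @ term_{k-1}, so no matrix exponential is
    # ever formed and only two vectors are kept in memory.
    psi = np.array(psi0, dtype=complex)
    term = psi.copy()
    for k in range(1, order + 1):
        term = (-1j * t / k) * (H @ term)
        psi = psi + term
    return psi
```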
Efficient on-the-fly Algorithm for Checking Alternating Timed Simulation
DEFF Research Database (Denmark)
David, Alexandre; Larsen, Kim Guldstrand; Chatain, Thomas
2009-01-01
In this paper we focus on property-preserving preorders between timed game automata and their application to control of partially observable systems. We define timed weak alternating simulation as a preorder between timed game automata, which preserves controllability. We define the rules of building a symbolic turn-based two-player game such that the existence of a winning strategy is equivalent to the simulation being satisfied. We also propose an on-the-fly algorithm for solving this game. This simulation checking method can be applied to the case of non-alternating or strong simulations ...
Approximate k-NN delta test minimization method using genetic algorithms: Application to time series
Mateo, F; Gadea, Rafael; Sovilj, Dusan
2010-01-01
In many real world problems, the existence of irrelevant input variables (features) hinders the predictive quality of the models used to estimate the output variables. In particular, time series prediction often involves building large regressors of artificial variables that can contain irrelevant or misleading information. Many techniques have arisen to confront the problem of accurate variable selection, including both local and global search strategies. This paper presents a method based on genetic algorithms that intends to find a global optimum set of input variables that minimize the Delta Test criterion. The execution speed has been enhanced by substituting the exact nearest neighbor computation by its approximate version. The problems of scaling and projection of variables have been addressed. The developed method works in conjunction with MATLAB's Genetic Algorithm and Direct Search Toolbox. The goodness of the proposed methodology has been evaluated on several popular time series examples, and also ...
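A minimal sketch of the Delta Test and the variable-selection objective follows; for readability the genetic algorithm is replaced by exhaustive enumeration of variable masks (only viable for a handful of variables), and exact rather than approximate nearest neighbours are used:

```python
from itertools import product

import numpy as np

def delta_test(X, y):
    # Delta Test: half the mean squared difference between each output and
    # the output of its nearest neighbour in the (selected) input space.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)            # exclude each point itself
    nn = d2.argmin(axis=1)
    return 0.5 * np.mean((y[nn] - y) ** 2)

def best_subset(X, y):
    # Search over binary variable masks for the one minimizing the Delta
    # Test; the paper performs this search with a genetic algorithm.
    best_dt, best_mask = np.inf, None
    for mask in product([0, 1], repeat=X.shape[1]):
        if not any(mask):
            continue
        idx = [i for i, m in enumerate(mask) if m]
        dt = delta_test(X[:, idx], y)
        if dt < best_dt:
            best_dt, best_mask = dt, mask
    return best_mask, best_dt
```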
Directory of Open Access Journals (Sweden)
Liyun Su
2011-01-01
Full Text Available In order to suppress the interference of the strong fractional noise signal in discrete-time ultrawideband (UWB) systems, this paper presents a new UWB multi-scale Kalman filter (KF) algorithm for interference suppression. This approach treats the narrowband interference (NBI) as a nonstationary fractional signal in UWB communication and does not need to estimate any channel parameter. In this paper, the received sampled signal is transformed through a multiscale wavelet to obtain a state transition equation and an observation equation based on the stationarity theory of wavelet coefficients in the time domain. Then, through the Kalman filter method, the fractional signal at an arbitrary scale is easily estimated. Finally, the fractional noise interference is subtracted from the received signal. Performance analysis and computer simulations reveal that this algorithm effectively reduces the strong fractional noise when the sampling rate is low.
A Faster Algorithm for Solving One-Clock Priced Timed Games
DEFF Research Database (Denmark)
Hansen, Thomas Dueholm; Ibsen-Jensen, Rasmus; Miltersen, Peter Bro
2013-01-01
One-clock priced timed games is a class of two-player, zero-sum, continuous-time games that was defined and thoroughly studied in previous works. We show that one-clock priced timed games can be solved in time m 12^n n^(O(1)), where n is the number of states and m is the number of actions. The best previously known time bound for solving one-clock priced timed games was 2^(O(n^2+m)), due to Rutkowski. For our improvement, we introduce and study a new algorithm for solving one-clock priced timed games, based on the sweep-line technique from computational geometry and the strategy iteration paradigm from ...
A Faster Algorithm for Solving One-Clock Priced Timed Games
DEFF Research Database (Denmark)
Hansen, Thomas Dueholm; Ibsen-Jensen, Rasmus; Miltersen, Peter Bro
2012-01-01
One-clock priced timed games is a class of two-player, zero-sum, continuous-time games that was defined and thoroughly studied in previous works. We show that one-clock priced timed games can be solved in time m 12^n n^(O(1)), where n is the number of states and m is the number of actions. The best previously known time bound for solving one-clock priced timed games was 2^(O(n^2+m)), due to Rutkowski. For our improvement, we introduce and study a new algorithm for solving one-clock priced timed games, based on the sweep-line technique from computational geometry and the strategy iteration paradigm from ...
Load power device and system for real-time execution of hierarchical load identification algorithms
Yang, Yi; Madane, Mayura Arun; Zambare, Prachi Suresh
2017-11-14
A load power device includes a power input; at least one power output for at least one load; and a plurality of sensors structured to sense voltage and current at the at least one power output. A processor is structured to provide real-time execution of: (a) a plurality of load identification algorithms, and (b) event detection and operating mode detection for the at least one load.
Research and Realization of the HJ-1C Real-time Software Frame Synchronization Algorithm
Hou Yang-shuan; Shi Tao; Hu Yu-xin
2014-01-01
Conventional software frame synchronization methods are inefficient in processing huge continuous data without synchronization words. To improve the processing speed, a real-time synchronization algorithm is proposed based on reverse searching. Satellite data are grouped and searched in the reverse direction to avoid searching for synchronization words in huge continuous invalid data; thus, the frame synchronization speed is improved enormously. The fastest processing speed is up to 15445.9 M...
Continuous time Boolean modeling for biological signaling: application of Gillespie algorithm.
Stoll, Gautier; Viara, Eric; Barillot, Emmanuel; Calzone, Laurence
2012-01-01
Mathematical modeling is used as a Systems Biology tool to answer biological questions, and more precisely, to validate a network that describes biological observations and predict the effect of perturbations. This article presents an algorithm for modeling biological networks in a discrete framework with continuous time. Background: There exist two major types of mathematical modeling approaches: (1) quantitative modeling, representing various chemical species concentrations by real ...
A tighter bound for the self-stabilization time in Hermanʼs algorithm
DEFF Research Database (Denmark)
Feng, Yuan; Zhang, Lijun
2013-01-01
We study the expected self-stabilization time of Herman's algorithm. For N processors the lower bound is 4N²/27 (≈ 0.148N²), and an upper bound of 0.64N² is presented in Kiefer et al. (2011) [4]. In this paper we give a tighter upper bound of 0.521N².
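Herman's protocol is easy to simulate at the token level, which makes the N² scaling tangible. This sketch uses the standard token abstraction (in each synchronous round every token independently stays or moves one position clockwise with probability 1/2, and two tokens meeting on the same process annihilate) rather than the bit-array formulation:

```python
import random

def herman_run(tokens, rng):
    # tokens: 0/1 list over an odd-length ring of processes.
    # Returns the number of rounds until a single token remains.
    n = len(tokens)
    steps = 0
    while sum(tokens) > 1:
        nxt = [0] * n
        for i, t in enumerate(tokens):
            if t:
                j = (i + 1) % n if rng.random() < 0.5 else i
                nxt[j] ^= 1          # XOR implements pairwise annihilation
        tokens = nxt
        steps += 1
    return steps
```

For N = 3 started with three tokens, the expected stabilization time works out to 4N²/27 = 4/3 rounds, the configuration behind the lower bound quoted above.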
Simple nuclear norm based algorithms for imputing missing data and forecasting in time series
Butcher, Holly Louise; Gillard, Jonathan William
2017-01-01
There has been much recent progress on the use of the nuclear norm for the so-called matrix completion problem (the problem of imputing missing values of a matrix). In this paper we investigate the use of the nuclear norm for modelling time series, with particular attention to imputing missing data and forecasting. We introduce a simple alternating projections type algorithm based on the nuclear norm for these tasks, and consider a number of practical examples.
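The alternating-projections idea can be sketched as follows; for simplicity this version hard-truncates the rank via an SVD instead of soft-thresholding singular values (the nuclear-norm proximal step), but the alternation between the observed-entry constraint and a low-rank projection is the same, and the function name and parameters are illustrative assumptions:

```python
import numpy as np

def impute_alternating(M, mask, rank=1, iters=300):
    # Alternate two projections: onto matrices agreeing with the observed
    # entries (mask == True), and onto matrices of the given rank.
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]   # rank-r projection
        X = np.where(mask, M, low)                      # restore observed data
    return X
```

On a rank-1 matrix with a single hidden entry, the iteration converges to the value that makes the completed matrix rank-1 again.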
Directory of Open Access Journals (Sweden)
Simon Fong
2012-01-01
Full Text Available Voice biometrics has a long history in biosecurity applications such as verification and identification based on characteristics of the human voice. The other application, called voice classification, which has an important role in grouping unlabelled voice samples, has however not been widely studied. Lately voice classification has been found useful in phone monitoring, classifying speakers' gender, ethnicity and emotion states, and so forth. In this paper, a collection of computational algorithms is proposed to support voice classification; the algorithms are a combination of hierarchical clustering, dynamic time warp transform, discrete wavelet transform, and decision tree. The proposed algorithms are relatively more transparent and interpretable than the existing ones, even though many techniques such as Artificial Neural Networks, Support Vector Machines, and Hidden Markov Models (which inherently function like black boxes) have been applied for voice verification and voice identification. Two datasets, one generated synthetically and the other empirically collected from a past voice recognition experiment, are used to verify and demonstrate the effectiveness of our proposed voice classification algorithm.
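The dynamic time warp component can be sketched with the classical quadratic-time recursion; in a pipeline like the one described, the pairwise DTW distances would then feed the hierarchical clustering step:

```python
import numpy as np

def dtw(a, b):
    # Classical dynamic time warping distance between two 1-D sequences:
    # D[i][j] = local cost + min over the three allowed warping moves.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Note that a sequence aligned against a time-stretched copy of itself gets distance 0, which is exactly the invariance that makes DTW attractive for comparing utterances of different durations.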
Processing time tolerance-based ACO algorithm for solving job-shop scheduling problem
Luo, Yabo; Waden, Yongo P.
2017-06-01
Ordinarily, the Job Shop Scheduling Problem (JSSP) is known as an NP-hard problem whose uncertainty and complexity cannot be handled by a linear method. Thus, current studies on the JSSP concentrate mainly on applying different methods to improve heuristics for optimizing the JSSP. However, there still exist many obstacles to efficient optimization in the JSSP, namely low efficiency and poor reliability, which can easily trap the optimization process in local optima. Therefore, to solve this problem, a study on the Ant Colony Optimization (ACO) algorithm combined with constraint handling tactics is carried out in this paper. The problem is subdivided into three parts: (1) analysis of the processing time tolerance-based constraint features in the JSSP, performed with the constraint satisfaction model; (2) satisfying the constraints by considering consistency technology and a constraint spreading algorithm in order to improve the performance of the ACO algorithm; hence, the JSSP model based on the improved ACO algorithm is constructed; (3) demonstrating the effectiveness of the proposed method, in terms of reliability and efficiency, through comparative experiments performed on benchmark problems. Consequently, the results obtained by the proposed method are better, and the applied technique can be used in optimizing the JSSP.
Hardware-Efficient Design of Real-Time Profile Shape Matching Stereo Vision Algorithm on FPGA
Directory of Open Access Journals (Sweden)
Beau Tippetts
2014-01-01
Full Text Available A variety of platforms, such as micro-unmanned vehicles, are limited in the amount of computational hardware they can support due to weight and power constraints. An efficient stereo vision algorithm implemented on an FPGA would be able to minimize payload and power consumption in micro-unmanned vehicles, while providing 3D information and still leaving computational resources available for other processing tasks. This work presents a hardware design of the efficient profile shape matching stereo vision algorithm. Hardware resource usage is presented for the targeted micro-UV platform, Helio-copter, which uses the Xilinx Virtex 4 FX60 FPGA. Less than a fifth of the resources on this FPGA were used to produce dense disparity maps for image sizes up to 450 × 375, with the ability to scale up easily by increasing BRAM usage. A comparison of accuracy, speed performance, and resource usage is given against a census transform-based stereo vision FPGA implementation by Jin et al. Results show that the profile shape matching algorithm is an efficient real-time stereo vision algorithm for hardware implementation on resource-limited systems such as micro-unmanned vehicles.
A practical O(n log2 n) time algorithm for computing the triplet distance on binary trees
DEFF Research Database (Denmark)
Sand, Andreas; Pedersen, Christian Nørgaard Storm; Mailund, Thomas
2013-01-01
The triplet distance is a distance measure that compares two rooted trees on the same set of leaves by enumerating all sub-sets of three leaves and counting how often the induced topologies of the trees are equal or different. We present an algorithm that computes the triplet distance between two rooted binary trees in time O(n log2 n). The algorithm is related to an algorithm for computing the quartet distance between two unrooted binary trees in time O(n log n). While the quartet distance algorithm has a very severe overhead in the asymptotic time complexity that makes it impractical compared…
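The abstract does not give the algorithm itself. For intuition, the naive O(n^3) definition of the triplet distance (enumerate every leaf triple and compare the induced topologies) can be sketched as follows; the nested-tuple tree encoding is a hypothetical representation for illustration, not the paper's data structure:

```python
from itertools import combinations

def lca_depths(tree):
    """Map each unordered leaf pair to the depth of its lowest common
    ancestor. A leaf is any non-tuple label; an internal node is a pair
    (left, right)."""
    depths = {}
    def walk(node, depth):
        if not isinstance(node, tuple):
            return [node]
        left = walk(node[0], depth + 1)
        right = walk(node[1], depth + 1)
        for a in left:        # pairs split across this node meet here
            for b in right:
                depths[frozenset((a, b))] = depth
        return left + right
    leaves = walk(tree, 0)
    return depths, leaves

def triplet_distance(t1, t2):
    """Brute-force triplet distance between two rooted binary trees on the
    same leaf set: count leaf triples whose induced topologies differ."""
    d1, leaves = lca_depths(t1)
    d2, _ = lca_depths(t2)
    def cherry(depths, a, b, c):
        # the pair with the deepest LCA forms the "cherry" of the triple
        pairs = [frozenset((a, b)), frozenset((a, c)), frozenset((b, c))]
        return max(pairs, key=lambda p: depths[p])
    return sum(cherry(d1, a, b, c) != cherry(d2, a, b, c)
               for a, b, c in combinations(sorted(leaves), 3))
```

The paper's contribution is to bring this O(n^3) enumeration down to O(n log2 n).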
Nakamura, Yoshimasa; Sekido, Hiroto
2018-04-01
The finite or semi-infinite discrete-time Toda lattice has many applications to various areas in applied mathematics. The purpose of this paper is to review how the Toda lattice appears in the Lanczos algorithm through the quotient-difference algorithm and its progressive form (pqd). Then a multistep progressive algorithm (MPA) for solving linear systems is presented. The extended Lanczos parameters can be given not by computing inner products of the extended Lanczos vectors but by using the pqd algorithm with high relative accuracy at lower cost. The asymptotic behavior of the pqd algorithm yields some applications of the MPA related to eigenvectors.
International Nuclear Information System (INIS)
Faulin, Javier; Juan, Angel A.; Serrat, Carles; Bargueno, Vicente
2008-01-01
In this paper, we propose the use of discrete-event simulation (DES) as an efficient methodology to obtain estimates of both survival and availability functions in time-dependent real systems, such as telecommunication networks or distributed computer systems. We discuss the use of DES in reliability and availability studies, not only as an alternative to analytical and probabilistic methods, but also as a complementary way to: (i) achieve a better understanding of the system's internal behavior and (ii) find out the relevance of each component under reliability/availability considerations. Specifically, this paper describes a general methodology and two DES algorithms, called SAEDES, which can be used to analyze a wide range of time-dependent complex systems, including those presenting multiple states, dependencies among failure/repair times or non-perfect maintenance policies. These algorithms can provide valuable information, especially during the design stages, where different scenarios can be compared in order to select a system design offering adequate reliability and availability levels. Two case studies are discussed, using a C/C++ implementation of the SAEDES algorithms, to show some potential applications of our approach.
Energy Technology Data Exchange (ETDEWEB)
Faulin, Javier [Department of Statistics and Operations Research, Los Magnolios Building, First Floor, Campus Arrosadia, Public University of Navarre, 31006 Pamplona, Navarre (Spain)], E-mail: javier.faulin@unavarra.es; Juan, Angel A. [Department of Applied Mathematics I, Av. Doctor Maranon 44-50, Technical University of Catalonia, 08028 Barcelona (Spain)], E-mail: angel.alejandro.juan@upc.edu; Serrat, Carles [Department of Applied Mathematics I, Av. Doctor Maranon 44-50, Technical University of Catalonia, 08028 Barcelona (Spain)], E-mail: carles.serrat@upc.edu; Bargueno, Vicente [Department of Applied Mathematics I, ETS Ingenieros Industriales, Universidad Nacional de Educacion a Distancia, 28080 Madrid (Spain)], E-mail: vbargueno@ind.uned.es
2008-11-15
In this paper, we propose the use of discrete-event simulation (DES) as an efficient methodology to obtain estimates of both survival and availability functions in time-dependent real systems, such as telecommunication networks or distributed computer systems. We discuss the use of DES in reliability and availability studies, not only as an alternative to analytical and probabilistic methods, but also as a complementary way to: (i) achieve a better understanding of the system's internal behavior and (ii) find out the relevance of each component under reliability/availability considerations. Specifically, this paper describes a general methodology and two DES algorithms, called SAEDES, which can be used to analyze a wide range of time-dependent complex systems, including those presenting multiple states, dependencies among failure/repair times or non-perfect maintenance policies. These algorithms can provide valuable information, especially during the design stages, where different scenarios can be compared in order to select a system design offering adequate reliability and availability levels. Two case studies are discussed, using a C/C++ implementation of the SAEDES algorithms, to show some potential applications of our approach.
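The SAEDES algorithms themselves are not listed in the abstract. As a toy illustration of the general idea (estimating availability by discrete-event simulation rather than analytically), one can simulate a single repairable component with exponential failure and repair times; all names and parameters below are hypothetical:

```python
import random

def estimate_availability(mtbf, mttr, horizon, seed=0):
    """Estimate the availability of a single repairable component by
    discrete-event simulation: alternate exponentially distributed up and
    down periods and measure the fraction of the horizon spent up."""
    rng = random.Random(seed)
    t, up_time, is_up = 0.0, 0.0, True
    while t < horizon:
        mean = mtbf if is_up else mttr
        dur = rng.expovariate(1.0 / mean)   # next failure or repair event
        dur = min(dur, horizon - t)         # truncate at the horizon
        if is_up:
            up_time += dur
        t += dur
        is_up = not is_up                   # toggle up/down state
    return up_time / horizon
```

For a long horizon the estimate should approach the analytical steady-state availability MTBF / (MTBF + MTTR); the value of DES, as the abstract argues, is that the same machinery still works when analytical formulas do not exist (multiple states, dependent failure/repair times, imperfect maintenance).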
A Finite State Machine Approach to Algorithmic Lateral Inhibition for Real-Time Motion Detection †
Directory of Open Access Journals (Sweden)
María T. López
2018-05-01
Full Text Available Many researchers have explored the relationship between recurrent neural networks and finite state machines. Finite state machines constitute the best-characterized computational model, whereas artificial neural networks have become a very successful tool for modeling and problem solving. The neurally-inspired lateral inhibition method, and its application to motion detection tasks, have been successfully implemented in recent years. In this paper, control knowledge of the algorithmic lateral inhibition (ALI) method is described and applied by means of finite state machines, in which the state space is constituted from the set of distinguishable cases of accumulated charge in a local memory. The article describes an ALI implementation for a motion detection task. For the implementation, we have chosen one of the members of the 16-nm Kintex UltraScale+ family of Xilinx FPGAs. FPGAs provide the necessary accuracy, resolution, and precision to run neural algorithms alongside current sensor technologies. The results offered in this paper demonstrate that this implementation provides accurate object tracking performance on several datasets, obtaining a high F-score value (0.86) for the most complex sequence used. Moreover, it outperforms implementations of a complete ALI algorithm and a simplified version of the ALI algorithm, named "accumulative computation", which were run about ten years ago, now reaching real-time processing times that were simply not achievable at that time for ALI.
Fractal dimension algorithms and their application to time series associated with natural phenomena
International Nuclear Information System (INIS)
La Torre, F Cervantes-De; González-Trejo, J I; Real-Ramírez, C A; Hoyos-Reyes, L F
2013-01-01
Chaotic invariants like the fractal dimensions are used to characterize non-linear time series. The fractal dimension is an important characteristic of systems, because it contains information about their geometrical structure at multiple scales. In this work, three algorithms are applied to non-linear time series: spectral analysis, rescaled range analysis and Higuchi's algorithm. The analyzed time series are associated with natural phenomena. The disturbance storm time (Dst) is a global indicator of the state of the Earth's geomagnetic activity. The time series used in this work show a self-similar behavior, which depends on the time scale of measurements. It is also observed that fractal dimensions, D, calculated with Higuchi's method may not be constant over all time scales. This work shows that during 2001, D reached its lowest values in March and November. The possibility that D recovers a change pattern arising from self-organized critical phenomena is also discussed.
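Of the three algorithms mentioned, Higuchi's method is the most self-contained: it fits the slope of log(curve length) against log(1/k) over sub-sampled copies of the series. A standard implementation (an illustrative sketch, not the authors' code; kmax is an assumed tuning parameter) is:

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi's fractal dimension of a 1-D time series: the slope of
    log(normalized curve length L(k)) versus log(1/k)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    lengths = []
    for k in range(1, kmax + 1):
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)          # sub-series x[m], x[m+k], ...
            if len(idx) < 2:
                continue
            # curve length of the sub-series, normalized for its size
            norm = (n - 1) / ((len(idx) - 1) * k)
            lk.append(np.abs(np.diff(x[idx])).sum() * norm / k)
        lengths.append(np.mean(lk))
    ks = np.arange(1, kmax + 1)
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lengths), 1)
    return slope
```

A straight line yields D close to 1 and white noise yields D close to 2, which brackets the values one expects for self-similar geomagnetic indices such as Dst.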
DynPeak: An Algorithm for Pulse Detection and Frequency Analysis in Hormonal Time Series
Vidal, Alexandre; Zhang, Qinghua; Médigue, Claire; Fabre, Stéphane; Clément, Frédérique
2012-01-01
The endocrine control of the reproductive function is often studied from the analysis of luteinizing hormone (LH) pulsatile secretion by the pituitary gland. Whereas measurements in the cavernous sinus cumulate anatomical and technical difficulties, LH levels can be easily assessed from jugular blood. However, plasma levels result from a convolution process due to clearance effects when LH enters the general circulation. Simultaneous measurements comparing LH levels in the cavernous sinus and jugular blood have revealed clear differences in pulse shape, amplitude and baseline. Besides, experimental sampling occurs at a relatively low frequency (typically every 10 min) with respect to the highest-frequency LH release (one pulse per hour), and the resulting LH measurements are corrupted by both experimental and assay errors. As a result, the pattern of plasma LH may not be so clearly pulsatile. Yet, reliable information on the InterPulse Intervals (IPI) is a prerequisite to studying precisely the steroid feedback exerted at the pituitary level. Hence, there is a real need for robust IPI detection algorithms. In this article, we present an algorithm for the monitoring of LH pulse frequency, drawing both on the available endocrinological knowledge on the LH pulse (shape and duration with respect to the frequency regime) and on synthetic LH data generated by a simple model. We make use of synthetic data to clarify some basic notions underlying our algorithmic choices. We focus on explaining how the sampling process drastically affects the original pattern of secretion, and especially the amplitude of the detectable pulses. We then describe the algorithm in detail and apply it to different sets of both synthetic and experimental LH time series. We further comment on how to diagnose possible outliers from the series of IPIs, which is the main output of the algorithm. PMID:22802933
Reducing acquisition times in multidimensional NMR with a time-optimized Fourier encoding algorithm
Energy Technology Data Exchange (ETDEWEB)
Zhang, Zhiyong [Department of Chemical Physics, Weizmann Institute of Science, Rehovot 76100 (Israel); Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, Xiamen University, Xiamen, Fujian 361005 (China); Smith, Pieter E. S.; Frydman, Lucio, E-mail: lucio.frydman@weizmann.ac.il [Department of Chemical Physics, Weizmann Institute of Science, Rehovot 76100 (Israel)
2014-11-21
Speeding up the acquisition of multidimensional nuclear magnetic resonance (NMR) spectra is an important topic in contemporary NMR, with central roles in high-throughput investigations and analyses of marginally stable samples. A variety of fast NMR techniques have been developed, including methods based on non-uniform sampling and Hadamard encoding, that overcome the long sampling times inherent to schemes based on fast-Fourier-transform (FFT) methods. Here, we explore the potential of an alternative fast acquisition method that leverages a priori knowledge, to tailor polychromatic pulses and customized time delays for an efficient Fourier encoding of the indirect domain of an NMR experiment. By porting the encoding of the indirect-domain to the excitation process, this strategy avoids potential artifacts associated with non-uniform sampling schemes and uses a minimum number of scans equal to the number of resonances present in the indirect dimension. An added convenience is afforded by the fact that a usual 2D FFT can be used to process the generated data. Acquisitions of 2D heteronuclear correlation NMR spectra on quinine and on the anti-inflammatory drug isobutyl propionic phenolic acid illustrate the new method's performance. This method can be readily automated to deal with complex samples such as those occurring in metabolomics, in in-cell as well as in in vivo NMR applications, where speed and temporal stability are often primary concerns.
Reducing acquisition times in multidimensional NMR with a time-optimized Fourier encoding algorithm
International Nuclear Information System (INIS)
Zhang, Zhiyong; Smith, Pieter E. S.; Frydman, Lucio
2014-01-01
Speeding up the acquisition of multidimensional nuclear magnetic resonance (NMR) spectra is an important topic in contemporary NMR, with central roles in high-throughput investigations and analyses of marginally stable samples. A variety of fast NMR techniques have been developed, including methods based on non-uniform sampling and Hadamard encoding, that overcome the long sampling times inherent to schemes based on fast-Fourier-transform (FFT) methods. Here, we explore the potential of an alternative fast acquisition method that leverages a priori knowledge, to tailor polychromatic pulses and customized time delays for an efficient Fourier encoding of the indirect domain of an NMR experiment. By porting the encoding of the indirect-domain to the excitation process, this strategy avoids potential artifacts associated with non-uniform sampling schemes and uses a minimum number of scans equal to the number of resonances present in the indirect dimension. An added convenience is afforded by the fact that a usual 2D FFT can be used to process the generated data. Acquisitions of 2D heteronuclear correlation NMR spectra on quinine and on the anti-inflammatory drug isobutyl propionic phenolic acid illustrate the new method's performance. This method can be readily automated to deal with complex samples such as those occurring in metabolomics, in in-cell as well as in in vivo NMR applications, where speed and temporal stability are often primary concerns.
Time- and Cost-Optimal Parallel Algorithms for the Dominance and Visibility Graphs
Directory of Open Access Journals (Sweden)
D. Bhagavathi
1996-01-01
Full Text Available The compaction step of integrated circuit design motivates associating several kinds of graphs with a collection of non-overlapping rectangles in the plane. These graphs are intended to capture various visibility relations amongst the rectangles in the collection. The contribution of this paper is to propose time- and cost-optimal algorithms to construct two such graphs, namely, the dominance graph (DG, for short) and the visibility graph (VG, for short). Specifically, we show that with a collection of n non-overlapping rectangles as input, both these structures can be constructed in θ(log n) time using n processors in the CREW model.
Color reproduction and processing algorithm based on real-time mapping for endoscopic images.
Khan, Tareq H; Mohammed, Shahed K; Imtiaz, Mohammad S; Wahid, Khan A
2016-01-01
In this paper, we present a real-time preprocessing algorithm for the enhancement of endoscopic images. A novel dictionary-based color mapping algorithm is used for reproducing the color information from a theme image. The theme image is selected from a nearby anatomical location, and a database of color endoscopy images for different locations is prepared for this purpose. The color map is dynamic, as its contents change with the change of the theme image. This method is used on low-contrast grayscale white light images and raw narrow band images to highlight the vascular and mucosa structures and to colorize the images. It can also be applied to enhance the tone of color images. The statistical visual representation and universal image quality measures show that the proposed method can highlight the mucosa structure better than other methods. The color similarity has been verified using the Delta E color difference, the structural similarity index, the mean structural similarity index, and structure and hue similarity. The color enhancement was measured using a color enhancement factor that shows considerable improvement. The proposed algorithm has low, linear time complexity, which results in higher execution speed than other related works.
Directory of Open Access Journals (Sweden)
KIM, J.
2009-10-01
Full Text Available 5.9 GHz advanced dedicated short range communications (ADSRC) is a short-to-medium range communication standard that supports both public safety and private operations in roadside-to-vehicle and vehicle-to-vehicle communication environments. The core technology of the physical layer in ADSRC is orthogonal frequency division multiplexing (OFDM), which is sensitive to timing synchronization error. In this paper, a robust and low-complexity timing synchronization algorithm suitable for the ADSRC system and its efficient hardware architecture are proposed. The proposed architecture is implemented on a Xilinx Virtex-II XC2V1000 Field Programmable Gate Array (FPGA). The proposed algorithm is based on a cross-correlation technique, which is employed to detect the starting point of the short training symbol and the guard interval of the long training symbol. Synchronization error rate (SER) evaluation results and post-layout simulation results show that the proposed algorithm is efficient in high-mobility environments. The post-layout implementation results demonstrate the robustness and low complexity of the proposed architecture.
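As a minimal software sketch of the cross-correlation idea described above (not the proposed hardware architecture), the start of a known training symbol can be located by sliding the symbol over the received samples and taking the correlation peak; the sequences and the offset used below are assumed examples:

```python
import numpy as np

def detect_start(received, training):
    """Coarse timing synchronization: return the sample offset at which
    the magnitude of the cross-correlation between the known training
    symbol and the received stream is largest."""
    n = len(received) - len(training) + 1
    corr = [np.abs(np.vdot(training, received[i:i + len(training)]))
            for i in range(n)]
    return int(np.argmax(corr))
```

A hardware version would pipeline the same multiply-accumulate window, which is what makes the approach attractive for a low-complexity FPGA implementation.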
Wang, Jun; Zhou, Bi-hua; Zhou, Shu-dao; Sheng, Zheng
2015-01-01
The paper proposes a novel function expression method to forecast chaotic time series, using an improved genetic-simulated annealing (IGSA) algorithm to establish the optimum function expression that describes the behavior of the time series. In order to deal with the weaknesses of the genetic algorithm, the proposed algorithm incorporates the simulated annealing operation, which has strong local search ability, into the genetic algorithm to enhance the performance of optimization; besides, the fitness function and genetic operators are also improved. Finally, the method is applied to the chaotic time series of the Quadratic and Rossler maps for validation. The effect of noise in the chaotic time series is also studied numerically. The numerical results verify that the method can forecast chaotic time series with high precision and effectiveness, and that the forecasting precision under a certain amount of noise is also satisfactory. It can be concluded that the IGSA algorithm is efficient and superior.
Time series modeling and forecasting using memetic algorithms for regime-switching models.
Bergmeir, Christoph; Triguero, Isaac; Molina, Daniel; Aznarte, José Luis; Benitez, José Manuel
2012-11-01
In this brief, we present a novel model fitting procedure for the neuro-coefficient smooth transition autoregressive model (NCSTAR), as presented by Medeiros and Veiga. The model is endowed with a statistically founded iterative building procedure and can be interpreted in terms of fuzzy rule-based systems. The interpretability of the generated models and a mathematically sound building procedure are two very important properties of forecasting models. The model fitting procedure employed by the original NCSTAR is a combination of initial parameter estimation by a grid search procedure with a traditional local search algorithm. We propose a different fitting procedure, using a memetic algorithm, in order to obtain more accurate models. An empirical evaluation of the method is performed, applying it to various real-world time series originating from three forecasting competitions. The results indicate that we can significantly enhance the accuracy of the models, making them competitive to models commonly used in the field.
Heuristic and Exact Algorithms for the Two-Machine Just in Time Job Shop Scheduling Problem
Directory of Open Access Journals (Sweden)
Mohammed Al-Salem
2016-01-01
Full Text Available The problem addressed in this paper is the two-machine job shop scheduling problem where the objective is to minimize the total earliness and tardiness from a common due date (CDD) for a set of jobs whose weights all equal 1 (the unweighted problem). This objective became very significant after the introduction of the Just in Time manufacturing approach. A procedure to determine whether the CDD is restricted or unrestricted is developed, and a semirestricted CDD is defined. Algorithms are introduced to find the optimal solution when the CDD is unrestricted or semirestricted. When the CDD is restricted, which is a much harder problem, a heuristic algorithm is proposed to find approximate solutions. Through computational experiments, the heuristic algorithm's performance is evaluated on problems with up to 500 jobs.
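The objective itself is easy to state in code. The following sketch (illustrative only; the paper's two-machine routing and its algorithms are not reproduced) computes completion times and the total earliness plus tardiness around a common due date for jobs run back-to-back on one machine:

```python
def schedule_cost(processing_times, start, due_date):
    """Completion times and total earliness/tardiness, sum |C_j - d|,
    for jobs processed back-to-back in the given order from 'start'."""
    t, total = start, 0
    completions = []
    for p in processing_times:
        t += p                       # job finishes p time units later
        completions.append(t)
        total += abs(t - due_date)   # earliness or tardiness penalty
    return completions, total
```

Whether the CDD is restricted is, roughly, a question of whether there is enough room before the due date to place the early jobs; that is the case distinction the paper's procedure formalizes.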
Spatial-time-state fusion algorithm for defect detection through eddy current pulsed thermography
Xiao, Xiang; Gao, Bin; Woo, Wai Lok; Tian, Gui Yun; Xiao, Xiao Ting
2018-05-01
Eddy Current Pulsed Thermography (ECPT) has received extensive attention due to its high sensitivity in detecting surface and subsurface cracks. However, identifying defects without any prior knowledge remains a difficult challenge for unsupervised detection. This paper presents a spatial-time-state feature fusion algorithm that obtains the full profile of the defects by directional scanning. The proposed method conducts feature extraction using independent component analysis (ICA) and automatic feature selection embedding a genetic algorithm. Finally, the optimal features of each step are fused to reconstruct the defects by applying the common orthogonal basis extraction (COBE) method. Experiments have been conducted to validate the study and verify the efficacy of the proposed method on blind defect detection.
Zhang, Chenxin; Öwall, Viktor
2016-01-01
This book focuses on domain-specific heterogeneous reconfigurable architectures, demonstrating for readers a computing platform which is flexible enough to support multiple standards, multiple modes, and multiple algorithms. The content is multi-disciplinary, covering areas of wireless communication, computing architecture, and circuit design. The platform described provides real-time processing capability with reasonable implementation cost, achieving balanced trade-offs among flexibility, performance, and hardware costs. The authors discuss efficient design methods for wireless communication processing platforms, from both an algorithm and architecture design perspective. Coverage also includes computing platforms for different wireless technologies and standards, including MIMO, OFDM, Massive MIMO, DVB, WLAN, LTE/LTE-A, and 5G. •Discusses reconfigurable architectures, including hardware building blocks such as processing elements, memory sub-systems, Network-on-Chip (NoC), and dynamic hardware reconfigur...
Identification of time-varying nonlinear systems using differential evolution algorithm
DEFF Research Database (Denmark)
Perisic, Nevena; Green, Peter L; Worden, Keith
2013-01-01
Identification of time-varying systems with nonlinearities can be a very challenging task. In order to avoid conventional least squares and gradient identification methods, which require uni-modal and doubly differentiable objective functions, this work proposes a modified differential evolution (DE) algorithm for the identification of time-varying systems. DE is an evolutionary optimisation method developed to perform direct search in a continuous space without requiring any derivative estimation. DE is modified so that the objective function changes with time to account for the continuing inclusion of new data within an error metric. This paper presents results of identification of a time-varying SDOF system with Coulomb friction using simulated noise-free and noisy data for the case of time-varying friction coefficient, stiffness and damping. The obtained results are promising and the focus…
Kiguchi, Masashi; Funane, Tsukasa
2014-11-01
A real-time algorithm for removing scalp-blood signals from functional near-infrared spectroscopy signals is proposed. Scalp and deep signals have different dependencies on the source-detector distance, and this characteristic is used to separate them. The algorithm was validated through an experiment using a dynamic phantom in which shallow and deep absorptions were independently changed. The algorithm for measuring oxygenated and deoxygenated hemoglobin using two wavelengths was obtained explicitly. This algorithm is potentially useful for real-time systems, e.g., brain-computer interfaces and neuro-feedback systems.
Tchagang, Alain B; Phan, Sieu; Famili, Fazel; Shearer, Heather; Fobert, Pierre; Huang, Yi; Zou, Jitao; Huang, Daiqing; Cutler, Adrian; Liu, Ziying; Pan, Youlian
2012-04-04
Nowadays, it is possible to collect expression levels of a set of genes from a set of biological samples during a series of time points. Such data have three dimensions: gene-sample-time (GST); thus they are called 3D microarray gene expression data. To take advantage of the 3D data collected, and to fully understand the biological knowledge hidden in the GST data, novel subspace clustering algorithms have to be developed to effectively address the biological problem in the corresponding space. We developed a subspace clustering algorithm called Order Preserving Triclustering (OPTricluster) for 3D short time-series data mining. OPTricluster is able to identify 3D clusters with coherent evolution from a given 3D dataset using a combinatorial approach on the sample dimension and the order preserving (OP) concept on the time dimension. The fusion of the two methodologies allows one to study similarities and differences between samples in terms of their temporal expression profiles. OPTricluster has been successfully applied to four case studies: immune response in mice infected by malaria (Plasmodium chabaudi), systemic acquired resistance in Arabidopsis thaliana, similarities and differences between the inner and outer cotyledon in Brassica napus during seed development, and Brassica napus whole seed development. These studies showed that OPTricluster is robust to noise and is able to detect the similarities and differences between biological samples. Our analysis showed that OPTricluster generally outperforms other well known clustering algorithms such as TRICLUSTER, gTRICLUSTER and K-means, and can effectively mine the biological knowledge hidden in 3D short time-series gene expression data.
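The order-preserving (OP) concept on the time dimension can be illustrated with a small sketch: two temporal profiles share an OP pattern when their time points sort into the same rank order. This is a deliberate simplification of OPTricluster (no sample-dimension combinatorics, hypothetical gene names), not the published algorithm:

```python
import numpy as np

def same_op_pattern(profile_a, profile_b):
    """Two temporal expression profiles are order-preserving equivalent
    when their time points sort into the same rank order."""
    return np.array_equal(np.argsort(profile_a, kind="stable"),
                          np.argsort(profile_b, kind="stable"))

def op_groups(profiles):
    """Group profiles (a dict of name -> expression-over-time) by their
    order-preserving pattern, i.e. by the rank order of time points."""
    groups = {}
    for name, row in profiles.items():
        key = tuple(np.argsort(row, kind="stable"))
        groups.setdefault(key, []).append(name)
    return list(groups.values())
```

Grouping by rank order rather than by raw values is what makes the OP criterion robust to noise and to scale differences between samples.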
Object Orientated Simulation on Transputer Arrays Using Time Warp
1989-12-01
Speech Recognition Using Neural Nets and Dynamic Time Warping
1988-12-01
Chemical fingerprinting of petroleum biomarkers using time warping and PCA
DEFF Research Database (Denmark)
Christensen, Jan H.; Tomasi, Giorgio; Hansen, Asger B.
2005-01-01
A new method for chemical fingerprinting of petroleum biomarkers is described. The method consists of GC-MS analysis, preprocessing of GC-MS chromatograms, and principal component analysis (PCA) of selected regions. The preprocessing consists of baseline removal by derivatization, normalization…
3D temporal subtraction on multislice CT images using nonlinear warping technique
Ishida, Takayuki; Katsuragawa, Shigehiko; Kawashita, Ikuo; Kim, Hyounseop; Itai, Yoshinori; Awai, Kazuo; Li, Qiang; Doi, Kunio
2007-03-01
The detection of very subtle lesions and/or lesions overlapped with vessels on CT images is a time-consuming and difficult task for radiologists. In this study, we have developed a 3D temporal subtraction method to enhance interval changes between previous and current multislice CT images based on a nonlinear image warping technique. Our method provides a subtraction CT image obtained by subtracting a previous CT image from a current CT image. Reduction of misregistration artifacts is important in the temporal subtraction method. Therefore, our computerized method includes global and local image matching techniques for accurate registration of current and previous CT images. For global image matching, we selected the corresponding previous section image for each current section image by using 2D cross-correlation between a blurred low-resolution current CT image and a blurred previous CT image. For local image matching, we applied the 3D template matching technique with translation and rotation of volumes of interest (VOIs) which were selected in the current and the previous CT images. The local shift vector for each VOI pair was determined when the cross-correlation value became maximal in the 3D template matching. The local shift vectors at all voxels were determined by interpolation of the shift vectors of the VOIs, and then the previous CT image was nonlinearly warped according to the shift vector for each voxel. Finally, the warped previous CT image was subtracted from the current CT image. The 3D temporal subtraction method was applied to 19 clinical cases. Normal background structures such as vessels, ribs, and heart were removed without large misregistration artifacts. Thus, interval changes due to lung diseases were clearly enhanced as white shadows on the subtraction CT images.
The Hierarchical Spectral Merger Algorithm: A New Time Series Clustering Procedure
Euán, Carolina
2018-04-12
We present a new method for time series clustering which we call the Hierarchical Spectral Merger (HSM) method. This procedure is based on the spectral theory of time series and identifies series that share similar oscillations or waveforms. The extent of similarity between a pair of time series is measured using the total variation distance between their estimated spectral densities. At each step of the algorithm, every time two clusters merge, a new spectral density is estimated using all the information present in both clusters, which is representative of all the series in the new cluster. The method is implemented in the R package HSMClust. We present two applications of the HSM method, one to data coming from wave-height measurements in oceanography and the other to electroencephalogram (EEG) data.
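The total variation distance between estimated spectral densities can be sketched as follows, using a plain normalized periodogram as the spectral estimator (an illustrative assumption; the HSM paper's smoothed estimator may differ):

```python
import numpy as np

def tv_spectral_distance(x, y):
    """Total variation distance, 0.5 * sum |f_x - f_y|, between the
    normalized periodograms of two equal-length series."""
    def norm_periodogram(z):
        z = np.asarray(z, float) - np.mean(z)   # remove the mean first
        p = np.abs(np.fft.rfft(z)) ** 2         # raw periodogram
        return p / p.sum()                      # normalize to sum to 1
    fx, fy = norm_periodogram(x), norm_periodogram(y)
    return 0.5 * np.abs(fx - fy).sum()
```

Because both densities are normalized to sum to 1, the distance lies in [0, 1]: identical spectra give 0, and spectra with disjoint frequency support (e.g. two sinusoids at different frequencies) give values near 1.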
An algorithm of Saxena-Easo on fuzzy time series forecasting
Ramadhani, L. C.; Anggraeni, D.; Kamsyakawuni, A.; Hadi, A. F.
2018-04-01
This paper presents a Saxena-Easo fuzzy time series forecast model to study the prediction of the Indonesian inflation rate in 1970-2016. We use MATLAB software to compute this method. The Saxena-Easo fuzzy time series algorithm does not require stationarity, unlike conventional forecasting methods; it can handle time series whose values are linguistic, and it has the advantage of reducing and simplifying the calculation. It generally focuses on percentage change as the universe of discourse, interval partitioning and defuzzification. The results indicate that the actual data and the forecast data are close, with Root Mean Square Error (RMSE) = 1.5289.
AUTHOR|(INSPIRE)INSPIRE-00372074; The ATLAS collaboration; Sotiropoulou, Calliope Louisa; Annovi, Alberto; Kordas, Kostantinos
2016-01-01
In this paper the performance of the 2D pixel clustering algorithm developed for the Input Mezzanine card of the ATLAS Fast TracKer system is presented. Fast TracKer is an approved ATLAS upgrade whose goal is to provide a complete list of tracks to the ATLAS High Level Trigger for each level-1 accepted event, at up to 100 kHz event rate with a very small latency, on the order of 100 µs. The Input Mezzanine card is the input stage of the Fast TracKer system. Its role is to receive data from the silicon detector and perform real-time clustering, thus reducing the amount of data propagated to the subsequent processing levels with minimal information loss. We focus on the most challenging component on the Input Mezzanine card, the 2D clustering algorithm executed on the pixel data. We compare two different implementations of the algorithm. The first, the so-called ideal implementation, searches for clusters of pixels in the whole silicon module at once and calculates the cluster centroids exploiting the whole avail...
Gkaitatzis, Stamatios; The ATLAS collaboration
2016-01-01
In this paper the performance of the 2D pixel clustering algorithm developed for the Input Mezzanine card of the ATLAS Fast TracKer system is presented. Fast TracKer is an approved ATLAS upgrade whose goal is to provide a complete list of tracks to the ATLAS High Level Trigger for each level-1 accepted event, at up to 100 kHz event rate with a very small latency, on the order of 100 µs. The Input Mezzanine card is the input stage of the Fast TracKer system. Its role is to receive data from the silicon detector and perform real-time clustering, thus reducing the amount of data propagated to the subsequent processing levels with minimal information loss. We focus on the most challenging component on the Input Mezzanine card, the 2D clustering algorithm executed on the pixel data. We compare two different implementations of the algorithm. The first, the so-called ideal implementation, searches for clusters of pixels in the whole silicon module at once and calculates the cluster centroids exploiting the whole avai...
Grey Forecast Rainfall with Flow Updating Algorithm for Real-Time Flood Forecasting
Directory of Open Access Journals (Sweden)
Jui-Yi Ho
2015-04-01
The dynamic relationship between watershed characteristics and rainfall-runoff has been widely studied in recent decades. Since watershed rainfall-runoff is a non-stationary process, most deterministic flood forecasting approaches are ineffective without the assistance of adaptive algorithms. The purpose of this paper is to propose an effective flow forecasting system that integrates a rainfall forecasting model, a watershed runoff model, and a real-time updating algorithm. This study adopted a grey rainfall forecasting technique based on existing hourly rainfall data. A geomorphology-based runoff model was used to simulate the impacts of changing geo-climatic conditions on the hydrologic response of the unsteady and non-linear watershed system, and a flow updating algorithm was combined with it to estimate watershed runoff according to measured flow data. The proposed flood forecasting system was applied to three watersheds: one in the United States and two in Northern Taiwan. Four sets of rainfall-runoff simulations were performed to test the accuracy of the proposed flow forecasting technique. The results indicated that the forecast and observed hydrographs are in good agreement for all three watersheds. The proposed flow forecasting system could assist authorities in minimizing loss of life and property during flood events.
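The abstract does not specify the grey technique beyond "grey rainfall forecasting"; the classic GM(1,1) grey model is the usual building block of such forecasters, sketched below under that assumption (function name ours):

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Fit a GM(1,1) grey model to a short positive series and forecast ahead."""
    x0 = np.asarray(x0, float)
    x1 = np.cumsum(x0)                         # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])              # background (mean) values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # grey parameters
    k = np.arange(1, len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # whitened-equation solution
    x0_hat = np.diff(np.concatenate([[x0[0]], x1_hat]))
    return x0_hat[-steps:]                     # the forecast values
```

On a near-exponential series the fitted model reproduces the growth rate closely, which is why GM(1,1) is popular for short hydrological records.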
Application of the Trend Filtering Algorithm for Photometric Time Series Data
Gopalan, Giri; Plavchan, Peter; van Eyken, Julian; Ciardi, David; von Braun, Kaspar; Kane, Stephen R.
2016-08-01
Detecting transient light curves (e.g., transiting planets) requires high-precision data, and thus it is important to effectively filter systematic trends affecting ground-based wide-field surveys. We apply an implementation of the Trend Filtering Algorithm (TFA) to the 2MASS calibration catalog and selected Palomar Transient Factory (PTF) photometric time series data. TFA is successful at reducing the overall dispersion of light curves; however, it may over-filter intrinsic variables and increase "instantaneous" dispersion when a template set is not judiciously chosen. In an attempt to rectify these issues we modify the original TFA from the literature by including measurement uncertainties in its computation, including ancillary data correlated with noise, and algorithmically selecting a template set using clustering algorithms as suggested by various authors. This approach may be particularly useful for appropriately accounting for surveys with variable photometric precision and/or combined data sets. In summary, our contributions are to provide a MATLAB software implementation of TFA and a number of modifications tested on synthetic and real data, to summarize the performance of TFA and various modifications on real ground-based data sets (2MASS and PTF), and to assess the efficacy of TFA and modifications using synthetic light curve tests consisting of transiting and sinusoidal variables. While the transiting variables test indicates that these modifications confer no advantage to transit detection, the sinusoidal variables test indicates potential improvements in detection accuracy.
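At its core, TFA detrends a target light curve by subtracting its least-squares projection onto a set of template light curves. A minimal sketch of that step (uncertainty weighting and clustering-based template selection, the modifications discussed above, are omitted; names are ours):

```python
import numpy as np

def tfa_detrend(target, templates):
    """Remove systematics from a light curve by subtracting its least-squares
    projection onto template light curves (the core operation of TFA)."""
    T = np.column_stack(templates)               # shape (n_epochs, n_templates)
    coeffs, *_ = np.linalg.lstsq(T, target, rcond=None)
    return target - T @ coeffs
```

A signal that is an exact linear combination of the templates is filtered to zero, which is also why a poorly chosen template set can over-filter genuine variables.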
Dynamically warped theory space and collective supersymmetry breaking
International Nuclear Information System (INIS)
Carone, Christopher D.; Erlich, Joshua; Glover, Brian
2005-01-01
We study deconstructed gauge theories in which a warp factor emerges dynamically. We present nonsupersymmetric models in which the potential for the link fields has translational invariance, broken only by boundary effects that trigger an exponential profile of vacuum expectation values. The spectrum of physical states deviates exponentially from that of the continuum for large masses; we discuss the effects of such exponential towers on gauge coupling unification. We also present a supersymmetric example in which a warp factor is driven by Fayet-Iliopoulos terms. The model is peculiar in that it possesses a global supersymmetry that remains unbroken despite nonvanishing D-terms. Inclusion of gravity and/or additional messenger fields leads to the collective breaking of supersymmetry and to unusual phenomenology.
Little Randall-Sundrum model and a multiply warped spacetime
International Nuclear Information System (INIS)
McDonald, Kristian L.
2008-01-01
A recent work has investigated the possibility that the mass scale for the ultraviolet (UV) brane in the Randall-Sundrum (RS) model is of the order 10³ TeV. In this so-called 'little Randall-Sundrum' (LRS) model the bounds on the gauge sector are less severe, permitting a lower Kaluza-Klein scale and cleaner discovery channels. However, employing a low UV scale nullifies one major appeal of the RS model, namely the elegant explanation of the hierarchy between the Planck and weak scales. In this work we show that by localizing the gauge, fermion, and scalar sector of the LRS model on a five dimensional slice of a doubly warped spacetime one may obtain the low UV brane scale employed in the LRS model and motivate the weak-Planck hierarchy. We also consider the generalization to an n-warped spacetime.
Knox, C. E.; Cannon, D. G.
1979-01-01
A flight management algorithm designed to improve the accuracy of delivering the airplane fuel-efficiently to a metering fix at a time designated by air traffic control is discussed. The algorithm provides a 3-D path with time control (4-D) for a test B-737 airplane to make an idle-thrust, clean-configured descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path is calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance, with considerations given to gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithms and the results of the flight tests are discussed.
Bodin, Jacques
2015-03-01
In this study, new multi-dimensional time-domain random walk (TDRW) algorithms are derived from approximate one-dimensional (1-D), two-dimensional (2-D), and three-dimensional (3-D) analytical solutions of the advection-dispersion equation and from exact 1-D, 2-D, and 3-D analytical solutions of the pure-diffusion equation. These algorithms enable the calculation of both the time required for a particle to travel a specified distance in a homogeneous medium and the mass recovery at the observation point, which may be incomplete due to 2-D or 3-D transverse dispersion or diffusion. The method is extended to heterogeneous media, represented as a piecewise collection of homogeneous media. The particle motion is then decomposed along a series of intermediate checkpoints located on the medium interface boundaries. The accuracy of the multi-dimensional TDRW method is verified against (i) exact analytical solutions of solute transport in homogeneous media and (ii) finite-difference simulations in a synthetic 2-D heterogeneous medium of simple geometry. The results demonstrate that the method is ideally suited to purely diffusive transport and to advection-dispersion transport problems dominated by advection. Conversely, the method is not recommended for highly dispersive transport problems because the accuracy of the advection-dispersion TDRW algorithms degrades rapidly for a low Péclet number, consistent with the accuracy limit of the approximate analytical solutions. The proposed approach provides a unified methodology for deriving multi-dimensional time-domain particle equations and may be applicable to other mathematical transport models, provided that appropriate analytical solutions are available.
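For the 1-D advection-dispersion case, the TDRW travel time over a distance L can be sampled directly as a first-passage time, which for drift v and dispersion coefficient D follows an inverse Gaussian law with mean L/v and shape L²/(2D). A sketch under that standard result (the function name is ours, and this is only the simplest of the algorithms described above):

```python
import numpy as np

def sample_travel_times(L, v, D, n, rng=None):
    """Sample 1-D advective-dispersive travel times over distance L as
    first-passage times: inverse Gaussian, mean L/v, shape L**2/(2*D)."""
    rng = np.random.default_rng(rng)
    return rng.wald(L / v, L ** 2 / (2.0 * D), size=n)
```

Chaining such draws across the checkpoints on medium interfaces is what extends the scheme to piecewise-homogeneous heterogeneous media.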
Gauge and moduli hierarchy in a multiply warped braneworld scenario
International Nuclear Information System (INIS)
Das, Ashmita; SenGupta, Soumitra
2013-01-01
Discovery of a Higgs-like boson near the mass scale ∼126 GeV generates renewed interest in the gauge hierarchy problem in the standard model, related to the stabilisation of the Higgs mass within the TeV scale without any unnatural fine tuning. One of the successful attempts to resolve this problem has been the Randall–Sundrum warped geometry model. Subsequently this 5-dimensional model was extended to a doubly warped 6-dimensional (or higher) model which can offer a geometric explanation of the fermion mass hierarchy in the standard model of elementary particles (D. Choudhury and S. SenGupta, 2007 [1]). In an attempt to address the dark energy issue, we in this work extend such a 6-dimensional warped braneworld model to include non-flat 3-branes at the orbifold fixed points such that a small but non-vanishing brane cosmological constant is induced on our observable brane. We show that the requirements of a Planck to TeV scale warping along with a vanishingly small but non-zero cosmological constant on the visible brane with non-hierarchical moduli, each with scale close to the Planck length, lead to a scenario where the 3-branes can have energy scales either close to the TeV or close to the Planck scale. Such a scenario can address both the gauge hierarchy and the fermion mass hierarchy problem in the standard model without introducing hierarchical scales between the two moduli. Thus simultaneous resolutions of the gauge hierarchy problem, fermion mass hierarchy problem and non-hierarchical moduli problem are closely linked with the near flatness condition of our universe.
Exact Algorithm for the Capacitated Team Orienteering Problem with Time Windows
Directory of Open Access Journals (Sweden)
Junhyuk Park
2017-01-01
The capacitated team orienteering problem with time windows (CTOPTW) is the problem of determining players' paths that collect the maximum reward while satisfying capacity and time-window constraints. In this paper, we present an exact solution approach for the CTOPTW, which has not been done in the previous literature. We show that the branch-and-price (B&P) scheme which was originally developed for the team orienteering problem can be applied to the CTOPTW. To solve the pricing problems, we used implicit enumeration acceleration techniques, heuristic algorithms, and ng-route relaxations.
Research and Realization of the HJ-1C Real-time Software Frame Synchronization Algorithm
Directory of Open Access Journals (Sweden)
Hou Yang-shuan
2014-06-01
Conventional software frame synchronization methods are inefficient in processing huge volumes of continuous data without synchronization words. To improve the processing speed, a real-time synchronization algorithm is proposed based on reverse searching. Satellite data are grouped and searched in the reverse direction to avoid searching for synchronization words in huge runs of continuous invalid data; thus, the frame synchronization speed is improved enormously. The fastest processing speed is up to 15445.9 Mbps when HJ-1C data are tested. This method is currently applied in the HJ-1C quick-look system in remote sensing satellite ground stations.
Rasim; Junaeti, E.; Wirantika, R.
2018-01-01
Accurate forecasting for the sale of a product depends on the forecasting method used. The purpose of this research is to build a motorcycle sales forecasting application using the Fuzzy Time Series method combined with interval determination using an automatic clustering algorithm. Forecasting is done using data on motorcycle sales over the last ten years. The error rate of the forecasts is then measured using the Mean Percentage Error (MPE) and Mean Absolute Percentage Error (MAPE). The one-year-ahead forecasts obtained in this study achieve good accuracy.
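The two error measures are standard. A minimal sketch of MPE (signed, so over- and under-forecasts can cancel) and MAPE (magnitudes only), as would be used to score the forecasts (implementation ours):

```python
import numpy as np

def mpe(actual, forecast):
    """Mean Percentage Error: signed relative error, in percent."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean((a - f) / a)

def mape(actual, forecast):
    """Mean Absolute Percentage Error: mean magnitude of relative error, in percent."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((a - f) / a))
```

Reporting both is informative: a near-zero MPE with a large MAPE indicates errors that are big but unbiased.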
Evaluation of the global orbit correction algorithm for the APS real-time orbit feedback system
International Nuclear Information System (INIS)
Carwardine, J.; Evans, K. Jr.
1997-01-01
The APS real-time orbit feedback system uses 38 correctors per plane and has available up to 320 rf beam position monitors. Orbit correction is implemented using multiple digital signal processors. Singular value decomposition is used to generate a correction matrix from a linear response matrix model of the storage ring lattice. This paper evaluates the performance of the APS system in terms of its ability to correct localized and distributed sources of orbit motion. The impact of regulator gain and bandwidth, choice of beam position monitors, and corrector dynamics are discussed. The weighted least-squares algorithm is reviewed in the context of local feedback.
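The SVD-based correction described above solves a least-squares problem mapping BPM readings to corrector kicks, with small singular values truncated for robustness. A generic sketch of that computation (not the APS production code; names and the truncation interface are ours):

```python
import numpy as np

def correction_kicks(R, orbit_error, n_sv=None):
    """Least-squares corrector kicks from a BPM response matrix R via SVD.
    Solves R @ kicks ~= -orbit_error, optionally keeping only n_sv singular values."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    if n_sv is not None:
        U, s, Vt = U[:, :n_sv], s[:n_sv], Vt[:n_sv]
    # Pseudoinverse applied to the negated orbit error: V * diag(1/s) * U^T
    return -(Vt.T * (1.0 / s)) @ (U.T @ orbit_error)
```

Truncating the smallest singular values trades a little residual orbit error for much smaller, better-conditioned corrector strengths, which matters when the regulator runs in real time.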
On the best learning algorithm for web services response time prediction
DEFF Research Database (Denmark)
Madsen, Henrik; Albu, Razvan-Daniel; Popentiu-Vladicescu, Florin
2013-01-01
In this article we will examine the effect of different learning algorithms used to train an MLP (Multilayer Perceptron) with the intention of predicting web services response time. Web services do not necessitate a user interface. This may seem contradictory to most people's concept of what an application is. A Web service is better imagined as an application "segment," or better as a program enabler. Performance is an important quality aspect of Web services because of their distributed nature. Predicting the response of web services during their operation is very important.
Online Estimation of Time-Varying Volatility Using a Continuous-Discrete LMS Algorithm
Directory of Open Access Journals (Sweden)
Jacques Oksman
2008-09-01
The following paper addresses a problem of inference in financial engineering, namely online time-varying volatility estimation. The proposed method is based on an adaptive predictor for the stock price, built from an implicit integration formula. An estimate of the current volatility value which minimizes the mean square prediction error is calculated recursively using an LMS algorithm. The method is then validated on several synthetic examples as well as on real data. Throughout the illustration, the proposed method is compared with both UKF and offline volatility estimation.
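The recursion at the heart of such a method is a scalar LMS update: at each step the estimate moves against the gradient of the squared prediction error. A toy sketch of that recursion on a level-tracking problem (this is not the authors' stock-price predictor; the function name and step size are ours):

```python
import numpy as np

def lms_track(y, mu=0.05):
    """Track a slowly varying scalar level of a noisy series with an LMS update."""
    theta = y[0]                # initial estimate
    out = []
    for yt in y:
        e = yt - theta          # prediction error
        theta = theta + mu * e  # LMS step: -0.5*mu * d(e**2)/d(theta)
        out.append(theta)
    return np.array(out)
```

The step size mu trades tracking speed against noise sensitivity, the same trade-off the online volatility estimator faces.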
Chen, Yunjie; Kale, Seyit; Weare, Jonathan; Dinner, Aaron R; Roux, Benoît
2016-04-12
A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method.
Directory of Open Access Journals (Sweden)
Jingbo Zhang
2018-01-01
In the field of cognitive radio spectrum sensing, the adaptive silence period management mechanism (ASPM) has improved on the low time-resource utilization rate of the traditional silence period management mechanism (TSPM). However, at low signal-to-noise ratio (SNR), the ASPM algorithm increases the probability of missed detection of the primary user (PU). Focusing on this problem, this paper proposes an improved adaptive silence period management (IA-SPM) algorithm which can adaptively adjust the sensing parameters of the current period by combining feedback from the data communication with the sensing results of the previous period. The feedback information is carried in the channel with frequency resources rather than time resources, in order to adapt to parameter changes in the time-varying channel. Monte Carlo simulation results show that the detection probability of the IA-SPM is 10-15% higher than that of the ASPM under low SNR conditions.
Hwang, J Y; Kang, J M; Jang, Y W; Kim, H
2004-01-01
A novel algorithm and a real-time ambulatory monitoring system for fall detection in elderly people are described. Our system comprises an accelerometer, a tilt sensor and a gyroscope. For real-time monitoring, we used Bluetooth. The accelerometer measures kinetic force; the tilt sensor and gyroscope estimate body posture. We also propose an algorithm for fall detection using signals obtained from the system attached to the chest. To evaluate our system and algorithm, we experimented on three people aged over 26 years. Experiments on four cases (forward fall, backward fall, side fall and sit-stand) were repeated ten times each, and an experiment in daily life activity was performed once by each subject. These experiments showed that our system and algorithm could distinguish between falls and daily life activities. Moreover, the accuracy of fall detection is 96.7%. Our system is especially suited to long-term, real-time ambulatory monitoring of elderly people in emergency situations.
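A common way to combine the two cues in the abstract, an impact spike from the accelerometer followed by a lying posture from the tilt sensor, is simple thresholding. A toy sketch (thresholds, units and names are illustrative assumptions, not the paper's values):

```python
import numpy as np

def detect_fall(accel, tilt_deg, impact_g=2.5, lying_deg=60.0):
    """Toy fall detector: flag an acceleration spike followed by a lying posture.
    accel: acceleration magnitudes in g over a short window;
    tilt_deg: trunk tilt from vertical after the event."""
    impact = np.max(np.asarray(accel)) > impact_g
    lying = tilt_deg > lying_deg
    return bool(impact and lying)
```

Requiring both cues is what separates falls from daily activities: sitting down quickly gives a spike without a lying posture, and lying down slowly gives the posture without a spike.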
Warps, grids and curvature in triple vector bundles
Flari, Magdalini K.; Mackenzie, Kirill
2018-06-01
A triple vector bundle is a cube of vector bundle structures which commute in the (strict) categorical sense. A grid in a triple vector bundle is a collection of sections of each bundle structure with certain linearity properties. A grid provides two routes around each face of the triple vector bundle, and six routes from the base manifold to the total manifold; the warps measure the lack of commutativity of these routes. In this paper we first prove that the sum of the warps in a triple vector bundle is zero. The proof we give is intrinsic and, we believe, clearer than the proof using decompositions given earlier by one of us. We apply this result to the triple tangent bundle T^3M of a manifold and deduce (as earlier) the Jacobi identity. We further apply the result to the triple vector bundle T^2A for a vector bundle A using a connection in A to define a grid in T^2A. In this case the curvature emerges from the warp theorem.
Explicit Supersymmetry Breaking on Boundaries of Warped Extra Dimensions
Energy Technology Data Exchange (ETDEWEB)
Hall, Lawrence J.; Nomura, Yasunori; Okui, Takemichi; Oliver, Steven J.
2003-02-25
Explicit supersymmetry breaking is studied in higher dimensional theories by having boundaries respect only a subgroup of the bulk symmetry. If the boundary symmetry is the maximal subgroup allowed by the boundary conditions imposed on the fields, then the symmetry can be consistently gauged; otherwise gauging leads to an inconsistent theory. In a warped fifth dimension, an explicit breaking of all bulk supersymmetries by the boundaries is found to be inconsistent with gauging; unlike the case of flat 5D, complete supersymmetry breaking by boundary conditions is not consistent with supergravity. Despite this result, the low energy effective theory resulting from boundary supersymmetry breaking becomes consistent in the limit where gravity decouples, and such models are explored in the hope that some way of successfully incorporating gravity can be found. A warped constrained standard model leads to a theory with one Higgs boson with mass expected close to the experimental limit. A unified theory in a warped fifth dimension is studied with boundary breaking of both SU(5) gauge symmetry and supersymmetry. The usual supersymmetric prediction for gauge coupling unification holds even though the TeV spectrum is quite unlike the MSSM. Such a theory may unify matter and Higgs in the same SU(5) hypermultiplet.
Directory of Open Access Journals (Sweden)
Dong Zhang
2014-02-01
This paper presents a new profile shape matching stereovision algorithm that is designed to extract 3D information in real time. This algorithm obtains 3D information by matching profile intensity shapes of each corresponding row of the stereo image pair. It detects the corresponding matching patterns of the intensity profile rather than the intensity values of individual pixels or pixels in a small neighbourhood. This approach reduces the effect of the intensity and colour variations caused by lighting differences. As with all real-time vision algorithms, there is always a trade-off between accuracy and processing speed. This algorithm achieves a balance between the two to produce accurate results for real-time applications. To demonstrate its performance, the proposed algorithm is tested for human pose and hand gesture recognition to control a smart phone and an entertainment system.
Triggerless Readout with Time and Amplitude Reconstruction of Event Based on Deconvolution Algorithm
International Nuclear Information System (INIS)
Kulis, S.; Idzik, M.
2011-01-01
In future linear colliders like CLIC, where the period between bunch crossings is in the sub-nanosecond range (∼500 ps), an appropriate detection technique with triggerless signal processing is needed. In this work we discuss a technique, based on a deconvolution algorithm, suitable for time and amplitude reconstruction of an event. In the implemented method the output of a relatively slow shaper (many bunch crossing periods) is sampled and digitised in an ADC and then the deconvolution procedure is applied to the digital data. The time of an event can be found with a precision of a few percent of the sampling time. The signal-to-noise ratio is only slightly decreased after passing through the deconvolution filter. The performed theoretical and Monte Carlo studies are confirmed by the results of preliminary measurements obtained with a dedicated system comprising a radiation source, silicon sensor, front-end electronics, ADC and further digital processing implemented on a PC computer.
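For the idealized case of a single-pole (exponential) shaper sampled once per bunch crossing, deconvolution reduces to a two-tap filter that recovers the time and amplitude of each deposit. A minimal sketch of the principle (the actual front-end shaper and deconvolution filter are more elaborate; the names and the single-pole model are our assumptions):

```python
import numpy as np

def deconvolve_exponential(y, a):
    """Undo a single-pole shaper with per-sample decay a:
    y[n] = sum_k a**k * x[n-k]  implies  x[n] = y[n] - a*y[n-1]."""
    y = np.asarray(y, float)
    x = y.copy()
    x[1:] -= a * y[:-1]
    return x
```

Each nonzero sample of the recovered sequence gives both the amplitude of a deposit and, through its sample index, its time, which is the essence of triggerless event reconstruction from a slow shaper.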
International Nuclear Information System (INIS)
Pan Jun-Yang; Xie Yi
2015-01-01
With tremendous advances in modern techniques, Einstein's general relativity has become an inevitable part of deep space missions. We investigate the relativistic algorithm for time transfer between the proper time τ of the onboard clock and the Geocentric Coordinate Time, which extends some previous works by including the effects of propagation of electromagnetic signals. In order to evaluate the implicit algebraic equations and integrals in the model, we take an analytic approach to work out their approximate values. This analytic model might be used in an onboard computer because of its limited capability to perform calculations. Taking an orbiter like Yinghuo-1 as an example, we find that the contributions of the Sun, the ground station and the spacecraft dominate the outcomes of the relativistic corrections to the model.
Progress in parallel implementation of the multilevel plane wave time domain algorithm
Liu, Yang
2013-07-01
The computational complexity and memory requirements of classical schemes for evaluating transient electromagnetic fields produced by Ns dipoles active for Nt time steps scale as O(NtNs²) and O(Ns²), respectively. The multilevel plane wave time domain (PWTD) algorithm [A.A. Ergin et al., Antennas and Propagation Magazine, IEEE, vol. 41, pp. 39-52, 1999], viz. the extension of the frequency domain fast multipole method (FMM) to the time domain, reduces the above costs to O(NtNs log²Ns) and O(Ns^α) with α = 1.5 for surface current distributions and α = 4/3 for volumetric ones. Its favorable computational and memory costs notwithstanding, serial implementations of the PWTD scheme unfortunately remain somewhat limited in scope and ill-suited to tackle complex real-world scattering problems, and parallel implementations are called for. © 2013 IEEE.
A meshless EFG-based algorithm for 3D deformable modeling of soft tissue in real-time.
Abdi, Elahe; Farahmand, Farzam; Durali, Mohammad
2012-01-01
The meshless element-free Galerkin method was generalized and an algorithm was developed for 3D dynamic modeling of deformable bodies in real time. The efficacy of the algorithm was investigated in a 3D linear viscoelastic model of the human spleen subjected to a time-varying compressive force exerted by a surgical grasper. The model remained stable in spite of the considerably large deformations that occurred. There was good agreement between the results and those of an equivalent finite element model. The computational cost, however, was much lower, enabling the proposed algorithm to be effectively used in real-time applications.
Parallel algorithm of real-time infrared image restoration based on total variation theory
Zhu, Ran; Li, Miao; Long, Yunli; Zeng, Yaoyuan; An, Wei
2015-10-01
Image restoration is a necessary preprocessing step for infrared remote sensing applications. Traditional methods allow us to remove the noise but penalize too much the gradients corresponding to edges. Image restoration techniques based on variational approaches can solve this over-smoothing problem thanks to their well-defined mathematical modeling of the restoration procedure. The total variation (TV) of the infrared image is introduced as an L1 regularization term added to the objective energy functional. This converts the restoration process into an optimization problem for a functional involving a fidelity term to the image data plus a regularization term. Infrared image restoration with the TV-L1 model exploits the remote sensing data sufficiently and preserves information at edges caused by clouds. A numerical implementation algorithm is presented in detail. Analysis indicates that the structure of this algorithm can easily be parallelized. Therefore a parallel implementation of the TV-L1 filter based on a multicore architecture with shared memory is proposed for infrared real-time remote sensing systems. Massive computation of image data is performed in parallel by cooperating threads running simultaneously on multiple cores. Several groups of synthetic infrared image data are used to validate the feasibility and effectiveness of the proposed parallel algorithm. A quantitative analysis measuring the restored image quality relative to the input image is presented. Experiment results show that the TV-L1 filter can restore the varying background image reasonably, and that its performance can achieve the requirement of real-time image processing.
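A minimal sketch of TV-regularized restoration by explicit gradient descent on a smoothed (differentiable) total variation. Note the assumptions: the paper's L1 fidelity term is replaced here by the simpler L2 one, boundary handling is naive, and all parameters and names are ours; the serial loop below is also the part that the paper distributes across threads on a multicore machine:

```python
import numpy as np

def tv_denoise(f, lam=0.2, n_iter=200, tau=0.05, eps=0.1):
    """Explicit gradient descent on 0.5*||u - f||^2 + lam*TV_eps(u), where
    TV_eps(u) = sum(sqrt(|grad u|^2 + eps^2)) is a smoothed total variation."""
    u = f.astype(float).copy()
    for _ in range(n_iter):
        ux = np.diff(u, axis=1, append=u[:, -1:])    # forward differences
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)  # smoothed gradient norm
        px, py = ux / mag, uy / mag                  # bounded "dual" field
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= tau * ((u - f) - lam * div)             # gradient step
    return u
```

Because the diffusion coefficient behaves like lam/|grad u|, smoothing is strong in flat regions and weak across strong edges, which is what preserves the cloud boundaries the abstract mentions.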
A new comparison of hyperspectral anomaly detection algorithms for real-time applications
Díaz, María.; López, Sebastián.; Sarmiento, Roberto
2016-10-01
Due to the high spectral resolution that remotely sensed hyperspectral images provide, there has been an increasing interest in anomaly detection. The aim of anomaly detection is to single out pixels whose spectral signature differs significantly from the background spectra. Basically, anomaly detectors mark pixels with a certain score, considering as anomalies those whose scores are higher than a threshold. Receiver Operating Characteristic (ROC) curves have been widely used as an assessment measure in order to compare the performance of different algorithms. ROC curves are graphical plots which illustrate the trade-off between false positive and true positive rates. However, they are limited for making deep comparisons because they discard relevant factors required in real-time applications such as run times, costs of misclassification and the competence to mark anomalies with high scores. This last fact is fundamental in anomaly detection in order to distinguish anomalies easily from the background without any posterior processing. An extensive set of simulations has been made using different anomaly detection algorithms, comparing their performances and efficiencies using several extra metrics in order to complement ROC curve analysis. Results support our proposal and demonstrate that ROC curves do not by themselves provide a good visualization of detection performance. Moreover, a figure of merit has been proposed in this paper which encompasses in a single global metric all the measures yielded by the proposed additional metrics. Therefore, this figure, named Detection Efficiency (DE), takes into account several crucial types of performance assessment that ROC curves do not consider. Results demonstrate that algorithms with the best detection performances according to ROC curves do not have the highest DE values. Consequently, the recommendation of using extra measures to properly evaluate performances has been supported and justified by
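The abstract does not name the detectors compared; the canonical baseline in hyperspectral anomaly detection is the global RX detector, which scores each pixel by its Mahalanobis distance to image-wide background statistics. A sketch under that assumption (function name ours), whose scores would then be thresholded and fed into a ROC or DE analysis:

```python
import numpy as np

def rx_scores(cube):
    """Global RX anomaly detector: squared Mahalanobis distance of every pixel
    spectrum to the mean/covariance estimated from the whole image."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    Xc = X - mu
    scores = np.einsum('ij,jk,ik->i', Xc, np.linalg.inv(cov), Xc)
    return scores.reshape(h, w)
```

How far above the background score distribution an anomaly lands is exactly the "competence to mark anomalies with high scores" that the proposed DE metric rewards and ROC curves ignore.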
Collaborative real-time motion video analysis by human observer and image exploitation algorithms
Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen
2015-05-01
Motion video analysis is a challenging task, especially in real-time applications. In most safety- and security-critical applications, a human observer is an obligatory part of the overall analysis system. Over recent years, substantial progress has been made in the development of automated image exploitation algorithms. Hence, we investigate how the benefits of automated video analysis can be integrated suitably into current video exploitation systems. In this paper, a system design is introduced which strives to combine the qualities of the human observer's perception and the automated algorithms, thus aiming to improve the overall performance of a real-time video analysis system. The system design builds on prior work where we showed the benefits for the human observer of a user interface which utilizes the human visual focus of attention, revealed by the eye gaze direction, for interaction with the image exploitation system; eye tracker-based interaction allows much faster, more convenient, and equally precise moving target acquisition in video images than traditional computer mouse selection. The system design also builds on our prior work on automated target detection, segmentation, and tracking algorithms. Besides the system design, a first pilot study is presented, in which we investigated how the participants (all non-experts in video analysis) performed in initializing an object tracking subsystem by selecting a target for tracking. Preliminary results show that the gaze + key press technique is an effective, efficient, and easy-to-use interaction technique for performing selection operations on moving targets in videos in order to initialize an object tracking function.
Continuous time Boolean modeling for biological signaling: application of Gillespie algorithm.
Stoll, Gautier; Viara, Eric; Barillot, Emmanuel; Calzone, Laurence
2012-08-29
Mathematical modeling is used as a Systems Biology tool to answer biological questions, and more precisely, to validate a network that describes biological observations and to predict the effect of perturbations. This article presents an algorithm for modeling biological networks in a discrete framework with continuous time. There exist two major types of mathematical modeling approaches: (1) quantitative modeling, representing various chemical species concentrations by real numbers, mainly based on differential equations and the chemical kinetics formalism; and (2) qualitative modeling, representing chemical species concentrations or activities by a finite set of discrete values. Both approaches answer particular (and often different) biological questions. The qualitative modeling approach permits a simple and less detailed description of the biological system and efficiently identifies stable states, but is ill-suited to describing the transient kinetics leading to these states, since time is represented by discrete steps. Quantitative modeling, on the other hand, can describe the dynamical behavior of biological processes more accurately, as it follows the evolution of concentrations or activities of chemical species as a function of time, but it requires a large amount of parameter information that is difficult to find in the literature. Here, we propose a modeling framework based on a qualitative approach that is intrinsically continuous in time. The algorithm presented in this article fills the gap between qualitative and quantitative modeling. It is based on a continuous-time Markov process applied to a Boolean state space. In order to describe the temporal evolution of the biological process we wish to model, we explicitly specify the transition rates for each node. For that purpose, we built a language that can be seen as a generalization of Boolean equations. Mathematically, this approach can be translated into a set of ordinary differential
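A minimal sketch of the approach the abstract outlines: a Gillespie-style continuous-time Markov simulation on a Boolean state space, where each node carries explicit activation and deactivation rates that may depend on the current state. The two-node network and all rate functions below are invented for illustration; this is not the authors' implementation.

```python
import random

def gillespie_boolean(rates, init, t_max, seed=0):
    """Continuous-time Markov simulation on a Boolean state space.
    rates[i] is a pair (rate_up, rate_down) of functions giving node i's
    activation/deactivation rate as a function of the current state."""
    rng = random.Random(seed)
    state, t = list(init), 0.0
    trajectory = [(t, tuple(state))]
    while t < t_max:
        # propensity of flipping each node in the current state
        props = [down(state) if state[i] else up(state)
                 for i, (up, down) in enumerate(rates)]
        total = sum(props)
        if total == 0.0:                  # absorbing (stable) state reached
            break
        t += rng.expovariate(total)       # exponential waiting time
        r, acc = rng.uniform(0.0, total), 0.0
        for i, p in enumerate(props):     # choose node proportional to rate
            acc += p
            if p > 0.0 and r <= acc:
                state[i] = 1 - state[i]
                break
        trajectory.append((t, tuple(state)))
    return trajectory

# Toy 2-node network: node 0 induces node 1, node 1 represses node 0.
rates = [
    (lambda s: 0.0, lambda s: 2.0 if s[1] else 0.0),
    (lambda s: 1.0 if s[0] else 0.0, lambda s: 0.5),
]
traj = gillespie_boolean(rates, [1, 0], t_max=50.0)
```

Starting from (1, 0), the toy network eventually settles in the absorbing state (0, 0), where all propensities vanish.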
Evaluation of the Intel iWarp parallel processor for space flight applications
Hine, Butler P., III; Fong, Terrence W.
1993-01-01
The potential of a DARPA-sponsored advanced processor, the Intel iWarp, for use in future SSF Data Management Systems (DMS) upgrades is evaluated through integration into the Ames DMS testbed and applications testing. The iWarp is a distributed, parallel computing system well suited for high performance computing applications such as matrix operations and image processing. The system architecture is modular, supports systolic and message-based computation, and is capable of providing massive computational power in a low-cost, low-power package. As a consequence, the iWarp offers significant potential for advanced space-based computing. This research seeks to determine the iWarp's suitability as a processing device for space missions. In particular, the project focuses on evaluating the ease of integrating the iWarp into the SSF DMS baseline architecture and the iWarp's ability to support computationally stressing applications representative of SSF tasks.
Conformal hyperbolicity of Lorentzian warped products
International Nuclear Information System (INIS)
Markowitz, M.J.
1982-01-01
A space-time M is said to be conformally hyperbolic if the intrinsic conformal Lorentz pseudodistance dsub(M) is a true distance. In this paper criteria are derived which ensure the conformal hyperbolicity of certain space-times which are generalizations of the Robertson-Walker spaces. Then dsub(M) is determined explicitly for Einstein-de Sitter space, an important cosmological model. (author)
Conformal hyperbolicity of Lorentzian warped products
Energy Technology Data Exchange (ETDEWEB)
Markowitz, M.J. (Chicago Univ., IL (USA). Dept. of Mathematics)
1982-12-01
A space-time M is said to be conformally hyperbolic if the intrinsic conformal Lorentz pseudodistance dsub(M) is a true distance. In this paper criteria are derived which ensure the conformal hyperbolicity of certain space-times which are generalizations of the Robertson-Walker spaces. Then dsub(M) is determined explicitly for Einstein-de Sitter space, an important cosmological model.
A Time-Varied Probabilistic ON/OFF Switching Algorithm for Cellular Networks
Rached, Nadhir B.; Ghazzai, Hakim; Kadri, Abdullah; Alouini, Mohamed-Slim
2018-01-01
In this letter, we develop a time-varied probabilistic on/off switching planning method for cellular networks to reduce their energy consumption. It consists of a risk-aware optimization approach that takes into consideration the randomness of the user profile associated with each base station (BS). The proposed approach jointly determines (i) the instants at which the current active BS configuration must be updated due to an increase or decrease in the network traffic load, and (ii) the minimum set of BSs to be activated to serve the network's subscribers. Probabilistic metrics modeling the traffic profile variation are developed to trigger this dynamic on/off switching operation. Selected simulation results are then presented to validate the proposed algorithm for different system parameters.
A Time-Varied Probabilistic ON/OFF Switching Algorithm for Cellular Networks
Rached, Nadhir B.
2018-01-11
In this letter, we develop a time-varied probabilistic on/off switching planning method for cellular networks to reduce their energy consumption. It consists of a risk-aware optimization approach that takes into consideration the randomness of the user profile associated with each base station (BS). The proposed approach jointly determines (i) the instants at which the current active BS configuration must be updated due to an increase or decrease in the network traffic load, and (ii) the minimum set of BSs to be activated to serve the network's subscribers. Probabilistic metrics modeling the traffic profile variation are developed to trigger this dynamic on/off switching operation. Selected simulation results are then presented to validate the proposed algorithm for different system parameters.
Development of algorithms for real time track selection in the TOTEM experiment
Minafra, Nicola; Radicioni, E
The TOTEM experiment at the LHC has been designed to measure the total proton-proton cross-section with a luminosity-independent method and to study elastic and diffractive scattering at energies up to 14 TeV in the centre of mass. Elastic interactions are detected by Roman Pot stations, placed at 147 m and 220 m along the two outgoing beams. At present, data acquired by these detectors are stored on disk without any data reduction by the data acquisition chain. In this thesis, several tracking and selection algorithms, suitable for real-time implementation in the firmware of the back-end electronics, have been proposed and tested using real data.
FPGA-based real-time phase measuring profilometry algorithm design and implementation
Zhan, Guomin; Tang, Hongwei; Zhong, Kai; Li, Zhongwei; Shi, Yusheng
2016-11-01
Phase measuring profilometry (PMP) has been widely used in many fields, such as Computer Aided Verification (CAV) and Flexible Manufacturing Systems (FMS). High frame-rate (HFR) real-time vision-based feedback control will be a common demand in the near future. However, the instruction time delay in the computer caused by numerous repetitive operations greatly limits the efficiency of data processing. FPGAs offer a pipelined architecture and parallel execution, making them well suited to the PMP algorithm. In this paper, we design a fully pipelined hardware architecture for PMP. The hardware architecture includes rectification, phase calculation, phase shifting, and stereo matching. Experiments verified the performance of this method, and the factors that may influence computation accuracy were analyzed.
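The phase-calculation step mentioned above is typically the standard four-step phase-shifting formula; assuming that is the variant used here (the abstract does not say), it reduces to one arctangent per pixel. A software sketch follows — an FPGA pipeline would evaluate the arctangent with, e.g., a CORDIC core rather than a floating-point call.

```python
import math

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four fringe images shifted by 90 degrees each:
    I_k = A + B*cos(phi + (k-1)*pi/2), so I4 - I2 = 2B*sin(phi) and
    I1 - I3 = 2B*cos(phi)."""
    return math.atan2(i4 - i2, i1 - i3)

# Check against synthetic intensities generated from a known phase.
A, B, phi = 100.0, 50.0, 0.7
imgs = [A + B * math.cos(phi + k * math.pi / 2) for k in range(4)]
print(four_step_phase(*imgs))  # recovers phi = 0.7
```

The formula cancels the background A and modulation B, which is why it pipelines so cleanly: each output phase depends on four input samples and nothing else.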
Warped frequency transform analysis of ultrasonic guided waves in long bones
De Marchi, L.; Baravelli, E.; Xu, K.; Ta, D.; Speciale, N.; Marzani, A.; Viola, E.
2010-03-01
Long bones can be seen as irregular hollow tubes in which, for a given excitation frequency, many ultrasonic Guided Waves (GWs) can propagate. The analysis of GWs has the potential to reveal more information on both the geometry and material properties of the bone than any other method (such as dual-energy X-ray absorptiometry or quantitative computed tomography), and can be used in the assessment of osteoporosis and in the evaluation of fracture healing. In this study, time-frequency representations (TFRs) were used to gain insights into the expected behavior of GWs in bones. To this aim, we implemented a dedicated Warped Frequency Transform (WFT) which decomposes the spectrotemporal components of the different propagating modes by selecting an appropriate warping map to reshape the frequency axis. The map can be designed once the GW group velocity dispersion curves can be predicted. To this purpose, the bone is considered as a hollow cylinder with inner and outer diameters of 16.6 and 24.7 mm, respectively, and linear poroelastic material properties in agreement with the low level of stresses induced by the waves. Time-transient events obtained experimentally, via a piezoelectric ultrasonic set-up applied to bovine tibiae, are analyzed. The results show that the WFT limits the interference patterns which appear with other TFRs (such as scalograms or warpograms) and produces a sparse representation suitable for characterization purposes. In particular, the mode-frequency combinations propagating with minimal losses are identified.
First arrival time picking for microseismic data based on DWSW algorithm
Li, Yue; Wang, Yue; Lin, Hongbo; Zhong, Tie
2018-03-01
First arrival time picking is a crucial step in microseismic data processing. When the signal-to-noise ratio (SNR) is low, however, it is difficult to pick the first arrival time accurately with traditional methods. In this paper, we propose the double-sliding-window SW (DWSW) method based on the Shapiro-Wilk (SW) test. The DWSW method detects the first arrival time by making full use of the differences in statistical properties between background noise and effective signals. Specifically, we take the moment at which the statistic of our method reaches its maximum as the first arrival time. Hence, there is no need to select a threshold, which makes the algorithm simpler to apply when the SNR of the microseismic data is low. To verify the reliability of the proposed method, a series of experiments is performed on both synthetic and field microseismic data. Our method is compared with the traditional short-time-average/long-time-average (STA/LTA) method, the Akaike information criterion, and the kurtosis method. Analysis results indicate that the accuracy of the proposed method is superior to that of the other three methods when the SNR is as low as -10 dB.
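For reference, the traditional STA/LTA baseline the abstract compares against can be sketched as follows: the ratio of mean energy in a short window just after a sample to that in a long window just before it peaks near the signal onset. The window lengths and synthetic trace below are illustrative only; the authors' DWSW statistic replaces this energy ratio with a Shapiro-Wilk-based contrast between the two windows.

```python
import math
import random

def sta_lta_pick(trace, sta_win=20, lta_win=100):
    """Classic STA/LTA first-arrival picker: return the sample where the
    short-term-average / long-term-average energy ratio is largest."""
    energy = [x * x for x in trace]
    csum = [0.0]
    for e in energy:                    # prefix sums for O(1) window means
        csum.append(csum[-1] + e)
    best_k, best_ratio = lta_win, float("-inf")
    for k in range(lta_win, len(trace) - sta_win):
        sta = (csum[k + sta_win] - csum[k]) / sta_win       # just after k
        lta = (csum[k] - csum[k - lta_win]) / lta_win       # just before k
        ratio = sta / (lta + 1e-12)
        if ratio > best_ratio:
            best_ratio, best_k = ratio, k
    return best_k

# Synthetic microseismic trace: Gaussian noise with an onset at sample 200.
rng = random.Random(1)
trace = [rng.gauss(0.0, 1.0) for _ in range(300)]
for n in range(200, 300):
    trace[n] += 8.0 * math.sin(0.4 * (n - 200))
print(sta_lta_pick(trace))  # picks near sample 200
```

At low SNR the energy contrast shrinks and this ratio becomes threshold-sensitive, which is the weakness the statistics-based DWSW approach targets.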
2015-03-01
COMMUNICATION AND JAMMING BDA OF OFDMA COMMUNICATION SYSTEMS USING THE SOFTWARE DEFINED RADIO PLATFORM WARP. Thesis by Kate J. Yaxley, B.E. (Elec) Hons Div II, FLTLT, Royal... Presented to the Faculty, Department of Electrical and...
FPGA-Based Implementation of Lithuanian Isolated Word Recognition Algorithm
Directory of Open Access Journals (Sweden)
Tomyslav Sledevič
2013-05-01
Full Text Available The paper describes an FPGA-based implementation of a Lithuanian isolated word recognition algorithm. An FPGA was selected for a parallel implementation in VHDL to ensure fast signal processing at a low clock rate. Cepstrum analysis was applied for feature extraction from voice. The dynamic time warping (DTW) algorithm was used to compare vectors of cepstrum coefficients. A feature library of 100 words was created and stored in the internal FPGA BRAM memory. Experimental testing with speaker-dependent records demonstrated a recognition rate of 94%. A recognition rate of 58% was achieved for speaker-independent records. Calculation of the cepstrum coefficients lasted 8.52 ms at a 50 MHz clock, while 100 DTW comparisons took 66.56 ms at a 25 MHz clock. Article in Lithuanian.
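The DTW comparison of cepstrum-coefficient vectors mentioned above can be sketched with the classic quadratic-time recurrence (a software illustration only; the paper's FPGA design computes the same recurrence in hardware, and the toy 3-dimensional "cepstrum" frames below are invented for the example):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping between two sequences of feature
    vectors (rows are frames, columns are cepstrum coefficients)."""
    n, m = len(a), len(b)
    # cost[i][j] = minimal accumulated distance aligning a[:i] with b[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # local frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Toy usage: y repeats x with one extra, slightly shifted frame.
x = np.array([[0.0, 0.1, 0.2], [1.0, 1.1, 1.2], [2.0, 2.1, 2.2]])
y = np.array([[0.0, 0.1, 0.2], [0.5, 0.6, 0.7],
              [1.0, 1.1, 1.2], [2.0, 2.1, 2.2]])
print(dtw_distance(x, x))  # identical sequences align with zero cost: 0.0
```

At recognition time, the utterance's feature sequence is compared against all 100 stored templates and the smallest DTW distance wins.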
Directory of Open Access Journals (Sweden)
Erxu Pi
Full Text Available Temperature is a predominant environmental factor affecting grass germination and distribution. Various thermal-germination models for the prediction of grass seed germination have been reported, in which the relationship between temperature and germination is defined with kernel functions, such as quadratic or quintic functions. However, their prediction accuracies warrant further improvement. The purpose of this study is to evaluate the relative prediction accuracies of genetic algorithm (GA) models, which are automatically parameterized with observed germination data. The seeds of five P. pratensis (Kentucky bluegrass, KB) cultivars were germinated under 36 day/night temperature regimes ranging from 5/5 to 40/40 °C with 5 °C increments. Results showed that optimal germination percentages of all five tested KB cultivars were observed under a fluctuating temperature regime of 20/25 °C. Meanwhile, the constant temperature regimes (e.g., 5/5, 10/10, 15/15 °C, etc.) suppressed the germination of all five cultivars. Furthermore, the back propagation artificial neural network (BP-ANN) algorithm was integrated to optimize temperature-germination response models from these observed germination data. It was found that integration of GA-BP-ANN (back propagation aided genetic algorithm artificial neural network) significantly reduced the Root Mean Square Error (RMSE) values from 0.21~0.23 to 0.02~0.09. In an effort to provide a more reliable prediction of optimum sowing time for the tested KB cultivars in various regions of the country, the optimized GA-BP-ANN models were applied to map spatial and temporal germination percentages of bluegrass cultivars in China. Our results demonstrate that the GA-BP-ANN model is a convenient and reliable option for constructing thermal-germination response models, since it automates model parameterization and has excellent prediction accuracy.
National Research Council Canada - National Science Library
Sorgaard, Duane
2004-01-01
.... A time-to-location algorithm can successfully resolve a geographic location of a computer node using only latency information from known sites and mathematically calculating the Euclidean distance...
National Research Council Canada - National Science Library
Moon, II, Ron L
2005-01-01
...) development environment into an FPGA-based embedded-platform development board. Research at the Naval Postgraduate School has produced a revolutionary time-optimal spacecraft control algorithm based upon the Legendre Pseudospectral method...
Wang, Libing; Mao, Chengxiong; Wang, Dan; Lu, Jiming; Zhang, Junfeng; Chen, Xun
2014-01-01
In order to control the cascaded H-bridge (CHB) converter with a staircase modulation strategy in a real-time manner, a real-time, closed-loop control algorithm based on an artificial neural network (ANN) for the three-phase CHB converter is proposed in this paper. It costs little computation time and memory, and it has two steps. In the first step, the hierarchical particle swarm optimizer with time-varying acceleration coefficients (HPSO-TVAC) algorithm is employed to minimize the total harmonic distortion (THD) and generate the optimal switching angles offline. In the second step, part of the optimal switching angles are used to train an ANN, and the well-designed ANN can generate optimal switching angles in a real-time manner. Compared with previous real-time algorithms, the proposed algorithm is suitable for a wider range of modulation indices and results in a smaller THD and a lower calculation time. Furthermore, the well-designed ANN is embedded into a closed-loop control algorithm for the CHB converter with variable DC voltage sources. Simulation results demonstrate that the proposed closed-loop control algorithm is able to quickly stabilize the load voltage and minimize the line current's THD (<5%) under DC source or load disturbances. In the real design stage, a switching angle pulse generation scheme is proposed, and experimental results verify its correctness.
Directory of Open Access Journals (Sweden)
Libing Wang
2014-01-01
Full Text Available In order to control the cascaded H-bridge (CHB) converter with a staircase modulation strategy in a real-time manner, a real-time, closed-loop control algorithm based on an artificial neural network (ANN) for the three-phase CHB converter is proposed in this paper. It costs little computation time and memory, and it has two steps. In the first step, the hierarchical particle swarm optimizer with time-varying acceleration coefficients (HPSO-TVAC) algorithm is employed to minimize the total harmonic distortion (THD) and generate the optimal switching angles offline. In the second step, part of the optimal switching angles are used to train an ANN, and the well-designed ANN can generate optimal switching angles in a real-time manner. Compared with previous real-time algorithms, the proposed algorithm is suitable for a wider range of modulation indices and results in a smaller THD and a lower calculation time. Furthermore, the well-designed ANN is embedded into a closed-loop control algorithm for the CHB converter with variable DC voltage sources. Simulation results demonstrate that the proposed closed-loop control algorithm is able to quickly stabilize the load voltage and minimize the line current's THD (<5%) under DC source or load disturbances. In the real design stage, a switching angle pulse generation scheme is proposed, and experimental results verify its correctness.
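Under the common assumption of equal unit DC sources, the staircase waveform's harmonic content that the offline HPSO-TVAC step minimizes has a closed Fourier form, b_n = (4/(n·pi)) Σ_i cos(n·θ_i) for odd n. The sketch below uses that textbook formulation; the harmonic cutoff and the exclusion of triplen harmonics (which cancel in three-phase line-to-line voltages) are standard conventions, not details taken from the paper.

```python
import math

def staircase_harmonics(angles, n_max=49):
    """Fourier magnitudes |b_n| = |(4/(n*pi)) * sum_i cos(n*theta_i)| of a
    CHB staircase waveform with one unit DC source per switching angle."""
    return {n: abs(4.0 / (n * math.pi) * sum(math.cos(n * a) for a in angles))
            for n in range(1, n_max + 1, 2)}

def thd(angles, n_max=49):
    """THD of the staircase waveform relative to its fundamental,
    excluding triplen harmonics."""
    mags = staircase_harmonics(angles, n_max)
    harm = sum(m * m for n, m in mags.items() if n > 1 and n % 3 != 0)
    return math.sqrt(harm) / mags[1]

# Three switching angles (radians), e.g. one phase leg of a 7-level CHB.
print(thd([0.2, 0.6, 1.0]))
```

An optimizer such as HPSO-TVAC would search the angle vector (subject to 0 < θ1 < θ2 < θ3 < π/2 and a fundamental-amplitude constraint) for the minimum of this objective; the ANN then interpolates the resulting angle tables online.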
Electrodynamics in a 6D warped geometry
International Nuclear Information System (INIS)
Aranda, A.; Diaz-Cruz, J. L.; Linares, R.; Morales-Tecotl, H. A.; Pedraza, O.
2009-01-01
We obtain the effective 4D action that arises from a 6D free gauge action in the space-time metric RSI-1. Solving explicitly the 6D equations of motion we obtain the Kaluza-Klein decomposition of the 6D gauge field. This work constitutes the first step towards the discussion of the Gauge-Higgs Unification scenario in this background.
Stone, Wesley W.; Gilliom, Robert J.
2012-01-01
Watershed Regressions for Pesticides (WARP) models, previously developed for atrazine at the national scale, are improved for application to the United States (U.S.) Corn Belt region by developing region-specific models that include watershed characteristics that are influential in predicting atrazine concentration statistics within the Corn Belt. WARP models for the Corn Belt (WARP-CB) were developed for annual maximum moving-average (14-, 21-, 30-, 60-, and 90-day durations) and annual 95th-percentile atrazine concentrations in streams of the Corn Belt region. The WARP-CB models accounted for 53 to 62% of the variability in the various concentration statistics among the model-development sites. Model predictions were within a factor of 5 of the observed concentration statistic for over 90% of the model-development sites. The WARP-CB residuals and uncertainty are lower than those of the National WARP model for the same sites. Although atrazine-use intensity is the most important explanatory variable in the National WARP models, it is not a significant variable in the WARP-CB models. The WARP-CB models provide improved predictions for Corn Belt streams draining watersheds with atrazine-use intensities of 17 kg/km2 of watershed area or greater.
Analysis of Time and Frequency Domain Pace Algorithms for OFDM with Virtual Subcarriers
DEFF Research Database (Denmark)
Rom, Christian; Manchón, Carles Navarro; Deneire, Luc
2007-01-01
This paper studies common linear frequency direction pilot-symbol aided channel estimation algorithms for orthogonal frequency division multiplexing in a UTRA long term evolution context. Three deterministic algorithms are analyzed: the maximum likelihood (ML) approach, the noise reduction algori...
Energy Technology Data Exchange (ETDEWEB)
Alexander S. Rattner; Donna Post Guillen; Alark Joshi
2012-12-01
Photo- and physically-realistic techniques are often insufficient for the visualization of simulation results, especially for 3D and time-varying datasets. Substantial research efforts have been dedicated to the development of non-photorealistic and illustration-inspired visualization techniques for compact and intuitive presentation of such complex datasets. While these efforts have yielded valuable visualization results, a great deal of work has been duplicated across studies, as individual research groups often develop purpose-built platforms. Additionally, interoperability between illustrative visualization software is limited due to the specialized processing and rendering architectures employed in different studies. In this investigation, a generalized framework for illustrative visualization is proposed and implemented in marmotViz, a ParaView plugin, enabling its use on a variety of computing platforms with various data file formats and mesh geometries. Detailed descriptions of the region-of-interest identification and feature-tracking algorithms incorporated into this tool are provided. Additionally, implementations of multiple illustrative effect algorithms are presented to demonstrate the use and flexibility of this framework. By providing a framework and useful underlying functionality, the marmotViz tool can act as a springboard for future research in the field of illustrative visualization.
Efficient Algorithms for Real-Time GPU Volumetric Cloud Rendering with Enhanced Geometry
Directory of Open Access Journals (Sweden)
Carlos Jiménez de Parga
2018-04-01
Full Text Available This paper presents several new techniques for volumetric cloud rendering using efficient algorithms and data structures based on ray-tracing methods for cumulus generation, achieving an optimum balance between realism and performance. These techniques target applications such as flight simulations, computer games, and educational software, even with conventional graphics hardware. The contours of clouds are defined by implicit mathematical expressions or triangulated structures inside which volumetric rendering is performed. Novel techniques are used to reproduce the asymmetrical nature of clouds and the effects of light-scattering, with low computing costs. The work includes a new method to create randomized fractal clouds using a recursive grammar. The graphical results are comparable to those produced by state-of-the-art, hyper-realistic algorithms. These methods provide real-time performance, and are superior to particle-based systems. These outcomes suggest that our methods offer a good balance between realism and performance, and are suitable for use in the standard graphics industry.