Spatio-temporal point process filtering methods with an application
Czech Academy of Sciences Publication Activity Database
Frcalová, B.; Beneš, V.; Klement, Daniel
2010-01-01
Roč. 21, 3-4 (2010), s. 240-252. ISSN 1180-4009. R&D Projects: GA AV ČR(CZ) IAA101120604. Institutional research plan: CEZ:AV0Z50110509. Keywords: Cox point process * filtering * spatio-temporal modelling * spike. Subject RIV: BA - General Mathematics. Impact factor: 0.750, year: 2010
Clusterless Decoding of Position From Multiunit Activity Using A Marked Point Process Filter
Deng, Xinyi; Liu, Daniel F.; Kay, Kenneth; Frank, Loren M.; Eden, Uri T.
2016-01-01
Point process filters have been applied successfully to decode neural signals and track neural dynamics. Traditionally, these methods assume that multiunit spiking activity has already been correctly spike-sorted. As a result, these methods are not appropriate for situations where sorting cannot be performed with high precision, such as real-time decoding for brain-computer interfaces. As the unsupervised spike-sorting problem remains unsolved, we took an alternative approach that takes advantage of recent insights into clusterless decoding. Here we present a new point process decoding algorithm that does not require multiunit signals to be sorted into individual units. We use the theory of marked point processes to construct a function that characterizes the relationship between a covariate of interest (in this case, the location of a rat on a track) and features of the spike waveforms. In our example, we use tetrode recordings, and the marks represent a four-dimensional vector of the maximum amplitudes of the spike waveform on each of the four electrodes. In general, the marks may represent any features of the spike waveform. We then use Bayes' rule to estimate spatial location from hippocampal neural activity. We validate our approach with a simulation study and with experimental data recorded in the hippocampus of a rat moving through a linear environment. Our decoding algorithm accurately reconstructs the rat's position from unsorted multiunit spiking activity. We then compare the quality of our decoding algorithm to that of a traditional spike-sorting and decoding algorithm. Our analyses show that the proposed decoding algorithm performs equivalently to, or better than, algorithms based on sorted single-unit activity. These results provide a path toward accurate real-time decoding of spiking patterns that could be used to carry out content-specific manipulations of population activity in the hippocampus or elsewhere in the brain. PMID:25973549
A customizable stochastic state point process filter (SSPPF) for neural spiking activity.
Xin, Yao; Li, Will X Y; Min, Biao; Han, Yan; Cheung, Ray C C
2013-01-01
The Stochastic State Point Process Filter (SSPPF) is effective for adaptive signal processing. In particular, it has been successfully applied to neural signal coding/decoding in recent years. Recent work has proven its efficiency in tracking non-parametric coefficients when modeling the mammalian nervous system. However, the existing SSPPF has only been realized on commercial software platforms, which limits its computational capability. In this paper, the first hardware architecture of the SSPPF is designed and successfully implemented on a field-programmable gate array (FPGA), providing a more efficient means of coefficient tracking in a well-established generalized Laguerre-Volterra model for research on mammalian hippocampal spiking activity. By exploiting the intrinsic parallelism of the FPGA, the proposed architecture is able to process matrices or vectors of arbitrary size and is efficiently scalable. Experimental results show its superior performance compared to the software implementation, while maintaining the numerical precision. This architecture can also potentially be utilized in future hippocampal cognitive neural prosthesis design.
Poisson branching point processes
International Nuclear Information System (INIS)
Matsuo, K.; Teich, M.C.; Saleh, B.E.A.
1984-01-01
We investigate the statistical properties of a special branching point process. The initial process is assumed to be a homogeneous Poisson point process (HPP). The initiating events at each branching stage are carried forward to the following stage. In addition, each initiating event independently contributes a nonstationary Poisson point process (whose rate is a specified function) located at that point. The additional contributions from all points of a given stage constitute a doubly stochastic Poisson point process (DSPP) whose rate is a filtered version of the initiating point process at that stage. The process studied is a generalization of a Poisson branching process in which random time delays are permitted in the generation of events. Particular attention is given to the limit in which the number of branching stages is infinite while the average number of added events per event of the previous stage is infinitesimal. In the special case when the branching is instantaneous this limit of continuous branching corresponds to the well-known Yule--Furry process with an initial Poisson population. The Poisson branching point process provides a useful description for many problems in various scientific disciplines, such as the behavior of electron multipliers, neutron chain reactions, and cosmic ray showers
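One branching stage of such a process is easy to simulate. The sketch below uses an assumed exponential delay kernel; the rate, mean offspring count `mu` and time constant `tau` are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def hpp(rate, T):
    # Homogeneous Poisson process (HPP) on [0, T].
    n = rng.poisson(rate * T)
    return np.sort(rng.uniform(0.0, T, n))

def one_branching_stage(parents, T, mu, tau):
    # Each parent at time t contributes offspring from a nonstationary
    # Poisson process with rate (mu/tau) * exp(-(s - t)/tau) for s > t,
    # i.e. mu offspring on average (an assumed exponential delay kernel).
    # Parents are carried forward, so the next stage is parents + offspring.
    children = []
    for t in parents:
        k = rng.poisson(mu)                  # offspring count of this parent
        delays = rng.exponential(tau, k)     # random time delays
        children.extend(t + d for d in delays if t + d <= T)
    return np.sort(np.concatenate([parents, np.array(children)]))

stage0 = hpp(rate=5.0, T=10.0)               # initiating HPP
stage1 = one_branching_stage(stage0, T=10.0, mu=0.5, tau=0.2)
print(len(stage0), len(stage1))
```

Iterating `one_branching_stage` with many stages and small `mu` approximates the continuous-branching limit discussed in the abstract.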
Filtering Photogrammetric Point Clouds Using Standard LIDAR Filters Towards DTM Generation
Zhang, Z.; Gerke, M.; Vosselman, G.; Yang, M. Y.
2018-05-01
Digital Terrain Models (DTMs) can be generated from point clouds acquired by laser scanning or photogrammetric dense matching. During the last two decades, much effort has been devoted to developing robust filtering algorithms for airborne laser scanning (ALS) data. With the quality of point clouds from dense image matching (DIM) steadily improving, the research question that arises is whether standard Lidar filters can be used to filter photogrammetric point clouds as well. Experiments are implemented to filter two dense matching point clouds with different noise levels. Results show that the standard Lidar filter is robust to random noise. However, artefacts and blunders in the DIM points often appear due to low contrast or poor texture in the images, and filtering is erroneous in these locations. Filtering DIM points pre-processed by a ranking filter brings a higher Type II error (i.e. non-ground points labelled as ground points) but a much lower Type I error (i.e. bare-ground points labelled as non-ground points). Finally, the potential DTM accuracy that can be achieved from DIM points is evaluated. Two DIM point clouds derived by Pix4Dmapper and SURE are compared. On grassland, dense matching generates points higher than the true terrain surface, which results in incorrectly elevated DTMs. The application of the ranking filter leads to a reduced bias in the DTM height, but a slightly increased noise level.
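The Type I / Type II error rates used in such evaluations can be computed directly from boolean ground/non-ground masks. A minimal sketch (the array names and toy labels are hypothetical):

```python
import numpy as np

def filter_errors(truth_ground, pred_ground):
    # Type I error: true ground points labelled non-ground.
    # Type II error: non-ground points labelled ground.
    # Both inputs are boolean masks over the same point cloud.
    truth_ground = np.asarray(truth_ground, bool)
    pred_ground = np.asarray(pred_ground, bool)
    type1 = (truth_ground & ~pred_ground).sum() / max(truth_ground.sum(), 1)
    type2 = (~truth_ground & pred_ground).sum() / max((~truth_ground).sum(), 1)
    return float(type1), float(type2)

truth = np.array([1, 1, 1, 0, 0], bool)   # 3 ground, 2 non-ground points
pred  = np.array([1, 0, 1, 1, 0], bool)   # hypothetical filter output
t1, t2 = filter_errors(truth, pred)
print(t1, t2)  # → 0.3333333333333333 0.5
```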
Frame Filtering and Skipping for Point Cloud Data Video Transmission
Directory of Open Access Journals (Sweden)
Carlos Moreno
2017-01-01
Sensors for collecting 3D spatial data from the real world are becoming more important. They are a prime research topic and have applications in consumer markets such as medicine, entertainment, and robotics. However, a primary concern with collecting these data is the vast amount of information being generated, which must be processed before being transmitted. To address this issue, we propose the use of filtering methods and frame skipping. To collect the 3D spatial data, called point clouds, we used the Microsoft Kinect sensor. In addition, we utilized the Point Cloud Library to process and filter the data generated by the Kinect. Two different computers were used: a client, which collects, filters, and transmits the point clouds; and a server, which receives and visualizes them. The client also checks for similarity in consecutive frames, skipping those that reach a similarity threshold. In order to compare the filtering methods and test the effectiveness of the frame-skipping technique, quality of service (QoS) metrics such as frame rate and percentage of points filtered were introduced. These metrics indicate how well a certain combination of filtering method and frame skipping accomplishes the goal of transmitting point clouds from one location to another. We found that the pass-through filter in conjunction with frame skipping provides the best relative QoS. However, results also show that there is still too much data for a satisfactory QoS. For a real-time system to provide reasonable end-to-end quality, dynamic compression and progressive transmission need to be utilized.
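A rough sketch of the client-side pipeline — pass-through filtering plus similarity-based frame skipping — in NumPy rather than the Point Cloud Library. The voxel-overlap similarity measure and all thresholds below are assumptions, not the paper's exact choices:

```python
import numpy as np

def pass_through(points, axis=2, lo=0.5, hi=4.0):
    # Keep only points whose coordinate on `axis` lies in [lo, hi],
    # mimicking a pass-through filter (assumed depth limits in metres).
    coord = points[:, axis]
    return points[(coord >= lo) & (coord <= hi)]

def should_skip(prev, curr, threshold=0.95):
    # Skip the current frame when it is too similar to the previous one.
    # Similarity here is a crude voxel-occupancy overlap (an assumption;
    # the paper's similarity measure may differ).
    def occupancy(pts):
        return set(map(tuple, np.floor(pts / 0.1).astype(int)))
    a, b = occupancy(prev), occupancy(curr)
    if not a and not b:
        return True
    sim = len(a & b) / max(len(a | b), 1)
    return sim >= threshold

rng = np.random.default_rng(2)
frame = rng.uniform(0.0, 5.0, size=(1000, 3))  # fake Kinect frame
kept = pass_through(frame)
print(len(kept), should_skip(frame, frame))
```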
TUNNEL POINT CLOUD FILTERING METHOD BASED ON ELLIPTIC CYLINDRICAL MODEL
Directory of Open Access Journals (Sweden)
N. Zhu
2016-06-01
The large number of bolts and screws attached to the subway shield ring plates, along with the many accessories of metal stents and electrical equipment mounted on the tunnel walls, make the laser point cloud data include many non-tunnel-section points (hereinafter referred to as non-points), affecting the accuracy of modeling and deformation monitoring. This paper proposes a filtering method for the point cloud based on an elliptic cylindrical model. The original laser point cloud data is first projected onto a horizontal plane, and a searching algorithm is given to extract the edge points of both sides, which are further used to fit the tunnel central axis. Along the axis the point cloud is segmented regionally, and then fitted iteratively as a smooth elliptic cylindrical surface. This processing enables the automatic filtering of the inner-wall non-points. Two groups of experiments showed consistent results: the elliptic cylindrical model based method can effectively filter out the non-points and meet the accuracy requirements for subway deformation monitoring. The method provides a new mode for the periodic monitoring of all-around deformation of tunnel sections in routine subway operation and maintenance.
Adaptive robust Kalman filtering for precise point positioning
International Nuclear Information System (INIS)
Guo, Fei; Zhang, Xiaohong
2014-01-01
The optimality of a precise point positioning (PPP) solution using a Kalman filter is closely connected to the quality of the a priori information about the process noise and the updated measurement noise, which are sometimes difficult to obtain. Also, the estimation environment in dynamic or kinematic applications is not always fixed but is subject to change. To overcome these problems, an adaptive robust Kalman filtering algorithm is applied for PPP processing; its main feature is the introduction of an equivalent covariance matrix to resist unexpected outliers and an adaptive factor to balance the contributions of observational information and predicted information from the system dynamic model. The basic models of PPP, including the observation model, dynamic model and stochastic model, are provided first. Then an adaptive robust Kalman filter is developed for PPP. Compared with the conventional robust estimator, only the observation with the largest standardized residual is operated on by the IGG III function in each iteration, to avoid reducing the contribution of the normal observations or even filter divergence. Finally, tests carried out in both static and kinematic modes have confirmed that the adaptive robust Kalman filter outperforms the classic Kalman filter by tuning either the equivalent variance matrix or the adaptive factor or both of them. This becomes evident when analyzing the positioning errors at the turns in flight tests, due to the target maneuvering and unknown process/measurement noises.
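The equivalent-variance idea can be sketched in one dimension: when a measurement's standardized residual exceeds a threshold, its noise variance is inflated before the update. This is a simplified, IGG-style illustration, not the paper's full PPP formulation; all parameter values are invented:

```python
import numpy as np

def robust_kalman_1d(zs, q=0.01, r=1.0, k0=1.5):
    # 1D constant-state Kalman filter with an equivalent variance:
    # measurements with a large standardized residual get their noise
    # variance inflated before the update, damping outliers.
    x, p = zs[0], 1.0
    out = []
    for z in zs[1:]:
        p = p + q                      # predict
        s = np.sqrt(p + r)             # innovation standard deviation
        v = (z - x) / s                # standardized residual
        r_eq = r if abs(v) <= k0 else r * (abs(v) / k0) ** 2  # inflate outliers
        k = p / (p + r_eq)             # gain with equivalent variance
        x = x + k * (z - x)
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(3)
zs = 10.0 + rng.normal(0.0, 1.0, 200)  # noisy measurements of a constant
zs[50] += 50.0                         # one gross outlier
est = robust_kalman_1d(zs)
print(round(est[-1], 2))
```

A classic (non-robust) filter would jump visibly at the outlier; here the inflated variance keeps the gain near zero for that epoch.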
SURFACE FITTING FILTERING OF LIDAR POINT CLOUD WITH WAVEFORM INFORMATION
Directory of Open Access Journals (Sweden)
S. Xing
2017-09-01
Full-waveform LiDAR is an active technology in photogrammetry and remote sensing. It provides more detailed information about objects along the path of a laser pulse than discrete-return topographic LiDAR. High-quality point cloud and waveform information can be obtained by waveform decomposition, which can contribute to accurate filtering. A surface fitting filtering method using waveform information is proposed to exploit this advantage. Firstly, the discrete point cloud and waveform parameters are resolved by globally convergent Levenberg-Marquardt decomposition. Secondly, ground seed points are selected, and abnormal ones among them are detected by waveform parameters and robust estimation. Thirdly, the terrain surface is fitted and the height difference threshold is determined in consideration of window size and mean square error. Finally, the points are classified gradually as the window size increases; the filtering process finishes when the window size exceeds a threshold. Waveform data in urban, farmland and mountain areas from "WATER (Watershed Allied Telemetry Experimental Research)" are selected for the experiments. Results show that, compared with the traditional method, the accuracy of point cloud filtering is further improved, and the proposed method has high practical value.
Stochastic processes and filtering theory
Jazwinski, Andrew H
1970-01-01
This unified treatment of linear and nonlinear filtering theory presents material previously available only in journals, in terms accessible to engineering students. Its sole prerequisites are advanced calculus, the theory of ordinary differential equations, and matrix analysis. Although theory is emphasized, the text discusses numerous practical applications as well. Taking the state-space approach to filtering, the text models dynamical systems by finite-dimensional Markov processes, outputs of stochastic difference and differential equations, starting with background material on probability theory.
Process for washing electromagnetic filters
International Nuclear Information System (INIS)
Guittet, Maurice; Treille, Pierre.
1980-01-01
This process concerns the washing of an electromagnetic filter used, inter alia, for filtering the drain-off water of nuclear power station steam generators, using washing water recirculated in a closed circuit and cleared, after each cleaning, of the solids it holds in suspension by settlement of those solids. This invention enables the volume of water to be evaporated to be divided by 50, thereby providing better safety as well as a very significant saving [fr]
Risk Sensitive Filtering with Poisson Process Observations
International Nuclear Information System (INIS)
Malcolm, W. P.; James, M. R.; Elliott, R. J.
2000-01-01
In this paper we consider risk sensitive filtering for Poisson process observations. Risk sensitive filtering is a type of robust filtering which offers performance benefits in the presence of uncertainties. We derive a risk sensitive filter for a stochastic system where the signal variable has dynamics described by a diffusion equation and determines the rate function for an observation process. The filtering equations are stochastic integral equations. Computer simulations are presented to demonstrate the performance gain for the risk sensitive filter compared with the risk neutral filter
An approach of point cloud denoising based on improved bilateral filtering
Zheng, Zeling; Jia, Songmin; Zhang, Guoliang; Li, Xiuzhi; Zhang, Xiangyin
2018-04-01
An omnidirectional mobile platform is designed for building point clouds based on an improved filtering algorithm that processes the depth image. First, the mobile platform can move flexibly and its control interface is convenient. Then, because the traditional bilateral filtering algorithm is time-consuming and inefficient, a novel method called local bilateral filtering (LBF) is proposed. LBF is applied to process depth images obtained by the Kinect sensor, and the results show improved noise removal compared with standard bilateral filtering. Offline, the color images and the processed depth images are used to build point clouds. Finally, experimental results demonstrate that our method reduces the processing time of the depth image and improves the quality of the resulting point cloud.
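For reference, a plain bilateral filter on a depth image looks as follows. This is the standard formulation that LBF accelerates, not the paper's variant, and the parameter values are illustrative:

```python
import numpy as np

def bilateral_filter(depth, radius=2, sigma_s=1.5, sigma_r=0.1):
    # Each output pixel is a weighted mean of its neighbours, with a
    # spatial Gaussian weight (distance in the image) and a range
    # Gaussian weight (difference in depth), so edges are preserved.
    h, w = depth.shape
    out = np.empty((h, w), dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    padded = np.pad(depth.astype(float), radius, mode='edge')
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            range_w = np.exp(-((window - depth[i, j]) ** 2) / (2 * sigma_r ** 2))
            wgt = spatial * range_w
            out[i, j] = (wgt * window).sum() / wgt.sum()
    return out

# A depth step edge with additive noise: the filter should smooth the
# flat regions while keeping the discontinuity sharp.
rng = np.random.default_rng(4)
depth = np.where(np.arange(20)[None, :] < 10, 1.0, 2.0) * np.ones((20, 20))
noisy = depth + rng.normal(0.0, 0.02, depth.shape)
smooth = bilateral_filter(noisy)
print(float(np.abs(smooth - depth).mean()))
```

The nested-loop form above is the "time-consuming" baseline the abstract refers to; LBF-style methods restrict or approximate this neighbourhood computation.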
Motion estimation using point cluster method and Kalman filter.
Senesh, M; Wolf, A
2009-05-01
The most frequently used method in three-dimensional human gait analysis involves placing markers on the skin of the analyzed segment. This introduces a significant artifact, which strongly influences estimates of bone position and orientation and of joint kinematics. In this study, we tested and evaluated the effect of adding a Kalman filter procedure to the previously reported point cluster technique (PCT) for the estimation of rigid body motion. We demonstrated the procedures by motion analysis of a compound planar pendulum from indirect opto-electronic measurements of markers attached to an elastic appendage that is restrained to slide along the rigid body's long axis. The elastic frequency is close to the pendulum frequency, as in the biomechanical problem, where the soft tissue frequency content is similar to the actual movement of the bones. Comparison of the real pendulum angle to that obtained by several estimation procedures (PCT, Kalman filter followed by PCT, and low-pass filter followed by PCT) enables evaluation of the accuracy of the procedures. When comparing the maximal amplitude, no effect was noted from adding the Kalman filter; however, a closer look at the signal revealed that the estimated angle based only on the PCT method was very noisy with fluctuations, while the estimated angle based on the Kalman filter followed by the PCT was a smooth signal. It was also noted that the instantaneous frequencies obtained from the estimated angle based on the PCT method are more dispersed than those obtained from the estimated angle based on the Kalman filter followed by the PCT method. Adding a Kalman filter to the PCT method in the estimation procedure of rigid body motion results in a smoother signal that better represents the real motion, with less signal distortion than when using a digital low-pass filter. Furthermore, it can be concluded that adding a Kalman filter to the PCT procedure substantially reduces the dispersion of the maximal and minimal
Discrete stochastic processes and optimal filtering
Bertein, Jean-Claude
2012-01-01
Optimal filtering applied to stationary and non-stationary signals provides the most efficient means of dealing with problems arising from the extraction of signals from noise. Moreover, it is a fundamental feature in a range of applications, such as navigation in aerospace and aeronautics and filter processing in the telecommunications industry. This book provides a comprehensive overview of the area, discussing random and Gaussian vectors, outlining the results necessary for the creation of Wiener and adaptive filters used for stationary signals, and examining Kalman filters, which are used for non-stationary signals.
Nonlinear filtering for LIDAR signal processing
Directory of Open Access Journals (Sweden)
D. G. Lainiotis
1996-01-01
LIDAR (Laser Integrated Radar) is an engineering problem of great practical importance in the environmental monitoring sciences. Signal processing for LIDAR applications involves highly nonlinear models and consequently nonlinear filtering. Optimal nonlinear filters, however, are practically unrealizable. In this paper, Lainiotis's multi-model partitioning methodology and the related approximate but effective nonlinear filtering algorithms are reviewed and applied to LIDAR signal processing. Extensive simulation and performance evaluation of the multi-model partitioning approach and its application to LIDAR signal processing show that the nonlinear partitioning methods are very effective and significantly superior to the nonlinear extended Kalman filter (EKF), which has been the standard nonlinear filter in past engineering applications.
Thinning spatial point processes into Poisson processes
DEFF Research Database (Denmark)
Møller, Jesper; Schoenberg, Frederic Paik
2010-01-01
In this paper we describe methods for randomly thinning certain classes of spatial point processes. In the case of a Markov point process, the proposed method involves a dependent thinning of a spatial birth-and-death process, where clans of ancestors associated with the original points are identified, and where we simulate backwards and forwards in order to obtain the thinned process. In the case of a Cox process, a simple independent thinning technique is proposed. In both cases, the thinning results in a Poisson process if and only if the true Papangelou conditional intensity is used, and, thus, can be used as a graphical exploratory tool for inspecting the goodness-of-fit of a spatial point process model. Several examples, including clustered and inhibitive point processes, are considered.
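The Cox-case idea — independent thinning with retention probability inversely proportional to the intensity — can be sketched in one dimension. The intensity function and constants below are illustrative, and the candidate process is itself generated by classic Lewis-Shedler thinning:

```python
import numpy as np

rng = np.random.default_rng(5)

def thin_to_poisson(points, intensity, c):
    # Independently retain each point x with probability c / intensity(x).
    # If `intensity` is the true (conditional) intensity and c is at most
    # its minimum, the retained points form a homogeneous Poisson process
    # of rate c.
    keep = rng.uniform(size=len(points)) < c / intensity(points)
    return points[keep]

# Inhomogeneous Poisson process on [0, 10] with rate 2 + sin(x)^2,
# generated by Lewis-Shedler thinning of a dominating homogeneous process.
lam = lambda x: 2.0 + np.sin(x) ** 2
lam_max, T = 3.0, 10.0
cand = np.sort(rng.uniform(0.0, T, rng.poisson(lam_max * T)))
pts = cand[rng.uniform(size=len(cand)) < lam(cand) / lam_max]

# Thinning with c = 2.0 (the minimum of lam) should yield a rate-2
# homogeneous Poisson process, with about 20 points expected on [0, 10].
thinned = thin_to_poisson(pts, lam, c=2.0)
print(len(pts), len(thinned))
```

Using a wrong intensity in `thin_to_poisson` would leave residual structure in the thinned pattern, which is exactly what the paper exploits as a goodness-of-fit diagnostic.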
Detecting determinism from point processes.
Andrzejak, Ralph G; Mormann, Florian; Kreuz, Thomas
2014-12-01
The detection of a nonrandom structure from experimental data can be crucial for the classification, understanding, and interpretation of the generating process. We here introduce a rank-based nonlinear predictability score to detect determinism from point process data. Thanks to its modular nature, this approach can be adapted to whatever signature in the data one considers indicative of deterministic structure. After validating our approach using point process signals from deterministic and stochastic model dynamics, we show an application to neuronal spike trains recorded in the brain of an epilepsy patient. While we illustrate our approach in the context of temporal point processes, it can be readily applied to spatial point processes as well.
Cubature/ Unscented/ Sigma Point Kalman Filtering with Angular Measurement Models
2015-07-06
similarly transformed to work with the Laplace distribution. Cubature formulae for w(x) = 1 over regions of various shapes could be used for evaluating... measurement and process nonlinearities, such as the cubature Kalman filter, can perform extremely poorly in many applications involving angular... (in the form of the "unscented transform") consider just converting such measurements into Cartesian coordinates and feeding the converted measurements
International Nuclear Information System (INIS)
Chen, Lin; Fan, Xiangtao; Du, Xiaoping
2014-01-01
Point cloud filtering is a basic and key step in LiDAR data processing. The Adaptive Triangulated Irregular Network Modelling (ATINM) algorithm and the Threshold Segmentation on Elevation Statistics (TSES) algorithm are among the mature algorithms. However, few studies concentrate on the parameter selection of ATINM and the iteration condition of TSES, which can greatly affect the filtering results. The paper first presents these two key problems under two different terrain environments. For a flat area, small height and angle parameters perform well; for areas with complex feature changes, large height and angle parameters perform well. One-time segmentation is enough for flat areas, while repeated segmentation is essential for complex areas. The paper then compares and analyses the results of the two methods. ATINM has a larger Type I error in both data sets, as it sometimes removes excessive points. TSES has a larger Type II error in both data sets, as it ignores topological relations between points. ATINM performs well even in a large region with dramatic relief, while TSES is more suitable for small regions with flat relief. Different parameters and iterations can cause relatively large filtering differences.
Comparison of Sigma-Point and Extended Kalman Filters on a Realistic Orbit Determination Scenario
Gaebler, John; Hur-Diaz, Sun; Carpenter, Russell
2010-01-01
Sigma-point filters have received a lot of attention in recent years as a better alternative to extended Kalman filters for highly nonlinear problems. In this paper, we compare the performance of the additive divided difference sigma-point filter to the extended Kalman filter when applied to orbit determination of a realistic operational scenario based on the Interstellar Boundary Explorer mission. For the scenario studied, both filters provided equivalent results. The performance of each is discussed in detail.
ECG fiducial point extraction using switching Kalman filter.
Akhbari, Mahsa; Ghahjaverestan, Nasim Montazeri; Shamsollahi, Mohammad B; Jutten, Christian
2018-04-01
In this paper, we propose a novel method for extracting fiducial points (FPs) of the beats in electrocardiogram (ECG) signals using a switching Kalman filter (SKF). In this method, according to McSharry's model, ECG waveforms (P-wave, QRS complex and T-wave) are modeled with Gaussian functions and ECG baselines are modeled with first-order autoregressive models. In the proposed method, a discrete state variable called the "switch" is considered, which affects only the observation equations. We refer to each specific observation equation as a mode; the switch changes among 7 modes corresponding to different segments of an ECG beat. At each time instant, the probability of each mode is calculated and compared between two consecutive modes, and a path is estimated which relates each part of the ECG signal to the mode with the maximum probability. ECG FPs are found from the estimated path. For performance evaluation, the Physionet QT database is used and the proposed method is compared with methods based on the wavelet transform, the partially collapsed Gibbs sampler (PCGS) and the extended Kalman filter. For our proposed method, the mean error and the root mean square error across all FPs are 2 ms (i.e. less than one sample) and 14 ms, respectively. These errors are significantly smaller than those obtained using the other methods. The proposed method achieves a lower RMSE and smaller variability than the others. Copyright © 2018 Elsevier B.V. All rights reserved.
Fixed-point signal processing
Padgett, Wayne T
2009-01-01
This book is intended to fill the gap between the "ideal precision" digital signal processing (DSP) that is widely taught and the limited-precision implementation skills that are commonly required in fixed-point processors and field-programmable gate arrays (FPGAs). These skills are often neglected at the university level, particularly for undergraduates. We have attempted to create a resource both for a DSP elective course and for the practicing engineer with a need to understand fixed-point implementation. Although we assume a background in DSP, Chapter 2 contains a review of basic theory.
Reducing and filtering point clouds with enhanced vector quantization.
Ferrari, Stefano; Ferrigno, Giancarlo; Piuri, Vincenzo; Borghese, N Alberto
2007-01-01
Modern scanners are able to deliver huge quantities of three-dimensional (3-D) data points sampled on an object's surface, in a short time. These data have to be filtered and their cardinality reduced to come up with a mesh manageable at interactive rates. We introduce here a novel procedure to accomplish these two tasks, which is based on an optimized version of soft vector quantization (VQ). The resulting technique has been termed enhanced vector quantization (EVQ) since it introduces several improvements with respect to the classical soft VQ approaches. These are based on computationally expensive iterative optimization; local computation is introduced here, by means of an adequate partitioning of the data space called hyperbox (HB), to reduce the computational time so as to be linear in the number of data points N, saving more than 80% of time in real applications. Moreover, the algorithm can be fully parallelized, thus leading to an implementation that is sublinear in N. The voxel side and the other parameters are automatically determined from data distribution on the basis of the Zador's criterion. This makes the algorithm completely automatic. Because the only parameter to be specified is the compression rate, the procedure is suitable even for nontrained users. Results obtained in reconstructing faces of both humans and puppets as well as artifacts from point clouds publicly available on the web are reported and discussed, in comparison with other methods available in the literature. EVQ has been conceived as a general procedure, suited for VQ applications with large data sets whose data space has relatively low dimensionality.
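As a baseline for the soft/enhanced VQ the paper optimizes, classical hard VQ (Lloyd's k-means) already performs the reduce-and-filter role: N surface samples are summarized by k codevectors. A minimal sketch on a synthetic cloud (this is the computationally expensive iterative scheme EVQ improves on, not EVQ itself):

```python
import numpy as np

def vq_reduce(points, k, iters=20, seed=0):
    # Plain Lloyd k-means: alternate nearest-codevector assignment and
    # centroid update. The k codevectors are the reduced point cloud.
    rng = np.random.default_rng(seed)
    codebook = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        d = ((points[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for c in range(k):
            members = points[labels == c]
            if len(members):
                codebook[c] = members.mean(0)
    return codebook

rng = np.random.default_rng(6)
cloud = rng.normal(size=(2000, 3))     # stand-in for scanner samples
reduced = vq_reduce(cloud, k=64)       # 2000 points -> 64 codevectors
print(reduced.shape)
```

The soft/partitioned variants in the paper replace the hard `argmin` assignment and restrict distance computations to local hyperboxes, which is what brings the cost down to linear in N.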
Processing Terrain Point Cloud Data
DeVore, Ronald
2013-01-10
Terrain point cloud data are typically acquired through some form of Light Detection And Ranging sensing. They form a rich resource that is important in a variety of applications, including navigation, line of sight, and terrain visualization. Processing terrain data has not received the same attention as other forms of surface reconstruction or image processing. The goal of terrain data processing is to convert the point cloud into a succinct representation system that is amenable to the various application demands. The present paper presents a platform for terrain processing built on the following principles: (i) measuring distortion in the Hausdorff metric, which we argue is a good match for the application demands; (ii) a multiscale representation based on tree approximation using local polynomial fitting. The basic elements held in the nodes of the tree can be efficiently encoded, transmitted, visualized, and utilized for the various target applications. Several challenges emerge because of the variable resolution of the data, missing data, occlusions, and noise. Techniques for identifying and handling these challenges are developed. © 2013 Society for Industrial and Applied Mathematics.
Integration of GPS precise point positioning and MEMS-based INS using unscented particle filter.
Abd Rabbou, Mahmoud; El-Rabbany, Ahmed
2015-03-25
Integration of the Global Positioning System (GPS) with an Inertial Navigation System (INS) involves nonlinear motion-state and measurement models. However, the extended Kalman filter (EKF) is commonly used as the estimation filter, which might lead to solution divergence. This is usually encountered during GPS outages when low-cost micro-electro-mechanical (MEMS) inertial sensors are used. To enhance the navigation system performance, alternatives to the standard EKF should be considered. Particle filtering (PF) is commonly considered as a nonlinear estimation technique that accommodates severe MEMS inertial sensor biases and noise behavior. However, the computational burden of PF limits its use. In this study, an improved version of PF, the unscented particle filter (UPF), which combines the unscented Kalman filter (UKF) and PF, is utilized for the integration of GPS precise point positioning and MEMS-based inertial systems. The proposed filter is examined and compared with traditional estimation filters, namely the EKF, UKF and PF. A tightly coupled mechanization is adopted, developed in the raw GPS and INS measurement domain. Undifferenced ionosphere-free linear combinations of pseudorange and carrier-phase measurements are used for PPP. The performance of the UPF is analyzed using a real test scenario in downtown Kingston, Ontario. It is shown that the use of the UPF reduces the number of samples needed to produce an accurate solution, in comparison with the traditional PF, which in turn reduces the processing time. In addition, the UPF enhances the positioning accuracy by up to 15% during GPS outages, in comparison with the EKF. However, all filters produce comparable results when GPS measurement updates are available.
Two-stage nonlinear filter for processing of scintigrams
International Nuclear Information System (INIS)
Pistor, P.; Hoener, J.; Walch, G.
1973-01-01
Linear filters which have been successfully used to process scintigrams can be modified in a meaningful manner by a preceding non-linear point operator, the Anscombe transform. The advantages are: the scintigraphic noise becomes quasi-stationary and thus independent of the image, so the noise can readily be allowed for in the design of the convolutional operators. Transformed images with a stationary signal-to-noise ratio and a non-constant background correspond to untransformed images with a signal-to-noise ratio that varies within certain limits; the filter chain automatically adapts to these changes. Our filter has the advantage over the majority of space-varying filters of being realizable by Fast Fourier Transform techniques. These advantages have to be paid for by reduced signal-amplitude-to-background ratios. If the background is known, this shortcoming can easily be by-passed by processing trend-free scintigrams. If not, the filter chain should be completed by a third operator which reverses the Anscombe transform. The Anscombe transform influences the signal-to-noise ratio of cold spots and of hot spots in different ways. It remains an open question whether this fact can be utilized to directly influence the detectability of the different kinds of spots.
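The point operator in question is a one-line formula. A minimal sketch of the transform and its plain algebraic inverse (the third operator of the chain):

```python
import math

def anscombe(count):
    """Anscombe transform: maps a Poisson count to a value whose noise is
    approximately Gaussian with unit variance, i.e. quasi-stationary."""
    return 2.0 * math.sqrt(count + 3.0 / 8.0)

def inverse_anscombe(y):
    """Plain algebraic inverse; refined unbiased inverses exist but are
    omitted in this sketch."""
    return (y / 2.0) ** 2 - 3.0 / 8.0
```

After the transform, the convolutional filter can be designed once for image-independent, unit-variance noise, which is exactly the advantage the abstract describes.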
APPLICABILITY ANALYSIS OF CLOTH SIMULATION FILTERING ALGORITHM FOR MOBILE LIDAR POINT CLOUD
Directory of Open Access Journals (Sweden)
S. Cai
2018-04-01
Full Text Available Classifying the original point clouds into ground and non-ground points is a key step in LiDAR (light detection and ranging) data post-processing. The cloth simulation filtering (CSF) algorithm, which is based on a physical process, has been validated to be an accurate, automatic and easy-to-use algorithm for airborne LiDAR point clouds. As a new technique of three-dimensional data collection, mobile laser scanning (MLS) has gradually been applied in various fields, such as reconstruction of digital terrain models (DTM), 3D building modeling, and forest inventory and management. Compared with airborne LiDAR point clouds, mobile LiDAR point clouds have some different features (such as point density, distribution and complexity). Some filtering algorithms for airborne LiDAR data have been used directly on mobile LiDAR point clouds, but they did not give satisfactory results. In this paper, we explore the ability of the CSF algorithm for mobile LiDAR point clouds. Three samples with different terrain shapes are selected to test the performance of this algorithm, which respectively yield total errors of 0.44 %, 0.77 % and 1.20 %. Additionally, a large-area dataset is also tested to further validate the effectiveness of this algorithm, and results show that it can quickly and accurately separate point clouds into ground and non-ground points. In summary, this algorithm is efficient and reliable for mobile LiDAR point clouds.
Particulate removal processes and hydraulics of porous gravel media filters
Minto, J. M.; Phoenix, V. R.; Dorea, C. C.; Haynes, H.; Sloan, W. T.
2013-12-01
clogging processes of gravel filters and are a considerable improvement on the inflow/outflow data most often available to monitor removal efficiency and clogging. Sub-section of the MRI derived geometry showing gravel (grey), pore space (blue), deposited particles (red) for 1) prior to clogging and 2) after clogging. The pore network skeleton (green) provided a reference point for comparing pore diameter change with clogging.
Extended Kalman Filter Modifications Based on an Optimization View Point
Skoglund, Martin; Hendeby, Gustaf; Axehill, Daniel
2015-01-01
The extended Kalman filter (EKF) has been an important tool for state estimation of nonlinear systems since its introduction. However, the EKF does not possess the same optimality properties as the Kalman filter, and may perform poorly. By viewing the EKF as an optimization problem it is possible, in many cases, to improve its performance and robustness. The paper derives three variations of the EKF by applying different optimization algorithms to the EKF cost function and relates these to the it...
Filtering of a Markov Jump Process with Counting Observations
International Nuclear Information System (INIS)
Ceci, C.; Gerardi, A.
2000-01-01
This paper concerns the filtering of an R^d-valued Markov pure jump process when only the total number of jumps is observed. Strong and weak uniqueness for the solutions of the filtering equations are discussed.
Processing Terrain Point Cloud Data
DeVore, Ronald; Petrova, Guergana; Hielsberg, Matthew; Owens, Luke; Clack, Billy; Sood, Alok
2013-01-01
Terrain point cloud data are typically acquired through some form of Light Detection And Ranging sensing. They form a rich resource that is important in a variety of applications including navigation, line of sight, and terrain visualization
Inhomogeneous Markov point processes by transformation
DEFF Research Database (Denmark)
Jensen, Eva B. Vedel; Nielsen, Linda Stougaard
2000-01-01
We construct parametrized models for point processes, allowing for both inhomogeneity and interaction. The inhomogeneity is obtained by applying parametrized transformations to homogeneous Markov point processes. An interesting model class, which can be constructed by this transformation approach, is that of exponential inhomogeneous Markov point processes. Statistical inference for such processes is discussed in some detail...
Matched Filter Processing for Asteroid Detection
Gural, Peter S.; Larsen, Jeffrey A.; Gleason, Arianna E.
2005-10-01
Matched filter (MF) processing has been shown to provide significant performance gains when processing stellar imagery used for asteroid detection, recovery, and tracking. This includes extending detection ranges to fainter magnitudes at the noise limit of the imagery and operating in dense cluttered star fields as encountered at low Galactic latitudes. The MF software has been shown to detect 40% more asteroids in high-quality Spacewatch imagery relative to the currently implemented approaches, which are based on moving target indicator (MTI) algorithms. In addition, MF detections were made in dense star fields and in situations in which the asteroid was collocated with a star in an image frame, cases in which the MTI algorithms failed. Thus, using legacy sensors and optics, improved detection sensitivity is achievable by simply upgrading the image-processing stream. This in turn permits surveys of the near-Earth asteroid (NEA) population farther from opposition, for smaller sizes, and in directions previously inaccessible to current NEA search programs. A software package has been developed and made available on the NASA data services Web site that can be used for asteroid detection and recovery operations utilizing the enhanced performance capabilities of MF processing.
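The core of MF processing is sliding a template of the expected target signature across the data and thresholding the correlation score. A minimal 1-D sketch (the paper's software operates on 2-D stellar imagery; this toy version only illustrates the principle):

```python
def matched_filter(signal, template):
    """Return the correlation score of the template at each offset of the
    signal; the peak marks the most likely target location."""
    # Zero-mean the template so a flat background contributes no score.
    m = sum(template) / len(template)
    t = [v - m for v in template]
    scores = []
    for i in range(len(signal) - len(template) + 1):
        scores.append(sum(t[k] * signal[i + k] for k in range(len(t))))
    return scores
```

Because the score is accumulated over the whole template rather than requiring the target to be separable frame-to-frame, this kind of detector can still respond when the source is faint or collocated with a star, which is where MTI-style algorithms fail.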
Testing Local Independence between Two Point Processes
DEFF Research Database (Denmark)
Allard, Denis; Brix, Anders; Chadæuf, Joël
2001-01-01
Independence test, Inhomogeneous point processes, Local test, Monte Carlo, Nonstationary, Rotations, Spatial pattern, Tiger bush
The use of fast digital filters for the processing of scintigraphic pictures
International Nuclear Information System (INIS)
Grochulski, W.; Penczek, P.
1982-01-01
A brief review of typical methods applied in the development of digital filters for the processing of scintigraphic pictures is given. A simple parametrisation of such filters in the frequency domain is proposed and successfully applied in the case of mathematically simulated IAEA phantoms. The FFT algorithm is used. A possible application of the fast Walsh transform is pointed out. (author)
Lisano, Michael E.
2007-01-01
Recent literature in applied estimation theory reflects growing interest in the sigma-point (also called unscented) formulation for optimal sequential state estimation, often describing performance comparisons with extended Kalman filters as applied to specific dynamical problems [c.f. 1, 2, 3]. Favorable attributes of sigma-point filters are described as including a lower expected error for nonlinear, even non-differentiable, dynamical systems, and a straightforward formulation not requiring derivation or implementation of any partial-derivative Jacobian matrices. These attributes are particularly attractive, e.g. in terms of enabling simplified code architecture and streamlined testing, in the formulation of estimators for nonlinear spaceflight mechanics systems, such as filter software onboard deep-space robotic spacecraft. As presented in [4], the Sigma-Point Consider Filter (SPCF) algorithm extends the sigma-point filter algorithm to the problem of consider covariance analysis. Considering parameters in a dynamical system, while estimating its state, provides an upper bound on the estimated state covariance, which is viewed as a conservative approach to designing estimators for problems of general guidance, navigation and control. This is because, whether a parameter in the system model is observable or not, error in the knowledge of the value of a non-estimated parameter will increase the actual uncertainty of the estimated state of the system beyond the level formally indicated by the covariance of an estimator that neglects errors or uncertainty in that parameter. The equations for SPCF covariance evolution are obtained in a fashion similar to the derivation approach taken with standard (i.e. linearized or extended) consider-parameterized Kalman filters (c.f. [5]). While in [4] the SPCF and linear-theory consider filter (LTCF) were applied to an illustrative linear dynamics/linear measurement problem, the present work examines the SPCF as applied to...
Hydraulic modeling of clay ceramic water filters for point-of-use water treatment.
Schweitzer, Ryan W; Cunningham, Jeffrey A; Mihelcic, James R
2013-01-02
The acceptability of ceramic filters for point-of-use water treatment depends not only on the quality of the filtered water, but also on the quantity of water the filters can produce. This paper presents two mathematical models for the hydraulic performance of ceramic water filters under typical usage. A model is developed for two common filter geometries: paraboloid- and frustum-shaped. Both models are calibrated and evaluated by comparison to experimental data. The hydraulic models are able to predict the following parameters as functions of time: water level in the filter (h), instantaneous volumetric flow rate of filtrate (Q), and cumulative volume of water produced (V). The models' utility is demonstrated by applying them to estimate how the volume of water produced depends on factors such as the filter shape and the frequency of filling. Both models predict that the volume of water produced can be increased by about 45% if users refill the filter three times per day versus only once per day. Also, the models predict that filter geometry affects the volume of water produced: for two filters with equal volume, equal wall thickness, and equal hydraulic conductivity, a filter that is tall and thin will produce as much as 25% more water than one which is shallow and wide. We suggest that the models can be used as tools to help optimize filter performance.
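The time-varying behaviour the models predict follows from Darcy's law applied to a falling head of water. A minimal sketch for an idealized straight-walled (cylindrical) filter, not the paraboloid/frustum geometries actually modelled in the paper; all parameter names and values are illustrative assumptions:

```python
import math

def falling_head(h0, hyd_conductivity, wall_thickness, t_end, dt=0.001):
    """Falling-head sketch: Darcy flux q = K*h/L through the saturated
    wall drains a unit-area column, so dh/dt = -K*h/L.
    Integrated with forward Euler; h0 is the initial water level."""
    h, t = h0, 0.0
    while t < t_end:
        h += -hyd_conductivity * h / wall_thickness * dt
        t += dt
    return h
```

The resulting exponential decay, h(t) = h0*exp(-K*t/L), shows why refilling more often keeps the driving head (and hence the flow rate) high, consistent with the paper's finding that refilling three times per day yields substantially more water than refilling once.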
An Application of Filtered Renewal Processes in Hydrology
Directory of Open Access Journals (Sweden)
Mario Lefebvre
2014-01-01
Full Text Available Filtered renewal processes are used to forecast daily river flows. For these processes, contrary to filtered Poisson processes, the time between consecutive events is not necessarily exponentially distributed, which is more realistic. The model is applied to obtain one- and two-day-ahead forecasts of the flows of the Delaware and Hudson Rivers, both located in the United States. Better results are obtained than with filtered Poisson processes, which are often used to model river flows.
Mathematic filters and digital processing in nuclear medicine
International Nuclear Information System (INIS)
Dimentein, R.
1992-01-01
The mathematical filters used in nuclear medicine are evaluated. Tomographic processing of a Jaszczak phantom using the Hanning, Butterworth and Wiener filters separately is presented. For each type of filter, simulations were performed in which the cut-off frequency and attenuation grade values were varied. (C.G.C.)
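Two of the windows compared above have simple closed-form gains; a sketch of their frequency-domain magnitude responses (frequencies here are in cycles/pixel and the parameter values are illustrative):

```python
import math

def butterworth_gain(f, f_cutoff, order):
    """Magnitude response of the low-pass Butterworth window commonly
    applied in the frequency domain during tomographic reconstruction."""
    return 1.0 / math.sqrt(1.0 + (f / f_cutoff) ** (2 * order))

def hanning_gain(f, f_cutoff):
    """Hanning window: smooth cosine roll-off reaching zero at cut-off."""
    return 0.5 * (1.0 + math.cos(math.pi * f / f_cutoff)) if f < f_cutoff else 0.0
```

Raising the cut-off frequency passes more high-frequency detail (and noise); raising the Butterworth order sharpens the transition between passband and stopband, which is what the simulations above vary.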
Hu, Han; Ding, Yulin; Zhu, Qing; Wu, Bo; Lin, Hui; Du, Zhiqiang; Zhang, Yeting; Zhang, Yunsheng
2014-06-01
The filtering of point clouds is a ubiquitous task in the processing of airborne laser scanning (ALS) data; however, such filtering processes are difficult because of the complex configuration of the terrain features. The classical filtering algorithms rely on the cautious tuning of parameters to handle various landforms. To address the challenge posed by the bundling of different terrain features into a single dataset and to surmount the sensitivity of the parameters, in this study, we propose an adaptive surface filter (ASF) for the classification of ALS point clouds. Based on the principle that the threshold should vary in accordance with the terrain smoothness, the ASF embeds bending energy, which quantitatively depicts the local terrain structure, to self-adapt the filter threshold automatically. The ASF employs a step factor to control the data pyramid scheme in which the processing window sizes are reduced progressively, and the ASF gradually interpolates thin plate spline surfaces toward the ground with regularization to handle noise. Using the progressive densification strategy, regularization and self-adaption, both performance improvement and resilience to parameter tuning are achieved. When tested against the benchmark datasets provided by the ISPRS, the ASF performs best among all the filtering methods compared, yielding an average total error of 2.85% when optimized and 3.67% when using the same parameter set.
Sustainable colloidal-silver-impregnated ceramic filter for point-of-use water treatment.
Oyanedel-Craver, Vinka A; Smith, James A
2008-02-01
Cylindrical colloidal-silver-impregnated ceramic filters for household (point-of-use) water treatment were manufactured and tested for performance in the laboratory with respect to flow rate and bacteria transport. Filters were manufactured by combining clay-rich soil with water, grog (previously fired clay), and flour, pressing them into cylinders, and firing them at 900 degrees C for 8 h. The pore-size distribution of the resulting ceramic filters was quantified by mercury porosimetry. Colloidal silver was applied to filters in different quantities and ways (dipping and painting). Filters were also tested without any colloidal-silver application. Hydraulic conductivity of the filters was quantified using changing-head permeability tests. [3H]H2O was used as a conservative tracer to quantify advection velocities and the coefficient of hydrodynamic dispersion. Escherichia coli (E. coli) was used to quantify bacterial transport through the filters. Hydraulic conductivity and pore-size distribution varied with filter composition; hydraulic conductivities were on the order of 10^-5 cm/s and more than 50% of the pores for each filter had diameters ranging from 0.02 to 15 μm. The filters removed between 97.8% and 100% of the applied bacteria; colloidal-silver treatments improved filter performance, presumably by deactivation of bacteria. The quantity of colloidal silver applied per filter was more important to bacteria removal than the method of application. Silver concentrations in effluent filter water were initially greater than 0.1 mg/L, but dropped below this value after 200 min of continuous operation. These results indicate that colloidal-silver-impregnated ceramic filters, which can be made using primarily local materials and labor, show promise as an effective and sustainable point-of-use water treatment technology for the world's poorest communities.
Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.
2018-04-01
Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them suffer from parameter setting or threshold adjusting, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization (EM). The proposed algorithm is developed on the assumption that point clouds can be seen as a mixture of Gaussian models, so the separation of ground points and non-ground points can be recast as the separation of a mixed Gaussian model. EM is applied to realize the separation: it is used to calculate maximum-likelihood estimates of the mixture parameters, and using the estimated parameters, the likelihood of each point belonging to ground or object can be computed. After several iterations, each point can be labelled as the component with the larger likelihood. Furthermore, intensity information is also utilized to optimize the filtering results acquired using the EM method. The proposed algorithm was tested using two different datasets used in practice. Experimental results showed that the proposed method can filter non-ground points effectively. To quantitatively evaluate the proposed method, this paper adopted the dataset provided by the ISPRS for the test. The proposed algorithm obtains a 4.48 % total error, which is much lower than most of the eight classical filtering algorithms reported by the ISPRS.
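The E-step/M-step cycle described above can be sketched for the simplest case: a two-component 1-D Gaussian mixture over point elevations, where the low-mean component plays the role of "ground" and the high-mean one of "objects". This is an illustrative reduction, not the paper's full multivariate method:

```python
import math

def em_two_gaussians(z, iters=50):
    """EM for a two-component 1-D Gaussian mixture.
    Returns (weights, means, stds); initialisation is a crude split of
    the data range and all choices here are illustrative."""
    lo, hi = min(z), max(z)
    mu = [lo + 0.25 * (hi - lo), lo + 0.75 * (hi - lo)]
    sd = [(hi - lo) / 4.0 or 1.0] * 2
    w = [0.5, 0.5]

    def pdf(x, m, s):
        return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        r = []
        for x in z:
            p = [w[k] * pdf(x, mu[k], sd[k]) for k in range(2)]
            t = sum(p) or 1e-300
            r.append([pi / t for pi in p])
        # M-step: re-estimate weights, means and standard deviations.
        for k in range(2):
            nk = sum(ri[k] for ri in r) or 1e-300
            w[k] = nk / len(z)
            mu[k] = sum(ri[k] * x for ri, x in zip(r, z)) / nk
            sd[k] = math.sqrt(
                sum(ri[k] * (x - mu[k]) ** 2 for ri, x in zip(r, z)) / nk
            ) or 1e-6
    return w, mu, sd
```

After convergence, each point is labelled with the component of larger responsibility, mirroring the classification step in the abstract.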
LHCb Online event processing and filtering
Alessio, F.; Barandela, C.; Brarda, L.; Frank, M.; Franek, B.; Galli, D.; Gaspar, C.; Herwijnen, E. v.; Jacobsson, R.; Jost, B.; Köstner, S.; Moine, G.; Neufeld, N.; Somogyi, P.; Stoica, R.; Suman, S.
2008-07-01
The first level trigger of LHCb accepts one million events per second. After preprocessing in custom FPGA-based boards these events are distributed to a large farm of PC-servers using a high-speed Gigabit Ethernet network. Synchronisation and event management is achieved by the Timing and Trigger system of LHCb. Due to the complex nature of the selection of B-events, which are the main interest of LHCb, a full event-readout is required. Event processing on the servers is parallelised on an event basis. The reduction factor is typically 1/500. The remaining events are forwarded to a formatting layer, where the raw data files are formed and temporarily stored. A small part of the events is also forwarded to a dedicated farm for calibration and monitoring. The files are subsequently shipped to the CERN Tier0 facility for permanent storage and from there to the various Tier1 sites for reconstruction. In parallel files are used by various monitoring and calibration processes running within the LHCb Online system. The entire data-flow is controlled and configured by means of a SCADA system and several databases. After an overview of the LHCb data acquisition and its design principles this paper will emphasize the LHCb event filter system, which is now implemented using the final hardware and will be ready for data-taking for the LHC startup. Control, configuration and security aspects will also be discussed.
LHCb Online event processing and filtering
International Nuclear Information System (INIS)
Alessio, F; Barandela, C; Brarda, L; Frank, M; Gaspar, C; Herwijnen, E v; Jacobsson, R; Jost, B; Koestner, S; Moine, G; Neufeld, N; Somogyi, P; Stoica, R; Suman, S; Franek, B; Galli, D
2008-01-01
The first level trigger of LHCb accepts one million events per second. After preprocessing in custom FPGA-based boards these events are distributed to a large farm of PC-servers using a high-speed Gigabit Ethernet network. Synchronisation and event management is achieved by the Timing and Trigger system of LHCb. Due to the complex nature of the selection of B-events, which are the main interest of LHCb, a full event-readout is required. Event processing on the servers is parallelised on an event basis. The reduction factor is typically 1/500. The remaining events are forwarded to a formatting layer, where the raw data files are formed and temporarily stored. A small part of the events is also forwarded to a dedicated farm for calibration and monitoring. The files are subsequently shipped to the CERN Tier0 facility for permanent storage and from there to the various Tier1 sites for reconstruction. In parallel files are used by various monitoring and calibration processes running within the LHCb Online system. The entire data-flow is controlled and configured by means of a SCADA system and several databases. After an overview of the LHCb data acquisition and its design principles this paper will emphasize the LHCb event filter system, which is now implemented using the final hardware and will be ready for data-taking for the LHC startup. Control, configuration and security aspects will also be discussed
International Nuclear Information System (INIS)
Ermolaev, P; Volynsky, M
2014-01-01
Recurrent stochastic data-processing algorithms that represent an interferometric signal as the output of a dynamic system, whose state is described by a vector of parameters, are in some cases more effective than conventional algorithms. Interferometric signals depend nonlinearly on phase; consequently, it is expedient to apply nonlinear stochastic filtering algorithms, such as Kalman-type filters. An application of the second-order extended Kalman filter and of a Markov nonlinear filter that minimizes the estimation error is described. Experimental results of signal processing are illustrated, and a comparison of the algorithms is presented and discussed.
Residual analysis for spatial point processes
DEFF Research Database (Denmark)
Baddeley, A.; Turner, R.; Møller, Jesper
We define residuals for point process models fitted to spatial point pattern data, and propose diagnostic plots based on these residuals. The techniques apply to any Gibbs point process model, which may exhibit spatial heterogeneity, interpoint interaction and dependence on spatial covariates. Ou...... or covariate effects. Q-Q plots of the residuals are effective in diagnosing interpoint interaction. Some existing ad hoc statistics of point patterns (quadrat counts, scan statistic, kernel smoothed intensity, Berman's diagnostic) are recovered as special cases....
Lévy based Cox point processes
DEFF Research Database (Denmark)
Hellmund, Gunnar; Prokesová, Michaela; Jensen, Eva Bjørn Vedel
2008-01-01
In this paper we introduce Lévy-driven Cox point processes (LCPs) as Cox point processes with driving intensity function Λ defined by a kernel smoothing of a Lévy basis (an independently scattered, infinitely divisible random measure). We also consider log Lévy-driven Cox point processes (LLCPs) with Λ equal to the exponential of such a kernel smoothing. Special cases are shot noise Cox processes, log Gaussian Cox processes, and log shot noise Cox processes. We study the theoretical properties of Lévy-based Cox processes, including moment properties described by nth-order product densities...
Hierarchical Threshold Adaptive for Point Cloud Filter Algorithm of Moving Surface Fitting
Directory of Open Access Journals (Sweden)
ZHU Xiaoxiao
2018-02-01
Full Text Available In order to improve the accuracy, efficiency and adaptability of point cloud filtering algorithms, a hierarchical threshold-adaptive point cloud filter algorithm based on moving surface fitting is proposed. Firstly, noisy points are removed using a statistical histogram method. Secondly, a grid index is established by grid segmentation, and a surface equation is set up through the lowest points among the neighborhood grids; the real height and the fitted height are calculated, and it is determined whether their difference exceeds the threshold. Finally, in order to improve the filtering accuracy, hierarchical filtering is used to change the grid size and automatically set the neighborhood size and threshold until the filtering result reaches the accuracy requirement. The test data provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) are used to verify the algorithm. The type I, type II and total errors are 7.33 %, 10.64 % and 6.34 % respectively. The algorithm is compared with the eight classical filtering algorithms published by the ISPRS. The experimental results show that the method is well adapted and gives highly accurate filtering results.
State estimation for temporal point processes
van Lieshout, Maria Nicolette Margaretha
2015-01-01
This paper is concerned with combined inference for point processes on the real line observed in a broken interval. For such processes, the classic history-based approach cannot be used. Instead, we adapt tools from sequential spatial point processes. For a range of models, the marginal and
Bayesian analysis of Markov point processes
DEFF Research Database (Denmark)
Berthelsen, Kasper Klitgaard; Møller, Jesper
2006-01-01
Recently Møller, Pettitt, Berthelsen and Reeves introduced a new MCMC methodology for drawing samples from a posterior distribution when the likelihood function is only specified up to a normalising constant. We illustrate the method in the setting of Bayesian inference for Markov point processes...... a partially ordered Markov point process as the auxiliary variable. As the method requires simulation from the "unknown" likelihood, perfect simulation algorithms for spatial point processes become useful....
International Nuclear Information System (INIS)
Shimazu, Y.; Rooijen, W.F.G. van
2014-01-01
Highlights:
• Estimation of the reactivity of a nuclear reactor based on neutron flux measurements.
• Comparison of the traditional method and the new approach based on Extended Kalman Filtering (EKF).
• Estimation accuracy depends on the filter parameters, the selection of which is described in this paper.
• The EKF algorithm is preferred if the signal-to-noise ratio is low (low-flux situation).
• The accuracy of the EKF depends on the ratio of the filter coefficients.
Abstract: The Extended Kalman Filtering (EKF) technique has been applied to the estimation of subcriticality with good noise filtering and accuracy. The Inverse Point Kinetic (IPK) method has also been widely used for reactivity estimation. The important parameters for the EKF estimation are the process noise covariance and the measurement noise covariance, but their optimal selection is quite difficult. On the other hand, there is only one parameter in the IPK method, namely the time constant of the first-order delay filter, whose selection is quite easy. Some guidance is therefore needed on which method should be selected and how to choose the required parameters. From this point of view, a qualitative performance comparison is carried out.
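The IPK method's single tuning parameter is the time constant of a first-order delay filter, which in discrete time is plain exponential smoothing. A minimal sketch (the discretisation choice and parameter values are illustrative, not taken from the paper):

```python
def first_order_delay(samples, tau, dt):
    """Discrete first-order delay filter with time constant tau applied
    to a sampled signal: a backward-Euler discretisation of 1/(tau*s + 1)."""
    alpha = dt / (tau + dt)  # smoothing weight; smaller tau -> faster response
    y = samples[0]
    out = []
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out
```

A larger tau gives stronger noise suppression but a slower response to real reactivity changes, which is the trade-off behind the single-parameter selection discussed above.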
Jia, Bin; Wang, Xiaodong
2013-12-17
The extended Kalman filter (EKF) has been applied to inferring gene regulatory networks. However, it is well known that the EKF becomes less accurate when the system exhibits high nonlinearity. In addition, certain prior information about the gene regulatory network exists in practice, and no systematic approach has been developed to incorporate such prior information into the Kalman-type filter for inferring the structure of the gene regulatory network. In this paper, an inference framework based on point-based Gaussian approximation filters that can exploit the prior information is developed to solve the gene regulatory network inference problem. Different point-based Gaussian approximation filters, including the unscented Kalman filter (UKF), the third-degree cubature Kalman filter (CKF3), and the fifth-degree cubature Kalman filter (CKF5) are employed. Several types of network prior information, including the existing network structure information, sparsity assumption, and the range constraint of parameters, are considered, and the corresponding filters incorporating the prior information are developed. Experiments on a synthetic network of eight genes and the yeast protein synthesis network of five genes are carried out to demonstrate the performance of the proposed framework. The results show that the proposed methods provide more accurate inference results than existing methods, such as the EKF and the traditional UKF.
H∞ filtering for stochastic systems driven by Poisson processes
Song, Bo; Wu, Zheng-Guang; Park, Ju H.; Shi, Guodong; Zhang, Ya
2015-01-01
This paper investigates the H∞ filtering problem for stochastic systems driven by Poisson processes. By utilising the martingale theory such as the predictable projection operator and the dual predictable projection operator, this paper transforms the expectation of stochastic integral with respect to the Poisson process into the expectation of Lebesgue integral. Then, based on this, this paper designs an H∞ filter such that the filtering error system is mean-square asymptotically stable and satisfies a prescribed H∞ performance level. Finally, a simulation example is given to illustrate the effectiveness of the proposed filtering scheme.
Process and device for regulating an electromagnetic filter
International Nuclear Information System (INIS)
Dolle, Lucien.
1980-01-01
Process for regulating the operation of an electromagnetic filter and, in particular, for keeping the efficiency of the filter at a sufficiently high level irrespective of the degree of filter clogging, fluid flow rate and temperature of the fluid. The filter includes an envelope containing a filling that can be magnetized by a coil, activated by a d.c. supply, arranged around the envelope. The regulating process includes the following stages: - activating the coil by a current of lower intensity than that of the saturation current of the filling, - determining the pressure drop of the filter, fluid flow rate and fluid temperature, - increasing the intensity of the current activating the coil when the efficiency of the filter corresponding to the measured values drops below a given level [fr]
FEATURES OF THE REGENERATION PROCESS OF THE FILTER
Directory of Open Access Journals (Sweden)
S. Yu. Panov
2015-01-01
Full Text Available The regeneration system exercises significant influence on the efficiency and reliability of filters. During operation, the hydraulic resistance of the filter continuously increases and the gas permeability of the filter material decreases as the captured disperse phase deposits on the filter element; to maintain the throughput of the filter within set limits, the filter element must be periodically changed or regenerated. Thus, regeneration is the process of removing part of the dust layer with the purpose of fully or partially restoring the initial properties of the filter partition. On the basis of theoretical synthesis, the physico-chemical effects of dust in layers, and an analysis of energy effects, methods for intensifying the regeneration process of particulate filters were developed. The pneumo-pulse regeneration of a bag filter has been investigated, and a regression equation for the regeneration efficiency has been derived from it. It has been shown that pulse pressure exerts the dominant influence on the regeneration efficiency. The model obtained was used for assessment and prediction of the efficiency of the pneumo-pulse regeneration system of bag filters at a number of enterprises producing structural materials in the Voronezh region.
Adaptive Filtering for Non-Gaussian Processes
DEFF Research Database (Denmark)
Kidmose, Preben
2000-01-01
A new stochastic gradient robust filtering method, based on a non-linear amplitude transformation, is proposed. The method requires no a priori knowledge of the characteristics of the input signals and it is insensitive to the signals distribution and to the stationarity of the signals. A simulat...
Condition Monitoring of a Process Filter Applying Wireless Vibration Analysis
Directory of Open Access Journals (Sweden)
Pekka KOSKELA
2011-05-01
Full Text Available This paper presents a novel wireless vibration-based method for monitoring the degree of feed filter clogging. In process industry, these filters are applied to prevent impurities entering the process. During operation, the filters gradually become clogged, decreasing the feed flow and, in the worst case, preventing it. The cleaning of the filter should therefore be carried out predictively in order to avoid equipment damage and unnecessary process downtime. The degree of clogging is estimated by first calculating the time domain indices from low frequency accelerometer samples and then taking the median of the processed values. Nine different statistical quantities are compared based on the estimation accuracy and criteria for operating in resource-constrained environments with particular focus on energy efficiency. The initial results show that the method is able to detect the degree of clogging, and the approach may be applicable to filter clogging monitoring.
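The indices-then-median estimator described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's code: the four indices shown (RMS, peak, crest factor, kurtosis) are common time-domain quantities, whereas the paper compares nine candidates.

```python
import math

def clogging_indices(samples):
    """Common time-domain vibration indices for one window of accelerometer samples."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    rms = math.sqrt(sum(x * x for x in samples) / n)
    peak = max(abs(x) for x in samples)
    kurt = (sum((x - mean) ** 4 for x in samples) / n) / var ** 2 if var else 0.0
    return {"rms": rms, "peak": peak, "crest": peak / rms, "kurtosis": kurt}

def median_index(windows, key):
    """Median of one index over several windows (the robustifying step)."""
    vals = sorted(clogging_indices(w)[key] for w in windows)
    m = len(vals) // 2
    return vals[m] if len(vals) % 2 else 0.5 * (vals[m - 1] + vals[m])
```

Taking the median rather than the mean keeps occasional disturbed windows from corrupting the clogging estimate, which matters on a resource-constrained wireless node.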
Bacterial treatment effectiveness of point-of-use ceramic water filters.
Bielefeldt, Angela R; Kowalski, Kate; Summers, R Scott
2009-08-01
Laboratory experiments were conducted on six point-of-use (POU) ceramic water filters manufactured in Nicaragua; two filters had been used by families for ca. 4 years and the other filters had seen limited prior use in our lab. Water spiked with ca. 10^6 CFU/mL of Escherichia coli was dosed to the filters. Initial disinfection efficiencies ranged from 3 to 4.5 log, but the treatment efficiency decreased with subsequent batches of spiked water. Silver concentrations in the effluent water ranged from 0.04 to 1.75 ppb. Subsequent experiments that utilized feed water without a bacterial spike yielded 10^3-10^5 CFU/mL bacteria in the effluent. Immediately after recoating four of the filters with a colloidal silver solution, the effluent silver concentrations increased to 36-45 ppb and bacterial disinfection efficiencies were 3.8-4.5 log. The treatment effectiveness decreased to 0.2-2.5 log after loading multiple batches of highly contaminated water. In subsequent loadings of clean water, fewer bacteria were released into the effluent, indicating that the silver had some benefit in reducing bacterial contamination by the filter. In general these POU filters were found to be effective, but they showed loss of effectiveness with time and released microbes into subsequent volumes of water passed through the system.
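The "log" disinfection efficiencies quoted above are log10 reduction values computed from influent and effluent counts; a one-line helper (illustrative, not from the paper) makes the convention explicit:

```python
import math

def log_removal(influent_cfu_per_ml, effluent_cfu_per_ml):
    """Log10 reduction value: 4.0 means 99.99% of organisms removed."""
    return math.log10(influent_cfu_per_ml / effluent_cfu_per_ml)
```

For the spiked feed of ca. 10^6 CFU/mL, an effluent of 10^2 CFU/mL corresponds to a 4-log reduction.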
Poisson point processes imaging, tracking, and sensing
Streit, Roy L
2010-01-01
This overview of non-homogeneous and multidimensional Poisson point processes and their applications features mathematical tools and applications from emission- and transmission-computed tomography to multiple target tracking and distributed sensor detection.
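A standard construction for the non-homogeneous Poisson processes the book covers is Lewis-Shedler thinning; the sketch below is a generic illustration (not code from the book) that simulates such a process on a rectangle:

```python
import math, random

def poisson_count(mean, rng):
    """Sample a Poisson count by Knuth's product-of-uniforms method."""
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def thin_poisson_2d(lam, lam_max, width, height, rng):
    """Inhomogeneous Poisson process on [0,width]x[0,height] via thinning:
    simulate a homogeneous process of rate lam_max, keep (x,y) with
    probability lam(x,y)/lam_max."""
    pts = []
    for _ in range(poisson_count(lam_max * width * height, rng)):
        x, y = rng.uniform(0.0, width), rng.uniform(0.0, height)
        if rng.random() < lam(x, y) / lam_max:
            pts.append((x, y))
    return pts
```

The bound lam_max must dominate the intensity function lam everywhere on the window for the thinning to be exact.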
Statistical aspects of determinantal point processes
DEFF Research Database (Denmark)
Lavancier, Frédéric; Møller, Jesper; Rubak, Ege
The statistical aspects of determinantal point processes (DPPs) seem largely unexplored. We review the appealing properties of DPPs, demonstrate that they are useful models for repulsiveness, detail a simulation procedure, and provide freely available software for simulation and statistical infer...
Computer processing of the scintigraphic image using digital filtering techniques
International Nuclear Information System (INIS)
Matsuo, Michimasa
1976-01-01
The theory of digital filtering was studied as a method for the computer processing of scintigraphic images. The characteristics and design techniques of finite impulse response (FIR) digital filters with linear phases were examined using the z-transform. The conventional data processing method, smoothing, could be recognized as one kind of linear phase FIR low-pass digital filtering. Ten representatives of FIR low-pass digital filters with various cut-off frequencies were scrutinized from the frequency domain in one-dimension and two-dimensions. These filters were applied to phantom studies with cold targets, using a Scinticamera-Minicomputer on-line System. These studies revealed that the resultant images had a direct connection with the magnitude response of the filter, that is, they could be estimated fairly well from the frequency response of the digital filter used. The filter, which was estimated from phantom studies as optimal for liver scintigrams using (198)Au colloid, was successfully applied in clinical use for detecting true cold lesions and, at the same time, for eliminating spurious images. (J.P.N.)
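The observation that smoothing is one kind of linear-phase FIR low-pass filtering can be made concrete by direct 2-D convolution with a symmetric kernel. The 3x3 binomial kernel below is a generic smoother chosen for illustration, not one of the ten filters designed in the study:

```python
def fir_lowpass_smooth(image, kernel):
    """2-D FIR filtering by direct convolution (zero padding at the borders).
    A symmetric kernel gives the linear-phase property discussed above."""
    kh, kw = len(kernel), len(kernel[0])
    ph, pw = kh // 2, kw // 2
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc = 0.0
            for a in range(kh):
                for b in range(kw):
                    ii, jj = i + a - ph, j + b - pw
                    if 0 <= ii < h and 0 <= jj < w:
                        acc += kernel[a][b] * image[ii][jj]
            out[i][j] = acc
    return out

# A generic symmetric 3x3 binomial smoothing kernel (coefficients sum to 1):
SMOOTH = [[1/16, 2/16, 1/16],
          [2/16, 4/16, 2/16],
          [1/16, 2/16, 1/16]]
```

Because the kernel sums to one, flat regions of the image pass through unchanged while high-frequency noise is attenuated.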
Modeling fixation locations using spatial point processes.
Barthelmé, Simon; Trukenbrod, Hans; Engbert, Ralf; Wichmann, Felix
2013-10-01
Whenever eye movements are measured, a central part of the analysis has to do with where subjects fixate and why they fixated where they did. To a first approximation, a set of fixations can be viewed as a set of points in space; this implies that fixations are spatial data and that the analysis of fixation locations can be beneficially thought of as a spatial statistics problem. We argue that thinking of fixation locations as arising from point processes is a very fruitful framework for eye-movement data, helping turn qualitative questions into quantitative ones. We provide a tutorial introduction to some of the main ideas of the field of spatial statistics, focusing especially on spatial Poisson processes. We show how point processes help relate image properties to fixation locations. In particular, we show how point processes naturally express the idea that image features' predictability for fixations may vary from one image to another. We review other methods of analysis used in the literature, show how they relate to point process theory, and argue that thinking in terms of point processes substantially extends the range of analyses that can be performed and clarifies their interpretation.
Optimal linear filtering of Poisson process with dead time
International Nuclear Information System (INIS)
Glukhova, E.V.
1993-01-01
The paper presents a derivation of an integral equation defining the impulse response of the optimal linear filter for evaluating the intensity of a fluctuating Poisson process, with allowance for the dead time of the transducers.
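The dead-time distortion such a filter must account for is easy to see by simulation: for a non-paralyzable dead time tau, the observed rate is m = lam/(1 + lam*tau). The simulation below is illustrative and not taken from the paper:

```python
import random

def observed_rate(lam, tau, t_total, rng):
    """Count rate seen by a detector with non-paralyzable dead time tau
    when the true process is Poisson with intensity lam."""
    t, last, seen = 0.0, -1e18, 0
    while True:
        t += rng.expovariate(lam)   # exponential inter-arrival times
        if t > t_total:
            return seen / t_total
        if t - last >= tau:         # event accepted only outside dead time
            seen += 1
            last = t
```

With lam = 100/s and tau = 5 ms, roughly a third of the events are lost, so any intensity estimator that ignores dead time is badly biased at high rates.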
Nonlinear consider covariance analysis using a sigma-point filter formulation
Lisano, Michael E.
2006-01-01
The research reported here extends the mathematical formulation of nonlinear, sigma-point estimators to enable consider covariance analysis for dynamical systems. This paper presents a novel sigma-point consider filter algorithm, for consider-parameterized nonlinear estimation, following the unscented Kalman filter (UKF) variation on the sigma-point filter formulation, which requires no partial derivatives of dynamics models or measurement models with respect to the parameter list. It is shown that, consistent with the attributes of sigma-point estimators, a consider-parameterized sigma-point estimator can be developed entirely without requiring the derivation of any partial-derivative matrices related to the dynamical system, the measurements, or the considered parameters, which appears to be an advantage over the formulation of a linear-theory sequential consider estimator. It is also demonstrated that a consider covariance analysis performed with this 'partial-derivative-free' formulation yields equivalent results to the linear-theory consider filter, for purely linear problems.
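The derivative-free character of sigma-point estimators comes from propagating a deterministic set of sigma points instead of linearizing. A minimal unscented-transform sketch (generic UKF machinery, not the paper's consider-filter algorithm) is:

```python
import numpy as np

def sigma_points(mean, cov, kappa=1.0):
    """Sigma points and weights of the basic unscented transform."""
    mean = np.asarray(mean, dtype=float)
    n = mean.size
    # Columns of S are the scaled principal "spread" directions.
    S = np.linalg.cholesky((n + kappa) * np.asarray(cov, dtype=float))
    pts = [mean]
    for i in range(n):
        pts.append(mean + S[:, i])
        pts.append(mean - S[:, i])
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w
```

By construction the weighted sigma points reproduce the mean and covariance exactly, which is why no partial-derivative matrices are ever needed.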
Simon, Donald L.; Litt, Jonathan S.
2010-01-01
This paper presents an algorithm that automatically identifies and extracts steady-state engine operating points from engine flight data. It calculates the mean and standard deviation of select parameters contained in the incoming flight data stream. If the standard deviation of the data falls below defined constraints, the engine is assumed to be at a steady-state operating point, and the mean measurement data at that point are archived for subsequent condition monitoring purposes. The fundamental design of the steady-state data filter is completely generic and applicable for any dynamic system. Additional domain-specific logic constraints are applied to reduce data outliers and variance within the collected steady-state data. The filter is designed for on-line real-time processing of streaming data as opposed to post-processing of the data in batch mode. Results of applying the steady-state data filter to recorded helicopter engine flight data are shown, demonstrating its utility for engine condition monitoring applications.
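The core steady-state test (standard deviation below a constraint, then archiving the window mean) might be sketched as follows; the function name and the non-overlapping windowing scheme are illustrative assumptions:

```python
import math

def steady_state_points(stream, window, std_limit):
    """Archive the mean of each non-overlapping window whose standard
    deviation falls below std_limit (i.e. the system looks steady)."""
    archived = []
    for k in range(0, len(stream) - window + 1, window):
        w = stream[k:k + window]
        mean = sum(w) / window
        std = math.sqrt(sum((x - mean) ** 2 for x in w) / window)
        if std < std_limit:
            archived.append(mean)
    return archived
```

Transient windows fail the standard-deviation test and are simply skipped, so only quiescent operating points reach the condition-monitoring archive.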
Neural network training by Kalman filtering in process system monitoring
International Nuclear Information System (INIS)
Ciftcioglu, Oe.
1996-03-01
A Kalman filtering approach for neural network training is described. Its extended form is used as an adaptive filter in a nonlinear environment in the form of a feedforward neural network. The Kalman filtering approach generally provides fast training as well as avoidance of excessive learning, which results in enhanced generalization capability. The network is used in a process monitoring application where the inputs are measurement signals. Since the measurement errors are also modelled in the Kalman filter, the approach yields accurate training, with the implication of an accurate neural network model representing the input-output relationships in the application. As the process of concern is a dynamic system, the input source of information to the neural network is time dependent, so the training algorithm takes an adaptive form suitable for real-time operation in the monitoring task. (orig.)
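The explicit measurement-error modelling that the Kalman filter brings to training is already visible in the scalar predict/update cycle; the following is a generic sketch of that cycle, not the paper's extended-Kalman network trainer:

```python
def kalman_step(x, P, z, F=1.0, Q=0.01, H=1.0, R=0.1):
    """One scalar Kalman predict/update cycle; R models the measurement
    error explicitly, which is the key point made in the abstract."""
    x_pred = F * x                          # state prediction
    P_pred = F * P * F + Q                  # variance prediction
    K = P_pred * H / (H * P_pred * H + R)   # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)   # measurement update
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new
```

In the extended form used for training, x holds the network weights and H becomes the local gradient of the network output with respect to them.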
Discrete random signal processing and filtering primer with Matlab
Poularikas, Alexander D
2013-01-01
Engineers in all fields will appreciate a practical guide that combines several new effective MATLAB® problem-solving approaches and the very latest in discrete random signal processing and filtering.Numerous Useful Examples, Problems, and Solutions - An Extensive and Powerful ReviewWritten for practicing engineers seeking to strengthen their practical grasp of random signal processing, Discrete Random Signal Processing and Filtering Primer with MATLAB provides the opportunity to doubly enhance their skills. The author, a leading expert in the field of electrical and computer engineering, offe
Fingerprint Analysis with Marked Point Processes
DEFF Research Database (Denmark)
Forbes, Peter G. M.; Lauritzen, Steffen; Møller, Jesper
We present a framework for fingerprint matching based on marked point process models. An efficient Monte Carlo algorithm is developed to calculate the marginal likelihood ratio for the hypothesis that two observed prints originate from the same finger against the hypothesis that they originate from different fingers. Our model achieves good performance on an NIST-FBI fingerprint database of 258 matched fingerprint pairs.
Modern Statistics for Spatial Point Processes
DEFF Research Database (Denmark)
Møller, Jesper; Waagepetersen, Rasmus
2007-01-01
We summarize and discuss the current state of spatial point process theory and directions for future research, making an analogy with generalized linear models and random effect models, and illustrating the theory with various examples of applications. In particular, we consider Poisson, Gibbs...
Evaluating the sustainability of ceramic filters for point-of-use drinking water treatment.
Ren, Dianjun; Colosi, Lisa M; Smith, James A
2013-10-01
This study evaluates the social, economic, and environmental sustainability of ceramic filters impregnated with silver nanoparticles for point-of-use (POU) drinking water treatment in developing countries. The functional unit for this analysis was the amount of water consumed by a typical household over ten years (37,960 L), as delivered by either the POU technology or a centralized water treatment and distribution system. Results indicate that the ceramic filters are 3-6 times more cost-effective than the centralized water system for reduction of waterborne diarrheal illness among the general population and children under five. The ceramic filters also exhibit better environmental performance for four of five evaluated life cycle impacts: energy use, water use, global warming potential, and particulate matter emissions (PM10). For smog formation potential, the centralized system is preferable to the ceramic filter POU technology. This convergence of social, economic, and environmental criteria offers clear indication that the ceramic filter POU technology is a more sustainable choice for drinking water treatment in developing countries than the centralized treatment systems that have been widely adopted in industrialized countries.
Extreme values, regular variation and point processes
Resnick, Sidney I
1987-01-01
Extreme Values, Regular Variation and Point Processes is a readable and efficient account of the fundamental mathematical and stochastic process techniques needed to study the behavior of extreme values of phenomena based on independent and identically distributed random variables and vectors. It presents a coherent treatment of the distributional and sample path properties of extremes and records. It emphasizes the core primacy of three topics necessary for understanding extremes: the analytical theory of regularly varying functions; the probabilistic theory of point processes and random measures; and the link to asymptotic distribution approximations provided by the theory of weak convergence of probability measures in metric spaces. The book is self-contained and requires an introductory measure-theoretic course in probability as a prerequisite. Almost all sections have an extensive list of exercises which extend developments in the text, offer alternate approaches, test mastery and provide for enj...
Determinantal point process models on the sphere
DEFF Research Database (Denmark)
Møller, Jesper; Nielsen, Morten; Porcu, Emilio
We consider determinantal point processes (DPPs) on the d-dimensional unit sphere Sd. These are finite point processes exhibiting repulsiveness and with moment properties determined by a certain determinant whose entries are specified by a so-called kernel, which we assume is a complex covariance function defined on Sd × Sd. We review the appealing properties of such processes, including their specific moment properties, density expressions and simulation procedures. Particularly, we characterize and construct isotropic DPP models on Sd, where it becomes essential to specify the eigenvalues and eigenfunctions in a spectral representation for the kernel, and we figure out how repulsive isotropic DPPs can be. Moreover, we discuss the shortcomings of adapting existing models for isotropic covariance functions and consider strategies for developing new models, including a useful spectral approach.
Estimating Function Approaches for Spatial Point Processes
Deng, Chong
Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization of a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information due to ignoring the correlation among pairs. For many types of correlated data other than spatial point processes, estimating functions have been widely used for model fitting when likelihood-based approaches are not desirable. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theory, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives for balancing the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation with estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators of the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach to fitting
LHCb Online event processing and filtering
Alessio, F; Brarda, L; Frank, M; Franek, B; Galli, D; Gaspar, C; Van Herwijnen, E; Jacobsson, R; Jost, B; Köstner, S; Moine, G; Neufeld, N; Somogyi, P; Stoica, R; Suman, S
2008-01-01
The first level trigger of LHCb accepts one million events per second. After preprocessing in custom FPGA-based boards these events are distributed to a large farm of PC-servers using a high-speed Gigabit Ethernet network. Synchronisation and event management is achieved by the Timing and Trigger system of LHCb. Due to the complex nature of the selection of B-events, which are the main interest of LHCb, a full event-readout is required. Event processing on the servers is parallelised on an event basis. The reduction factor is typically 1/500. The remaining events are forwarded to a formatting layer, where the raw data files are formed and temporarily stored. A small part of the events is also forwarded to a dedicated farm for calibration and monitoring. The files are subsequently shipped to the CERN Tier0 facility for permanent storage and from there to the various Tier1 sites for reconstruction. In parallel files are used by various monitoring and calibration processes running within the LHCb Online system. ...
Point cloud processing for smart systems
Directory of Open Access Journals (Sweden)
Jaromír Landa
2013-01-01
Full Text Available High population as well as economic pressure emphasises the necessity of effective city management – from land use planning to urban green maintenance. Management effectiveness is based on precise knowledge of the city environment. Point clouds generated by mobile and terrestrial laser scanners provide precise data about objects in the scanner vicinity. From these data, the state of roads, buildings, trees and other objects important for the decision-making process can be obtained. Generally, they can support the idea of “smart” or at least “smarter” cities. Unfortunately, point clouds do not provide this type of information automatically; it has to be extracted, either by expert personnel or by object recognition software. As point clouds can represent large areas (streets or even cities), using expert personnel to identify the required objects can be very time-consuming and therefore cost-ineffective. Object recognition software allows us to detect and identify required objects semi-automatically or automatically. The first part of the article reviews and analyses the current state of the art in point cloud object recognition techniques. The following part presents common formats used for point cloud storage and frequently used software tools for point cloud processing. Further, a method for the extraction of geospatial information about detected objects is proposed. The method can thus be used not only to recognize the existence and shape of certain objects, but also to retrieve their geospatial properties. These objects can later be used directly in various GIS systems for further analyses.
High-dimensional change-point estimation: Combining filtering with convex optimization
Soh, Yong Sheng; Chandrasekaran, Venkat
2017-01-01
We consider change-point estimation in a sequence of high-dimensional signals given noisy observations. Classical approaches to this problem such as the filtered derivative method are useful for sequences of scalar-valued signals, but they have undesirable scaling behavior in the high-dimensional setting. However, many high-dimensional signals encountered in practice frequently possess latent low-dimensional structure. Motivated by this observation, we propose a technique for high-dimensional...
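The classical filtered derivative method mentioned as the scalar baseline can be sketched as follows; this is an illustration of the baseline only, not the paper's high-dimensional extension:

```python
def filtered_derivative_changepoint(signal, h):
    """Estimate a single change-point as the argmax of the filtered
    derivative: |mean of next h samples - mean of previous h samples|."""
    stats = []
    for k in range(h, len(signal) - h + 1):
        after = sum(signal[k:k + h]) / h
        before = sum(signal[k - h:k]) / h
        stats.append(abs(after - before))
    best = max(range(len(stats)), key=stats.__getitem__)
    return best + h
```

For scalar signals this works well; in high dimensions the statistic must be computed after projecting onto the signals' latent low-dimensional structure, which is the paper's contribution.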
Parametric methods for spatial point processes
DEFF Research Database (Denmark)
Møller, Jesper
(This text is submitted for the volume ‘A Handbook of Spatial Statistics' edited by A.E. Gelfand, P. Diggle, M. Fuentes, and P. Guttorp, to be published by Chapman and Hall/CRC Press, and planned to appear as Chapter 4.4 with the title ‘Parametric methods'.) This chapter considers inference procedures for parametric spatial point process models. The widespread use of sensible but ad hoc methods based on summary statistics of the kind studied in Chapter 4.3 has through the last two decades been supplemented by likelihood-based methods for parametric spatial point process models. … is studied in Section 4, and Bayesian inference in Section 5. As the development in computer technology and computational statistics continues, computationally-intensive simulation-based methods for likelihood inference will probably play an increasing role in the statistical analysis of spatial point processes.
Statistical aspects of determinantal point processes
DEFF Research Database (Denmark)
Lavancier, Frédéric; Møller, Jesper; Rubak, Ege Holger
The statistical aspects of determinantal point processes (DPPs) seem largely unexplored. We review the appealing properties of DPPs, demonstrate that they are useful models for repulsiveness, detail a simulation procedure, and provide freely available software for simulation and statistical inference. We pay special attention to stationary DPPs, where we give a simple condition ensuring their existence, construct parametric models, describe how they can be well approximated so that the likelihood can be evaluated and realizations can be simulated, and discuss how statistical inference...
Nonlinear Statistical Signal Processing: A Particle Filtering Approach
International Nuclear Information System (INIS)
Candy, J.
2007-01-01
An introduction to particle filtering is given, starting with an overview of Bayesian inference from batch to sequential processors. Once the evolving Bayesian paradigm is established, simulation-based methods using sampling theory and Monte Carlo realizations are discussed. Here the usual limitations of nonlinear approximations and non-Gaussian processes prevalent in classical nonlinear processing algorithms (e.g. Kalman filters) are no longer a restriction on performing Bayesian inference. It is shown how the underlying hidden or state variables are easily assimilated into this Bayesian construct. Importance sampling methods are then discussed, and it is shown how they can be extended to sequential solutions implemented using Markovian state-space models as a natural evolution. With this in mind, the idea of a particle filter, which is a discrete representation of a probability distribution, is developed, and it is shown how it can be implemented using sequential importance sampling/resampling methods. Finally, an application is briefly discussed comparing the performance of the particle filter designs with classical nonlinear filter implementations
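A minimal bootstrap particle filter (sequential importance sampling with resampling) for a scalar random-walk model illustrates the construct; this is a generic textbook sketch under assumed model parameters, not the implementations compared in the report:

```python
import math, random

def bootstrap_pf(obs, n_particles, rng, q_std=1.0, r_std=1.0):
    """Bootstrap particle filter for the scalar model
    x_t = x_{t-1} + N(0, q_std^2),  z_t = x_t + N(0, r_std^2)."""
    parts = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for z in obs:
        # Propagate through the dynamics (the prior acts as the proposal).
        parts = [x + rng.gauss(0.0, q_std) for x in parts]
        # Weight each particle by the Gaussian measurement likelihood.
        w = [math.exp(-0.5 * ((z - x) / r_std) ** 2) for x in parts]
        total = sum(w)
        w = [wi / total for wi in w]
        # Posterior-mean estimate, then multinomial resampling.
        estimates.append(sum(wi * x for wi, x in zip(w, parts)))
        parts = rng.choices(parts, weights=w, k=n_particles)
    return estimates
```

The weighted particle cloud is exactly the "discrete representation of a probability distribution" the abstract refers to; no linearization or Gaussian assumption is needed beyond the model itself.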
Transforming spatial point processes into Poisson processes using random superposition
DEFF Research Database (Denmark)
Møller, Jesper; Berthelsen, Kasper Klitgaaard
A given spatial point process X is superposed with a complementary spatial point process Y to obtain a Poisson process X∪Y with intensity function β. Underlying this is a bivariate spatial birth-death process (Xt,Yt) which converges towards the distribution of (X,Y). We study the joint distribution of X and Y, and their marginal and conditional distributions. In particular, we introduce a fast and easy simulation procedure for Y conditional on X. This may be used for model checking: given a model for the Papangelou intensity of the original spatial point process, this model is used to generate the complementary process, and the resulting superposition is a Poisson process with intensity function β if and only if the true Papangelou intensity is used. Whether the superposition is actually such a Poisson process can easily be examined using well-known results and fast simulation procedures for Poisson processes. We illustrate this approach to model checking...
Generalized Hofmann quantum process fidelity bounds for quantum filters
Sedlák, Michal; Fiurášek, Jaromír
2016-04-01
We propose and investigate bounds on the quantum process fidelity of quantum filters, i.e., probabilistic quantum operations represented by a single Kraus operator K . These bounds generalize the Hofmann bounds on the quantum process fidelity of unitary operations [H. F. Hofmann, Phys. Rev. Lett. 94, 160504 (2005), 10.1103/PhysRevLett.94.160504] and are based on probing the quantum filter with pure states forming two mutually unbiased bases. Determination of these bounds therefore requires far fewer measurements than full quantum process tomography. We find that it is particularly suitable to construct one of the probe bases from the right eigenstates of K , because in this case the bounds are tight in the sense that if the actual filter coincides with the ideal one, then both the lower and the upper bounds are equal to 1. We theoretically investigate the application of these bounds to a two-qubit optical quantum filter formed by the interference of two photons on a partially polarizing beam splitter. For an experimentally convenient choice of factorized input states and measurements we study the tightness of the bounds. We show that more stringent bounds can be obtained by more sophisticated processing of the data using convex optimization and we compare our methods for different choices of the input probe states.
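The bound structure F ≥ F1 + F2 − 1 can be evaluated numerically for a filter given by a single Kraus operator K. The sketch below is a simplified illustration that probes with the computational and Fourier bases (the paper recommends constructing one basis from the right eigenstates of K, which tightens the bound):

```python
import numpy as np

def basis_fidelity(K_actual, K_ideal, basis):
    """Mean squared overlap of normalized filter outputs over the probe
    states given by the columns of `basis`."""
    overlaps = []
    for psi in basis.T:
        out_a, out_i = K_actual @ psi, K_ideal @ psi
        out_a = out_a / np.linalg.norm(out_a)
        out_i = out_i / np.linalg.norm(out_i)
        overlaps.append(abs(np.vdot(out_i, out_a)) ** 2)
    return float(np.mean(overlaps))

def hofmann_lower_bound(K_actual, K_ideal):
    """Hofmann-type lower bound F >= F1 + F2 - 1 from two mutually
    unbiased probe bases (here: computational and Fourier)."""
    d = K_ideal.shape[0]
    comp = np.eye(d)
    fourier = np.array([[np.exp(2j * np.pi * j * k / d) for k in range(d)]
                        for j in range(d)]) / np.sqrt(d)
    return (basis_fidelity(K_actual, K_ideal, comp)
            + basis_fidelity(K_actual, K_ideal, fourier) - 1.0)
```

Only 2d probe states are needed instead of the d^2 configurations of full process tomography, which is the practical appeal of the bound.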
Fan, Yang-Tung; Peng, Chiou-Shian; Chu, Cheng-Yu
2000-12-01
New markets are emerging for digital electronic image devices, especially in visual communications, PC cameras, mobile/cell phones, security systems, toys, vehicle imaging systems and computer peripherals for document capture. A one-chip image system, in which the image sensor has a full digital interface, can bring image capture devices into our daily lives. Adding a color filter to such an image sensor, in a pattern of mosaic pixels or wide stripes, makes images more real and colorful. We can say that the color filter makes life more colorful. What is a color filter? A color filter blocks the incident light except for the color with the specific wavelength and transmittance of the filter itself. The color filter process consists of coating and patterning green, red and blue (or cyan, magenta and yellow) mosaic resists onto the matched pixels of the image sensing array. From the signal captured at each pixel, the image of the scene can be reconstructed. The wide use of digital electronic cameras and multimedia applications today makes the future of the color filter bright. Although it poses challenges, developing the color filter process is very worthwhile. We provide the best service in terms of short cycle time, excellent color quality, and high and stable yield. The key issues of the advanced color process that have to be solved and implemented are planarization and micro-lens technology. Many key points of color filter process technology that have to be considered will also be described in this paper.
Prior knowledge processing for initial state of Kalman filter
Czech Academy of Sciences Publication Activity Database
Suzdaleva, Evgenia
2010-01-01
Roč. 24, č. 3 (2010), s. 188-202 ISSN 0890-6327 R&D Projects: GA ČR(CZ) GP201/06/P434 Institutional research plan: CEZ:AV0Z10750506 Keywords : Kalman filtering * prior knowledge * state-space model * initial state distribution Subject RIV: BC - Control Systems Theory Impact factor: 0.729, year: 2010 http://library.utia.cas.cz/separaty/2009/AS/suzdaleva-prior knowledge processing for initial state of kalman filter.pdf
Cura, Rémi; Perret, Julien; Paparoditis, Nicolas
2017-05-01
In addition to more traditional geographical data such as images (rasters) and vectors, point cloud data are becoming increasingly available. Such data are appreciated for their precision and true three-dimensional (3D) nature. However, managing point clouds can be difficult due to scaling problems and the specificities of this data type. Several methods exist but are usually fairly specialised and solve only one aspect of the management problem. In this work, we propose a comprehensive and efficient point cloud management system based on a database server that works on groups of points (patches) rather than individual points. This system is specifically designed to cover the basic needs of point cloud users: fast loading, compressed storage, powerful patch and point filtering, easy data access and exporting, and integrated processing. Moreover, the proposed system fully integrates metadata (like sensor position) and can conjointly use point clouds with other geospatial data, such as images, vectors, topology and other point clouds. Point cloud (parallel) processing can be done in-base with fast prototyping capabilities. Lastly, the system is built on open source technologies; therefore it can be easily extended and customised. We test the proposed system with several billion points obtained from Lidar (aerial and terrestrial) and stereo-vision. We demonstrate loading speeds in the ~50 million pts/h per process range, transparent-for-user compression with greater than 2 to 4:1 compression ratios, patch filtering in the 0.1 to 1 s range, and output in the 0.1 million pts/s per process range, along with classical processing methods such as object detection.
APPLYING OF COLLABORATIVE FILTERING ALGORITHM FOR PROCESSING OF MEDICAL DATA
Directory of Open Access Journals (Sweden)
Карина Владимировна МЕЛЬНИК
2015-05-01
Full Text Available The problem of improving the effectiveness of a medical facility in implementing a social project is considered. There are different approaches to solving this problem, some of which require additional funding, which is usually unavailable. It was therefore proposed to process and use patient data from medical records. A representative sample of patients was selected using collaborative filtering techniques. A review of collaborative filtering methods showed that there are three main groups: the first calculates various measures of similarity between objects; the second comprises data mining techniques; and the third is a hybrid approach. The Gower coefficient for calculating a similarity measure over patients' medical records is considered in the article. A model for risk assessment of diseases based on collaborative filtering techniques is developed.
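The Gower coefficient suits medical records because it handles numeric and categorical fields in one similarity score. The sketch below is an illustrative implementation; the field layout and range handling are assumptions, not the article's code:

```python
def gower_similarity(a, b, kinds, ranges):
    """Gower's general similarity coefficient for two mixed-type records.
    kinds[i] is 'num' or 'cat'; ranges[i] is the span of numeric field i."""
    total = 0.0
    for i, kind in enumerate(kinds):
        if kind == "cat":
            total += 1.0 if a[i] == b[i] else 0.0        # exact-match score
        else:
            total += 1.0 - abs(a[i] - b[i]) / ranges[i]  # range-scaled score
    return total / len(kinds)
```

For example, two patients with ages 30 and 40 (range 50), the same sex, and blood pressures 120 and 140 (range 80) score (0.8 + 1.0 + 0.75)/3 = 0.85, and the most similar records form the representative sample.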
Precomputing Process Noise Covariance for Onboard Sequential Filters
Olson, Corwin G.; Russell, Ryan P.; Carpenter, J. Russell
2017-01-01
Process noise is often used in estimation filters to account for unmodeled and mismodeled accelerations in the dynamics. The process noise covariance acts to inflate the state covariance over propagation intervals, increasing the uncertainty in the state. In scenarios where the acceleration errors change significantly over time, the standard process noise covariance approach can fail to provide effective representation of the state and its uncertainty. Consider covariance analysis techniques provide a method to precompute a process noise covariance profile along a reference trajectory using known model parameter uncertainties. The process noise covariance profile allows significantly improved state estimation and uncertainty representation over the traditional formulation. As a result, estimation performance on par with the consider filter is achieved for trajectories near the reference trajectory without the additional computational cost of the consider filter. The new formulation also has the potential to significantly reduce the trial-and-error tuning currently required of navigation analysts. A linear estimation problem as described in several previous consider covariance analysis studies is used to demonstrate the effectiveness of the precomputed process noise covariance, as well as a nonlinear descent scenario at the asteroid Bennu with optical navigation.
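The covariance inflation that process noise provides appears in the standard time-update step; with a precomputed profile, Q simply becomes time-varying along the reference trajectory. A minimal sketch of that step (generic filter machinery, not the paper's consider-analysis derivation):

```python
import numpy as np

def propagate_covariance(P, F, Q):
    """Covariance time update P <- F P F^T + Q. A precomputed process
    noise profile supplies a different Q at each propagation interval."""
    return F @ P @ F.T + Q
```

Repeating this update with the profile's interval-specific Q values yields a state uncertainty consistent with the consider analysis, without carrying the consider parameters in the onboard filter.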
Liu, Wanli
2017-03-08
The time delay calibration between Light Detection and Ranging (LiDAR) and Inertial Measurement Units (IMUs) is an essential prerequisite for their combined application. However, the correspondences between LiDAR and IMU measurements are usually unknown and thus cannot be used directly for time delay calibration. To solve the problem of LiDAR-IMU time delay calibration, this paper presents a fusion method based on the iterative closest point (ICP) algorithm and the iterated sigma point Kalman filter (ISPKF), which combines the advantages of both: the ICP algorithm can precisely determine the unknown transformation between LiDAR and IMU, and the ISPKF algorithm can optimally estimate the time delay calibration parameters. First, the coordinate transformation from the LiDAR frame to the IMU frame is realized. Second, the measurement model and time delay error model of the LiDAR and IMU are established. Third, the methodology of the ICP and ISPKF procedure for LiDAR-IMU time delay calibration is presented. Experimental results validate the proposed method and demonstrate that the time delay error can be accurately calibrated.
Hydrodynamics of heavy liquid metal coolant processes and filtering apparatus
International Nuclear Information System (INIS)
Albert K Papovyants; Yuri I Orlov; Pyotr N Martynov; Yuri D Boltoev
2005-01-01
to S ≤ 0.2 d_p. It is demonstrated that the filtration efficiency can be significantly influenced by the properties of the capillary-porous structure of the filter material: the fiber diameter, the type of braiding providing the availability of stagnant zones, the porosity and the wetting angle. With some simplifying prerequisites, the dynamics of sediment growth on the porous partition has been evaluated as a function of time. Analysis of the conditions of hydrodynamic separation of filter-entrained particles (d_p ≅ 2 μm) by the coolant flow revealed that, to realize this process, the wall flow velocity must be about V = 0.2 m/s. The object of investigation was a broad class of filter materials, including metallo-ceramics, metallic grids, carbon cloth, glass fibers, needle-pierced cloth made of metallic fibers, and grainy materials (made of aluminium oxides). Considering the full set of technical characteristics, namely thermal stability, cleaning efficiency (fineness), impurity retention capacity and hydraulic resistance, the multi-layer siliceous textured cloth (SiO2 > 95%, t 400 deg. C) and needle-pierced cloth made of 40 μm-diameter metallic fibers (X18H10T steel, t ≤ 400-550 deg. C) are recommended for HLMC cleaning. Routine monitoring of filter operation is based on its resistance and on the reduction of the flow rate through the filter induced by clogging with impurities, the clogging being dependent on the concentration of suspensions in the coolant. The investigations made it possible to construct high-temperature filter specimens, including ones with an output capacity of 900 m3/h, with reference to the operation and maintenance conditions of heavy liquid metal cooled nuclear power installations. (authors)
Removal of virus to protozoan sized particles in point-of-use ceramic water filters.
Bielefeldt, Angela R; Kowalski, Kate; Schilling, Cherylynn; Schreier, Simon; Kohler, Amanda; Scott Summers, R
2010-03-01
The particle removal performance of point-of-use ceramic water filters (CWFs) was characterized in the size range of 0.02-100 microm using carboxylate-coated polystyrene fluorescent microspheres, natural particles and clay. Particles were spiked into dechlorinated tap water, and three successive water batches were treated in each of six different CWFs. Particle removal generally increased with increasing size. The removal of virus-sized 0.02 and 0.1 microm spheres was highly variable across the six filters, ranging from 63 to 99.6%. Removal of the 0.5 microm spheres was less variable, in the range 95.1-99.6%, while removal of the 1, 2, 4.5, and 10 microm spheres was >99.6%. Recoating four of the CWFs with colloidal silver solution improved removal of the 0.02 microm spheres, but had no significant effect on the other particle sizes. Log removals of 1.8-3.2 were found for natural turbidity and spiked kaolin clay particles; however, particles as large as 95 microm were detected in filtered water. Copyright 2009 Elsevier Ltd. All rights reserved.
Ventilation filters as sources of air pollution – Processes occurring on surfaces of used filters
DEFF Research Database (Denmark)
Bekö, Gabriel; Halás, Oto; Clausen, Geo
2004-01-01
Ozone concentrations were monitored upstream and downstream of used filter samples following 24 hours of ventilation with ozone-filtered air. The ozone concentration in the air upstream of the filters was maintained at ~75 ppb, while the concentration downstream of the filters was initially betwee...
Intrinsic low pass filtering improves signal-to-noise ratio in critical-point flexure biosensors
International Nuclear Information System (INIS)
Jain, Ankit; Alam, Muhammad Ashraful
2014-01-01
A flexure biosensor consists of a suspended beam and a fixed bottom electrode. The adsorption of target biomolecules on the beam changes its stiffness and results in a change of the beam's deflection. It is now well established that the sensitivity of the sensor is maximized close to the pull-in instability point, where the effective stiffness of the beam vanishes. The question, "Do the signal-to-noise ratio (SNR) and the limit of detection (LOD) also improve close to the instability point?", however, remains unanswered. In this article, we systematically analyze the noise response to evaluate the SNR and establish the LOD of critical-point flexure sensors. We find that a flexure sensor acts like an effective low-pass filter close to the instability point due to its relatively small resonance frequency, and rejects high-frequency noise, leading to improved SNR and LOD. We believe that our conclusions establish the uniqueness and the technological relevance of critical-point biosensors.
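The low-pass argument above can be illustrated with the magnitude response of a first-order low-pass filter; the cutoff value below is illustrative, not taken from the sensor analysis:

```python
# Hedged sketch: a first-order low-pass magnitude response, illustrating
# why a small resonance (cutoff) frequency rejects high-frequency noise.
# The cutoff value is an illustrative assumption.
import math

def lowpass_gain(f, fc):
    """|H(f)| of a first-order low-pass filter with cutoff fc."""
    return 1.0 / math.sqrt(1.0 + (f / fc) ** 2)

fc = 10.0   # near pull-in, the effective resonance (cutoff) drops
print(lowpass_gain(1.0, fc) > 0.99)     # in-band signal passes
print(lowpass_gain(1000.0, fc) < 0.02)  # high-frequency noise is rejected
```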
International Nuclear Information System (INIS)
Yang Zhengqiang; Li Linsun
2011-01-01
This paper aims to describe the history and current situation of the clinical application of inferior vena cava (IVC) filters. As there is a possible tendency for physicians to overuse IVC filters in clinical practice, the authors think it necessary to re-examine the advantages and disadvantages of IVC filters and to reconsider conscientiously what kind of patients are suitable for IVC filter implantation. In this article, the characteristics that an ideal IVC filter should possess are introduced, the indications for IVC filter implantation are discussed, and the complications occurring after IVC filter implantation are analyzed. The authors believe that retrievable filters will gradually replace permanent filters; for this reason, studies concerning retrievable IVC filters will become a hot spot of research in the near future. (authors)
Kim, Jeremie S; Senol Cali, Damla; Xin, Hongyi; Lee, Donghyuk; Ghose, Saugata; Alser, Mohammed; Hassan, Hasan; Ergin, Oguz; Alkan, Can; Mutlu, Onur
2018-05-09
Seed location filtering is critical in DNA read mapping, a process where billions of DNA fragments (reads) sampled from a donor are mapped onto a reference genome to identify genomic variants of the donor. State-of-the-art read mappers 1) quickly generate possible mapping locations for seeds (i.e., smaller segments) within each read, 2) extract reference sequences at each of the mapping locations, and 3) check similarity between each read and its associated reference sequences with a computationally-expensive algorithm (i.e., sequence alignment) to determine the origin of the read. A seed location filter comes into play before alignment, discarding seed locations that alignment would deem a poor match. The ideal seed location filter would discard all poor match locations prior to alignment such that there is no wasted computation on unnecessary alignments. We propose a novel seed location filtering algorithm, GRIM-Filter, optimized to exploit 3D-stacked memory systems that integrate computation within a logic layer stacked under memory layers, to perform processing-in-memory (PIM). GRIM-Filter quickly filters seed locations by 1) introducing a new representation of coarse-grained segments of the reference genome, and 2) using massively-parallel in-memory operations to identify read presence within each coarse-grained segment. Our evaluations show that for a sequence alignment error tolerance of 0.05, GRIM-Filter 1) reduces the false negative rate of filtering by 5.59x-6.41x, and 2) provides an end-to-end read mapper speedup of 1.81x-3.65x, compared to a state-of-the-art read mapper employing the best previous seed location filtering algorithm. GRIM-Filter exploits 3D-stacked memory, which enables the efficient use of processing-in-memory, to overcome the memory bandwidth bottleneck in seed location filtering. We show that GRIM-Filter significantly improves the performance of a state-of-the-art read mapper. GRIM-Filter is a universal seed location filter that can be
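The bin-and-filter idea behind the algorithm above can be sketched as follows. This Python toy stands in for the bit-vector operations that GRIM-Filter executes in 3D-stacked memory; the k-mer length, bin size, and acceptance threshold are illustrative choices, not the paper's parameters:

```python
# Hedged toy sketch of the coarse-grained-bin idea behind GRIM-Filter.
# Per reference bin, the set of k-mers present (a bit vector in the real
# design; a Python set here); a seed location survives only if enough of
# the read's k-mers exist in its bin. Parameters are illustrative.

K = 3

def kmers(seq):
    return {seq[i:i + K] for i in range(len(seq) - K + 1)}

def build_bins(reference, bin_size):
    """One k-mer-presence set per coarse-grained reference segment."""
    bins = []
    for start in range(0, len(reference), bin_size):
        # overlap by K-1 so k-mers spanning a boundary are not lost
        bins.append(kmers(reference[start:start + bin_size + K - 1]))
    return bins

def passes_filter(read, bin_kmers, threshold=0.8):
    """Keep a candidate location only if enough of the read's k-mers
    exist in that bin; costly alignment runs only for survivors."""
    rk = kmers(read)
    hits = sum(1 for k in rk if k in bin_kmers)
    return hits / len(rk) >= threshold

reference = "ACGTACGTTTGCAAACGT" * 2
bins = build_bins(reference, bin_size=12)
print([passes_filter("ACGTACGT", b) for b in bins])
```

Rejected bins never reach the sequence-alignment stage, which is where the end-to-end speedup comes from.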
Method of processing cellulose filter sludge containing radioactive waste
International Nuclear Information System (INIS)
Shibata, Setsuo; Shibuya, Hidetoshi; Kusakabe, Takao; Kawakami, Hiroshi.
1991-01-01
To cellulose filter sludges deposited with radioactive wastes, cellulase amounting to 1 to 15% of the solid content of the filter sludges is caused to act in an aqueous medium at pH 4 to 8 and 10 to 50 °C. If the pH value exceeds 8, the hydrolyzing effect of the cellulase decreases, whereas the tank corrodes if the pH value is 4 or lower. If the temperature is lower than 10 °C, the rate of the hydrolysis reaction is too low to be practical; a temperature of about 40 °C is appropriate. If it exceeds 50 °C, the cellulase itself becomes unstable. An amount of cellulase of about 8% is most effective, and additions above 15% bring no further benefit. In this way, liquids in which most of the filter sludges are hydrolyzed are processed as low-level radioactive wastes. (T.M.)
International Nuclear Information System (INIS)
Castillo, Edward; Guerrero, Thomas; Castillo, Richard; White, Benjamin; Rojo, Javier
2012-01-01
Compressible flow based image registration operates under the assumption that the mass of the imaged material is conserved from one image to the next. Depending on how the mass conservation assumption is modeled, the performance of existing compressible flow methods is limited by factors such as image quality, noise, large magnitude voxel displacements, and computational requirements. The Least Median of Squares Filtered Compressible Flow (LFC) method introduced here is based on a localized, nonlinear least squares, compressible flow model that describes the displacement of a single voxel that lends itself to a simple grid search (block matching) optimization strategy. Spatially inaccurate grid search point matches, corresponding to erroneous local minimizers of the nonlinear compressible flow model, are removed by a novel filtering approach based on least median of squares fitting and the forward search outlier detection method. The spatial accuracy of the method is measured using ten thoracic CT image sets and large samples of expert determined landmarks (available at www.dir-lab.com). The LFC method produces an average error within the intra-observer error on eight of the ten cases, indicating that the method is capable of achieving a high spatial accuracy for thoracic CT registration. (paper)
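The least-median-of-squares filtering step above can be sketched on a 1-D stand-in for the voxel-displacement model; the data values and candidate grid below are invented for illustration and are not the LFC method itself:

```python
# Hedged sketch of least-median-of-squares (LMedS) outlier filtering,
# in the spirit of the step that discards erroneous block-matching
# minimizers. A 1-D constant-displacement model stands in for the real
# voxel-displacement model; all values are illustrative.
import statistics

def lmeds_fit(samples, candidates):
    """Pick the candidate model value minimizing the MEDIAN of squared
    residuals -- robust to up to ~50% outliers, unlike a mean fit."""
    return min(candidates,
               key=lambda m: statistics.median((s - m) ** 2 for s in samples))

def filter_outliers(samples, model, tol):
    return [s for s in samples if (s - model) ** 2 <= tol]

# Seven matches agree on a displacement near 2.0; two are gross errors.
matches = [2.0, 1.9, 2.1, 2.0, 2.2, 1.8, 2.0, 9.5, -7.0]
model = lmeds_fit(matches, candidates=[m / 10 for m in range(-100, 101)])
inliers = filter_outliers(matches, model, tol=0.25)
print(model, inliers)
```

A least-squares mean fit over the same data would be dragged toward the outliers; the median criterion ignores them entirely.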
Directory of Open Access Journals (Sweden)
Lin Jia-Horng
2016-01-01
Full Text Available This study proposes making filter materials with polypropylene (PP) and low-melting point (LPET) fibers. The influences of the temperatures and times of heat treatment on the morphology of the thermal bonding points and the average pore size of the PP/LPET filter materials are investigated. The test results indicate that the morphology of the thermal bonding points is highly correlated with the average pore size. When the temperature of heat treatment is increased, the fibers are joined first with thermal bonding points, and then with large thermal bonding areas, thereby decreasing the average pore size of the PP/LPET filter materials. A heat treatment of 110 °C for 60 seconds can decrease the pore size from 39.6 μm to 12.0 μm.
Some probabilistic properties of fractional point processes
Garra, Roberto
2017-05-16
In this article, the first hitting times of generalized Poisson processes N^f(t), related to Bernstein functions f, are studied. For the space-fractional Poisson processes N_α(t), t > 0 (corresponding to f = x^α), the hitting probabilities P{T_k^α < ∞} are explicitly obtained and analyzed. The processes N^f(t) are time-changed Poisson processes N(H^f(t)) with subordinators H^f(t), and here we study N(Σ_{j=1}^n H^{f_j}(t)) and obtain probabilistic features of these extended counting processes. A section of the paper is devoted to processes of the form N(G_{H,ν}(t)), where the G_{H,ν}(t) are generalized grey Brownian motions. This involves the theory of time-dependent fractional operators of the McBride form. While the time-fractional Poisson process is a renewal process, we prove that the space-time Poisson process is no longer a renewal process.
A logistic regression estimating function for spatial Gibbs point processes
DEFF Research Database (Denmark)
Baddeley, Adrian; Coeurjolly, Jean-François; Rubak, Ege
We propose a computationally efficient logistic regression estimating function for spatial Gibbs point processes. The sample points for the logistic regression consist of the observed point pattern together with a random pattern of dummy points. The estimating function is closely related to the p...
Characterization of filters and filtration process using X-ray computerized tomography
International Nuclear Information System (INIS)
Maschio, Celio; Arruda, Antonio Celso Fonseca de
1999-01-01
The objective of this work is to present the potential of X-ray computerized tomography as a tool for the internal characterization of filters used in solid-liquid separation, mainly water filters. Cartridge filters (for industrial and domestic applications) contaminated with glass beads were used. The scanning process was carried out both with and without contaminant in the filter, to compare the attenuation coefficients of the clean and the contaminated filter. The images showed that it is possible to map the internal structure of the filters and the distribution of the contaminant, permitting a local analysis that is not possible through the standard tests used by the manufacturers, which reveal only global characteristics of the filter media. The possibility of application to manufacturing process control was also shown, as the non-invasive nature of the technique is an important advantage; the technique also permitted damage detection in filters submitted to severe operational conditions. (author)
Some probabilistic properties of fractional point processes
Garra, Roberto; Orsingher, Enzo; Scavino, Marco
2017-01-01
P{T_k^α < ∞} are explicitly obtained and analyzed. The processes N^f(t) are time-changed Poisson processes N(H^f(t)) with subordinators H^f(t), and here we study N(Σ_{j=1}^n H^{f_j}(t)) and obtain probabilistic features
On statistical analysis of compound point process
Czech Academy of Sciences Publication Activity Database
Volf, Petr
2006-01-01
Roč. 35, 2-3 (2006), s. 389-396 ISSN 1026-597X R&D Projects: GA ČR(CZ) GA402/04/1294 Institutional research plan: CEZ:AV0Z10750506 Keywords : counting process * compound process * hazard function * Cox-model Subject RIV: BB - Applied Statistics, Operational Research
Intensity-dependent point spread image processing
International Nuclear Information System (INIS)
Cornsweet, T.N.; Yellott, J.I.
1984-01-01
There is ample anatomical, physiological and psychophysical evidence that the mammalian retina contains networks that mediate interactions among neighboring receptors, resulting in intersecting transformations between input images and their corresponding neural output patterns. The almost universally accepted view is that the principal form of interaction involves lateral inhibition, resulting in an output pattern that is the convolution of the input with a ''Mexican hat'' or difference-of-Gaussians spread function, having a positive center and a negative surround. A closely related process is widely applied in digital image processing, and in photography as ''unsharp masking''. The authors show that a simple and fundamentally different process, involving no inhibitory or subtractive terms, can also account for the physiological and psychophysical findings that have been attributed to lateral inhibition. This process also produces a number of fundamental effects that occur in mammalian vision and that would be of considerable significance in robotic vision, but which cannot be explained by lateral inhibitory interaction
Estimating Uncertainty of Point-Cloud Based Single-Tree Segmentation with Ensemble Based Filtering
Directory of Open Access Journals (Sweden)
Matthew Parkan
2018-02-01
Full Text Available Individual tree crown segmentation from Airborne Laser Scanning data is a nodal problem in forest remote sensing. Focusing on single-layered spruce and fir dominated coniferous forests, this article addresses the problem of directly estimating 3D segment shape uncertainty (i.e., without field/reference surveys), using a probabilistic approach. First, a coarse segmentation (marker-controlled watershed) is applied. Then, the 3D alpha hull and several descriptors are computed for each segment. Based on these descriptors, the alpha hulls are grouped to form ensembles (i.e., groups of similar tree shapes). By examining how frequently regions of a shape occur within an ensemble, it is possible to assign a shape probability to each point within a segment. The shape probability can subsequently be thresholded to obtain improved (filtered) tree segments. Results indicate this approach can be used to produce segmentation reliability maps. A comparison to manually segmented tree crowns also indicates that the approach is able to produce more reliable tree shapes than the initial (unfiltered) segmentation.
Schulz, Maria; Gerber, Alexander; Groneberg, David A
2016-04-16
Environmental tobacco smoke (ETS) is associated with human morbidity and mortality, particularly chronic obstructive pulmonary disease (COPD) and lung cancer. Although direct DNA damage is a leading pathomechanism in active smokers, passive smoking is enough to induce bronchial asthma, especially in children. Particulate matter (PM) demonstrably plays an important role in this ETS-associated human morbidity, constituting a surrogate parameter for ETS exposure. Using an Automatic Environmental Tobacco Smoke Emitter (AETSE) and an in-house developed, non-standard smoking regime, we tried to imitate the smoking process of human smokers to demonstrate the significance of passive smoking. The mean concentration (C(mean)) and area under the curve (AUC) of particulate matter (PM2.5) emitted by 3R4F reference cigarettes and the popular filter-tipped and non-filter brand cigarettes "Roth-Händle" were measured and compared. The cigarettes were not conditioned prior to smoking. The measurements were tested for Gaussian distribution and significant differences. C(mean) PM2.5 of the 3R4F reference cigarette: 3911 µg/m³; of the filter-tipped Roth-Händle: 3831 µg/m³; and of the non-filter Roth-Händle: 2053 µg/m³. AUC PM2.5 of the 3R4F reference cigarette: 1,647,006 µg/m³·s; of the filter-tipped Roth-Händle: 1,608,000 µg/m³·s; and of the non-filter Roth-Händle: 858,891 µg/m³·s. The filter-tipped cigarettes (the 3R4F reference cigarette and the filter-tipped Roth-Händle) emitted significantly more PM2.5 than the non-filter Roth-Händle. Considering the harmful potential of PM, our findings indicate that filter-tipped cigarettes are not a less harmful alternative for passive smokers. Tobacco taxation should be reconsidered and non-smoking legislation enforced.
Comparison of various filtering methods for digital X-ray image processing
International Nuclear Information System (INIS)
Pfluger, T.; Reinfelder, H.E.; Dorschky, K.; Oppelt, A.; Siemens A.G., Erlangen
1987-01-01
Three filtering methods used for edge enhancement of digitally processed X-ray images are explained and compared. The filters are compared using two examples, a radiograph of the chest and one of the knee joint. The unsharp mask is found to yield the best compromise between edge enhancement and noise amplification, whereas the results obtained with the high-pass filter or the Wallis filter are less suitable for diagnostic evaluation. The filtered images display narrow lines, structural borders and edges, and finely spotted areas better than the original radiograph, so that diagnostic evaluation is easier after image filtering. (orig.) [de
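The unsharp mask favoured above is simple to state: sharpened = original + k * (original - blurred). A 1-D Python sketch, with an illustrative box blur and gain (real radiographic implementations use 2-D Gaussian blurs and tuned gains):

```python
# Hedged 1-D sketch of unsharp masking:
# sharpened = original + k * (original - blurred).
# Blur kernel and gain k are illustrative assumptions.

def box_blur(signal, radius=1):
    """Simple moving-average blur with edge clamping."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def unsharp_mask(signal, k=1.0):
    blurred = box_blur(signal)
    return [s + k * (s - b) for s, b in zip(signal, blurred)]

# A step edge: the mask overshoots on both sides, enhancing the edge.
step = [0.0] * 5 + [1.0] * 5
sharpened = unsharp_mask(step)
print(sharpened)
```

The overshoot on either side of the step is exactly the edge enhancement the comparison measures; the same subtraction also amplifies noise, hence the compromise noted in the abstract.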
HEPA filter leaching concept validation trials at the Idaho Chemical Processing Plant
International Nuclear Information System (INIS)
Chakravartty, A.C.
1995-04-01
The enclosed report documents six New Waste Calcining Facility (NWCF) HEPA filter leaching trials conducted at the Idaho Chemical Processing Plant using a filter leaching system, to validate the filter leaching treatment concept. The test results show that a modified filter leaching system can successfully remove both hazardous and radiological constituents to RCRA disposal levels. Based on the success of the filter leach trials, the existing leaching system will be modified to provide a safe, simple, effective, and operationally flexible filter leaching system.
Statistical properties of several models of fractional random point processes
Bendjaballah, C.
2011-08-01
Statistical properties of several models of fractional random point processes have been analyzed from the counting and time interval statistics points of view. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.
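The reduced-variance criterion mentioned above is the Fano factor Var[N]/E[N], equal to 1 for Poisson counts and below 1 for sub-Poissonian ("nonclassical") counts. A quick simulated check, illustrative only and not one of the paper's fractional models:

```python
# Hedged sketch of the reduced-variance (Fano-factor) criterion used to
# classify counting statistics: Var[N]/E[N] = 1 for Poisson counts,
# < 1 for sub-Poissonian counts. Data here are simulated, not the
# paper's fractional processes.
import random
import statistics

random.seed(7)

def fano(counts):
    return statistics.pvariance(counts) / statistics.mean(counts)

# Near-Poisson counts via thinning a fine Bernoulli grid:
poisson_counts = [sum(random.random() < 0.02 for _ in range(500))
                  for _ in range(2000)]
print(0.8 < fano(poisson_counts) < 1.2)  # close to 1, as expected
```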
Effect of the pressed ceramic filters on the reoxidation process
Directory of Open Access Journals (Sweden)
Brůna Marek
2018-01-01
Full Text Available This article focuses on reoxidation processes during the filtration of aluminium alloys. Many of our experiments have proven that reoxidation occurs when filtration media are used in the gating system. The main goal of this article is to point out the reoxidation problems occurring during filtration of aluminium alloys. The reason for conducting this work was anomalies discovered in experimental casts, which showed that small oxide films were detected after filtration of primary alloys, whereas oxides were absent when we poured without filtration media. The simulation software ProCAST was used during our research.
Microbial profile and critical control points during processing of 'robo ...
African Journals Online (AJOL)
Microbial profile and critical control points during processing of 'robo' snack from ... the relevant critical control points especially in relation to raw materials and ... to the quality of the various raw ingredients used were the roasting using earthen
Point processes and the position distribution of infinite boson systems
International Nuclear Information System (INIS)
Fichtner, K.H.; Freudenberg, W.
1987-01-01
It is shown that to each locally normal state of a boson system one can associate a point process that can be interpreted as the position distribution of the state. The point process contains all information one can get by position measurements and is determined by the latter. On the other hand, to each so-called Σ^c-point process Q they relate a locally normal state with position distribution Q
Self-exciting point process in modeling earthquake occurrences
International Nuclear Information System (INIS)
Pratiwi, H.; Slamet, I.; Respatiwulan; Saputro, D. R. S.
2017-01-01
In this paper, we present a procedure for modeling earthquakes based on a spatial-temporal point process. The magnitude distribution is expressed as a truncated exponential, and the event frequency is modeled with a spatial-temporal point process that is characterized uniquely by its associated conditional intensity process. The earthquakes can be regarded as point patterns with a temporal clustering feature, so we use a self-exciting point process to model the conditional intensity function. The choice of main shocks is conducted via the window algorithm of Gardner and Knopoff, and the model can be fitted by the maximum likelihood method for three random variables. (paper)
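A self-exciting (Hawkes-type) conditional intensity of the kind used above can be sketched directly; the parameter values below are illustrative, not values fitted to any earthquake catalogue:

```python
# Hedged sketch of a self-exciting (Hawkes-type) conditional intensity
# lambda(t) = mu + sum over past events of alpha * exp(-beta * (t - t_i)).
# Parameters mu, alpha, beta are illustrative assumptions.
import math

def conditional_intensity(t, events, mu=0.5, alpha=0.8, beta=1.2):
    """Each past event excites future occurrence; the excitation decays
    exponentially, producing the temporal clustering of aftershocks."""
    return mu + sum(alpha * math.exp(-beta * (t - ti))
                    for ti in events if ti < t)

quakes = [1.0, 1.3, 1.4]                    # a tight cluster of shocks
print(conditional_intensity(1.5, quakes))   # elevated just after the cluster
print(conditional_intensity(9.0, quakes))   # decays back toward mu
```

Maximum likelihood fitting would adjust mu, alpha and beta so that this intensity best explains the observed event times.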
Point-of-use water purification using clay pot water filters and copper ...
African Journals Online (AJOL)
2011-11-24
Nov 24, 2011 ... clay pot water filters (CPWFs) were fabricated using terracotta clay and sawdust. The sawdust was .... developed by educational initiatives and non-governmental .... est filtration rate, it had the disadvantage of not being able to.
Practical Gammatone-Like Filters for Auditory Processing
Directory of Open Access Journals (Sweden)
R. F. Lyon
2007-12-01
Full Text Available This paper deals with continuous-time filter transfer functions that resemble tuning curves at a particular set of places on the basilar membrane of the biological cochlea and that are suitable for practical VLSI implementations. The resulting filters can be used in a filterbank architecture to realize cochlear implants or auditory processors of increased biorealism. To put the reader into context, the paper starts with a short review of the gammatone filter and then exposes two of its variants, namely the differentiated all-pole gammatone filter (DAPGF) and the one-zero gammatone filter (OZGF), filter responses that provide a robust foundation for modeling cochlear transfer functions. The DAPGF and OZGF responses are attractive because they exhibit certain characteristics suitable for modeling a variety of auditory data: level-dependent gain, linear tail for frequencies well below the center frequency, asymmetry, and so forth. In addition, their form suggests their implementation by means of cascades of N identical two-pole systems, which renders them excellent candidates for efficient analog or digital VLSI realizations. We provide results that shed light on their characteristics and attributes and that can also serve as "design curves" for fitting these responses to frequency-domain physiological data. The DAPGF and OZGF responses are essentially a "missing link" between physiological, electrical, and mechanical models for auditory filtering.
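The underlying gammatone impulse response, g(t) = t^(n-1) e^(-2*pi*b*t) cos(2*pi*fc*t), which the DAPGF and OZGF variants reshape, can be sketched as follows; the order, bandwidth, and sample rate below are illustrative assumptions:

```python
# Hedged sketch of the classic gammatone impulse response
# g(t) = t^(n-1) * exp(-2*pi*b*t) * cos(2*pi*fc*t): a gamma-distribution
# envelope times a tone at centre frequency fc. Parameters (n, b, fc,
# sample rate) are illustrative, not the paper's design values.
import math

def gammatone_ir(t, n=4, b=125.0, fc=1000.0):
    return (t ** (n - 1)) * math.exp(-2 * math.pi * b * t) \
           * math.cos(2 * math.pi * fc * t)

# The envelope (fc = 0 isolates it) rises from zero, peaks, then decays:
env = [abs(gammatone_ir(k / 16000.0, fc=0.0)) for k in range(64)]
peak = max(range(64), key=lambda k: env[k])
print(0 < peak < 63)   # interior maximum, as expected of a gamma envelope
```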
Bayesian signal processing classical, modern, and particle filtering methods
Candy, James V
2016-01-01
This book aims to give readers a unified Bayesian treatment starting from the basics (Bayes' rule) to the more advanced (Monte Carlo sampling), evolving to the next-generation model-based techniques (sequential Monte Carlo sampling). This next edition incorporates a new chapter on "Sequential Bayesian Detection," a new section on "Ensemble Kalman Filters," as well as an expansion of the Case Studies that detail Bayesian solutions for a variety of applications. These studies illustrate Bayesian approaches to real-world problems, incorporating detailed particle filter designs, adaptive particle filters and sequential Bayesian detectors. In addition to these major developments, a variety of sections are expanded to "fill in the gaps" of the first edition. Here, metrics for particle filter (PF) designs with emphasis on classical "sanity testing" lead to ensemble techniques as a basic requirement for performance analysis. The expansion of information theory metrics and their application to PF designs is fully developed an...
A phase-equalized digital multirate filter for 50 Hz signal processing
Energy Technology Data Exchange (ETDEWEB)
Vainio, O. [Tampere University of Technology, Signal Processing Laboratory, Tampere (Finland)
1997-12-31
A new multistage digital filter is proposed for 50 Hz line frequency signal processing in zero-crossing detectors and synchronous power systems. The purpose of the filter is to extract the fundamental sinusoidal signal from noise and impulsive disturbances so that the output is accurately in phase with the primary input signal. This is accomplished with a cascade of a median filter, a linear-phase FIR filter, and a phase corrector. A 10 kHz output timing resolution is achieved by up-sampling with a customized interpolation filter. (orig.) 15 refs.
2018-01-01
ARL-TR-8270, JAN 2018, US Army Research Laboratory. An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Modified Watershed Transform, by Kwok F Tom (Sensors and Electron...). Reporting period: 1 October 2016-30 September 2017.
Non-parametric Bayesian inference for inhomogeneous Markov point processes
DEFF Research Database (Denmark)
Berthelsen, Kasper Klitgaard; Møller, Jesper; Johansen, Per Michael
is a shot noise process, and the interaction function for a pair of points depends only on the distance between the two points and is a piecewise linear function modelled by a marked Poisson process. Simulation of the resulting posterior using a Metropolis-Hastings algorithm in the "conventional" way...
A tutorial on Palm distributions for spatial point processes
DEFF Research Database (Denmark)
Coeurjolly, Jean-Francois; Møller, Jesper; Waagepetersen, Rasmus Plenge
2017-01-01
This tutorial provides an introduction to Palm distributions for spatial point processes. Initially, in the context of finite point processes, we give an explicit definition of Palm distributions in terms of their density functions. Then we review Palm distributions in the general case. Finally, we...
SHAPE FROM TEXTURE USING LOCALLY SCALED POINT PROCESSES
Directory of Open Access Journals (Sweden)
Eva-Maria Didden
2015-09-01
Full Text Available Shape from texture refers to the extraction of 3D information from 2D images with irregular texture. This paper introduces a statistical framework to learn shape from texture where convex texture elements in a 2D image are represented through a point process. In a first step, the 2D image is preprocessed to generate a probability map corresponding to an estimate of the unnormalized intensity of the latent point process underlying the texture elements. The latent point process is subsequently inferred from the probability map in a non-parametric, model-free manner. Finally, the 3D information is extracted from the point pattern by applying a locally scaled point process model, where the local scaling function represents the deformation caused by the projection of a 3D surface onto a 2D image.
Process for structural geologic analysis of topography and point data
Eliason, Jay R.; Eliason, Valerie L. C.
1987-01-01
A quantitative method of geologic structural analysis of digital terrain data is described for implementation on a computer. Assuming selected valley segments are controlled by the underlying geologic structure, topographic lows in the terrain data, defining valley bottoms, are detected, filtered and accumulated into a series of line segments defining contiguous valleys. The line segments are then vectorized to produce vector segments, defining valley segments, which may be indicative of the underlying geologic structure. Coplanar analysis is performed on vector segment pairs to determine which vectors produce planes that represent underlying geologic structure. Point data such as fracture phenomena, which can be related to fracture planes in 3-dimensional space, can be analyzed to define common plane orientations and locations. The vectors, points, and planes are displayed in various formats for interpretation.
Point-of-use water purification using clay pot water filters and copper ...
African Journals Online (AJOL)
All other critical parameters such as total hardness, turbidity, electrical conductivity and ions in the filtered water were also within acceptable levels for drinking water quality. The filtration rate of the pot was also measured as a function of grain size of the sawdust and height of the water column in it. The filtration rate was ...
Ibey, Bennett; Subramanian, Hariharan; Ericson, Nance; Xu, Weijian; Wilson, Mark; Cote, Gerard L.
2005-03-01
A blood perfusion and oxygenation sensor has been developed for in situ monitoring of transplanted organs. In processing in situ data, motion artifacts due to increased perfusion can create invalid oxygenation saturation values. In order to remove the unwanted artifacts from the pulsatile signal, adaptive filtering was employed using a third wavelength source centered at 810 nm as a reference signal. The 810 nm source resides approximately at the isosbestic point in the hemoglobin absorption curve, where the absorbance of light is nearly equal for oxygenated and deoxygenated hemoglobin. Using an autocorrelation-based algorithm, oxygenation saturation values can be obtained without the need for large sampling data sets, allowing for near real-time processing. This technique has been shown to be more reliable than traditional techniques and proven to adequately improve the measurement of oxygenation values in varying perfusion states.
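The reference-channel idea above can be sketched with a basic LMS adaptive canceller: the 810 nm channel predicts the motion artifact, and the prediction is subtracted from the pulsatile signal. This is a minimal illustration; the authors' autocorrelation-based algorithm and all parameter values below are assumptions, not the paper's implementation.

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=4, mu=0.02):
    """Remove the component of `primary` correlated with `reference`
    (e.g. an 810 nm motion-artifact channel) via an LMS adaptive filter."""
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]   # most recent reference samples
        e = primary[n] - w @ x              # cleaned sample = prediction error
        w += 2 * mu * e * x                 # LMS weight update
        out[n] = e
    return out
```

Because the artifact is correlated with the reference while the physiological signal is not, the filter converges to cancel only the artifact.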
Ahmad, Muneer; Jung, Low Tan; Bhuiyan, Al-Amin
2017-10-01
Digital signal processing techniques commonly employ fixed-length window filters to process signal contents. DNA signals differ in characteristics from common digital signals, since they carry nucleotides as contents. Nucleotides carry genetic code context and exhibit fuzzy behavior due to their special structure and order in the DNA strand. Employing conventional fixed-length window filters for DNA signal processing produces spectral leakage and hence results in signal noise. A biological-context-aware adaptive window filter is required to process DNA signals. This paper introduces a biologically inspired fuzzy adaptive window median filter (FAWMF), which computes the fuzzy membership strength of nucleotides in each slide of the window and filters nucleotides based on median filtering with a combination of s-shaped and z-shaped filters. Since coding regions cause 3-base periodicity through an unbalanced nucleotide distribution, producing a relatively high bias in nucleotide usage, this fundamental characteristic of nucleotides has been exploited in FAWMF to suppress signal noise. Along with the adaptive response of FAWMF, a strong correlation between median nucleotides and the Π-shaped filter was observed, which produced enhanced discrimination between coding and non-coding regions compared with fixed-length conventional window filters. The proposed FAWMF attains a significant enhancement in coding-region identification, i.e., 40% to 125%, as compared to other conventional window filters tested over more than 250 benchmarked and randomly taken DNA datasets of different organisms. This study shows that conventional fixed-length window filters applied to DNA signals do not achieve significant results, since the nucleotides carry genetic code context. The proposed FAWMF algorithm is adaptive and significantly outperforms fixed-length filters in processing DNA signal contents. Applied to a variety of DNA datasets, the algorithm produced noteworthy discrimination between coding and non-coding regions, contrary to conventional fixed-length window filters.
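For contrast, the conventional fixed-length window filtering the paper argues against can be sketched as a plain sliding median over a numeric encoding of the sequence. The EIIP-style mapping and window length below are illustrative assumptions, not the paper's exact encoding.

```python
import numpy as np

# EIIP-style numeric mapping for nucleotides (values are an illustrative
# assumption, not necessarily the paper's encoding).
EIIP = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}

def median_window_filter(seq, win=9):
    """Conventional fixed-length sliding median over a numeric DNA signal,
    the baseline that the adaptive FAWMF is compared against."""
    x = np.array([EIIP[b] for b in seq])
    half = win // 2
    padded = np.pad(x, half, mode="edge")    # edge-pad so output length == input
    return np.array([np.median(padded[i:i + win]) for i in range(len(x))])
```

The FAWMF replaces the fixed `win` with a window adapted to the fuzzy membership strength of the nucleotides, which is what avoids the spectral leakage of this baseline.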
Process of recovery and reuse of the washing liquids from filters
International Nuclear Information System (INIS)
Flynn, G.C.
1975-01-01
The present invention relates to a backflush procedure intended for a repeated recovery and reuse of the liquid used for washing a 'pre-layer' filter. This type of filter is used for purifying the water of the steam generator circuit in electric energy generating plants equipped with steam turbines. Said 'pre-layer' filter can also be applied in the auxiliary facilities of electric energy generating plants, especially for treating the fuel ponds in nuclear reactors and for radioactive waste processing.
Vertical removable filters in shielded casing for radioactive cells and process gaseous wastes
International Nuclear Information System (INIS)
Prinz, M.
1983-01-01
The installation of shielded filtration casings is necessary for highly contaminated active cells and for process gaseous wastes containing active aerosols. SGN and COGEMA have developed two filtration casings (for 500 and 3000 m³/h flow rates) equipped with a vertically removable filter element. The filter elements, fitted with high-efficiency glass fiber media, are cylindrical in shape. The top flange of the filter is equipped with a gasket to ensure sealing between the filter element and its casing. The filter element is blindly installed and removed, and its orientation inside the casing is immaterial. The shielded casing is made of a cast iron or steel shielding slab under which the filtration casing itself is secured. This shielding slab rests on side shielding walls made of concrete or cast iron. The filter element, integral with a plug, is placed in the horizontal slab. The attachment of the filter element under the plug is necessary so that the plug and filter may be removed as one unit, and to keep the filter on its sealing surfaces, according to sealing and seismic resistance requirements. Filter removal is performed with the help of an intervention cask, centered over a removable trap door provided on the shielding slab of the casing. First, the plug and filter element assembly is raised into the cask. Then, the filter element may be separated from the plug, which is decontaminated and salvaged. The whole plug and filter assembly may also be sent to the conditioning waste storage. The installation of a clean filter element in the casing is also performed with the help of the intervention cask, proceeding as above but in reverse order. The same intervention cask may also be used to remove the upstream and downstream dampers from the top of the casing.
DEFF Research Database (Denmark)
Møller, Jesper; Ghorbani, Mohammad; Rubak, Ege Holger
We show how a spatial point process, where to each point there is associated a random quantitative mark, can be identified with a spatio-temporal point process specified by a conditional intensity function. For instance, the points can be tree locations, the marks can express the size of trees..., and the conditional intensity function can describe the distribution of a tree (i.e., its location and size) conditionally on the larger trees. This enables us to construct parametric statistical models which are easily interpretable and where likelihood-based inference is tractable. In particular, we consider maximum...
International Nuclear Information System (INIS)
Leibold, H.; Leiber, T.; Doeffert, I.; Wilhelm, J.G.
1993-08-01
HEPA filter operation at high concentrations of fine dusts requires periodic recleaning of the filter units in their service locations. Due to the low mechanical stress induced during the recleaning process, regeneration via low-pressure reverse flow is a very suitable technique. Recleanability of HEPA filters has been attained for particle diameters >0.4 μm at air velocities up to 1 m/s, but filter clogging occurred in the case of smaller particles, for which the recleaning forces are too weak.
An adaptive deep-coupled GNSS/INS navigation system with hybrid pre-filter processing
Wu, Mouyan; Ding, Jicheng; Zhao, Lin; Kang, Yingyao; Luo, Zhibin
2018-02-01
The deep-coupling of a global navigation satellite system (GNSS) with an inertial navigation system (INS) can provide accurate and reliable navigation information. There are several kinds of deeply-coupled structures. These can be divided mainly into coherent and non-coherent pre-filter based structures, which have their own strong advantages and disadvantages, especially in accuracy and robustness. In this paper, the existing pre-filters of the deeply-coupled structures are first analyzed and modified to improve them. Then, an adaptive GNSS/INS deeply-coupled algorithm with hybrid pre-filters processing is proposed to combine the advantages of coherent and non-coherent structures. An adaptive hysteresis controller is designed to implement the hybrid pre-filters processing strategy. The simulation and vehicle test results show that the adaptive deeply-coupled algorithm with hybrid pre-filters processing can effectively improve navigation accuracy and robustness, especially in a GNSS-challenged environment.
Post-Processing in the Material-Point Method
DEFF Research Database (Denmark)
Andersen, Søren; Andersen, Lars Vabbersgaard
The material-point method (MPM) is a numerical method for dynamic or static analysis of solids using a discretization in time and space. The method has been shown to be successful in modelling physical problems involving large deformations, which are difficult to model with traditional numerical tools... such as the finite element method. In the material-point method, a set of material points is utilized to track the problem in time and space, while a computational background grid is utilized to obtain spatial derivatives relevant to the physical problem. Currently, the research within the material-point method... The first idea involves associating a volume with each material point and displaying the deformation of this volume. In the discretization process, the physical domain is divided into a number of smaller volumes, each represented by a simple shape; here quadrilaterals are chosen for the presented...
Multivariate Product-Shot-noise Cox Point Process Models
DEFF Research Database (Denmark)
Jalilian, Abdollah; Guan, Yongtao; Mateu, Jorge
We introduce a new multivariate product-shot-noise Cox process which is useful for modeling multi-species spatial point patterns with clustered intra-specific interactions and neutral, negative or positive inter-specific interactions. The auto and cross pair correlation functions of the process... can be obtained in closed analytical forms, and approximate simulation of the process is straightforward. We use the proposed process to model interactions within and among five tree species in the Barro Colorado Island plot.
PROCESSING UAV AND LIDAR POINT CLOUDS IN GRASS GIS
Directory of Open Access Journals (Sweden)
V. Petras
2016-06-01
Today’s methods of acquiring Earth surface data, namely lidar and unmanned aerial vehicle (UAV) imagery, non-selectively collect or generate large amounts of points. Point clouds from different sources vary in their properties such as number of returns, density, or quality. We present a set of tools with applications for different types of point clouds obtained by a lidar scanner, the structure from motion technique (SfM), and a low-cost 3D scanner. To take advantage of the vertical structure of multiple return lidar point clouds, we demonstrate tools to process them using 3D raster techniques which allow, for example, the development of custom vegetation classification methods. Dense point clouds obtained from UAV imagery, often containing redundant points, can be decimated using various techniques before further processing. We implemented and compared several decimation techniques with regard to their performance and the final digital surface model (DSM). Finally, we will describe the processing of a point cloud from a low-cost 3D scanner, namely Microsoft Kinect, and its application for interaction with physical models. All the presented tools are open source and integrated in GRASS GIS, a multi-purpose open source GIS with remote sensing capabilities. The tools integrate with other open source projects, specifically Point Data Abstraction Library (PDAL), Point Cloud Library (PCL), and the OpenKinect libfreenect2 library to benefit from the open source point cloud ecosystem. The implementation in GRASS GIS ensures long term maintenance and reproducibility by the scientific community but also by the original authors themselves.
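One of the decimation techniques mentioned above, reducing a dense cloud to one averaged point per occupied grid cell, can be sketched as follows. This is a generic implementation for illustration, not the GRASS GIS tools themselves.

```python
import numpy as np

def grid_decimate(points, cell=1.0):
    """Keep one averaged point per occupied grid cell, a simple
    decimation strategy for dense UAV/SfM point clouds."""
    keys = np.floor(points / cell).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()               # flatten for older/newer numpy
    n_cells = inverse.max() + 1
    sums = np.zeros((n_cells, points.shape[1]))
    np.add.at(sums, inverse, points)        # accumulate per-cell sums
    counts = np.bincount(inverse, minlength=n_cells)
    return sums / counts[:, None]
```

Averaging per cell suppresses redundant points while roughly preserving the surface, which is the trade-off the DSM comparison in the paper quantifies.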
International Nuclear Information System (INIS)
Docimo, D.J.; Ghanaatpishe, M.; Mamun, A.
2017-01-01
This paper develops an algorithm for estimating photovoltaic (PV) module temperature and effective irradiation level. The power output of a PV system depends directly on both of these states. Estimating the temperature and irradiation allows for improved state-based control methods while eliminating the need for additional sensors. Thermal models and irradiation estimators have been developed in the literature, but none incorporate feedback for estimation. This paper outlines an Extended Kalman Filter for temperature and irradiation estimation. These estimates are, in turn, used within a novel state-based controller that tracks the maximum power point of the PV system. Simulation results indicate this state-based controller provides up to an 8.5% increase in energy produced per day as compared to an impedance matching controller. A sensitivity analysis is provided to examine the impact that state-estimate errors have on the ability to find the optimal operating point of the PV system. - Highlights: • Developed a temperature and irradiation estimator for photovoltaic systems. • Designed an Extended Kalman Filter to handle model and measurement uncertainty. • Developed a state-based controller for maximum power point tracking (MPPT). • Validated combined estimator/controller algorithm for different weather conditions. • Algorithm increases energy captured up to 8.5% over traditional MPPT algorithms.
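An Extended Kalman Filter of the kind described, with module temperature and effective irradiation as states and only temperature measured, can be sketched as follows. The first-order thermal model and every parameter value here are illustrative assumptions, not the paper's.

```python
import numpy as np

# Illustrative two-state model: module temperature T and effective
# irradiation G (parameter values are assumptions, not the paper's).
a, b, dt, T_amb = 0.1, 0.02, 1.0, 25.0
F = np.array([[1 - a * dt, b * dt],     # Jacobian of the state transition
              [0.0,        1.0   ]])    # G modelled as a random walk
H = np.array([[1.0, 0.0]])              # only temperature is measured
Q = np.diag([0.01, 0.5])
R = np.array([[0.25]])

def ekf_step(x, P, z):
    """One predict/update cycle of the temperature-irradiation EKF."""
    x_pred = np.array([x[0] + dt * (-a * (x[0] - T_amb) + b * x[1]), x[1]])
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + (K @ (z - H @ x_pred)).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new
```

Because irradiation enters the temperature dynamics, the unmeasured state G is observable through the temperature innovations, which is what lets the filter replace an irradiance sensor.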
Scattering analysis of point processes and random measures
International Nuclear Information System (INIS)
Hanisch, K.H.
1984-01-01
In the present paper scattering analysis of point processes and random measures is studied. Known formulae which connect the scattering intensity with the pair distribution function of the studied structures are proved in a rigorous manner with tools of the theory of point processes and random measures. For some special fibre processes the scattering intensity is computed. For a class of random measures, namely for 'grain-germ-models', a new formula is proved which yields the pair distribution function of the 'grain-germ-model' in terms of the pair distribution function of the underlying point process (the 'germs') and of the mean structure factor and the mean squared structure factor of the particles (the 'grains'). (author)
Dew point vs. bubble point: a misunderstood constraint on gravity drainage processes
Energy Technology Data Exchange (ETDEWEB)
Nenninger, J. [N-Solv Corp., Calgary, AB (Canada); Gunnewiek, L. [Hatch Ltd., Mississauga, ON (Canada)
2009-07-01
This study demonstrated that gravity drainage processes that use blended fluids such as solvents have an inherently unstable material balance due to differences between dew point and bubble point compositions. The instability can lead to the accumulation of volatile components within the chamber, and impair mass and heat transfer processes. Case studies were used to demonstrate the large temperature gradients within the vapour chamber caused by temperature differences between the bubble point and dew point for blended fluids. A review of published data showed that many experiments on in-situ processes do not account for unstable material balances caused by a lack of steam trap control. Temperature profiles from steam-assisted gravity drainage (SAGD) studies showed significant temperature depressions caused by methane accumulations at the outside perimeter of the steam chamber. It was demonstrated that the condensation of large volumes of purified solvents provided an efficient mechanism for the removal of methane from the chamber. It was concluded that gravity drainage processes can be optimized by using pure propane during the injection process. 22 refs., 1 tab., 18 figs.
Processing of a neutrographic image, using Bosso Filter
International Nuclear Information System (INIS)
Pereda, C.; Bustamante, M.; Henriquez, C.
2006-01-01
The following paper shows the results of the treatment of a neutron radiographic image obtained in the RECH-1 experimental reactor, making use of the computational image-treatment techniques of the IDL software, complemented with the Bosso filter method already tested to improve quality in medical diagnosis. These techniques possess an undeniable value as an auxiliary to neutrography, as can be seen from this first trial with an auxiliary neutrographic image used in PGNAA. These results suggest that the method should bring all its advantages to standard neutrographic analyses: structural images, density variations, etc.
Corbetta, Matteo; Sbarufatti, Claudio; Giglio, Marco; Todd, Michael D.
2018-05-01
The present work critically analyzes the probabilistic definition of dynamic state-space models subject to Bayesian filters used for monitoring and predicting monotonic degradation processes. The study focuses on the selection of the random process, often called process noise, which is a key perturbation source in the evolution equation of particle filtering. Despite the large number of applications of particle filtering predicting structural degradation, the adequacy of the picked process noise has not been investigated. This paper reviews existing process noise models that are typically embedded in particle filters dedicated to monitoring and predicting structural damage caused by fatigue, which is monotonic in nature. The analysis emphasizes that existing formulations of the process noise can jeopardize the performance of the filter in terms of state estimation and remaining life prediction (i.e., damage prognosis). This paper subsequently proposes an optimal and unbiased process noise model and a list of requirements that the stochastic model must satisfy to guarantee high prognostic performance. These requirements are useful for future and further implementations of particle filtering for monotonic system dynamics. The validity of the new process noise formulation is assessed against experimental fatigue crack growth data from a full-scale aeronautical structure using dedicated performance metrics.
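The paper's central point, that the process noise in a particle filter for monotonic degradation should never push the state backwards, can be illustrated with a bootstrap filter whose multiplicative lognormal noise keeps every trajectory non-decreasing. The Paris-type growth law and all parameters below are assumptions for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_monotonic(z_meas, n_p=500, C=1e-3, m=1.3, sigma_v=0.05, sigma_z=0.1):
    """Bootstrap particle filter for a monotonic crack-growth state.
    Multiplicative lognormal process noise keeps every particle
    trajectory non-decreasing (illustrative, not the paper's model)."""
    a = np.full(n_p, 1.0)                          # initial crack length
    for z in z_meas:
        step = C * a ** m                           # Paris-type increment
        a = a + step * rng.lognormal(0.0, sigma_v, n_p)  # always >= a
        w = np.exp(-0.5 * ((z - a) / sigma_z) ** 2)      # Gaussian likelihood
        w /= w.sum()
        a = rng.choice(a, size=n_p, p=w)            # multinomial resampling
    return a.mean()
```

An additive zero-mean Gaussian noise here would let particles shrink the crack, exactly the bias the paper identifies in existing formulations.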
A MARKED POINT PROCESS MODEL FOR VEHICLE DETECTION IN AERIAL LIDAR POINT CLOUDS
Directory of Open Access Journals (Sweden)
A. Börcs
2012-07-01
In this paper we present an automated method for vehicle detection in LiDAR point clouds of crowded urban areas collected from an aerial platform. We assume that the input cloud is unordered, but it contains additional intensity and return number information which are jointly exploited by the proposed solution. Firstly, the 3-D point set is segmented into ground, vehicle, building roof, vegetation and clutter classes. Then the points with the corresponding class labels and intensity values are projected to the ground plane, where the optimal vehicle configuration is described by a Marked Point Process (MPP) model of 2-D rectangles. Finally, the Multiple Birth and Death algorithm is utilized to find the configuration with the highest confidence.
Gu, Wenjun; Zhang, Weizhi; Wang, Jin; Amini Kashani, M. R.; Kavehrad, Mohsen
2015-01-01
Over the past decade, location based services (LBS) have found their wide applications in indoor environments, such as large shopping malls, hospitals, warehouses, airports, etc. Current technologies provide wide choices of available solutions, which include Radio-frequency identification (RFID), Ultra wideband (UWB), wireless local area network (WLAN) and Bluetooth. With the rapid development of light-emitting-diode (LED) technology, visible light communications (VLC) also bring a practical approach to LBS. As visible light has a better immunity against multipath effect than radio waves, higher positioning accuracy is achieved. LEDs are utilized both for illumination and positioning purposes to realize relatively lower infrastructure cost. In this paper, an indoor positioning system using VLC is proposed, with LEDs as transmitters and photo diodes as receivers. The algorithm for estimation is based on received-signal-strength (RSS) information collected from photo diodes and the trilateration technique. By appropriately making use of the characteristics of receiver movements and the property of trilateration, estimation of three-dimensional (3-D) coordinates is attained. A filtering technique is applied to enable tracking capability of the algorithm, and a higher accuracy is reached compared to raw estimates. A Gaussian mixture Sigma-point particle filter (GM-SPPF) is proposed for this 3-D system, which introduces the notion of the Gaussian Mixture Model (GMM). The number of particles in the filter is reduced by approximating the probability distribution with Gaussian components.
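The trilateration step described above, before any filtering, can be sketched as a linear least-squares solve over ranges inferred from RSS. This is a generic formulation; the anchor positions and ranges below are hypothetical.

```python
import numpy as np

def trilaterate(anchors, dists):
    """Linear least-squares position fix: subtracting the first range
    equation from the others removes the quadratic term in the unknown."""
    p0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - p0)
    b = (d0 ** 2 - dists[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(p0 ** 2))
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

For a unique 3-D fix the anchors must not be coplanar; with ceiling-mounted LEDs at one height the vertical coordinate needs the extra receiver-movement constraints the paper exploits.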
The effect of bathymetric filtering on nearshore process model results
Plant, N.G.; Edwards, K.L.; Kaihatu, J.M.; Veeramony, J.; Hsu, L.; Holland, K.T.
2009-01-01
Nearshore wave and flow model results are shown to exhibit a strong sensitivity to the resolution of the input bathymetry. In this analysis, bathymetric resolution was varied by applying smoothing filters to high-resolution survey data to produce a number of bathymetric grid surfaces. We demonstrate that the sensitivity of model-predicted wave height and flow to variations in bathymetric resolution had different characteristics. Wave height predictions were most sensitive to resolution of cross-shore variability associated with the structure of nearshore sandbars. Flow predictions were most sensitive to the resolution of intermediate scale alongshore variability associated with the prominent sandbar rhythmicity. Flow sensitivity increased in cases where a sandbar was closer to shore and shallower. Perhaps the most surprising implication of these results is that the interpolation and smoothing of bathymetric data could be optimized differently for the wave and flow models. We show that errors between observed and modeled flow and wave heights are well predicted by comparing model simulation results using progressively filtered bathymetry to results from the highest resolution simulation. The damage done by oversmoothing or inadequate sampling can therefore be estimated using model simulations. We conclude that the ability to quantify prediction errors will be useful for supporting future data assimilation efforts that require this information.
Directory of Open Access Journals (Sweden)
S. Ju. Panov
2012-01-01
Data are presented on the successful modernization of a filter, replacing its mechanical cleaning system with low-pressure pulsed-jet regeneration, in order to improve the dedusting performance of the aspiration emission system at grain processing enterprises.
Directory of Open Access Journals (Sweden)
H. Enayati
2015-12-01
The segmented image is added to the elevation raster and the vegetation elevation is detected. Results show that point-cloud texture is good data for filtering vegetation and generating a DEM automatically.
Guided filtering for solar image/video processing
Directory of Open Access Journals (Sweden)
Long Xu
2017-06-01
A new image enhancement algorithm employing guided filtering is proposed in this work for the enhancement of solar images and videos, so that users can easily identify important fine structures embedded in the recorded images/movies of solar observations. The proposed algorithm can efficiently remove image noise, including Gaussian and impulse noise. Meanwhile, it can further highlight fibrous structures on/beyond the solar disk. These fibrous structures can clearly demonstrate the progress of solar flares, prominence coronal mass ejections, magnetic fields, and so on. The experimental results show that the proposed algorithm yields significant enhancement of the visual quality of solar images beyond the original input and several classical image enhancement algorithms, thus facilitating easier determination of interesting solar burst activities from recorded images/movies.
Context-based adaptive filtering of interest points in image retrieval
DEFF Research Database (Denmark)
Nguyen, Phuong Giang; Andersen, Hans Jørgen
2009-01-01
Interest points have been used as local features with success in many computer vision applications such as image/video retrieval and object recognition. However, a major issue when using this approach is the large number of interest points detected in each image, creating a dense feature space... a subset of features. Our approach differs from others in the fact that feature selection is based on the context of the given image. Our experimental results show a significant reduction rate of features while preserving the retrieval performance.
MODELLING AND SIMULATION OF A NEUROPHYSIOLOGICAL EXPERIMENT BY SPATIO-TEMPORAL POINT PROCESSES
Directory of Open Access Journals (Sweden)
Viktor Beneš
2011-05-01
We present a stochastic model of an experiment monitoring the spiking activity of a place cell in the hippocampus of an experimental animal moving in an arena. A doubly stochastic spatio-temporal point process is used to model and quantify overdispersion. The stochastic intensity is modelled by a Lévy-based random field, while the animal path is simplified to a discrete random walk. In a simulation study, first a method suggested previously is used. Then it is shown that a solution of the filtering problem yields the desired inference about the random intensity. Two approaches are suggested, and the new one based on a finite point process density is applied. Using Markov chain Monte Carlo we obtain numerical results from the simulated model. The methodology is discussed.
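Simulation of such a doubly stochastic (Cox) process can be sketched in two stages: draw a random intensity, then generate points given it by thinning a homogeneous Poisson process (Lewis-Shedler thinning). The gamma environment variable and sinusoidal intensity are illustrative assumptions, not the paper's Lévy-based field.

```python
import numpy as np

rng = np.random.default_rng(1)

def thin_poisson(lam, lam_max, T):
    """Lewis-Shedler thinning: simulate an inhomogeneous Poisson process
    on [0, T] from a homogeneous one with dominating rate lam_max."""
    n = rng.poisson(lam_max * T)
    t = np.sort(rng.uniform(0.0, T, n))
    keep = rng.uniform(0.0, lam_max, n) < lam(t)   # accept with prob lam/lam_max
    return t[keep]

# Cox (doubly stochastic) process: draw a random environment first,
# then generate spike times given that intensity.
scale = rng.gamma(2.0, 2.0)
spikes = thin_poisson(lambda t: scale * (1.0 + np.sin(t)), 2.0 * scale, 20.0)
```

The extra randomness in `scale` is what produces the overdispersion relative to a plain Poisson process that the filtering problem then has to infer.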
Pointo - a Low Cost Solution to Point Cloud Processing
Houshiar, H.; Winkler, S.
2017-11-01
With advances in technology, access to data, especially 3D point cloud data, becomes more and more an everyday task. 3D point clouds are usually captured with very expensive tools such as 3D laser scanners, or with very time-consuming methods such as photogrammetry. Most of the available software for 3D point cloud processing is designed for experts and specialists in this field and usually comes as very large software packages containing a variety of methods and tools. This results in software that is usually very expensive to acquire and also very difficult to use. The difficulty of use is caused by the complicated user interfaces required to accommodate a large list of features. The aim of these complex software packages is to provide a powerful tool for a specific group of specialists. However, they are not necessarily required by the majority of the upcoming average users of point clouds. In addition to the complexity and high cost of this software, it generally relies on expensive and modern hardware and is only compatible with one specific operating system. Many point cloud customers are not point cloud processing experts and are not willing to spend the high acquisition costs of this expensive software and hardware. In this paper we introduce a solution for low-cost point cloud processing. Our approach is designed to accommodate the needs of the average point cloud user. To reduce the cost and complexity of software, our approach focuses on one functionality at a time, in contrast with most available software and tools that aim to solve as many problems as possible at the same time. Our simple and user-oriented design improves the user experience and empowers us to optimize our methods for the creation of efficient software. In this paper we introduce the Pointo family as a series of connected software tools with a simple design for different point cloud processing requirements. PointoVIEWER and PointoCAD are introduced as the first components of the Pointo family to provide a
Application of Kalman Filter for Estimating a Process Disturbance in a Building Space
Directory of Open Access Journals (Sweden)
Deuk-Woo Kim
2017-10-01
This paper addresses an application of the Kalman filter for estimating a time-varying process disturbance in a building space. The process disturbance means a synthetic composite of heat gains and losses caused by internal heat sources (e.g., people, lights, equipment) and airflows. It is difficult to measure and quantify the internal heat sources and airflows due to their dynamic nature and time-lag impact on the indoor environment. To address this issue, a Kalman filter estimation method was used in this study. Kalman filtering is well suited to situations where state variables of interest cannot be measured. Based on virtual and real experiments conducted in this study, it was found that the Kalman filter can be used to estimate the time-varying process disturbance in a building space.
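The disturbance-estimation idea can be sketched by appending the unknown disturbance to the state as a random walk and running a standard Kalman filter. This is a one-zone toy model; all parameter values are assumptions, not the paper's building model.

```python
import numpy as np

# One-zone thermal model with the unknown disturbance d appended to the
# state as a random walk (all parameter values are illustrative).
a_room, b_in = 0.9, 0.1
F = np.array([[a_room, 1.0],    # T[k+1] = a*T[k] + b*u[k] + d[k]
              [0.0,    1.0]])   # d[k+1] = d[k] + noise
B = np.array([b_in, 0.0])
H = np.array([[1.0, 0.0]])      # only room temperature is measured
Q = np.diag([0.01, 0.1])
R = np.array([[0.04]])

def kf_step(x, P, u, z):
    """One predict/update cycle estimating temperature and disturbance."""
    x = F @ x + B * u
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

The random-walk model for d is what lets the filter track a time-varying composite of internal gains without ever measuring them directly.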
Low-pass parabolic FFT filter for airborne and satellite lidar signal processing.
Jiao, Zhongke; Liu, Bo; Liu, Enhai; Yue, Yongjian
2015-10-14
In order to reduce random errors of the lidar signal inversion, a low-pass parabolic fast Fourier transform filter (PFFTF) was introduced for noise elimination. A compact airborne Raman lidar system was studied, which applied PFFTF to process lidar signals. Mathematics and simulations of PFFTF along with low pass filters, sliding mean filter (SMF), median filter (MF), empirical mode decomposition (EMD) and wavelet transform (WT) were studied, and the practical engineering value of PFFTF for lidar signal processing has been verified. The method has been tested on real lidar signal from Wyoming Cloud Lidar (WCL). Results show that PFFTF has advantages over the other methods. It keeps the high frequency components well and reduces much of the random noise simultaneously for lidar signal processing.
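A low-pass FFT filter with a parabolic frequency response, one plausible reading of the PFFTF idea (the paper's exact filter design is not reproduced here), can be sketched as:

```python
import numpy as np

def fft_lowpass_parabolic(x, cutoff, fs):
    """Low-pass filter in the frequency domain with a parabolic taper:
    gain 1 - (f/fc)^2 below the cutoff and 0 above it."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    gain = np.clip(1.0 - (f / cutoff) ** 2, 0.0, None)
    return np.fft.irfft(X * gain, n=len(x))
```

Unlike a sliding mean or median, the frequency-domain taper rolls off smoothly, which is how such a filter can suppress high-frequency noise while largely preserving the low-frequency lidar return.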
International Nuclear Information System (INIS)
Singh, Vimal
2007-01-01
In [Singh V. Elimination of overflow oscillations in fixed-point state-space digital filters using saturation arithmetic. IEEE Trans Circ Syst 1990;37(6):814-8], a frequency-domain criterion for the suppression of limit cycles in fixed-point state-space digital filters using saturation overflow arithmetic was presented. The passivity property owing to the presence of multiple saturation nonlinearities was exploited therein. In the present paper, a new notion of passivity, namely, that involving the state variables is considered, thereby arriving at an entirely new frequency-domain criterion for the suppression of limit cycles in such filters
Investigation of Random Switching Driven by a Poisson Point Process
DEFF Research Database (Denmark)
Simonsen, Maria; Schiøler, Henrik; Leth, John-Josef
2015-01-01
This paper investigates the switching mechanism of a two-dimensional switched system, when the switching events are generated by a Poisson point process. A model, in the shape of a stochastic process, for such a system is derived and the distribution of the trajectory's position is developed... together with marginal density functions for the coordinate functions. Furthermore, the joint probability distribution is given explicitly...
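A toy version of such a Poisson-driven switched system can be simulated by alternating between two constant-velocity modes at exponentially distributed event times. The mode dynamics below are illustrative assumptions, not the paper's system.

```python
import numpy as np

rng = np.random.default_rng(3)

def switched_trajectory(rate, T, speeds=((1.0, 0.0), (0.0, 1.0))):
    """Planar system toggling between two constant-velocity modes at the
    events of a rate-`rate` Poisson point process; returns position at T."""
    t, pos, mode = 0.0, np.zeros(2), 0
    while True:
        gap = rng.exponential(1.0 / rate)   # exponential inter-event time
        dt = min(gap, T - t)
        pos = pos + dt * np.asarray(speeds[mode])
        t += dt
        if t >= T:
            return pos
        mode ^= 1                            # switch mode at the event
```

With these unit-speed modes the coordinates record the total time spent in each mode, so their sum always equals the horizon T, a simple invariant of the switching construction.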
On estimation of the intensity function of a point process
Lieshout, van M.N.M.
2010-01-01
Estimation of the intensity function of spatial point processes is a fundamental problem. In this paper, we interpret the Delaunay tessellation field estimator recently introduced by Schaap and Van de Weygaert as an adaptive kernel estimator and give explicit expressions for the mean and
A case study on point process modelling in disease mapping
Czech Academy of Sciences Publication Activity Database
Beneš, Viktor; Bodlák, M.; Moller, J.; Waagepetersen, R.
2005-01-01
Roč. 24, č. 3 (2005), s. 159-168 ISSN 1580-3139 R&D Projects: GA MŠk 0021620839; GA ČR GA201/03/0946 Institutional research plan: CEZ:AV0Z10750506 Keywords : log Gaussian Cox point process * Bayesian estimation Subject RIV: BB - Applied Statistics, Operational Research
A J–function for inhomogeneous point processes
M.N.M. van Lieshout (Marie-Colette)
2010-01-01
We propose new summary statistics for intensity-reweighted moment stationary point processes that generalise the well-known J-, empty space, and nearest-neighbour distance distribution functions, represent them in terms of generating functionals and conditional intensities, and relate
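For the classical homogeneous case that these statistics generalise, J(r) = (1 - G(r)) / (1 - F(r)) can be estimated directly from a pattern. The sketch below deliberately omits edge corrections and uses a simulated binomial pattern, for which J(r) should be close to 1:

```python
import numpy as np

def j_function(points, r, n_test=2000, seed=0):
    """Crude estimate of J(r) = (1 - G(r)) / (1 - F(r)) for a pattern in the
    unit square: G is the nearest-neighbour distance distribution, F the
    empty-space function. Edge corrections are omitted."""
    pts = np.asarray(points)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    G = (d.min(axis=1) <= r).mean()
    test = np.random.default_rng(seed).random((n_test, 2))
    F = (np.linalg.norm(test[:, None, :] - pts[None, :, :], axis=-1)
         .min(axis=1) <= r).mean()
    return (1.0 - G) / (1.0 - F)

# A binomial (Poisson-like) pattern should give J(r) near 1.
pattern = np.random.default_rng(1).random((200, 2))
j = j_function(pattern, r=0.03)
```

Values of J below 1 indicate clustering and values above 1 indicate inhibition; the paper's contribution is an intensity-reweighted version that remains meaningful for inhomogeneous processes.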
Factors Affecting the Levels of Heavy Metals in Juices Processed with Filter Aids.
Wang, Zhengfang; Jackson, Lauren S; Jablonski, Joseph E
2017-06-01
This study investigated factors that may contribute to the presence of arsenic and other heavy metals in apple and grape juices processed with filter aids. Different types and grades of filter aids were analyzed for arsenic, lead, and cadmium with inductively coupled plasma-tandem mass spectrometry. Potential factors affecting the transfer of heavy metals to juices during filtration treatments were evaluated. Effects of washing treatments on removal of heavy metals from filter aids were also determined. Results showed that diatomaceous earth (DE) generally contained a higher level of arsenic than perlite, whereas perlite had a higher lead content than DE. Cellulose contained the lowest level of arsenic among the surveyed filter aids. All samples of food-grade filter aids contained arsenic and lead levels that were below the U.S. Pharmacopeia and National Formulary limits of 10 ppm of total leachable arsenic and lead for food-grade DE filter aids. Two samples of arsenic-rich (>3 ppm) food-grade filter aids raised the level of arsenic in apple and grape juices during laboratory-scale filtration treatments, whereas three samples of low-arsenic (<3 ppm) filter aids did not affect arsenic levels in filtered juices. Filtration tests with simulated juices (pH 2.9 to 4.1, Brix [°Bx] 8.2 to 18.1, total suspended solids [TSS] 0.1 to 0.5%) showed that pH and sugar content had no effect on arsenic levels of filtered juices, whereas the arsenic content of filtered juice was elevated when higher amounts of filter aid were used for filtration. Authentic unfiltered apple juice (pH 3.6, °Bx 12.9, TSS 0.4%) and grape juice (pH 3.3, °Bx 16.2, TSS 0.05%) were used to verify the results obtained with simulated juices; in the authentic juices, however, the body feed ratio did not affect the arsenic content after filtration. Washing treatments were effective at reducing arsenic, but not cadmium or lead, concentrations in a DE filter aid. This study identified ways to reduce the amount of arsenic transferred to juices
Some properties of point processes in statistical optics
International Nuclear Information System (INIS)
Picinbono, B.; Bendjaballah, C.
2010-01-01
The analysis of the statistical properties of the point process (PP) of photon detection times can be used to determine whether or not an optical field is classical, in the sense that its statistical description does not require the methods of quantum optics. This determination is, however, more difficult than ordinarily admitted and the first aim of this paper is to illustrate this point by using some results of the PP theory. For example, it is well known that the analysis of the photodetection of classical fields exhibits the so-called bunching effect. But this property alone cannot be used to decide the nature of a given optical field. Indeed, we have presented examples of point processes for which a bunching effect appears and yet they cannot be obtained from a classical field. These examples are illustrated by computer simulations. Similarly, it is often admitted that for fields with very low light intensity the bunching or antibunching can be described by using the statistical properties of the distance between successive events of the point process, which simplifies the experimental procedure. We have shown that, while this property is valid for classical PPs, it has no reason to be true for nonclassical PPs, and we have presented some examples of this situation also illustrated by computer simulations.
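The point that bunching alone does not identify a field can be illustrated numerically: a Poisson process whose intensity is itself random (a Cox, or doubly stochastic Poisson, process) already shows super-Poissonian photocount statistics (Fano factor > 1). The rates below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def fano(x):
    """Fano factor: variance/mean of counts (equal to 1 for ideal Poisson)."""
    return x.var() / x.mean()

n = 100_000
# Constant intensity: ordinary Poisson counting statistics in unit windows.
poisson_counts = rng.poisson(50.0, n)
# Randomly fluctuating intensity with the same mean of 50 (Gamma-mixed
# Poisson): the counts are 'bunched', i.e. over-dispersed.
cox_counts = rng.poisson(rng.gamma(5.0, 10.0, n))
```

The Gamma-mixed counts have variance mean + Var(lambda), far above the Poisson level, even though the underlying field here is entirely classical, which is exactly why bunching by itself cannot decide the classical/nonclassical question.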
Shot-noise-weighted processes : a new family of spatial point processes
M.N.M. van Lieshout (Marie-Colette); I.S. Molchanov (Ilya)
1995-01-01
The paper suggests a new family of spatial point process distributions. They are defined by means of densities with respect to the Poisson point process within a bounded set. These densities are given in terms of a functional of the shot-noise process with a given influence
The Application of Paired Parallel Filters for Ultra-Wideband Signal Processing
Directory of Open Access Journals (Sweden)
S. L. Chernyshev
2015-01-01
The paper considers a unit in which parallel filters on regular transmission lines are attached in pairs. This connection reduces the side-line impedance at the point of connection; at the same time the lines become narrow, which reduces the possibility of exciting higher-order modes at the joint. The scattering matrix of a junction of four identical lines is considered first, and from it the scattering matrix of the junction in which two side lines are terminated with filters is derived. Particular cases of the reflection coefficients of different filters are examined. It is shown that only in the case of identical filters does a linear relationship remain between the reflection coefficient at the filter input and the transmission coefficient of the unit, which facilitates the synthesis problem. Restrictions on the transfer coefficient are found. Passing to the time domain, the impulse response of the connection under consideration and an expression for synthesis are derived. The paper gives an example of matched filtering implemented with this connection; in this case the output signal is the half-sum of the input signal and its autocorrelation function.
Forecasting optimal duration of a beer main fermentation process using the Kalman filter
Niyonsaba T.; Pavlov V.A.
2016-01-01
One of the most important stages of beer production is the main fermentation process, in which the wort is transformed into beer. The quality of the beer depends on the dynamics of the wort parameters. The main fermentation lasts about 10 days and is costly, so the main purpose of this article is to forecast the optimal duration of the beer main fermentation process and to provide its optimal control. The Kalman filter can provide optimal control of the main ferment...
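The abstract does not specify the state-space model, so the following is only a generic scalar Kalman filter sketch, with made-up fermentation-like numbers (a decaying extract density; the decay factor and noise variances are illustrative, not calibrated to a real brew):

```python
import numpy as np

def kalman_1d(z, x0, P0, F, Q, H, R):
    """Scalar Kalman filter run over the observation sequence z.
    The state here stands in for, e.g., wort extract density."""
    x, P, est = x0, P0, []
    for zk in z:
        x, P = F * x, F * P * F + Q            # predict
        K = P * H / (H * P * H + R)            # Kalman gain
        x = x + K * (zk - H * x)               # update with measurement
        P = (1.0 - K * H) * P
        est.append(x)
    return np.array(est)

rng = np.random.default_rng(0)
true = 12.0 * 0.97 ** np.arange(40)            # illustrative decaying extract
z = true + rng.normal(0.0, 0.3, 40)            # noisy measurements
est = kalman_1d(z, x0=12.0, P0=1.0, F=0.97, Q=1e-4, H=1.0, R=0.09)
```

Forecasting the end of fermentation then amounts to propagating the predict step forward until the filtered state crosses a target value.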
Two-step estimation for inhomogeneous spatial point processes
DEFF Research Database (Denmark)
Waagepetersen, Rasmus; Guan, Yongtao
This paper is concerned with parameter estimation for inhomogeneous spatial point processes with a regression model for the intensity function and tractable second order properties (K-function). Regression parameters are estimated using a Poisson likelihood score estimating function and in a second...... step minimum contrast estimation is applied for the residual clustering parameters. Asymptotic normality of parameter estimates is established under certain mixing conditions and we exemplify how the results may be applied in ecological studies of rain forests....
Directory of Open Access Journals (Sweden)
A. Calantropio
2018-05-01
Due to the increasing number of low-cost sensors widely available on the market, and because of the assumed correctness of the semi-automatic 3D reconstruction workflows implemented in recent commercial software, more and more users now operate without following the rigour of classical photogrammetric methods. This behaviour often naively leads to 3D products that lack metric quality assessment. This paper proposes and analyses an approach that gives users the possibility to preserve the trustworthiness of the metric information inherent in the 3D model without sacrificing the automation offered by modern photogrammetry software. First, the importance of data quality assessment is outlined, together with a recall of photogrammetry best practices. With the purpose of guiding the user through a correct pipeline for a certified 3D model reconstruction, an operative workflow is proposed, focusing on the first part of the object reconstruction steps (tie-point extraction, camera calibration, and relative orientation). A new GUI (graphical user interface) developed for the open source MicMac suite is then presented, and a sample dataset is used for the evaluation of the photogrammetric block orientation using statistically obtained quality descriptors. The results and future directions are then presented and discussed.
A case study on point process modelling in disease mapping
DEFF Research Database (Denmark)
Møller, Jesper; Waagepetersen, Rasmus Plenge; Benes, Viktor
2005-01-01
of the risk on the covariates. Instead of using the common areal level approaches we base the analysis on a Bayesian approach for a log Gaussian Cox point process with covariates. Posterior characteristics for a discretized version of the log Gaussian Cox process are computed using Markov chain Monte Carlo...... methods. A particular problem which is thoroughly discussed is to determine a model for the background population density. The risk map shows a clear dependency with the population intensity models and the basic model which is adopted for the population intensity determines what covariates influence...... the risk of TBE. Model validation is based on the posterior predictive distribution of various summary statistics....
Si(Li) x-ray spectrometer with signal processing system based on digital filtering
International Nuclear Information System (INIS)
Lakatos, Tamas
1985-01-01
A new signal processing system is under development at ATOMKI, Debrecen, Hungary, based on digital filtering by a microprocessor. The advantages of the new method are summarized. Dead time can be decreased and the speed of signal processing can be increased. Computer simulations verified the theoretical conclusions. (D.Gy.)
Energy Technology Data Exchange (ETDEWEB)
Hoelter, H.
1976-10-28
Particularly when cutting hard rock, the cutting area to be provided with suction is wetted with water from nozzles; extracting dust-laden air of high humidity then leads to encrustation of the filter cloth. In order to avoid this, it is proposed to heat the air, using waste heat from the motor driving the ventilator, so that the temperature in the filter does not drop below the dew point.
A Marked Point Process Framework for Extracellular Electrical Potentials
Directory of Open Access Journals (Sweden)
Carlos A. Loza
2017-12-01
Neuromodulations are an important component of extracellular electrical potentials (EEP), such as the electroencephalogram (EEG), electrocorticogram (ECoG) and local field potentials (LFP). This spatiotemporally organized, multi-frequency transient (phasic) activity reflects the multiscale spatiotemporal synchronization of neuronal populations in response to external stimuli or internal physiological processes. We propose a novel generative statistical model of a single EEP channel, in which the collected signal is regarded as the noisy addition of reoccurring, multi-frequency phasic events over time. One of the main advantages of the proposed framework is the exceptional temporal resolution in the time location of the EEP phasic events, i.e., up to the sampling period utilized in the data collection. This allows, for the first time, a description of neuromodulation in EEPs as a marked point process (MPP), with events represented by their amplitude, center frequency, duration, and time of occurrence. The generative model for the multi-frequency phasic events exploits sparseness and involves a shift-invariant implementation of the clustering technique known as k-means. The cost function incorporates a robust estimation component based on correntropy to mitigate the outliers caused by the inherent noise in the EEP. Lastly, the background EEP activity is explicitly modeled as the non-sparse component of the collected signal to further improve the delineation of the multi-frequency phasic events in time. The framework is validated using two publicly available datasets: the DREAMS sleep spindles database and one of the Brain-Computer Interface (BCI) competition datasets. The results achieve benchmark performance and provide novel quantitative descriptions based on power, event rates and timing in order to assess behavioral correlates beyond the classical power spectrum-based analysis. This opens the possibility for a unifying point process framework of
Framework for adaptive multiscale analysis of nonhomogeneous point processes.
Helgason, Hannes; Bartroff, Jay; Abry, Patrice
2011-01-01
We develop the methodology for hypothesis testing and model selection in nonhomogeneous Poisson processes, with an eye toward the application of modeling and variability detection in heart beat data. Modeling the process' non-constant rate function using templates of simple basis functions, we develop the generalized likelihood ratio statistic for a given template and a multiple testing scheme to model-select from a family of templates. A dynamic programming algorithm inspired by network flows is used to compute the maximum likelihood template in a multiscale manner. In a numerical example, the proposed procedure is nearly as powerful as the super-optimal procedures that know the true template size and true partition, respectively. Extensions to general history-dependent point processes are discussed.
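Setting the paper's template and likelihood-ratio machinery aside, the underlying object, a nonhomogeneous Poisson process with rate λ(t), can be sampled by Lewis-Shedler thinning; the oscillating, heart-beat-like rate below is purely illustrative:

```python
import numpy as np

def nhpp_thinning(rate_fn, lam_max, t_end, seed=None):
    """Sample a nonhomogeneous Poisson process on [0, t_end] by thinning:
    propose events at constant rate lam_max >= rate_fn(t) everywhere,
    accept each proposal with probability rate_fn(t) / lam_max."""
    rng = np.random.default_rng(seed)
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t >= t_end:
            return np.array(events)
        if rng.random() < rate_fn(t) / lam_max:
            events.append(t)

rate = lambda t: 5.0 + 4.0 * np.sin(2.0 * np.pi * t)   # rate_fn <= 9 everywhere
events = nhpp_thinning(rate, lam_max=9.0, t_end=200.0, seed=3)
```

The expected event count is the integral of the rate (here about 1000 over [0, 200]); hypothesis tests of the kind the paper develops would then compare candidate rate templates against such realizations.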
Simple computation of reaction–diffusion processes on point clouds
Macdonald, Colin B.; Merriman, Barry; Ruuth, Steven J.
2013-01-01
The study of reaction-diffusion processes is much more complicated on general curved surfaces than on standard Cartesian coordinate spaces. Here we show how to formulate and solve systems of reaction-diffusion equations on surfaces in an extremely simple way, using only the standard Cartesian form of differential operators, and a discrete unorganized point set to represent the surface. Our method decouples surface geometry from the underlying differential operators. As a consequence, it becomes possible to formulate and solve rather general reaction-diffusion equations on general surfaces without having to consider the complexities of differential geometry or sophisticated numerical analysis. To illustrate the generality of the method, computations for surface diffusion, pattern formation, excitable media, and bulk-surface coupling are provided for a variety of complex point cloud surfaces.
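The closest point method itself is not reproduced here, but the flavour of solving a diffusion equation directly on an unorganized point set can be sketched with a Gaussian-weighted graph Laplacian (a much cruder surrogate for the paper's approach; the kernel width, step size and circle sampling are illustrative):

```python
import numpy as np

def graph_heat(points, u, eps, dt, steps):
    """Explicit heat flow on an unorganized point cloud via a Gaussian-weighted
    graph Laplacian. With the row-normalisation below, dt <= 1 makes every
    update a convex combination of neighbouring values, hence stable."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / eps)
    np.fill_diagonal(W, 0.0)
    deg = W.sum(axis=1, keepdims=True)
    L = (W - np.diag(deg[:, 0])) / deg        # row-normalised graph Laplacian
    for _ in range(steps):
        u = u + dt * (L @ u)
    return u

theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
pts = np.c_[np.cos(theta), np.sin(theta)]     # point cloud sampled on a circle
u0 = np.zeros(100)
u0[0] = 1.0                                   # heat concentrated at one point
u = graph_heat(pts, u0, eps=0.01, dt=0.2, steps=200)
```

On this symmetric sampling the total heat is conserved while the peak spreads out; the closest point method achieves the same decoupling of geometry from the differential operator, but with the consistency of a true Cartesian discretisation.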
Research on signal processing of shock absorber test bench based on zero-phase filter
Wu, Yi; Ding, Guoqing
2017-10-01
The quality of the force-displacement (indicator) diagram is significant in evaluating the performance of shock absorbers. Damping-force sampling data are often corrupted by Gaussian white noise, 50 Hz power-line interference and its harmonics during testing, so data de-noising is the core problem in drawing a true, accurate and real-time indicator diagram. The noise and interference can be filtered out with a generic IIR or FIR low-pass filter, but an additional phase lag is then imposed on the useful signal, owing to the inherent attributes of IIR and FIR filters. This paper uses the forward-reverse (FRR) method to realize zero-phase digital filtering in software, based on the mutual cancellation of phase lag between the forward and reverse passes through the filter. High-frequency interference above 40 Hz is filtered out completely and noise attenuation exceeds 40 dB, with no additional phase lag, so the method restores the true signal as faithfully as possible. Theoretical simulation and practical tests indicate that high-frequency noise is effectively suppressed in multiple typical speed cases and the signal-to-noise ratio is greatly improved; the indicator-diagram curve has better smoothness and fidelity. The FRR algorithm has low computational complexity and fast running time, and can be easily ported to multiple platforms.
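The forward-reverse idea is the same one behind standard zero-phase (filtfilt-style) filtering: filter the data, reverse it, filter again, reverse back, so the two phase lags cancel. A minimal numpy sketch with a first-order IIR low-pass; the filter coefficient, sample rate and signals are illustrative, not those of the test bench:

```python
import numpy as np

def lowpass(x, alpha):
    """Causal first-order IIR low-pass: y[n] = alpha*x[n] + (1-alpha)*y[n-1]."""
    y = np.empty_like(x)
    acc = x[0]
    for n, v in enumerate(x):
        acc = alpha * v + (1.0 - alpha) * acc
        y[n] = acc
    return y

def zero_phase(x, alpha):
    """Forward-reverse filtering: the reverse pass cancels the phase lag
    introduced by the forward pass, giving zero net phase shift."""
    return lowpass(lowpass(x, alpha)[::-1], alpha)[::-1]

fs = 1000.0                                       # Hz, illustrative
t = np.arange(1000) / fs
clean = np.sin(2 * np.pi * 3 * t)                 # slow damping-force component
noisy = clean + 0.3 * np.sin(2 * np.pi * 50 * t)  # 50 Hz power-line interference
out = zero_phase(noisy, alpha=0.05)
```

Because the magnitude response is applied twice, the attenuation doubles in dB while the phase response is identically zero, which is exactly why the filtered indicator diagram does not shift along the displacement axis.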
Filters in 2D and 3D Cardiac SPECT Image Processing
Directory of Open Access Journals (Sweden)
Maria Lyra
2014-01-01
Nuclear cardiac imaging is a noninvasive, sensitive method providing information on cardiac structure and physiology. Single photon emission computed tomography (SPECT) evaluates myocardial perfusion, viability, and function and is widely used in clinical routine. The quality of the tomographic image is key to accurate diagnosis. Image filtering, a mathematical processing step, compensates for loss of detail in an image while reducing image noise; it can improve the image resolution and limit the degradation of the image. SPECT images are then reconstructed either by the filtered back projection (FBP) analytical technique or iteratively, by algebraic methods. The aim of this study is to review filters in cardiac 2D, 3D, and 4D SPECT applications and how these affect the image quality, mirroring the diagnostic accuracy of SPECT images. Several filters, including the Hanning, Butterworth, and Parzen filters, were evaluated in combination with the two reconstruction methods as well as with a specified MatLab program. Results showed that for both 3D and 4D cardiac SPECT the Butterworth filter, for different critical frequencies and orders, produced the best results. Between the two reconstruction methods, the iterative one may be more appropriate for cardiac SPECT, since it improves lesion detectability due to the significant improvement of image contrast.
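The Butterworth filter referred to above has the standard low-pass magnitude response, maximally flat in the passband; the critical frequency and orders below are illustrative, not the values evaluated in the study:

```python
import numpy as np

def butterworth_gain(f, fc, order):
    """Butterworth low-pass magnitude response |H(f)| = 1/sqrt(1 + (f/fc)**(2n)).
    At f = fc the gain is 1/sqrt(2) (-3 dB) for every order n; larger n gives
    a sharper roll-off beyond the critical frequency fc."""
    return 1.0 / np.sqrt(1.0 + (f / fc) ** (2 * order))

f = np.linspace(0.0, 0.5, 101)   # spatial frequency in cycles/pixel (to Nyquist)
g5 = butterworth_gain(f, fc=0.25, order=5)
```

In SPECT post-filtering, tuning `fc` trades noise suppression against spatial resolution, while the order controls how abruptly that trade-off is enforced.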
Statistical representation of a spray as a point process
International Nuclear Information System (INIS)
Subramaniam, S.
2000-01-01
The statistical representation of a spray as a finite point process is investigated. One objective is to develop a better understanding of how single-point statistical information contained in descriptions such as the droplet distribution function (ddf), relates to the probability density functions (pdfs) associated with the droplets themselves. Single-point statistical information contained in the droplet distribution function (ddf) is shown to be related to a sequence of single surrogate-droplet pdfs, which are in general different from the physical single-droplet pdfs. It is shown that the ddf contains less information than the fundamental single-point statistical representation of the spray, which is also described. The analysis shows which events associated with the ensemble of spray droplets can be characterized by the ddf, and which cannot. The implications of these findings for the ddf approach to spray modeling are discussed. The results of this study also have important consequences for the initialization and evolution of direct numerical simulations (DNS) of multiphase flows, which are usually initialized on the basis of single-point statistics such as the droplet number density in physical space. If multiphase DNS are initialized in this way, this implies that even the initial representation contains certain implicit assumptions concerning the complete ensemble of realizations, which are invalid for general multiphase flows. Also the evolution of a DNS initialized in this manner is shown to be valid only if an as yet unproven commutation hypothesis holds true. Therefore, it is questionable to what extent DNS that are initialized in this manner constitute a direct simulation of the physical droplets. Implications of these findings for large eddy simulations of multiphase flows are also discussed. (c) 2000 American Institute of Physics
Energy risk management through self-exciting marked point process
International Nuclear Information System (INIS)
Herrera, Rodrigo
2013-01-01
Crude oil is a dynamically traded commodity that affects many economies. We propose a collection of marked self-exciting point processes with dependent arrival rates for extreme events in oil markets and related risk measures. The models treat the time among extreme events in oil markets as a stochastic process. The main advantage of this approach is its capability to capture the short, medium and long-term behavior of extremes without involving an arbitrary stochastic volatility model or a prefiltration of the data, as is common in extreme value theory applications. We make use of the proposed model in order to obtain an improved estimate for the Value at Risk in oil markets. Empirical findings suggest that the reliability and stability of Value at Risk estimates improve as a result of finer modeling approach. This is supported by an empirical application in the representative West Texas Intermediate (WTI) and Brent crude oil markets. - Highlights: • We propose marked self-exciting point processes for extreme events in oil markets. • This approach captures the short and long-term behavior of extremes. • We improve the estimates for the VaR in the WTI and Brent crude oil markets
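The paper's full marked model with dependent arrival rates is not reproduced here, but a plain Hawkes (self-exciting) process with exponential kernel can be simulated by Ogata's thinning; all parameters below are illustrative:

```python
import numpy as np

def hawkes(mu, alpha, beta, t_end, seed=None):
    """Ogata thinning for a Hawkes process with conditional intensity
    lam(t) = mu + alpha * sum_i exp(-beta * (t - t_i)) over past events t_i.
    The process is stationary when the branching ratio alpha/beta < 1."""
    rng = np.random.default_rng(seed)
    events = []

    def lam(s):
        past = np.asarray(events)
        return mu + alpha * np.exp(-beta * (s - past[past < s])).sum()

    t = 0.0
    while True:
        lam_bar = lam(t) + alpha          # upper bound valid until next event
        t += rng.exponential(1.0 / lam_bar)
        if t >= t_end:
            return np.array(events)
        if rng.random() < lam(t) / lam_bar:
            events.append(t)

events = hawkes(mu=0.5, alpha=0.8, beta=1.2, t_end=400.0, seed=7)
```

Each accepted event raises the intensity by `alpha`, which then decays at rate `beta`; the resulting clustered interevent times are what lets such models capture runs of extreme price moves without a separate stochastic volatility layer.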
Weak convergence of marked point processes generated by crossings of multivariate jump processes
DEFF Research Database (Denmark)
Tamborrino, Massimiliano; Sacerdote, Laura; Jacobsen, Martin
2014-01-01
We consider the multivariate point process determined by the crossing times of the components of a multivariate jump process through a multivariate boundary, assuming to reset each component to an initial value after its boundary crossing. We prove that this point process converges weakly...... process converging to a multivariate Ornstein–Uhlenbeck process is discussed as a guideline for applying diffusion limits for jump processes. We apply our theoretical findings to neural network modeling. The proposed model gives a mathematical foundation to the generalization of the class of Leaky...
Variational approach for spatial point process intensity estimation
DEFF Research Database (Denmark)
Coeurjolly, Jean-Francois; Møller, Jesper
is assumed to be of log-linear form β+θ⊤z(u) where z is a spatial covariate function and the focus is on estimating θ. The variational estimator is very simple to implement and quicker than alternative estimation procedures. We establish its strong consistency and asymptotic normality. We also discuss its...... finite-sample properties in comparison with the maximum first order composite likelihood estimator when considering various inhomogeneous spatial point process models and dimensions as well as settings where z is completely or only partially known....
Two-step estimation for inhomogeneous spatial point processes
DEFF Research Database (Denmark)
Waagepetersen, Rasmus; Guan, Yongtao
2009-01-01
The paper is concerned with parameter estimation for inhomogeneous spatial point processes with a regression model for the intensity function and tractable second-order properties (K-function). Regression parameters are estimated by using a Poisson likelihood score estimating function and in the ...... and in the second step minimum contrast estimation is applied for the residual clustering parameters. Asymptotic normality of parameter estimates is established under certain mixing conditions and we exemplify how the results may be applied in ecological studies of rainforests....
Multiple Monte Carlo Testing with Applications in Spatial Point Processes
DEFF Research Database (Denmark)
Mrkvička, Tomáš; Myllymäki, Mari; Hahn, Ute
with a function as the test statistic, 3) several Monte Carlo tests with functions as test statistics. The rank test has correct (global) type I error in each case and it is accompanied with a p-value and with a graphical interpretation which shows which subtest or which distances of the used test function......(s) lead to the rejection at the prescribed significance level of the test. Examples of null hypothesis from point process and random set statistics are used to demonstrate the strength of the rank envelope test. The examples include goodness-of-fit test with several test functions, goodness-of-fit test...
International Nuclear Information System (INIS)
Samman, F.A.; Pongyupinpanich Surapong; Spies, C.; Glesner, M.
2012-01-01
A hardware implementation of an adaptive phase and magnitude detector and filter of a beam-phase control system in a heavy ion synchrotron application is presented in this paper. The main components of the hardware are adaptive LMS (Least-Mean-Square) filters and phase and magnitude detectors. The phase detectors are implemented by using a CORDIC (Coordinate Rotation Digital Computer) algorithm based on 32-bit binary floating-point arithmetic data formats. The floating-point-based hardware is designed to improve the precision of past hardware implementations that were based on fixed-point arithmetic. The hardware of the detector and the adaptive LMS filter have been implemented on a programmable logic device (FPGA) for hardware acceleration purposes. The ideal Matlab/Simulink model of the hardware and the VHDL model of the adaptive LMS filter and the phase and magnitude detector are compared. The comparison result shows that the output signal of the floating-point based adaptive FIR filter as well as the phase and magnitude detector agree with the expected output signal of the ideal Matlab/Simulink model. (authors)
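The CORDIC algorithm mentioned above recovers phase and magnitude through shift-add style micro-rotations (vectoring mode). A floating-point software sketch, not the paper's VHDL design, with an illustrative iteration count:

```python
import math

# CORDIC gain for 32 micro-rotations; the result must be divided by it.
_K = math.prod(math.sqrt(1.0 + 2.0 ** (-2 * i)) for i in range(32))

def cordic_polar(x, y, n=32):
    """Vectoring-mode CORDIC: drive y to zero with micro-rotations by
    +/- atan(2**-i), accumulating the total rotation angle.
    Returns (magnitude, phase), like (math.hypot, math.atan2).
    Note: _K above matches the default n = 32."""
    if x == 0.0 and y == 0.0:
        return 0.0, 0.0
    angle = 0.0
    if x < 0.0:  # pre-rotate by +/-90 degrees into the right half-plane
        x, y, angle = (y, -x, math.pi / 2) if y > 0 else (-y, x, -math.pi / 2)
    for i in range(n):
        d = 1.0 if y > 0 else -1.0
        x, y = x + d * y * 2.0 ** -i, y - d * x * 2.0 ** -i
        angle += d * math.atan(2.0 ** -i)
    return x / _K, angle
```

In hardware the multiplications by `2**-i` become barrel shifts and the `atan` values a small lookup table, which is what makes CORDIC attractive on an FPGA.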
Scargle, Jeffrey D.
1990-01-01
While chaos arises only in nonlinear systems, standard linear time series models are nevertheless useful for analyzing data from chaotic processes. This paper introduces such a model, the chaotic moving average. This time-domain model is based on the theorem that any chaotic process can be represented as the convolution of a linear filter with an uncorrelated process called the chaotic innovation. A technique, minimum phase-volume deconvolution, is introduced to estimate the filter and innovation. The algorithm measures the quality of a model using the volume covered by the phase-portrait of the innovation process. Experiments on synthetic data demonstrate that the algorithm accurately recovers the parameters of simple chaotic processes. Though tailored for chaos, the algorithm can detect both chaos and randomness, distinguish them from each other, and separate them if both are present. It can also recover nonminimum-delay pulse shapes in non-Gaussian processes, both random and chaotic.
CLINSULF sub-dew-point process for sulphur recovery
Energy Technology Data Exchange (ETDEWEB)
Heisel, M.; Marold, F.
1988-01-01
In a 2-reactor system, the CLINSULF process allows very high sulphur recovery rates. When operated at 100 °C at the outlet, i.e. below the sulphur solidification point, a sulphur recovery rate of more than 99.2% was achieved in a 2-reactor series. Assuming a 70% sulphur recovery in an upstream Claus furnace plus sulphur condenser, an overall sulphur recovery of more than 99.8% results for the 2-reactor system. This is approximately 2% higher than in conventional Claus plus SDP units, which mostly consist of 4 reactors or more. This means that the CLINSULF SSP process promises to be an improvement in respect of both efficiency and investment cost.
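The overall-recovery figure quoted above follows from simple series arithmetic: the CLINSULF stage only sees the sulphur the upstream stage missed.

```python
# Series recovery: downstream stage treats only what the upstream
# Claus furnace + condenser stage (70%) did not recover.
upstream = 0.70
clinsulf = 0.992                  # 2-reactor recovery quoted above
overall = upstream + (1.0 - upstream) * clinsulf
print(f"{overall:.2%}")           # about 99.8%, matching the quoted figure
```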
Self-Exciting Point Process Modeling of Conversation Event Sequences
Masuda, Naoki; Takaguchi, Taro; Sato, Nobuo; Yano, Kazuo
Self-exciting processes of Hawkes type have been used to model various phenomena including earthquakes, neural activities, and views of online videos. Studies of temporal networks have revealed that sequences of social interevent times for individuals are highly bursty. We examine some basic properties of event sequences generated by the Hawkes self-exciting process to show that it generates bursty interevent times for a wide parameter range. Then, we fit the model to the data of conversation sequences recorded in company offices in Japan. In this way, we can estimate relative magnitudes of the self excitement, its temporal decay, and the base event rate independent of the self excitation. These variables highly depend on individuals. We also point out that the Hawkes model has an important limitation: the correlation in the interevent times and the burstiness cannot be independently modulated.
Imitation learning of Non-Linear Point-to-Point Robot Motions using Dirichlet Processes
DEFF Research Database (Denmark)
Krüger, Volker; Tikhanoff, Vadim; Natale, Lorenzo
2012-01-01
In this paper we discuss the use of the infinite Gaussian mixture model and Dirichlet processes for learning robot movements from demonstrations. Starting point of this work is an earlier paper where the authors learn a non-linear dynamic robot movement model from a small number of observations....... The model in that work is learned using a classical finite Gaussian mixture model (FGMM) where the Gaussian mixtures are appropriately constrained. The problem with this approach is that one needs to make a good guess for how many mixtures the FGMM should use. In this work, we generalize this approach...... our algorithm on the same data that was used in [5], where the authors use motion capture devices to record the demonstrations. As further validation we test our approach on novel data acquired on our iCub in a different demonstration scenario in which the robot is physically driven by the human...
Jagodzinski, Jeremy James
2007-12-01
The development to date of a diode-laser based velocimeter providing point velocity measurements in unseeded flows using molecular Rayleigh scattering is discussed. The velocimeter is based on modulated filtered Rayleigh scattering (MFRS), a novel variation of filtered Rayleigh scattering (FRS), utilizing modulated absorption spectroscopy techniques to detect a strong absorption of a relatively weak Rayleigh scattered signal. A rubidium (Rb) vapor filter is used to provide the relatively strong absorption; alkali metal vapors have a high optical depth at modest vapor pressures, and their narrow linewidth is ideally suited for high-resolution velocimetry. Semiconductor diode lasers are used to generate the relatively weak Rayleigh scattered signal; due to their compact, rugged construction diode lasers are ideally suited for the environmental extremes encountered in many experiments. The MFRS technique utilizes the frequency-tuning capability of diode lasers to implement a homodyne detection scheme using lock-in amplifiers. The optical frequency of the diode-based laser system used to interrogate the flow is rapidly modulated about a reference frequency in the D2-line of Rb. The frequency modulation is imposed on the Rayleigh scattered light that is collected from the probe volume in the flow under investigation. The collected frequency modulating Rayleigh scattered light is transmitted through a Rb vapor filter before being detected. The detected modulated absorption signal is fed to two lock-in amplifiers synchronized with the modulation frequency of the source laser. High levels of background rejection are attained since the lock-ins are both frequency and phase selective. The two lock-in amplifiers extract different Fourier components of the detected modulated absorption signal, which are ratioed to provide an intensity normalized frequency dependent signal from a single detector. A Doppler frequency shift in the collected Rayleigh scattered light due to a change
Low-cost domestic water filter: The case for a process-based ...
African Journals Online (AJOL)
Low-cost domestic water filter: The case for a process-based approach for the development of a rural technology product. ... Since the project aims at technology transfer to the rural poor for generating rural livelihoods, appropriate financial models and the general sustainability issues for such an activity are briefly discussed ...
Benchmarking of radiological departments. Starting point for successful process optimization
International Nuclear Information System (INIS)
Busch, Hans-Peter
2010-01-01
Continuous optimization of the process of organization and medical treatment is part of the successful management of radiological departments. The focus of this optimization can be cost units such as CT and MRI or the radiological parts of total patient treatment. Key performance indicators for process optimization are cost-effectiveness, service quality and quality of medical treatment. The potential for improvements can be seen by comparison (benchmark) with other hospitals and radiological departments. Clear definitions of key data and criteria are absolutely necessary for comparability. There is currently little information in the literature regarding the methodology and application of benchmarks especially from the perspective of radiological departments and case-based lump sums, even though benchmarking has frequently been applied to radiological departments by hospital management. The aim of this article is to describe and discuss systematic benchmarking as an effective starting point for successful process optimization. This includes the description of the methodology, recommendation of key parameters and discussion of the potential for cost-effectiveness analysis. The main focus of this article is cost-effectiveness (efficiency and effectiveness) with respect to cost units and treatment processes. (orig.)
Baron, Julianne L; Peters, Tammy; Shafer, Raymond; MacMurray, Brian; Stout, Janet E
2014-11-01
Opportunistic waterborne pathogens (e.g., Legionella, Pseudomonas) may persist in water distribution systems despite municipal chlorination and secondary disinfection and can cause health care-acquired infections. Point-of-use (POU) filtration can limit exposure to pathogens; however, short maximum lifetimes and membrane clogging have limited the use of POU filters. A new faucet filter rated at 62 days was evaluated at a cancer center in Northwestern Pennsylvania. Five sinks were equipped with filters, and 5 sinks served as controls. Hot water was collected weekly for 17 weeks and cultured for Legionella, Pseudomonas, and total bacteria. Legionella was removed from all filtered samples for 12 weeks. One colony was recovered from 1 site at 13 weeks; however, subsequent tests were negative through 17 weeks of testing. Total bacteria were excluded for the first 2 weeks, followed by an average 1.86-log reduction in total bacteria compared with controls. No Pseudomonas was recovered from filtered or control faucets. This next-generation faucet filter eliminated Legionella beyond the manufacturer's recommended 62-day maximum duration of use. These new POU filters will require fewer change-outs than standard filters and could be a cost-effective method for preventing exposure to Legionella and other opportunistic waterborne pathogens in hospitals with high-risk patients. Copyright © 2014 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
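The "1.86-log reduction" reported above is the base-10 logarithm of the ratio of control to filtered bacterial counts. A minimal sketch of that calculation (the CFU counts below are hypothetical values chosen only to reproduce a ~1.86-log reduction, not data from the study):

```python
import math

def log_reduction(control_cfu, filtered_cfu):
    """Log10 reduction of filtered counts relative to control counts."""
    return math.log10(control_cfu / filtered_cfu)

# Hypothetical counts: a ~1.86-log reduction corresponds to roughly a
# 72-fold decrease in recoverable bacteria.
control = 7200.0    # CFU/mL at an unfiltered control faucet (illustrative)
filtered = 100.0    # CFU/mL at a filtered faucet (illustrative)
reduction = log_reduction(control, filtered)
```

A 1-log reduction is a 10-fold decrease, 2-log is 100-fold, and so on, which is why log reduction is the standard way to report filter performance.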
Bachmann-Machnik, Anna; Meyer, Daniel; Waldhoff, Axel; Fuchs, Stephan; Dittmer, Ulrich
2018-04-01
Retention Soil Filters (RSFs), a form of vertical flow constructed wetlands specifically designed for combined sewer overflow (CSO) treatment, have proven to be an effective tool to mitigate negative impacts of CSOs on receiving water bodies. Long-term hydrologic simulations are used to predict the emissions from urban drainage systems during planning of stormwater management measures. So far no universally accepted model for RSF simulation exists. When simulating hydraulics and water quality in RSFs, an appropriate level of detail must be chosen for reasonable balancing between model complexity and model handling, considering the model input's level of uncertainty. The most crucial parameters determining the resultant uncertainties of the integrated sewer system and filter bed model were identified by evaluating a virtual drainage system with a Retention Soil Filter for CSO treatment. To determine reasonable parameter ranges for RSF simulations, data of 207 events from six full-scale RSF plants in Germany were analyzed. Data evaluation shows that even though different plants with varying loading and operation modes were examined, a simple model is sufficient to assess relevant suspended solids (SS), chemical oxygen demand (COD) and NH4 emissions from RSFs. Two conceptual RSF models with different degrees of complexity were assessed. These models were developed based on evaluation of data from full scale RSF plants and column experiments. Incorporated model processes are ammonium adsorption in the filter layer and degradation during subsequent dry weather period, filtration of SS and particulate COD (XCOD) to a constant background concentration and removal of solute COD (SCOD) by a constant removal rate during filter passage as well as sedimentation of SS and XCOD in the filter overflow. XCOD, SS and ammonium loads as well as ammonium concentration peaks are discharged primarily via RSF overflow not passing through the filter bed. Uncertainties of the integrated
Application Of Kalman Filter In Navigation Process Of Automated Guided Vehicles
Directory of Open Access Journals (Sweden)
Śmieszek Mirosław
2015-09-01
This paper presents an example of applying Kalman filtering to the navigation of automated guided vehicles. The basis for determining the position of an automated guided vehicle is odometry, i.e., dead-reckoning navigation. This method of position determination is affected by many errors. To eliminate these errors, modern vehicles use additional systems that increase the accuracy of position determination. The latest navigation systems apply probabilistic methods during route and position adjustment, and the most frequently used of these are Kalman filters.
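The correction loop described above can be sketched as a one-dimensional Kalman filter that fuses a constant-velocity odometry prediction with noisy external position fixes. The state model, noise levels, and measurement source below are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def kalman_step(x, P, z, dt=0.1, q=0.01, r=0.5):
    """One predict/update cycle; state x = [position, velocity]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
    H = np.array([[1.0, 0.0]])              # we measure position only
    Q = q * np.eye(2)                       # process noise (odometry drift)
    R = np.array([[r]])                     # measurement noise
    # predict from odometry model
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the external position fix z
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(0)
x, P = np.array([0.0, 1.0]), np.eye(2)
true_pos = 0.0
for _ in range(100):
    true_pos += 1.0 * 0.1                   # vehicle moves at 1 m/s
    z = np.array([true_pos + rng.normal(0, 0.5)])  # noisy position fix
    x, P = kalman_step(x, P, z)
```

The filter's estimate converges toward the true position while the covariance `P` shrinks, which is exactly the error-suppression role the abstract attributes to Kalman filtering in AGV navigation.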
Using Gaussian Process Annealing Particle Filter for 3D Human Tracking
Directory of Open Access Journals (Sweden)
Michael Rudzsky
2008-01-01
We present an approach for tracking human body parts in 3D with pre-learned motion models using multiple cameras. A Gaussian process annealing particle filter is proposed for tracking in order to reduce the dimensionality of the problem and to increase the tracker's stability and robustness. Compared with a tracker based on a regular annealed particle filter, we show that our algorithm tracks better on low-frame-rate videos. We also show that our algorithm is capable of recovering after a temporary target loss.
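The annealing variant builds on the basic bootstrap particle filter loop of predict, weight, and resample. A minimal 1D sketch of that underlying loop (the motion model, noise levels, and layered annealing schedule of the paper are omitted; all values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 1000
particles = rng.normal(0.0, 1.0, N)          # initial particle cloud
true_x = 0.0
for step in range(50):
    true_x += 0.1                            # latent state drifts
    z = true_x + rng.normal(0, 0.2)          # noisy observation
    # predict: propagate particles through the motion model
    particles += 0.1 + rng.normal(0, 0.1, N)
    # weight: observation likelihood under Gaussian noise
    w = np.exp(-0.5 * ((z - particles) / 0.2) ** 2) + 1e-12
    w /= w.sum()
    # resample: draw N particles proportionally to their weights
    particles = particles[rng.choice(N, size=N, p=w)]
estimate = particles.mean()
```

An annealed particle filter repeats the weighting/resampling step several times per frame with progressively sharper likelihoods, which is what makes it viable in the high-dimensional body-pose space the abstract addresses.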
Akhbari, Mahsa; Shamsollahi, Mohammad B; Jutten, Christian; Armoundas, Antonis A; Sayadi, Omid
2016-02-01
In this paper we propose an efficient method for denoising ECG signals and extracting their fiducial points (FPs). The method is based on a nonlinear dynamic model which uses Gaussian functions to model ECG waveforms. For estimating the model parameters, we use an extended Kalman filter (EKF). In this framework, called EKF25, all the parameters of the Gaussian functions as well as the ECG waveforms (P-wave, QRS complex and T-wave) in the ECG dynamical model are considered as state variables. In this paper, the dynamic time warping method is used to estimate the nonlinear ECG phase observation. We compare this new approach with linear phase observation models. The use of linear and nonlinear EKF25 for ECG denoising, and of nonlinear EKF25 for fiducial point extraction and ECG interval analysis, are the main contributions of this paper. Performance comparison with other EKF-based techniques shows that the proposed method results in higher output SNR, with an average SNR improvement of 12 dB for an input SNR of -8 dB. To evaluate the FP extraction performance, we compare the proposed method with a method based on a partially collapsed Gibbs sampler and an established EKF-based method. The mean absolute error and the root mean square error of all FPs, across all databases, are 14 ms and 22 ms, respectively, for our proposed method, with an advantage when using a nonlinear phase observation. These errors are significantly smaller than the errors obtained with other methods. For ECG interval analysis, with an absolute mean error and a root mean square error of about 22 ms and 29 ms, the proposed method achieves better accuracy and smaller variability than other methods.
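The Gaussian-function waveform model underlying such EKF approaches represents one cardiac cycle as a sum of Gaussian kernels, one per wave, parameterized by amplitude, width, and phase center. A sketch of that synthesis (the amplitudes, widths, and phase centers below are illustrative choices, not the paper's fitted parameters):

```python
import numpy as np

# One beat over cardiac phase theta in [-pi, pi]; each wave (P, Q, R, S, T)
# contributes one Gaussian a * exp(-(theta - mu)^2 / (2 b^2)).
theta = np.linspace(-np.pi, np.pi, 500)
params = {                                  # (amplitude a, width b, center mu)
    "P": (0.12, 0.25, -np.pi / 3),
    "Q": (-0.05, 0.10, -np.pi / 12),
    "R": (1.00, 0.10, 0.0),
    "S": (-0.08, 0.10, np.pi / 12),
    "T": (0.30, 0.40, np.pi / 2),
}

def ecg_beat(theta, params):
    z = np.zeros_like(theta)
    for a, b, mu in params.values():
        z += a * np.exp(-(theta - mu) ** 2 / (2 * b ** 2))
    return z

beat = ecg_beat(theta, params)
```

In the EKF25 framework these kernel parameters become state variables, so denoising and fiducial-point localization fall out of the same state estimate.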
International Nuclear Information System (INIS)
Sutoto
2000-01-01
To increase the safety and quality of the incinerator bag house filter and to ease its maintenance, the surface of the filter media was coated with CaCO3 powder. During incineration, the CaCO3 powder scrubs fly ash, which becomes a secondary waste, and finally both the secondary waste and the CaCO3 are immobilized in a cement matrix. The objective of this research was to study and characterize cemented products containing CaCO3 secondary waste. The research was carried out on block samples with varying CaCO3 content, and the properties were characterized by compressive strength and density. The results show that increasing the CaCO3 content decreases the quality of the cemented product. The optimum formula for solidifying the bag house filter scrubbings is a CaCO3 : cement : water ratio of 3 : 10 : 7. (author)
Comprehensive Utilization of Filter Residue from the Preparation Process of Zeolite-Based Catalysts
Directory of Open Access Journals (Sweden)
Shu-Qin Zheng
2016-05-01
A novel method for utilizing the filter residue from the preparation process of zeolite-based catalysts was investigated. Y zeolite and a fluid catalytic cracking (FCC) catalyst were synthesized from the filter residue. Compared to Y zeolite synthesized by the conventional method, the Y zeolite synthesized from filter residue exhibited better thermal stability. The catalyst possessed a wide pore distribution. In addition, its pore volume, specific surface area, and attrition resistance were superior to those of the reference catalyst. The yields of gasoline and light oil increased by 1.93% and 1.48%, respectively, while the coke yield decreased by 0.41%. The catalyst exhibited better gasoline and coke selectivity, and the quality of the cracked gasoline was improved.
Dehydrating process experiment on spent ion-exchange resin sludge by Funda Filter
International Nuclear Information System (INIS)
Hasegawa, Tatsuo; Ishino, Kazuyuki
1977-01-01
In nuclear power plants, Funda Filters are employed to dehydrate spent powdery ion-exchange resin sludge. The Funda Filter is very effective for eliminating small rust components contained in spent powdery resin slurry; however, in the drying process, the complete drying of spent powdery resin is very difficult because the filter cake of resin on the horizontal filter leaf is likely to crack and let out steam and hot air through the cracks. This paper deals with the results of experiments conducted to clarify the detailed phenomena of dehydration so the above problem could be solved. The above experiments were made on the precoating and drying of granular ion-exchange resin slurry that had not yet been put to practical use. The experiments were composed of one fundamental and one operational stage. In the fundamental experiment, the dehydration properties and dehydration mechanism of resins were made clear, and the most effective operational method was established through the operational experiments conducted using large-scale Funda Filter test equipment under various conditions. (auth.)
Research of process of filtration of salt water by bulk filters with the use of vibration
Directory of Open Access Journals (Sweden)
A. I. Krikun
2018-01-01
For the purification of process water from impurities at fish processing plants, a large number of filtering devices are currently used, differing in their design (mesh, woven, disc, etc.). In practice, however, these devices are mainly used as the first stage of water treatment, since they cannot provide sufficient filtrate quality. As numerous studies in our country and worldwide show, the most effective devices are bulk granular filters. Their main advantages over devices of similar purpose are: a simple and reliable design; resistance to aggressive operating conditions; the ability to effectively purify seawater from mechanical impurities at relatively low pressure; high economy; and a filtering load capable of working for a long time without regeneration (the approximate service life of a granular loading is 3 to 5 years). This article investigates the influence of vibration on the filtration of sea water in a designed and fabricated filter unit with bulk granular materials of natural and artificial origin, the design of which is protected by two utility model patents. The results of the study reveal the degree to which the vibration intensity of the perforated partitioning wall influences the state of the bulk granular materials located on it (segregation by size, stratified vibro-packing, compacting or loosening of a layer of granular material). The dependences of the capacity of the filtration unit on the amplitude, frequency and vibration intensity factor were established experimentally, which made it possible to determine rational vibration parameters of the perforated septum under which the filtering layer becomes denser, the porosity of the loading decreases, and the precipitate does not break through into the filtrate.
A CASE STUDY ON POINT PROCESS MODELLING IN DISEASE MAPPING
Directory of Open Access Journals (Sweden)
Viktor Beneš
2011-05-01
We consider a data set of locations where people in Central Bohemia have been infected by tick-borne encephalitis (TBE), and where population census data and covariates concerning vegetation and altitude are available. The aims are to estimate the risk map of the disease and to study the dependence of the risk on the covariates. Instead of using the common area-level approaches, we base the analysis on a Bayesian approach for a log Gaussian Cox point process with covariates. Posterior characteristics for a discretized version of the log Gaussian Cox process are computed using Markov chain Monte Carlo methods. A particular problem which is thoroughly discussed is determining a model for the background population density. The risk map shows a clear dependence on the population intensity model, and the basic model adopted for the population intensity determines which covariates influence the risk of TBE. Model validation is based on the posterior predictive distribution of various summary statistics.
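A Cox process is an inhomogeneous Poisson process whose intensity surface is itself random (in the log Gaussian case, the exponential of a Gaussian field). The standard building block is simulation by thinning; the sketch below uses a fixed toy intensity rather than a sampled Gaussian field, and all numbers are illustrative, not from the case study:

```python
import numpy as np

rng = np.random.default_rng(42)

def intensity(x, y):
    # Toy intensity surface peaking at the center of the unit square.
    # In a log Gaussian Cox process this would be exp(Gaussian field).
    return 200.0 * np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.1)

lam_max = 200.0                       # upper bound on the intensity
# 1) candidate points from a homogeneous Poisson process at rate lam_max
n = rng.poisson(lam_max)
xs, ys = rng.random(n), rng.random(n)
# 2) thinning: keep each candidate with probability intensity / lam_max
keep = rng.random(n) < intensity(xs, ys) / lam_max
points = np.column_stack([xs[keep], ys[keep]])
```

The retained points cluster where the intensity is high, which is the mechanism by which the fitted intensity surface in the paper translates into an infection-risk map.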
Mean-field inference of Hawkes point processes
International Nuclear Information System (INIS)
Bacry, Emmanuel; Gaïffas, Stéphane; Mastromatteo, Iacopo; Muzy, Jean-François
2016-01-01
We propose a fast and efficient estimation method that is able to accurately recover the parameters of a d-dimensional Hawkes point-process from a set of observations. We exploit a mean-field approximation that is valid when the fluctuations of the stochastic intensity are small. We show that this is notably the case in situations when interactions are sufficiently weak, when the dimension of the system is high or when the fluctuations are self-averaging due to the large number of past events they involve. In such a regime the estimation of a Hawkes process can be mapped on a least-squares problem for which we provide an analytic solution. Though this estimator is biased, we show that its precision can be comparable to the one of the maximum likelihood estimator while its computation speed is shown to be improved considerably. We give a theoretical control on the accuracy of our new approach and illustrate its efficiency using synthetic datasets, in order to assess the statistical estimation error of the parameters. (paper)
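A Hawkes process is self-exciting: each event transiently raises the intensity of future events. A one-dimensional simulation via Ogata's thinning algorithm with an exponential kernel is sketched below; the parameters are illustrative, with a deliberately small excitation weight since the paper's mean-field approximation applies in the weak-interaction regime:

```python
import numpy as np

def simulate_hawkes(mu=1.0, alpha=0.3, beta=2.0, T=500.0, seed=1):
    """Ogata thinning for lambda(t) = mu + sum_i alpha*beta*exp(-beta(t-t_i))."""
    rng = np.random.default_rng(seed)
    events = []
    t, excite = 0.0, 0.0               # excite = kernel sum at current time
    while True:
        lam_bar = mu + excite          # intensity can only decay between events
        w = rng.exponential(1.0 / lam_bar)
        t += w
        if t > T:
            break
        excite *= np.exp(-beta * w)    # decay excitation over the gap
        if rng.random() < (mu + excite) / lam_bar:   # accept w.p. lambda/lam_bar
            events.append(t)
            excite += alpha * beta     # jump of the exponential kernel at 0
    return np.array(events)

ev = simulate_hawkes()
```

With branching ratio `alpha = 0.3`, the stationary event rate is `mu / (1 - alpha)`, about 1.43 events per unit time here, so roughly 700 events are expected over the horizon.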
Corner-point criterion for assessing nonlinear image processing imagers
Landeau, Stéphane; Pigois, Laurent; Foing, Jean-Paul; Deshors, Gilles; Swiathy, Greggory
2017-10-01
Range performance modeling of optronics imagers attempts to characterize the ability to resolve details in the image. Today, digital image processing is systematically used in conjunction with the optoelectronic system to correct its defects or to exploit tiny detection signals to increase performance. In order to characterize this processing, which has adaptive and non-linear properties, it becomes necessary to stimulate the imagers with test patterns whose properties are similar to those of actual scene images, in terms of dynamic range, contours, texture and singular points. This paper presents an approach based on a Corner-Point (CP) resolution criterion, derived from the Probability of Correct Resolution (PCR) of binary fractal patterns. The fundamental principle lies in correctly perceiving the direction of the one minority-value pixel among the majority-value pixels of a 2×2 pixel block. The evaluation procedure considers the multi-resolution CP transformation of the actual image, taking the role of Ground Truth (GT). After a spatial registration between the degraded image and the original one, the degradation is statistically measured by comparing the GT with the CP transformation of the degraded image, in terms of localized PCR at the region of interest. The paper defines this CP criterion and presents the developed evaluation techniques, such as the measurement of the number of CPs resolved on the target, and the CP transformation and its inverse transform, which make it possible to reconstruct an image of the perceived CPs. Then, this criterion is compared with the standard Johnson criterion in the case of linear blur and noise degradation. The evaluation of an imaging system integrating an image display and visual perception is considered, by proposing an analysis scheme combining two methods: a CP measurement for the highly non-linear part (imaging) with a real-signature test target, and conventional methods for the more linear part (displaying). The application to
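A toy reading of the corner-point principle: in a binary 2×2 block with exactly one minority pixel, the CP "direction" is that pixel's position. This sketch is an illustrative interpretation of the criterion's core test, not the paper's full multi-resolution transform:

```python
import numpy as np

def corner_point(block):
    """Return (row, col) of the single minority-value pixel in a binary
    2x2 block, or None when the block has no unique minority pixel."""
    block = np.asarray(block)
    ones = int(block.sum())
    if ones == 1:                      # a lone 1 among three 0s
        r, c = np.argwhere(block == 1)[0]
        return (int(r), int(c))
    if ones == 3:                      # a lone 0 among three 1s
        r, c = np.argwhere(block == 0)[0]
        return (int(r), int(c))
    return None                        # 0, 2 or 4 ones: no corner point

d = corner_point([[1, 0], [0, 0]])
```

Whether a degraded image preserves each block's minority-pixel direction is then the binary "resolved / not resolved" event that the localized PCR statistic aggregates.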
Zaldívar Huerta, Ignacio E.; Pérez Montaña, Diego F.; Nava, Pablo Hernández; Juárez, Alejandro García; Asomoza, Jorge Rodríguez; Leal Cruz, Ana L.
2013-12-01
We experimentally demonstrate the use of an electro-optical transmission system for the distribution of video over long-haul point-to-point optical links using a microwave photonic filter in the frequency range 0.01-10 GHz. The frequency response of the microwave photonic filter consists of four band-pass windows centered at frequencies that can be tailored as a function of the free spectral range of the optical source, the chromatic dispersion parameter of the optical fiber used, and the length of the optical link. In particular, the filtering effect is obtained by the interaction of an externally modulated multimode laser diode emitting at 1.5 μm with the length of a dispersive optical fiber. The filtered microwave signals are used as electrical carriers to transmit a TV signal over long-haul point-to-point optical links. Transmission of the TV signal coded on the microwave band-pass windows located at 4.62, 6.86, 4.0 and 6.0 GHz is achieved over optical links of 25.25 km and 28.25 km, respectively. Practical applications of this approach lie in the field of FTTH access networks for the distribution of services such as video, voice, and data.
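For dispersion-based microwave photonic filters of this kind, the band-pass windows are commonly spaced by a free spectral range FSR ≈ 1 / (D · L · Δλ), set by the fiber dispersion D, link length L, and the laser mode spacing Δλ. The values below are illustrative assumptions chosen to land near the reported window frequencies, not parameters from the paper:

```python
# Free spectral range of a dispersive-fiber microwave photonic filter.
# Units: D in ps/(nm*km), L in km, dlam in nm -> D*L*dlam is a period in ps,
# so the FSR in GHz is 1000 / (D * L * dlam).
D = 17.0      # chromatic dispersion of standard fiber at 1.5 um (typical)
L = 25.0      # fiber length, km (illustrative)
dlam = 0.5    # multimode-laser mode spacing, nm (illustrative)

fsr_ghz = 1e3 / (D * L * dlam)
windows = [k * fsr_ghz for k in range(1, 5)]   # first four band-pass windows
```

With these assumed values the windows fall at multiples of about 4.7 GHz, the same order as the 4.0-6.9 GHz windows used for the TV-signal carriers.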
Processing-Efficient Distributed Adaptive RLS Filtering for Computationally Constrained Platforms
Directory of Open Access Journals (Sweden)
Noor M. Khan
2017-01-01
In this paper, a novel processing-efficient architecture of a group of inexpensive and computationally limited small platforms is proposed for parallely distributed adaptive signal processing (PDASP) operation. The proposed architecture cooperatively runs computationally expensive procedures such as the complex adaptive recursive least squares (RLS) algorithm. The proposed PDASP architecture operates properly even if perfect time alignment among the participating platforms is not available. An RLS algorithm for the application of MIMO channel estimation is deployed on the proposed architecture. The complexity and processing time of the PDASP scheme with the MIMO RLS algorithm are compared with those of the sequentially operated MIMO RLS algorithm and the linear Kalman filter. It is observed that the PDASP scheme exhibits much lower computational complexity than the sequential MIMO RLS algorithm as well as the Kalman filter. Moreover, for a low Doppler rate, the proposed architecture reduces processing time by 95.83% and 82.29% compared with the sequentially operated Kalman filter and MIMO RLS algorithm, respectively. Likewise, for a high Doppler rate, the proposed architecture reduces processing time by 94.12% and 77.28% compared with the Kalman and RLS algorithms, respectively.
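The per-sample RLS update that the architecture distributes is compact but computationally heavy (matrix-vector products per sample). A minimal single-platform sketch for identifying a 2-tap system (the forgetting factor, system coefficients, and noise level are illustrative, not the paper's MIMO setup):

```python
import numpy as np

def rls_update(w, P, x, d, lam=0.99):
    """One recursive least squares step; lam is the forgetting factor."""
    x = x.reshape(-1, 1)
    k = (P @ x) / (lam + x.T @ P @ x)      # gain vector
    e = d - (w.T @ x).item()               # a-priori estimation error
    w = w + k * e
    P = (P - k @ x.T @ P) / lam            # inverse-correlation matrix update
    return w, P, e

rng = np.random.default_rng(3)
true_w = np.array([[0.7], [-0.2]])         # unknown system (illustrative)
w, P = np.zeros((2, 1)), 100.0 * np.eye(2)
for _ in range(500):
    x = rng.normal(size=2)
    d = (true_w.T @ x.reshape(-1, 1)).item() + rng.normal(0, 0.01)
    w, P, e = rls_update(w, P, x, d)
```

Each update costs O(M^2) for filter length M, which is why distributing the work across several weak platforms, as the paper proposes, pays off for large MIMO channel estimators.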
Effects of noise, nonlinear processing, and linear filtering on perceived music quality.
Arehart, Kathryn H; Kates, James M; Anderson, Melinda C
2011-03-01
The purpose of this study was to determine the relative impact of different forms of hearing aid signal processing on quality ratings of music. Music quality was assessed using a rating scale for three types of music: orchestral classical music, jazz instrumental, and a female vocalist. The music stimuli were subjected to a wide range of simulated hearing aid processing conditions, including (1) noise and nonlinear processing, (2) linear filtering, and (3) combinations of noise, nonlinear, and linear filtering. Quality ratings were measured in a group of 19 listeners with normal hearing and a group of 15 listeners with sensorineural hearing impairment. Quality ratings in both groups were generally comparable, were reliable across test sessions, were impacted more by noise and nonlinear signal processing than by linear filtering, and were significantly affected by the genre of music. The average quality ratings for music were reasonably well predicted by the hearing aid speech quality index (HASQI), but additional work is needed to optimize the index to the wide range of music genres and processing conditions included in this study.
Multiplicative point process as a model of trading activity
Gontis, V.; Kaulakys, B.
2004-11-01
Signals consisting of a sequence of pulses show that an inherent origin of 1/f noise is a Brownian fluctuation of the average interevent time between subsequent pulses of the pulse sequence. In this paper, we generalize the model of interevent time to reproduce a variety of self-affine time series exhibiting power spectral density S(f) scaling as a power of the frequency f. Furthermore, we analyze the relation between the power-law correlations and the origin of the power-law probability distribution of the signal intensity. We introduce a stochastic multiplicative model for the time intervals between point events and analyze the statistical properties of the signal analytically and numerically. Such a model system exhibits power-law spectral density S(f) ~ 1/f^β for various values of β, including β = 1/2, 1 and 3/2. Explicit expressions for the power spectra in the low-frequency limit and for the distribution density of the interevent time are obtained. The counting statistics of the events is analyzed analytically and numerically as well. The specific interest of our analysis is related to the financial markets, where long-range correlations of price fluctuations largely depend on the number of transactions. We analyze the spectral density and counting statistics of the number of transactions. The model reproduces the spectral properties of real markets and explains the mechanism behind the power-law distribution of trading activity. The study provides evidence that the statistical properties of the financial markets are encoded in the statistics of the time interval between trades. A multiplicative point process serves as a consistent model generating these statistics.
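The core construction is a point process whose interevent times evolve multiplicatively. A schematic iteration in the spirit of the model, with a small drift, multiplicative noise, and bounds confining the interevent time; the coefficients below are illustrative choices, not the paper's calibrated parameters:

```python
import numpy as np

rng = np.random.default_rng(7)
gamma, sigma = 0.0004, 0.02        # drift and multiplicative noise strength
tau_min, tau_max = 1e-3, 1.0       # bounds confining the interevent time

tau = np.empty(20000)
tau[0] = 0.1
for k in range(1, len(tau)):
    t = tau[k - 1]
    # multiplicative step: both drift and noise scale with tau itself
    t = t + gamma * t + sigma * t * rng.normal()
    tau[k] = min(max(t, tau_min), tau_max)   # keep tau inside its bounds

event_times = np.cumsum(tau)       # the point process: cumulative event times
```

Because the fluctuations are multiplicative, the interevent times wander over orders of magnitude within the bounds, which is the mechanism the paper links to power-law spectra and heavy-tailed trading activity.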
Fate of dissolved organic nitrogen in two stage trickling filter process.
Simsek, Halis; Kasi, Murthy; Wadhawan, Tanush; Bye, Christopher; Blonigen, Mark; Khan, Eakalak
2012-10-15
Dissolved organic nitrogen (DON) represents a significant portion of the nitrogen in the final effluent of wastewater treatment plants (WWTPs). The biodegradable portion of DON (BDON) can support algal growth and/or consume dissolved oxygen in receiving waters. The fate of DON and BDON has not been studied for trickling filter WWTPs. DON and BDON data were collected along the treatment train of a WWTP with a two-stage trickling filter process. DON concentrations in the influent and effluent were 27% and 14% of total dissolved nitrogen (TDN), respectively. The plant removed about 62% and 72% of the influent DON and BDON, mainly in the trickling filters. The final effluent BDON values averaged 1.8 mg/L. BDON was found to be between 51% and 69% of the DON in raw wastewater and after various treatment units. The fate of DON and BDON through the two-stage trickling filter treatment plant was modeled. The BioWin v3.1 model was successfully applied to simulate ammonia, nitrite, nitrate, TDN, DON and BDON concentrations along the treatment train. The maximum growth rates for ammonia oxidizing bacteria (AOB) and nitrite oxidizing bacteria, and the AOB half-saturation constant, influenced the ammonia and nitrate output results. Hydrolysis and ammonification rates influenced all of the nitrogen species in the model output, including BDON. Copyright © 2012 Elsevier Ltd. All rights reserved.
Seeking a fingerprint: analysis of point processes in actigraphy recording
Gudowska-Nowak, Ewa; Ochab, Jeremi K.; Oleś, Katarzyna; Beldzik, Ewa; Chialvo, Dante R.; Domagalik, Aleksandra; Fąfrowicz, Magdalena; Marek, Tadeusz; Nowak, Maciej A.; Ogińska, Halszka; Szwed, Jerzy; Tyburczyk, Jacek
2016-05-01
Motor activity of humans displays complex temporal fluctuations which can be characterised by scale-invariant statistics, thus demonstrating that structure and fluctuations of such kinetics remain similar over a broad range of time scales. Previous studies on humans regularly deprived of sleep or suffering from sleep disorders predicted a change in the invariant scale parameters with respect to those for healthy subjects. In this study we investigate the signal patterns from actigraphy recordings by means of characteristic measures of fractional point processes. We analyse spontaneous locomotor activity of healthy individuals recorded during a week of regular sleep and a week of chronic partial sleep deprivation. Behavioural symptoms of lack of sleep can be evaluated by analysing statistics of duration times during active and resting states, and alteration of behavioural organisation can be assessed by analysis of power laws detected in the event count distribution, distribution of waiting times between consecutive movements and detrended fluctuation analysis of recorded time series. We claim that among different measures characterising complexity of the actigraphy recordings and their variations implied by chronic sleep distress, the exponents characterising slopes of survival functions in resting states are the most effective biomarkers distinguishing between healthy and sleep-deprived groups.
Audit filters for improving processes of care and clinical outcomes in trauma systems.
Evans, Christopher; Howes, Daniel; Pickett, William; Dagnone, Luigi
2009-10-07
Traumatic injuries represent a considerable public health burden with significant personal and societal costs. The care of the severely injured patient in a trauma system progresses along a continuum that includes numerous interventions being provided by a multidisciplinary group of healthcare personnel. Despite the recent emphasis on quality of care in medicine, there has been little research to direct trauma clinicians and administrators on how optimally to monitor and improve upon the quality of care delivered within a trauma system. Audit filters are one mechanism for improving quality of care and are defined as specific clinical processes or outcomes of care that, when they occur, represent unfavorable deviations from an established norm and which prompt review and feedback. Although audit filters are widely utilized for performance improvement in trauma systems they have not been subjected to systematic review of their effectiveness. To determine the effectiveness of using audit filters for improving processes of care and clinical outcomes in trauma systems. Our search strategy included an electronic search of the Cochrane Injuries Group Specialized Register, the Cochrane EPOC Group Specialized Register, CENTRAL (The Cochrane Library 2008, Issue 4), MEDLINE, PubMed, EMBASE, CINAHL, and ISI Web of Science: (SCI-EXPANDED and CPCI-S). We handsearched the Journal of Trauma, Injury, Annals of Emergency Medicine, Academic Emergency Medicine, and Injury Prevention. We searched two clinical trial registries: 1) The World Health Organization International Clinical Trials Registry Platform and, 2) ClinicalTrials.gov. We also contacted content experts for further articles. The most recent electronic search was completed in December 2008 and the handsearch was completed up to February 2009. We searched for randomized controlled trials, controlled clinical trials, controlled before-and-after studies, and interrupted time series studies that used audit filters as an
Magnetic filter apparatus and method for generating cold plasma in semiconductor processing
Vella, Michael C.
1996-01-01
Disclosed herein is a system and method for providing a plasma flood having a low electron temperature to a semiconductor target region during an ion implantation process. The plasma generator providing the plasma is coupled to a magnetic filter which allows ions and low energy electrons to pass therethrough while retaining captive the primary or high energy electrons. The ions and low energy electrons form a "cold plasma" which is diffused in the region of the process surface while the ion implantation process takes place.
Magnetic filter apparatus and method for generating cold plasma in semiconductor processing
Vella, M.C.
1996-08-13
Disclosed herein is a system and method for providing a plasma flood having a low electron temperature to a semiconductor target region during an ion implantation process. The plasma generator providing the plasma is coupled to a magnetic filter which allows ions and low energy electrons to pass therethrough while retaining captive the primary or high energy electrons. The ions and low energy electrons form a "cold plasma" which is diffused in the region of the process surface while the ion implantation process takes place. 15 figs.
Shared filtering processes link attentional and visual short-term memory capacity limits.
Bettencourt, Katherine C; Michalka, Samantha W; Somers, David C
2011-09-30
Both visual attention and visual short-term memory (VSTM) have been shown to have capacity limits of 4 ± 1 objects, driving the hypothesis that they share a visual processing buffer. However, these capacity limitations also show strong individual differences, making the degree to which these capacities are related unclear. Moreover, other research has suggested a distinction between attention and VSTM buffers. To explore the degree to which capacity limitations reflect the use of a shared visual processing buffer, we compared individual subject's capacities on attentional and VSTM tasks completed in the same testing session. We used a multiple object tracking (MOT) and a VSTM change detection task, with varying levels of distractors, to measure capacity. Significant correlations in capacity were not observed between the MOT and VSTM tasks when distractor filtering demands differed between the tasks. Instead, significant correlations were seen when the tasks shared spatial filtering demands. Moreover, these filtering demands impacted capacity similarly in both attention and VSTM tasks. These observations fail to support the view that visual attention and VSTM capacity limits result from a shared buffer but instead highlight the role of the resource demands of underlying processes in limiting capacity.
Schulz, Maria; Gerber, Alexander; Groneberg, David A.
2016-01-01
Background: Environmental tobacco smoke (ETS) is associated with human morbidity and mortality, particularly chronic obstructive pulmonary disease (COPD) and lung cancer. Although direct DNA-damage is a leading pathomechanism in active smokers, passive smoking is enough to induce bronchial asthma, especially in children. Particulate matter (PM) demonstrably plays an important role in this ETS-associated human morbidity, constituting a surrogate parameter for ETS exposure. Methods: Using an Automatic Environmental Tobacco Smoke Emitter (AETSE) and an in-house developed, non-standard smoking regime, we tried to imitate the smoking process of human smokers to demonstrate the significance of passive smoking. Mean concentration (Cmean) and area under the curve (AUC) of particulate matter (PM2.5) emitted by 3R4F reference cigarettes and the popular filter-tipped and non-filter brand cigarettes "Roth-Händle" were measured and compared. The cigarettes were not conditioned prior to smoking. The measurements were tested for Gaussian distribution and significant differences. Results: Cmean PM2.5 of the 3R4F reference cigarette: 3911 µg/m3; of the filter-tipped Roth-Händle: 3831 µg/m3; and of the non-filter Roth-Händle: 2053 µg/m3. AUC PM2.5 of the 3R4F reference cigarette: 1,647,006 µg/m3·s; of the filter-tipped Roth-Händle: 1,608,000 µg/m3·s; and of the non-filter Roth-Händle: 858,891 µg/m3·s. Conclusion: The filter-tipped cigarettes (the 3R4F reference cigarette and the filter-tipped Roth-Händle) emitted significantly more PM2.5 than the non-filter Roth-Händle. Considering the harmful potential of PM, our findings indicate that filter-tipped cigarettes are not a less harmful alternative for passive smokers. Tobacco taxation should be reconsidered and non-smoking legislation enforced. PMID:27092519
Parallel processing architecture for H.264 deblocking filter on multi-core platforms
Prasad, Durga P.; Sonachalam, Sekar; Kunchamwar, Mangesh K.; Gunupudi, Nageswara Rao
2012-03-01
filter for multi core platforms such as HyperX technology. Parallel techniques such as parallel processing of independent macroblocks, sub blocks, and pixel row level are examined in this work. The deblocking architecture consists of a basic cell called the deblocking filter unit (DFU) and a dependent data buffer manager (DFM). The DFU can be used in several instances, catering to different performance needs. The DFM serves the data required by the different number of DFUs, and also manages all the neighboring data required for future data processing of the DFUs. This approach achieves the scalability, flexibility, and performance excellence required in deblocking filters.
Energy Technology Data Exchange (ETDEWEB)
Richir, Patrice; Dzbikowicz, Zdzislaw [Institute for Transuranium Elements (ITU), Joint Research Centre (JRC), European Commission, Ispra, Varese (Italy)
2012-06-15
Reprocessing plants require continuous and integrated safeguards activities by inspectors of the IAEA and Euratom because of their proliferation-sensitivity as complex facilities handling large quantities of direct use nuclear material. In support of both organizations, the JRC has developed a solution monitoring software package (DAI, Data Analysis and Interpretation) which has been implemented in the main commercial European reprocessing plants and which allows enhanced monitoring of nuclear materials in the processed solutions. This tool treats data acquired from different sensor types (e.g. from pressure transducers monitoring the solution levels in tanks). Collected signals are often noisy because of the instrumentation itself and/or because of ambient and operational conditions (e.g. pumps, ventilation systems or electromagnetic interferences) and therefore require filtering. Filtering means reduction of information and has to be applied correctly to avoid misinterpretation of the process steps. This paper describes the study of several filters, one of which, the centered moving median, has proven to be a powerful tool for solution monitoring.
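As a rough illustration of the centered moving median described above, the sketch below applies a symmetric median window to a noisy level trace (a minimal numpy version with invented data; the filter actually implemented in the DAI package is not shown here):

```python
import numpy as np

def centered_moving_median(x, window=5):
    """Centered moving median: each output sample is the median of a
    window placed symmetrically around the input sample.
    Edges are handled by shrinking the window (no padding artifacts)."""
    half = window // 2
    x = np.asarray(x, dtype=float)
    return np.array([np.median(x[max(0, i - half):i + half + 1])
                     for i in range(len(x))])

# A noisy "tank level" trace with an outlier spike, as might come from a
# pressure transducer; the median suppresses the spike without smearing steps.
level = np.array([10.0, 10.1, 9.9, 50.0, 10.0, 10.2, 9.8])
print(centered_moving_median(level, window=3))
```

Unlike a moving mean, the median discards the outlier entirely rather than averaging it into neighbouring samples, which is why it suits spiky process signals.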
Sequencing of Dust Filter Production Process Using Design Structure Matrix (DSM)
Sari, R. M.; Matondang, A. R.; Syahputri, K.; Anizar; Siregar, I.; Rizkya, I.; Ursula, C.
2018-01-01
A metal casting company produces machinery spare parts for manufacturers. One of the products is the dust filter, which is used in most palm oil mills. Because the product is so widely used, the company often has problems producing it; one of these problems is a disordered production process caused by poor job sequencing: important jobs that should be completed first are implemented last, while less important jobs that could be completed later are implemented first. Design Structure Matrix (DSM) is used to analyse and determine priorities in the production process. DSM analysis sorts the production process through dependency sequencing. The resulting dependency sequence shows the process order according to the inter-process linkage, considering preceding and succeeding activities. Finally, it identifies the coupled activities: metal smelting, refining, grinding, cutting container castings, removal of metal from the molds, metal casting, coating processes, and manufacture of sand molds.
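Dependency sequencing over a DSM can be illustrated with a topological sort, which orders tasks so each comes after the tasks it depends on. The sketch below is a minimal version; the task names and dependency structure are hypothetical simplifications, not the paper's actual DSM:

```python
from collections import defaultdict, deque

def dsm_sequence(tasks, depends_on):
    """Order tasks so that every task comes after the tasks it depends on
    (Kahn's topological sort over the DSM dependency relation)."""
    indeg = {t: 0 for t in tasks}
    users = defaultdict(list)
    for task, deps in depends_on.items():
        for d in deps:
            indeg[task] += 1
            users[d].append(task)
    queue = deque(t for t in tasks if indeg[t] == 0)
    order = []
    while queue:
        t = queue.popleft()
        order.append(t)
        for u in users[t]:
            indeg[u] -= 1
            if indeg[u] == 0:
                queue.append(u)
    if len(order) != len(tasks):
        # Coupled (mutually dependent) activities form a cycle; a DSM
        # would handle these by tearing or by treating them as one block.
        raise ValueError("coupled activities remain; needs tearing")
    return order

# Hypothetical subset of the dust-filter process steps and dependencies
tasks = ["make sand mold", "smelt metal", "cast metal", "remove from mold",
         "grind", "cut casting", "coat"]
deps = {"cast metal": ["smelt metal", "make sand mold"],
        "remove from mold": ["cast metal"],
        "grind": ["remove from mold"],
        "cut casting": ["remove from mold"],
        "coat": ["grind", "cut casting"]}
print(dsm_sequence(tasks, deps))
```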
Harmonic reduction by using single-tuned passive filter in plastic processing industry
Fahmi, M. I.; Baafai, U.; Hazmi, A.; Nasution, T. H.
2018-02-01
The use of non-linear loads by industrial machines can produce harmonics that exceed the limits of the IEEE 519-1992 standard. This study discusses the use of single-tuned passive filters to reduce harmonics in the plastics processing industry. System modeling in a MATLAB/Simulink simulation showed that a total harmonic distortion (THD) of 15.55% could be reduced to 4.77%, in accordance with the IEEE 519-1992 standard. The simulation results also show that the single-tuned passive filter reduces the targeted current harmonic by 82.23% and reduces other harmonic orders by 7% to 8%.
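A single-tuned passive filter is a series L-C branch resonant at one harmonic frequency. The sketch below computes the tuning inductance and a THD figure; the 50 Hz fundamental, component values and current magnitudes are illustrative assumptions, not values from the study:

```python
import math

F1 = 50.0  # fundamental frequency in Hz (assumed; adjust for 60 Hz grids)

def tuned_inductance(capacitance_f, harmonic):
    """Inductance (H) that tunes a series L-C branch to the given
    harmonic of the fundamental: f_r = 1 / (2*pi*sqrt(L*C))."""
    f_r = harmonic * F1
    return 1.0 / ((2.0 * math.pi * f_r) ** 2 * capacitance_f)

def thd(harmonic_currents, fundamental_current):
    """Total harmonic distortion of a current spectrum, as a fraction."""
    return math.sqrt(sum(i * i for i in harmonic_currents)) / fundamental_current

# Example: tune a 100 uF branch to the 5th harmonic of a 50 Hz system.
L = tuned_inductance(100e-6, 5)
print(f"L = {L * 1e3:.3f} mH")

# THD before/after filtering (illustrative current magnitudes in A):
before = thd([12.0, 6.0, 3.0], 100.0)  # 5th, 7th, 11th harmonics
after = thd([2.1, 5.5, 2.9], 100.0)    # 5th largely removed by the tuned branch
print(f"THD: {before:.2%} -> {after:.2%}")
```

At the tuned frequency the branch presents a low impedance, shunting that harmonic current away from the supply; other orders are only partially attenuated, as the study's 7-8% figures reflect.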
Extended Kalman filter (EKF) application in vitamin C two-step fermentation process.
Wei, D; Yuan, W; Yuan, Z; Yin, G; Chen, M
1993-01-01
Based on a kinetic model study of vitamin C two-step fermentation, extended Kalman filter (EKF) theory is applied to study the process, which is disturbed to some extent by white noise arising from the model, the fermentation system, and operational fluctuations. The EKF shows that results calculated from estimated process parameters agree with the experimental results considerably better than model predictions that do not use estimated parameters. Parameter analysis gives a better understanding of the kinetics and provides a basis for state estimation and state prediction.
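A scalar EKF predict/update cycle can be sketched as follows. The logistic growth model, noise levels and parameters here are illustrative stand-ins, not the paper's vitamin C fermentation kinetics:

```python
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """One predict/update cycle of a scalar extended Kalman filter."""
    # Predict through the nonlinear process model, linearized via its Jacobian
    x_pred = f(x)
    P_pred = F_jac(x) * P * F_jac(x) + Q
    # Update with the measurement
    S = H_jac(x_pred) * P_pred * H_jac(x_pred) + R
    K = P_pred * H_jac(x_pred) / S
    x_new = x_pred + K * (z - h(x_pred))
    P_new = (1.0 - K * H_jac(x_pred)) * P_pred
    return x_new, P_new

# Logistic biomass growth x_{k+1} = x + r*x*(1 - x/xmax), measured directly
r, xmax = 0.2, 10.0
f = lambda x: x + r * x * (1.0 - x / xmax)
F_jac = lambda x: 1.0 + r * (1.0 - 2.0 * x / xmax)
h = lambda x: x
H_jac = lambda x: 1.0

rng = np.random.default_rng(0)
x_true, x_est, P = 0.5, 0.3, 1.0
for _ in range(30):
    x_true = f(x_true)
    z = x_true + rng.normal(0.0, 0.2)  # noisy measurement
    x_est, P = ekf_step(x_est, P, z, f, F_jac, h, H_jac, Q=1e-4, R=0.04)
print(f"true {x_true:.2f}, EKF estimate {x_est:.2f}")
```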
Osada, Hirofumi; Osada, Shota
2018-01-01
We prove tail triviality of determinantal point processes μ on continuous spaces. Tail triviality has been proved for such processes only on discrete spaces, and hence we have generalized the result to continuous spaces. To do this, we construct tree representations, that is, discrete approximations of determinantal point processes enjoying a determinantal structure. There are many interesting examples of determinantal point processes on continuous spaces, such as the zero points of the hyperbolic Gaussian analytic function with Bergman kernel, and the thermodynamic limit of eigenvalues of Gaussian random matrices for the Sine_2, Airy_2, Bessel_2, and Ginibre point processes. Our main theorem proves that all these point processes are tail trivial.
Ferrer-Mileo, V; Guede-Fernandez, F; Fernandez-Chimeno, M; Ramos-Castro, J; Garcia-Gonzalez, M A
2015-08-01
This work compares several fiducial points for detecting the arrival of a new pulse in a photoplethysmographic signal acquired using either the built-in camera of a smartphone or a photoplethysmograph. An optimization of the signal preprocessing stage has also been carried out. Finally, we characterize the error produced when the best cutoff frequencies and fiducial point are used for smartphones and for the photoplethysmograph, and assess whether the error of smartphones can reasonably be explained by variations in pulse transit time. The results reveal that the peak of the first derivative and the minimum of the second derivative of the pulse wave have the lowest error. Moreover, for these points, high-pass filtering the signal between 0.1 and 0.8 Hz and low-pass filtering around 2.7 Hz or 3.5 Hz are the best cutoff frequencies. Finally, the error in smartphones is slightly higher than in a photoplethysmograph.
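Derivative-based fiducial point detection can be sketched as follows (numpy only; the pulse waveform, 1.2 Hz pulse rate and 100 Hz sampling rate are invented for illustration, and the band-pass preprocessing step is only indicated in a comment):

```python
import numpy as np

fs = 100.0                                 # sampling rate in Hz (assumed)
t = np.arange(0.0, 2.0, 1.0 / fs)
theta = 2 * np.pi * 1.2 * t - np.pi / 2    # 1.2 Hz pulse rate (assumed)
# Synthetic photoplethysmographic-like pulse wave with one harmonic
ppg = np.sin(theta) + 0.3 * np.sin(2 * theta)

# In practice the signal would first be band-pass filtered (the study
# found ~0.1-0.8 Hz high-pass and ~2.7-3.5 Hz low-pass cutoffs best).
d1 = np.gradient(ppg, 1.0 / fs)   # first derivative
d2 = np.gradient(d1, 1.0 / fs)    # second derivative

period = int(fs / 1.2)                   # samples per pulse (~83)
i_d1max = int(np.argmax(d1[:period]))    # peak of the first derivative
i_d2min = int(np.argmin(d2[:period]))    # minimum of the second derivative
print(f"fiducial times: {i_d1max / fs:.2f} s, {i_d2min / fs:.2f} s")
```

Both fiducial points sit on the steep rising edge of the pulse, which is why they are less sensitive to baseline wander and amplitude noise than the pulse peak itself.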
International Nuclear Information System (INIS)
Vieira, Fabio P.B.; Bevilacqua, Joyce S.
2014-01-01
The use of electron paramagnetic resonance (EPR) spectrometers in radiation dosimetry has been known for more than four decades, and EPR is an important tool in the retrospective determination of absorbed doses. To estimate the dose absorbed by a sample, it is necessary to know the peak-to-peak amplitude of the signature of the substance in its EPR spectrum. This information can be compromised by spurious components of the signal: noise, which is random and of low intensity; and the baseline, which arises from the coupling between the resonator tube and the sample analyzed. Due to the intrinsic characteristics of the three main components of the signal - signature, noise, and baseline - analysis in the frequency domain allows the spurious information to be filtered out by post-processing techniques. In this work, an algorithm that retrieves the signature of a substance has been implemented. The Discrete Fourier Transform is applied to the signal and, without user intervention, the noise is filtered; the signature is then recovered from the filtered signal by the Inverse Discrete Fourier Transform. The peak-to-peak amplitude, and hence the absorbed dose, is calculated with an error of less than 1% for signals in which the baseline is linearized. More general cases are under investigation; with little user intervention, the same error can be obtained.
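The frequency-domain filtering step can be sketched with numpy's FFT routines. The lineshape, baseline, noise level and bin cutoff below are all assumptions for illustration, not the algorithm's actual choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1024
x = np.linspace(-5.0, 5.0, n)
signature = -x * np.exp(-x**2)   # derivative-like EPR lineshape (assumed)
baseline = 0.02 * x              # slowly varying linear baseline
noisy = signature + baseline + rng.normal(0.0, 0.05, n)

# Linearize and remove the baseline, fitted where the signature vanishes
edges = np.abs(x) > 3.0
detrended = noisy - np.polyval(np.polyfit(x[edges], noisy[edges], 1), x)

# Filter in the frequency domain: the smooth signature lives in the low
# bins, while the noise is spread over the whole spectrum.
spectrum = np.fft.rfft(detrended)
cutoff = 30                      # bin cutoff (assumed, data-dependent)
spectrum[cutoff:] = 0.0
recovered = np.fft.irfft(spectrum, n)

# Peak-to-peak amplitude, the quantity the absorbed dose is derived from
pp_true = signature.max() - signature.min()
pp_rec = recovered.max() - recovered.min()
print(f"relative peak-to-peak error: {abs(pp_rec - pp_true) / pp_true:.1%}")
```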
Three-State Locally Adaptive Texture Preserving Filter for Radar and Optical Image Processing
Directory of Open Access Journals (Sweden)
Jaakko T. Astola
2005-05-01
Full Text Available Textural features are one of the most important types of useful information contained in images. In practice, these features are commonly masked by noise. Relatively little attention has been paid to the texture preserving properties of noise attenuation methods. This stimulates solving the following tasks: (1) to analyze the texture preservation properties of various filters; and (2) to design image processing methods capable of preserving texture features well while effectively reducing noise. This paper deals with examining the texture feature preserving properties of different filters. The study is performed for a set of texture samples and different noise variances. Locally adaptive three-state schemes are proposed for which texture is considered as a particular class. For "detection" of texture regions, several classifiers are proposed and analyzed. As shown, an appropriate trade-off of the designed filter properties is provided. This is demonstrated quantitatively for artificial test images and confirmed visually for real-life images.
Figueredo-Cardero, Alvio; Chico, Ernesto; Castilho, Leda; de Andrade Medronho, Ricardo
2012-01-01
In the present work, the main fluid flow features inside a rotating cylindrical filtration (RCF) system used as external cell retention device for animal cell perfusion processes were investigated using particle image velocimetry (PIV). The motivation behind this work was to provide experimental fluid dynamic data for such turbulent flow using a high-permeability filter, given the lack of information about this system in the literature. The results shown herein gave evidence that, at the boundary between the filter mesh and the fluid, a slip velocity condition in the tangential direction does exist, which had not been reported in the literature so far. In the RCF system tested, this accounted for a fluid velocity 10% lower than that of the filter tip, which could be important for the cake formation kinetics during filtration. Evidence confirming the existence of Taylor vortices under conditions of turbulent flow and high permeability, typical of animal cell perfusion RCF systems, was obtained. Second-order turbulence statistics were successfully calculated. The radial behavior of the second-order turbulent moments revealed that turbulence in this system is highly anisotropic, which is relevant for performing numerical simulations of this system. Copyright © 2012 American Institute of Chemical Engineers (AIChE).
Equivalence of functional limit theorems for stationary point processes and their Palm distributions
Nieuwenhuis, G.
1989-01-01
Let P be the distribution of a stationary point process on the real line and let P0 be its Palm distribution. In this paper we consider two types of functional limit theorems, those in terms of the number of points of the point process in (0, t] and those in terms of the location of the nth point
International Nuclear Information System (INIS)
LaFrate, P.J. Jr.; Stout, D.S.; Elliott, J.W.
1996-01-01
The Los Alamos National Laboratory (LANL) Decommissioning Project has decontaminated, demolished, and decommissioned a process exhaust system, two filter plenum buildings, and a firescreen plenum structure at Technical Area 21 (TA-21). The project began in August 1995 and was completed in January 1996. These high-efficiency particulate air (HEPA) filter plenums and associated ventilation ductwork provided process exhaust to fume hoods and glove boxes in TA-21 Buildings 2 through 5 when these buildings were active plutonium and uranium processing and research facilities. This paper summarizes the history of TA-21 plutonium and uranium processing and research activities and provides a detailed discussion of integrated work process controls, characterize-as-you-go methodology, unique engineering controls, decontamination techniques, demolition methodology, waste minimization, and volume reduction. Also presented in detail are the challenges facing the LANL Decommissioning Project to safely and economically decontaminate and demolish surplus facilities and the unique solutions to tough problems. This paper also shows the effectiveness of the integrated work package concept to control work through all phases.
Sun, M.; Yu, P. F.; Fu, J. X.; Ji, X. Q.; Jiang, T.
2017-08-01
The optimal process parameters and conditions for the treatment of slaughterhouse wastewater by a coagulation sedimentation-AF-biological contact oxidation process were studied, to address the problem of treating the high-concentration organic wastewater produced by small and medium-sized slaughter plants. The suitable water temperature and the optimum reaction time were determined by precipitation experiments; the effects of filtration rate and reflux ratio on COD and SS in the anaerobic biological filter, and of biofilm thickness and gas-water ratio on NH3-N and COD in the biological contact oxidation tank, were then studied. The results show that in coagulating sedimentation the optimum temperature is 16-24°C and the reaction time 20 min, and that in the anaerobic biological filter reactor the optimum filtration rate is 0.6 m/h and the optimum reflux ratio 300%. The most suitable biofilm thickness range is 1.8-2.2 mm and the most suitable gas-water ratio 12:1-14:1 in the biological contact oxidation pool. During 80 days of continuous operation of the coupled process, the average effluent mass concentrations of COD, TP and TN were 15.57 mg/L, 40 mg/L and 0.63 mg/L, and the average removal rates were 98.93%, 86.10% and 88.95%, respectively. The coupled process operates stably, produces good effluent quality, and is suitable for industrial application.
Microbial profile and critical control points during processing of 'robo ...
African Journals Online (AJOL)
STORAGESEVER
2009-05-18
... frying, surface fat draining, open-air cooling, and holding/packaging in polyethylene films during sales and distribution. The product was, however, classified under category III with respect to risk and the significance of monitoring and evaluation of quality using the hazard analysis critical control point.
Discussion of "Modern statistics for spatial point processes"
DEFF Research Database (Denmark)
Jensen, Eva Bjørn Vedel; Prokesová, Michaela; Hellmund, Gunnar
2007-01-01
ABSTRACT. The paper ‘Modern statistics for spatial point processes’ by Jesper Møller and Rasmus P. Waagepetersen is based on a special invited lecture given by the authors at the 21st Nordic Conference on Mathematical Statistics, held at Rebild, Denmark, in June 2006. At the conference, Antti...
Geometric anisotropic spatial point pattern analysis and Cox processes
DEFF Research Database (Denmark)
Møller, Jesper; Toftaker, Håkon
. In particular we study Cox process models with an elliptical pair correlation function, including shot noise Cox processes and log Gaussian Cox processes, and we develop estimation procedures using summary statistics and Bayesian methods. Our methodology is illustrated on real and synthetic datasets of spatial...
1983-05-20
Poisson processes is introduced: the amplitude has a law which is spherically invariant and the filter is real, linear and causal. It is shown how such a model can be identified from experimental data. (Author)
Liao, Yuxi; She, Xiwei; Wang, Yiwen; Zhang, Shaomin; Zhang, Qiaosheng; Zheng, Xiaoxiang; Principe, Jose C.
2015-12-01
Objective. Representation of movement in the motor cortex (M1) has been widely studied in brain-machine interfaces (BMIs). The electromyogram (EMG) has greater bandwidth than the conventional kinematic variables (such as position, velocity), and is functionally related to the discharge of cortical neurons. As the stochastic information of EMG is derived from the explicit spike time structure, point process (PP) methods will be a good solution for decoding EMG directly from neural spike trains. Previous studies usually assume linear or exponential tuning curves between neural firing and EMG, which may not be true. Approach. In our analysis, we estimate the tuning curves in a data-driven way and find both the traditional functional-excitatory and functional-inhibitory neurons, which are widely found across a rat’s motor cortex. To accurately decode EMG envelopes from M1 neural spike trains, the Monte Carlo point process (MCPP) method is implemented based on such nonlinear tuning properties. Main results. Better reconstruction of EMG signals is shown on baseline and extreme high peaks, as our method can better preserve the nonlinearity of the neural tuning during decoding. The MCPP improves the prediction accuracy (normalized mean squared error) by 57% and 66% on average compared with the adaptive point process filter using linear and exponential tuning curves respectively, for all 112 data segments across six rats. Compared to a Wiener filter using spike rates with an optimal window size of 50 ms, MCPP decoding EMG from a point process improves the normalized mean square error (NMSE) by 59% on average. Significance. These results suggest that neural tuning is constantly changing during task execution and therefore the use of spike timing methodologies and estimation of appropriate tuning curves needs to be undertaken for better EMG decoding in motor BMIs.
Robust estimation of autoregressive processes using a mixture-based filter-bank
Czech Academy of Sciences Publication Activity Database
Šmídl, V.; Anthony, Q.; Kárný, Miroslav; Guy, Tatiana Valentine
2005-01-01
Roč. 54, č. 4 (2005), s. 315-323 ISSN 0167-6911 R&D Projects: GA AV ČR IBS1075351; GA ČR GA102/03/0049; GA ČR GP102/03/P010; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : Bayesian estimation * probabilistic mixtures * recursive estimation Subject RIV: BC - Control Systems Theory Impact factor: 1.239, year: 2005 http://library.utia.cas.cz/separaty/historie/karny-robust estimation of autoregressive processes using a mixture-based filter- bank .pdf
Effects of reactive filters based on modified zeolite in dairy industry wastewater treatment process
Directory of Open Access Journals (Sweden)
Kolaković Srđan
2013-01-01
Application of adsorbents based on organo-zeolites has certain advantages over conventional methods applied in food industry wastewater treatment process. The case study presented in this paper examines the possibilities and effects of treatment of dairy industry wastewater by using adsorbents based on organo-zeolites. The obtained results indicate favorable filtration properties of organo-zeolite, their high level of adsorption of organic matter and nitrate nitrogen in the analyzed wastewater. This paper concludes with recommendations of optimal technical and technological parameters for the application of these filters in practice.
A Bayesian MCMC method for point process models with intractable normalising constants
DEFF Research Database (Denmark)
Berthelsen, Kasper Klitgaard; Møller, Jesper
2004-01-01
to simulate from the "unknown distribution", perfect simulation algorithms become useful. We illustrate the method in cases whre the likelihood is given by a Markov point process model. Particularly, we consider semi-parametric Bayesian inference in connection to both inhomogeneous Markov point process models...... and pairwise interaction point processes....
Topobathymetric LiDAR point cloud processing and landform classification in a tidal environment
Skovgaard Andersen, Mikkel; Al-Hamdani, Zyad; Steinbacher, Frank; Rolighed Larsen, Laurids; Brandbyge Ernstsen, Verner
2017-04-01
Historically it has been difficult to create high resolution Digital Elevation Models (DEMs) in land-water transition zones due to shallow water depth and often challenging environmental conditions. This gap of information has been reflected as a "white ribbon" with no data in the land-water transition zone. In recent years, the technology of airborne topobathymetric Light Detection and Ranging (LiDAR) has proven capable of filling out the gap by simultaneously capturing topographic and bathymetric elevation information, using only a single green laser. We collected green LiDAR point cloud data in the Knudedyb tidal inlet system in the Danish Wadden Sea in spring 2014. Creating a DEM from a point cloud requires the general processing steps of data filtering, water surface detection and refraction correction. However, there is no transparent and reproducible method for processing green LiDAR data into a DEM, specifically regarding the procedure of water surface detection and modelling. We developed a step-by-step procedure for creating a DEM from raw green LiDAR point cloud data, including a procedure for making a Digital Water Surface Model (DWSM) (see Andersen et al., 2017). Two different classification analyses were applied to the high resolution DEM: A geomorphometric and a morphological classification, respectively. The classification methods were originally developed for a small test area; but in this work, we have used the classification methods to classify the complete Knudedyb tidal inlet system. References Andersen MS, Gergely Á, Al-Hamdani Z, Steinbacher F, Larsen LR, Ernstsen VB (2017). Processing and performance of topobathymetric lidar data for geomorphometric and morphological classification in a high-energy tidal environment. Hydrol. Earth Syst. Sci., 21: 43-63, doi:10.5194/hess-21-43-2017. Acknowledgements This work was funded by the Danish Council for Independent Research | Natural Sciences through the project "Process-based understanding and
Gómez Valverde, Juan J.; Ortuño, Juan E.; Guerra, Pedro; Hermann, Boris; Zabihian, Behrooz; Rubio-Guivernau, José L.; Santos, Andrés.; Drexler, Wolfgang; Ledesma-Carbayo, Maria J.
2015-07-01
Optical Coherence Tomography (OCT) has shown great potential as a complementary imaging tool in the diagnosis of skin diseases. Speckle noise is the most prominent artifact present in OCT images and can limit interpretation and detection capabilities. In this work we propose a new speckle reduction process and compare it with various denoising filters with high edge-preserving potential, using several sets of dermatological OCT B-scans. To validate the performance we used a custom-designed spectral domain OCT and two different dataset groups. The first group consisted of five datasets of a single B-scan captured N times (with N<20); the second of five 3D volumes of 25 B-scans. As quality metrics we used the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and equivalent number of looks (ENL). Our results show that a process based on a combination of a 2D enhanced sigma digital filter and a wavelet compounding method achieves the best results in terms of the improvement of the quality metrics. In the first group of individual B-scans we achieved improvements in SNR, CNR and ENL of 16.87 dB, 2.19 and 328 respectively; for the 3D volume datasets the improvements were 15.65 dB, 3.44 and 1148. Our results suggest that the proposed enhancement process may significantly reduce speckle, increasing SNR, CNR and ENL and reducing the number of extra acquisitions of the same frame.
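The three quality metrics can be computed as in the sketch below (one common set of definitions, assumed here; the speckled test image is synthetic, and the compounding shown is plain frame averaging rather than the paper's wavelet compounding):

```python
import numpy as np

def snr_db(region):
    """Signal-to-noise ratio of a homogeneous region, in dB."""
    return 20.0 * np.log10(region.mean() / region.std())

def cnr(region, background):
    """Contrast-to-noise ratio between a feature region and background."""
    return abs(region.mean() - background.mean()) / np.sqrt(
        0.5 * (region.var() + background.var()))

def enl(region):
    """Equivalent number of looks of a homogeneous (speckled) region."""
    return region.mean() ** 2 / region.var()

# Illustrative: speckle modeled as multiplicative gamma noise; averaging
# N independent frames (compounding) raises ENL roughly by a factor of N.
rng = np.random.default_rng(2)
frame = lambda: 100.0 * rng.gamma(shape=4.0, scale=0.25, size=(64, 64))
single = frame()
compound = np.mean([frame() for _ in range(8)], axis=0)
background = 30.0 * rng.gamma(4.0, 0.25, (64, 64))
print(f"SNR {snr_db(single):.1f} dB, CNR {cnr(single, background):.2f}, "
      f"ENL single {enl(single):.1f} -> compounded {enl(compound):.1f}")
```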
DEFF Research Database (Denmark)
Bekö, Gabriel; Clausen, Geo; Weschler, Charles J.
2007-01-01
understanding of such processes. The measured ratio of downstream to upstream submicron particle concentrations increased when ozone was added to air passing through samples from loaded particle filters. Such an observation is consistent with low volatility oxidation products desorbing from the filter...... been in service from 2 to 16 weeks found that ozone removal efficiencies changed in a manner that indicated at least two different removal mechanisms-reactions with compounds present on the filter media following manufacturing and reactions with compounds associated with captured particles....... The contribution from the former varies with the type and manufacturer of the filter, while that of the latter varies with the duration of service and nature of the captured particles. In complimentary experiments, a filter sample protected from ozone during its 9 weeks of service had higher ozone removal...
Pre- and post-processing filters for improvement of blood velocity estimation
DEFF Research Database (Denmark)
Schlaikjer, Malene; Jensen, Jørgen Arendt
2000-01-01
with different signal-to-noise ratios (SNR). The exact extent of the vessel and the true velocities are thereby known. Velocity estimates were obtained by employing Kasai's autocorrelator on the data. The post-processing filter was used on the computed 2D velocity map. An improvement of the RMS error...... velocity in the vessels. Post-processing is beneficial to obtain an image that minimizes the variation, and present the important information to the clinicians. Applying the theory of fluid mechanics introduces restrictions on the variations possible in a flow field. Neighboring estimates in time and space...... should be highly correlated, since transitions should occur smoothly. This idea is the basis of the algorithm developed in this study. From Bayesian image processing theory an a posteriori probability distribution for the velocity field is computed based on constraints on smoothness. An estimate...
INHOMOGENEITY IN SPATIAL COX POINT PROCESSES – LOCATION DEPENDENT THINNING IS NOT THE ONLY OPTION
Directory of Open Access Journals (Sweden)
Michaela Prokešová
2010-11-01
In the literature on point processes, by far the most popular option for introducing inhomogeneity into a point process model is location dependent thinning (resulting in a second-order intensity-reweighted stationary point process). This produces a very tractable model, and several fast estimation procedures are available. Nevertheless, this model dilutes the interaction (or the geometrical structure) of the original homogeneous model in a particular way. For Markov point processes, several alternative inhomogeneous models have been suggested and investigated in the literature, but this is not so for Cox point processes, the canonical models for clustered point patterns. In this contribution we discuss several other options for defining inhomogeneous Cox point process models that result in point patterns with different types of geometric structure. We further investigate possible parameter estimation procedures for such models.
Marked point process for modelling seismic activity (case study in Sumatra and Java)
Pratiwi, Hasih; Sulistya Rini, Lia; Wayan Mangku, I.
2018-05-01
An earthquake is a natural phenomenon that is random and irregular in space and time. Forecasting earthquake occurrence at a given location remains difficult, so earthquake forecast methodology continues to be developed from both the seismological and the stochastic perspective. To describe such random phenomena in space and time, a point process approach can be used. There are two types of point processes: temporal point processes and spatial point processes. A temporal point process relates to events observed over time as a time sequence, whereas a spatial point process describes the locations of objects in two- or three-dimensional space. The points of a point process can be labelled with additional information called marks; a marked point process can be considered as a pair (x, m), where x is the location of a point and m is the mark attached to it. This study models a marked point process indexed by time for earthquake data from Sumatra Island and Java Island. The model can be used to analyse seismic activity through its conditional intensity function, which depends on the history of the process up to time t. Based on data obtained from the U.S. Geological Survey from 1973 to 2017 with magnitude threshold 5, we obtained maximum likelihood estimates for the parameters of the intensity function. The estimated model parameters show that seismic activity in Sumatra Island is greater than in Java Island.
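A conditional intensity of the kind described, depending on the history of the process before t, can be sketched with a Hawkes (self-exciting) model; the exponential kernel, parameter values and event times below are illustrative assumptions, not the fitted seismicity model:

```python
import numpy as np

def hawkes_intensity(t, events, mu, alpha, beta):
    """Conditional intensity of a Hawkes process at time t, given the
    history of event times strictly before t."""
    past = events[events < t]
    return mu + alpha * np.sum(np.exp(-beta * (t - past)))

def hawkes_loglik(events, T, mu, alpha, beta):
    """Log-likelihood: sum of log-intensities at the events minus the
    compensator (integrated intensity) over [0, T]."""
    ll = sum(np.log(hawkes_intensity(t, events, mu, alpha, beta))
             for t in events)
    compensator = mu * T + (alpha / beta) * np.sum(
        1.0 - np.exp(-beta * (T - events)))
    return ll - compensator

events = np.array([0.5, 0.6, 2.0, 2.1, 2.15, 5.0])  # clustered event times
print(hawkes_loglik(events, T=6.0, mu=0.3, alpha=0.8, beta=2.0))
```

Maximum likelihood estimation, as used in the study, amounts to maximizing a log-likelihood of this form over the intensity parameters.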
Processing Complex Sounds Passing through the Rostral Brainstem: The New Early Filter Model
Marsh, John E.; Campbell, Tom A.
2016-01-01
The rostral brainstem receives both “bottom-up” input from the ascending auditory system and “top-down” descending corticofugal connections. Speech information passing through the inferior colliculus of elderly listeners reflects the periodicity envelope of a speech syllable. This information arguably also reflects a composite of temporal-fine-structure (TFS) information from the higher frequency vowel harmonics of that repeated syllable. The amplitude of those higher frequency harmonics, bearing even higher frequency TFS information, correlates positively with the word recognition ability of elderly listeners under reverberatory conditions. Also relevant is that working memory capacity (WMC), which is subject to age-related decline, constrains the processing of sounds at the level of the brainstem. Turning to the effects of a visually presented sensory or memory load on auditory processes, there is a load-dependent reduction of that processing, as manifest in the auditory brainstem responses (ABR) evoked by to-be-ignored clicks. Wave V decreases in amplitude with increases in the visually presented memory load. A visually presented sensory load also produces a load-dependent reduction of a slightly different sort: The sensory load of visually presented information limits the disruptive effects of background sound upon working memory performance. A new early filter model is thus advanced whereby systems within the frontal lobe (affected by sensory or memory load) cholinergically influence top-down corticofugal connections. Those corticofugal connections constrain the processing of complex sounds such as speech at the level of the brainstem. Selective attention thereby limits the distracting effects of background sound entering the higher auditory system via the inferior colliculus. Processing TFS in the brainstem relates to perception of speech under adverse conditions. Attentional selectivity is crucial when the signal heard is degraded or masked: e
Applying Enhancement Filters in the Pre-processing of Images of Lymphoma
International Nuclear Information System (INIS)
Silva, Sérgio Henrique; Do Nascimento, Marcelo Zanchetta; Neves, Leandro Alves; Batista, Valério Ramos
2015-01-01
Lymphoma is a type of cancer that affects the immune system and is classified as Hodgkin or non-Hodgkin. It is one of the ten most common types of cancer worldwide, accounting for three to four percent of all malignant neoplasms diagnosed. Our work presents a study of filters for enhancing images of lymphoma at the pre-processing step, where enhancement is useful for removing noise from the digital images. We analysed noise caused by different sources, such as room vibration, scraps and defocusing, in the following classes of lymphoma: follicular, mantle cell and B-cell chronic lymphocytic leukemia. Gaussian, Median and Mean-Shift filters were applied in different colour models (RGB, Lab and HSV). Afterwards, we performed a quantitative analysis of the images by means of the Structural Similarity Index in order to evaluate the similarity between the images. In all cases we obtained a certainty of at least 75%, which rises to 99% if one considers only HSV. We therefore conclude that HSV is an important choice of colour model for pre-processing histological images of lymphoma, because in this case the resulting image receives the best enhancement.
Lasso and probabilistic inequalities for multivariate point processes
DEFF Research Database (Denmark)
Hansen, Niels Richard; Reynaud-Bouret, Patricia; Rivoirard, Vincent
2015-01-01
Due to its low computational cost, Lasso is an attractive regularization method for high-dimensional statistical settings. In this paper, we consider multivariate counting processes depending on an unknown function parameter to be estimated by linear combinations of a fixed dictionary. To select...... for multivariate Hawkes processes are proven, which allows us to check these assumptions by considering general dictionaries based on histograms, Fourier or wavelet bases. Motivated by problems of neuronal activity inference, we finally carry out a simulation study for multivariate Hawkes processes and compare our...... methodology with the adaptive Lasso procedure proposed by Zou in (J. Amer. Statist. Assoc. 101 (2006) 1418–1429). We observe an excellent behavior of our procedure. We rely on theoretical aspects for the essential question of tuning our methodology. Unlike adaptive Lasso of (J. Amer. Statist. Assoc. 101 (2006...
Modelling financial high frequency data using point processes
DEFF Research Database (Denmark)
Hautsch, Nikolaus; Bauwens, Luc
In this chapter written for a forthcoming Handbook of Financial Time Series to be published by Springer-Verlag, we review the econometric literature on dynamic duration and intensity processes applied to high frequency financial data, which was boosted by the work of Engle and Russell (1997...
International Nuclear Information System (INIS)
Su Xiaoxing; Zhang Chuanzeng; Ma Tianxue; Wang Yuesheng
2012-01-01
When three-dimensional (3D) phononic band structures are calculated by using the finite difference time domain (FDTD) method with a relatively small number of iterations, the results can be effectively improved by post-processing the FDTD time series (FDTD-TS) based on the filter diagonalization method (FDM), instead of the classical fast Fourier transform. In this paper, we propose a way to further improve the performance of the FDM-based post-processing method by introducing a relatively large number of observing points to record the FDTD-TS. To this end, the existing scheme of FDTD-TS preprocessing is modified. With the new preprocessing scheme, the processing efficiency of a single FDTD-TS can be improved significantly, and thus the entire post-processing method can have sufficiently high efficiency even when a relatively large number of observing points are used. The feasibility of the proposed method for improvement is verified by the numerical results.
Nematzadeh, Nasim; Powers, David M W; Lewis, Trent W
2017-12-01
Why does our visual system fail to reconstruct reality, when we look at certain patterns? Where do Geometrical illusions start to emerge in the visual pathway? How far should we take computational models of vision with the same visual ability to detect illusions as we do? This study addresses these questions, by focusing on a specific underlying neural mechanism involved in our visual experiences that affects our final perception. Among many types of visual illusion, 'Geometrical' and, in particular, 'Tilt Illusions' are rather important, being characterized by misperception of geometric patterns involving lines and tiles in combination with contrasting orientation, size or position. Over the last decade, many new neurophysiological experiments have led to new insights as to how, when and where retinal processing takes place, and the encoding nature of the retinal representation that is sent to the cortex for further processing. Based on these neurobiological discoveries, we provide computer simulation evidence from modelling retinal ganglion cells responses to some complex Tilt Illusions, suggesting that the emergence of tilt in these illusions is partially related to the interaction of multiscale visual processing performed in the retina. The output of our low-level filtering model is presented for several types of Tilt Illusion, predicting that the final tilt percept arises from multiple-scale processing of the Differences of Gaussians and the perceptual interaction of foreground and background elements. The model is a variation of classical receptive field implementation for simple cells in early stages of vision with the scales tuned to the object/texture sizes in the pattern. Our results suggest that this model has a high potential in revealing the underlying mechanism connecting low-level filtering approaches to mid- and high-level explanations such as 'Anchoring theory' and 'Perceptual grouping'.
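The multiscale Difference-of-Gaussians filtering that this retinal model relies on can be sketched as follows. This is an illustrative reimplementation, not the authors' code, and the scale values and the toy stimulus are arbitrary assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_response(image, sigma_center, surround_ratio=2.0):
    """Difference of Gaussians: narrow centre minus wider surround,
    a classical receptive-field model for retinal ganglion cells."""
    center = gaussian_filter(image, sigma_center)
    surround = gaussian_filter(image, sigma_center * surround_ratio)
    return center - surround

def multiscale_dog(image, sigmas=(1.0, 2.0, 4.0)):
    """Stack of DoG responses at several spatial scales, tuned (here
    arbitrarily) to the object/texture sizes in the pattern."""
    return np.stack([dog_response(image, s) for s in sigmas])

# toy tiled pattern standing in for an illusion stimulus
idx = np.indices((64, 64)).sum(axis=0)
pattern = ((idx // 8) % 2).astype(float)
edges = multiscale_dog(pattern)
```

On a uniform field the response is zero; structure appears only at contrast borders, which is where tilt cues would emerge in such a model.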
Lasso and probabilistic inequalities for multivariate point processes
Hansen, Niels Richard; Reynaud-Bouret, Patricia; Rivoirard, Vincent
2012-01-01
Due to its low computational cost, Lasso is an attractive regularization method for high-dimensional statistical settings. In this paper, we consider multivariate counting processes depending on an unknown function parameter to be estimated by linear combinations of a fixed dictionary. To select coefficients, we propose an adaptive $\\ell_{1}$-penalization methodology, where data-driven weights of the penalty are derived from new Bernstein type inequalities for martingales. Oracle inequalities...
The s-Process Branching Point at 205Pb
Tonchev, Anton; Tsoneva, N.; Bhatia, C.; Arnold, C. W.; Goriely, S.; Hammond, S. L.; Kelley, J. H.; Kwan, E.; Lenske, H.; Piekarewicz, J.; Raut, R.; Rusev, G.; Shizuma, T.; Tornow, W.
2017-09-01
Accurate neutron-capture cross sections for radioactive nuclei near the line of beta stability are crucial for understanding s-process nucleosynthesis. However, neutron-capture cross sections for short-lived radionuclides are difficult to measure due to the fact that the measurements require both highly radioactive samples and intense neutron sources. We consider photon scattering using monoenergetic and 100% linearly polarized photon beams to obtain the photoabsorption cross section on 206Pb below the neutron separation energy. This observable becomes an essential ingredient in the Hauser-Feshbach statistical model for calculations of capture cross sections on 205Pb. The newly obtained photoabsorption information is also used to estimate the Maxwellian-averaged radiative cross section of 205Pb(n,g)206Pb at 30 keV. The astrophysical impact of this measurement on s-process nucleosynthesis will be discussed. This work was performed under the auspices of US DOE by LLNL under Contract DE-AC52-07NA27344.
The Hinkley Point decision: An analysis of the policy process
International Nuclear Information System (INIS)
Thomas, Stephen
2016-01-01
In 2006, the British government launched a policy to build nuclear power reactors based on a claim that the power produced would be competitive with fossil fuel and would require no public subsidy. A decade later, it is not clear how many, if any, orders will be placed and the claims on costs and subsidies have proved false. Despite this failure to deliver, the policy is still being pursued with undiminished determination. The finance model that is now proposed is seen as a model other European countries can follow so the success or otherwise of the British nuclear programme will have implications outside the UK. This paper contends that the checks and balances that should weed out misguided policies, have failed. It argues that the most serious failure is with the civil service and its inability to provide politicians with high quality advice – truth to power. It concludes that the failure is likely to be due to the unwillingness of politicians to listen to opinions that conflict with their beliefs. Other weaknesses include the lack of energy expertise in the media, the unwillingness of the public to engage in the policy process and the impotence of Parliamentary Committees. - Highlights: •Britain's nuclear power policy is failing due to high costs and problems of finance. •This has implications for European countries who want to use the same financing model. •The continued pursuit of a failing policy is due to poor advice from civil servants. •Lack of expertise in the media and lack of public engagement have contributed. •Parliamentary processes have not provided proper critical scrutiny.
Application and Optimization of Kalman Filter for Baseband Signal Processing of GPS Receivers
Directory of Open Access Journals (Sweden)
He Yanpin
2016-01-01
Full Text Available High-sensitivity tracking in GPS receivers is required in many weak-signal circumstances. The key to improving sensitivity is the optimization of the tracking loop filter. Since the Kalman filter is the optimal linear filter, it is used in many engineering fields. This article introduces the application of a Kalman filter, replacing the traditional loop filter, as the loop filter of the carrier tracking loop in a GPS receiver to improve tracking sensitivity. Simulation results show that the new structure improves the tracking sensitivity by 6 dB and makes the tracking loop more robust when the navigation signal is weak. The optimization of the Kalman filter is also analysed, which further improves the sensitivity by 4 dB.
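A carrier-phase tracking Kalman filter of the general kind described here can be sketched as follows. The two-state phase/frequency model, the noise covariances, and the sampling interval are illustrative assumptions, not the article's actual design:

```python
import numpy as np

def kalman_track(z, dt=1e-3, q=1e-4, r=0.09):
    """Two-state (phase, frequency) Kalman filter applied to noisy
    phase-discriminator outputs z; returns the filtered phase."""
    F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-frequency model
    H = np.array([[1.0, 0.0]])                   # only phase is observed
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],    # random-walk frequency noise
                      [dt**2 / 2, dt]])
    R = np.array([[r]])
    x, P = np.zeros((2, 1)), np.eye(2)
    out = []
    for zk in z:
        x = F @ x                                # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                      # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[zk]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0, 0])
    return np.array(out)

rng = np.random.default_rng(0)
t = np.arange(2000) * 1e-3
true_phase = 2.0 * t                             # 2 rad/s frequency offset
est = kalman_track(true_phase + rng.normal(0, 0.3, t.size))
```

Because the filter carries a frequency state, it keeps tracking the phase ramp even though only noisy phase samples are observed.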
GPR Raw-Data Order Statistic Filtering and Split-Spectrum Processing to Detect Moisture
Directory of Open Access Journals (Sweden)
Gokhan Kilic
2014-05-01
Full Text Available Considerable research into the area of bridge health monitoring has been undertaken; however, information is still lacking on the effects of certain defects, such as moisture ingress, on the results of ground penetrating radar (GPR) surveying. In this paper, this issue will be addressed by examining the results of a GPR bridge survey, specifically the effect of moisture on the predicted position of the rebars. It was found that moisture ingress alters the radargram to indicate distortion or skewing of the steel reinforcements, when in fact destructive testing was able to confirm that no such distortion or skewing had occurred. Additionally, split-spectrum processing with order statistic filters was utilized to detect moisture ingress from the GPR raw data.
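The order-statistic side of such processing can be sketched as a sliding-window rank selection over a 1D trace (the split-spectrum stage is not reproduced; window length and rank below are arbitrary choices):

```python
import numpy as np

def order_statistic_filter(trace, window=5, rank=2):
    """Sliding-window order-statistic filter: each output sample is the
    `rank`-th smallest value in its window (rank = window//2 gives the
    median, which rejects impulsive clutter)."""
    pad = window // 2
    padded = np.pad(trace, pad, mode='edge')
    out = np.empty(trace.size, dtype=float)
    for i in range(trace.size):
        out[i] = np.sort(padded[i:i + window])[rank]
    return out

# impulsive clutter superimposed on a smooth reflection
trace = np.sin(np.linspace(0, np.pi, 50))
trace[10] = 5.0          # isolated spike
clean = order_statistic_filter(trace, window=5, rank=2)
```

The median variant removes the isolated spike while leaving the smooth reflection essentially untouched.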
Rare-earth doped transparent ceramics for spectral filtering and quantum information processing
Kunkel, Nathalie; Ferrier, Alban; Thiel, Charles W.; Ramírez, Mariola O.; Bausá, Luisa E.; Cone, Rufus L.; Ikesue, Akio; Goldner, Philippe
2015-09-01
Homogeneous linewidths below 10 kHz are reported for the first time in high-quality Eu3+ doped Y2O3 transparent ceramics. This result is obtained on the 7F0→5D0 transition in Eu3+ doped Y2O3 ceramics and corresponds to an improvement of nearly one order of magnitude compared to previously reported values in transparent ceramics. Furthermore, we observed spectral hole lifetimes of ~15 min that are long enough to enable efficient optical pumping of the nuclear hyperfine levels. Additionally, different Eu3+ concentrations (up to 1.0%) were studied, resulting in an increase of up to a factor of three in the peak absorption coefficient. These results suggest that transparent ceramics can be useful in applications where narrow and deep spectral holes can be burned into highly absorbing lines, such as quantum information processing and spectral filtering.
Two-dimensional signal processing using a morphological filter for holographic memory
Kondo, Yo; Shigaki, Yusuke; Yamamoto, Manabu
2012-03-01
Today, along with the wider use of high-speed information networks and multimedia, it is increasingly necessary to have higher-density and higher-transfer-rate storage devices. Therefore, research and development into holographic memories with three-dimensional storage areas is being carried out to realize next-generation large-capacity memories. However, in holographic memories, interference between bits, which affects the detection characteristics, occurs as a result of aberrations such as wavefront deviation in the optical system. In this study, we pay particular attention to the nonlinear factors that cause bit errors, and investigate filters based on a Volterra equalizer and morphological operations as a means of signal processing.
Directory of Open Access Journals (Sweden)
Z. Lari
2012-07-01
Full Text Available Over the past few years, LiDAR systems have been established as a leading technology for the acquisition of high density point clouds over physical surfaces. These point clouds will be processed for the extraction of geo-spatial information. Local point density is one of the most important properties of the point cloud that highly affects the performance of data processing techniques and the quality of extracted information from these data. Therefore, it is necessary to define a standard methodology for the estimation of local point density indices to be considered for the precise processing of LiDAR data. Current definitions of local point density indices, which only consider the 2D neighbourhood of individual points, are not appropriate for 3D LiDAR data and cannot be applied for laser scans from different platforms. In order to resolve the drawbacks of these methods, this paper proposes several approaches for the estimation of the local point density index which take the 3D relationship among the points and the physical properties of the surfaces they belong to into account. In the simplest approach, an approximate value of the local point density for each point is defined while considering the 3D relationship among the points. In the other approaches, the local point density is estimated by considering the 3D neighbourhood of the point in question and the physical properties of the surface which encloses this point. The physical properties of the surfaces enclosing the LiDAR points are assessed through eigen-value analysis of the 3D neighbourhood of individual points and adaptive cylinder methods. This paper will discuss these approaches and highlight their impact on various LiDAR data processing activities (i.e., neighbourhood definition, region growing, segmentation, boundary detection, and classification). Experimental results from airborne and terrestrial LiDAR data verify the efficacy of considering local point density variation for
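The simplest 3D local-point-density estimate described above can be approximated with a k-nearest-neighbour sphere. This is a hedged sketch: the k value and the sphere-volume normalization are assumptions, and the paper's eigen-value and adaptive-cylinder variants are not reproduced:

```python
import numpy as np
from scipy.spatial import cKDTree

def local_point_density(points, k=10):
    """Approximate 3D local point density: k neighbours divided by the
    volume of the sphere reaching the k-th neighbour."""
    tree = cKDTree(points)
    # query k+1 neighbours because each point's nearest neighbour is itself
    dists, _ = tree.query(points, k=k + 1)
    r = dists[:, -1]
    volume = (4.0 / 3.0) * np.pi * r**3
    return k / volume

rng = np.random.default_rng(1)
cloud = rng.uniform(0, 10, size=(5000, 3))   # ~5 points per unit volume
density = local_point_density(cloud, k=10)
```

For points sampled from a 2D surface embedded in 3D, an area-based rather than volume-based normalization would be the appropriate refinement, which is precisely the paper's motivation.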
Directory of Open Access Journals (Sweden)
Zi-Ming Feng
2016-01-01
Full Text Available Hydrolysed polyacrylamide (HPAM) mother liquor is mainly used to extract oil. The HPAM solution must be filtered of impurities with a bag filter before it is injected into the oil well, and the pressure drop of the HPAM mother liquor must be less than 0.02 MPa during impurity filtration. The factors influencing the pressure drop therefore need to be researched. In this work, computational fluid dynamics (CFD) software was used to study some key factors influencing the pressure drop, such as porosity, filter outlet pressure, inlet flow rate and viscosity of the mother liquor. The simulation results indicated that the pressure drop across the filter bag increased with increasing porosity, outlet pressure, inlet flow rate and mother liquor viscosity.
The effects of using shell filters in the process of depuration for the survival of Anadara sp.
Pursetyo, K. T.; Sulmartiwi, L.; Alamsjah, M. A.; Tjahjaningsih, W.; Rosmarini, A. S.; Nikmah, M.
2018-04-01
Anadara sp. is a shellfish that is both a rich source of animal protein and of high economic value. However, to be safe as food, the product must meet government standards, one of which limits heavy metals in the shells. Shellfish sanitation standards require a depuration process to remove contaminants, whether bacteria or heavy metals. In this study, a randomized design with five treatments was used: P0 (control / without filter), P1 (25 % filter with shells), P2 (50 % filter with shells), P3 (75 % filter with shells), P4 (100 % filter with shells). Each treatment was replicated 4 times. The results showed that, with shell filtering in the depuration process, the highest shell mortality over 24 hours occurred in P4 (24.39 %), and the highest mortality over 48 hours also occurred in P4 (61.71 %). During the research, water quality was measured at 29-30 °C, pH 7.2-7.7, dissolved oxygen (DO) 4-4.4 mg/L and salinity 28-30 ppt.
Karacan, C Özgen; Olea, Ricardo A
2013-08-01
Coal seam degasification and its success are important for controlling methane, and thus for the health and safety of coal miners. During the course of degasification, properties of coal seams change. Thus, the changes in coal reservoir conditions and in-place gas content as well as methane emission potential into mines should be evaluated by examining time-dependent changes and the presence of major heterogeneities and geological discontinuities in the field. In this work, time-lapsed reservoir and fluid storage properties of the New Castle coal seam, Mary Lee/Blue Creek seam, and Jagger seam of Black Warrior Basin, Alabama, were determined from gas and water production history matching and production forecasting of vertical degasification wellbores. These properties were combined with isotherm and other important data to compute gas-in-place (GIP) and its change with time at borehole locations. Time-lapsed training images (TIs) of GIP and GIP difference corresponding to each coal and date were generated by using these point-wise data and Voronoi decomposition on the TI grid, which included faults as discontinuities for expansion of Voronoi regions. Filter-based multiple-point geostatistical simulations, which were preferred in this study due to anisotropies and discontinuities in the area, were used to predict time-lapsed GIP distributions within the study area. Performed simulations were used for mapping spatial time-lapsed methane quantities as well as their uncertainties within the study area. The systematic approach presented in this paper is the first time in literature that history matching, TIs of GIPs and filter simulations are used for degasification performance evaluation and for assessing GIP for mining safety. Results from this study showed that using production history matching of coalbed methane wells to determine time-lapsed reservoir data could be used to compute spatial GIP and representative GIP TIs generated through Voronoi decomposition
International Nuclear Information System (INIS)
Theodorsen, A; Garcia, O E; Rypdal, M
2017-01-01
Filtered Poisson processes are often used as reference models for intermittent fluctuations in physical systems. Such a process is here extended by adding a noise term, either as a purely additive term to the process or as a dynamical term in a stochastic differential equation. The lowest order moments, probability density function, auto-correlation function and power spectral density are derived and used to identify and compare the effects of the two different noise terms. Monte-Carlo studies of synthetic time series are used to investigate the accuracy of model parameter estimation and to identify methods for distinguishing the noise types. It is shown that the probability density function and the three lowest order moments provide accurate estimations of the model parameters, but are unable to separate the noise types. The auto-correlation function and the power spectral density also provide methods for estimating the model parameters, as well as being capable of identifying the noise type. The number of times the signal crosses a prescribed threshold level in the positive direction also promises to be able to differentiate the noise type. (paper)
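A filtered Poisson process with purely additive noise, as studied here, can be simulated directly. The pulse shape (one-sided exponential), amplitude distribution, and parameter values below are illustrative assumptions:

```python
import numpy as np

def filtered_poisson(T=1000.0, dt=0.01, rate=0.5, tau=1.0,
                     amp_mean=1.0, noise_std=0.1, seed=0):
    """Shot noise: Poisson arrivals with exponentially distributed
    amplitudes filtered through a one-sided exponential pulse, plus
    additive Gaussian observation noise."""
    rng = np.random.default_rng(seed)
    n = int(round(T / dt))
    events = rng.random(n) < rate * dt           # Poisson arrivals on a grid
    amps = np.where(events, rng.exponential(amp_mean, n), 0.0)
    pulse = np.exp(-np.arange(0.0, 10.0 * tau, dt) / tau)
    signal = np.convolve(amps, pulse)[:n]
    # by Campbell's theorem the process mean is rate * amp_mean * tau
    return signal + rng.normal(0.0, noise_std, n)

x = filtered_poisson()
```

Synthetic series like this are what the Monte Carlo parameter-estimation studies in the paper operate on; the observation-noise and dynamical-noise variants differ in where the noise term enters.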
International Nuclear Information System (INIS)
Kim, Cheol Jung; Kim, Min Suk; Baik, Sung Hoon; Chung, Chin Man
2000-06-01
The application of high power Nd: YAG lasers for precision welding in industry has been growing quite fast these days in diverse areas such as the automobile, the electronics and the aerospace industries. These diverse applications also require the new developments for the precise control and the reliable process monitoring. Due to the hostile environment in laser welding, a remote monitoring is required. The present development relates in general to weld process monitoring techniques, and more particularly to improved methods and apparatus for real-time monitoring of thermal radiation of a weld pool to monitor a size variation and a focus shift of the weld pool for weld process control, utilizing the chromatic aberration of focusing lens or lenses. The monitoring technique of the size variation and the focus shift of a weld pool is developed by using the chromatic filtering of the thermal radiation from a weld pool. The monitoring of weld pool size variation can also be used to monitor the weld depth in a laser welding. Furthermore, the monitoring of the size variation of a weld pool is independent of the focus shift of a weld pool and the monitoring of the focus shift of a weld pool is independent of the size variation of a weld pool
Processes of microbial pesticide degradation in rapid sand filters for treatment of drinking water
DEFF Research Database (Denmark)
Hedegaard, Mathilde Jørgensen; Albrechtsen, Hans-Jørgen
Aerobic rapid sand filters for treatment of groundwater at waterworks were investigated for their ability to remove pesticides. The potential, kinetics and mechanisms of microbial pesticide removal were investigated in microcosms consisting of filter sand, treated water and pesticides in initial...... concentrations of 0.04-2.4 μg/L. The pesticides were removed from the water in microcosms with filter sand from all three investigated sand filters. Within the experimental period of six to 13 days, 65-85% of the bentazone, 86-93% of the glyphosate, and 97-99% of the p-nitrophenol were removed from the water phase....
Locally-adaptive Myriad Filters for Processing ECG Signals in Real Time
Directory of Open Access Journals (Sweden)
Nataliya Tulyakova
2017-03-01
Full Text Available Locally adaptive myriad filters to suppress noise in electrocardiographic (ECG) signals in near-real time are proposed. Statistical estimates of efficiency, according to integral values of criteria such as mean square error (MSE) and signal-to-noise ratio (SNR), are obtained for test ECG signals sampled at 400 Hz and embedded in additive Gaussian noise with different values of variance. A comparative analysis of the adaptive filters is carried out. High efficiency of ECG filtering and high quality of signal preservation are demonstrated. It is shown that locally adaptive myriad filters provide a high degree of suppression of additive Gaussian noise while allowing real-time implementation.
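The sample myriad underlying such filters is the minimizer over theta of sum(log(K^2 + (x - theta)^2)) within a window. A non-adaptive sketch via grid search (fixed linearity parameter K, none of the paper's local adaptation) might look like:

```python
import numpy as np

def sample_myriad(window, K=1.0, grid_points=200):
    """Sample myriad: argmin over theta of sum(log(K^2 + (x - theta)^2)),
    located by dense grid search over the sample range."""
    thetas = np.linspace(window.min(), window.max(), grid_points)
    cost = np.log(K**2 + (window[None, :] - thetas[:, None])**2).sum(axis=1)
    return thetas[np.argmin(cost)]

def myriad_filter(signal, window=9, K=1.0):
    """Sliding-window myriad smoother with edge padding."""
    pad = window // 2
    padded = np.pad(signal, pad, mode='edge')
    return np.array([sample_myriad(padded[i:i + window], K)
                     for i in range(signal.size)])

rng = np.random.default_rng(2)
ecg_like = np.sin(np.linspace(0, 4 * np.pi, 200))   # toy quasi-periodic signal
noisy = ecg_like + rng.normal(0, 0.2, 200)
smoothed = myriad_filter(noisy, window=9, K=0.5)
```

For large K the myriad approaches the sample mean; for small K it becomes highly resistant to impulsive outliers, which is what makes the local adaptation of K worthwhile.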
An alarm filtering system for an automated process: a multiple-agent approach
International Nuclear Information System (INIS)
Khoualdi, Kamel
1994-01-01
Nowadays, the supervision of industrial installations is increasingly complex, involving the automation of their control. A malfunction generates an avalanche of alarms. The operator in charge of supervision must face the incident and execute the right actions to recover a normal situation, but is generally drowned under the great number of alarms. Our aim, in the frame of our research, is to develop an alarm filtering system for an automated metro line, to help the operator find the main alarm responsible for the malfunction. Our work is divided into two parts, both dealing with the study and development of an alarm filtering system but using two different approaches. The first part was developed in the frame of the SARA project (an operator assistance system for an automated metro line), which is an expert system prototype helping the operators of a command center. In this part, a centralized approach was used, representing the events with a single event graph and using a global procedure to perform diagnosis. This approach showed its limits. In the second part of our work, we considered distributed artificial intelligence (DAI) techniques, and more especially the multi-agent approach. The multi-agent approach was motivated by the natural distribution of the metro line equipment and by the fact that each piece of equipment has its own local control and knowledge. Thus, each piece of equipment has been considered as an autonomous agent. Through agent cooperation, the system is able to determine the main alarm and the faulty equipment responsible for the incident. A prototype, written in SPIRAL (a tool for knowledge-based systems), is running on a workstation. This prototype has allowed the concretization and validation of our multi-agent approach. (author) [fr]
Development and evaluation of spatial point process models for epidermal nerve fibers.
Olsbo, Viktor; Myllymäki, Mari; Waller, Lance A; Särkkä, Aila
2013-06-01
We propose two spatial point process models for the spatial structure of epidermal nerve fibers (ENFs) across human skin. The models derive from two point processes, Φb and Φe, describing the locations of the base and end points of the fibers. Each point of Φe (the end point process) is connected to a unique point in Φb (the base point process). In the first model, both Φe and Φb are Poisson processes, yielding a null model of uniform coverage of the skin by end points and general baseline results and reference values for moments of key physiologic indicators. The second model provides a mechanistic model to generate end points for each base, and we model the branching structure more directly by defining Φe as a cluster process conditioned on the realization of Φb as its parent points. In both cases, we derive distributional properties for observable quantities of direct interest to neurologists such as the number of fibers per base, and the direction and range of fibers on the skin. We contrast both models by fitting them to data from skin blister biopsy images of ENFs and provide inference regarding physiological properties of ENFs. Copyright © 2013 Elsevier Inc. All rights reserved.
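The two-layer construction (a base-point process with end points clustered around each base) can be simulated as follows. The Poisson base intensity, Poisson offspring counts, and Gaussian scatter are illustrative choices, not the fitted models from the paper:

```python
import numpy as np

def simulate_enf(lam_base=50, mean_fibers=3.0, spread=0.02,
                 window=(1.0, 1.0), seed=0):
    """Base points: homogeneous Poisson in a rectangle. Each base gets a
    Poisson number of end points scattered isotropically around it, i.e.
    a cluster process conditioned on the realization of the bases."""
    rng = np.random.default_rng(seed)
    n_base = rng.poisson(lam_base * window[0] * window[1])
    bases = rng.uniform((0, 0), window, size=(n_base, 2))
    ends, parent = [], []
    for j, b in enumerate(bases):
        for _ in range(rng.poisson(mean_fibers)):
            ends.append(b + rng.normal(0, spread, 2))
            parent.append(j)
    return bases, np.array(ends), np.array(parent)

bases, ends, parent = simulate_enf()
fibers_per_base = np.bincount(parent, minlength=len(bases))
```

Summaries such as the number of fibers per base or the base-to-end distances, which the paper derives analytically, can be read directly off such simulations for comparison.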
Critical Control Points in the Processing of Cassava Tuber for Ighu ...
African Journals Online (AJOL)
Determination of the critical control points in the processing of cassava tuber into Ighu was carried out. The critical control points were determined according to the Codex guidelines for the application of the HACCP system by conducting hazard analysis. Hazard analysis involved proper examination of each processing step ...
Distinguishing different types of inhomogeneity in Neyman-Scott point processes
Czech Academy of Sciences Publication Activity Database
Mrkvička, Tomáš
2014-01-01
Roč. 16, č. 2 (2014), s. 385-395 ISSN 1387-5841 Institutional support: RVO:60077344 Keywords : clustering * growing clusters * inhomogeneous cluster centers * inhomogeneous point process * location dependent scaling * Neyman-Scott point process Subject RIV: BA - General Mathematics Impact factor: 0.913, year: 2014
The importance of topographically corrected null models for analyzing ecological point processes.
McDowall, Philip; Lynch, Heather J
2017-07-01
Analyses of point process patterns and related techniques (e.g., MaxEnt) make use of the expected number of occurrences per unit area and second-order statistics based on the distance between occurrences. Ecologists working with point process data often assume that points exist on a two-dimensional x-y plane or within a three-dimensional volume, when in fact many observed point patterns are generated on a two-dimensional surface existing within three-dimensional space. For many surfaces, however, such as the topography of landscapes, the projection from the surface to the x-y plane preserves neither area nor distance. As such, when these point patterns are implicitly projected to and analyzed in the x-y plane, our expectations of the point pattern's statistical properties may not be met. When used in hypothesis testing, we find that the failure to account for the topography of the generating surface may bias statistical tests that incorrectly identify clustering and, furthermore, may bias coefficients in inhomogeneous point process models that incorporate slope as a covariate. We demonstrate the circumstances under which this bias is significant, and present simple methods that allow point processes to be simulated with corrections for topography. These point patterns can then be used to generate "topographically corrected" null models against which observed point processes can be compared. © 2017 by the Ecological Society of America.
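A "topographically corrected" null model of the kind proposed can be built by weighting each grid cell by its surface-area element sqrt(1 + zx^2 + zy^2), so simulated points are uniform on the terrain rather than on its x-y projection. This is a minimal sketch with an invented terrain, not the authors' implementation:

```python
import numpy as np

def surface_corrected_uniform(z, n_points, seed=0):
    """Sample grid cells with probability proportional to the local
    surface-area element, then jitter points within each chosen cell."""
    rng = np.random.default_rng(seed)
    zy, zx = np.gradient(z)                      # finite-difference slopes
    area = np.sqrt(1.0 + zx**2 + zy**2)          # per-cell area element
    p = (area / area.sum()).ravel()
    idx = rng.choice(p.size, size=n_points, p=p)
    rows, cols = np.unravel_index(idx, z.shape)
    return np.column_stack([cols + rng.random(n_points),
                            rows + rng.random(n_points)])

# a Gaussian ridge: steep flanks carry more true surface area
xx, yy = np.meshgrid(np.linspace(-3, 3, 100), np.linspace(-3, 3, 100))
z = 50.0 * np.exp(-xx**2)
pts = surface_corrected_uniform(z, 2000)
```

Observed point patterns can then be compared against many such simulations, instead of against a flat-plane Poisson null model that ignores topography.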
Distribution and rate of microbial processes in ammonia-loaded air filter biofilm
DEFF Research Database (Denmark)
Juhler, Susanne; Nielsen, Lars Peter; Schramm, Andreas
2009-01-01
The in situ activity and distribution of heterotrophic and nitrifying bacteria and their potential interactions were investigated in a full-scale, two-section, trickling filter designed for biological degradation of volatile organics and NH3 in ventilation air from pig farms. The filter biofilm...
Directory of Open Access Journals (Sweden)
Hongliang Zhu
2018-01-01
Full Text Available With the development of cloud computing, its advantages of low cost and high computational ability meet the demands of the complicated computations of multimedia processing. Outsourced computation in the cloud can enable users with limited computing resources to store and process distributed multimedia application data without installing multimedia application software on local terminals, but the main problem is how to protect the security of user data in untrusted public cloud services. In recent years, privacy-preserving outsourced computation has become one of the most common methods for solving the security problems of cloud computing. However, existing schemes cannot meet the needs of large numbers of nodes and dynamic topologies. In this paper, we introduce a novel privacy-preserving outsourced computation method that combines the GM homomorphic encryption scheme with a Bloom filter to solve this problem, and propose a new privacy-preserving outsourced set intersection computation protocol. Results show that the new protocol resolves the privacy-preserving outsourced set intersection computation problem without increasing the complexity or the false positive probability. Moreover, the number of participants, the size of the input secret sets, and the online time of participants are not limited.
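Stripped of the GM homomorphic-encryption layer (and therefore of its privacy guarantees), the Bloom-filter side of such a set-intersection protocol reduces to the following sketch; the filter size, hash construction, and party roles are arbitrary assumptions:

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter: k salted SHA-256 hashes into an m-bit array.
    Membership tests admit false positives but never false negatives."""
    def __init__(self, m=1024, k=5):
        self.m, self.k = m, k
        self.bits = [0] * m

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def might_contain(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

def bloom_intersection(set_a, set_b, m=1024, k=5):
    """Candidate intersection: one party publishes a Bloom filter of its
    set; the other tests its own elements against the filter."""
    bf = BloomFilter(m, k)
    for x in set_b:
        bf.add(x)
    return {x for x in set_a if bf.might_contain(x)}

common = bloom_intersection({"a", "b", "c", "d"}, {"b", "d", "e"})
```

In the actual protocol the filter's bits would travel under GM encryption, so the cloud can evaluate membership without learning either input set.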
Directory of Open Access Journals (Sweden)
Audrey Barbakoff
2011-03-01
In the Library with the Lead Pipe welcomes Audrey Barbakoff, a librarian at the Milwaukee Public Library, and Ahniwa Ferrari, Virtual Experience Manager at the Pierce County Library System in Washington, for a point-counterpoint piece on filtering in libraries. The opinions expressed here are those of the authors and are not endorsed by their employers. [...]
Liao, Yuxi; Li, Hongbao; Zhang, Qiaosheng; Fan, Gong; Wang, Yiwen; Zheng, Xiaoxiang
2014-01-01
Decoding algorithms in motor brain-machine interfaces (BMIs) translate neural signals into movement parameters. They usually assume that the relationship between neural firing and movement is stationary, which recent studies observing time-varying neuron tuning show to be untrue. This time variation arises from neural plasticity, motor learning, and related processes, and it degrades decoding performance when the model is held fixed. To track non-stationary neuron tuning during decoding, we propose a dual-model approach based on a Monte Carlo point process filtering method that also estimates the dynamic tuning parameters. When applied to both simulated neural signals and in vivo BMI data, the proposed adaptive method outperforms the one with static tuning parameters, suggesting a promising way to design a long-lasting decoding model for brain-machine interfaces.
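As intuition for the filtering step, a discrete-time point-process (Poisson) filter alternates a prediction through a state-transition kernel with a spike-count likelihood update. The sketch below is a generic illustration with assumed names and values, not the paper's dual-model Monte Carlo algorithm.

```python
import math

def decode_step(posterior, rates, spikes, dt, trans):
    """One update of a discrete-time point-process filter over n candidate states.
    posterior: current probabilities per state; rates: firing rate (Hz) per state;
    spikes: observed spike count in this bin; trans(d): transition weight for a
    state jump of size d (normalized away by the final division)."""
    n = len(posterior)
    # Predict: diffuse the posterior with the random-walk kernel.
    pred = [sum(posterior[j] * trans(abs(i - j)) for j in range(n)) for i in range(n)]
    # Update: Poisson observation likelihood for each candidate state.
    post = []
    for i in range(n):
        lam = rates[i] * dt
        post.append(pred[i] * math.exp(-lam) * lam ** spikes)
    z = sum(post)
    return [p / z for p in post]
```

With two states, a bin containing spikes quickly concentrates the posterior on the high-rate state.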
Durantin, Gautier; Scannella, Sébastien; Gateau, Thibault; Delorme, Arnaud; Dehais, Frédéric
2015-01-01
Working memory (WM) is a key executive function for operating aircraft, especially when pilots have to recall series of air traffic control instructions. There is a need to implement tools to monitor WM, as its limitation may jeopardize flight safety. An innovative way to address this issue is to adopt a Neuroergonomics approach that merges knowledge and methods from Human Factors, System Engineering, and Neuroscience. A challenge of great importance for Neuroergonomics is to implement efficient brain imaging techniques to measure the brain at work and to design Brain Computer Interfaces (BCI). We used functional near infrared spectroscopy, as it has already been successfully used to measure WM capacity in complex environments with air traffic controllers (ATC), pilots, and unmanned vehicle operators. However, the extraction of relevant features from the raw signal in ecological environments remains a critical issue, owing to the difficulty of implementing real-time signal processing techniques without a priori knowledge. We proposed to implement the Kalman filtering approach, a signal processing technique that is efficient when the dynamics of the signal can be modeled. We based our approach on the Boynton model of hemodynamic response. We conducted a first experiment with nine participants involving a basic WM task to estimate the noise covariances of the Kalman filter. We then conducted a more ecological experiment in our flight simulator with 18 pilots who interacted with ATC instructions (two levels of difficulty). The data were processed with the same Kalman filter settings implemented in the first experiment. This filter was benchmarked against a classical pass-band IIR filter and a Moving Average Convergence Divergence (MACD) filter. Statistical analysis revealed that the Kalman filter was the most efficient at separating the two levels of load, increasing the observed effect size in prefrontal areas involved in WM. In addition, the use of a Kalman filter increased
Directory of Open Access Journals (Sweden)
Gautier Durantin
2016-01-01
Working memory is a key executive function for operating aircraft, especially when pilots have to recall series of air traffic control instructions. There is a need to implement tools to monitor working memory, as its limitation may jeopardize flight safety. An innovative way to address this issue is to adopt a Neuroergonomics approach that merges knowledge and methods from Human Factors, System Engineering and Neuroscience. A challenge of great importance for Neuroergonomics is to implement efficient brain imaging techniques to measure the brain at work and to design Brain Computer Interfaces. We used functional near infrared spectroscopy, as it has already been successfully used to measure working memory capacity in complex environments with air traffic controllers, pilots or unmanned vehicle operators. However, the extraction of relevant features from the raw signal in ecological environments remains a critical issue, owing to the difficulty of implementing real-time signal processing techniques without a priori knowledge. We proposed to implement the Kalman filtering approach, a signal processing technique that is efficient when the dynamics of the signal can be modeled. We based our approach on the Boynton model of hemodynamic response. We conducted a first experiment with 9 participants involving a basic working memory task to estimate the noise covariances of the Kalman filter. We then conducted a more ecological experiment in our flight simulator with 18 pilots who interacted with air traffic controller instructions (two levels of difficulty). The data were processed with the same Kalman filter settings implemented in the first experiment. This filter was benchmarked against a classical pass-band IIR filter and a Moving Average Convergence Divergence filter. Statistical analysis revealed that the Kalman filter was the most efficient at separating the two levels of load, increasing the observed effect size in prefrontal areas involved in working
International Nuclear Information System (INIS)
Park, S.M.; Yang, H.Y.; Song, M.J.
2001-01-01
Evaporation systems for processing liquid radioactive waste have been used in Korean PWR nuclear power plants. Theoretically, evaporation is the most desirable process in terms of decontamination factor (DF). During operation of these systems, however, various problems such as scaling and carry-over have arisen. Because these problems lower the DF, advanced liquid radwaste treatment technologies have been developed worldwide as alternatives to evaporation. The main goals of the new technologies are ALARA, ease of operation, cost effectiveness, and minimization of environmental effects. Korea Electric Power Corporation is currently developing a combined treatment process for liquid radwaste using micro-filters, ultra-filters, reverse osmosis (RO) membranes, etc., both to supplement the evaporator and to provide an alternative liquid radwaste process system for new reactors. As part of this project, a feasibility study using a Rolled Fiber-Filter (RFF) and an RO membrane has been carried out. This paper reports laboratory test results for the combined process of fiber filtration and an RO membrane module for cobalt and organics removal. The study focused in particular on boric acid permeation in the RO unit: because boric acid occupies a large fraction of the final waste volume after evaporation, a new technology such as the RO process must be evaluated with respect to boron processing. (author)
Energy Technology Data Exchange (ETDEWEB)
Ben Youssef, C; Dahhou, B; Roux, G [Centre National de la Recherche Scientifique (CNRS), 31 - Toulouse (France); Rols, J L [Institut National des Sciences Appliquees (INSA), 31 - Toulouse (France)
1996-12-31
Controlling a fixed-bed bioreactor process implies solving filtering and adaptive control problems. Estimation procedures have been developed for unmeasurable parameters, and an adaptive nonlinear controller has been built, in contrast to conventional approaches that linearize the system and apply linear control. (D.L.) 10 refs.
Mishra, Alok; Swati, D
2015-09-01
Variation in the interval between the R-R peaks of the electrocardiogram represents the modulation of cardiac oscillations by the autonomic nervous system. This variation is contaminated by anomalous signals (ectopic beats, artefacts, or noise) which mask the true behaviour of heart rate variability. In this paper, we propose a combination filter, a recursive impulse rejection filter together with a recursive 20% filter, applied recursively and preferring replacement over removal of abnormal beats, to improve the pre-processing of inter-beat intervals. We tested this novel recursive combinational method, with median replacement, by estimating the standard deviation of normal-to-normal (SDNN) beat intervals for congestive heart failure (CHF) and normal sinus rhythm subjects. This work discusses in detail the improvement in pre-processing over the single use of an impulse rejection filter with removal of abnormal beats, for the estimation of SDNN and the Poincaré plot descriptors (SD1, SD2, and SD1/SD2). We found an SDNN value of 22 ms and an SD2 value of 36 ms to be clinical indicators discriminating normal cases from CHF cases. The pre-processing is also useful in the calculation of the Lyapunov exponent, a nonlinear index: exponents calculated after the proposed pre-processing change in a way that begins to reflect the less complex behaviour of diseased states.
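The replace-rather-than-remove idea can be sketched as a recursive local-median rule: any interval deviating more than 20% from the median of its neighbourhood is replaced by that median, and passes repeat until nothing changes. This is a generic illustration with an assumed window size, not the authors' exact filter pair.

```python
import statistics

def clean_rr(rr, pct=0.20, window=5, max_iter=10):
    """Recursively replace RR intervals deviating more than `pct` from a
    local median; replacement (not removal) preserves the series length."""
    rr = list(rr)
    for _ in range(max_iter):
        changed = False
        for i in range(len(rr)):
            lo = max(0, i - window // 2)
            med = statistics.median(rr[lo:lo + window])
            if med > 0 and abs(rr[i] - med) / med > pct:
                rr[i] = med          # replace the abnormal beat
                changed = True
        if not changed:              # stop when a full pass flags nothing
            break
    return rr
```

A missed-beat artefact (one interval roughly doubled) is pulled back to the local median while the normal intervals are untouched.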
Efficient design of FIR filter based low-pass differentiators for biomedical signal processing
Directory of Open Access Journals (Sweden)
Wulf Michael
2016-09-01
This paper describes an alternative design of linear-phase low-pass differentiators with a finite impulse response (type III FIR filter). To reduce the number of filter coefficients required, the differentiator's transfer function is approximated by the Fourier series of a triangle function, which intentionally reduces the filter's transition steepness towards the stopband. The proposed low-pass differentiator design is shown to yield results similar to other published design recommendations while significantly reducing the filter order.
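A sketch of the design idea, under the assumption of a triangle target response that rises linearly (like an ideal differentiator) up to a cut-off wc and falls linearly back to zero at π; the cut-off and grid size below are illustrative choices, not values from the paper.

```python
import math

def triangle_lpd_taps(order, wc=math.pi / 2, n_grid=2048):
    """Type III (antisymmetric, even order) FIR taps whose frequency response
    magnitude approximates a triangle: differentiator band up to wc, then a
    linear roll-off to zero at pi. Coefficients come from numerically
    projecting the target onto sin(k*w)."""
    def target(w):
        return w if w <= wc else wc * (math.pi - w) / (math.pi - wc)
    half = order // 2
    b = []
    for k in range(1, half + 1):
        # b_k = (2/pi) * integral_0^pi target(w) sin(k w) dw  (midpoint rule)
        s = sum(target((j + 0.5) * math.pi / n_grid)
                * math.sin(k * (j + 0.5) * math.pi / n_grid)
                for j in range(n_grid))
        b.append((2 / math.pi) * s * (math.pi / n_grid))
    # Antisymmetric impulse response: h[half-k] = b_k/2, h[half+k] = -b_k/2.
    taps = [0.0] * (order + 1)
    for k in range(1, half + 1):
        taps[half - k] = b[k - 1] / 2.0
        taps[half + k] = -b[k - 1] / 2.0
    return taps
```

Because the triangle's Fourier coefficients decay as 1/k², few taps already give a close fit in the passband, which is the paper's motivation for the reduced filter order.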
Adepeju, M.; Rosser, G.; Cheng, T.
2016-01-01
Many physical and sociological processes are represented as discrete events in time and space. These spatio-temporal point processes are often sparse, meaning that they cannot be aggregated and treated with conventional regression models. Models based on the point process framework may be employed instead for prediction purposes. Evaluating the predictive performance of these models poses a unique challenge, as the same sparseness prevents the use of popular measures such as the root mean squ...
International Nuclear Information System (INIS)
Zhang, Bin; E, Jiaqiang; Gong, Jinke; Yuan, Wenhua; Zuo, Wei; Li, Yu; Fu, Jun
2016-01-01
Highlights: • The multidisciplinary design optimization (MDO) for the DPF is presented. • MDO model and multi-objective functions of the DPF are established. • The optimal design parameters are obtained and DPF’s performances are improved. • The optimized results are verified by experiments. • The composite regeneration process of the optimized DPF allows a higher energy saving. - Abstract: In our previous works, the diesel particulate filter (DPF) using a new composite regeneration mode by coupling microwave and ceria-manganese base catalysts is verified as an effective way to reduce the particulate matter emission of the diesel engine. In order to improve the overall performance of this DPF, its multidisciplinary design optimization (MDO) model is established based on objective functions such as pressure drop, regeneration performance, microwave energy consumption, and thermal shock resistance. Then, the DPF is optimized by using MDO method based on adaptive mutative scale chaos optimization algorithm. The optimization results show that with the help of MDO, DPF’s pressure drop is decreased by 14.5%, regeneration efficiency is increased by 17.3%, microwave energy consumption is decreased by 17.6%, and thermal deformation is decreased by 25.3%. The optimization results are also verified by experiments, and the experimental results indicate that the optimized DPF has larger filtration efficiency, better emission performance and regeneration performance, smaller pressure drop, lower wall temperature and temperature gradient, and lower microwave energy consumption.
Fan, Li; Ni, Jinren; Wu, Yanjun; Zhang, Yongyong
2009-03-15
Wastewater originating from the production of bromoamine acid was treated in a sequential system combining micro-electrolysis (ME) and a biological aerobic filter (BAF). The decolorization and COD(Cr) removal rates of the proposed system were investigated with full consideration of two major controlling factors, organic loading rate (OLR) and hydraulic retention time (HRT). The COD(Cr) removal rate was 81.2% and the chrominance removal rate reached 96.6% at an OLR of 0.56 kg m(-3) d(-1) with a total HRT of 43.4 h. Most of the chrominance was removed by the ME treatment, whereas the BAF process was more effective for COD(Cr) removal. GC-MS and HPLC-MS analyses of the contaminants revealed that 1-aminoanthraquinone, bromoamine acid, and mono-sulfonated 1,2-dichlorobenzene were the main organic components of the wastewater. The reductive transformation of the anthraquinone derivatives in the ME reactor improved the biodegradability of the wastewater and accounted for the decolorization. After long-term operation, the predominant microorganisms immobilized on the BAF carriers were observed to be rod-shaped and globular. Four bacterial strains with distinct 16S rDNA fragments in Denaturing Gradient Gel Electrophoresis (DGGE) profiles of BAF samples were identified as Variovorax sp., Sphingomonas sp., Mycobacterium sp., and Microbacterium sp.
Rare-earth doped transparent ceramics for spectral filtering and quantum information processing
Energy Technology Data Exchange (ETDEWEB)
Kunkel, Nathalie, E-mail: nathalie.kunkel@chimie-paristech.fr; Goldner, Philippe, E-mail: philippe.goldner@chimie-paristech.fr [PSL Research University, Chimie ParisTech–CNRS, Institut de Recherche de Chimie Paris, 11 rue Pierre et Marie Curie, 75005 Paris (France); Ferrier, Alban [PSL Research University, Chimie ParisTech–CNRS, Institut de Recherche de Chimie Paris, 11 rue Pierre et Marie Curie, 75005 Paris (France); Sorbonnes Universités, UPMC Univ Paris 06, 75005 Paris (France); Thiel, Charles W.; Cone, Rufus L. [Department of Physics, Montana State University, Bozeman, Montana 59717 (United States); Ramírez, Mariola O.; Bausá, Luisa E. [Departamento Física de Materiales and Instituto Nicolás Cabrera, Universidad Autónoma de Madrid, 28049 Madrid (Spain); Ikesue, Akio [World Laboratory, Mutsuno, Atsuta-ku, Nagoya 456-0023 (Japan)
2015-09-01
Homogeneous linewidths below 10 kHz are reported for the first time in high-quality Eu³⁺-doped Y₂O₃ transparent ceramics. This result is obtained on the ⁷F₀ → ⁵D₀ transition in Eu³⁺-doped Y₂O₃ ceramics and corresponds to an improvement of nearly one order of magnitude compared to previously reported values in transparent ceramics. Furthermore, we observed spectral hole lifetimes of ∼15 min, long enough to enable efficient optical pumping of the nuclear hyperfine levels. Additionally, different Eu³⁺ concentrations (up to 1.0%) were studied, resulting in an increase of up to a factor of three in the peak absorption coefficient. These results suggest that transparent ceramics can be useful in applications where narrow and deep spectral holes can be burned into highly absorbing lines, such as quantum information processing and spectral filtering.
Liu, Bo; Yan, Dongdong; Wang, Qi; Li, Song; Yang, Shaogui; Wu, Wenfei
2009-09-01
A "two-stage biological aerated filter" (T-SBAF), consisting of two columns in series, was developed to treat electroplating wastewater. Because of the low BOD/CODcr values of electroplating wastewater, a "twice start-up" procedure was employed to reduce the time needed for the microorganisms to adapt, a process that takes up to 20 days. Under steady-state conditions, the removal of CODcr and NH(4)(+)-N first increased and then decreased as the hydraulic loading increased from 0.75 to 1.5 m(3) m(-2) h(-1); the air/water ratio had the same influence on removal as it increased from 3:1 to 6:1. At a hydraulic loading of 1.20 m(3) m(-2) h(-1) and an air/water ratio of 4:1, the optimal removals of CODcr, NH(4)(+)-N, and total nitrogen (T-N) were 90.13%, 92.51%, and 55.46%, respectively. The effluent consistently met the wastewater reuse standard. Compared with a traditional BAF, the period before backwashing of the T-SBAF could be extended to 10 days, and the recovery time was considerably shortened.
Directory of Open Access Journals (Sweden)
Jingyu Sun
2014-07-01
To survive in the current shipbuilding industry, it is vitally important for shipyards to evaluate the accuracy of ship components efficiently at most manufacturing steps. Evaluating component accuracy by comparing each component's point cloud data, scanned by laser scanners, against the ship's CAD design data cannot be done efficiently when (1) the components extracted from the point cloud data contain irregular obstacles, or (2) the registration of the two data sets has no clear direction setting. This paper presents improved point cloud data processing methods to solve these problems. Constructing a k-d tree over the point cloud data speeds up the neighbor search for each point. A region growing method applied to the neighbors of a seed point extracts the continuous part of a component, while curved-surface fitting and B-spline curve fitting at the edges of the continuous part recognize neighboring domains of the same component that are divided by the shadows of obstacles. The ICP (Iterative Closest Point) algorithm then registers the two data sets, after the proper registration direction is determined by principal component analysis. In experiments conducted at a shipyard, 200 curved shell plates were extracted from the scanned point cloud data and registered against the designed CAD data using the proposed methods for accuracy evaluation. The results show that the proposed methods support point cloud data processing for accuracy evaluation efficiently in practice.
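The k-d tree neighbor search that underpins the region growing step can be sketched in a few lines. This is a minimal pure-Python illustration of the data structure, not the authors' implementation.

```python
def build_kdtree(points, depth=0):
    """Recursively build a 3-D k-d tree; the splitting axis cycles x, y, z."""
    if not points:
        return None
    axis = depth % 3
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def nearest(node, query, best=None):
    """Branch-and-bound nearest-neighbor search; returns (dist^2, point)."""
    if node is None:
        return best
    d2 = sum((a - b) ** 2 for a, b in zip(node["point"], query))
    if best is None or d2 < best[0]:
        best = (d2, node["point"])
    axis = node["axis"]
    diff = query[axis] - node["point"][axis]
    near, far = ((node["left"], node["right"]) if diff < 0
                 else (node["right"], node["left"]))
    best = nearest(near, query, best)
    if diff ** 2 < best[0]:  # search sphere crosses the splitting plane
        best = nearest(far, query, best)
    return best
```

Region growing then repeatedly pulls the neighbors of each accepted point from the tree instead of scanning the whole cloud, which is what makes the extraction tractable for millions of scanned points.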
Filtering of raw water with a partition system in the raw water pool for the process
International Nuclear Information System (INIS)
Harahap, Sentot Alibasya; Djunaidi
2003-01-01
The purpose of filtering the raw water in the pool is to reduce dissolved impurities in the water supplied by the Puspiptek PAM, as well as contamination from the surroundings. Monitoring of the raw water since 1998 has shown that its quality is not good. The partition system uses three types of screen, i.e., 10 mm openings, Mesh 60, and Mesh 100. The lower section uses a plate positioned 400 mm above the pool floor, supported by a frame of L-profile and strip plate in stainless steel (SS-304), on which the impurities are deposited. Monitoring of the filter performance shows that the filtered water is of good quality: the TDS (Total Dissolved Solids) drops to 2.5 gram/liter, compared with (4 - 8.5) gram/liter for the static type of water filtering.
Dean, Robert; Flowers, George; Sanders, Nicole; MacAllister, Ken; Horvath, Roland; Hodel, A. S.; Johnson, Wayne; Kranz, Michael; Whitley, Michael
2005-05-01
Some harsh environments, such as those encountered by aerospace vehicles and various types of industrial machinery, contain high frequency/amplitude mechanical vibrations. Unfortunately, some very useful components are sensitive to these high frequency mechanical vibrations. Examples include MEMS gyroscopes and resonators, oscillators and some micro optics. Exposure of these components to high frequency mechanical vibrations present in the operating environment can result in problems ranging from an increased noise floor to component failure. Passive micromachined silicon lowpass filter structures (spring-mass-damper) have been demonstrated in recent years. However, the performance of these filter structures is typically limited by low damping (especially if operated in near-vacuum environments) and a lack of tunability after fabrication. Active filter topologies, such as piezoelectric, electrostrictive-polymer-film and SMA have also been investigated in recent years. Electrostatic actuators, however, are utilized in many micromachined silicon devices to generate mechanical motion. They offer a number of advantages, including low power, fast response time, compatibility with silicon micromachining, capacitive position measurement and relative simplicity of fabrication. This paper presents an approach for realizing active micromachined mechanical lowpass vibration isolation filters by integrating an electrostatic actuator with the micromachined passive filter structure to realize an active mechanical lowpass filter. Although the electrostatic actuator can be used to adjust the filter resonant frequency, the primary application is for increasing the damping to an acceptable level. The physical size of these active filters is suitable for use in or as packaging for sensitive electronic and MEMS devices, such as MEMS vibratory gyroscope chips.
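As a rough illustration of the passive part of such a filter, the base-excitation transmissibility of a single spring-mass-damper stage shows how the natural frequency and damping ratio set the attenuation of high-frequency vibration. The values below are arbitrary illustrative choices, not parameters from the paper.

```python
import math

def isolator_gain(f_drive, f_n=100.0, zeta=0.5):
    """Transmissibility |X/Y| of a spring-mass-damper vibration isolator under
    base excitation: sqrt((1 + (2*zeta*r)^2) / ((1 - r^2)^2 + (2*zeta*r)^2)),
    where r = f_drive / f_n is the frequency ratio."""
    r = f_drive / f_n
    num = 1 + (2 * zeta * r) ** 2
    den = (1 - r ** 2) ** 2 + (2 * zeta * r) ** 2
    return math.sqrt(num / den)
```

Well below f_n the gain is ~1 (the payload follows the base), near f_n it peaks (which is why added electrostatic damping matters), and well above f_n it rolls off, isolating the sensitive device.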
Edit distance for marked point processes revisited: An implementation by binary integer programming
Energy Technology Data Exchange (ETDEWEB)
Hirata, Yoshito; Aihara, Kazuyuki [Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505 (Japan)
2015-12-15
We implement the edit distance for marked point processes [Suzuki et al., Int. J. Bifurcation Chaos 20, 3699–3708 (2010)] as a binary integer program. Compared with the previous implementation using minimum-cost perfect matching, the proposed implementation has two advantages: first, it lets us apply a wide variety of software and hardware, even spin glasses and coherent Ising machines, to calculate the edit distance for marked point processes; second, it runs faster than the previous implementation when the difference between the numbers of events in the two time windows of a marked point process is large.
The cylindrical K-function and Poisson line cluster point processes
DEFF Research Database (Denmark)
Møller, Jesper; Safavimanesh, Farzaneh; Rasmussen, Jakob G.
Poisson line cluster point processes, is also introduced. Parameter estimation based on moment methods or Bayesian inference for this model is discussed when the underlying Poisson line process and the cluster memberships are treated as hidden processes. To illustrate the methodologies, we analyze two...
Selection vector filter framework
Lukac, Rastislav; Plataniotis, Konstantinos N.; Smolka, Bogdan; Venetsanopoulos, Anastasios N.
2003-10-01
We provide a unified framework of nonlinear vector techniques that output the lowest-ranked vector. The proposed framework constitutes a generalized filter class for multichannel signal processing. The new class of nonlinear selection filters is based on robust order-statistic theory and minimization of the weighted distance to the other input samples. The method can be configured to perform a variety of filtering operations, including previously developed techniques such as the vector median, the basic vector directional filter, the directional distance filter, weighted vector median filters, and weighted directional filters. This wide range of operations is guaranteed by a filter structure with two independent weight vectors for the angular and distance domains of the vector space. To adapt the filter parameters to varying signal and noise statistics, we also provide generalized optimization algorithms that take advantage of weighted median filters and of the relationship between the standard median filter and the vector median filter; thus both statistical and deterministic aspects of the filter design process can be addressed. We show that the proposed method has the required properties: the capability of modelling the underlying system in the application at hand, robustness to errors in the model of the underlying system, the availability of a training procedure, and simplicity of filter representation, analysis, design, and implementation. Simulation studies also indicate that the new filters are computationally attractive and perform excellently in environments corrupted by bit errors and impulsive noise.
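As one concrete member of the class, the vector median filter simply returns the lowest-ranked vector of the window; a minimal unweighted sketch with Euclidean distance (the full framework adds separate angular and distance weight vectors):

```python
def vector_median(vectors):
    """Return the input vector minimizing the sum of Euclidean distances to
    all other vectors in the window (the lowest-ranked vector). Because the
    output is always one of the inputs, no new colors/values are invented."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(vectors, key=lambda v: sum(dist(v, w) for w in vectors))
```

Applied to a window of RGB pixels, an impulsive outlier is far from every other sample, so it can never be selected, which is the robustness property the abstract refers to.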
Bridging the gap between a stationary point process and its Palm distribution
Nieuwenhuis, G.
1994-01-01
In the context of stationary point processes measurements are usually made from a time point chosen at random or from an occurrence chosen at random. That is, either the stationary distribution P or its Palm distribution P° is the ruling probability measure. In this paper an approach is presented to
Hierarchical spatial point process analysis for a plant community with high biodiversity
DEFF Research Database (Denmark)
Illian, Janine B.; Møller, Jesper; Waagepetersen, Rasmus
2009-01-01
A complex multivariate spatial point pattern of a plant community with high biodiversity is modelled using a hierarchical multivariate point process model. In the model, interactions between plants with different post-fire regeneration strategies are of key interest. We consider initially a maxim...
Definition of distance for nonlinear time series analysis of marked point process data
Energy Technology Data Exchange (ETDEWEB)
Iwayama, Koji, E-mail: koji@sat.t.u-tokyo.ac.jp [Research Institute for Food and Agriculture, Ryukoku Univeristy, 1-5 Yokotani, Seta Oe-cho, Otsu-Shi, Shiga 520-2194 (Japan); Hirata, Yoshito; Aihara, Kazuyuki [Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505 (Japan)
2017-01-30
Marked point process data are time series of discrete events accompanied by values, such as economic trades, earthquakes, and lightning strikes. A distance for marked point process data allows us to apply nonlinear time series analysis to such data. We propose a distance for marked point process data which can be calculated much faster than the existing distance when the number of marks is small. Furthermore, under some assumptions, the Kullback–Leibler divergences between posterior distributions for neighbors defined by this distance are small. We performed numerical simulations showing that analysis based on the proposed distance is effective. - Highlights: • A new distance for marked point process data is proposed. • The distance can be computed fast enough for a small number of marks. • A method to optimize the parameter values of the distance is also proposed. • Numerical simulations indicate that analysis based on the distance is effective.
Karacan, C. Özgen; Olea, Ricardo A.
2013-01-01
Coal seam degasification and its success are important for controlling methane, and thus for the health and safety of coal miners. During the course of degasification, properties of coal seams change. Thus, the changes in coal reservoir conditions and in-place gas content as well as methane emission potential into mines should be evaluated by examining time-dependent changes and the presence of major heterogeneities and geological discontinuities in the field. In this work, time-lapsed reservoir and fluid storage properties of the New Castle coal seam, Mary Lee/Blue Creek seam, and Jagger seam of Black Warrior Basin, Alabama, were determined from gas and water production history matching and production forecasting of vertical degasification wellbores. These properties were combined with isotherm and other important data to compute gas-in-place (GIP) and its change with time at borehole locations. Time-lapsed training images (TIs) of GIP and GIP difference corresponding to each coal and date were generated by using these point-wise data and Voronoi decomposition on the TI grid, which included faults as discontinuities for expansion of Voronoi regions. Filter-based multiple-point geostatistical simulations, which were preferred in this study due to anisotropies and discontinuities in the area, were used to predict time-lapsed GIP distributions within the study area. Performed simulations were used for mapping spatial time-lapsed methane quantities as well as their uncertainties within the study area.
Process and results of analytical framework and typology development for POINT
DEFF Research Database (Denmark)
Gudmundsson, Henrik; Lehtonen, Markku; Bauler, Tom
2009-01-01
POINT is a project about how indicators are used in practice; to what extent and in what way indicators actually influence, support, or hinder policy and decision making processes, and what could be done to enhance the positive role of indicators in such processes. The project needs an analytical......, a set of core concepts and associated typologies, a series of analytic schemes proposed, and a number of research propositions and questions for the subsequent empirical work in POINT....
FFT swept filtering: a bias-free method for processing fringe signals in absolute gravimeters
Křen, Petr; Pálinkáš, Vojtech; Mašika, Pavel; Val'ko, Miloš
2018-05-01
Absolute gravimeters, based on laser interferometry, are widely used for many applications in geoscience and metrology. Although currently the most accurate FG5 and FG5X gravimeters declare standard uncertainties at the level of 2-3 μGal, their inherent systematic errors affect the gravity reference determined by international key comparisons based predominantly on the use of FG5-type instruments. The measurement results for FG5-215 and FG5X-251 clearly showed that the measured g-values depend on the size of the fringe signal and that this effect might be approximated by a linear regression with a slope of up to 0.030 μGal/mV. However, these empirical results do not enable one to identify the source of the effect or to determine a reasonable reference fringe level for correcting g-values in an absolute sense. Therefore, both gravimeters were equipped with new measuring systems (according to Křen et al. in Metrologia 53:27-40, 2016. https://doi.org/10.1088/0026-1394/53/1/27, applied for the FG5), running in parallel with the original systems. The new systems use an analogue-to-digital converter HS5 to digitize the fringe signal and a new method of fringe signal analysis based on FFT swept bandpass filtering. We demonstrate that the source of the fringe size effect is connected to a distortion of the fringe signal by the electronic components used in the FG5(X) gravimeters. To obtain a bias-free g-value, the FFT swept method should be applied for the determination of zero-crossings. A comparison of g-values obtained from the new and the original systems clearly shows that the original system might be biased by approximately 3-5 μGal due to improper processing of the distorted fringe signal.
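A minimal sketch of the zero-crossing idea, with all signal parameters assumed: a fringe tone distorted by an added second harmonic has systematically shifted zero-crossings, and an FFT bandpass around the local fundamental recovers them:

```python
import numpy as np

fs = 1.0e6                          # sampling rate in Hz (assumed)
t = np.arange(0, 0.02, 1.0 / fs)
f0 = 5.0e3                          # local fringe frequency in this window (assumed)
clean = np.sin(2 * np.pi * f0 * t)
# Electronic distortion, modelled here as an added 2nd harmonic, shifts
# the zero-crossing times and hence biases the derived g-value
distorted = clean + 0.2 * np.sin(2 * np.pi * 2 * f0 * t + 0.5)

def fft_bandpass(x, fs, f_lo, f_hi):
    """Zero all spectral components outside [f_lo, f_hi]."""
    spec = np.fft.rfft(x)
    freq = np.fft.rfftfreq(len(x), 1.0 / fs)
    spec[(freq < f_lo) | (freq > f_hi)] = 0.0
    return np.fft.irfft(spec, len(x))

filtered = fft_bandpass(distorted, fs, 0.5 * f0, 1.5 * f0)

def rising_zero_crossings(x, t):
    """Sub-sample zero-crossing times via linear interpolation."""
    i = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]
    return t[i] - x[i] * (t[i + 1] - t[i]) / (x[i + 1] - x[i])

zc_true = rising_zero_crossings(clean, t)
zc_raw = rising_zero_crossings(distorted, t)
zc_filt = rising_zero_crossings(filtered, t)
```

In the real instrument the fringe frequency sweeps upward during free fall, so the passband must be swept along with it window by window; this fixed-frequency example only illustrates why bandpass filtering removes the harmonic-induced crossing shift.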
Image enhancement by spatial frequency post-processing of images obtained with pupil filters
Estévez, Irene; Escalera, Juan C.; Stefano, Quimey Pears; Iemmi, Claudio; Ledesma, Silvia; Yzuel, María J.; Campos, Juan
2016-12-01
The use of apodizing or superresolving filters improves the performance of an optical system in different frequency bands. This improvement can be seen as an increase in the OTF value compared to the OTF for the clear aperture. In this paper we propose a method to enhance the contrast of an image in both its low and its high frequencies. The method is based on the generation of a synthetic Optical Transfer Function, by multiplexing the OTFs given by the use of different non-uniform transmission filters on the pupil. We propose to capture three images, one obtained with a clear pupil, one obtained with an apodizing filter that enhances the low frequencies and another one taken with a superresolving filter that improves the high frequencies. In the Fourier domain the three spectra are combined by using smoothed passband filters, and then the inverse transform is performed. We show that we can create an enhanced image better than the image obtained with the clear aperture. To evaluate the performance of the method, bar tests (sinusoidal tests) with different frequency content are used. The results show that a contrast improvement in the high and low frequencies is obtained.
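The multiplexing step can be sketched in one dimension with toy OTF shapes and smoothed (sigmoid) passband weights; none of the curves below are measured, they only illustrate how the synthetic OTF can exceed the clear-aperture OTF in both the low- and high-frequency bands:

```python
import numpy as np

N = 1024
f = np.fft.rfftfreq(N)              # normalized spatial frequency axis
fc = 0.35                           # cutoff of the clear aperture (assumed)
u = np.clip(f / fc, 0.0, 1.0)

# Toy one-dimensional OTFs (illustrative shapes, not measured data):
otf_clear = 1.0 - u                           # clear pupil
otf_apo = np.exp(-(f / (0.4 * fc)) ** 2)      # apodizer: stronger at low frequencies
otf_sr = (1.0 - u) * (0.3 + 1.4 * u)          # superresolver: stronger at high frequencies

# Smoothed passband weights used to multiplex the three image spectra
w_lo = 1.0 / (1.0 + np.exp((f - 0.2 * fc) / (0.05 * fc)))
w_hi = 1.0 / (1.0 + np.exp(-(f - 0.6 * fc) / (0.05 * fc)))
w_mid = 1.0 - w_lo - w_hi

# Effective OTF of the combined (multiplexed) image
otf_syn = w_lo * otf_apo + w_mid * otf_clear + w_hi * otf_sr
```

Combining the three captured spectra with these weights and inverse-transforming is equivalent to imaging through `otf_syn`, which is why the reconstructed image can beat the clear-aperture image at both ends of the band.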
Asymmetric designed sintered metal filter elements in the HTF process of LILW vitrification plant
International Nuclear Information System (INIS)
Roehlig, Rainer
2005-01-01
Sintered metal filter elements have been used for years and have operated successfully in a variety of applications. The technical and economic advantages of recently developed asymmetric metallic membrane elements, which operate as surface filters, are shown in comparison with standard sintered metal filter cartridges. The permeability, particle retention and back-flushing performance have been improved. To achieve this, an asymmetric structure was designed in which an active filtration layer is applied onto a coarse porous metal support made of the same alloy. The economic benefits for customers are low maintenance and reduced investment cost, as well as the defined particle retention required by users.
Spatial Mixture Modelling for Unobserved Point Processes: Examples in Immunofluorescence Histology.
Ji, Chunlin; Merl, Daniel; Kepler, Thomas B; West, Mike
2009-12-04
We discuss Bayesian modelling and computational methods in analysis of indirectly observed spatial point processes. The context involves noisy measurements on an underlying point process that provide indirect and noisy data on locations of point outcomes. We are interested in problems in which the spatial intensity function may be highly heterogeneous, and so is modelled via flexible nonparametric Bayesian mixture models. Analysis aims to estimate the underlying intensity function and the abundance of realized but unobserved points. Our motivating applications involve immunological studies of multiple fluorescent intensity images in sections of lymphatic tissue where the point processes represent geographical configurations of cells. We are interested in estimating intensity functions and cell abundance for each of a series of such data sets to facilitate comparisons of outcomes at different times and with respect to differing experimental conditions. The analysis is heavily computational, utilizing recently introduced MCMC approaches for spatial point process mixtures and extending them to the broader new context here of unobserved outcomes. Further, our example applications are problems in which the individual objects of interest are not simply points, but rather small groups of pixels; this implies a need to work at an aggregate pixel region level and we develop the resulting novel methodology for this. Two examples with immunofluorescence histology data demonstrate the models and computational methodology.
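As a heavily simplified, non-Bayesian stand-in for the mixture intensity model (a finite Gaussian mixture fitted by EM instead of the nonparametric Bayesian mixture with MCMC used in the paper, and ignoring the unobserved-point and pixel-aggregation aspects), one can estimate a heterogeneous intensity surface from point locations; the two-cluster "cell" layout below is synthetic:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic "cell" locations: two clusters of differing spread in a section
pts = np.vstack([
    rng.normal([2.0, 2.0], 0.3, size=(150, 2)),
    rng.normal([6.0, 5.0], 0.5, size=(250, 2)),
])

gm = GaussianMixture(n_components=2, random_state=0).fit(pts)

# Estimated intensity function: expected number of points per unit area,
# i.e. total count times the fitted mixture density
def intensity(xy):
    return len(pts) * np.exp(gm.score_samples(xy))
```

The Bayesian mixture in the paper additionally puts priors on the number of components and on abundance, which EM on a fixed number of components does not capture.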
Peters, Andre; Nehls, Thomas; Wessolek, Gerd
2016-06-01
Weighing lysimeters with appropriate data filtering yield the most precise and unbiased information for precipitation (P) and evapotranspiration (ET). A recently introduced filter scheme for such data is the AWAT (Adaptive Window and Adaptive Threshold) filter (Peters et al., 2014). The filter applies an adaptive threshold to separate significant from insignificant mass changes, guaranteeing that P and ET are not overestimated, and uses a step interpolation between the significant mass changes. In this contribution we show that the step interpolation scheme, which reflects the resolution of the measuring system, can lead to unrealistic prediction of P and ET, especially if they are required in high temporal resolution. We introduce linear and spline interpolation schemes to overcome these problems. To guarantee that medium to strong precipitation events abruptly following low or zero fluxes are not smoothed in an unfavourable way, a simple heuristic selection criterion is used, which attributes such precipitations to the step interpolation. The three interpolation schemes (step, linear and spline) are tested and compared using a data set from a grass-reference lysimeter with 1 min resolution, ranging from 1 January to 5 August 2014. The selected output resolutions for P and ET prediction are 1 day, 1 h and 10 min. As expected, the step scheme yielded reasonable flux rates only for a resolution of 1 day, whereas the other two schemes are well able to yield reasonable results for any resolution. The spline scheme returned slightly better results than the linear scheme concerning the differences between filtered values and raw data. Moreover, this scheme allows continuous differentiability of filtered data so that any output resolution for the fluxes is sound. Since computational burden is not problematic for any of the interpolation schemes, we suggest always using the spline scheme.
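The three interpolation schemes can be compared on a synthetic smooth cumulative mass record; the 30-min point spacing stands in for the significant mass changes found by the adaptive threshold, and all numbers are invented:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic smooth cumulative mass loss by ET over one day (mm), 1-min grid
t = np.arange(0, 1441)                         # minutes
mass = -1.0 * (1.0 - np.cos(np.pi * t / 1440.0))

# "Significant" mass changes every 30 min (stand-in for the adaptive
# threshold step of the AWAT filter)
t_sig = t[::30]
m_sig = mass[::30]

# Step interpolation (original scheme): hold the last significant value
step = m_sig[np.searchsorted(t_sig, t, side="right") - 1]
# Linear and spline interpolation (the proposed schemes)
linear = np.interp(t, t_sig, m_sig)
spline = CubicSpline(t_sig, m_sig)(t)

def rmse(x):
    return float(np.sqrt(np.mean((x - mass) ** 2)))
```

Differentiating the spline gives a continuous flux estimate at any output resolution, which is the property the authors exploit; the step scheme's derivative is zero between significant changes and therefore only meaningful at coarse (daily) resolution.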
Lessard, Jean-Philippe; Weinstein, Ben G; Borregaard, Michael K; Marske, Katharine A; Martin, Danny R; McGuire, Jimmy A; Parra, Juan L; Rahbek, Carsten; Graham, Catherine H
2016-01-01
A persistent challenge in ecology is to tease apart the influence of multiple processes acting simultaneously and interacting in complex ways to shape the structure of species assemblages. We implement a heuristic approach that relies on explicitly defining species pools and permits assessment of the relative influence of the main processes thought to shape assemblage structure: environmental filtering, dispersal limitations, and biotic interactions. We illustrate our approach using data on the assemblage composition and geographic distribution of hummingbirds, a comprehensive phylogeny and morphological traits. The implementation of several process-based species pool definitions in null models suggests that temperature, but not precipitation or dispersal limitation, acts as the main regional filter of assemblage structure. Incorporating this environmental filter directly into the definition of assemblage-specific species pools revealed an otherwise hidden pattern of phylogenetic evenness, indicating that biotic interactions might further influence hummingbird assemblage structure. Such hidden patterns of assemblage structure call for a reexamination of a multitude of phylogenetic- and trait-based studies that did not explicitly consider potentially important processes in their definition of the species pool. Our heuristic approach provides a transparent way to explore patterns and refine interpretations of the underlying causes of assemblage structure.
De Paepe, Domien; Coudijzer, Katleen; Noten, Bart; Valkenborg, Dirk; Servaes, Kelly; De Loose, Marc; Diels, Ludo; Voorspoels, Stefan; Van Droogenbroeck, Bart
2015-04-15
In this study, the advantages and disadvantages of the innovative low-oxygen spiral-filter press system were studied in comparison with the belt press commonly applied in small and medium-sized enterprises for the production of cloudy apple juice. At equivalent throughput, a higher juice yield could be achieved with the spiral-filter press. A more turbid juice with a higher content of suspended solids could also be produced. The avoidance of enzymatic browning during juice extraction led to an attractive yellowish juice with an elevated phenolic content. Moreover, it was found that juice produced with the spiral-filter press shows a higher retention of phenolic compounds during the downstream processing steps and storage. The results demonstrate the advantage of the spiral-filter press over the belt press in the production of a high-quality cloudy apple juice rich in phenolic compounds, without the use of oxidation-inhibiting additives. Copyright © 2014 Elsevier Ltd. All rights reserved.
DEFF Research Database (Denmark)
Lin, Katie
to precipitation and corrosion. Manganese and iron can be removed either physico-chemically, biologically, or by a combination of both. The physico-chemical oxidation and precipitation of manganese can theoretically be achieved by aeration, but this process is slow unless the pH is raised far above neutral, making the removal...... of manganese by simple aeration and precipitation under normal drinking water treatment conditions insignificant. Manganese may also be oxidized autocatalytically. Iron is usually easier to remove. First, iron is rapidly chemically oxidized by oxygen at neutral pH, followed by precipitation and filtration......-filter, where iron is removed. Step 2: Filtration in an after-filter where e.g. ammonium and manganese are removed. The treatment relies on microbial processes and may present an alternative, greener and more sustainable approach for drinking water production, using fewer chemicals and less energy than chemical (e...
Institute of Scientific and Technical Information of China (English)
Anonymous
2010-01-01
The research purpose of this paper is to show the limitations of the existing radiometric normalization approaches and their disadvantages in change detection of artificial objects by comparing the existing approaches, on the basis of which a preprocessing approach to radiometric consistency, based on the wavelet transform and a spatial low-pass filter, has been devised. This approach first separates the high-frequency and low-frequency information by wavelet transform. Then, the processing for relative radiometric consistency based on a low-pass filter is conducted on the low-frequency parts. After processing, an inverse wavelet transform is conducted to obtain the result image. The experimental results show that this approach can substantially reduce the influence on change detection of linear or nonlinear radiometric differences in multi-temporal images.
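A rough sketch of that pipeline using a one-level orthonormal Haar transform (the paper does not specify the wavelet, and the normalization rule below, replacing the local low-pass brightness of the target's approximation band with that of the reference, is one plausible reading of "a low-pass filter conducted on the low-frequency parts"):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def haar_1d(x, axis):
    s = np.swapaxes(x, 0, axis)
    lo = (s[0::2] + s[1::2]) / np.sqrt(2)
    hi = (s[0::2] - s[1::2]) / np.sqrt(2)
    return np.swapaxes(lo, 0, axis), np.swapaxes(hi, 0, axis)

def ihaar_1d(lo, hi, axis):
    l, h = np.swapaxes(lo, 0, axis), np.swapaxes(hi, 0, axis)
    out = np.empty((l.shape[0] * 2,) + l.shape[1:])
    out[0::2] = (l + h) / np.sqrt(2)
    out[1::2] = (l - h) / np.sqrt(2)
    return np.swapaxes(out, 0, axis)

def dwt2(img):                      # one-level 2-D Haar decomposition
    lo, hi = haar_1d(img, 0)
    ll, lh = haar_1d(lo, 1)
    hl, hh = haar_1d(hi, 1)
    return ll, lh, hl, hh

def idwt2(ll, lh, hl, hh):          # exact inverse of dwt2
    return ihaar_1d(ihaar_1d(ll, lh, 1), ihaar_1d(hl, hh, 1), 0)

# Synthetic reference/target pair: target has a radiometric gain + offset
rng = np.random.default_rng(1)
ref = rng.uniform(0, 255, (64, 64))
tgt = ref * 0.8 + 20 + rng.normal(0, 2, (64, 64))

# Adjust only the low-frequency radiometry; keep the target's detail bands
ll_t, lh_t, hl_t, hh_t = dwt2(tgt)
ll_r, _, _, _ = dwt2(ref)
ll_adj = ll_t - uniform_filter(ll_t, 9) + uniform_filter(ll_r, 9)
normalized = idwt2(ll_adj, lh_t, hl_t, hh_t)
```

Because the detail bands pass through untouched, genuine high-frequency changes between the two dates survive normalization, which is the property the authors need for change detection of artificial objects.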
Burggraeve, A; Van den Kerkhof, T; Hellings, M; Remon, J P; Vervaet, C; De Beer, T
2011-04-18
Fluid bed granulation is a batch process, which is characterized by the processing of raw materials for a predefined period of time, consisting of a fixed spraying phase and a subsequent drying period. The present study shows the multivariate statistical modeling and control of a fluid bed granulation process based on in-line particle size distribution (PSD) measurements (using spatial filter velocimetry) combined with continuous product temperature registration using a partial least squares (PLS) approach. Via the continuous in-line monitoring of the PSD and product temperature during granulation of various reference batches, a statistical batch model was developed allowing the real-time evaluation and acceptance or rejection of future batches. Continuously monitored PSD and product temperature process data of 10 reference batches (X-data) were used to develop a reference batch PLS model, regressing the X-data versus the batch process time (Y-data). Two PLS components captured 98.8% of the variation in the X-data block. Score control charts in which the average batch trajectory and upper and lower control limits are displayed were developed. Next, these control charts were used to monitor 4 new test batches in real time and to immediately detect any deviations from the expected batch trajectory. By real-time evaluation of new batches using the developed control charts and by computation of contribution plots of deviating process behavior at a certain time point, batch losses or reprocessing can be prevented. Immediately after batch completion, all PSD and product temperature information (i.e., a batch progress fingerprint) was used to estimate some granule properties (density and flowability) at an early stage, which can improve batch release time. Individual PLS models relating the computed scores (X) of the reference PLS model (based on the 10 reference batches) to the density and flowability, respectively, as Y-matrix were developed. The scores of the 4 test
DEFF Research Database (Denmark)
Häggström, Olle; Lieshout, Marie-Colette van; Møller, Jesper
1999-01-01
The area-interaction process and the continuum random-cluster model are characterized in terms of certain functional forms of their respective conditional intensities. In certain cases, these two point process models can be derived from a bivariate point process model which in many respects...... is simpler to analyse and simulate. Using this correspondence we devise a two-component Gibbs sampler, which can be used for fast and exact simulation by extending the recent ideas of Propp and Wilson. We further introduce a Swendsen-Wang type algorithm. The relevance of the results within spatial statistics...
Modeling the neutron spin-flip process in a time-of-flight spin-resonance energy filter
Parizzi, A A; Klose, F
2002-01-01
A computer program for modeling the neutron spin-flip process in a novel time-of-flight (TOF) spin-resonance energy filter has been developed. The software allows studying the applicability of the device in various areas of spallation neutron scattering instrumentation, for example as a dynamic TOF monochromator. The program uses a quantum-mechanical approach to calculate the local spin-dependent spectra and is essential for optimizing the magnetic field profiles along the resonator axis. (orig.)
SINGLE TREE DETECTION FROM AIRBORNE LASER SCANNING DATA USING A MARKED POINT PROCESS BASED METHOD
Directory of Open Access Journals (Sweden)
J. Zhang
2013-05-01
Full Text Available Tree detection and reconstruction is of great interest in large-scale city modelling. In this paper, we present a marked point process model to detect single trees from airborne laser scanning (ALS) data. We consider single trees in the canopy height model (CHM) recovered from ALS as a realization of a point process of circles. Unlike the traditional marked point process approach, we sample the model in a constrained configuration space by making use of image processing techniques. A Gibbs energy is defined on the model, containing a data term which judges the fitness of the model with respect to the data, and a prior term which incorporates prior knowledge of object layouts. We search for the optimal configuration through a steepest gradient descent algorithm. The presented hybrid framework was tested on three forest plots, and experiments show the effectiveness of the proposed method.
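A heavily simplified stand-in for the constrained sampling idea: candidate circle centres are restricted to local maxima of a synthetic CHM above a minimum-height prior. The Gibbs energy and its gradient-descent optimization are not reproduced, and the two-crown CHM below is invented:

```python
import numpy as np
from scipy.ndimage import maximum_filter, gaussian_filter

# Synthetic canopy height model (CHM) with two Gaussian tree crowns
x, y = np.meshgrid(np.arange(60), np.arange(60))
chm = 15 * np.exp(-((x - 20) ** 2 + (y - 20) ** 2) / 30.0) \
    + 12 * np.exp(-((x - 42) ** 2 + (y - 38) ** 2) / 20.0)
chm = gaussian_filter(chm, 1.0)

# Constraint on the configuration space: a candidate tree centre must be
# a local height maximum and exceed a minimum-height prior (2 m, assumed)
peaks = (chm == maximum_filter(chm, size=9)) & (chm > 2.0)
centres = np.argwhere(peaks)          # (row, col) candidate circle centres
```

In the full model these candidates would seed circles whose radii (marks) are then optimized against the data and prior terms of the Gibbs energy.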
International Nuclear Information System (INIS)
Venkateswaran, G.; Gokhale, B.K.
2007-01-01
Iron turbidity is observed in the intermediate cooling circuit of the active process water system (APWS) of Kaiga Generating Station (KGS). Deposition of hydrous/hydrated oxides of iron on the plate-type heat exchanger, which is employed to transfer heat from the APWS to the active process cooling water system (APCWS), can in turn result in higher moderator D₂O temperatures due to reduced heat transfer. Characterization of the turbidity showed that the major component is γ-FeOOH. An in-house designed and fabricated electrochemical filter (ECF) containing an alternate array of 33 pairs of cathode and anode graphite felts was successfully tested for the removal of iron turbidity from the APWS of Kaiga Generating Station Unit No. 1 (KGS No. 1). A total volume of 52.5 m³ of water was processed using the filter. At an average inlet turbidity of 5.6 nephelometric turbidity units (NTU), the outlet turbidity observed from the ECF was 1.6 NTU. A maximum flow rate (10 L·min⁻¹) and an applied potential of 18.0-20.0 V were found to yield an average turbidity-removal efficiency of ≈75%. When the experiment was terminated, a throughput of > 2.08 × 10⁵ NTU·liters had been realized without any reduction in the removal efficiency. Removal of the internals of the filter showed that only the bottom 11 pairs of felts had brownish deposits, while the remaining felts looked clean and unused. (orig.)
Second-order analysis of structured inhomogeneous spatio-temporal point processes
DEFF Research Database (Denmark)
Møller, Jesper; Ghorbani, Mohammad
Statistical methodology for spatio-temporal point processes is in its infancy. We consider second-order analysis based on pair correlation functions and K-functions, first for general inhomogeneous spatio-temporal point processes and second for inhomogeneous spatio-temporal Cox processes. Assuming...... spatio-temporal separability of the intensity function, we clarify different meanings of second-order spatio-temporal separability. One is second-order spatio-temporal independence and relates e.g. to log-Gaussian Cox processes with an additive covariance structure of the underlying spatio......-temporal Gaussian process. Another concerns shot-noise Cox processes with a separable spatio-temporal covariance density. We propose diagnostic procedures for checking hypotheses of second-order spatio-temporal separability, which we apply on simulated and real data (the UK 2001 epidemic foot and mouth disease data).
Truccolo, Wilson
2016-11-01
This review presents a perspective on capturing collective dynamics in recorded neuronal ensembles based on multivariate point process models, inference of low-dimensional dynamics, and coarse graining of spatiotemporal measurements. A general probabilistic framework for continuous-time point processes is reviewed, with an emphasis on multivariate nonlinear Hawkes processes with exogenous inputs. A point process generalized linear model (PP-GLM) framework for the estimation of discrete-time multivariate nonlinear Hawkes processes is described. The approach is illustrated with the modeling of collective dynamics in neocortical neuronal ensembles recorded in human and non-human primates, and the prediction of single-neuron spiking. A complementary approach to capturing collective dynamics based on low-dimensional dynamics ("order parameters") inferred via latent state-space models with point process observations is presented. The approach is illustrated by inferring and decoding low-dimensional dynamics in primate motor cortex during naturalistic reach and grasp movements. Finally, we briefly review hypothesis tests based on conditional inference and spatiotemporal coarse graining for assessing collective dynamics in recorded neuronal ensembles. Published by Elsevier Ltd.
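A minimal discrete-time PP-GLM sketch in the spirit of the framework described: simulate a single-neuron Hawkes-type process with assumed refractory (lag-1) and self-exciting (lag-2) history effects, then recover them with a log-link Poisson GLM on lagged spike covariates:

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)
T = 60000                                     # number of time bins (assumed 1 ms)
b0, b_refr, b_exc = np.log(0.02), -2.0, 1.0   # assumed baseline and history effects

# Simulate: conditional intensity is an exponential function of spike history
s = np.zeros(T, dtype=int)
for t in range(2, T):
    lam = np.exp(b0 + b_refr * s[t - 1] + b_exc * s[t - 2])
    s[t] = rng.random() < min(lam, 1.0)

# PP-GLM design matrix: lagged spike history as covariates
X = np.column_stack([s[1:-1], s[:-2]])        # lag 1, lag 2
y = s[2:]
glm = PoissonRegressor(alpha=1e-12).fit(X, y)
```

The fitted coefficients recover the refractory (negative) and self-exciting (positive) history effects; the multivariate version simply stacks lagged histories of all neurons plus exogenous covariates into `X`.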
Du, Fuyi; Xie, Qingjie; Fang, Longxiang; Su, Hang
2016-08-01
Nutrients (nitrogen and phosphorus) from agricultural non-point source (NPS) pollution have been increasingly recognized as a major contributor to the deterioration of water quality in recent years. The purpose of this article is to investigate the discrepancies in the interception of nutrients from agricultural NPS pollution by eco-soil reactors using different filling schemes. Parallel eco-soil reactors of laboratory scale were created and filled with filter media, such as grit, zeolite, limestone, and gravel. Three filling schemes were adopted: increasing-sized filling (I-filling), decreasing-sized filling (D-filling), and blend-sized filling (B-filling). The systems were operated intermittently with simulated rainstorm runoff. The nutrient removal efficiency, biomass accumulation and vertical dissolved oxygen (DO) distribution were used to assess the performance of the eco-soil. The results showed that the B-filling reactor presented an ideal DO profile for partial nitrification-denitrification across the eco-soil, and B-filling was the most stable of the three fillings in the change of bio-film accumulation with depth. Simultaneous and highest removals of NH4(+)-N (57.74-70.52%), total nitrogen (43.69-54.50%), and total phosphorus (42.50-55.00%) were obtained in the B-filling, demonstrating the efficiency of the blend filling scheme of eco-soil for oxygen transfer and biomass accumulation to cope with agricultural NPS pollution.
Directory of Open Access Journals (Sweden)
Bartłomiej Kraszewski
2015-06-01
Full Text Available The article presents the results of research on the effect that the radiometric quality of point cloud RGB attributes has on color-based segmentation. In the research, a point cloud with a resolution of 5 mm, acquired with a FARO Photon 120 scanner, described a fragment of an office room, and color images were taken by various digital cameras. The images were acquired by an SLR Nikon D3X, an SLR Canon D200 integrated with the laser scanner, a compact camera Panasonic TZ-30, and a mobile phone digital camera. Color information from the images was spatially related to the point cloud in FARO Scene software. The color-based segmentation of the test data was performed with the use of a developed application named "RGB Segmentation". The application was based on the public Point Cloud Library (PCL) and allowed subsets of points fulfilling the segmentation criteria to be extracted from the source point cloud using the region growing method. Using the developed application, the segmentation of four tested point clouds containing different RGB attributes from the various images was performed. The segmentation process was evaluated by comparing segments acquired using the developed application with segments extracted manually by an operator. The following items were compared: the number of obtained segments, the number of correctly identified objects, and the correctness of the segmentation process. The best segmentation correctness and the most identified objects were obtained using the data with RGB attributes from the Nikon D3X images. Based on the results it was found that the quality of the RGB attributes of the point cloud had an impact only on the number of identified objects. For the correctness of the segmentation, as well as its error, no apparent relationship between the quality of color information and the result of the process was found. Keywords: terrestrial laser scanning, color-based segmentation, RGB attribute, region growing method, digital images, point cloud
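A toy region-growing segmentation in the spirit of the described application. This version compares each candidate to the seed colour (a simplification of PCL's region-growing criteria) and works on a 2-D pixel grid rather than a 3-D point cloud; the image and tolerance are invented:

```python
import numpy as np
from collections import deque

def region_grow(rgb, tol=30.0):
    """Label 4-connected regions whose colour stays within `tol` (Euclidean
    RGB distance) of the region's seed colour."""
    h, w, _ = rgb.shape
    labels = -np.ones((h, w), dtype=int)
    cur = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx] != -1:
                continue
            seed = rgb[sy, sx].astype(float)
            q = deque([(sy, sx)])
            labels[sy, sx] = cur
            while q:
                yy, xx = q.popleft()
                for ny, nx in ((yy + 1, xx), (yy - 1, xx), (yy, xx + 1), (yy, xx - 1)):
                    if (0 <= ny < h and 0 <= nx < w and labels[ny, nx] == -1
                            and np.linalg.norm(rgb[ny, nx].astype(float) - seed) < tol):
                        labels[ny, nx] = cur
                        q.append((ny, nx))
            cur += 1
    return labels

# Demo: a 10x10 image with a red left half and a blue right half
img = np.zeros((10, 10, 3), dtype=np.uint8)
img[:, :5] = (200, 30, 30)
img[:, 5:] = (30, 30, 200)
labels = region_grow(img, tol=30.0)
```

Noisy RGB attributes widen the colour spread inside a physical object, so a fixed tolerance splits it into several segments, which is how radiometric quality ends up affecting the number of identified objects.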
International Nuclear Information System (INIS)
Holmberg, J.
1997-04-01
The thesis models risk management as an optimal control problem for a stochastic process. The approach classifies the decisions made by management into three categories according to the control methods of a point process: (1) planned process lifetime, (2) modification of the design, and (3) operational decisions. The approach is used for the optimization of plant shutdown criteria and surveillance test strategies of a hypothetical nuclear power plant
Energy Technology Data Exchange (ETDEWEB)
Holmberg, J [VTT Automation, Espoo (Finland)
1997-04-01
The thesis models risk management as an optimal control problem for a stochastic process. The approach classifies the decisions made by management into three categories according to the control methods of a point process: (1) planned process lifetime, (2) modification of the design, and (3) operational decisions. The approach is used for the optimization of plant shutdown criteria and surveillance test strategies of a hypothetical nuclear power plant. 62 refs. The thesis also includes five previous publications by the author.
Apparatus and method for implementing power saving techniques when processing floating point values
Kim, Young Moon; Park, Sang Phill
2017-10-03
An apparatus and method are described for reducing power when reading and writing graphics data. For example, one embodiment of an apparatus comprises: a graphics processor unit (GPU) to process graphics data including floating point data; a set of registers, at least one of the registers of the set partitioned to store the floating point data; and encode/decode logic to reduce a number of binary 1 values being read from the at least one register by causing a specified set of bit positions within the floating point data to be read out as 0s rather than 1s.
Effect of processing conditions on oil point pressure of moringa oleifera seed.
Aviara, N A; Musa, W B; Owolarafe, O K; Ogunsina, B S; Oluwole, F A
2015-07-01
Seed oil expression is an important economic venture in rural Nigeria. The traditional techniques for carrying out the operation are not only energy-sapping and time-consuming but also wasteful. In order to reduce the tedium involved in the expression of oil from Moringa oleifera seed and to develop efficient equipment for carrying out the operation, the oil point pressure of the seed was determined under different processing conditions using a laboratory press. The processing conditions employed were moisture content (4.78, 6.00, 8.00 and 10.00% wet basis), heating temperature (50, 70, 85 and 100 °C) and heating time (15, 20, 25 and 30 min). Results showed that the oil point pressure increased with increasing seed moisture content, but decreased with increasing heating temperature and heating time within the above ranges. The highest oil point pressure of 1.1239 MPa was obtained at 10.00% moisture content, 50 °C heating temperature and 15 min heating time. The lowest oil point pressure of 0.3164 MPa occurred at 4.78% moisture content, 100 °C heating temperature and 30 min heating time. Analysis of variance (ANOVA) showed that all the processing variables and their interactions had a significant effect on the oil point pressure of Moringa oleifera seed at the 1% level of significance. This was further demonstrated using response surface methodology (RSM). Tukey's test and Duncan's multiple range analysis successfully separated the means, and a multiple regression equation was used to express the relationship between the oil point pressure of Moringa oleifera seed and its moisture content, processing temperature, heating time and their interactions. The model yielded coefficients that enabled the oil point pressure of the seed to be predicted with a very high coefficient of determination.
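A multiple regression with two-way interaction terms of the kind described can be sketched on the study's factor levels; the coefficients below are invented for illustration, and on noiseless synthetic data least squares recovers them exactly:

```python
import numpy as np
from itertools import product

# Factor levels from the study: moisture (% w.b.), temperature (C), time (min)
M = [4.78, 6.0, 8.0, 10.0]
T = [50.0, 70.0, 85.0, 100.0]
H = [15.0, 20.0, 25.0, 30.0]
runs = np.array(list(product(M, T, H)))       # 4x4x4 full factorial

# Assumed "true" coefficients: intercept, main effects, two-way interactions
beta = np.array([1.5, 0.04, -0.006, -0.01, -0.0002, -0.0004, 0.00005])

def design(x):
    m, t, h = x[:, 0], x[:, 1], x[:, 2]
    return np.column_stack([np.ones(len(x)), m, t, h, m * t, m * h, t * h])

p = design(runs) @ beta                        # noiseless synthetic pressures
beta_hat, *_ = np.linalg.lstsq(design(runs), p, rcond=None)
```

With real, noisy measurements the same fit would yield the coefficient of determination reported by the authors instead of an exact recovery.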
Energy Technology Data Exchange (ETDEWEB)
Nebot Sanz, E.; Romero Garcia, L.I.; Quiroga Alonso, J.M.; Sales Marquez, D. (Departamento de Ingenieria Quimica, Universidad de Cadiz, Cadiz (Spain))
1994-01-01
In this work, the optimization of the thermophilic anaerobic process using anaerobic filter technology was studied. The anaerobic filter was fed with wine-distillery wastewater. The experiments were carried out in lab-scale downflow anaerobic filter reactors. The reactors were filled with a highly porous plastic medium (Flocor-R) with a high surface-to-volume ratio. Tests were run to determine the maximum organic load attainable in the system for which both the depurative efficiency and the methane production were optimum. Likewise, the effect of organic load on the anaerobic filter performance was studied. 15 refs. (Author)
Visual Information Processing Based on Spatial Filters Constrained by Biological Data.
1978-12-01
was provided by Pantle and Sekuler (1968). They found that the detection of gratings was affected most by adapting (see Section 6.1.1) to square...evidence for certain eye scans being directed by spatial information in filtered images is given. Eye scan paths of a portrait of a young girl (Figure 08)...multistable objects to more complex objects such as the man-girl figure of Fisher (1968), decision boundaries that are a natural concomitant to any pattern
Adaptive Filter Used as a Dynamic Compensator in Automatic Gauge Control of Strip Rolling Processes
Directory of Open Access Journals (Sweden)
N. ROMAN
2010-12-01
Full Text Available The paper deals with a control structure for the strip thickness in a rolling mill of quarto type (AGC – Automatic Gauge Control). It performs two functions: compensation of errors induced by the non-ideal dynamics of the tracking systems driven by the AGC system, and adaptation of the control to changes in the dynamic properties of the tracking systems. The compensation of dynamic errors is achieved through inverse models of the tracking system, implemented as adaptive filters.
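The inverse-model compensation can be sketched with an LMS-adapted FIR filter; the first-order model of the tracking system and all gains below are assumed, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, mu = 20000, 4, 0.05            # samples, FIR length, LMS step size

# Tracking system modelled as a first-order lag (assumed):
#   y[n] = 0.5*y[n-1] + 0.5*r[n], exact FIR inverse: r[n] = 2*y[n] - y[n-1]
r = rng.standard_normal(N)           # reference (desired) signal
y = np.zeros(N)
for n in range(1, N):
    y[n] = 0.5 * y[n - 1] + 0.5 * r[n]

# LMS adaptation of an FIR compensator w so that (w * y)[n] tracks r[n],
# i.e. w converges to an inverse model of the tracking system
w = np.zeros(L)
for n in range(L, N):
    window = y[n - L + 1:n + 1][::-1]   # most recent sample first
    e = r[n] - w @ window
    w += mu * e * window
```

Because the adaptation runs continuously, the same loop tracks slow changes in the plant dynamics, which is the second function (adaptation) the paper assigns to the filter.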
Linear and quadratic models of point process systems: contributions of patterned input to output.
Lindsay, K A; Rosenberg, J R
2012-08-01
In the 1880's Volterra characterised a nonlinear system using a functional series connecting continuous input and continuous output. Norbert Wiener, in the 1940's, circumvented problems associated with the application of Volterra series to physical problems by deriving from it a new series of terms that are mutually uncorrelated with respect to Gaussian processes. Subsequently, Brillinger, in the 1970's, introduced a point-process analogue of Volterra's series connecting point-process inputs to the instantaneous rate of point-process output. We derive here a new series from this analogue in which its terms are mutually uncorrelated with respect to Poisson processes. This new series expresses how patterned input in a spike train, represented by third-order cross-cumulants, is converted into the instantaneous rate of an output point-process. Given experimental records of suitable duration, the contribution of arbitrary patterned input to an output process can, in principle, be determined. Solutions for linear and quadratic point-process models with one and two inputs and a single output are investigated. Our theoretical results are applied to isolated muscle spindle data in which the spike trains from the primary and secondary endings from the same muscle spindle are recorded in response to stimulation of one and then two static fusimotor axons in the absence and presence of a random length change imposed on the parent muscle. For a fixed mean rate of input spikes, the analysis of the experimental data makes explicit which patterns of two input spikes contribute to an output spike. Copyright © 2012 Elsevier Ltd. All rights reserved.
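The first-order (linear) part of such a point-process input-output model can be illustrated in discrete time: simulate an output whose instantaneous rate is a baseline plus a kernel convolved with an input spike train, then estimate the kernel by baseline-corrected cross-correlation. The kernel and rates below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200000
lam0 = 0.02                                    # baseline output rate per bin (assumed)
kernel = np.array([0.0, 0.1, 0.2, 0.1, 0.0])   # assumed first-order kernel

x = (rng.random(T) < 0.05).astype(float)       # Poisson-like input spike train
rate = lam0 + np.convolve(x, kernel)[:T]       # instantaneous output rate
y = (rng.random(T) < np.clip(rate, 0, 1)).astype(float)

# Linear kernel estimate: mean output following an input spike at lag u,
# corrected by the overall mean output rate
lags = np.arange(len(kernel))
k_hat = np.array([y[u:][x[:T - u] == 1].mean() for u in lags]) - y.mean()
```

Identifying the contribution of *patterned* input (pairs of input spikes), as in the paper, requires the analogous third-order cross-cumulant estimates rather than this first-order cross-correlation.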
Using a Virtual Experiment to Analyze Infiltration Process from Point to Grid-cell Size Scale
Barrios, M. I.
2013-12-01
Hydrological science requires a consistent theoretical corpus describing the relationships between dominant physical processes at different spatial and temporal scales. However, the strong spatial heterogeneities and non-linearities of these processes make multiscale conceptualizations difficult to develop, so an understanding of scaling is a key issue for advancing the science. This work focuses on the use of virtual experiments to address the scaling of vertical infiltration from a physically based model at the point scale to a simplified, physically meaningful modeling approach at the grid-cell scale. Compared with field experimentation, numerical simulations have the advantage of handling a wide range of boundary and initial conditions. The aim of the work was to show the utility of numerical simulations for discovering relationships between the hydrological parameters at both scales, and to use this synthetic experience as a medium for teaching the complex nature of this hydrological process. The Green-Ampt model was used to represent vertical infiltration at the point scale, and a conceptual storage model was employed to simulate the infiltration process at the grid-cell scale. Lognormal and beta probability distribution functions were assumed to represent the heterogeneity of soil hydraulic parameters at the point scale. The linkages between point-scale and grid-cell-scale parameters were established by inverse simulations based on the mass balance equation and averaging of the flow at the point scale. Results have shown numerical stability issues under particular conditions, revealed the complex non-linear relationships between the models' parameters at both scales, and indicate that the parameterization of point-scale processes at the coarser scale is governed by the amplification of non-linear effects. The findings of these simulations have been used by the students to identify potential research questions on scale issues
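The point-scale model named above can be sketched as follows. Green-Ampt cumulative infiltration F(t) satisfies the implicit equation F - ψΔθ ln(1 + F/(ψΔθ)) = Ks t, solved here by fixed-point iteration; the infiltration capacity is f = Ks (1 + ψΔθ/F). The parameter values used in the usage example are illustrative silt-loam-like numbers, not taken from the paper.

```python
import math

def green_ampt_cumulative(t, Ks, psi, dtheta, tol=1e-10):
    """Cumulative infiltration F(t) from the implicit Green-Ampt equation
    F - psi*dtheta*ln(1 + F/(psi*dtheta)) = Ks*t, by fixed-point iteration."""
    if t <= 0:
        return 0.0
    pd = psi * dtheta
    F_new = F = Ks * t  # initial guess
    for _ in range(200):
        F_new = Ks * t + pd * math.log(1.0 + F / pd)
        if abs(F_new - F) < tol:
            break
        F = F_new
    return F_new

def green_ampt_rate(F, Ks, psi, dtheta):
    """Infiltration capacity f = Ks*(1 + psi*dtheta/F)."""
    return Ks * (1.0 + psi * dtheta / F)
```

For example, with Ks = 1.09 cm/h, ψ = 11.01 cm and Δθ = 0.3, the infiltration capacity decreases monotonically towards Ks as the wetting front advances, which is the behaviour the grid-cell storage model has to reproduce in aggregate.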
Commissioning of the production process of iodine-131 by dry distillation (pre-operational tests)
International Nuclear Information System (INIS)
Alanis M, J.
2002-12-01
With the aim of fine-tuning the production process of ¹³¹I, one objective of the pre-operational tests was to verify the operation of each of the following components: the heating systems, the vacuum system, the mechanical system and the peripheral equipment that form part of the iodine-131 production process. A further objective was to establish the optimal parameters to be applied at each step of obtaining iodine-131. This objective is particularly important because the components of the equipment are new and their behaviour during the process differs from that of the equipment on which the experimental studies were carried out. (Author)
Directory of Open Access Journals (Sweden)
Houzeng Han
2016-07-01
Full Text Available Precise Point Positioning (PPP) makes use of undifferenced pseudorange and carrier-phase measurements with ionospheric-free (IF) combinations to achieve centimeter-level positioning accuracy. Conventionally, the IF ambiguities are estimated as float values. To improve the PPP positioning accuracy and shorten the convergence time, the integer phase clock model with between-satellites single-difference (BSSD) operation is used to recover the integer property. However, the continuity and availability of stand-alone PPP are largely restricted by the observation environment, and the positioning performance degrades significantly in challenging environments where fewer than five satellites are visible. A commonly used remedy is to integrate a low-cost inertial sensor to improve the positioning performance and robustness. In this study, a tightly coupled (TC) algorithm is implemented by integrating PPP with an inertial navigation system (INS) using an extended Kalman filter (EKF). The navigation states, inertial sensor errors and GPS error states are estimated together. The troposphere-constrained approach, which uses an external tropospheric delay as a virtual observation, is applied to further improve the ambiguity-fixed height positioning accuracy, and an improved adaptive filtering strategy is implemented to improve the covariance modelling with respect to the realistic noise effect. A field vehicular test with a geodetic GPS receiver and a low-cost inertial sensor was conducted to validate the improvement in positioning performance with the proposed approach. The results show that the positioning accuracy is improved with inertial aiding: centimeter-level positioning accuracy is achievable during the test, and the PPP/INS TC integration achieves a fast re-convergence after signal outages. For troposphere-constrained solutions, a significant improvement in the height component has been obtained. The overall positioning accuracies
Analysis of fire and smoke threat to off-gas HEPA filters in a transuranium processing plant
International Nuclear Information System (INIS)
Alvares, N.J.
1988-01-01
The author performed an analysis of the fire risk to the high-efficiency particulate air (HEPA) filters that provide ventilation containment for a transuranium processing plant at the Oak Ridge National Laboratory. A fire-safety survey by an independent fire-protection consulting company had identified the HEPA filters in the facility's off-gas containment ventilation system as being at risk from fire effects. The ventilation networks and flow dynamics were studied independently, and typical fuel loads were analyzed. It was found that virtually no condition for fire initiation exists and that, even if a fire started, its consequences would be minimal as a result of standard shut-down procedures. Moreover, the installed fire-protection system would limit any fire and thus further reduce smoke or heat exposure to the ventilation components. 4 references, 4 figures, 5 tables
Designing H-shaped micromechanical filters
International Nuclear Information System (INIS)
Arhaug, O P; Soeraasen, O
2006-01-01
This paper investigates the design constraints and possibilities involved in designing a micromechanical band-pass filter for intermediate frequencies (e.g. 10 MHz). The class of filters is based on coupled clamped-clamped beams constituting an H-shaped structure. A primary beam can be electrostatically actuated in one of its harmonic modes, setting the filter centre frequency. The motion is transferred to an accompanying beam of equal dimensions by a mechanical coupling beam. The placement of the coupling points of the quarter-wavelength coupling beam connecting the vertically resonating beams is critical with respect to the bandwidth of the filters. Of special concern has been to investigate realistic dimensions that allow the filters to be fabricated in an actual foundry process, and to find out how the choice of materials and actual dimensions affects the performance
Hazard analysis and critical control point (HACCP) for an ultrasound food processing operation.
Chemat, Farid; Hoarau, Nicolas
2004-05-01
Emerging technologies such as ultrasound (US), used for food and drink production, can introduce hazards to product safety. Classical quality control methods are inadequate to control these hazards. Hazard analysis and critical control points (HACCP) is the most secure and cost-effective method for controlling possible product contamination or cross-contamination due to physical or chemical hazards during production. The following case study on the application of HACCP to a US food-processing operation demonstrates how the hazards at the critical control points of the process are effectively controlled through the implementation of HACCP.
The application of prototype point processes for the summary and description of California wildfires
Nichols, K.; Schoenberg, F.P.; Keeley, J.E.; Bray, A.; Diez, D.
2011-01-01
A method for summarizing repeated realizations of a space-time marked point process, known as prototyping, is discussed and applied to catalogues of wildfires in California. Prototype summaries are constructed for varying time intervals using California wildfire data from 1990 to 2006. Previous work on prototypes for temporal and space-time point processes is extended here to include methods for computing prototypes with marks and the incorporation of prototype summaries into hierarchical clustering algorithms, the latter of which is used to delineate fire seasons in California. Other results include summaries of patterns in the spatial-temporal distribution of wildfires within each wildfire season. © 2011 Blackwell Publishing Ltd.
Mass measurement on the rp-process waiting point ⁷²Kr
Energy Technology Data Exchange (ETDEWEB)
Rodriguez, D. [Gesellschaft für Schwerionenforschung mbH, Darmstadt (Germany)]; Kolhinen, V.S. [Jyväskylä Univ. (Finland)]; Audi, G. [CSNSM-IN2P3-CNRS, Orsay (France)]; and others
2004-06-01
The mass of ⁷²Kr, one of the three major waiting points in the astrophysical rp-process, was measured for the first time with the Penning trap mass spectrometer ISOLTRAP. The measurement yielded a relative mass uncertainty of δm/m = 1.2 × 10⁻⁷ (δm = 8 keV). Other Kr isotopes, also needed for astrophysical calculations, were measured with accuracy improved by more than one order of magnitude. We use the ISOLTRAP masses of ⁷²⁻⁷⁴Kr to reanalyze the role of the ⁷²Kr waiting point in the rp-process during X-ray bursts. (orig.)
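As a quick arithmetic check (ours, not the paper's), the quoted relative uncertainty follows from the absolute uncertainty δm = 8 keV and the approximate mass of ⁷²Kr, taken here as roughly 72 atomic mass units:

```python
# Consistency check of the quoted uncertainty for the 72Kr mass:
# delta_m / m with delta_m = 8 keV and m(72Kr) approximated as 72 u.
AMU_KEV = 931494.1            # 1 atomic mass unit in keV/c^2
ratio = 8.0 / (72 * AMU_KEV)  # ≈ 1.2e-7, matching the quoted value
```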
Chemidlin Prévost-Bouré, Nicolas; Dequiedt, Samuel; Thioulouse, Jean; Lelièvre, Mélanie; Saby, Nicolas P A; Jolivet, Claudy; Arrouays, Dominique; Plassart, Pierre; Lemanceau, Philippe; Ranjard, Lionel
2014-01-01
Spatial scaling of microorganisms has been demonstrated over the last decade. However, the processes and environmental filters shaping soil microbial community structure on a broad spatial scale still need to be refined and ranked. Here, we compared bacterial and fungal community composition turnovers through a biogeographical approach on the same soil sampling design at a broad spatial scale (area range: 13300 to 31000 km²): i) to examine their spatial structuring; ii) to investigate the relative importance of environmental selection and spatial autocorrelation in determining their community composition turnover; and iii) to identify and rank the relevant environmental filters and scales involved in their spatial variations. Molecular fingerprinting of soil bacterial and fungal communities was performed on 413 soils from four French regions of contrasting environmental heterogeneity (Landes, …) to assess the communities' composition turnovers. The relative importance of processes and filters was assessed by distance-based redundancy analysis. This study demonstrates significant community composition turnover rates for soil bacteria and fungi, which were dependent on the region. Bacterial and fungal community composition turnovers were mainly driven by environmental selection, explaining from 10% to 20% of community composition variations, but spatial variables also explained 3% to 9% of the total variance. These variables highlighted significant spatial autocorrelation of both communities unexplained by the measured environmental variables, which could partly be explained by dispersal limitations. Although the identified filters and their hierarchy were dependent on the region and organism, selection was systematically based on a common group of environmental variables: pH, trophic resources, texture and land use. Spatial autocorrelation was also important at coarse (80 to 120 km radius) and/or medium (40 to 65 km radius) spatial scales, suggesting dispersal limitations at these scales.
Depth Images Filtering In Distributed Streaming
Directory of Open Access Journals (Sweden)
Dziubich Tomasz
2016-04-01
Full Text Available In this paper, we propose a distributed system for processing point clouds and transferring them over a computer network subject to effectiveness-related requirements. We compare point cloud filters, focusing on their usage for streaming optimization. For the filtering step of the stream-processing pipeline we evaluate four filters: Voxel Grid, Radius Outlier Removal, Statistical Outlier Removal and Pass Through. For each filter we perform a series of tests evaluating the impact on point cloud size and transmission frequency (analysed for various fps ratios). We present results of the optimization process used for point cloud consolidation in a distributed environment and describe the processing of the point clouds before and after transmission. Pre- and post-processing allow the user to send the cloud over the network without delays. The proposed pre-processing compression of the cloud and the post-processing reconstruction are focused on ensuring that the end-user application obtains the cloud with a given precision.
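Of the filters evaluated, the voxel-grid filter is the simplest to sketch: all points falling into the same cubic cell are replaced by their centroid, which is what makes it attractive for shrinking a cloud before transmission. The NumPy version below is an illustration of the idea, not PCL's implementation; `leaf_size` is the voxel edge length.

```python
import numpy as np

def voxel_grid_filter(points, leaf_size):
    """Downsample an (N, 3) point cloud by replacing all points that
    fall into the same cubic voxel with their centroid."""
    idx = np.floor(points / leaf_size).astype(np.int64)
    # One row per occupied voxel; `inverse` maps each point to its voxel.
    _, inverse, counts = np.unique(idx, axis=0, return_inverse=True,
                                   return_counts=True)
    out = np.zeros((counts.size, 3))
    np.add.at(out, inverse, points)   # sum the points per voxel
    return out / counts[:, None]      # centroid = sum / count
```

The output size is bounded by the number of occupied voxels, so the leaf size directly trades cloud precision against transmission volume.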
International Nuclear Information System (INIS)
Serizawa, Ken-ichi; Yamazaki, Masami
1998-01-01
A filtering and concentrating device is prepared by assembling a porous ceramic filtering material, having a pore diameter of 1 μm or less and secured by a support, to the filtering device main body. The porous ceramic filtering material preferably comprises a surface portion with pores of 1 μm diameter or less and a hollow ceramic body whose filtering flow channels have a diameter greater than that of the pores on the surface portion. The ratio of the diameter to the thickness of the hollow ceramic body is set to greater than 50:1. The device precisely filters and concentrates radioactive liquid wastes containing insoluble solids generated from a nuclear power plant, conducting solid/liquid separation to form a filtrate and concentrated wastes with a mass concentration of 20% or more. With such a constitution, stable filtration and concentration can be conducted while reducing clogging of the filtering materials. In addition, the frequency of filter material exchange can be reduced. (I.N.)
Efficient LIDAR Point Cloud Data Managing and Processing in a Hadoop-Based Distributed Framework
Wang, C.; Hu, F.; Sha, D.; Han, X.
2017-10-01
Light Detection and Ranging (LiDAR) is one of the most promising technologies in surveying and mapping, city management, forestry, object recognition, computer vision engineering and other fields. However, it is challenging to efficiently store, query and analyze high-resolution 3D LiDAR data due to its volume and complexity. In order to improve the productivity of LiDAR data processing, this study proposes a Hadoop-based framework to efficiently manage and process LiDAR data in a distributed and parallel manner, taking advantage of Hadoop's storage and computing capabilities. At the same time, the Point Cloud Library (PCL), an open-source project for 2D/3D image and point cloud processing, is integrated with HDFS and MapReduce to run the LiDAR data analysis algorithms provided by PCL in a parallel fashion. The experimental results show that the proposed framework can efficiently manage and process big LiDAR data.
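The MapReduce partitioning idea behind such a framework can be caricatured in plain Python: a map step keys each point by a 2-D tile index, and a reduce step aggregates per tile (here a point count and mean height). The tile size and per-tile statistics are illustrative choices of ours, not the paper's pipeline.

```python
from collections import defaultdict

def map_to_tile(point, tile_size=100.0):
    """Map step: key each (x, y, z) point by its 2-D tile index."""
    x, y, z = point
    return (int(x // tile_size), int(y // tile_size)), point

def reduce_tiles(pairs):
    """Reduce step: gather points per tile and emit (count, mean height)."""
    tiles = defaultdict(list)
    for key, p in pairs:
        tiles[key].append(p)
    return {k: (len(v), sum(p[2] for p in v) / len(v))
            for k, v in tiles.items()}
```

In a real deployment the map and reduce functions run on separate Hadoop workers over HDFS splits; the spatial keying is what lets each tile be processed independently.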
End point detection in ion milling processes by sputter-induced optical emission spectroscopy
International Nuclear Information System (INIS)
Lu, C.; Dorian, M.; Tabei, M.; Elsea, A.
1984-01-01
The characteristic optical emission from the sputtered material during ion milling processes can provide an unambiguous indication of the presence of the specific etched species. By monitoring the intensity of a representative emission line, the etching process can be precisely terminated at an interface. Enhancement of the etching end point signal is possible by using a dual-channel photodetection system operating in ratio or difference mode. Installation of the optical detection system on an existing etching chamber has been greatly facilitated by the use of optical fibers. Using a commercial ion milling system, experimental data for a number of etching processes have been obtained. The results demonstrate that sputter-induced optical emission spectroscopy offers many advantages over other techniques for detecting the etching end point of ion milling processes
Digital analyzer for point processes based on first-in-first-out memories
Basano, Lorenzo; Ottonello, Pasquale; Schiavi, Enore
1992-06-01
We present an entirely new version of a multipurpose instrument designed for the statistical analysis of point processes, especially those characterized by high bunching. A long sequence of pulses can be recorded in the RAM bank of a personal computer via a suitably designed front end which employs a pair of first-in-first-out (FIFO) memories; these allow one to build an analyzer that, besides being simpler from the electronic point of view, is capable of sustaining much higher intensity fluctuations of the point process. The overflow risk of the device is evaluated by treating the FIFO pair as a queueing system. The apparatus was tested using both a deterministic signal and a sequence of photoelectrons obtained from laser light scattered by random surfaces.
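The overflow evaluation via queueing theory can be mimicked with a toy discrete-time simulation, standing in for the FIFO pair under bursty arrivals (a Geo/Geo/1-style queue with Bernoulli arrival and service; the depth and probabilities are hypothetical, not the instrument's parameters):

```python
import random

def fifo_overflow_prob(depth, arrival_p, service_p, n_steps=100_000, seed=1):
    """Estimate the per-step overflow probability of a FIFO of given depth
    under a discrete-time Bernoulli arrival/service model."""
    random.seed(seed)
    q = overflows = 0
    for _ in range(n_steps):
        if random.random() < arrival_p:      # a pulse arrives this step
            if q < depth:
                q += 1
            else:
                overflows += 1               # FIFO full: pulse lost
        if q > 0 and random.random() < service_p:
            q -= 1                           # one pulse read out
    return overflows / n_steps
```

When the service rate comfortably exceeds the arrival rate, overflow becomes vanishingly rare even for modest depths, which is the regime the analyzer is designed to stay in during intensity bursts.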
Energy Technology Data Exchange (ETDEWEB)
Bumbalek, A.
1986-01-02
This is a process for manufacturing a filter material for cleaning industrial or internal combustion engine exhaust gases, and the filter material manufactured according to the process. The filter material is made from the mineralized combustion product of tropical fruit peel, particularly banana skins and orange peels, burnt at 820°C to 840°C in an oxidising atmosphere that excludes the formation of carbon; the product is granulated with carrier materials or compressed.
ON THE ESTIMATION OF DISTANCE DISTRIBUTION FUNCTIONS FOR POINT PROCESSES AND RANDOM SETS
Directory of Open Access Journals (Sweden)
Dietrich Stoyan
2011-05-01
Full Text Available This paper discusses various estimators for the nearest neighbour distance distribution function D of a stationary point process and for the quadratic contact distribution function Hq of a stationary random closed set. It recommends the use of Hanisch's estimator of D, which is of Horvitz-Thompson type, and the minus-sampling estimator of Hq. This recommendation is based on simulations for Poisson processes and Boolean models.
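The edge-correction problem these estimators address can be illustrated with the simplest border-corrected (reduced-sample) estimator of D, which, for each distance r, uses only points at least r away from the window boundary. This is a sketch of the minus-sampling idea rather than the Hanisch estimator the paper recommends (which reweights points Horvitz-Thompson style instead of discarding them).

```python
import numpy as np

def nn_dist_border_estimator(points, window, r):
    """Reduced-sample (border-corrected) estimate of the nearest-neighbour
    distance distribution D(r) in a rectangular window.
    points: (N, 2) array; window: (xmin, xmax, ymin, ymax); r: scalar/array."""
    xmin, xmax, ymin, ymax = window
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = d.min(axis=1)                       # nearest-neighbour distances
    b = np.minimum.reduce([points[:, 0] - xmin, xmax - points[:, 0],
                           points[:, 1] - ymin, ymax - points[:, 1]])
    r = np.atleast_1d(r)                     # distance to window boundary is b
    return np.array([(nn[b >= ri] <= ri).mean() if (b >= ri).any() else np.nan
                     for ri in r])
```

Discarding boundary points removes the bias from unobserved neighbours outside the window, at the cost of a smaller effective sample, which is exactly the inefficiency the Hanisch estimator improves on.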
Analysis of the stochastic channel model by Saleh & Valenzuela via the theory of point processes
DEFF Research Database (Denmark)
Jakobsen, Morten Lomholt; Pedersen, Troels; Fleury, Bernard Henri
2012-01-01
and underlying features, like the intensity function of the component delays and the delay-power intensity. The flexibility and clarity of the mathematical instruments utilized to obtain these results lead us to conjecture that the theory of spatial point processes provides a unifying mathematical framework...
Kaplan-Meier estimators of distance distributions for spatial point processes
Baddeley, A.J.; Gill, R.D.
1997-01-01
When a spatial point process is observed through a bounded window, edge effects hamper the estimation of characteristics such as the empty space function $F$, the nearest neighbour distance distribution $G$, and the reduced second order moment function $K$. Here we propose and study product-limit
Two step estimation for Neyman-Scott point process with inhomogeneous cluster centers
Czech Academy of Sciences Publication Activity Database
Mrkvička, T.; Muška, Milan; Kubečka, Jan
2014-01-01
Roč. 24, č. 1 (2014), s. 91-100 ISSN 0960-3174 R&D Projects: GA ČR(CZ) GA206/07/1392 Institutional support: RVO:60077344 Keywords : bayesian method * clustering * inhomogeneous point process Subject RIV: EH - Ecology, Behaviour Impact factor: 1.623, year: 2014
Dense range images from sparse point clouds using multi-scale processing
Do, Q.L.; Ma, L.; With, de P.H.N.
2013-01-01
Multi-modal data processing based on visual and depth/range images has become relevant in computer vision for 3D reconstruction applications such as city modeling, robot navigation, etc. In this paper, we generate high-accuracy dense range images from sparse point clouds to facilitate such
Fast covariance estimation for innovations computed from a spatial Gibbs point process
DEFF Research Database (Denmark)
Coeurjolly, Jean-Francois; Rubak, Ege
In this paper, we derive an exact formula for the covariance of two innovations computed from a spatial Gibbs point process and suggest a fast method for estimating this covariance. We show how this methodology can be used to estimate the asymptotic covariance matrix of the maximum pseudo...
A Systematic Approach to Process Evaluation in the Central Oklahoma Turning Point (COTP) Partnership
Tolma, Eleni L.; Cheney, Marshall K.; Chrislip, David D.; Blankenship, Derek; Troup, Pam; Hann, Neil
2011-01-01
Formation is an important stage of partnership development. Purpose: To describe the systematic approach to process evaluation of a Turning Point initiative in central Oklahoma during the formation stage. The nine-month collaborative effort aimed to develop an action plan to promote health. Methods: A sound planning framework was used in the…
International Nuclear Information System (INIS)
Bukhari, W; Hong, S-M
2015-01-01
Motion-adaptive radiotherapy aims to deliver a conformal dose to the target tumour with minimal normal tissue exposure by compensating for tumour motion in real time. The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting and gating respiratory motion that combines a model-based and a model-free Bayesian framework in a cascade structure. The algorithm, named EKF-GPR+, implements a gating function without pre-specifying a particular region of the patient's breathing cycle. It first employs an extended Kalman filter (LCM-EKF) to predict the respiratory motion and then uses a model-free Gaussian process regression (GPR) to correct the error of the LCM-EKF prediction. The GPR is a non-parametric Bayesian algorithm that yields predictive variance under Gaussian assumptions. EKF-GPR+ uses the predictive variance from the GPR component to capture the uncertainty in the LCM-EKF prediction error and to identify in advance breathing points with a higher probability of large prediction error; this identification allows us to pause the treatment beam over such instances. EKF-GPR+ implements the gating function using simple calculations based on the predictive variance, with no additional detection mechanism. A sparse approximation of the GPR algorithm is employed to realize EKF-GPR+ in real time. Extensive numerical experiments performed on a large database of 304 respiratory motion traces show that EKF-GPR+ effectively reduces the prediction error in a root-mean-square (RMS) sense by employing the gating function, albeit at the cost of a reduced duty cycle. As an example, EKF-GPR+ reduces the patient-wise RMS error to 37%, 39% and 42%…
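The predict-then-gate idea can be sketched with a toy 1-D constant-velocity Kalman filter. This is not the paper's LCM-EKF/GPR cascade: each sample is predicted one step ahead, and any prediction whose innovation exceeds a multiple of the predicted standard deviation is flagged for a beam pause. All noise parameters below are hypothetical.

```python
import numpy as np

def kf_predict_gate(z, dt=0.1, q=1e-2, r=1e-2, gate_sigma=2.0):
    """Toy 1-D constant-velocity Kalman filter: predict each next sample
    one step ahead and gate (flag) samples whose innovation exceeds
    gate_sigma times the predicted standard deviation."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    H = np.array([[1.0, 0.0]])
    x, P = np.array([z[0], 0.0]), np.eye(2)
    preds, gated = [], []
    for zk in z[1:]:
        x, P = F @ x, F @ P @ F.T + Q            # predict one step ahead
        S = H @ P @ H.T + r                      # innovation variance
        preds.append(x[0])
        gated.append(abs(zk - x[0]) > gate_sigma * np.sqrt(S[0, 0]))
        K = (P @ H.T / S).ravel()                # Kalman gain, then update
        x = x + K * (zk - x[0])
        P = (np.eye(2) - np.outer(K, H)) @ P
    return np.array(preds), np.array(gated)
```

On a smooth breathing-like trace the filter tracks well and the gate stays open; an abrupt irregularity produces a large innovation relative to the predicted variance and is flagged, mirroring (in a much cruder form) how the predictive variance drives the gating function.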
International Nuclear Information System (INIS)
Chakraborty, Subhadeep; Keller, Eric; Talley, Justin; Srivastav, Abhishek; Ray, Asok; Kim, Seungjin
2009-01-01
This communication introduces a non-intrusive method for void fraction measurement and identification of two-phase flow regimes, based on ultrasonic sensing. The underlying algorithm is built upon the recently reported theory of a statistical pattern recognition method called symbolic dynamic filtering (SDF). The results of experimental validation, generated on a laboratory test apparatus, show a one-to-one correspondence between the flow measure derived from SDF and the void fraction measured by a conductivity probe. A sharp change in the slope of flow measure is found to be in agreement with a transition from fully bubbly flow to cap-bubbly flow. (rapid communication)
Directory of Open Access Journals (Sweden)
A Francina Webster
Full Text Available Many regulatory agencies are exploring ways to integrate toxicogenomic data into their chemical risk assessments. The major challenge lies in determining how to distill the complex data produced by high-content, multi-dose gene expression studies into quantitative information. It has been proposed that benchmark dose (BMD) values derived from toxicogenomics data be used as point of departure (PoD) values in chemical risk assessments. However, there is limited information regarding which genomics platforms are most suitable and how to select appropriate PoD values. In this study, we compared BMD values modeled from RNA sequencing-, microarray-, and qPCR-derived gene expression data from a single study, and explored multiple approaches for selecting a single PoD from these data. The strategies evaluated include several that do not require prior mechanistic knowledge of the compound, thus providing approaches for assessing data-poor chemicals. We used RNA extracted from the livers of female mice exposed to non-carcinogenic (0 and 2 mg/kg/day; mkd) and carcinogenic (4 and 8 mkd) doses of furan for 21 days. We show that transcriptional BMD values were consistent across technologies and highly predictive of the two-year cancer bioassay-based PoD. We also demonstrate that filtering data based on statistically significant changes in gene expression prior to BMD modeling creates more conservative BMD values. Taken together, this case study on mice exposed to furan demonstrates that high-content toxicogenomics studies produce robust data for BMD modeling that are minimally affected by inter-technology variability and highly predictive of cancer-based PoD doses.
Directory of Open Access Journals (Sweden)
A. A. Ponedelchenko
2016-01-01
Full Text Available Research was carried out on an experimental ultrasonic installation, using industrial equipment for bottling liquids and the ultrasonic apparatus "Volna-M" UZTA-1/22-OM, for the clarification and filtering of table wines by tangential microfiltration through ceramic membrane filtering elements with a pore size of 0.2 micron at a pressure of 0.5-2.0 bar. Membrane ultrafiltration with applied ultrasound of 30-40 micron amplitude and a frequency of 20 kHz ± 1.65 Hz changes the quantitative content of the valuable wine components only slightly, while maintaining high filter performance and operating stability. Particular attention was paid to the increase in titratable acidity and pH of the medium due to possible degradation and intensified esterification of higher acids and alcohols. At the same time, a more intense and rich aroma and a distinct flavour with berry notes appear in the wine, which, along with the physical and chemical indicators, improved the organoleptic characteristics and increased the tasting evaluation of the wines. The content of phenolic and nitrogen compounds is reduced, making the wines stable against protein and colloidal opacification. It became possible to dispense with repeated regeneration of the ceramic filter elements to recover their performance, as well as with preservatives and antiseptics, while maintaining high bottling stability. It is shown that filtration with dosed ultrasound in the wine industry allows not only reducing the cost of consumables and equipment and eliminating some traditional processes, but also provides cold sterilization of the wine materials with an increase in their quality.
Energy Technology Data Exchange (ETDEWEB)
Royer, L; Manen, S; Gay, P, E-mail: royer@clermont.in2p3.f [Clermont Universite, Universite Blaise Pascal, CNRS/IN2P3, LPC, BP 10448, F-63000 Clermont-Ferrand (France)
2010-12-15
A very-front-end electronics module dedicated to high-granularity calorimeters has been designed and its performance measured. This electronics amplifies the charge delivered by the detector with a low-noise Charge Sensitive Amplifier. The dynamic range is improved using a bandpass filter based on a Gated Integrator. By studying its weighting function, we show that this filter is more efficient than a standard CR-RC shaper, because the integration time can be extended toward the bunch interval time, whereas the peaking time of the CR-RC shaper is limited by pile-up considerations. Moreover, the Gated Integrator intrinsically performs the analog memorization of the signal before its delayed digital conversion. The analog-to-digital conversion is performed by a 12-bit cyclic ADC specifically developed for this application. The very-front-end channel has been fabricated in a 0.35 µm CMOS technology. Measurements show a global non-linearity better than 0.1%. The Equivalent Noise Charge at the input of the channel is evaluated at 1.8 fC, compared to the maximum input charge of 10 pC. The power consumption of the complete channel is limited to 6.5 mW.
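The quoted figures imply the channel's dynamic range: the 10 pC maximum input charge against the 1.8 fC Equivalent Noise Charge. A back-of-the-envelope check (treating the ENC as the smallest resolvable signal, which is an assumption, not a statement from the record):

```python
import math

enc_fc = 1.8           # Equivalent Noise Charge, in fC
qmax_fc = 10_000.0     # maximum input charge: 10 pC = 10,000 fC

ratio = qmax_fc / enc_fc        # dynamic range as a plain ratio
bits = math.log2(ratio)         # equivalent ADC resolution in bits
db = 20.0 * math.log10(ratio)   # dynamic range in dB
```

The ratio comes out near 5,600 (about 12.4 bits, roughly 75 dB), consistent with the choice of a 12-bit cyclic ADC.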
Park, Baek Sung; Hyung, Kyung Hee; Oh, Gwi Jeong; Jung, Hyun Wook
2018-02-01
The color filter (CF) is one of the key components for improving the performance of TV displays such as liquid crystal displays (LCD) and white organic light emitting diodes (WOLED). Profile defects such as undercut are inevitably generated in the fine fabrication of CF layers through the UV exposure and development processes; however, they can be controlled through the baking process. In order to resolve the profile defects of CF layers, in this study the real-time dynamic changes of CF layers are monitored during the baking process while varying components such as the polymeric binder and acrylate. The motion of pigment particles in CF layers during baking is quantitatively interpreted using multi-speckle diffusing wave spectroscopy (MSDWS), in terms of the autocorrelation function and the characteristic time of α-relaxation.
Braenzel, J.; Barriga-Carrasco, M. D.; Morales, R.; Schnürer, M.
2018-05-01
We investigate, both experimentally and theoretically, how the spectral distribution of laser accelerated carbon ions can be filtered by charge exchange processes in a double foil target setup. Carbon ions at multiple charge states with an initially wide kinetic energy spectrum, from 0.1 to 18 MeV, were detected with a remarkably narrow spectral bandwidth after they had passed through an ultrathin and partially ionized foil. With our theoretical calculations, we demonstrate that this process is a consequence of the evolution of the carbon ion charge states in the second foil. We calculated the resulting spectral distribution separately for each ion species by solving the rate equations for electron loss and capture processes within a collisional radiative model. We determine how the efficiency of charge transfer processes can be manipulated by controlling the ionization degree of the transfer matter.
International Nuclear Information System (INIS)
Dikusar, N.D.
1993-01-01
A new approach to solving the track-finding problem is proposed. The method is based on Discrete Projective Transformations (DPT) and Least Squares Fitting (LSF), and uses information feedback while tracing linear or quadratic track segments (TS). A recurrent algorithm that is fast and stable with respect to measurement errors and background points is suggested. The algorithm realizes a family of digital adaptive projective filters (APF) with known nonlinear weight functions (projective invariants). APF can be used in control systems for the collection, processing and compression of data, including tracking problems for a wide class of detectors. 10 refs.; 9 figs
A Combined Control Chart for Identifying Out–Of–Control Points in Multivariate Processes
Directory of Open Access Journals (Sweden)
Marroquín–Prado E.
2010-10-01
Full Text Available The Hotelling's T2 control chart is widely used to identify out-of-control signals in multivariate processes. However, this chart is not sensitive to small shifts in the process mean vector. In this work we propose a control chart to identify out-of-control signals. The proposed chart is a combination of Hotelling's T2 chart, the M chart proposed by Hayter et al. (1994), and a new chart based on Principal Components. The combination of these charts identifies any type and size of change in the process mean vector. Using simulation and the Average Run Length (ARL), the performance of the proposed control chart is evaluated. The ARL is the average number of points within control before an out-of-control point is detected. The results of the simulation show that the proposed chart is more sensitive than each of the three charts individually.
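For reference, the classical Hotelling's T2 statistic that the combined chart builds on measures the distance of the sample mean vector from a target mean, scaled by the inverse sample covariance. A minimal sketch of just that component (not the authors' combined chart):

```python
import numpy as np

def hotelling_t2(X, mu0):
    """Hotelling's T2 for an n x p sample X against a target mean mu0.
    Large values signal that the process mean vector has shifted."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    d = X.mean(axis=0) - np.asarray(mu0, dtype=float)
    S = np.cov(X, rowvar=False)          # sample covariance (p x p)
    return float(n * d @ np.linalg.solve(S, d))
```

In a charting context, T2 is computed for each rational subgroup and compared against a control limit derived from the F distribution; the M chart and Principal Components chart mentioned in the abstract add sensitivity to the small shifts T2 misses.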
International Nuclear Information System (INIS)
Verma, K.; MacNeil, C.; Odar, S.
1996-01-01
The secondary sides of all four steam generators at the Point Lepreau Nuclear Generating Station were cleaned during the 1995 annual outage using the Siemens high temperature chemical cleaning process. Traditionally, all secondary side chemical cleaning exercises in CANDU as well as other nuclear power stations in North America have been conducted using a process developed in conjunction with the Electric Power Research Institute (EPRI). The Siemens high temperature process was applied for the first time in North America at the Point Lepreau Nuclear Generating Station (PLGS). The paper discusses experiences related to the pre- and post-award chemical cleaning activities, the chemical cleaning application, post-cleaning inspection results and waste handling activities. (author)
Bayesian inference for multivariate point processes observed at sparsely distributed times
DEFF Research Database (Denmark)
Rasmussen, Jakob Gulddahl; Møller, Jesper; Aukema, B.H.
We consider statistical and computational aspects of simulation-based Bayesian inference for a multivariate point process which is only observed at sparsely distributed times. For specificity we consider a particular data set which has earlier been analyzed by a discrete time model involving unknown normalizing constants. We discuss the advantages and disadvantages of using continuous time processes compared to discrete time processes in the setting of the present paper as well as other spatial-temporal situations. Keywords: Bark beetle, conditional intensity, forest entomology, Markov chain Monte Carlo...
DEFF Research Database (Denmark)
Bey, Niki
2000-01-01
...of environmental evaluation and only approximate information about the product and its life cycle. This dissertation addresses this challenge in presenting a method which is tailored to these requirements of designers - the Oil Point Method (OPM). In providing environmental key information and confining itself to three essential assessment steps, the method enables rough environmental evaluations and supports in this way material- and process-related decision-making in the early stages of design. In its overall structure, the Oil Point Method is related to Life Cycle Assessment - except for two main differences...
Spatial point process analysis for a plant community with high biodiversity
DEFF Research Database (Denmark)
Illian, Janine; Møller, Jesper; Waagepetersen, Rasmus Plenge
A complex multivariate spatial point pattern for a plant community with high biodiversity is modelled using a hierarchical multivariate point process model. In the model, interactions between plants with different post-fire regeneration strategies are of key interest. We consider initially a maximum likelihood approach to inference where problems arise due to unknown interaction radii for the plants. We next demonstrate that a Bayesian approach provides a flexible framework for incorporating prior information concerning the interaction radii. From an ecological perspective, we are able both...
Zhang, Shuangyi; Gitungo, Stephen W; Axe, Lisa; Raczko, Robert F; Dyksen, John E
2017-05-01
With the increasing concern over contaminants of emerging concern (CECs) in source water, this study examines the hypothesis that existing filters in water treatment plants can be converted to biologically active filters (BAFs) to treat these compounds. Removals through bench-scale BAFs were evaluated as a function of media (granular activated carbon (GAC) versus dual media), empty bed contact time (EBCT), and pre-ozonation. For GAC BAFs, greater oxygen consumption, a larger pH drop, and greater dissolved organic carbon removal normalized to adenosine triphosphate (ATP) were observed, indicating increased microbial activity as compared to anthracite/sand dual media BAFs. ATP concentrations in the upper portion of the BAFs were as much as four times greater than in the middle and lower portions of the dual media and 1.5 times greater in GAC. Sixteen CECs were spiked into the source water. At an EBCT of 18 minutes (min), GAC BAFs were highly effective, with overall removals greater than 80% without pre-ozonation; exceptions included tri(2-chloroethyl) phosphate and iopromide. With a 10 min EBCT, the degree of CEC removal was reduced, with less than half of the compounds removed at greater than 80%. The dual media BAFs showed limited CEC removal, with only four compounds removed at greater than 80% and 10 compounds reduced by less than 50% at either EBCT. This study demonstrated that GAC BAFs with and without pre-ozonation are an effective and advanced technology for treating emerging contaminants. On the other hand, pre-ozonation is needed for dual media BAFs to remove CECs. The most cost-effective operating conditions for dual media BAFs were a 10 min EBCT with the application of pre-ozonation. Copyright © 2017 Elsevier Ltd. All rights reserved.
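The EBCT values compared above (18 min vs. 10 min) are defined as the empty bed volume of filter media divided by the volumetric flow rate; a trivial sketch (the volumes and flow rates below are illustrative, not from the study):

```python
def ebct_minutes(bed_volume_l, flow_l_per_min):
    """Empty bed contact time: media bed volume (L) over flow rate (L/min)."""
    return bed_volume_l / flow_l_per_min
```

For example, a 9 L media bed fed at 0.5 L/min gives an 18 min EBCT; raising the flow to 0.9 L/min cuts the contact time to 10 min, which is the trade-off behind the reduced removals reported at the shorter EBCT.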
Musicality as an Aesthetic Process of Filtering in Thomas W. Shapcott's Poetry
Directory of Open Access Journals (Sweden)
Muslim Abbas Eidan Al-Ta'an
2017-11-01
Full Text Available How does music transcend individual experience? Is music the filter that purifies everything? How does everything in the poet become music? Such questions are raised, now and then, by the conscious reader of poetry in general and that of the Australian poet Thomas W. Shapcott in particular. This paper attempts to answer these questions by probing the individuality of Shapcott's poetic experience and how the poet's personal and experimental musicality, as an artistic motif and aesthetic perspective, plays a key role in purifying language of its lies and its daily impurities. In the first place, the account seeks an aesthetic meaning for the act of transcending individual experience in selected poems by Shapcott. The philosophical and ritual thought of musicality is interwoven with the aesthetic power of poetry. Both aesthetic energies stem from the individual experience of the poet, transcending the borders of individuality to be absorbed into the wider pot of human universality. In other words, the poem, after being filtered and purified musically and aesthetically, is no longer an individual experience owned only by its producer; rather, it becomes a human experience for its conscious readers. Music as motif and meaning, regardless of its technical significance, is controversial in Shapcott's poetic diction. Music here is not a mere artistic genre; rather it is a ritualistic and philosophical thought. The paper investigates how Shapcott's musicality is constructed on aesthetics of balance and conformity in poetry and life.
Analysis of residual stress state in sheet metal parts processed by single point incremental forming
Maaß, F.; Gies, S.; Dobecki, M.; Brömmelhoff, K.; Tekkaya, A. E.; Reimers, W.
2018-05-01
The mechanical properties of formed metal components are highly affected by the prevailing residual stress state. A selective induction of residual compressive stresses in the component can improve product properties such as the fatigue strength. By means of single point incremental forming (SPIF), the residual stress state can be influenced by adjusting the process parameters during the manufacturing process. To achieve a fundamental understanding of the residual stress formation caused by the SPIF process, a valid numerical process model is essential. Within the scope of this paper, the significance of kinematic hardening effects on the determined residual stress state is presented based on numerical simulations. The effect of the unclamping step after the manufacturing process is also analyzed. An average deviation of 18% between the residual stress amplitudes in the clamped and unclamped conditions reveals that the unclamping step needs to be considered to reach a high numerical prediction quality.
Analysis of multi-species point patterns using multivariate log Gaussian Cox processes
DEFF Research Database (Denmark)
Waagepetersen, Rasmus; Guan, Yongtao; Jalilian, Abdollah
Multivariate log Gaussian Cox processes are flexible models for multivariate point patterns. However, they have so far only been applied in bivariate cases. In this paper we move beyond the bivariate case in order to model multi-species point patterns of tree locations. In particular we address the problems of identifying parsimonious models and of extracting biologically relevant information from the fitted models. The latent multivariate Gaussian field is decomposed into components given in terms of random fields common to all species and components which are species specific. This allows... The selected number of common latent fields provides an index of complexity of the multivariate covariance structure. Hierarchical clustering is used to identify groups of species with similar patterns of dependence on the common latent fields.
Zhang, Hongyin; Oyanedel-Craver, Vinka
2013-09-15
This study compares the disinfection performance of ceramic water filters impregnated with two antibacterial compounds: silver nanoparticles and a polymer-based quaternary amine functionalized silsesquioxane (poly(trihydroxysilyl)propyldimethyloctadecyl ammonium chloride, TPA). The compounds were evaluated using ceramic disks manufactured with clay obtained from a ceramic filter factory located in San Mateo Ixtatan, Guatemala. Instead of full-size ceramic water filters, manufactured 6.5 cm diameter ceramic water filter disks were used. Results showed that TPA can achieve a log bacterial reduction value of 10 while silver nanoparticles reached up to a 2 log reduction, using an initial bacterial concentration of 10(10)-10(11) CFU/ml. Similarly, bacterial transport tests demonstrated that ceramic filter disks painted with TPA achieved a bacterial log reduction value of 6.24, about 2 log higher than the value obtained for disks painted with silver nanoparticles (bacterial log reduction value: 4.42). The release of both disinfectants from the ceramic materials into the treated water was determined by measuring the effluent concentrations in each test performed. About 3% of the total TPA mass applied to the ceramic disks was released in the effluent over 300 min, slightly lower than the release percentage for silver nanoparticles (4%). This study showed that TPA provides disinfection performance comparable to silver nanoparticles in ceramic water filters. Another advantage of TPA is cost, as its price is considerably lower than that of silver nanoparticles. In spite of the use of TPA in several medical-related products, there is only partial information regarding the health risk associated with the ingestion of this compound. Additional long-term toxicological information for TPA should be evaluated before its future application in ceramic water filters. Copyright © 2013 Elsevier B.V. All rights reserved.
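The log reduction values quoted above are base-10 logarithms of the influent-to-effluent bacterial concentration ratio; a minimal sketch (the concentrations below are illustrative):

```python
import math

def log_reduction(c_influent, c_effluent):
    """Log reduction value (LRV): orders of magnitude of organisms removed."""
    return math.log10(c_influent / c_effluent)
```

For example, reducing 1e10 CFU/ml in the influent to 1e4 CFU/ml in the effluent is a 6-log reduction, the scale reported for the TPA-treated disks.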
Prospects for direct neutron capture measurements on s-process branching point isotopes
Energy Technology Data Exchange (ETDEWEB)
Guerrero, C.; Lerendegui-Marco, J.; Quesada, J.M. [Universidad de Sevilla, Dept. de Fisica Atomica, Molecular y Nuclear, Sevilla (Spain); Domingo-Pardo, C. [CSIC-Universidad de Valencia, Instituto de Fisica Corpuscular, Valencia (Spain); Kaeppeler, F. [Karlsruhe Institute of Technology, Institut fuer Kernphysik, Karlsruhe (Germany); Palomo, F.R. [Universidad de Sevilla, Dept. de Ingenieria Electronica, Sevilla (Spain); Reifarth, R. [Goethe-Universitaet Frankfurt am Main, Frankfurt am Main (Germany)
2017-05-15
The neutron capture cross sections of several unstable key isotopes acting as branching points in the s-process are crucial for stellar nucleosynthesis studies, but they are very challenging to measure directly due to the difficult production of sufficient sample material, the high activity of the resulting samples, and the actual (n, γ) measurement, where high neutron fluxes and effective background rejection capabilities are required. At present there are about 21 relevant s-process branching point isotopes whose cross section could not be measured yet over the neutron energy range of interest for astrophysics. However, the situation is changing with some very recent developments and upcoming technologies. This work introduces three techniques that will change the current paradigm in the field: the use of γ-ray imaging techniques in (n, γ) experiments, the production of moderated neutron beams using high-power lasers, and double capture experiments in Maxwellian neutron beams. (orig.)
Valenza, Gaetano; Citi, Luca; Barbieri, Riccardo
2013-01-01
We report an exemplary study of instantaneous assessment of cardiovascular dynamics performed using point-process nonlinear models based on Laguerre expansion of the linear and nonlinear Wiener-Volterra kernels. As quantifiers, instantaneous measures such as high-order spectral features and Lyapunov exponents can be estimated from a quadratic and cubic autoregressive formulation of the model's first order moment, respectively. Here, these measures are evaluated on heartbeat series from 16 healthy subjects and 14 patients with Congestive Heart Failure (CHF). Data were gathered from the on-line repository PhysioBank, which has been taken as a landmark for testing nonlinear indices. Results show that the proposed nonlinear Laguerre-Volterra point-process methods are able to track the nonlinear and complex cardiovascular dynamics, distinguishing significantly between CHF and healthy heartbeat series.
Point process analyses of variations in smoking rate by setting, mood, gender, and dependence
Shiffman, Saul; Rathbun, Stephen L.
2010-01-01
The immediate emotional and situational antecedents of ad libitum smoking are still not well understood. We re-analyzed data from Ecological Momentary Assessment using novel point-process analyses, to assess how craving, mood, and social setting influence smoking rate, as well as assessing the moderating effects of gender and nicotine dependence. 304 smokers recorded craving, mood, and social setting using electronic diaries when smoking and at random nonsmoking times over 16 days of smoking. Point-process analysis, which makes use of the known random sampling scheme for momentary variables, examined main effects of setting and interactions with gender and dependence. Increased craving was associated with higher rates of smoking, particularly among women. Negative affect was not associated with smoking rate, even in interaction with arousal, but restlessness was associated with substantially higher smoking rates. Women's smoking tended to be less affected by negative affect. Nicotine dependence had little moderating effect on situational influences. Smoking rates were higher when smokers were alone or with others smoking, and smoking restrictions reduced smoking rates. However, the presence of others smoking undermined the effects of restrictions. The more sensitive point-process analyses confirmed earlier findings, including the surprising conclusion that negative affect by itself was not related to smoking rates. Contrary to hypothesis, men's and not women's smoking was influenced by negative affect. Both smoking restrictions and the presence of others who are not smoking suppress smoking, but others’ smoking undermines the effects of restrictions. Point-process analyses of EMA data can bring out even small influences on smoking rate. PMID:21480683
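Point-process analyses like the one above are built on a conditional intensity (instantaneous event rate) function. As a generic illustration only (this is not the authors' estimator), event times from a given intensity can be simulated by Lewis-Shedler thinning:

```python
import random

def simulate_poisson_events(rate_fn, t_max, rate_max, seed=0):
    """Lewis-Shedler thinning: draw event times on [0, t_max] from an
    inhomogeneous Poisson process with intensity rate_fn(t) <= rate_max."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(rate_max)         # candidate from rate_max process
        if t > t_max:
            return events
        if rng.random() < rate_fn(t) / rate_max:   # keep with prob lambda(t)/max
            events.append(t)
```

Fitting works in the opposite direction: covariates such as craving, mood, and setting enter a model for the intensity, and the sampling scheme of the momentary assessments is used to correct the likelihood.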
A random point process model for the score in sport matches
Czech Academy of Sciences Publication Activity Database
Volf, Petr
2009-01-01
Roč. 20, č. 2 (2009), s. 121-131 ISSN 1471-678X R&D Projects: GA AV ČR(CZ) IAA101120604 Institutional research plan: CEZ:AV0Z10750506 Keywords : sport statistics * scoring intensity * Cox’s regression model Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2009/SI/volf-a random point process model for the score in sport matches.pdf
A business process model as a starting point for tight cooperation among organizations
Directory of Open Access Journals (Sweden)
O. Mysliveček
2006-01-01
Full Text Available Outsourcing and other kinds of tight cooperation among organizations are more and more necessary for success on all markets (markets of high technology products are particularly influenced. Thus it is important for companies to be able to effectively set up all kinds of cooperation. A business process model (BPM is a suitable starting point for this future cooperation. In this paper the process of setting up such cooperation is outlined, as well as why it is important for business success.
Weak interaction rates for Kr and Sr waiting-point nuclei under rp-process conditions
International Nuclear Information System (INIS)
Sarriguren, P.
2009-01-01
Weak interaction rates are studied in neutron deficient Kr and Sr waiting-point isotopes in ranges of densities and temperatures relevant for the rp process. The nuclear structure is described within a microscopic model (deformed QRPA) that reproduces not only the half-lives but also the Gamow-Teller strength distributions recently measured. The various sensitivities of the decay rates to both density and temperature are discussed. Continuum electron capture is shown to contribute significantly to the weak rates at rp-process conditions.
Filtering and spectral processing of 1-D signals using cellular neural networks
Moreira-Tamayo, O.; Pineda de Gyvez, J.
1996-01-01
This paper presents cellular neural networks (CNN) for one-dimensional discrete signal processing. Although CNNs have been extensively used in image processing applications, little has been done for one-dimensional signal processing. We propose a novel CNN architecture to carry out these tasks.
Nere, Nandkishor K; Allen, Kimberley C; Marek, James C; Bordawekar, Shailendra V
2012-10-01
Drying an early stage active pharmaceutical ingredient candidate required excessively long cycle times in a pilot plant agitated filter dryer. The key to faster drying is to ensure sufficient heat transfer and minimize mass transfer limitations. Designing the right mixing protocol is of utmost importance to achieve efficient heat transfer. To this end, a composite model was developed for the removal of bound solvent that incorporates models for heat transfer and desolvation kinetics. The proposed heat transfer model differs from previously reported models in two respects: it accounts for the effects of a gas gap between the vessel wall and solids on the overall heat transfer coefficient, and of headspace pressure on the mean free path length of the inert gas and thereby on the heat transfer between the vessel wall and the first layer of solids. A computational methodology was developed incorporating the effects of mixing and headspace pressure to simulate the drying profile using a modified model framework within the Dynochem software. A dryer operational protocol was designed based on the desolvation kinetics, thermal stability studies of wet and dry cake, and the understanding gained through model simulations, resulting in a multifold reduction in drying time. Copyright © 2012 Wiley-Liss, Inc.
Energy Technology Data Exchange (ETDEWEB)
Poirier, M. R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Burket, P. R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Duignan, M. R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2015-03-12
The Savannah River Site (SRS) is currently treating radioactive liquid waste with the Actinide Removal Process (ARP) and the Modular Caustic Side Solvent Extraction Unit (MCU). The low filter flux through the ARP has limited the rate at which radioactive liquid waste can be treated. Recent filter flux has averaged approximately 5 gallons per minute (gpm). Salt Batch 6 has had a lower processing rate and required frequent filter cleaning. Savannah River Remediation (SRR) has a desire to understand the causes of the low filter flux and to increase ARP/MCU throughput. In addition, at the time the testing started, SRR was assessing the impact of replacing the 0.1 micron filter with a 0.5 micron filter. This report describes testing of MST filterability to investigate the impact of filter pore size and MST particle size on filter flux and testing of filter enhancers to attempt to increase filter flux. The authors constructed a laboratory-scale crossflow filter apparatus with two crossflow filters operating in parallel. One filter was a 0.1 micron Mott sintered SS filter and the other was a 0.5 micron Mott sintered SS filter. The authors also constructed a dead-end filtration apparatus to conduct screening tests with potential filter aids and body feeds, referred to as filter enhancers. The original baseline for ARP was 5.6 M sodium salt solution with a free hydroxide concentration of approximately 1.7 M. ARP has been operating with a sodium concentration of approximately 6.4 M and a free hydroxide concentration of approximately 2.5 M. SRNL conducted tests varying the concentration of sodium and free hydroxide to determine whether those changes had a significant effect on filter flux. The feed slurries for the MST filterability tests were composed of simple salts (NaOH, NaNO_{2}, and NaNO_{3}) and MST (0.2 – 4.8 g/L). The feed slurry for the filter enhancer tests contained simulated salt batch 6 supernate, MST, and filter enhancers.
Hamming, Richard W
1997-01-01
Digital signals occur in an increasing number of applications: in telephone communications; in radio, television, and stereo sound systems; and in spacecraft transmissions, to name just a few. This introductory text examines digital filtering, the processes of smoothing, predicting, differentiating, integrating, and separating signals, as well as the removal of noise from a signal. The processes bear particular relevance to computer applications, one of the focuses of this book.Readers will find Hamming's analysis accessible and engaging, in recognition of the fact that many people with the s
PARALLEL PROCESSING OF BIG POINT CLOUDS USING Z-ORDER-BASED PARTITIONING
Directory of Open Access Journals (Sweden)
C. Alis
2016-06-01
Full Text Available As laser scanning technology improves and costs are coming down, the amount of point cloud data being generated can be prohibitively difficult and expensive to process on a single machine. This data explosion is not only limited to point cloud data. Voluminous amounts of high-dimensionality and quickly accumulating data, collectively known as Big Data, such as those generated by social media, Internet of Things devices and commercial transactions, are becoming more prevalent as well. New computing paradigms and frameworks are being developed to efficiently handle the processing of Big Data, many of which utilize a compute cluster composed of several commodity grade machines to process chunks of data in parallel. A central concept in many of these frameworks is data locality. By its nature, Big Data is large enough that the entire dataset would not fit on the memory and hard drives of a single node hence replicating the entire dataset to each worker node is impractical. The data must then be partitioned across worker nodes in a manner that minimises data transfer across the network. This is a challenge for point cloud data because there exist different ways to partition data and they may require data transfer. We propose a partitioning based on Z-order which is a form of locality-sensitive hashing. The Z-order or Morton code is computed by dividing each dimension to form a grid then interleaving the binary representation of each dimension. For example, the Z-order code for the grid square with coordinates (x = 1 = 01 in binary, y = 3 = 11 in binary) is 1011 in binary = 11. The number of points in each partition is controlled by the number of bits per dimension: the more bits, the fewer the points. The number of bits per dimension also controls the level of detail with more bits yielding finer partitioning. We present this partitioning method by implementing it on Apache Spark and investigating how different parameters affect the accuracy and running time of the k nearest
Parallel Processing of Big Point Clouds Using Z-Order Partitioning
Alis, C.; Boehm, J.; Liu, K.
2016-06-01
As laser scanning technology improves and costs are coming down, the amount of point cloud data being generated can be prohibitively difficult and expensive to process on a single machine. This data explosion is not only limited to point cloud data. Voluminous amounts of high-dimensionality and quickly accumulating data, collectively known as Big Data, such as those generated by social media, Internet of Things devices and commercial transactions, are becoming more prevalent as well. New computing paradigms and frameworks are being developed to efficiently handle the processing of Big Data, many of which utilize a compute cluster composed of several commodity grade machines to process chunks of data in parallel. A central concept in many of these frameworks is data locality. By its nature, Big Data is large enough that the entire dataset would not fit on the memory and hard drives of a single node hence replicating the entire dataset to each worker node is impractical. The data must then be partitioned across worker nodes in a manner that minimises data transfer across the network. This is a challenge for point cloud data because there exist different ways to partition data and they may require data transfer. We propose a partitioning based on Z-order which is a form of locality-sensitive hashing. The Z-order or Morton code is computed by dividing each dimension to form a grid then interleaving the binary representation of each dimension. For example, the Z-order code for the grid square with coordinates (x = 1 = 01 in binary, y = 3 = 11 in binary) is 1011 in binary = 11. The number of points in each partition is controlled by the number of bits per dimension: the more bits, the fewer the points. The number of bits per dimension also controls the level of detail with more bits yielding finer partitioning. We present this partitioning method by implementing it on Apache Spark and investigating how different parameters affect the accuracy and running time of the k nearest neighbour algorithm
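The Morton-code construction described in both records can be sketched directly; this reproduces the worked example from the abstracts, interleaving the y bit ahead of the x bit at each level (which is what yields binary 1011 for x = 1, y = 3):

```python
def morton2d(x, y, bits=2):
    """Z-order / Morton code: interleave the bits of y and x,
    most significant bit first, y before x at each level."""
    code = 0
    for i in range(bits - 1, -1, -1):
        code = (code << 1) | ((y >> i) & 1)
        code = (code << 1) | ((x >> i) & 1)
    return code
```

morton2d(1, 3) gives 11 (binary 1011), matching the example; increasing `bits` subdivides the grid further, so each Z-order prefix (and hence each partition) covers fewer points, which is the tuning knob the papers investigate on Apache Spark.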
Developing a Business Intelligence Process for a Training Module in SharePoint 2010
Schmidtchen, Bryce; Solano, Wanda M.; Albasini, Colby
2015-01-01
Prior to this project, training information for the employees of the National Center for Critical Information Processing and Storage (NCCIPS) was stored in an array of unrelated spreadsheets and SharePoint lists that had to be updated manually. By developing a content management system on the SharePoint web application platform, the training system is now highly automated and provides a much less labor-intensive way of storing training data and scheduling training courses. The system was developed using SharePoint Designer and laying out the data structure for the interaction between different lists of data about the employees. Automated population of the lists was accomplished by implementing SharePoint workflows, which essentially lay out the logic for how data is connected and calculated between certain lists. The resulting training system is constructed from a combination of five lists of data, with a single list acting as the user-friendly interface. This interface is populated with the courses required for each employee and includes past and future information about course requirements. The employees of NCCIPS now have the ability to view, log, and schedule their training information and courses with much more ease. This system will relieve a significant amount of manual input and serve as a powerful informational resource for the employees of NCCIPS in the future.
EFFICIENT LIDAR POINT CLOUD DATA MANAGING AND PROCESSING IN A HADOOP-BASED DISTRIBUTED FRAMEWORK
Directory of Open Access Journals (Sweden)
C. Wang
2017-10-01
Light Detection and Ranging (LiDAR) is one of the most promising technologies in surveying and mapping, city management, forestry, object recognition, computer vision, and other fields. However, it is challenging to efficiently store, query, and analyze high-resolution 3D LiDAR data due to its volume and complexity. In order to improve the productivity of LiDAR data processing, this study proposes a Hadoop-based framework to efficiently manage and process LiDAR data in a distributed and parallel manner, taking advantage of Hadoop's storage and computing capability. At the same time, the Point Cloud Library (PCL), an open-source project for 2D/3D image and point cloud processing, is integrated with HDFS and MapReduce so that the LiDAR analysis algorithms provided by PCL can be run in a parallel fashion. The experimental results show that the proposed framework can efficiently manage and process big LiDAR data.
Rastetter, Edward B; Williams, Mathew; Griffin, Kevin L; Kwiatkowski, Bonnie L; Tomasky, Gabrielle; Potosnak, Mark J; Stoy, Paul C; Shaver, Gaius R; Stieglitz, Marc; Hobbie, John E; Kling, George W
2010-07-01
Continuous time-series estimates of net ecosystem carbon exchange (NEE) are routinely made using eddy covariance techniques. Identifying and compensating for errors in the NEE time series can be automated using a signal processing filter like the ensemble Kalman filter (EnKF). The EnKF compares each measurement in the time series to a model prediction and updates the NEE estimate by weighting the measurement and model prediction relative to a specified measurement error estimate and an estimate of the model-prediction error that is continuously updated based on model predictions of earlier measurements in the time series. Because of the covariance among model variables, the EnKF can also update estimates of variables for which there is no direct measurement. The resulting estimates evolve through time, enabling the EnKF to be used to estimate dynamic variables like changes in leaf phenology. The evolving estimates can also serve as a means to test the embedded model and reconcile persistent deviations between observations and model predictions. We embedded a simple arctic NEE model into the EnKF and filtered data from an eddy covariance tower located in tussock tundra on the northern foothills of the Brooks Range in northern Alaska, USA. The model predicts NEE based only on leaf area, irradiance, and temperature and has been well corroborated for all the major vegetation types in the Low Arctic using chamber-based data. This is the first application of the model to eddy covariance data. We modified the EnKF by adding an adaptive noise estimator that provides a feedback between persistent model data deviations and the noise added to the ensemble of Monte Carlo simulations in the EnKF. We also ran the EnKF with both a specified leaf-area trajectory and with the EnKF sequentially recalibrating leaf-area estimates to compensate for persistent model-data deviations. When used together, adaptive noise estimation and sequential recalibration substantially improved filter
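The measurement/model weighting that the EnKF performs can be illustrated with a minimal scalar analysis step (a toy sketch with a directly observed state and made-up numbers, not the arctic NEE model described above):

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, obs, obs_err_sd):
    """One EnKF analysis step for a scalar, directly observed state.

    The gain weighs the model-prediction error (the spread of the
    forecast ensemble) against the specified measurement error.
    """
    perturbed = obs + rng.normal(0.0, obs_err_sd, size=ensemble.shape)
    forecast_var = np.var(ensemble, ddof=1)
    gain = forecast_var / (forecast_var + obs_err_sd ** 2)
    return ensemble + gain * (perturbed - ensemble)

# forecast ensemble centred near 5.0; the measurement says 6.0
prior = rng.normal(5.0, 1.0, size=200)
posterior = enkf_update(prior, 6.0, obs_err_sd=0.5)
```

Because the measurement error (0.5) is smaller than the forecast spread (1.0), the analysis ensemble is pulled most of the way toward the observation and its spread shrinks, which is the behaviour the abstract describes.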
Directory of Open Access Journals (Sweden)
Denis Delisle-Rodriguez
2017-11-01
This work presents a new online adaptive filter, based on a similarity analysis between standard electrode locations, which reduces artifacts and common interferences in electroencephalography (EEG) signals while preserving the useful information. The standard deviation and the Concordance Correlation Coefficient (CCC) between each target electrode and its corresponding neighbor electrodes are analyzed on sliding windows to select those neighbors that are highly correlated. Afterwards, a model based on the CCC is applied to give higher weights to those correlated electrodes with lower similarity to the target electrode. The approach was applied to brain-computer interfaces (BCIs) based on Canonical Correlation Analysis (CCA) to recognize 40 targets of steady-state visual evoked potentials (SSVEP), providing an accuracy (ACC) of 86.44 ± 2.81%. Using the same approach, low-frequency features were also selected in the pre-processing stage of another BCI to recognize gait planning. In this case, recognition was significantly (p < 0.01) improved for most of the subjects (ACC ≥ 74.79%) when compared with other BCIs based on Common Spatial Patterns, Filter Bank Common Spatial Patterns, and Riemannian geometry.
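The Concordance Correlation Coefficient used for neighbor selection can be computed directly from its definition (a standalone sketch with synthetic signals; the variable names are ours, not the paper's):

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between two signals."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return 2.0 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# a target electrode and a highly correlated, slightly noisy neighbour
t = np.linspace(0.0, 1.0, 500)
target = np.sin(2 * np.pi * 10 * t)
neighbour = target + 0.1 * np.random.default_rng(1).normal(size=t.size)
```

Unlike the Pearson correlation, the CCC penalizes both scale and mean offsets between the two signals, which makes it a stricter notion of agreement between electrodes.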
Citraresmi, A. D. P.; Wahyuni, E. E.
2018-03-01
The aim of this study was to examine the implementation of Hazard Analysis and Critical Control Points (HACCP) for the identification and prevention of potential hazards in the production process of dried anchovy at PT. Kelola Mina Laut (KML), Lobuk unit, Sumenep. Cold storage is needed at each anchovy processing step in order to maintain the product's physical and chemical condition. In addition, a quality assurance system should be implemented to maintain product quality. The research was conducted using a survey method, following the whole anchovy production process from the receipt of raw materials to the packaging of the final product. Data were analyzed using a descriptive method. HACCP implementation at PT. KML, Lobuk unit, Sumenep was conducted by applying Pre-Requisite Programs (PRP) and a preparation stage consisting of the 5 initial steps and 7 principles of HACCP. The results showed that CCPs were found in the boiling step, with the significant hazard of Listeria monocytogenes bacteria, and in the final sorting step, with the significant hazard of foreign material contamination of the product. The control actions taken were maintaining a boiling temperature of 100-105°C for 3-5 minutes and training the sorting process employees.
Directory of Open Access Journals (Sweden)
Saidi Badreddine
2016-01-01
The single point incremental forming process is well known to be perfectly suited for prototyping and small series. One of its fields of applicability is medicine, for the forming of titanium prostheses or titanium medical implants. However, this process is not yet widely industrialized, mainly due to its geometrical inaccuracy and its inhomogeneous thickness distribution. Moreover, considerable forces can occur; they must be controlled in order to preserve the tooling. In this paper, a numerical approach is proposed to minimize the maximum force reached during the incremental forming of titanium sheets and to maximize the minimal thickness. A response surface methodology is used to find the optimal values of two input parameters of the process: the punch diameter and the vertical step size of the tool path.
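A response surface of the kind used here can be illustrated by fitting a low-order polynomial model of the maximum force over a small design of experiments (all numbers below are made up for illustration; the paper's actual model, design, and data differ):

```python
import numpy as np

# hypothetical screening data: punch diameter d (mm), step size s (mm),
# measured maximum forming force F (kN) -- illustrative values only
d = np.array([6.0, 6.0, 10.0, 10.0, 8.0])
s = np.array([0.2, 0.6, 0.2, 0.6, 0.4])
F = np.array([1.10, 1.45, 1.30, 1.90, 1.35])

# first-order response surface with interaction: F ~ b0 + b1*d + b2*s + b3*d*s
X = np.column_stack([np.ones_like(d), d, s, d * s])
beta, *_ = np.linalg.lstsq(X, F, rcond=None)

def predict(di, si):
    """Evaluate the fitted response surface at (diameter, step size)."""
    return float(beta @ np.array([1.0, di, si, di * si]))
```

Once fitted, the surface can be searched (or simply evaluated on a grid) for the input combination that minimizes the predicted maximum force subject to a thickness constraint.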
Marked point process framework for living probabilistic safety assessment and risk follow-up
International Nuclear Information System (INIS)
Arjas, Elja; Holmberg, Jan
1995-01-01
We construct a model for living probabilistic safety assessment (PSA) by applying the general framework of marked point processes. The framework provides a theoretically rigorous approach to risk follow-up of posterior hazards. In risk follow-up, the hazard of core damage is evaluated synthetically at time points in the past by using some observed events as logged history and combining it with re-evaluated potential hazards. There are several alternatives for doing this, of which we consider three here, calling them the initiating event approach, the hazard rate approach, and the safety system approach. In addition, for comparison, we consider the core damage hazard arising in risk monitoring. Each of these four definitions draws attention to a particular aspect of risk assessment, and this is reflected in the behaviour of the consequent risk importance measures. Several alternative measures are again considered. The concepts and definitions are illustrated by a numerical example.
Quality control for electron beam processing of polymeric materials by end-point analysis
International Nuclear Information System (INIS)
DeGraff, E.; McLaughlin, W.L.
1981-01-01
Properties of certain plastics, e.g. polytetrafluoroethylene, polyethylene, and ethylene vinyl acetate copolymer, can be modified selectively by ionizing radiation. One of the advantages of this treatment over chemical methods is better control of the process and of the end-product properties. The most convenient method of dosimetry for monitoring quality control is post-irradiation evaluation of the plastic itself, e.g. melt index and melt point determination. It is shown that, given proper calibration in terms of total dose and sufficiently reproducible radiation effects, such product test methods provide convenient and meaningful analyses. Other appropriate standardized analytical methods include stress-crack resistance, stress-strain-to-fracture testing, and solubility determination. Standard routine dosimetry over the dose and dose-rate ranges of interest confirms that measured product end points can be correlated with calibrated values of absorbed dose in the product within the uncertainty limits of the measurements. (author)
Adaptivni digitalni filtri / Adaptive digital filters
Directory of Open Access Journals (Sweden)
Dragan Petković
2002-01-01
The paper describes the fundamentals of adaptive filter operation. The opening considerations cover the mathematics of discrete-signal processing and the Z-transform as they apply to adaptive filters. The Wiener filtering problem is presented, followed by the Correlation Canceler Loop (CCL) and the Widrow-Hoff least mean squares (LMS) step-by-step procedure, and the convergence rate of adaptive filters is discussed. CCL simulations were carried out, with particular attention to the convergence rate.
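The Widrow-Hoff LMS procedure mentioned above can be sketched as a standard system-identification toy example (an illustrative sketch, not the paper's CCL simulation; the filter length and step size are our own choices):

```python
import numpy as np

def lms(x, d, taps, mu):
    """Widrow-Hoff LMS: adapt weights w so that w . u[n] tracks d[n]."""
    w = np.zeros(taps)
    e = np.zeros(len(x))
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]   # most recent sample first
        e[n] = d[n] - w @ u               # instantaneous error
        w += 2 * mu * e[n] * u            # steepest-descent weight update
    return w, e

rng = np.random.default_rng(0)
x = rng.normal(size=4000)                 # white input signal
h = np.array([0.6, -0.3, 0.1])            # unknown FIR system to identify
d = np.convolve(x, h)[:len(x)]            # desired (reference) signal
w, e = lms(x, d, taps=3, mu=0.02)
```

With a noiseless reference, the weights converge to the unknown impulse response and the error decays toward zero; the step size mu trades convergence speed against stability, the point discussed in the abstract.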
Gelatin-Filtered Consomme: A Practical Demonstration of the Freezing and Thawing Processes
Lahne, Jacob B.; Schmidt, Shelly J.
2010-01-01
Freezing is a key food processing and preservation technique widely used in the food industry. Application of best freezing and storage practices extends the shelf-life of foods for several months, while retaining much of the original quality of the fresh food. During freezing, as well as its counterpart process, thawing, a number of critical…
2018-01-01
Statistical analysis is the mathematical science dealing with the analysis of data. In diagnostic vibrational monitoring applications, the statistical techniques used are mainly those employed for alarm purposes in industrial plants. This report is the result of applying morphological image and statistical processing techniques to the energy…
Directory of Open Access Journals (Sweden)
Woźniak Arkadiusz
2015-12-01
Determining how efficiently the filtration systems used in the production of breathing air for hyperbaric environments operate is significant from both theoretical and practical points of view. The quality of breathing air, and of the breathing mixes based on air, is crucial to divers' safety. Paradoxically, a change in the regulations on quality requirements for breathing mixes has imposed the necessity to verify both the technical equipment and the laboratory procedures used in their production and verification. The following material, which is a continuation of previous publications, presents the results of the conducted research along with an evaluation of the effectiveness of the filtration systems used by the Polish Navy.
Application of random-point processes to the detection of radiation sources
International Nuclear Information System (INIS)
Woods, J.W.
1978-01-01
In this report the mathematical theory of random-point processes is reviewed, and it is shown how the theory can be used to obtain optimal solutions to the problem of detecting radiation sources. As noted, the theory also applies to image processing in low-light-level or low-count-rate situations. Paralleling Snyder's work, the theory is extended to the multichannel case of a continuous, two-dimensional (2-D), energy-time space. This extension essentially involves showing that the data are doubly stochastic Poisson (DSP) point processes in energy as well as time. Further, a new 2-D recursive formulation is presented for the radiation-detection problem, with large computational savings over nonrecursive techniques when the number of channels is large (greater than or equal to 30). Finally, some adaptive strategies for on-line "learning" of unknown, time-varying signal and background-intensity parameters and statistics are presented and discussed. These adaptive procedures apply when a complete statistical description is not available a priori.
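For Poisson count data, the core detection statistic is a log-likelihood ratio between a "source present" rate and a "background only" rate. A minimal single-channel sketch (the rates, exposure time, and representative counts below are made-up illustrative values, far simpler than the report's multichannel DSP formulation):

```python
import numpy as np

def poisson_llr(counts, t, b, s):
    """Log-likelihood ratio for Poisson rate (b + s) vs background rate b,
    given `counts` events observed over exposure time t."""
    return counts * np.log((b + s) / b) - s * t

# illustrative rates: background 2.0 counts/s, source adds 1.5 counts/s, 10 s dwell
t, b, s = 10.0, 2.0, 1.5
llr_bg_like = poisson_llr(20, t, b, s)    # counts near the background mean b*t
llr_src_like = poisson_llr(35, t, b, s)   # counts near the source mean (b+s)*t
```

Declaring "source present" when the ratio exceeds a threshold is the Neyman-Pearson optimal test for this simple fixed-rate case; the report's recursive estimators generalize this to time-varying, multichannel intensities.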
Nuclear binding around the RP-process waiting points $^{68}$Se and $^{72}$Kr
2002-01-01
Encouraged by the success of mass determinations of nuclei close to the Z=N line performed at ISOLTRAP during the year 2000 and by the recent decay spectroscopy studies on neutron-deficient Kr isotopes (IS351 collaboration), we aim to measure masses and proton separation energies of the bottleneck nuclei defining the flow of the astrophysical rp-process beyond A$\sim$70. In detail, the program includes mass measurements of the rp-process waiting point nuclei $^{68}$Se and $^{72}$Kr and the determination of proton separation energies of the proton-unbound $^{69}$Br and $^{73}$Rb via $\beta$-decays of $^{69}$Kr and $^{73}$Sr, respectively. The aim of the project is to complete the experimental database for astrophysical network calculations and for the liquid-drop type of mass models typically used in modelling the astrophysical rp-process in this region. The first beamtime is scheduled for August 2001, with the aim of measuring the absolute mass of the waiting-point nucleus $^{72}$Kr.
Assessment of Peer Mediation Process from Conflicting Students’ Point of Views
Directory of Open Access Journals (Sweden)
Fulya TÜRK
2016-12-01
The purpose of this study was to analyze a peer mediation process applied in a high school from the conflicting students' points of view. The research was carried out in a high school in Denizli. After ten sessions of training in peer mediation, peer mediators mediated their peers' real conflicts. In the research, 41 students (28 girls, 13 boys) who had received help at least once were interviewed as parties to a conflict. Through semi-structured interviews with the conflicting students, the mediation process was evaluated from the students' points of view; eight questions were asked of the conflicting parties. The verbal data obtained from the interviews were analyzed using content analysis. The conflicting students' opinions and experiences show that they were satisfied with the process, that they resolved their conflicts in a constructive and peaceful way, and that their friendships continued as before. All of these results indicate that peer mediation is an effective method of resolving student conflicts constructively.
Ultra low-power biomedical signal processing : An analog wavelet filter approach for pacemakers
Pavlík Haddad, S.A.
2006-01-01
The purpose of this thesis is to describe novel signal processing methodologies and analog integrated circuit techniques for low-power biomedical systems. Physiological signals, such as the electrocardiogram (ECG), the electroencephalogram (EEG) and the electromyogram (EMG) are mostly non-stationary. The main difficulty in dealing with biomedical signal processing is that the information of interest is often a combination of features that are well localized temporally (e.g., spikes) and other…
Spatially assisted down-track median filter for GPR image post-processing
Paglieroni, David W; Beer, N Reginald
2014-10-07
A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
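A down-track median filter of the kind referred to in the title is commonly used to suppress horizontal bands (such as the ground bounce) that stay nearly constant along the track; a minimal sketch on a synthetic B-scan (the patent's spatially assisted variant is more elaborate than this illustration):

```python
import numpy as np

def remove_down_track_background(bscan):
    """Subtract, at each depth sample, the median value along the down-track
    axis. Bands that persist across the whole track are suppressed while
    localized target responses survive, because a compact object barely
    moves the median of its row."""
    background = np.median(bscan, axis=1, keepdims=True)
    return bscan - background

# synthetic B-scan: rows are depth samples, columns are down-track positions
bscan = np.zeros((64, 128))
bscan[10, :] = 5.0          # surface reflection, identical at every position
bscan[30, 60:68] = 2.0      # compact subsurface object
clean = remove_down_track_background(bscan)
```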
Hartman, Brian Davis
1995-01-01
A key drawback to estimating geodetic and geodynamic parameters over time based on satellite laser ranging (SLR) observations is the inability to accurately model all the forces acting on the satellite. Errors associated with the observations and the measurement model can detract from the estimates as well. These 'model errors' corrupt the solutions obtained from the satellite orbit determination process. Dynamical models for satellite motion utilize known geophysical parameters to mathematically detail the forces acting on the satellite. However, these parameters, while estimated as constants, vary over time. These temporal variations must be accounted for in some fashion to maintain meaningful solutions. The primary goal of this study is to analyze the feasibility of using a sequential process noise filter for estimating geodynamic parameters over time from the Laser Geodynamics Satellite (LAGEOS) SLR data. This evaluation is achieved by first simulating a sequence of realistic LAGEOS laser ranging observations. These observations are generated using models with known temporal variations in several geodynamic parameters (along track drag and the J(sub 2), J(sub 3), J(sub 4), and J(sub 5) geopotential coefficients). A standard (non-stochastic) filter and a stochastic process noise filter are then utilized to estimate the model parameters from the simulated observations. The standard non-stochastic filter estimates these parameters as constants over consecutive fixed time intervals. Thus, the resulting solutions contain constant estimates of parameters that vary in time which limits the temporal resolution and accuracy of the solution. The stochastic process noise filter estimates these parameters as correlated process noise variables. As a result, the stochastic process noise filter has the potential to estimate the temporal variations more accurately since the constraint of estimating the parameters as constants is eliminated. A comparison of the temporal
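The difference between estimating a parameter as a constant and as correlated process noise can be illustrated with a scalar random-walk Kalman filter (a toy sketch with made-up noise levels, not the LAGEOS orbit-determination filter):

```python
import numpy as np

def kalman_random_walk(obs, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter whose state follows a random walk with process
    noise variance q, so the estimate is free to track a drifting parameter
    instead of being constrained to a constant."""
    x, p = x0, p0
    estimates = []
    for z in obs:
        p = p + q                  # predict: the parameter may have drifted
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # update with the new observation
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(0)
truth = np.cumsum(0.05 * rng.normal(size=500))   # slowly drifting parameter
obs = truth + 0.5 * rng.normal(size=500)         # noisy observations
est = kalman_random_walk(obs, q=0.05 ** 2, r=0.5 ** 2)
```

Setting q = 0 recovers the constant-parameter estimator the abstract contrasts against: the gain then shrinks toward zero and the filter stops responding to genuine temporal variation.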
On the estimation of the spherical contact distribution Hs(y) for spatial point processes
International Nuclear Information System (INIS)
Doguwa, S.I.
1990-08-01
Ripley (1977, Journal of the Royal Statistical Society B, 39, 172-212) proposed an estimator for the spherical contact distribution H_s(y) of a spatial point process observed in a bounded planar region. However, this estimator is not defined for some distances of interest in this bounded region. A new estimator for H_s(y) is proposed for use with a regular grid of sampling locations. This new estimator is defined for all distances of interest. It also appears to have a smaller bias and a smaller mean squared error than the previously suggested alternative. (author). 11 refs, 4 figs, 1 tab
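A naive empirical estimator of the spherical contact distribution over a regular grid of sampling locations can be sketched as follows (no edge correction is applied, unlike the estimators discussed in the paper; the pattern is a simulated uniform one):

```python
import numpy as np

def spherical_contact_cdf(points, grid, y):
    """Empirical H_s(y): the fraction of sampling locations whose distance
    to the nearest point of the pattern is at most y (no edge correction)."""
    diffs = grid[:, None, :] - points[None, :, :]
    nearest = np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)
    return float(np.mean(nearest <= y))

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(200, 2))        # a planar point pattern
gx, gy = np.meshgrid(np.linspace(0.1, 0.9, 20), np.linspace(0.1, 0.9, 20))
grid = np.column_stack([gx.ravel(), gy.ravel()])  # regular sampling grid
h05 = spherical_contact_cdf(pts, grid, 0.05)
```

For a homogeneous Poisson process of intensity λ the theoretical value is H_s(y) = 1 − exp(−λπy²), which gives a useful sanity check on simulated patterns.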
Analysing the distribution of synaptic vesicles using a spatial point process model
DEFF Research Database (Denmark)
Khanmohammadi, Mahdieh; Waagepetersen, Rasmus; Nava, Nicoletta
2014-01-01
…functionality by statistically modelling the distribution of the synaptic vesicles in two groups of rats: a control group subjected to sham stress and a stressed group subjected to a single acute foot-shock (FS) stress episode. We hypothesize that the synaptic vesicles have different spatial distributions in the two groups. The spatial distributions are modelled using spatial point process models with an inhomogeneous conditional intensity and repulsive pairwise interactions. Our results verify the hypothesis that the two groups have different spatial distributions.
DeWitt, Jessica D.; Warner, Timothy A.; Chirico, Peter G.; Bergstresser, Sarah E.
2017-01-01
For areas of the world that do not have access to lidar, fine-scale digital elevation models (DEMs) can be created photogrammetrically using globally available high-spatial-resolution stereo satellite imagery. The resultant DEM is best termed a digital surface model (DSM) because it includes the heights of surface features. In densely vegetated conditions, this inclusion can limit its usefulness in applications requiring a bare-earth DEM. This study explores the use of techniques designed for filtering lidar point clouds to mitigate the elevation artifacts caused by above-ground features, within the context of a case study of Prince William Forest Park, Virginia, USA. The influences of land cover and of leaf-on vs. leaf-off conditions are investigated; the accuracy of the raw photogrammetric DSM extracted from leaf-on imagery was between that of a lidar bare-earth DEM and the Shuttle Radar Topography Mission DEM. Although the filtered leaf-on photogrammetric DEM retains some artifacts of the vegetation canopy and may not be useful for some applications, the filtering procedures significantly improved the accuracy of the modeled terrain. The accuracy of the DSM extracted in leaf-off conditions was comparable in most areas to the lidar bare-earth DEM, and the filtering procedures resulted in accuracy comparable to that of the lidar DEM.
Page, Ralph H.; Doty, Patrick F.
2017-08-01
The various technologies presented herein relate to a tiled filter array that can be used in connection with performance of spatial sampling of optical signals. The filter array comprises filter tiles, wherein a first plurality of filter tiles are formed from a first material, the first material being configured such that only photons having wavelengths in a first wavelength band pass therethrough. A second plurality of filter tiles is formed from a second material, the second material being configured such that only photons having wavelengths in a second wavelength band pass therethrough. The first plurality of filter tiles and the second plurality of filter tiles can be interspersed to form the filter array comprising an alternating arrangement of first filter tiles and second filter tiles.
The Impact of the Delivery of Prepared Power Point Presentations on the Learning Process
Directory of Open Access Journals (Sweden)
Auksė Marmienė
2011-04-01
This article describes the process of preparing and delivering Power Point presentations and how it can be used by teachers as a resource for classroom teaching. The advantages of this classroom activity are outlined, covering some of the problems and providing a few suggestions for dealing with those difficulties. The major objective of the present paper is to investigate the students' ability to choose the material and content of Power Point presentations on professional topics via the Internet, as well as their ability to prepare and deliver a presentation in front of an audience. The factors which determine the choice of the presentation subject are also analysed in this paper. After the delivery, students were asked to self- and peer-assess the difficulties they faced in the preparation and performance of the presentations by writing reports. Learners' attitudes to the choice of topic for Power Point presentations were surveyed by administering a self-assessment questionnaire.
Directory of Open Access Journals (Sweden)
Nelson Gutiérrez Guzmán
2014-12-01
In order to evaluate the current operating conditions of the wastewater treatment systems of small-scale coffee growers in the south of Huila, a lab-scale prototype (scale 1:25) was constructed, composed of a sedimentation tank and a filter fitted in series, simulating operating conditions similar to those used by coffee producers. Removal of biological oxygen demand (BOD5) and suspended solids (SS) was measured in wastewater from coffee bean processing. A 2³ factorial experimental design was employed to evaluate the type of sedimentation tank, the type of filter, and the hydraulic retention time (HRT) in the sedimentation tank. The results showed high removal efficiencies for suspended solids (more than 95%) and low removal efficiencies for BOD5 (about 20%). The combination of tank type 1 (square, with a smaller area), filter type 1 (upflow anaerobic filter, UAF) and an HRT of 30 hours had the highest removal efficiency.
Tavakkoli Estahbanat, A.; Dehghani, M.
2017-09-01
In interferometry, phases are wrapped into the interval 0-2π. Finding the number of integer phase cycles lost when the phases were wrapped is the main goal of unwrapping algorithms. Although the density of points in conventional interferometry is high, this does not help in some cases, such as large temporal baselines or noisy interferograms: because of the noisy pixels, the high density not only fails to improve the results but can introduce unwrapping errors during interferogram unwrapping. In the Persistent Scatterer (PS) technique, the sparsity of the PS pixels makes phase unwrapping harder still, and because the data are irregularly spaced, conventional methods are unsuitable. Unwrapping techniques are divided into path-independent and path-dependent classes according to their unwrapping paths; a region-growing method, which is path-dependent, has been used to unwrap PS data. In this paper, the idea of the extended Kalman filter (EKF) is generalized to PS data. The algorithm accounts for the nonlinearity of the PS unwrapping problem, as in the conventional unwrapping problem. A pulse-pair method enhanced with singular value decomposition (SVD) is used to estimate the spectral shift from the interferometric power spectral density in 7×7 local windows. Furthermore, a hybrid cost map is used to manage the unwrapping path. The algorithm was implemented on simulated PS data: a few points from a regular grid were randomly selected to form a sparse dataset, and the RMSE between the results and the true unambiguous phases is presented to validate the approach. The results of this algorithm and the true unwrapped phases were identical.
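The basic idea of phase unwrapping, recovering the integer number of 2π cycles lost by wrapping, can be illustrated in one dimension (a classical Itoh-style sketch; the EKF-based region-growing algorithm for sparse 2-D PS data described above is far more involved):

```python
import numpy as np

def unwrap_1d(wrapped):
    """Itoh's method: whenever consecutive samples jump by more than pi,
    shift the remainder of the signal by the appropriate multiple of 2*pi.
    Valid only if the true phase changes by less than pi between samples."""
    out = np.array(wrapped, dtype=float)
    for i in range(1, len(out)):
        jump = out[i] - out[i - 1]
        out[i:] -= 2.0 * np.pi * np.round(jump / (2.0 * np.pi))
    return out

true_phase = np.linspace(0.0, 8.0 * np.pi, 400)  # smooth, slowly growing phase
wrapped = np.mod(true_phase, 2.0 * np.pi)        # what the interferogram gives
recovered = unwrap_1d(wrapped)
```

The "less than π between samples" assumption is exactly what breaks down for sparse, noisy PS pixels, which is why path management and statistical filtering become necessary in that setting.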
Directory of Open Access Journals (Sweden)
Mokhtar Mahdavi
2018-04-01
Spent filter backwash water (SFBW) reuse has attracted particular attention, especially in countries that experience water scarcity, as it provides a steady water source for as long as the water treatment plant is operating. In this study, the concentrations of Fe, Al, Pb, As, and Cd, together with total and fecal coliforms (TC/FC), were investigated in raw SFBW and in SFBW treated by hybrid coagulation-UF processes. The pilot plant consisted of pre-sedimentation, coagulation, flocculation, clarification, and ultrafiltration (UF) units. Poly-aluminum ferric chloride (PAFCl) and ferric chloride (FeCl3) were used as pretreatment. The results showed that, at the optimum dose of PAFCl, the average removal of TC and FC was 88 and 79%, rising to 100% for both with the PAFCl-UF process. For FeCl3, the removal efficiencies of TC and FC were 81 and 72%, likewise reaching 100% for both with the FeCl3-UF process. In comparison with FeCl3, PAFCl showed better removal efficiency for Fe, Pb, As, and Cd, with the exception of the residual Al concentration. The coagulation-UF process could treat SFBW efficiently, and the treated SFBW met the US-EPA drinking water standard. Health risk index values of Fe, Al, Pb, As, and Cd in treated SFBW indicate no risk from the use of this water.
Ritchey, Maureen; McCullough, Andrew M; Ranganath, Charan; Yonelinas, Andrew P
2017-01-01
Acute stress has been shown to modulate memory for recently learned information, an effect attributed to the influence of stress hormones on medial temporal lobe (MTL) consolidation processes. However, little is known about which memories will be affected when stress follows encoding. One possibility is that stress interacts with encoding processes to selectively protect memories that had elicited responses in the hippocampus and amygdala, two MTL structures important for memory formation. There is limited evidence for interactions between encoding processes and consolidation effects in humans, but recent studies of consolidation in rodents have emphasized the importance of encoding "tags" for determining the impact of consolidation manipulations on memory. Here, we used functional magnetic resonance imaging in humans to test the hypothesis that the effects of post-encoding stress depend on MTL processes observed during encoding. We found that changes in stress hormone levels were associated with an increase in the contingency of memory outcomes on hippocampal and amygdala encoding responses. That is, for participants showing high cortisol reactivity, memories became more dependent on MTL activity observed during encoding, thereby shifting the distribution of recollected events toward those that had elicited relatively high activation. Surprisingly, this effect was generally larger for neutral memories than for emotionally negative ones. The results suggest that stress does not uniformly enhance memory, but instead selectively preserves memories tagged during encoding, effectively acting as a mnemonic filter. © 2016 Wiley Periodicals, Inc.
Process for quality assurance of welded joints for electrical resistance point welding
International Nuclear Information System (INIS)
Schaefer, R.; Singh, S.
1977-01-01
In order to guarantee reproducible welded joints of even quality (above all in the metal-working industry), it is proposed that, before starting resistance point welding, a preheating current be allowed to flow at the site of the weld. A given reduction of the total resistance at the weld site determines the moment at which the preheating current is switched over to welding current; this value is predetermined empirically. Further possibilities for controlling the welding process are described, in which the measurement of thermal expansion of the parts is used. A standard welding time is given. The nominal course of electrode movement during the process can be predicted, and a running comparison of nominal and actual values can be carried out. (RW) [de
Implementation of 5S tools as a starting point in business process reengineering
Directory of Open Access Journals (Sweden)
Vorkapić Miloš
2017-01-01
Full Text Available The paper deals with the analysis of elements which represent a starting point in the implementation of business process reengineering. We have used Lean tools through the analysis of the 5S model in our research. On the example of finalization of the finished transmitter in IHMT-CMT production, 5S tools were implemented with a focus on quality elements, although theory holds that BPR and TQM are two opposing activities in an enterprise. We wanted to highlight the significance of employees' self-discipline, which allows product finalization to proceed on time and without waste or losses. In addition, the employees keep their workplace clean, tidy and functional.
A generalized adaptive mathematical morphological filter for LIDAR data
Cui, Zheng
Airborne Light Detection and Ranging (LIDAR) technology has become the primary method to derive high-resolution Digital Terrain Models (DTMs), which are essential for studying Earth's surface processes, such as flooding and landslides. The critical step in generating a DTM is to separate ground and non-ground measurements in a voluminous LIDAR point dataset, using a filter, because the DTM is created by interpolating ground points. As one of the most widely used filtering methods, the progressive morphological (PM) filter has the advantages of classifying the LIDAR data at the point level, a linear computational complexity, and preserving the geometric shapes of terrain features. The filter works well in an urban setting with a gentle slope and a mixture of vegetation and buildings. However, the PM filter often removes ground measurements incorrectly in topographically high areas, along with large non-ground objects, because it uses a constant threshold slope, resulting in "cut-off" errors. A novel cluster analysis method was developed in this study and incorporated into the PM filter to prevent the removal of ground measurements at topographic highs. Furthermore, to obtain optimal filtering results for an area with undulating terrain, a trend analysis method was developed to adaptively estimate the slope-related thresholds of the PM filter based on changes in topographic slope and the characteristics of non-terrain objects. The comparison of the PM and generalized adaptive PM (GAPM) filters for selected study areas indicates that the GAPM filter preserves most of the "cut-off" points removed incorrectly by the PM filter. The application of the GAPM filter to seven ISPRS benchmark datasets shows that the GAPM filter reduces the filtering error by 20% on average, compared with the method used by the popular commercial software TerraScan. The combination of the cluster method, adaptive trend analysis, and the PM filter allows users without much experience in
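The progressive-opening idea behind the PM filter can be sketched compactly. Below is a minimal, illustrative Python version operating on a gridded elevation model; the window sizes, cell size, and threshold parameters are assumptions for illustration, not the values used in the study:

```python
# Minimal sketch of progressive morphological (PM) ground filtering on a
# gridded LIDAR surface. Windows/thresholds here are illustrative assumptions.
import numpy as np
from scipy.ndimage import grey_opening

def pm_filter(z, cell=1.0, windows=(3, 9, 21), slope=0.3, dh0=0.3, dh_max=2.5):
    """Return a boolean ground mask for a 2-D elevation grid z."""
    ground = np.ones(z.shape, dtype=bool)
    surface = z.copy()
    for w in windows:
        opened = grey_opening(surface, size=(w, w))
        # Elevation-difference threshold grows with window size and slope,
        # capped at dh_max (the "constant threshold slope" the text criticizes)
        dh = min(dh0 + slope * (w - 1) * cell, dh_max)
        ground &= (surface - opened) <= dh
        surface = opened
    return ground
```

A cell rejected at any window size (e.g., a building roof that vanishes under a large opening) is classified non-ground; flat terrain survives every pass.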
Energy Technology Data Exchange (ETDEWEB)
Mugendiran, V.; Gnanavelbabu, A. [Anna University, Chennai, Tamilnadu (India)
2017-06-15
In this study, a surface-based strain measurement was used to determine the formability of sheet metal. Strain measurement may employ manual calculation of plastic strains based on the reference circle and the deformed circle, but the manual method carries a greater margin of error in practical applications. In this paper, an attempt has been made to compare formability using three different approaches: the conventional method, the least-squares method, and digital image-based strain measurement. As the sheet metal was formed by a single-point incremental process, the etched circles deform into approximately elliptical shapes; image acquisition was performed before and after forming. The plastic strains of the deformed circle grids are calculated relative to the non-deformed reference, and the coordinates of the deformed circles are measured through a series of image processing steps. Finally, the strains obtained from the deformed circles are used to plot the forming limit diagram. To evaluate the accuracy of the system, the conventional, least-squares, and digital image-based predictions of the forming limit diagram were compared. The conventional and least-squares methods show marginal error compared with the digital image processing method. Strain measurement based on image processing agrees well and can be used to improve accuracy and reduce measurement error in predicting the forming limit diagram.
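The grid-strain computation described above reduces to two logarithms once the ellipse axes are measured. A minimal sketch (the circle diameter and ellipse axes below are illustrative values, not measurements from the paper):

```python
# True (logarithmic) strains of a deformed circle grid; inputs are
# illustrative, not data from the study.
import math

def grid_strains(d0, major, minor):
    """d0: diameter of the etched reference circle; major/minor: measured
    axes of the deformed ellipse (same units). Returns (e1, e2)."""
    e1 = math.log(major / d0)   # major principal strain
    e2 = math.log(minor / d0)   # minor principal strain
    return e1, e2

# Example: a 2.5 mm circle deformed to a 3.1 mm x 2.4 mm ellipse
e1, e2 = grid_strains(2.5, 3.1, 2.4)
```

Plotting many such (e2, e1) pairs, with failed grids marked, yields the forming limit diagram.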
Neutron capture at the s-process branching points $^{171}$Tm and $^{204}$Tl
Branching points in the s-process are very special isotopes for which there is a competition between neutron capture and the subsequent β-decay along the chain producing the heavy elements beyond Fe. Typically, knowledge of the associated capture cross sections is very poor, owing to the difficulty of obtaining enough material of these radioactive isotopes and of measuring the cross section of a sample with an intrinsic activity; indeed, only 2 out of the 21 s-process branching points have ever been measured using the time-of-flight method. In this experiment we aim at measuring for the first time the capture cross sections of $^{171}$Tm and $^{204}$Tl, both of crucial importance for understanding the nucleosynthesis of heavy elements in AGB stars. The combination of both (n,$\gamma$) measurements on $^{171}$Tm and $^{204}$Tl will allow one to accurately constrain the neutron density and the strength of the $^{13}$C(α,n) neutron source in low-mass AGB stars. Additionally, the cross section of $^{204}$Tl is also of cosmo-chrono...
Detection of bursts in extracellular spike trains using hidden semi-Markov point process models.
Tokdar, Surya; Xi, Peiyi; Kelly, Ryan C; Kass, Robert E
2010-08-01
Neurons in vitro and in vivo have epochs of bursting or "up state" activity during which firing rates are dramatically elevated. Various methods of detecting bursts in extracellular spike trains have appeared in the literature, the most widely used apparently being Poisson Surprise (PS). A natural description of the phenomenon assumes (1) there are two hidden states, which we label "burst" and "non-burst," (2) the neuron evolves stochastically, switching at random between these two states, and (3) within each state the spike train follows a time-homogeneous point process. If in (2) the transitions from non-burst to burst and burst to non-burst states are memoryless, this becomes a hidden Markov model (HMM). For HMMs, the state transitions follow exponential distributions and are highly irregular. Because observed bursting may in some cases be fairly regular, exhibiting inter-burst intervals with small variation, we relaxed this assumption. When more general probability distributions are used to describe the state transitions, the two-state point process model becomes a hidden semi-Markov model (HSMM). We developed an efficient Bayesian computational scheme to fit HSMMs to spike train data. Numerical simulations indicate the method can perform well, sometimes yielding very different results from those based on PS.
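As a point of reference, the Poisson Surprise statistic that the HSMM is compared against is straightforward to compute. A hedged sketch, assuming spike times in seconds and a homogeneous Poisson null model at the train's mean rate:

```python
# Poisson Surprise for a candidate burst: S = -log10 P(N >= n spikes in T)
# under a homogeneous Poisson null at the train's mean rate. Illustrative only.
import numpy as np
from scipy.stats import poisson

def poisson_surprise(spike_times, i, j):
    """Surprise of the candidate burst spanning spikes i..j (inclusive)."""
    spike_times = np.asarray(spike_times, dtype=float)
    rate = (len(spike_times) - 1) / (spike_times[-1] - spike_times[0])
    n = j - i + 1                       # spikes in the candidate burst
    T = spike_times[j] - spike_times[i] # its duration
    p = poisson.sf(n - 1, rate * T)     # P(at least n spikes in length T)
    return -np.log10(p)
```

A tight cluster of spikes yields a much larger surprise than an equal count spread over a background interval, which is the quantity PS-based detectors threshold.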
Neustifter, Benjamin; Rathbun, Stephen L; Shiffman, Saul
2012-01-01
Ecological Momentary Assessment is an emerging method of data collection in behavioral research that may be used to capture the times of repeated behavioral events on electronic devices, along with information on subjects' psychological states gathered through questionnaires administered electronically at times selected from a probability-based design as well as at the event times. A method for fitting a mixed Poisson point process model is proposed for the impact of partially observed, time-varying covariates on the timing of repeated behavioral events. A random frailty is included in the point-process intensity to describe variation among subjects in baseline rates of event occurrence. Covariate coefficients are estimated using estimating equations constructed by replacing the integrated intensity in the Poisson score equations with a design-unbiased estimator. An estimator is also proposed for the variance of the random frailties. Our estimators are robust in the sense that no model assumptions are made regarding the distribution of the time-varying covariates or the distribution of the random effects. However, subject effects are estimated under gamma frailties using an approximate hierarchical likelihood. The proposed approach is illustrated using smoking data.
Students’ Algebraic Thinking Process in Context of Point and Line Properties
Nurrahmi, H.; Suryadi, D.; Fatimah, S.
2017-09-01
Learning of school algebra is often limited to symbols and operating procedures, so students can solve problems that only require manipulating symbols but are unable to generalize a pattern, which is one part of algebraic thinking. The purpose of this study is to create a didactic design that facilitates students' algebraic thinking through the generalization of patterns, especially in the context of the properties of points and lines. This study used a qualitative method and follows Didactical Design Research (DDR). The result is that students are able to make factual, contextual, and symbolic generalizations. This happens because generalization arises from facts in local terms; the generalization then produces an algebraic formula that is described in the context and perspective of each student. After that, the formula uses algebraic letter symbols derived from the symbols in the students' own language. It can be concluded that the design facilitated students' algebraic thinking through the generalization of patterns, especially in the context of the properties of points and lines. The impact of this study is that the design can be used as alternative teaching material in learning school algebra.
The impact of ion exchange media and filters on LLW processing
International Nuclear Information System (INIS)
James, K.L.; Miller, C.C.
1992-01-01
Optimized ion exchange media at Diablo Canyon have steadily improved the treatment of radioactive liquid waste. The activity released to the environment has been reduced while simultaneously reducing the volume of solid radwaste generated from processing radioactive liquids. This has lowered liquid waste processing costs and reduced the number of radioactive shipments from the plant. A cobalt treatment technique was identified and successfully implemented prior to the reactor coolant chemistry alteration. A cesium treatment using zeolite has been successfully implemented. A cobalt removal treatment, combining series cation ion exchange with submicron filtration, has successfully removed cobalt after the reactor coolant chemistry alteration. A new carbon-based material will be monitored as a candidate medium for removing cobalt from high-conductivity liquids. (author)
Doutsi, Effrosyni; Fillatre, Lionel; Antonini, Marc; Gaulmin, Julien
2018-07-01
This paper introduces a novel filter, which is inspired by the human retina. The human retina consists of three different layers: the outer plexiform layer (OPL), the inner plexiform layer, and the ganglionic layer. Our inspiration is the linear transform which takes place in the OPL and has been mathematically described by the neuroscientific model "virtual retina." This model is the cornerstone from which we derive the non-separable spatio-temporal OPL retina-inspired filter, briefly called the retina-inspired filter, studied in this paper. This filter is connected to the dynamic behavior of the retina, which enables the retina to increase the sharpness of the visual stimulus during filtering, before its transmission to the brain. We establish that this retina-inspired transform forms a group of spatio-temporal Weighted Difference of Gaussian (WDoG) filters when it is applied to a still image visible for a given time. We analyze the spatial frequency bandwidth of the retina-inspired filter with respect to time, and show that the WDoG spectrum varies from a lowpass to a bandpass filter. Therefore, as time increases, the retina-inspired filter extracts different kinds of information from the input image. Finally, we discuss the benefits of using the retina-inspired filter in image processing applications such as edge detection and compression.
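The lowpass-to-bandpass transition can be illustrated with a purely spatial weighted difference of Gaussians whose surround weight grows with display time. The sigmas and the weight schedule below are assumptions for illustration, not the "virtual retina" model parameters:

```python
# Spatial weighted DoG whose surround weight grows with time t, moving the
# response from lowpass toward bandpass. Sigmas/schedule are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def wdog(image, t, sigma_c=1.0, sigma_s=3.0, tau=0.1):
    w_s = 1.0 - np.exp(-t / tau)            # surround weight grows with time
    center = gaussian_filter(image, sigma_c)
    surround = gaussian_filter(image, sigma_s)
    return center - w_s * surround
```

At t = 0 the output is just the center Gaussian blur (lowpass); for t much larger than tau the filter approaches a DoG bandpass that emphasizes edges, matching the described sharpening over time.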
Point process modeling and estimation: Advances in the analysis of dynamic neural spiking data
Deng, Xinyi
2016-08-01
A common interest of scientists in many fields is to understand the relationship between the dynamics of a physical system and the occurrences of discrete events within such a physical system. Seismologists study the connection between mechanical vibrations of the Earth and the occurrences of earthquakes so that future earthquakes can be better predicted. Astrophysicists study the association between the oscillating energy of celestial regions and the emission of photons to learn about the Universe's various objects and their interactions. Neuroscientists study the link between behavior and the millisecond-timescale spike patterns of neurons to understand higher brain functions. Such relationships can often be formulated within the framework of state-space models with point process observations. The basic idea is that the dynamics of the physical system are driven by the dynamics of some stochastic state variables, and the discrete events we observe in an interval are noisy observations with distributions determined by the state variables. This thesis proposes several new methodological developments that advance the framework of state-space models with point process observations at the intersection of statistics and neuroscience. In particular, we develop new methods 1) to characterize the rhythmic spiking activity using history-dependent structure, 2) to model population spike activity using marked point process models, 3) to allow for real-time decision making, and 4) to take into account the need for dimensionality reduction for high-dimensional state and observation processes. We applied these methods to a novel problem of tracking rhythmic dynamics in the spiking of neurons in the subthalamic nucleus of Parkinson's patients with the goal of optimizing placement of deep brain stimulation electrodes. We developed a decoding algorithm that can make decisions in real time (for example, to stimulate the neurons or not) based on various sources of information present in
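The state-space-with-point-process-observations idea can be made concrete with a minimal one-dimensional decoder in the spirit of a stochastic state point process filter: a random-walk prediction step followed by a Gaussian-approximation update from the spike observations. The place-field shape and all parameter values below are illustrative assumptions, not the thesis's models:

```python
# Minimal 1-D point process filter: random-walk state, one Gaussian place
# field, Gaussian approximation to the posterior. All parameters are assumed.
import numpy as np

def decode(spikes, dt=0.001, q=1e-4, mu=50.0, w=10.0, rate_max=20.0):
    """spikes: 0/1 spike indicators per time bin; returns the posterior-mean path."""
    x, v = 0.0, 1.0                        # posterior mean and variance
    path = []
    for dN in spikes:
        v = v + q                          # random-walk prediction
        lam = rate_max * np.exp(-0.5 * ((x - mu) / w) ** 2)
        dlog = -(x - mu) / w**2            # d/dx log lambda
        d2log = -1.0 / w**2                # d^2/dx^2 log lambda
        # Gaussian-approximation update of variance and mean
        v = 1.0 / (1.0 / v + dlog**2 * lam * dt - d2log * (dN - lam * dt))
        x = x + v * dlog * (dN - lam * dt)
        path.append(x)
    return np.array(path)
```

Each spike pulls the estimate toward the place-field center, while silent bins push it away in proportion to the expected rate, so the estimate settles where the model rate matches the observed rate.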
Nuclear structure and weak rates of heavy waiting point nuclei under rp-process conditions
Nabi, Jameel-Un; Böyükata, Mahmut
2017-01-01
The structure and the weak-interaction-mediated rates of the heavy waiting point (WP) nuclei 80Zr, 84Mo, 88Ru, 92Pd and 96Cd along the N = Z line were studied within the interacting boson model-1 (IBM-1) and the proton-neutron quasi-particle random phase approximation (pn-QRPA). The energy levels of the N = Z WP nuclei were calculated by fitting the essential parameters of the IBM-1 Hamiltonian, and their geometric shapes were predicted by plotting potential energy surfaces (PESs). Half-lives, continuum electron capture rates, positron decay rates, electron capture cross sections of WP nuclei, energy rates of β-delayed protons and their emission probabilities were later calculated using the pn-QRPA. The calculated Gamow-Teller strength distributions were compared with previous calculations. We present positron decay and continuum electron capture rates on these WP nuclei under rp-process conditions using the same model. For rp-process conditions, the calculated total weak rates are twice the Skyrme HF+BCS+QRPA rates for 80Zr; for the remaining nuclei the two calculations compare well. The electron capture rates are significant and compete well with the corresponding positron decay rates under rp-process conditions. The findings of the present study support the view that electron capture forms an integral part of the weak rates under rp-process conditions and plays an important role in nuclear model calculations.
Directory of Open Access Journals (Sweden)
Misganaw Abebe
2017-11-01
Full Text Available Springback in multi-point dieless forming (MDF) is a common problem because of the small deformation and the blank-holder-free boundary condition. Numerical simulations are widely used in sheet metal forming to predict springback; however, the computational cost of the numerical tools makes finding the optimal process parameter values expensive. This study proposes a radial basis function (RBF) surrogate to replace the numerical simulation model, using statistical analyses based on a design of experiments (DOE). Punch holding time, blank thickness, and curvature radius are chosen as the effective process parameters for determining springback. The Latin hypercube DOE method facilitates statistical analyses and the extraction of a prediction model over the experimental process parameter domain. A finite element (FE) simulation model is built in the ABAQUS commercial software to generate the springback responses of the training and testing samples. The genetic algorithm is applied to find the optimal values for reducing and compensating the induced springback for the different blank thicknesses using the developed RBF prediction model. Finally, the RBF result is verified by comparison with the FE simulation result at the optimal process parameters, and both results show that the springback deviation from the target shape is almost negligible.
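The surrogate-plus-search loop can be sketched in a few lines. Here a synthetic response stands in for the FE springback results, a dense Latin hypercube search stands in for the genetic algorithm, and the parameter ranges, kernel, and sample sizes are all assumptions:

```python
# Sketch: fit an RBF surrogate to DOE samples of (holding time, thickness,
# radius) -> springback, then search the surrogate for a minimum. The
# response function below is a synthetic stand-in for the FE simulations.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.stats import qmc

lo, hi = [5, 0.5, 100], [60, 2.0, 400]          # assumed parameter bounds
X = qmc.scale(qmc.LatinHypercube(d=3, seed=0).random(40), lo, hi)

def springback(x):                               # synthetic stand-in
    t_hold, th, r = x.T
    return 2.0 / th + r / 400.0 - 0.01 * t_hold

y = springback(X)
model = RBFInterpolator(X, y, kernel="thin_plate_spline")

# Crude search over a dense LHS for the minimum predicted springback
cand = qmc.scale(qmc.LatinHypercube(d=3, seed=1).random(2000), lo, hi)
best = cand[np.argmin(model(cand))]
```

In the paper the search step is a genetic algorithm and the responses come from ABAQUS runs; the surrogate is what makes that search cheap.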
International Nuclear Information System (INIS)
Charleston, B.D.; Beckman, F.H.; Franco, M.J.; Charleston, D.B.
1981-01-01
A versatile electronic-analogue image processing system has been developed for use in improving the quality of various types of images, with emphasis on those encountered in experimental and diagnostic medicine. The operational principle utilizes spatial filtering, which selectively controls the contrast of an image according to the spatial frequency content of relevant and non-relevant features of the image. Noise can be reduced or eliminated by selectively lowering the contrast of information in the high spatial frequency range. Edge sharpness can be enhanced by accentuating the upper mid-range spatial frequencies. Both methods of spatial frequency control may be adjusted continuously in the same image to obtain maximum visibility of the features of interest. A precision video camera is used to view medical diagnostic images, either prints, transparencies or CRT displays. The output of the camera provides the analogue input signal for both the electronic processing system and the video display of the unprocessed image. The video signal input to the electronic processing system is processed by a two-dimensional spatial convolution operation. The system employs charge-coupled devices (CCDs), both tapped analogue delay lines (TADs) and serial analogue delay lines (SADs), to store information in the form of analogue potentials which are constantly being updated as new sampled analogue data arrive at the input. This information is convolved with a programmed bipolar radially symmetrical hexagonal function which may be controlled and varied at each radius by the operator in real time by adjusting a set of front panel controls or by a programmed microprocessor control. Two TV monitors are used, one for processed image display and the other for constant reference to the original image. The working prototype has a full-screen display matrix size of 200 picture elements per horizontal line by 240 lines. The matrix can be expanded vertically and horizontally for the
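A digital counterpart of the described contrast control is easy to sketch: suppress the highest spatial frequencies (noise) and amplify the upper mid-band (edges). The sigmas and gain are assumptions for illustration, not the hardware's hexagonal kernel:

```python
# Digital analogue of the described spatial-frequency contrast control:
# attenuate the highest frequencies, boost the upper mid-band. Parameters assumed.
import numpy as np
from scipy.ndimage import gaussian_filter

def tune_contrast(img, noise_sigma=0.8, edge_sigma=2.5, edge_gain=1.5):
    smooth = gaussian_filter(img, noise_sigma)      # drop high-frequency noise
    blurred = gaussian_filter(img, edge_sigma)
    edges = smooth - blurred                        # upper mid-band detail
    return blurred + edge_gain * edges              # re-add, amplified
```

Setting edge_gain above 1 accentuates edges (visible as overshoot at step edges), while the initial smoothing plays the role of lowering contrast in the highest spatial frequencies.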
National Research Council Canada - National Science Library
Stoffell, Kevin M
2006-01-01
.... Performance and device utilization results between the Quadrature Mirror Filter Bank implemented in VHDL, design elements implemented in the C programming language, and calculations made using high...
Directory of Open Access Journals (Sweden)
2006-01-01
Full Text Available RF circuits for multi-GHz frequencies have recently migrated to low-cost digital deep-submicron CMOS processes. Unfortunately, this process environment, which is optimized only for digital logic and SRAM memory, is extremely unfriendly for conventional analog and RF designs. We present fundamental techniques recently developed that transform the RF and analog circuit design complexity to the digitally intensive domain for a wireless RF transceiver, so that it enjoys the benefits of digital and switched-capacitor approaches. Direct RF sampling techniques allow great flexibility in reconfigurable radio design. Digital signal processing concepts are used to help relieve analog design complexity, allowing one to reduce cost and power consumption in a reconfigurable design environment. The ideas presented have been used in Texas Instruments to develop two generations of commercial digital RF processors: a single-chip Bluetooth radio and a single-chip GSM radio. We further present details of the RF receiver front end for a GSM radio realized in a 90-nm digital CMOS technology. The circuit, consisting of a low-noise amplifier, a transconductance amplifier, and a switching mixer, offers a 32.5 dB dynamic range with digitally configurable voltage gain from 40 dB down to 7.5 dB. A series of decimation and discrete-time filtering follows the mixer and performs a highly linear second-order lowpass filtering to reject close-in interferers. The front-end gains can be configured with an automatic gain control to select an optimal setting to form a trade-off between noise figure and linearity and to compensate the process and temperature variations. Even under the digital switching activity, the noise figure at the 40 dB maximum gain is 1.8 dB, and IIP2 is +50 dBm at the 34 dB gain. The variation of the input matching versus multiple gains is less than 1 dB. The circuit in total occupies 3.1 mm². The LNA, TA, and mixer consume less than 15.3 mA at a supply voltage of 1.4 V.
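The decimation-plus-lowpass step after the mixer can be modeled behaviorally. This Python sketch is not the actual switched-capacitor silicon: the sample rates, tone frequencies, and the Butterworth stand-in for the discrete-time second-order lowpass are all assumptions:

```python
# Behavioral model of the receive chain's discrete-time step: decimate the
# sampled stream, then second-order lowpass to reject a close-in interferer.
import numpy as np
from scipy.signal import butter, lfilter

fs = 1.0e6                                   # assumed post-sampling rate
t = np.arange(4096) / fs
x = np.sin(2*np.pi*10e3*t) + 0.5*np.sin(2*np.pi*200e3*t)   # wanted + interferer

dec = 4
y = x[::dec]                                 # naive decimation (real designs filter first)
b, a = butter(2, 20e3, fs=fs/dec)            # 2nd-order lowpass stand-in at 20 kHz
y = lfilter(b, a, y)
```

After filtering, the 10 kHz wanted tone dominates while the 200 kHz interferer (and its alias) is strongly attenuated, which is the role the chip's discrete-time filter plays for close-in blockers.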
Directory of Open Access Journals (Sweden)
Ho Yo-Chuol
2006-01-01
Gerhard, Felipe; Deger, Moritz; Truccolo, Wilson
2017-02-01
Point process generalized linear models (PP-GLMs) provide an important statistical framework for modeling spiking activity in single neurons and neuronal networks. Stochastic stability is essential when sampling from these models, as done in computational neuroscience to analyze statistical properties of neuronal dynamics and in neuro-engineering to implement closed-loop applications. Here we show, however, that despite passing common goodness-of-fit tests, PP-GLMs estimated from data are often unstable, leading to divergent firing rates. The inclusion of absolute refractory periods is not a satisfactory solution, since the activity then typically settles into unphysiological rates. To address these issues, we derive a framework for determining the existence and stability of fixed points of the expected conditional intensity function (CIF) for general PP-GLMs. Specifically, in nonlinear Hawkes PP-GLMs, the CIF is expressed as a function of the previous spike history and exogenous inputs. We use a mean-field quasi-renewal (QR) approximation that decomposes spike history effects into the contribution of the last spike and an average of the CIF over all spike histories prior to the last spike. Fixed points for stationary rates are derived as self-consistent solutions of integral equations. Bifurcation analysis and the number of fixed points predict that the original models can show stable, divergent, and metastable (fragile) dynamics. For fragile models, fluctuations of the single-neuron dynamics predict expected divergence times after which rates approach unphysiologically high values. This metric can be used to estimate the probability of rates to remain physiological for given time periods, e.g., for simulation purposes. We demonstrate the use of the stability framework using simulated single-neuron examples and neurophysiological recordings. Finally, we show how to adapt PP-GLM estimation procedures to guarantee model stability. Overall, our results provide a
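The self-consistency idea behind the stationary-rate analysis can be sketched for the simplest case: a nonlinear Hawkes model with an exponential link, where an expected stationary rate solves lam = f(b + J*lam), with J the integrated history kernel. The link, baseline, and kernel value below are assumptions, not the paper's quasi-renewal equations:

```python
# Fixed-point sketch for a nonlinear Hawkes PP-GLM: stationary rates solve
# lam = f(b + J*lam). Link f, baseline b, and integrated kernel J are assumed.
import numpy as np

def f(u):                          # exponential link, common in PP-GLMs
    return np.exp(u)

def stationary_rate(b, J, lam0=1.0, iters=200):
    lam = lam0
    for _ in range(iters):         # plain fixed-point iteration
        lam = f(b + J * lam)
    return lam

# Net-inhibitory feedback (J < 0): the iteration settles on a finite rate.
lam_star = stationary_rate(b=1.0, J=-0.5)
# Local stability of the fixed point requires |f'(b + J*lam_star) * J| < 1,
# i.e., |lam_star * J| < 1 for the exponential link.
```

With net-excitatory feedback (J > 0) large enough, no finite self-consistent rate exists and the rate diverges, which is the instability the paper's bifurcation analysis diagnoses.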
Using a micro-molding process to fabricate polymeric wavelength filters
Chuang, Wei-Ching; Lee, An-Chen; Ho, Chi-Ting
2008-08-01
A procedure for fabricating a high-aspect-ratio periodic structure on a UV polymer at submicron order using holographic interferometry and molding processes is described. First, holographic interferometry with a He-Cd (325 nm) laser was used to create the master of the periodic line structure on an i-line submicron positive photoresist film. A 20 nm nickel thin film was then sputtered onto the photoresist. The final line pattern on a UV polymer was obtained by casting against the master mold. Finally, a SU8 polymer was spun onto the polymer grating to form a planar or channel waveguide. The measurement results show that the waveguide length could be reduced for a waveguide having gratings with a high aspect ratio.
Valenza, G; Romigi, A; Citi, L; Placidi, F; Izzi, F; Albanese, M; Scilingo, E P; Marciani, M G; Duggento, A; Guerrisi, M; Toschi, N; Barbieri, R
2016-08-01
Symptoms of temporal lobe epilepsy (TLE) are frequently associated with autonomic dysregulation, whose underlying biological processes are thought to strongly contribute to sudden unexpected death in epilepsy (SUDEP). While abnormal cardiovascular patterns commonly occur during ictal events, putative patterns of autonomic cardiac effects during pre-ictal (PRE) periods (i.e. periods preceding seizures) are still unknown. In this study, we investigated TLE-related heart rate variability (HRV) through instantaneous, nonlinear estimates of cardiovascular oscillations during inter-ictal (INT) and PRE periods. ECG recordings from 12 patients with TLE were processed to extract standard HRV indices, as well as indices of instantaneous HRV complexity (dominant Lyapunov exponent and entropy) and higher-order statistics (bispectra) obtained through definition of inhomogeneous point-process nonlinear models, employing Volterra-Laguerre expansions of linear, quadratic, and cubic kernels. Experimental results demonstrate that the best INT vs. PRE classification performance (balanced accuracy: 73.91%) was achieved only when retaining the time-varying, nonlinear, and non-stationary structure of heartbeat dynamical features. The proposed approach opens novel important avenues in predicting ictal events using information gathered from cardiovascular signals exclusively.
2011-11-01
Using a multidisciplinary team approach, the University of California, San Diego, Health System has been able to significantly reduce average door-to-balloon angioplasty times for patients with the most severe form of heart attacks, beating national recommendations by more than a third. The multidisciplinary team meets monthly to review all cases involving patients with ST-segment-elevation myocardial infarctions (STEMI) to see where process improvements can be made. Using this continuous quality improvement (CQI) process, the health system has reduced average door-to-balloon times from 120 minutes to less than 60 minutes, and administrators are now aiming for further progress. Among the improvements instituted by the multidisciplinary team are the implementation of a "greeter" with enough clinical expertise to quickly pick up on potential STEMI heart attacks as soon as patients walk into the ED, and the purchase of an electrocardiogram (EKG) machine so that evaluations can be done in the triage area. ED staff have prepared "STEMI" packets, including items such as special IV tubing and disposable leads, so that patients headed for the catheterization laboratory are prepared to undergo the procedure soon after arrival. All the clocks and devices used in the ED are synchronized so that analysts can later review how long it took to complete each step of the care process. Points of delay can then be targeted for improvement.
Directory of Open Access Journals (Sweden)
Zhe eChen
2012-02-01
In recent years, time-varying inhomogeneous point process models have been introduced for the assessment of instantaneous heartbeat dynamics, as well as specific cardiovascular control mechanisms and hemodynamics. Assessment of the model's statistics is established through the Wiener-Volterra theory and a multivariate autoregressive (AR) structure. A variety of instantaneous cardiovascular metrics, such as heart rate (HR), heart rate variability (HRV), respiratory sinus arrhythmia (RSA), and baroreceptor-cardiac reflex (baroreflex) sensitivity (BRS), are derived within a parametric framework and instantaneously updated with adaptive and local maximum likelihood estimation algorithms. Inclusion of second-order nonlinearities, with subsequent bispectral quantification in the frequency domain, further allows for the definition of instantaneous metrics of nonlinearity. Here we present a comprehensive review of the devised methods as applied to experimental recordings from healthy subjects during propofol anesthesia. Collective results reveal interesting dynamic trends across the different pharmacological interventions operated within each anesthesia session, confirming the ability of the algorithm to track important changes in the elicited cardiorespiratory interactions and pointing to our mathematical approach as a promising monitoring tool for accurate, noninvasive assessment in clinical practice.
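In this family of models, each RR interval is typically given a history-dependent probability density whose mean follows an AR combination of past intervals. The sketch below evaluates an inverse-Gaussian log-likelihood of that form on synthetic data; the AR coefficients, shape parameter, and data are all assumed for illustration, and no adaptive updating is shown:

```python
import numpy as np

def ig_loglik(rr, theta, shape):
    """Log-likelihood of RR intervals under an inverse-Gaussian model
    whose conditional mean follows an AR(p) of the previous intervals."""
    p = len(theta) - 1
    ll = 0.0
    for k in range(p, len(rr)):
        mu = theta[0] + np.dot(theta[1:], rr[k - p:k][::-1])  # AR mean
        w = rr[k]
        ll += 0.5 * np.log(shape / (2 * np.pi * w ** 3)) \
              - shape * (w - mu) ** 2 / (2 * mu ** 2 * w)
    return ll

rng = np.random.default_rng(9)
rr = 0.8 + 0.05 * rng.standard_normal(200)   # RR intervals in seconds
theta = np.array([0.4, 0.5])                 # intercept + one AR coefficient
ll_good = ig_loglik(rr, theta, shape=800.0)
ll_bad = ig_loglik(rr, np.array([0.1, 0.1]), shape=800.0)
```

A well-matched AR mean (here about 0.8 s) yields a visibly higher likelihood than a mismatched one, which is the signal the local maximum likelihood estimators exploit.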
Energy Technology Data Exchange (ETDEWEB)
Tsybulevski, A.M.; Pearson, M. [Alcoa Industrial Chemicals, 16010 Barker's Point Lane, Houston, TX (United States); Morgun, L.V.; Filatova, O.E. [All-Russian Research Institute of Natural Gases and Gas Technologies VNIIGAZ, Moscow (Russian Federation); Sharp, M. [Porocel Corporation, Westheimer, Houston, TX (United States)
1996-10-08
The efficiency of four samples of alumina catalyst has been studied experimentally in the course of Claus 'tail gas' treating processes at the sulphur sub-dew point (TGTP). The samples were characterized by the same chemical and crystallographic composition, the same volume of micropores, the same surface area and the same catalytic activity, but differed appreciably in the volume of macropores. An increase in the effective operation time of the catalysts before breakthrough of unrecoverable sulphur-containing compounds with increasing macropore volume has been established. A theoretical model of the TGTP has been considered, and it has been shown that the increase in the sulphur capacity of the catalysts with a larger volume of macropores is due to an increase in the catalysts' efficiency factor and a slower decrease in their diffusive permeability during the filling of micropores by sulphur.
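The efficiency factor invoked above can be illustrated with the textbook result for a first-order reaction in a spherical pellet, where the effectiveness factor depends only on the Thiele modulus; a larger macropore volume raises the effective diffusivity, lowering the modulus and raising the factor. This is the classic formula, not the paper's TGTP model:

```python
import numpy as np

def effectiveness_factor(phi):
    """Effectiveness factor of a first-order reaction in a spherical
    pellet as a function of the Thiele modulus phi (classic result):
    eta = (3/phi) * (1/tanh(phi) - 1/phi)."""
    return (3.0 / phi) * (1.0 / np.tanh(phi) - 1.0 / phi)

# Small phi (fast diffusion, e.g. large macropore volume) -> eta near 1;
# large phi (diffusion-limited) -> eta falls off as ~3/phi.
phi = np.linspace(0.1, 20, 200)
eta = effectiveness_factor(phi)
```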
Quantification of annual wildfire risk: A spatio-temporal point process approach.
Directory of Open Access Journals (Sweden)
Paula Pereira
2013-10-01
Policy responses for local and global fire management depend heavily on a proper understanding of the fire extent as well as its spatio-temporal variation across any given study area. Annual fire risk maps are important tools for such policy responses, supporting strategic decisions such as location-allocation of equipment and human resources. Here, we define the risk of fire in the narrow sense as the probability of its occurrence, without addressing the loss component. In this paper, we study the spatio-temporal point patterns of wildfires and model them by log Gaussian Cox processes. The mean of the predictive distribution of the random intensity function is then used, in this narrow sense, as the annual fire risk map for the next year.
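The mechanics of a log Gaussian Cox process can be sketched in a few lines: a Gaussian random field drives a log-intensity, and counts are Poisson given that intensity. The grid size, covariance range, and baseline below are assumed toy values, and the paper's covariate terms are omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

# Small spatial grid; an exponential covariance gives a smooth Gaussian field.
n = 15
xs, ys = np.meshgrid(np.arange(n), np.arange(n))
pts = np.column_stack([xs.ravel(), ys.ravel()])
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
cov = np.exp(-d / 3.0)                       # exponential correlation, range 3 cells
L = np.linalg.cholesky(cov + 1e-8 * np.eye(n * n))
z = L @ rng.standard_normal(n * n)           # one Gaussian field realization

mu = -1.0                                    # log-intensity baseline (covariates would enter here)
lam = np.exp(mu + z).reshape(n, n)           # random intensity of the Cox process
counts = rng.poisson(lam)                    # fire counts per cell, given the intensity
risk_map = 1.0 - np.exp(-lam)                # P(at least one event) per cell
```

In the paper's setting the risk map is the mean of the predictive intensity distribution rather than a single realization; averaging `risk_map` over many posterior draws of `z` would play that role.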
The (n, $\alpha$) reaction in the s-process branching point $^{59}$Ni
We propose to measure the $^{59}$Ni(n,$\alpha$)$^{56}$Fe cross section at the neutron time-of-flight (n_TOF) facility with a dedicated chemical vapor deposition (CVD) diamond detector. The (n,$\alpha$) reaction in radioactive $^{59}$Ni is of relevance in nuclear astrophysics, as it can be seen as a first branching point in the astrophysical s-process. Its relevance in nuclear technology is especially related to material embrittlement in stainless steel. There is a strong discrepancy between the available experimental data and the evaluated nuclear data files for this isotope. The aim of the measurement is to clarify this disagreement. The clear energy separation of the reaction products of neutron-induced reactions in $^{59}$Ni makes it a very suitable candidate for a first cross section measurement with the CVD diamond detector, which should serve in the future for similar measurements at n_TOF.
Aydin, Orhun; Caers, Jef Karel
2017-08-01
Faults are one of the building blocks of subsurface modeling studies. Incomplete observations of subsurface fault networks lead to uncertainty pertaining to the location, geometry and existence of faults. In practice, gaps in incomplete fault network observations are filled based on tectonic knowledge and the interpreter's intuition about fault relationships. Modeling fault network uncertainty with realistic models that represent tectonic knowledge is still a challenge. Although methods exist that address specific sources of fault network uncertainty and complexities of fault modeling, a unifying framework is still lacking. In this paper, we propose a rigorous approach to quantify fault network uncertainty. Fault pattern and intensity information are expressed by means of a marked point process, the marked Strauss point process. Fault network information is constrained to fault surface observations (complete or partial) within a Bayesian framework. A structural prior model is defined to quantitatively express fault patterns, geometries and relationships within the Bayesian framework. Structural relationships between faults, in particular fault abutting relations, are represented with a level-set based approach. A Markov chain Monte Carlo sampler is used to sample posterior fault network realizations that reflect tectonic knowledge and honor fault observations. We apply the methodology to a field study from the Nankai Trough & Kumano Basin. The target for uncertainty quantification is a deep site with attenuated seismic data, with only partially visible faults and many faults missing from the survey or interpretation. A structural prior model is built from shallow analog sites that are believed to have undergone similar tectonics compared to the site of study. Fault network uncertainty for the field is quantified with fault network realizations that are conditioned to structural rules, tectonic information and partially observed fault surfaces. We show the proposed
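The Strauss process underlying the prior above penalizes pairs of points closer than an interaction radius. A minimal birth-death Metropolis-Hastings sampler for the plain (unmarked, unconditioned) Strauss process on the unit square illustrates the mechanism; the paper's version additionally carries marks, level-set geometry, and conditioning on fault observations, none of which appear here, and the parameter values are assumed:

```python
import numpy as np

rng = np.random.default_rng(2)
beta, gamma, r = 50.0, 0.3, 0.08   # intensity, inhibition (gamma < 1), interaction radius

def close_pairs(pts, q):
    """Number of points of `pts` within distance r of point q."""
    if len(pts) == 0:
        return 0
    return int((np.linalg.norm(pts - q, axis=1) < r).sum())

pts = rng.random((10, 2))          # initial configuration on the unit square
for _ in range(5000):              # birth-death Metropolis-Hastings
    if rng.random() < 0.5:         # birth proposal: uniform new point
        q = rng.random(2)
        ratio = beta / (len(pts) + 1) * gamma ** close_pairs(pts, q)
        if rng.random() < min(1.0, ratio):
            pts = np.vstack([pts, q])
    elif len(pts) > 0:             # death proposal: uniformly chosen point
        i = rng.integers(len(pts))
        rest = np.delete(pts, i, axis=0)
        ratio = len(pts) / beta / gamma ** close_pairs(rest, pts[i])
        if rng.random() < min(1.0, ratio):
            pts = rest
```

With `gamma < 1` the chain settles on configurations whose points avoid each other at short range, which is how the prior expresses regular fault spacing.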
International Nuclear Information System (INIS)
Buerck, J.; Kraemer, K.; Koenig, W.
1990-02-01
The multicomponent version of the interference filter photometer SPECTRAN was adapted by radiation-resistant quartz glass optical fibers to in-line flow cells in the aqueous and organic bypass streams of a uranium laboratory extraction column. A combined photometric/electrolytical conductivity measurement allows this modified process instrument to be used as a uranium/plutonium in-line monitor in radioactive process streams. By applying a high-performance 100 W quartz halogen lamp and suitable light-focussing optics, the light intensity, attenuated by coupling losses, could be increased to the desired level even when 1000 μm single-strand fibers (2x18 m) were used to transmit the light. In a series of calibration experiments the U(VI)- and U(IV)-extinction coefficients were determined as a function of nitric acid molarity (for U(VI) also in TBP/kerosene). Furthermore, the validity of Lambert-Beer's law was examined for both oxidation states at different optical path lengths, and nitric acid/electrolytical conductivity calibration functions between 0-100 g/l U(VI) and 0-4 mol/l HNO3 were set up. (orig./EF)
DEFF Research Database (Denmark)
Møller, Jesper; Diaz-Avalos, Carlos
Spatio-temporal Cox point process models with a multiplicative structure for the driving random intensity, incorporating covariate information into temporal and spatial components, and with a residual term modelled by a shot-noise process, are considered. Such models are flexible and tractable fo...... dataset consisting of 2796 days and 5834 spatial locations of fires. The model is compared with a spatio-temporal log-Gaussian Cox point process model, and likelihood-based methods are discussed to some extent....
Directory of Open Access Journals (Sweden)
Zhiqiang Yang
2016-05-01
Due to the dynamic process of maximum power point tracking (MPPT) caused by turbulence and large rotor inertia, variable-speed wind turbines (VSWTs) cannot maintain the optimal tip speed ratio (TSR) from the cut-in wind speed up to the rated speed. Therefore, in order to increase the total captured wind energy, the existing aerodynamic design for VSWT blades, which only focuses on performance improvement at a single TSR, needs to be improved to a multi-point design. In this paper, based on a closed-loop system of VSWTs, including turbulent wind, rotor, drive train and MPPT controller, the distribution of operational TSR and its description based on inflow wind energy are investigated. Moreover, a multi-point method considering the MPPT dynamic process for the aerodynamic optimization of VSWT blades is proposed. In the proposed method, the distribution of operational TSR is obtained through a dynamic simulation of the closed-loop system under a specific turbulent wind, and accordingly the multiple design TSRs and the corresponding weighting coefficients in the objective function are determined. Finally, using the blade of a National Renewable Energy Laboratory (NREL) 1.5 MW wind turbine as the baseline, the proposed method is compared with the conventional single-point optimization method using the commercial software Bladed. Simulation results verify the effectiveness of the proposed method.
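The core observation, that rotor inertia makes the operational TSR spread around its optimum under turbulence, can be reproduced with a toy closed loop: an AR(1) wind series and a first-order rotor lag tracking the MPPT speed target. All numerical values (rotor radius, time constants, turbulence level) are assumed for illustration, not taken from the NREL 1.5 MW model:

```python
import numpy as np

rng = np.random.default_rng(3)

R, lam_opt = 35.0, 8.0          # rotor radius (m) and optimal TSR (assumed)
v_mean, dt, tau_rotor = 8.0, 0.1, 15.0

v = np.empty(20000)
v[0] = v_mean
for k in range(1, len(v)):      # AR(1) turbulent wind around the mean speed
    v[k] = v[k-1] + dt / 5.0 * (v_mean - v[k-1]) \
           + 0.4 * np.sqrt(dt) * rng.standard_normal()

omega = np.empty_like(v)
omega[0] = lam_opt * v[0] / R
for k in range(1, len(v)):      # large rotor inertia: omega lags the MPPT target
    target = lam_opt * v[k] / R
    omega[k] = omega[k-1] + dt / tau_rotor * (target - omega[k-1])

tsr = omega * R / v             # operational tip-speed ratio time series
```

A histogram of `tsr` is the operational TSR distribution from which the multiple design TSRs and their weights would be chosen.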
Directory of Open Access Journals (Sweden)
Juan C Salcedo
2011-12-01
The backwash process consists of passing water through the filter in the direction opposite to the filtration flow, in order to remove organic and inorganic particles retained in the filter medium. Inefficient sand filter designs and inadequate operating conditions limit the performance of this process, causing deficiencies in the cleaning of the filter media and compromising the operation of localized irrigation systems. The objective of the present work is to provide a review of the concepts associated with the backwash process in sand filters, relating information from the literature to laboratory experience. A basic text with technical and scientific information on the subject was produced, aiming to encourage reflection on the backwash process and to contribute to improving the performance of this equipment in localized irrigation.
Bove, Patricia; Claveau-Mallet, Dominique; Boutet, Étienne; Lida, Félix; Comeau, Yves
2018-02-01
The main objective of this project was to develop a steel slag filter effluent neutralization process by acidification with CO2-enriched air coming from a bioprocess. Sub-objectives were to evaluate the neutralization capacity of different configurations of neutralization units in lab-scale conditions and to propose a design model of steel slag effluent neutralization. Two lab-scale column neutralization units fed with two different types of influent were operated at a hydraulic retention time of 10 h. Tested variables were the mode of flow (saturated or percolating), type of media (none, gravel, Bionest and AnoxKaldnes K3), type of air (ambient or CO2-enriched) and airflow rate. One neutralization field test (saturated and no media, 2000-5000 ppm CO2, sequential feeding, hydraulic retention time of 7.8 h) was conducted for 7 days. Lab-scale and field-scale tests resulted in effluent pH of 7.5-9.5 when the aeration rate was sufficiently high. A model was implemented in the PHREEQC software, based on the carbonate system, CO2 transfer and calcite precipitation, and was calibrated on ambient-air lab tests. The model was validated with CO2-enriched air lab and field tests, providing satisfactory validation results over a wide range of CO2 concentrations. The flow mode had a major impact on CO2 transfer and hydraulic efficiency, while the type of media had little influence. The flow mode also had a major impact on the calcite surface concentration in the reactor: it was constant in saturated mode and increasing in percolating mode. Predictions could be made for different steel slag effluent pH and different operating conditions (hydraulic retention time, CO2 concentration, media and mode of flow). The pH of the steel slag filter effluent and the CO2 concentration of the enriched air were the factors that most influenced the effluent pH of the neutralization process. An increased CO2 concentration in the enriched air reduced calcite precipitation.
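The CO2-transfer component of such a model can be sketched as first-order gas-liquid transfer toward the Henry's-law saturation set by the enriched air. The transfer coefficient below is assumed, and the carbonate speciation and calcite precipitation handled by the paper's PHREEQC model are omitted:

```python
import numpy as np

# First-order gas-liquid transfer toward saturation with the supplied air.
kla = 2.0           # volumetric transfer coefficient (1/h), assumed
H = 0.034           # Henry's constant for CO2 (mol/L/atm), approximate
p_co2 = 3000e-6     # 3000 ppm CO2 in the enriched air (atm)
c_sat = H * p_co2   # equilibrium dissolved CO2 concentration

dt, T = 0.01, 10.0  # explicit Euler over 10 h
c = [0.0]
for _ in range(int(T / dt)):
    c.append(c[-1] + dt * kla * (c_sat - c[-1]))
c = np.array(c)
```

Raising `p_co2` raises `c_sat` proportionally, which is the lever the enriched air provides for driving the effluent pH down.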
Directory of Open Access Journals (Sweden)
Y. A. Bladyko
2010-01-01
The paper defines a smoothing factor which is suitable for any rectifier filter. Formulae for complex smoothing factors have been developed for simple and complex passive filters. Conditions for the application of the calculation formulae and the choice of filters are given.
Nanofiber Filters Eliminate Contaminants
2009-01-01
With support from Phase I and II SBIR funding from Johnson Space Center, Argonide Corporation of Sanford, Florida tested and developed its proprietary nanofiber water filter media. Capable of removing more than 99.99 percent of dangerous particles like bacteria, viruses, and parasites, the media was incorporated into the company's commercial NanoCeram water filter, an inductee into the Space Foundation's Space Technology Hall of Fame. In addition to its drinking water filters, Argonide now produces large-scale nanofiber filters used as part of the reverse osmosis process for industrial water purification.
Insights into mortality patterns and causes of death through a process point of view model.
Anderson, James J; Li, Ting; Sharrow, David J
2017-02-01
Process point of view (POV) models of mortality, such as the Strehler-Mildvan and stochastic vitality models, represent death in terms of the loss of survival capacity through challenges and dissipation. Drawing on hallmarks of aging, we link these concepts to candidate biological mechanisms through a framework that defines death as challenges to vitality, where distal factors define the age-evolution of vitality and proximal factors define the probability distribution of challenges. To illustrate the process POV, we hypothesize that the immune system is a mortality nexus, characterized by two vitality streams: increasing vitality representing immune system development and immunosenescence representing vitality dissipation. Proximal challenges define three mortality partitions: juvenile and adult extrinsic mortalities and intrinsic adult mortality. Model parameters, generated from Swedish mortality data (1751-2010), exhibit biologically meaningful correspondences to economic, health and cause-of-death patterns. The model characterizes the twentieth century epidemiological transition mainly as a reduction in extrinsic mortality resulting from a shift from high magnitude disease challenges on individuals at all vitality levels to low magnitude stress challenges on low vitality individuals. Of secondary importance, intrinsic mortality was described by a gradual reduction in the rate of loss of vitality, presumably resulting from a reduction in the rate of immunosenescence. Extensions and limitations of a distal/proximal framework for characterizing more explicit causes of death, e.g. the young adult mortality hump or cancer in old age, are discussed.
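The distal/proximal split described above lends itself to a very small simulation: vitality dissipates with age (distal), while randomly arriving challenges kill when they exceed remaining vitality (proximal). All rates below are assumed toy values, not the parameters fitted to the Swedish data:

```python
import numpy as np

rng = np.random.default_rng(4)

def lifespan(v0=1.0, loss=0.012, sigma=0.02, ch_rate=0.5, ch_mean=0.35, dt=1.0):
    """One life: vitality dissipates stochastically with age; death occurs
    when a random challenge exceeds remaining vitality (extrinsic) or
    vitality reaches zero (intrinsic)."""
    v, t = v0, 0.0
    while v > 0:
        t += dt
        v -= loss * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        n_ch = rng.poisson(ch_rate * dt)                 # challenges this step
        if n_ch and rng.exponential(ch_mean, n_ch).max() > v:
            break                                        # extrinsic death
    return t

ages = np.array([lifespan() for _ in range(2000)])
```

Shrinking `ch_mean` while raising `ch_rate` mimics the epidemiological transition the paper describes: low-magnitude challenges spare high-vitality individuals, shifting deaths toward old, low-vitality ages.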
Sunusi, Nurtiti
2018-03-01
The study of the time distribution of occurrences of extreme rain phenomena plays a very important role in weather analysis and forecasting for an area. The timing of extreme rainfall is difficult to predict because its occurrence is random. This paper aims to determine the inter-event time distribution of extreme rain events, and the minimum waiting time until the occurrence of the next extreme event, through a point process approach. The phenomenon of extreme rain events over a given period of time follows a renewal process in which the time between events is a random variable τ. The distribution of the random variable τ is assumed to be Pareto, Log Normal, or Gamma. To estimate the model parameters, a moment method is used. Let Rt denote the time elapsed since the last extreme rain event at a given location; if no extreme rain event has occurred up to time t0, there is an opportunity for an extreme rainfall event in (t0, t0 + δt0). Furthermore, from the three models reviewed, the minimum waiting time until the next extreme rainfall is determined. The results show that the Log Normal model is better than the Pareto and Gamma models for predicting the next extreme rainfall in South Sulawesi, while the Pareto model cannot be used.
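The moment method referred to above matches sample moments to the model's theoretical moments. A sketch for two of the three candidate families, fitted to synthetic inter-event times with an assumed true distribution:

```python
import numpy as np

rng = np.random.default_rng(5)
tau = rng.gamma(shape=2.0, scale=15.0, size=500)  # synthetic inter-event times (days)

m, v = tau.mean(), tau.var(ddof=1)

# Gamma by moments: mean = k*theta, var = k*theta^2
k_hat, theta_hat = m * m / v, v / m

# Log Normal by moments: mean = exp(mu + s^2/2), var = mean^2 * (exp(s^2) - 1)
s2_hat = np.log(1.0 + v / m ** 2)
mu_hat = np.log(m) - 0.5 * s2_hat
```

Given the fitted distribution, the conditional waiting time beyond an elapsed time t0 follows from its survival function, which is how the minimum waiting time until the next extreme event would be obtained.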
The neutron capture cross section of the ${s}$-process branch point isotope $^{63}$Ni
Neutron capture nucleosynthesis in massive stars plays an important role in Galactic chemical evolution as well as for the analysis of abundance patterns in very old metal-poor halo stars. The so-called weak ${s}$-process component, which is responsible for most of the ${s}$ abundances between Fe and Sr, turned out to be very sensitive to the stellar neutron capture cross sections in this mass region and, in particular, of isotopes near the seed distribution around Fe. In this context, the unstable isotope $^{63}$Ni is of particular interest because it represents the first branching point in the reaction path of the ${s}$-process. We propose to measure this cross section at n_TOF from thermal energies up to 500 keV, covering the entire range of astrophysical interest. These data are needed to replace uncertain theoretical predictions by first experimental information to understand the consequences of the $^{63}$Ni branching for the abundance pattern of the subsequent isotopes, especially for $^{63}$Cu and $^{...
Lambert, Amaury; Stadler, Tanja
2013-12-01
Forward-in-time models of diversification (i.e., speciation and extinction) produce phylogenetic trees that grow "vertically" as time goes by. Pruning the extinct lineages out of such trees leads to natural models for reconstructed trees (i.e., phylogenies of extant species). Alternatively, reconstructed trees can be modelled by coalescent point processes (CPPs), where trees grow "horizontally" by the sequential addition of vertical edges. Each new edge starts at some random speciation time and ends at the present time; speciation times are drawn from the same distribution independently. CPPs lead to extremely fast computation of tree likelihoods and simulation of reconstructed trees. Their topology always follows the uniform distribution on ranked tree shapes (URT). We characterize which forward-in-time models lead to URT reconstructed trees and among these, which lead to CPP reconstructed trees. We show that for any "asymmetric" diversification model in which speciation rates only depend on time and extinction rates only depend on time and on a non-heritable trait (e.g., age), the reconstructed tree is CPP, even if extant species are incompletely sampled. If rates additionally depend on the number of species, the reconstructed tree is (only) URT (but not CPP). We characterize the common distribution of speciation times in the CPP description, and discuss incomplete species sampling as well as three special model cases in detail: (1) the extinction rate does not depend on a trait; (2) rates do not depend on time; (3) mass extinctions may happen additionally at certain points in the past. Copyright © 2013 Elsevier Inc. All rights reserved.
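The "horizontal" construction of a CPP described above is simple enough to simulate directly: vertical edges with iid node depths are appended until a depth exceeds the stem age, and the number of tips is one more than the number of retained depths. The depth distribution below is an assumed exponential for illustration; the paper characterizes which diversification models induce which depth distributions:

```python
import numpy as np

rng = np.random.default_rng(6)

def cpp_tree(T, draw_depth):
    """Coalescent point process: append vertical edges with iid node
    depths until a depth exceeds the stem age T; returns the node depths
    of the reconstructed tree (tips = len(depths) + 1)."""
    depths = []
    while True:
        h = draw_depth()
        if h > T:
            return np.array(depths)
        depths.append(h)

T = 10.0
draw = lambda: rng.exponential(4.0)   # assumed node-depth distribution
trees = [cpp_tree(T, draw) for _ in range(200)]
n_tips = np.array([len(d) + 1 for d in trees])
```

Because the whole tree is determined by one iid sequence, likelihoods factorize over node depths, which is the source of the "extremely fast computation" the abstract mentions.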
Danielson, E. F.; Hipskind, R. S.; Gaines, S. E.
1980-01-01
Results are presented from computer processing and digital filtering of radiosonde and radar tracking data obtained during the ITCZ experiment, when coordinated measurements were taken daily over a 16-day period across the Panama Canal Zone. The temperature, relative humidity and wind velocity profiles are discussed.
International Nuclear Information System (INIS)
Mueller, Georges.
1982-01-01
From the non-contaminated area, the filter is enclosed in a leak-tight bag which is affixed to the outside periphery of a supporting frame. The filter is placed in the bottom of the bag, which is then welded in two places; a cut is then made between the two welds to achieve a sealed membrane separating the two halves of the vessel. An additional supporting frame is then placed on the frame, the new filter is secured in place, and the sealed membrane is withdrawn from the contaminated part of the vessel.
Landrum, Asheley R; Lull, Robert B; Akin, Heather; Hasell, Ariel; Jamieson, Kathleen Hall
2017-09-01
Previous research suggests that when individuals encounter new information, they interpret it through perceptual 'filters' of prior beliefs, relevant social identities, and messenger credibility. In short, evaluations are not based solely on message accuracy, but also on the extent to which the message and messenger are amenable to the values of one's social groups. Here, we use the release of Pope Francis's 2015 encyclical as the context for a natural experiment to examine the role of prior values in climate change cognition. Based on our analysis of panel data collected before and after the encyclical's release, we find that political ideology moderated views of papal credibility on climate change for those participants who were aware of the encyclical. We also find that, in some contexts, non-Catholics who were aware of the encyclical granted Pope Francis additional credibility compared to the non-Catholics who were unaware of it, yet Catholics granted the Pope high credibility regardless of encyclical awareness. Importantly, papal credibility mediated the conditional relationships between encyclical awareness and acceptance of the Pope's messages on climate change. We conclude by discussing how our results provide insight into cognitive processing of new information about controversial issues. Copyright © 2017 Elsevier B.V. All rights reserved.
Mousavi Anzehaee, Mohammad; Adib, Ahmad; Heydarzadeh, Kobra
2015-10-01
The manner of microtremor data collection and filtering, as well as the method used for processing, has a considerable effect on the accuracy of the estimation of dynamic soil parameters. In this paper, a running variance method was used to improve the automatic detection of data sections affected by local perturbations. In this method, the running variance of the microtremor data is computed using a sliding window. The obtained signal is then used to remove the ranges of data affected by perturbations from the original data. Additionally, to determine the fundamental frequency of a site, this study proposes a method based on statistical characteristics. Statistical characteristics, such as the probability density graph and the average and standard deviation of all the frequencies corresponding to the maximum peaks in the H/V spectra of all data windows, are used to differentiate the real peaks from the false peaks resulting from perturbations. The methods have been applied to the data recorded for the city of Meybod in central Iran. Experimental results show that the applied methods are able to successfully reduce the effects of extensive local perturbations on microtremor data and, eventually, to estimate the fundamental frequency more accurately compared to other common methods.
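The running-variance screening step can be sketched directly: compute a sliding-window variance, then flag windows whose variance is inflated by a local perturbation. The threshold rule (a multiple of the median) and the synthetic burst are assumptions for illustration, not the paper's exact criterion:

```python
import numpy as np

def running_variance(x, win):
    """Sliding-window variance via cumulative sums (O(n))."""
    x = np.asarray(x, dtype=float)
    c1 = np.cumsum(np.insert(x, 0, 0.0))
    c2 = np.cumsum(np.insert(x * x, 0, 0.0))
    s1 = c1[win:] - c1[:-win]
    s2 = c2[win:] - c2[:-win]
    return s2 / win - (s1 / win) ** 2

rng = np.random.default_rng(7)
sig = rng.standard_normal(2000)                  # background microtremor noise
sig[800:860] += 8 * rng.standard_normal(60)      # a local perturbation burst

rv = running_variance(sig, win=100)
keep = rv < 3.0 * np.median(rv)                  # reject variance-inflated windows
```

Samples covered only by rejected windows would then be excised before computing the H/V spectra.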
Kim, Sung-Wan; Choi, Hyoung-Suk; Park, Dong-Uk; Baek, Eun-Rim; Kim, Jae-Min
2018-02-01
Sloshing refers to the movement of fluid that occurs when kinetic energy (e.g., excitation and vibration) is continuously applied to the fluid inside a storage tank. As the movement induced by an external force approaches the resonance frequency of the fluid, the effect of sloshing increases, and this can lead to a serious problem with the structural stability of the system. Thus, it is important to accurately understand the physics of sloshing and to effectively suppress and reduce it. An economical method for measuring the water level response of a liquid storage tank is also needed for the exact analysis of sloshing. In this study, an image-based method was employed to measure the water level response of a liquid storage tank, using an image filter processing algorithm for the reduction of the noise of the fluid induced by light and for the sharpening of the structure installed in the liquid storage tank. A shaking table test was performed to verify the validity of the image-based method of measuring the water level response, and the result was analyzed and compared with the response measured using a water level gauge.
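Once the frame has been filtered and sharpened, extracting a water level reduces to locating, per image column, the brightness transition between wall and fluid. The sketch below uses a crude fixed threshold on a synthetic frame; the paper's pipeline applies dedicated noise-reduction and sharpening filters that this toy omits:

```python
import numpy as np

def water_level_rows(frame, thresh):
    """Per-column water line: first row (from the top) where brightness
    drops below `thresh` (water assumed darker than the background)."""
    below = frame < thresh
    first = below.argmax(axis=0)                  # first True per column
    first[~below.any(axis=0)] = frame.shape[0]    # column never below threshold
    return first

# Synthetic 60x80 frame: bright tank wall above, dark fluid from row 35 down
frame = np.full((60, 80), 200, dtype=float)
frame[35:, :] = 40
frame += np.random.default_rng(8).normal(0, 5, frame.shape)  # sensor noise
levels = water_level_rows(frame, thresh=120)
```

Tracking `levels` frame by frame through a shaking-table video yields the water level response that is compared against the wave gauge.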
Li, X.; Roo, de G.; Burgers, K.; Ottens, M.; Eppink, M.H.M.
2012-01-01
The use of high throughput screening (HTS) has successfully been applied in the past years in downstream process development of therapeutic proteins. Different HTS applications were introduced to speed up the purification process development of these proteins. In the light of these findings, studies
Energy Technology Data Exchange (ETDEWEB)
Garcia, Marcelo H.F. [Poland Quimica Ltda., Duque de Caxias, RJ (Brazil)
2004-07-01
Drilling fluid filter cakes are based on a combination of properly graded dispersed particles and polysaccharide polymers. High-efficiency filter cakes are formed by this combination, and their formation on wellbore walls during the drilling process has, among other roles, the task of protecting the formation from instantaneous or accumulative invasion of drilling fluid filtrate, granting stability to the well and production zones. The filter cake minimizes contact between the drilling fluid filtrate and the water, hydrocarbons and clay existing in the formations. The uniform removal of the filter cake from the entire interval is a critical factor in the completion process. The main methods used to break the filter cake are classified into two groups, external or internal, according to their removal mechanism. The aim of this work is the presentation of these mechanisms as well as their efficiency. (author)
Reiter, P; Blazhev, A A; Nardelli, S; Voulot, D; Habs, D; Schwerdtfeger, W; Iwanicki, J S
We propose to investigate the nucleus $^{128}$Cd neighbouring the r-process "waiting point" $^{130}$Cd. A possible explanation for the peak in the solar r-abundances at A $\\approx$ 130 is a quenching of the N = 82 shell closure for spherical nuclei below $^{132}$Sn. This explanation seems to be in agreement with recent $\\beta$-decay measurements performed at ISOLDE. In contrast to this picture, a beyond-mean-field approach would explain the anomaly in the excitation energy observed for $^{128}$Cd rather with a quite large quadrupole collectivity. Therefore, we propose to measure the reduced transition strengths B(E2) between ground state and first excited 2$^{+}$-state in $^{128}$Cd applying $\\gamma$-spectroscopy with MINIBALL after "safe" Coulomb excitation of a post-accelerated beam obtained from REX-ISOLDE. Such a measurement came into reach only because of the source developments made in 2006 for experiment IS411, in particular the use of a heated quartz transfer line. The result from the proposed measure...
Process-based coastal erosion modeling for Drew Point (North Slope, Alaska)
Ravens, Thomas M.; Jones, Benjamin M.; Zhang, Jinlin; Arp, Christopher D.; Schmutz, Joel A.
2012-01-01
A predictive, coastal erosion/shoreline change model has been developed for a small coastal segment near Drew Point, Beaufort Sea, Alaska. This coastal setting has experienced a dramatic increase in erosion since the early 2000s. The bluffs at this site are 3-4 m tall and consist of ice-wedge bounded blocks of fine-grained sediments cemented by ice-rich permafrost and capped with a thin organic layer. The bluffs are typically fronted by a narrow (∼ 5 m wide) beach or none at all. During a storm surge, the sea contacts the base of the bluff and a niche is formed through thermal and mechanical erosion. The niche grows both vertically and laterally and eventually undermines the bluff, leading to block failure or collapse. The fallen block is then eroded both thermally and mechanically by waves and currents, which must occur before a new niche-forming episode may begin. The erosion model explicitly accounts for and integrates a number of these processes including: (1) storm surge generation resulting from wind and atmospheric forcing, (2) erosional niche growth resulting from wave-induced turbulent heat transfer and sediment transport (using the Kobayashi niche erosion model), and (3) thermal and mechanical erosion of the fallen block. The model was calibrated with historic shoreline change data for one time period (1979-2002), and validated with a later time period (2002-2007).
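The episodic niche-collapse-degradation cycle described above can be caricatured as a small state machine. All rates below are invented toy numbers; the actual model resolves surge physics and the Kobayashi niche erosion formulation, neither of which appears here:

```python
import numpy as np

rng = np.random.default_rng(10)

# Toy bluff-retreat loop: during a surge the niche deepens; at a critical
# depth the ice-wedge-bounded block collapses, and retreat resumes only
# after the fallen block has been eroded away.
niche, block, retreat = 0.0, 0.0, 0.0
crit_niche, block_volume = 2.0, 3.0        # m, m^3 per m of coast (assumed)
for day in range(365):
    surge = rng.random() < 0.1             # storm surge reaches the bluff toe
    if block > 0:                          # fallen block shields the bluff toe
        block = max(0.0, block - (0.5 if surge else 0.05))
    elif surge:
        niche += rng.uniform(0.1, 0.5)     # thermo-mechanical niche growth
        if niche >= crit_niche:
            retreat += crit_niche          # block failure: shoreline retreats
            niche, block = 0.0, block_volume
```

Even this caricature reproduces the model's key behavior: retreat occurs in discrete block-sized jumps whose frequency is set by surge statistics and block-degradation rates.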
Kruse, Christian; Rottensteiner, Franz; Hoberg, Thorsten; Ziems, Marcel; Rebke, Julia; Heipke, Christian
2018-04-01
The aftermath of wartime attacks is often felt long after the war has ended, as numerous unexploded bombs may still exist in the ground. Typically, such areas are documented in so-called impact maps, which are based on the detection of bomb craters. This paper proposes a method for the automatic detection of bomb craters in aerial wartime images taken during the Second World War. The object model for the bomb craters is represented by ellipses. A probabilistic approach based on marked point processes determines the most likely configuration of objects within the scene. New object configurations are created by randomly adding objects to and removing objects from the current configuration, changing their positions, and modifying the ellipse parameters. Each configuration is evaluated using an energy function: high gradient magnitudes along the border of an ellipse are favored, and overlapping ellipses are penalized. Reversible jump Markov chain Monte Carlo sampling in combination with simulated annealing provides the global energy optimum, which describes the conformance with a predefined model. For generating the impact map, a probability map is defined, created from the automatic detections via kernel density estimation. By setting a threshold, areas around the detections are classified as contaminated or uncontaminated sites, respectively. Our results show the general potential of the method for the automatic detection of bomb craters and the automated generation of an impact map from a heterogeneous image stock.
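The final impact-map step, kernel density estimation over the detections followed by thresholding, is easy to sketch. The Gaussian bandwidth, detection coordinates, and threshold fraction below are assumed values for illustration:

```python
import numpy as np

def kernel_density_map(detections, shape, bandwidth):
    """Gaussian kernel density estimate on a pixel grid from detections."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    dens = np.zeros(shape, dtype=float)
    for (cy, cx) in detections:
        dens += np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * bandwidth ** 2))
    return dens / (2 * np.pi * bandwidth ** 2 * max(len(detections), 1))

craters = [(20, 30), (24, 34), (70, 80)]        # detected crater centres (pixels)
density = kernel_density_map(craters, (100, 100), bandwidth=6.0)
contaminated = density > 0.3 * density.max()    # threshold into an impact map
```

Clustered detections reinforce each other in the density map, so tightly bombed areas appear as broad contaminated regions rather than isolated spots.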
A marked point process approach for identifying neural correlates of tics in Tourette Syndrome.
Loza, Carlos A; Shute, Jonathan B; Principe, Jose C; Okun, Michael S; Gunduz, Aysegul
2017-07-01
We propose a novel interpretation of local field potentials (LFP) based on a marked point process (MPP) framework that models relevant neuromodulations as shifted, weighted versions of prototypical temporal patterns. In particular, the MPP samples are categorized according to the well-known oscillatory rhythms of the brain in an effort to elucidate spectrally specific behavioral correlates. The result is a transient model for LFP. We exploit data-driven techniques to fully estimate the model parameters, with the added feature of exceptional temporal resolution of the resulting events. We utilize the learned features in the alpha and beta bands to assess correlations to tic events in patients with Tourette Syndrome (TS). The final results show stronger coupling between LFP recorded from the centromedian-parafascicular complex of the thalamus and the tic marks, in comparison to electrocorticogram (ECoG) recordings from the hand area of the primary motor cortex (M1), in terms of the area under the receiver operating characteristic (ROC) curve (AUC).
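The coupling measure used above, the area under the ROC curve, can be computed directly from two sets of scores via the Mann-Whitney statistic. A minimal numpy sketch with simulated feature magnitudes (hypothetical stand-ins, not the paper's recordings):

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: the probability that a randomly
    chosen positive scores higher than a randomly chosen negative."""
    pos = np.asarray(scores_pos, float)
    neg = np.asarray(scores_neg, float)
    # Count pairwise wins, with ties worth half.
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return wins / (len(pos) * len(neg))

# Hypothetical per-window feature magnitudes around tic marks (positives)
# versus tic-free baseline windows (negatives).
rng = np.random.default_rng(1)
near_tic = rng.normal(1.0, 0.5, 200)
baseline = rng.normal(0.0, 0.5, 200)
print(round(roc_auc(near_tic, baseline), 3))
```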
Zubair, Mohammad; Nielsen, Eric; Luitjens, Justin; Hammond, Dana
2016-01-01
In the field of computational fluid dynamics, the Navier-Stokes equations are often solved using an unstructured-grid approach to accommodate geometric complexity. Implicit solution methodologies for such spatial discretizations generally require frequent solution of large, tightly coupled systems of block-sparse linear equations. The multicolor point-implicit solver used in the current work typically requires a significant fraction of the overall application run time. In this work, an efficient implementation of the solver for graphics processing units is proposed. Several factors present unique challenges to achieving an efficient implementation in this environment, including the variable amount of parallelism available in different kernel calls, indirect memory access patterns, low arithmetic intensity, and the requirement to support variable block sizes. The solver is reformulated to use standard sparse and dense Basic Linear Algebra Subprograms (BLAS) functions. However, numerical experiments show that the performance of the BLAS functions available in existing CUDA libraries is suboptimal for matrices representative of those encountered in actual simulations. Instead, optimized versions of these functions are developed. Depending on block size, the new implementations show performance gains of up to 7x over the existing CUDA library functions.
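A rough CPU-side sketch of the block-sparse structure such a solver operates on: a matrix-vector product over a block compressed-sparse-row layout with uniform blocks (2×2 here for brevity; the solver in the abstract supports variable block sizes and runs on the GPU).

```python
import numpy as np

def bsr_matvec(blocks, col_idx, row_ptr, x, b):
    """y = A @ x for a block-CSR matrix with uniform b x b blocks.
    blocks: (nnzb, b, b) stored blocks; col_idx: block column of each block;
    row_ptr: CSR-style offsets into `blocks` per block row."""
    nrows = len(row_ptr) - 1
    y = np.zeros(nrows * b)
    for i in range(nrows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            j = col_idx[k]
            y[i * b:(i + 1) * b] += blocks[k] @ x[j * b:(j + 1) * b]
    return y

# Tiny example: 2 block rows, 2x2 blocks, upper-triangular block pattern.
b = 2
blocks = np.array([[[2., 0.], [0., 2.]],    # A[0,0]
                   [[1., 0.], [0., 1.]],    # A[0,1]
                   [[3., 0.], [0., 3.]]])   # A[1,1]
col_idx = np.array([0, 1, 1])
row_ptr = np.array([0, 2, 3])
x = np.ones(4)
y = bsr_matvec(blocks, col_idx, row_ptr, x, b)
print(y)  # prints [3. 3. 3. 3.]
```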
Plasmon point spread functions: How do we model plasmon-mediated emission processes?
Willets, Katherine A.
2014-02-01
A major challenge with studying plasmon-mediated emission events is the small size of plasmonic nanoparticles relative to the wavelength of light. Objects smaller than roughly half the wavelength of light will appear as diffraction-limited spots in far-field optical images, presenting a significant experimental challenge for studying plasmonic processes on the nanoscale. Super-resolution imaging has recently been applied to plasmonic nanosystems and allows plasmon-mediated emission to be resolved on the order of ˜5 nm. In super-resolution imaging, a diffraction-limited spot is fit to some model function in order to calculate the position of the emission centroid, which represents the location of the emitter. However, the accuracy of the centroid position strongly depends on how well the fitting function describes the data. This Perspective discusses the commonly used two-dimensional Gaussian fitting function applied to super-resolution imaging of plasmon-mediated emission, then introduces an alternative model based on dipole point spread functions. The two fitting models are compared and contrasted for super-resolution imaging of nanoparticle scattering/luminescence, surface-enhanced Raman scattering, and surface-enhanced fluorescence.
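The centroid-fitting step can be illustrated with the two-dimensional Gaussian model discussed above: fit a Gaussian to a synthetic diffraction-limited spot and read off the centroid. All parameters below are illustrative, not values from the Perspective.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sx, sy, offset):
    """2-D Gaussian with independent widths, flattened for curve_fit."""
    x, y = coords
    return (amp * np.exp(-((x - x0) ** 2 / (2 * sx ** 2)
                           + (y - y0) ** 2 / (2 * sy ** 2))) + offset).ravel()

# Synthetic diffraction-limited spot with a true centroid at (10.3, 9.7) px.
x, y = np.meshgrid(np.arange(21), np.arange(21))
rng = np.random.default_rng(2)
img = gauss2d((x, y), 100.0, 10.3, 9.7, 2.0, 2.0, 5.0).reshape(21, 21)
img += rng.normal(0, 1, img.shape)  # camera noise

p0 = (img.max(), 10, 10, 2, 2, 0)   # rough initial guess
popt, _ = curve_fit(gauss2d, (x, y), img.ravel(), p0=p0)
print("fitted centroid: (%.2f, %.2f)" % (popt[1], popt[2]))
```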
Remotely operated top loading filter housing
International Nuclear Information System (INIS)
Ross, M.J.; Carter, J.A.
1989-01-01
A high-efficiency particulate air (HEPA) filter system was developed for the Fuel Processing Facility at the Idaho Chemical Processing Plant. The system utilizes commercially available HEPA filters and allows in-cell filters to be maintained using operator-controlled remote handling equipment. The remote handling tasks include transport of filters before and after replacement, removal and replacement of the filter from the housing, and filter containment.
Face Recognition using Gabor Filters
Directory of Open Access Journals (Sweden)
Sajjad MOHSIN
2011-01-01
An Elastic Bunch Graph Map (EBGM) algorithm is proposed in this research paper that implements face recognition using Gabor filters. The proposed system applies 40 different Gabor filters to an image, as a result of which 40 filtered images with different angles and orientations are obtained. Next, the maximum-intensity points in each filtered image are calculated and marked as fiducial points. The system then reduces these points according to the distance between them, and the distances between the reduced points are calculated using the distance formula. Finally, the distances are compared with the database; if a match occurs, the image is recognized.
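The filter-bank stage can be sketched as follows: 40 Gabor kernels (5 frequencies × 8 orientations, an assumed decomposition) are convolved with an image, and the maximum-intensity point of each response is taken as a candidate fiducial point. The kernel parameters are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(freq, theta, sigma=3.0, size=15):
    """Real part of a Gabor kernel: a plane wave windowed by a Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr)

# 5 frequencies x 8 orientations = 40 filters, as in the paper.
bank = [gabor_kernel(f, t)
        for f in (0.05, 0.1, 0.2, 0.3, 0.4)
        for t in np.linspace(0, np.pi, 8, endpoint=False)]

rng = np.random.default_rng(3)
face = rng.random((64, 64))  # stand-in for a face image
responses = [np.abs(fftconvolve(face, k, mode="same")) for k in bank]

# The maximum-intensity point of each response is a candidate fiducial point.
fiducials = [np.unravel_index(r.argmax(), r.shape) for r in responses]
print(len(fiducials), "candidate fiducial points")
```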
ISRIA statement: ten-point guidelines for an effective process of research impact assessment.
Adam, Paula; Ovseiko, Pavel V; Grant, Jonathan; Graham, Kathryn E A; Boukhris, Omar F; Dowd, Anne-Maree; Balling, Gert V; Christensen, Rikke N; Pollitt, Alexandra; Taylor, Mark; Sued, Omar; Hinrichs-Krapels, Saba; Solans-Domènech, Maite; Chorzempa, Heidi
2018-02-08
As governments, funding agencies and research organisations worldwide seek to maximise both the financial and non-financial returns on investment in research, the way the research process is organised and funded is coming increasingly under scrutiny. There are growing demands and aspirations to measure research impact (beyond academic publications), to understand how science works, and to optimise its societal and economic impact. In response, a multidisciplinary practice called research impact assessment is rapidly developing. Given that the practice is still in its formative stage, systematised recommendations or accepted standards to guide research impact assessment for practitioners (such as funders and those responsible for managing research projects) across countries or disciplines are not yet available. In this statement, we propose initial guidelines for a rigorous and effective process of research impact assessment applicable to all research disciplines and oriented towards practice. This statement systematises expert knowledge and practitioner experience from designing and delivering the International School on Research Impact Assessment (ISRIA). It brings together insights from over 450 experts and practitioners from 34 countries, who participated in the school during its 5-year run (from 2013 to 2017), and shares a set of core values from the school's learning programme. These insights are distilled into ten-point guidelines, which relate to (1) context, (2) purpose, (3) stakeholders' needs, (4) stakeholder engagement, (5) conceptual frameworks, (6) methods and data sources, (7) indicators and metrics, (8) ethics and conflicts of interest, (9) communication, and (10) community of practice. The guidelines can help practitioners improve and standardise the process of research impact assessment, but they are by no means exhaustive and require evaluation and continuous improvement. The prima facie effectiveness of the guidelines rests on the systematised expert knowledge and practitioner experience from which they were derived.
Spatial filters for focusing ultrasound images
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt; Gori, Paola
2001-01-01
A new method for spatial matched filter focusing of RF ultrasound data is proposed, based on the spatial impulse response description of the imaging. The response from a scatterer at any given point in space relative to the transducer can be calculated, and this gives the spatial matched filter for synthetic aperture imaging for single-element transducers. The method is evaluated using the Field II program. Data from a single 3 MHz transducer focused at a distance of 80 mm is processed. Far from the transducer focal region, the processing greatly improves the image resolution (the lateral slice ...), and the approach always yields point spread functions better than or equal to a traditional dynamically focused image. Finally, the process was applied to in-vivo clinical images of the liver and right kidney from a 28-year-old male; these data were obtained with a single-element transducer focused at 100 mm.
Directory of Open Access Journals (Sweden)
Mónica A Silva
Argos recently implemented a new algorithm to calculate locations of satellite-tracked animals that uses a Kalman filter (KF). The KF algorithm is reported to increase the number and accuracy of estimated positions over the traditional Least Squares (LS) algorithm, with potential advantages for the application of state-space methods to model animal movement data. We tested the performance of two Bayesian state-space models (SSMs) fitted to satellite tracking data processed with the KF algorithm. Tracks from 7 harbour seals (Phoca vitulina) tagged with Argos satellite transmitters equipped with Fastloc GPS loggers were used to calculate the error of locations estimated from SSMs fitted to KF and LS data, by comparing those to "true" GPS locations. Data on 6 fin whales (Balaenoptera physalus) were used to investigate consistency in movement parameters, locations and behavioural states estimated by switching state-space models (SSSMs) fitted to data derived from the KF and LS methods. The model fit to KF locations improved the accuracy of seal trips by 27% over the LS model. 82% of locations predicted from the KF model and 73% of locations from the LS model were <5 km from the corresponding interpolated GPS position. Uncertainty in KF model estimates (5.6 ± 5.6 km) was nearly half that of LS estimates (11.6 ± 8.4 km). Accuracy of KF- and LS-modelled locations was sensitive to precision but not to observation frequency or temporal resolution of the raw Argos data. On average, 88% of whale locations estimated by KF models fell within the 95% probability ellipse of paired locations from LS models. Precision of KF locations for whales was generally higher. Whales' behavioural mode inferred by KF models matched the classification from LS models in 94% of cases. State-space models fit to KF data can improve the spatial accuracy of location estimates over LS models and produce equally reliable behavioural estimates.
Point process models for localization and interdependence of punctate cellular structures.
Li, Ying; Majarian, Timothy D; Naik, Armaghan W; Johnson, Gregory R; Murphy, Robert F
2016-07-01
Accurate representations of cellular organization for multiple eukaryotic cell types are required for creating predictive models of dynamic cellular function. To this end, we have previously developed the CellOrganizer platform, an open source system for generative modeling of cellular components from microscopy images. CellOrganizer models capture the inherent heterogeneity in the spatial distribution, size, and quantity of different components among a cell population. Furthermore, CellOrganizer can generate quantitatively realistic synthetic images that reflect the underlying cell population. A current focus of the project is to model the complex, interdependent nature of organelle localization. We built upon previous work on developing multiple non-parametric models of organelles or structures that show punctate patterns. The previous models described the relationships between the subcellular localization of puncta and the positions of cell and nuclear membranes and microtubules. We extend these models to consider the relationship to the endoplasmic reticulum (ER), and to consider the relationship between the positions of different puncta of the same type. Our results do not suggest that the punctate patterns we examined are dependent on ER position or inter- and intra-class proximity. With these results, we built classifiers to update previous assignments of proteins to one of 11 patterns in three distinct cell lines. Our generative models demonstrate the ability to construct statistically accurate representations of puncta localization from simple cellular markers in distinct cell types, capturing the complex phenomena of cellular structure interaction with little human input. This protocol represents a novel approach to vesicular protein annotation, a field that is often neglected in high-throughput microscopy. These results suggest that spatial point process models provide useful insight with respect to the spatial dependence between cellular structures.
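A minimal example of the kind of spatial point process statistic such dependence analyses rest on: compare the mean nearest-neighbour distance of an observed puncta pattern against a Monte Carlo envelope under complete spatial randomness (CSR). The point pattern here is a simulated stand-in, not data from the study.

```python
import numpy as np

def mean_nn_distance(points):
    """Mean nearest-neighbour distance of a 2-D point pattern."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1).mean()

rng = np.random.default_rng(4)
puncta = rng.random((100, 2))  # stand-in for detected puncta (unit square)

# Monte Carlo envelope under CSR with the same point count.
sims = [mean_nn_distance(rng.random((100, 2))) for _ in range(199)]
obs = mean_nn_distance(puncta)
lo, hi = np.quantile(sims, [0.025, 0.975])
print("observed %.4f, CSR envelope [%.4f, %.4f]" % (obs, lo, hi))
```

An observed value inside the envelope is consistent with spatial independence, matching the paper's negative finding on inter- and intra-class proximity.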
Bubble point pressures of the selected model system for CatLiq® bio-oil process
DEFF Research Database (Denmark)
Toor, Saqib Sohail; Rosendahl, Lasse; Baig, Muhammad Noman
2010-01-01
The CatLiq® process is a second-generation catalytic liquefaction process for the production of bio-oil from WDGS (Wet Distillers Grains with Solubles) at subcritical conditions (280-350 °C and 225-250 bar) in the presence of a homogeneous alkaline and a heterogeneous zirconia catalyst. In this work, the bubble point pressures of a selected model mixture (CO2 + H2O + ethanol + acetic acid + octanoic acid) were measured to investigate the phase boundaries of the CatLiq® process. The bubble points were measured in the JEFRI-DBR high-pressure PVT phase behavior system. The experimental results ...
DEFF Research Database (Denmark)
Møller, Jesper; Diaz-Avalos, Carlos
2010-01-01
Spatio-temporal Cox point process models with a multiplicative structure for the driving random intensity, incorporating covariate information into temporal and spatial components, and with a residual term modelled by a shot-noise process, are considered. Such models are flexible and tractable for statistical analysis, and are fitted to a data set consisting of 2796 days and 5834 spatial locations of fires. The model is compared with a spatio-temporal log-Gaussian Cox point process model, and likelihood-based methods are discussed to some extent.
Generalized Selection Weighted Vector Filters
Directory of Open Access Journals (Sweden)
Rastislav Lukac
2004-09-01
This paper introduces a class of nonlinear multichannel filters capable of removing impulsive noise in color images. The proposed generalized selection weighted vector filter class constitutes a powerful filtering framework for multichannel signal processing. Previously defined multichannel filters such as the vector median filter, basic vector directional filter, directional-distance filter, weighted vector median filters, and weighted vector directional filters are treated from a global viewpoint using the proposed framework. Robust order-statistic concepts and an increased degree of freedom in filter design make the proposed method attractive for a variety of applications. The introduced multichannel sigmoidal adaptation of the filter parameters, and its modifications, allows the filter parameters to accommodate varying signal and noise statistics. Simulation studies reported in this paper indicate that the proposed filter class is computationally attractive, yields excellent performance, and is able to preserve fine details and color information while efficiently suppressing impulsive noise. This paper is an extended version of the paper by Lukac et al. presented at the 2003 IEEE-EURASIP Workshop on Nonlinear Signal and Image Processing (NSIP '03) in Grado, Italy.
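The baseline vector median filter treated within this framework can be sketched directly: each pixel is replaced by the colour vector in its window that minimises the aggregate distance to the other window samples. This is the plain vector median, not the paper's generalized selection weighted variant; window size and test image are illustrative.

```python
import numpy as np

def vector_median_filter(img, w=3):
    """Vector median filter: each pixel becomes the window sample minimising
    the sum of Euclidean distances to all other samples in the window."""
    pad = w // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.empty_like(img)
    H, W, C = img.shape
    for i in range(H):
        for j in range(W):
            win = padded[i:i + w, j:j + w].reshape(-1, C)
            dist = np.linalg.norm(win[:, None] - win[None, :], axis=-1).sum(1)
            out[i, j] = win[dist.argmin()]
    return out

# Uniform grey image with sprinkled impulsive colour noise.
rng = np.random.default_rng(5)
img = np.full((16, 16, 3), 0.5)
idx = rng.integers(0, 16, (20, 2))
img[idx[:, 0], idx[:, 1]] = rng.random((20, 3))
clean = vector_median_filter(img)
print("residual deviation:", float(np.abs(clean - 0.5).max()))
```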
Fristedt, B; Krylov, N
2007-01-01
Filtering and prediction is about observing moving objects when the observations are corrupted by random errors. The main focus is then on filtering out the errors and extracting from the observations the most precise information about the object, which itself may or may not be moving in a somewhat random fashion. Next comes the prediction step where, using information about the past behavior of the object, one tries to predict its future path. The first three chapters of the book deal with discrete probability spaces, random variables, conditioning, Markov chains, and filtering of discrete Markov chains. The next three chapters deal with the more sophisticated notions of conditioning in nondiscrete situations, filtering of continuous-space Markov chains, and of the Wiener process. Filtering and prediction of stationary sequences is discussed in the last two chapters. The authors believe that they have succeeded in presenting necessary ideas in an elementary manner without sacrificing the rigor too much. Such rig...
Experimental study of filter cake formation on different filter media
International Nuclear Information System (INIS)
Saleem, M.
2009-01-01
Removal of particulate matter from gases generated in the process industry is important for product recovery as well as emission control. The dynamics of a filtration plant depend on operating conditions. The models that predict filter plant behaviour involve empirical resistance parameters which are usually derived from limited experimental data and are characteristic of the filter media and filter cake (dust deposited on the filter medium). Filter cake characteristics are affected by the nature of the filter media, process parameters and the mode of filter regeneration. Removal of dust particles from air is studied in a pilot-scale jet-pulsed bag filter facility closely resembling industrial filters. Limestone dust and ambient air are used in this study with two widely different filter media. All important parameters, such as pressure drop, gas flow rate and dust settling, are recorded continuously at 1 s intervals, and the data are processed for estimation of the resistance parameters. The pressure drop rise on the test filter media is compared. Results reveal that the surface of the filter media has an influence on the pressure drop rise (concave pressure drop rise); a similar effect is produced by a partially jet-pulsed filter surface. Filter behaviour is also simulated using the estimated parameters and a simplified model, and compared with the experimental results. Distribution of cake area load is therefore an important aspect of modelling jet-pulse-cleaned bag filters. The mean specific cake resistance remains nearly constant on thoroughly jet-pulse-cleaned membrane-coated filter bags. However, this trend cannot be confirmed without independent cake height and density measurements. Thus the results reveal the importance of independent measurements of cake resistance. (author)
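The empirical resistance parameters mentioned above typically enter through the classical cake filtration relation dP = mu * v * (R_m + alpha * W), with the cake area load W growing as dust deposits. A sketch with illustrative values (not the study's measured parameters):

```python
import numpy as np

# Classical filtration model: dP = mu * v * (R_m + alpha * W), W = c * v * t.
# All numbers below are illustrative order-of-magnitude choices.
mu = 1.8e-5      # gas viscosity, Pa s
v = 0.03         # face velocity, m/s
R_m = 4e8        # clean-media resistance, 1/m
alpha = 5e9      # specific cake resistance, m/kg
c = 0.005        # dust concentration, kg/m^3

t = np.arange(0, 3600, 60.0)       # one hour of filtration, s
W = c * v * t                       # deposited cake area load, kg/m^2
dP = mu * v * (R_m + alpha * W)     # pressure drop across media + cake, Pa

print("dP at start: %.0f Pa, near 1 h: %.0f Pa" % (dP[0], dP[-1]))
```

Fitting measured dP(t) curves to this line yields exactly the R_m and alpha resistance parameters the abstract refers to.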
Energy Technology Data Exchange (ETDEWEB)
Wu, Bingdang; Yang, Minghui [State Key Laboratory of Pollution Control and Resource Reuse, School of the Environment, Nanjing University, Nanjing, 210023 (China); Yin, Ran [Department of Civil and Environmental Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon (Hong Kong); Zhang, Shujuan, E-mail: sjzhang@nju.edu.cn [State Key Laboratory of Pollution Control and Resource Reuse, School of the Environment, Nanjing University, Nanjing, 210023 (China)
2017-08-05
Highlights: • Acetylacetone (AA) could directly use solar irradiation to decolorize dyes. • AA had a wider applicability than H₂O₂ to a variety of light sources. • The photonic efficiency in the UV/AA process was target-dependent. • An accurate calculation approach for the inner filter effect was developed. - Abstract: The light source is a crucial factor in the application of a photochemical process, as it determines the energy efficiency. The performances of acetylacetone (AA) in the conversion of aqueous contaminants under irradiation with a low-pressure mercury lamp, a medium-pressure mercury lamp, a xenon lamp, and natural sunlight were investigated and compared with those of H₂O₂ as a reference. In all cases, AA was superior to H₂O₂ in the degradation of Acid Orange 7. Using combinations of the different light sources with various cut-off and band-pass filters, the spectral responses of the absorbed photons in the UV/AA and UV/H₂O₂ processes were determined for two colored and two colorless compounds. The photonic efficiency (φ) of the two photochemical processes was found to be target-dependent. A calculation approach for the inner filter effect was developed by taking the obtained φ into account, which provides a more accurate indication of the reaction mechanisms.
International Nuclear Information System (INIS)
Butterworth, D.J.
1980-01-01
This invention relates to liquid filters, precoated by replaceable powders, which are used in the production of ultra pure water required for steam generation of electricity. The filter elements are capable of being installed and removed by remote control so that they can be used in nuclear power reactors. (UK)
Strong approximations and sequential change-point analysis for diffusion processes
DEFF Research Database (Denmark)
Mihalache, Stefan-Radu
2012-01-01
In this paper ergodic diffusion processes depending on a parameter in the drift are considered under the assumption that the processes can be observed continuously. Strong approximations by Wiener processes for a stochastic integral and for the estimator process constructed by the one...
Some Aspects on Filter Design for Target Tracking
Directory of Open Access Journals (Sweden)
Bertil Ekstrand
2012-01-01
Tracking filter design is discussed. It is argued that the basis of the present stochastic paradigm is questionable: white process noise is not adequate as a model for target manoeuvring, stochastic least-squares optimality is not relevant or required in practice, the fact that requirements are necessary for design is ignored, and root mean square (RMS) errors are insufficient as a performance measure. It is argued that there is no process noise and that the covariance of the assumed process noise contains the design parameters. Focus is on the basic tracking filter, the Kalman filter, which is convenient for clarity and simplicity, but the arguments and conclusions are relevant in general. For design, the possibility of an observer transfer function approach is pointed out. The issues can also be considered as a consequence of the fact that there is a difference between estimation and design. The α-β filter is used for illustration.
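A minimal fixed-gain α-β tracking filter can be sketched as follows: predict with a constant-velocity model, then correct position and velocity with fixed gains. The gains and target scenario below are illustrative choices, not values from the paper.

```python
import numpy as np

def alpha_beta_track(measurements, dt=1.0, alpha=0.5, beta=0.1):
    """Alpha-beta filter: constant-velocity prediction plus fixed-gain
    correction of position (alpha) and velocity (beta) from the residual."""
    x, v = measurements[0], 0.0
    estimates = []
    for z in measurements[1:]:
        x_pred = x + v * dt          # predict
        r = z - x_pred               # measurement residual
        x = x_pred + alpha * r       # correct position
        v = v + (beta / dt) * r      # correct velocity
        estimates.append(x)
    return np.array(estimates)

# Constant-velocity target (2 m/s) observed in unit-variance noise.
rng = np.random.default_rng(6)
truth = 2.0 * np.arange(50.0)
meas = truth + rng.normal(0, 1.0, 50)
est = alpha_beta_track(meas)
print("final estimate %.1f vs truth %.1f" % (est[-1], truth[-1]))
```

With fixed gains there is no process-noise covariance at all, which is in the spirit of the paper's argument that such covariances are really design parameters.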
Directory of Open Access Journals (Sweden)
Chodun Rafal
2016-03-01
This work presents the very first results of the application of plasma magnetic filtering achieved by a coil coupled with the electrical circuit of a coaxial accelerator during the synthesis of AlN thin films by the Impulse Plasma Deposition (IPD) method. The uniqueness of this technical solution lies in the fact that the filter is not supplied, controlled or synchronized from any external device; our solution uses the energy from the electrical circuit of the plasma accelerator. The plasma state was described on the basis of optical emission spectroscopy (OES) studies. Estimation of the effects of plasma filtering on film quality was carried out on the basis of characterization of structure morphology (SEM) and phase and chemical composition (vibrational spectroscopy). Our work has shown that the use of the developed magnetic self-filter improved the structure of the AlN coatings synthesized under impulse plasma conditions, especially by minimizing the tendency to deposit metallic aluminum droplets and columnar growth.
ASIC For Complex Fixed-Point Arithmetic
Petilli, Stephen G.; Grimm, Michael J.; Olson, Erlend M.
1995-01-01
Application-specific integrated circuit (ASIC) performs 24-bit, fixed-point arithmetic operations on arrays of complex-valued input data. High-performance, wide-band arithmetic logic unit (ALU) designed for use in computing fast Fourier transforms (FFTs) and for performing digital filtering functions. Other applications include general computations involved in analysis of spectra and digital signal processing.
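Fixed-point complex multiplication of the kind such an ALU performs can be sketched in software. The Q1.23 format, rounding, and saturation behaviour below are assumptions for illustration, not the chip's documented design.

```python
# 24-bit two's-complement fixed point, assumed Q1.23: 1 sign/integer bit,
# 23 fraction bits, so representable values lie in [-1, 1 - 2**-23].
FRAC_BITS = 23
SCALE = 1 << FRAC_BITS
LO, HI = -(1 << 23), (1 << 23) - 1

def to_fix(x):
    """Quantize a float to the 24-bit range, with rounding and saturation."""
    return max(LO, min(HI, int(round(x * SCALE))))

def sat(x):
    return max(LO, min(HI, x))

def cmul_fix(ar, ai, br, bi):
    """(ar + j*ai) * (br + j*bi) on Q1.23 operands: full-precision integer
    products, then a rescaling shift and saturation per component."""
    rr = sat((ar * br - ai * bi) >> FRAC_BITS)
    ri = sat((ar * bi + ai * br) >> FRAC_BITS)
    return rr, ri

# (0.5 + 0.25j) * (0.5 - 0.5j) = 0.375 - 0.125j
ar, ai = to_fix(0.5), to_fix(0.25)
br, bi = to_fix(0.5), to_fix(-0.5)
rr, ri = cmul_fix(ar, ai, br, bi)
print(rr / SCALE, ri / SCALE)  # → 0.375 -0.125
```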
Yamamoto, Yuta; Iriyama, Yasutoshi; Muto, Shunsuke
2016-04-01
In this article, we propose a smart image-analysis method suitable for extracting target features of hierarchical dimension from original data. The method was applied to three-dimensional volume data of an all-solid-state lithium-ion battery, obtained by an automated sequential sample milling and imaging process using a focused ion beam/scanning electron microscope, to investigate the spatial configuration of voids inside the battery. To automatically extract the full shape and location of the voids, three types of filters were applied consecutively: a median blur filter to extract relatively larger voids, a morphological opening operation filter for small dot-shaped voids, and a morphological closing operation filter for small voids with concave contrasts. The three data cubes separately processed by these filters were integrated by a union operation into the final unified volume data, which confirmed the correct extraction of the voids over the entire dimension range contained in the original data.
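The three-filter-plus-union pipeline can be mimicked on a 2D slice with standard image operations. The synthetic image, threshold, and structuring elements below are all illustrative assumptions, not the study's parameters.

```python
import numpy as np
from scipy import ndimage as ndi

rng = np.random.default_rng(7)
# Stand-in SEM slice: bright matrix with dark voids of several sizes.
img = rng.normal(0.8, 0.05, (64, 64))
img[20:32, 20:32] = 0.2   # large void
img[5, 5] = 0.2           # dot-shaped void
img[50:52, 50:52] = 0.2   # small void

dark = img < 0.5          # initial void candidates by thresholding

# Three complementary extractions, then a union, mirroring the pipeline.
large = ndi.median_filter(dark.astype(np.uint8), size=5).astype(bool)
dots = dark & ~ndi.binary_opening(dark, np.ones((3, 3)))  # removed by opening
small = ndi.binary_closing(dark, np.ones((3, 3)))          # concavities filled
voids = large | dots | small
print("void pixels:", int(voids.sum()))
```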
Concerning the acid dew point in waste gases from combustion processes
Energy Technology Data Exchange (ETDEWEB)
Knoche, K.F.; Deutz, W.; Hein, K.; Derichs, W.
1986-09-01
The paper discusses the problems associated with the measurement of the acid dew point and of sulphuric acid (i.e. SO₃) concentrations in the flue gas from brown-coal-fired boiler plants. The sulphuric acid content in brown coal flue gas has been measured at 0.5 to 3 vpm, at SO₂ concentrations of 200 to 800 vpm. Using a conditional equation, the derivation of which from new formulae for phase stability is described in the paper, an acid dew point temperature of 115 to 125 °C is obtained.
Comparison of Clothing Cultures from the View Point of Funeral Procession
増田, 美子; 大枝, 近子; 梅谷, 知世; 杉本, 浄; 内村, 理奈
2011-01-01
The object of this study was to research dress in the funeral ceremony and to clarify the differences and common points between the respective cultural spheres of Buddhism, Hinduism, Islam and Christianity. In the year 21, we tried to grasp the reality of funeral costumes in modern times and the present day. As a result, it became clear that in Japan, the Buddhist cultural sphere, and in China and Taiwan, where Buddhism, Confucianism and Taoism intermingled, cultura...
Fixed-point Characterization of Compositionality Properties of Probabilistic Processes Combinators
Directory of Open Access Journals (Sweden)
Daniel Gebler
2014-08-01
Bisimulation metric is a robust behavioural semantics for probabilistic processes. Given any SOS specification of probabilistic processes, we provide a method to compute, for each operator of the language, its respective metric compositionality property. The compositionality property of an operator is defined as its modulus of continuity, which gives the relative increase of the distance between processes when they are combined by that operator. It is computed by recursively counting how many times the combined processes are copied along their evolution. The compositionality properties allow one to derive an upper bound on the distance between processes purely by inspecting the operators used to specify those processes.
Energy Technology Data Exchange (ETDEWEB)
Lerm, Stephanie; Alawi, Mashal; Wuerdemann, Hilke [Helmholtz-Zentrum Potsdam, GFZ - Deutsches GeoForschungsZentrum, Internationales Geothermiezentrum, Potsdam (Germany); Miethling-Graff, Rona [Wald und Fischerei Institut fuer Biodiversitaet, Johann Heinrich von Thuenen Institut, Bundesforschungsinstitut fuer Laendliche Raeume, Braunschweig (Germany); Wolfgramm, Markus; Rauppach, Kerstin [Geothermie Neubrandenburg GmbH (GTN), Neubrandenburg (Germany); Seibt, Andrea [BWG Geochemische Beratung GbR, Neubrandenburg (Germany)
2011-06-15
In this study, the operation of a cold store, located at 30-60 m depth in the North German Basin, was investigated by direct counting of bacteria and genetic fingerprinting analysis. Quantification of microbes yielded 1 to 10 × 10⁵ cells per ml of fluid, with minor differences in microbial community composition between well and process fluids. The detected microorganisms belong to the versatile phyla Proteobacteria and Flavobacteria. In addition to routine plant operation, a phase of plant malfunction caused by filter clogging was monitored. Increased abundance of sulfur-oxidizing bacteria indicated a change in the supply of electron acceptors; however, no changes in the availability of electron acceptors such as nitrate or oxygen were detected. Sulfur- and iron-oxidizing bacteria played essential roles in the filter lifetimes at the topside facility and the injectivity of the wells, due to the formation of biofilms and induced mineral precipitation. In particular, filamentous biofilms generated by the sulfur-oxidizing Thiothrix were involved in the filter clogging.
Focal Points, Endogenous Processes, and Exogenous Shocks in the Autism Epidemic
Liu, Kayuet; Bearman, Peter S.
2015-01-01
Autism prevalence has increased rapidly in the United States during the past two decades. We have previously shown that the diffusion of information about autism through spatially proximate social relations has contributed significantly to the epidemic. This study expands on this finding by identifying the focal points for interaction that drive…
Jansen, M.H.; Di Bucchianico, A.; Mattheij, R.M.M.; Peletier, M.A.
2006-01-01
We present a continuous wavelet analysis of count data with time-varying intensities. The objective is to extract intervals with significant intensities from background intervals. This includes the precise starting point of the significant interval, its exact duration and the (average) level of
MEDOF - MINIMUM EUCLIDEAN DISTANCE OPTIMAL FILTER
Barton, R. S.
1994-01-01
The Minimum Euclidean Distance Optimal Filter program, MEDOF, generates filters for use in optical correlators. The algorithm implemented in MEDOF follows theory put forth by Richard D. Juday of NASA/JSC. This program analytically optimizes filters on arbitrary spatial light modulators such as coupled, binary, full complex, and fractional 2pi phase. MEDOF optimizes these modulators on a number of metrics including: correlation peak intensity at the origin for the centered appearance of the reference image in the input plane, signal to noise ratio including the correlation detector noise as well as the colored additive input noise, peak to correlation energy defined as the fraction of the signal energy passed by the filter that shows up in the correlation spot, and the peak to total energy which is a generalization of PCE that adds the passed colored input noise to the input image's passed energy. The user of MEDOF supplies the functions that describe the following quantities: 1) the reference signal, 2) the realizable complex encodings of both the input and filter SLM, 3) the noise model, possibly colored, as it adds at the reference image and at the correlation detection plane, and 4) the metric to analyze, here taken to be one of the analytical ones like SNR (signal to noise ratio) or PCE (peak to correlation energy) rather than peak to secondary ratio. MEDOF calculates filters for arbitrary modulators and a wide range of metrics as described above. MEDOF examines the statistics of the encoded input image's noise (if SNR or PCE is selected) and the filter SLM's (Spatial Light Modulator) available values. These statistics are used as the basis of a range for searching for the magnitude and phase of k, a pragmatically based complex constant for computing the filter transmittance from the electric field. The filter is produced for the mesh points in those ranges and the value of the metric that results from these points is computed. When the search is concluded, the
Energy Technology Data Exchange (ETDEWEB)
Prausnitz, J.M.
1980-05-01
This research is concerned with the fundamental physical chemistry and thermodynamics of condensation of tars (dew points) from the vapor phase at elevated temperatures and pressures. Fundamental quantitative understanding of dew points is important for the rational design of heat exchangers to recover sensible heat from hot, tar-containing gases produced in coal gasification. This report includes the following contributions toward establishing the desired understanding: (1) Characterization of Coal Tars for Dew-Point Calculations; (2) Fugacity Coefficients for Dew-Point Calculations in Coal-Gasification Process Design; (3) Vapor Pressures of High-Molecular-Weight Hydrocarbons; (4) Estimation of Vapor Pressures of High-Boiling Fractions in Liquefied Fossil Fuels Containing the Heteroatoms Nitrogen or Sulfur; and (5) Vapor Pressures of Heavy Liquid Hydrocarbons by a Group-Contribution Method.
International Nuclear Information System (INIS)
Cooper, W.S.
1986-01-01
Several techniques proposed for diagnosing the velocity distribution of fast alpha particles in a burning plasma require the injection of a beam of fast neutral atoms as probes. The author discusses how improved signal detection techniques are a high-leverage factor in reducing the cost of the diagnostic beam. Optimal estimation theory provides a computational algorithm, the Kalman filter, that can optimally estimate the amplitude of a signal with arbitrary (but known) time dependence in the presence of noise. In one example presented, based on a square-wave signal and assumed noise levels, the Kalman filter achieves an enhancement of signal detection efficiency of about a factor of 10 (compared with straightforward observation of the signal superimposed on noise) with an observation time of 100 signal periods.
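The idea of estimating the amplitude of a signal with known time dependence can be sketched with a scalar Kalman filter: model the observation as y_t = a·s_t + noise, where the waveform s_t (here a square wave) is known and the constant amplitude a is the state. This is a hedged sketch of the general technique, not the author's exact formulation; all numbers are illustrative.

```python
import numpy as np

def estimate_amplitude(y, s, r, p0=1e6):
    """Scalar Kalman filter for a constant amplitude a in y_t = a*s_t + v_t,
    with v_t ~ N(0, r). Since the state is constant, there is no
    prediction step, only the measurement update."""
    a_hat, p = 0.0, p0            # state estimate and its variance
    for yt, st in zip(y, s):
        k = p * st / (st * st * p + r)   # Kalman gain
        a_hat += k * (yt - st * a_hat)   # innovation update
        p *= (1.0 - k * st)              # variance update
    return a_hat

rng = np.random.default_rng(0)
t = np.arange(400)
s = np.sign(np.sin(2 * np.pi * t / 40))    # known square-wave shape
y = 1.5 * s + rng.normal(0.0, 1.0, t.size) # amplitude 1.5 buried in noise
a = estimate_amplitude(y, s, r=1.0)
```

Averaging over many signal periods is what drives the estimate variance down, which is the mechanism behind the factor-of-10 efficiency gain quoted in the abstract.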
Yi, Faliu; Lee, Jieun; Moon, Inkyu
2014-05-01
The reconstruction of multiple depth images with a ray back-propagation algorithm in three-dimensional (3D) computational integral imaging is computationally burdensome. Further, a reconstructed depth image consists of a focus and an off-focus area. Focus areas are 3D points on the surface of an object that are located at the reconstructed depth, while off-focus areas include 3D points in free-space that do not belong to any object surface in 3D space. Generally, without being removed, the presence of an off-focus area would adversely affect the high-level analysis of a 3D object, including its classification, recognition, and tracking. Here, we use a graphics processing unit (GPU) that supports parallel processing with multiple processors to simultaneously reconstruct multiple depth images using a lookup table containing the shifted values along the x and y directions for each elemental image in a given depth range. Moreover, each 3D point on a depth image can be measured by analyzing its statistical variance with its corresponding samples, which are captured by the two-dimensional (2D) elemental images. These statistical variances can be used to classify depth image pixels as either focus or off-focus points. At this stage, the measurement of focus and off-focus points in multiple depth images is also implemented in parallel on a GPU. Our proposed method is conducted based on the assumption that there is no occlusion of the 3D object during the capture stage of the integral imaging process. Experimental results have demonstrated that this method is capable of removing off-focus points in the reconstructed depth image. The results also showed that using a GPU to remove the off-focus points could greatly improve the overall computational speed compared with using a CPU.
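The variance-based focus test described above can be sketched very simply: a reconstructed 3D point whose intensity samples (gathered from the 2D elemental images) agree closely has low variance and is a focus point; large disagreement marks an off-focus point. The threshold and sample values below are illustrative assumptions, not the paper's exact statistic.

```python
import numpy as np

def classify_focus(samples, threshold):
    """Label a reconstructed pixel as 'focus' when its samples from the
    elemental images are consistent (low variance), else 'off-focus'."""
    return "focus" if np.var(samples) < threshold else "off-focus"

# Hypothetical intensity samples of one pixel across elemental images:
on_surface = [120, 122, 119, 121, 120]   # consistent -> object surface
free_space = [40, 200, 90, 160, 10]      # inconsistent -> free space
```

On a GPU, this per-pixel variance test is embarrassingly parallel, which is why the paper can run it simultaneously over all pixels of all depth images.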
International Nuclear Information System (INIS)
Vanin, V.R.
1990-01-01
Multidetector systems for high-resolution gamma spectroscopy are presented. The observable parameters for identifying nuclides produced simultaneously in the reaction are analysed, and the efficiency of filter systems is discussed. (M.C.K.)
Second-order analysis of inhomogeneous spatial point processes with proportional intensity functions
DEFF Research Database (Denmark)
Guan, Yongtao; Waagepetersen, Rasmus; Beale, Colin M.
2008-01-01
of the intensity functions. The first approach is based on nonparametric kernel-smoothing, whereas the second approach uses a conditional likelihood estimation approach to fit a parametric model for the pair correlation function. A great advantage of the proposed methods is that they do not require the often...... to two spatial point patterns regarding the spatial distributions of birds in the U.K.'s Peak District in 1990 and 2004....
Fractal Point Process and Queueing Theory and Application to Communication Networks
National Research Council Canada - National Science Library
Wornel, Gregory
1999-01-01
.... A unifying theme in the approaches to these problems has been an integration of interrelated perspectives from communication theory, information theory, signal processing theory, and control theory...
Energy Technology Data Exchange (ETDEWEB)
Bergfeld, K
1935-03-09
A process for extracting oil from oil-bearing stones or sands is characterized by heating the stones and sands in a suitable furnace to a temperature below that of cracking, and preferably slightly above the boiling point of the oils. The oily vapors are removed from the treating chamber by means of a flushing gas.
DEFF Research Database (Denmark)
Grell, Kathrine; Diggle, Peter J; Frederiksen, Kirsten
2015-01-01
We study methods for how to include the spatial distribution of tumours when investigating the relation between brain tumours and the exposure from radio frequency electromagnetic fields caused by mobile phone use. Our suggested point process model is adapted from studies investigating spatial...... the Interphone Study, a large multinational case-control study on the association between brain tumours and mobile phone use....
Estimating functions for inhomogeneous spatial point processes with incomplete covariate data
DEFF Research Database (Denmark)
Waagepetersen, Rasmus
2008-01-01
and this leads to parameter estimation error which is difficult to quantify. In this paper, we introduce a Monte Carlo version of the estimating function used in spatstat for fitting inhomogeneous Poisson processes and certain inhomogeneous cluster processes. For this modified estimating function, it is feasible...
Hazard rate model and statistical analysis of a compound point process
Czech Academy of Sciences Publication Activity Database
Volf, Petr
2005-01-01
Roč. 41, č. 6 (2005), s. 773-786 ISSN 0023-5954 R&D Projects: GA ČR(CZ) GA402/04/1294 Institutional research plan: CEZ:AV0Z10750506 Keywords: counting process * compound process * Cox regression model * intensity Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.343, year: 2005
Congruence from the operator's point of view: compositionality requirements on process semantics
Gazda, M.; Fokkink, W.J.
2010-01-01
One of the basic sanity properties of a behavioural semantics is that it constitutes a congruence with respect to standard process operators. This issue has been traditionally addressed by the development of rule formats for transition system specifications that define process algebras. In this
Rodgers, John C.; McFarland, Andrew R.; Ortiz, Carlos A.
1995-01-01
A quick-change filter cartridge. In sampling systems for measurement of airborne materials, a filter element is introduced into the sampled airstream such that the aerosol constituents are removed and deposited on the filter. Fragile sampling media often require support in order to prevent rupture during sampling, and careful mounting and sealing to prevent misalignment, tearing, or creasing which would allow the sampled air to bypass the filter. Additionally, handling of filter elements may introduce cross-contamination or exposure of operators to toxic materials. Moreover, it is desirable to enable the preloading of filter media into quick-change cartridges in clean laboratory environments, thereby simplifying and expediting the filter-changing process in the field. The quick-change filter cartridge of the present invention permits the application of a variety of filter media in many types of instruments and may also be used in automated systems. The cartridge includes a base through which a vacuum can be applied to draw air through the filter medium which is located on a porous filter support and held there by means of a cap which forms an airtight seal with the base. The base is also adapted for receiving absorbing media so that both particulates and gas-phase samples may be trapped for investigation, the latter downstream of the aerosol filter.
International Nuclear Information System (INIS)
McNabb, J.
2001-01-01
The analysis of data from CLAS is a multi-step process. After the detectors for a given running period have been calibrated, the data are processed in the so-called pass-1 cooking. During pass-1 cooking, each event is reconstructed by the program a1c, which finds particle tracks and computes momenta from the raw data. The results are then passed on to several data monitoring and filtering utilities. In CLAS software, a filter is a parameterless function that returns an integer indicating whether an event should be kept by that filter. A main filter program called g1-filter controls several specific filters and outputs several files, one per filter. These files may then be analyzed separately, allowing individuals interested in one reaction channel to work from smaller files than the whole data set would require. There are several constraints on what the filter functions should do. Obviously, the filtered files should be as small as possible; however, a filter should also not reject any events that might be used in the later analysis for which it was intended.
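The keep/reject convention described above can be sketched as follows. In the actual CLAS code a filter takes no arguments and reads the current event from shared state; here the event is passed explicitly so the sketch is self-contained, and the event fields are hypothetical.

```python
def make_filter(predicate):
    """Wrap a per-event predicate into a CLAS-style integer filter:
    nonzero means keep the event, zero means reject it."""
    def filt(event):
        return 1 if predicate(event) else 0
    return filt

# Hypothetical reaction-channel filter: keep events with >= 2 charged tracks.
two_track_filter = make_filter(lambda ev: ev.get("n_charged", 0) >= 2)
```

A main driver like g1-filter would then run each event through every registered filter and append it to the output file of each filter that returns nonzero.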
Directory of Open Access Journals (Sweden)
Alex Donaldson
2016-09-01
Conclusion: This systematic yet pragmatic and iterative intervention development process is potentially applicable to any injury prevention topic across all sports settings and levels. It will guide researchers wishing to undertake intervention development.
Perspectives on Nonlinear Filtering
Law, Kody
2015-01-01
The solution to the problem of nonlinear filtering may be given either as an estimate of the signal (and ideally some measure of concentration), or as a full posterior distribution. Similarly, one may evaluate the fidelity of the filter either by its ability to track the signal or by its proximity to the posterior filtering distribution. Hence, the field enjoys a lively symbiosis between probability and control theory, and there are plenty of applications which benefit from algorithmic advances, from signal processing, to econometrics, to large-scale ocean, atmosphere, and climate modeling. This talk will survey some recent theoretical results involving accurate signal tracking with noise-free (degenerate) dynamics in high dimensions (infinite, in principle, but say d between 10^3 and 10^8, depending on the size of your application and your computer), and high-fidelity approximations of the filtering distribution in low dimensions (say d between 1 and several tens).
Main points of research in crude oil processing and petrochemistry. [German Democratic Republic
Energy Technology Data Exchange (ETDEWEB)
Keil, G.; Nowak, S.; Fiedrich, G.; Klare, H.; Apelt, E.
1982-04-01
This article analyzes general aspects of the development of petrochemistry and carbochemistry on a global scale and for industry in the German Democratic Republic. Diagrams are given for liquid and solid carbon resources and their natural hydrogen content, showing the increasing hydrogen demand for chemical fuel conversion processes. The petrochemical and carbochemical industry must take into account a growing hydrogen demand, at present 25 Mt/a on a global scale and increasing by 7% annually. Various methods for the chemical processing of crude oil and crude oil residues are outlined. Advanced coal conversion processes with prospects for future application in the GDR are also explained, including the methanol carbonylation process, which achieves 90% selectivity and is based on carbon monoxide hydrogenation, and the Transcat process, which uses ethane for vinyl chloride production. Acetylene and carbide carbochemistry in the GDR is a further major line of research and development. Technological processes for the pyrolysis of vacuum gas oil are also evaluated. (27 refs.)
Bukhari, W.; Hong, S.-M.
2016-03-01
The prediction and gating of respiratory motion have received much attention over the last two decades as means of reducing the targeting error of the radiation treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting respiratory motion in 3D space and realizing a gating function without pre-specifying a particular phase of the patient's breathing cycle. The algorithm, named EKF-GPRN+, first employs an extended Kalman filter (EKF) independently along each coordinate to predict the respiratory motion and then uses a Gaussian process regression network (GPRN) to correct the prediction error of the EKF in 3D space. The GPRN is a nonparametric Bayesian algorithm for modeling input-dependent correlations between the output variables in multi-output regression. Inference in GPRN is intractable, and we employ variational inference with a mean-field approximation to compute an approximate predictive mean and predictive covariance matrix. The approximate predictive mean is used to correct the prediction error of the EKF. The trace of the approximate predictive covariance matrix is utilized to capture the uncertainty in the EKF-GPRN+ prediction error and to systematically identify, in advance, breathing points with a higher probability of large prediction error. This identification enables us to pause the treatment beam over such instances. EKF-GPRN+ implements a gating function using simple calculations based on the trace of the predictive covariance matrix. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPRN+. The experimental results show that the EKF-GPRN+ algorithm reduces the patient-wise prediction error to 38%, 40% and 40% in root-mean-square, compared to no prediction, at lookahead lengths of 192 ms, 384 ms and 576 ms, respectively. The EKF-GPRN+ algorithm can further reduce the prediction error by employing the gating function.
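The gating rule described above reduces to a simple scalar test: pause the beam whenever the trace of the predictive covariance matrix (the total prediction uncertainty over the three coordinates) exceeds a threshold. The sketch below illustrates only that rule; the threshold and covariance values are illustrative assumptions, not values from the paper.

```python
import numpy as np

def gate_beam(pred_cov, threshold):
    """Return True (hold the beam) when the trace of the 3x3 predictive
    covariance exceeds `threshold`, i.e. when the motion prediction is
    too uncertain to treat safely."""
    return np.trace(pred_cov) > threshold

cov_confident = np.diag([0.2, 0.3, 0.1])  # low uncertainty  -> treat
cov_uncertain = np.diag([2.0, 3.0, 1.5])  # high uncertainty -> pause
```

Because the test needs only the trace of a 3x3 matrix, it is cheap enough to evaluate at every prediction step in real time.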
The AGILE on-board Kalman filter
International Nuclear Information System (INIS)
Giuliani, A.; Cocco, V.; Mereghetti, S.; Pittori, C.; Tavani, M.
2006-01-01
On-board reduction of particle background is one of the main challenges for space instruments dedicated to gamma-ray astrophysics. We present in this paper a discussion of the method and the main simulation results of the on-board background filter of the Gamma-Ray Imaging Detector (GRID) of the AGILE mission. The GRID is capable of detecting and imaging gamma-ray photons in the range 30 MeV-30 GeV with an optimal point spread function. The AGILE planned orbit is equatorial, with an altitude of 550 km. This is an optimal orbit from the point of view of the expected particle background. For this orbit, electrons and positrons with kinetic energies between 20 MeV and hundreds of MeV dominate the particle background, with significant contributions from high-energy (primary) and low-energy protons, and gamma-ray albedo photons. We present here the main results obtained by extensive simulations of the on-board AGILE-GRID particle/photon background rejection algorithms, based on a special application of Kalman filter techniques. This filter is applied sequentially (Level-2) after the other data processing techniques characterizing the Level-1 processing. We show that, in conjunction with the Level-1 processing, the adopted Kalman filtering is expected to reduce the total particle/albedo-photon background rate to a value (≤10-30 Hz) that is compatible with the AGILE telemetry. The AGILE on-board Kalman filter is also effective in reducing the Earth-albedo-photon background rate, and therefore contributes to substantially increasing the AGILE exposure for celestial gamma-ray sources.
Directory of Open Access Journals (Sweden)
Christina K. Barstow
2016-07-01
Background: In an effort to reduce the disease burden in rural Rwanda, decrease poverty associated with expenditures for fuel, and minimize the environmental impact on forests and greenhouse gases from inefficient combustion of biomass, the Rwanda Ministry of Health (MOH) partnered with DelAgua Health (DelAgua), a private social enterprise, to distribute and promote the use of improved cookstoves and advanced water filters to the poorest quarter of households (Ubudehe 1 and 2) nationally, beginning in Western Province, under a program branded Tubeho Neza ("Live Well"). The project is privately financed and earns revenue from carbon credits under the United Nations Clean Development Mechanism. Methods: During a 3-month period in late 2014, over 470,000 people living in over 101,000 households were provided free water filters and cookstoves. Following the distribution, community health workers visited nearly 98 % of households to perform household-level education and training activities. Over 87 % of households were visited again within 6 months, with a basic survey conducted. Detailed adoption surveys were conducted among a sample of households: 1000 in the first round, 187 in the second. Results: Approximately a year after distribution, reported water filter use was above 90 % (+/−4 % CI) and water present in the filter was observed in over 76 % (+/−6 % CI) of households; reported primary-stove use was nearly 90 % (+/−4.4 % CI) and, of households cooking at the time of the visit, over 83 % (+/−5.3 % CI) were using the improved stove. There was no observed association between household size and stove-stacking behavior. Conclusions: This program suggests that free distribution is not a determinant of low adoption. It is plausible that continued engagement in households, enabled by Ministry of Health support and carbon-financed revenue, contributed to high adoption rates. Overall, the program was able to demonstrate a privately
Díaz Fernández, Ester
2010-01-01
In this thesis, new models and methodologies are introduced for the analysis of dynamic processes characterized by image sequences with spatio-temporal overlapping. Spatio-temporal overlapping occurs in many natural phenomena and should be addressed properly in several scientific disciplines, such as microscopy, materials science, biology, geostatistics, and communication networks. This work is related to point process and random closed set theories, within Stochastic Ge...