WorldWideScience

Sample records for network reconstruction algorithm

  1. A Practical Algorithm for Reconstructing Level-1 Phylogenetic Networks

    NARCIS (Netherlands)

    K.T. Huber; L.J.J. van Iersel (Leo); S.M. Kelk (Steven); R. Suchecki

    2010-01-01

    Recently much attention has been devoted to the construction of phylogenetic networks, which generalize phylogenetic trees in order to accommodate complex evolutionary processes. Here we present an efficient, practical algorithm for reconstructing level-1 phylogenetic networks - a type of ...

  2. Iterative reconstruction of transcriptional regulatory networks: an algorithmic approach.

    Directory of Open Access Journals (Sweden)

    Christian L Barrett

    2006-05-01

    The number of complete, publicly available genome sequences is now greater than 200, and this number is expected to grow rapidly in the near future as metagenomic and environmental sequencing efforts escalate and the cost of sequencing drops. In order to make use of these data for understanding particular organisms and for discerning general principles about how organisms function, it will be necessary to reconstruct their various biochemical reaction networks. Principal among these will be transcriptional regulatory networks. Given the physical and logical complexity of these networks, the various sources of (often noisy) data that can be utilized for their elucidation, the monetary costs involved, and the huge number of potential experiments (approximately 10^12) that can be performed, experiment design algorithms will be necessary for synthesizing the various computational and experimental data to maximize the efficiency of regulatory network reconstruction. This paper presents an algorithm for experimental design to systematically and efficiently reconstruct transcriptional regulatory networks. It is meant to be applied iteratively in conjunction with an experimental laboratory component. The algorithm is presented here in the context of reconstructing transcriptional regulation for metabolism in Escherichia coli, and, through a retrospective analysis with previously performed experiments, we show that the produced experiment designs conform to how a human would design experiments. The algorithm is able to utilize probability estimates based on a wide range of computational and experimental sources to suggest experiments with the highest potential of discovering the greatest amount of new regulatory knowledge.
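
    A minimal sketch of the greedy, uncertainty-driven selection step that such an experiment-design loop could use (an illustration, not the authors' implementation; the edge probabilities and the sets of interactions each experiment resolves are assumed inputs):

    import numpy as np

    def entropy(p):
        # Bernoulli entropy in bits, elementwise; clipped to be safe at p in {0, 1}.
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

    def pick_next_experiment(edge_probs, experiments):
        # Choose the experiment whose resolved edges carry the most remaining
        # uncertainty, a simple proxy for expected information gain.
        gains = [entropy(edge_probs[list(resolved)]).sum() for resolved in experiments]
        return int(np.argmax(gains))

    # Toy usage: 6 candidate regulatory interactions, 3 candidate experiments.
    edge_probs = np.array([0.5, 0.9, 0.4, 0.05, 0.6, 0.5])
    experiments = [{0, 1}, {2, 4, 5}, {3}]
    print(pick_next_experiment(edge_probs, experiments))  # -> 1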

  3. Consensus-based sparse signal reconstruction algorithm for wireless sensor networks

    National Research Council Canada - National Science Library

    Peng, Bao; Zhao, Zhi; Han, Guangjie; Shen, Jian

    2016-01-01

    This article presents a distributed Bayesian reconstruction algorithm for wireless sensor networks that reconstructs sparse signals based on variational sparse Bayesian learning and a consensus filter...
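
    A minimal sketch of the average-consensus building block on which distributed estimators of this kind rest; the ring topology, initial estimates, and step size below are illustrative assumptions, not the article's protocol:

    import numpy as np

    def consensus(estimates, adjacency, steps=50, eps=0.2):
        # Linear consensus: x <- x - eps * L x, with L the graph Laplacian.
        # Converges to the network-wide average for eps < 1 / max degree.
        x = estimates.astype(float).copy()
        laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
        for _ in range(steps):
            x = x - eps * laplacian @ x
        return x

    # Toy usage: 4 sensors on a ring, each holding a noisy local estimate.
    A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
    print(consensus(np.array([1.0, 2.0, 3.0, 2.0]), A))  # -> all close to 2.0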

  4. Robustness and Optimization of Complex Networks : Reconstructability, Algorithms and Modeling

    NARCIS (Netherlands)

    Liu, D.

    2013-01-01

    The infrastructure networks, including the Internet, telecommunication networks, electrical power grids, transportation networks (road, railway, waterway, and airway networks), gas networks and water networks, are becoming more and more complex. The complex infrastructure networks are crucial to our

  5. Mouse obesity network reconstruction with a variational Bayes algorithm to employ aggressive false positive control

    Directory of Open Access Journals (Sweden)

    Logsdon Benjamin A

    2012-04-01

    Background We propose a novel variational Bayes network reconstruction algorithm to extract the most relevant disease factors from high-throughput genomic data-sets. Our algorithm is the only scalable method for regularized network recovery that employs Bayesian model averaging and that can internally estimate an appropriate level of sparsity to ensure few false positives enter the model, without the need for cross-validation or a model selection criterion. We use our algorithm to characterize the effect of genetic markers and liver gene expression traits on mouse obesity related phenotypes, including weight, cholesterol, glucose, and free fatty acid levels, in an experiment previously used for discovery and validation of network connections: an F2 intercross between the C57BL/6J and C3H/HeJ mouse strains on an apolipoprotein E null background. Results We identified eleven genes, Gch1, Zfp69, Dlgap1, Gna14, Yy1, Gabarapl1, Folr2, Fdft1, Cnr2, Slc24a3, and Ccl19, and a quantitative trait locus directly connected to weight, glucose, cholesterol, or free fatty acid levels in our network. None of these genes were identified by other network analyses of this mouse intercross data-set, but all have been previously associated with obesity or related pathologies in independent studies. In addition, through both simulations and data analysis we demonstrate that our algorithm achieves superior performance in terms of power and type I error control compared with other network recovery algorithms that use the lasso and have bounds on type I error control. Conclusions Our final network contains 118 previously associated and novel genes affecting weight, cholesterol, glucose, and free fatty acid levels that are excellent obesity risk candidates.

  6. Reconstructing Genetic Regulatory Networks Using Two-Step Algorithms with the Differential Equation Models of Neural Networks.

    Science.gov (United States)

    Chen, Chi-Kan

    2017-07-26

    The identification of genetic regulatory networks (GRNs) provides insights into complex cellular processes. A class of recurrent neural networks (RNNs) captures the dynamics of GRNs. Algorithms combining the RNN and machine learning schemes were proposed to reconstruct small-scale GRNs using gene expression time series. We present new GRN reconstruction methods with neural networks. The RNN is extended to a class of recurrent multilayer perceptrons (RMLPs) with latent nodes. Our methods contain two steps: the edge rank assignment step and the network construction step. The former assigns ranks to all possible edges by a recursive procedure based on the estimated weights of wires of the RNN/RMLP (RE_RNN/RE_RMLP), and the latter constructs a network consisting of top-ranked edges under which the optimized RNN simulates the gene expression time series. Particle swarm optimization (PSO) is applied to optimize the parameters of RNNs and RMLPs in a two-step algorithm. The proposed RE_RNN-RNN and RE_RMLP-RNN algorithms are tested on synthetic and experimental gene expression time series of small GRNs of about 10 genes. The experimental time series are from studies of yeast cell cycle regulated genes and E. coli DNA repair genes. The unstable estimation of an RNN using experimental time series with limited data points can lead to fairly arbitrary predicted GRNs. Our methods incorporate the RNN and RMLP into a two-step structure learning procedure. Results show that RE_RMLP, using the RMLP with a suitable number of latent nodes to reduce the parameter dimension, often yields more accurate edge ranks than RE_RNN using the regularized RNN on short simulated time series. Combining, by a weighted majority voting rule, the networks derived by RE_RMLP-RNN using different numbers of latent nodes in step one to infer the GRN, the method performs consistently and outperforms published algorithms for GRN reconstruction on most benchmark time series. The framework of two ...

  7. A new algorithm for $H\\rightarrow\\tau\\bar{\\tau}$ invariant mass reconstruction using Deep Neural Networks

    CERN Document Server

    Dietrich, Felix

    2017-01-01

    Reconstructing the invariant mass in a Higgs boson decay event containing tau leptons turns out to be a challenging endeavour. The aim of this summer student project is to implement a new algorithm for this task, using deep neural networks and machine learning. The results are compared to SVFit, an existing algorithm that uses dynamical likelihood techniques. A neural network is found that reaches the accuracy of SVFit at low masses and even surpasses it at higher masses, while at the same time providing results a thousand times faster.

  8. Boolean regulatory network reconstruction using literature based knowledge with a genetic algorithm optimization method.

    Science.gov (United States)

    Dorier, Julien; Crespo, Isaac; Niknejad, Anne; Liechti, Robin; Ebeling, Martin; Xenarios, Ioannis

    2016-10-06

    Prior knowledge networks (PKNs) provide a framework for the development of computational biological models, including Boolean models of regulatory networks which are the focus of this work. PKNs are created by a painstaking process of literature curation, and generally describe all relevant regulatory interactions identified using a variety of experimental conditions and systems, such as specific cell types or tissues. Certain of these regulatory interactions may not occur in all biological contexts of interest, and their presence may dramatically change the dynamical behaviour of the resulting computational model, hindering the elucidation of the underlying mechanisms and reducing the usefulness of model predictions. Methods are therefore required to generate optimized contextual network models from generic PKNs. We developed a new approach to generate and optimize Boolean networks, based on a given PKN. Using a genetic algorithm, a model network is built as a sub-network of the PKN and trained against experimental data to reproduce the experimentally observed behaviour in terms of attractors and the transitions that occur between them under specific perturbations. The resulting model network is therefore contextualized to the experimental conditions and constitutes a dynamical Boolean model closer to the observed biological process used to train the model than the original PKN. Such a model can then be interrogated to simulate response under perturbation, to detect stable states and their properties, to get insights into the underlying mechanisms and to generate new testable hypotheses. Generic PKNs attempt to synthesize knowledge of all interactions occurring in a biological process of interest, irrespective of the specific biological context. This limits their usefulness as a basis for the development of context-specific, predictive dynamical Boolean models. The optimization method presented in this article produces specific, contextualized models from generic

  9. Tau reconstruction and identification algorithm

    Indian Academy of Sciences (India)

    2012-11-15

    Tau is the heaviest known lepton (Mτ = 1.78 GeV), which decays into lighter leptons (BR ∼ 35%) or hadrons τh (BR ∼ 65%) in the presence of up to two neutrinos. The τ reconstruction algorithms use decay mode identification techniques which allow one to reconstruct τh with ...

  10. Reconstruction of periodic signals using neural networks

    Directory of Open Access Journals (Sweden)

    José Danilo Rairán Antolines

    2014-01-01

    In this paper, we reconstruct a periodic signal by using two neural networks. The first network is trained to approximate the period of a signal, and the second network estimates the corresponding coefficients of the signal's Fourier expansion. The reconstruction strategy consists of minimizing the mean-square error via backpropagation algorithms over a single neuron with a sine transfer function. Additionally, this paper presents a mathematical proof of the quality of the approximation, as well as a first modification of the algorithm, which requires less data to reach the same estimate, thus making the algorithm suitable for real-time implementations.
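
    A minimal sketch of the two-stage idea (estimate the period first, then the Fourier coefficients). As a stand-in for the paper's backpropagation-trained sine neuron, this version scans candidate fundamental frequencies and fits the coefficients for each by linear least squares; the signal, frequency grid, and harmonic count are illustrative assumptions:

    import numpy as np

    def fit_fourier(t, y, w, harmonics=3):
        # Least-squares Fourier coefficients for a given fundamental frequency w.
        cols = [np.ones_like(t)]
        for k in range(1, harmonics + 1):
            cols += [np.sin(k * w * t), np.cos(k * w * t)]
        A = np.stack(cols, axis=1)
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coef, float(np.sum((A @ coef - y) ** 2))

    def estimate_period(t, y, w_grid, harmonics=3):
        errs = [fit_fourier(t, y, w, harmonics)[1] for w in w_grid]
        return 2 * np.pi / w_grid[int(np.argmin(errs))]

    t = np.linspace(0, 10, 400)
    y = 1.5 * np.sin(1.3 * t + 0.4) + 0.3 * np.sin(2.6 * t)
    w_grid = np.linspace(0.5, 3.0, 500)
    print(round(estimate_period(t, y, w_grid), 3))  # close to 2*pi/1.3 ~ 4.833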

  11. Dendroclimatic transfer functions revisited: Little Ice Age and Medieval Warm Period summer temperatures reconstructed using artificial neural networks and linear algorithms

    Directory of Open Access Journals (Sweden)

    S. Helama

    2009-03-01

    Tree-rings tell of past climates. To do so, tree-ring chronologies comprising numerous climate-sensitive living-tree and subfossil time-series need to be "transferred" into palaeoclimate estimates using transfer functions. The purpose of this study is to compare different types of transfer functions, especially linear and nonlinear algorithms. Accordingly, multiple linear regression (MLR), linear scaling (LSC) and artificial neural networks (ANN, a nonlinear algorithm) were compared. Transfer functions were built using a regional tree-ring chronology and instrumental temperature observations from Lapland (northern Finland and Sweden). In addition, conventional MLR was compared with a hybrid model whereby climate was reconstructed separately for short- and long-period timescales prior to combining the bands of timescales into a single hybrid model. The fidelity of the different reconstructions was validated against instrumental climate data. The reconstructions by MLR and ANN showed reliable reconstruction capabilities over the instrumental period (AD 1802–1998). LSC failed to reach reasonable verification statistics and did not qualify as a reliable reconstruction: this was due mainly to exaggeration of the low-frequency climatic variance. Over this instrumental period, the reconstructed low-frequency amplitudes of climate variability were rather similar for MLR and ANN. Notably greater differences between the models were found over the actual reconstruction period (AD 802–1801). A marked temperature decline, as reconstructed by MLR, from the Medieval Warm Period (AD 931–1180) to the Little Ice Age (AD 1601–1850), was evident in all the models. This decline was approx. 0.5°C as reconstructed by MLR. Different ANN based palaeotemperatures showed simultaneous cooling of 0.2 to 0.5°C, depending on the algorithm. The hybrid MLR did not seem to provide further benefit above conventional MLR in our sample. The robustness of the conventional MLR over the ...

  12. Dendroclimatic transfer functions revisited: Little Ice Age and Medieval Warm Period summer temperatures reconstructed using artificial neural networks and linear algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Helama, S.; Holopainen, J.; Eronen, M. [Department of Geology, University of Helsinki, (Finland); Makarenko, N.G. [Russian Academy of Sciences, St. Petersburg (Russian Federation). Pulkovo Astronomical Observatory; Karimova, L.M.; Kruglun, O.A. [Institute of Mathematics, Almaty (Kazakhstan); Timonen, M. [Finnish Forest Research Institute, Rovaniemi Research Unit (Finland); Merilaeinen, J. [SAIMA Unit of the Savonlinna Department of Teacher Education, University of Joensuu (Finland)

    2009-07-01

    Tree-rings tell of past climates. To do so, tree-ring chronologies comprising numerous climate-sensitive living-tree and subfossil time-series need to be 'transferred' into palaeoclimate estimates using transfer functions. The purpose of this study is to compare different types of transfer functions, especially linear and nonlinear algorithms. Accordingly, multiple linear regression (MLR), linear scaling (LSC) and artificial neural networks (ANN, a nonlinear algorithm) were compared. Transfer functions were built using a regional tree-ring chronology and instrumental temperature observations from Lapland (northern Finland and Sweden). In addition, conventional MLR was compared with a hybrid model whereby climate was reconstructed separately for short- and long-period timescales prior to combining the bands of timescales into a single hybrid model. The fidelity of the different reconstructions was validated against instrumental climate data. The reconstructions by MLR and ANN showed reliable reconstruction capabilities over the instrumental period (AD 1802-1998). LSC failed to reach reasonable verification statistics and did not qualify as a reliable reconstruction: this was due mainly to exaggeration of the low-frequency climatic variance. Over this instrumental period, the reconstructed low-frequency amplitudes of climate variability were rather similar for MLR and ANN. Notably greater differences between the models were found over the actual reconstruction period (AD 802-1801). A marked temperature decline, as reconstructed by MLR, from the Medieval Warm Period (AD 931-1180) to the Little Ice Age (AD 1601-1850), was evident in all the models. This decline was approx. 0.5°C as reconstructed by MLR. Different ANN based palaeotemperatures showed simultaneous cooling of 0.2 to 0.5°C, depending on the algorithm. The hybrid MLR did not seem to provide further benefit above conventional MLR in our sample. The robustness of the conventional MLR over the calibration ...
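
    A minimal sketch of an MLR-style transfer function of the kind compared in these two records: calibrate proxies against instrumental temperatures, then check the reconstruction on held-out verification years. The data below are synthetic stand-ins, not the Lapland chronology:

    import numpy as np

    rng = np.random.default_rng(0)
    years = 200
    temp = rng.normal(0.0, 1.0, years)  # "instrumental" temperature series
    proxies = np.column_stack([0.8 * temp + rng.normal(0, 0.5, years),
                               0.5 * temp + rng.normal(0, 0.7, years)])

    half = years // 2
    X_cal = np.column_stack([np.ones(half), proxies[:half]])
    beta, *_ = np.linalg.lstsq(X_cal, temp[:half], rcond=None)  # calibration

    X_ver = np.column_stack([np.ones(years - half), proxies[half:]])
    recon = X_ver @ beta  # reconstruction over the verification period
    print(f"verification r = {np.corrcoef(recon, temp[half:])[0, 1]:.2f}")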

  13. Analysis of an Optimized MLOS Tomographic Reconstruction Algorithm and Comparison to the MART Reconstruction Algorithm

    Science.gov (United States)

    La Foy, Roderick; Vlachos, Pavlos

    2011-11-01

    An optimally designed MLOS tomographic reconstruction algorithm for use in 3D PIV and PTV applications is analyzed. Using a set of optimized reconstruction parameters, the reconstructions produced by the MLOS algorithm are shown to be comparable to reconstructions produced by the MART algorithm for a range of camera geometries, camera numbers, and particle seeding densities. The resultant velocity field error calculated using PIV and PTV algorithms is further minimized by applying both pre- and post-processing to the reconstructed data sets.
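
    For reference, a minimal sketch of the MART update that the comparison above uses as its baseline: each ray corrects the voxels it crosses multiplicatively until reprojections match the recorded projections. The weight matrix here is a toy assumption; a real PIV setup would build it from camera geometry:

    import numpy as np

    def mart(W, p, n_iter=50, mu=1.0, eps=1e-12):
        x = np.ones(W.shape[1])  # start from a uniform volume
        for _ in range(n_iter):
            for i in range(len(p)):  # one multiplicative correction per ray
                proj = W[i] @ x
                if proj > eps:
                    x *= (p[i] / proj) ** (mu * W[i])
        return x

    # Toy usage: 3 voxels observed by 3 rays.
    W = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0]])
    x_true = np.array([0.2, 1.0, 0.5])
    print(np.round(mart(W, W @ x_true), 3))  # approaches x_true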

  14. Network-Oblivious Algorithms

    DEFF Research Database (Denmark)

    Bilardi, Gianfranco; Pietracaprina, Andrea; Pucci, Geppino

    2016-01-01

    A framework is proposed for the design and analysis of network-oblivious algorithms, namely algorithms that can run unchanged, yet efficiently, on a variety of machines characterized by different degrees of parallelism and communication capabilities. The framework prescribes that a network... in the latter model implies optimality in the decomposable bulk synchronous parallel model, which is known to effectively describe a wide and significant class of parallel platforms. The proposed framework can be regarded as an attempt to port the notion of obliviousness, well established in the context...

  15. Variable Weighted Ordered Subset Image Reconstruction Algorithm

    Directory of Open Access Journals (Sweden)

    Jinxiao Pan

    2006-01-01

    We propose two variable weighted iterative reconstruction algorithms (VW-ART and VW-OS-SART) to improve the algebraic reconstruction technique (ART) and simultaneous algebraic reconstruction technique (SART) and establish their convergence. In the two algorithms, the weighting varies with the geometrical direction of the ray. Experimental results with both numerical simulation and real CT data demonstrate that VW-ART offers a significant improvement in the quality of reconstructed images over ART and OS-SART. Moreover, both VW-ART and VW-OS-SART are more promising in convergence speed than ART and SART, respectively.
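
    A minimal sketch of a relaxed ART (Kaczmarz) sweep with a per-ray weight, loosely mirroring the variable-weight idea above; the weight values standing in for a direction-dependent rule are illustrative assumptions:

    import numpy as np

    def weighted_art(W, p, ray_weights, n_sweeps=100, lam=0.5):
        x = np.zeros(W.shape[1])
        for _ in range(n_sweeps):
            for i in range(len(p)):  # project onto each ray's hyperplane
                row = W[i]
                x += lam * ray_weights[i] * (p[i] - row @ x) / (row @ row) * row
        return x

    W = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0]])
    x_true = np.array([0.2, 1.0, 0.5])
    ray_weights = np.array([1.0, 0.8, 1.2])  # e.g. varying with ray direction
    print(np.round(weighted_art(W, W @ x_true, ray_weights), 3))  # -> x_true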

  16. Robustness of the ATLAS pixel clustering neural network algorithm

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00407780; The ATLAS collaboration

    2016-01-01

    Proton-proton collisions at the energy frontier put strong constraints on track reconstruction algorithms. In the ATLAS track reconstruction algorithm, an artificial neural network is utilised to identify and split clusters of neighbouring read-out elements in the ATLAS pixel detector created by multiple charged particles. The robustness of the neural network algorithm is presented, probing its sensitivity to uncertainties in the detector conditions. The robustness is studied by evaluating the stability of the algorithm's performance under a range of variations in the inputs to the neural networks. Within reasonable variation magnitudes, the neural networks prove to be robust to most variation types.

  17. Innovative reconstruction algorithms in cardiac SPECT scintigraphy.

    Science.gov (United States)

    Zoccarato, O

    2012-06-01

    The recent entry into the market of some advanced iterative reconstruction algorithms (IA) optimized for bone and cardiac studies has raised great interest among specialists in nuclear medicine. In particular, myocardial perfusion studies have received a significant boost thanks to the superior quality of images obtained with these new reconstruction methods. Unlike filtered back-projection (FBP), the basic principles of the iterative reconstruction techniques are less well known; in particular, it is often unclear how the iterative methods are able to include compensations for the main degradation phenomena in SPECT imaging. The aim of this review is to provide a simple introduction to the iterative solution of the tomographic problem by using its matricial representation. This paper will also provide simple graphical examples of how phenomena such as attenuation and depth-dependent resolution can be modelled in the projection operator. The main degrading factors in cardiac SPECT images will be reviewed along with some indication of the effectiveness of the corrections adopted. This step makes clear the noteworthy qualitative improvement obtained with these advanced algorithms. A brief summary of the main features of the most widespread new iterative reconstruction algorithms will be presented. The majority of manufacturers emphasize the reduction of acquisition times allowed by these innovative algorithms. Finally, because of the awareness of the increasing exposure of the population due to the increasing number of imaging studies with ionizing radiation, the use of these advanced algorithms to achieve a simultaneous reduction in patient dose and acquisition time will also be shown.
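
    A minimal sketch of the matricial view described above, using the MLEM update as the example; degradation phenomena such as attenuation or depth-dependent resolution would simply be folded into the system matrix A (left generic and tiny here):

    import numpy as np

    def mlem(A, y, n_iter=100, eps=1e-12):
        x = np.ones(A.shape[1])  # uniform initial activity estimate
        sens = A.sum(axis=0)  # per-pixel sensitivity (column sums)
        for _ in range(n_iter):
            ratio = y / np.maximum(A @ x, eps)  # measured / modelled projections
            x *= (A.T @ ratio) / np.maximum(sens, eps)
        return x

    A = np.array([[0.9, 0.5, 0.1],  # rows: projection bins
                  [0.1, 0.5, 0.9],  # columns: image pixels
                  [0.4, 0.8, 0.4]])
    x_true = np.array([2.0, 1.0, 3.0])
    print(np.round(mlem(A, A @ x_true), 2))  # approaches x_true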

  18. Complex networks an algorithmic perspective

    CERN Document Server

    Erciyes, Kayhan

    2014-01-01

    Network science is a rapidly emerging field of study that encompasses mathematics, computer science, physics, and engineering. A key issue in the study of complex networks is to understand the collective behavior of the various elements of these networks. Although the results from graph theory have proven to be powerful in investigating the structures of complex networks, few books focus on the algorithmic aspects of complex network analysis. Filling this need, Complex Networks: An Algorithmic Perspective supplies the basic theoretical algorithmic and graph theoretic knowledge needed by every r...

  19. Bayesian prediction and adaptive sampling algorithms for mobile sensor networks online environmental field reconstruction in space and time

    CERN Document Server

    Xu, Yunfei; Dass, Sarat; Maiti, Tapabrata

    2016-01-01

    This brief introduces a class of problems and models for the prediction of the scalar field of interest from noisy observations collected by mobile sensor networks. It also introduces the problem of optimal coordination of robotic sensors to maximize the prediction quality subject to communication and mobility constraints either in a centralized or distributed manner. To solve such problems, fully Bayesian approaches are adopted, allowing various sources of uncertainties to be integrated into an inferential framework effectively capturing all aspects of variability involved. The fully Bayesian approach also allows the most appropriate values for additional model parameters to be selected automatically by data, and the optimal inference and prediction for the underlying scalar field to be achieved. In particular, spatio-temporal Gaussian process regression is formulated for robotic sensors to fuse multifactorial effects of observations, measurement noise, and prior distributions for obtaining the predictive di...
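
    A minimal sketch of the spatial Gaussian process regression at the core of this approach: given noisy field samples at sensor locations, compute the posterior mean and variance elsewhere. The squared-exponential kernel and noise level are illustrative assumptions:

    import numpy as np

    def gp_predict(X, y, X_star, length=1.0, sig2=1.0, noise=0.01):
        def k(A, B):  # squared-exponential covariance
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return sig2 * np.exp(-0.5 * d2 / length**2)
        K = k(X, X) + noise * np.eye(len(X))
        Ks = k(X_star, X)
        mean = Ks @ np.linalg.solve(K, y)
        var = sig2 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
        return mean, var

    rng = np.random.default_rng(1)
    X = rng.uniform(0, 5, (30, 2))  # mobile-sensor sampling sites
    y = np.sin(X[:, 0]) * np.cos(X[:, 1]) + rng.normal(0, 0.1, 30)
    mean, var = gp_predict(X, y, np.array([[2.5, 2.5]]))
    print(mean, var)  # posterior mean and variance at the query point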

  20. A very fast implementation of 2D iterative reconstruction algorithms

    DEFF Research Database (Denmark)

    Toft, Peter Aundal; Jensen, Peter James

    1996-01-01

    ... that iterative reconstruction algorithms can be implemented and run almost as fast as direct reconstruction algorithms. The method has been implemented in a software package that is available for free, providing reconstruction algorithms using ART, EM, and the Least Squares Conjugate Gradient Method...

  1. Gossip algorithms in quantum networks

    Science.gov (United States)

    Siomau, Michael

    2017-01-01

    "Gossip algorithms" is a common term describing protocols for unreliable information dissemination in natural networks, which are not optimally designed for efficient communication between network entities. We consider the application of gossip algorithms to quantum networks and show that any quantum network can be updated to an optimal configuration with local operations and classical communication. This allows the quantum information dissemination to be sped up, in the best case exponentially. Irrespective of the initial configuration of the quantum network, the update requires at most a polynomial number of local operations and classical communication.

  2. Gossip algorithms in quantum networks

    Energy Technology Data Exchange (ETDEWEB)

    Siomau, Michael, E-mail: siomau@nld.ds.mpg.de [Physics Department, Jazan University, P.O. Box 114, 45142 Jazan (Saudi Arabia); Network Dynamics, Max Planck Institute for Dynamics and Self-Organization (MPIDS), 37077 Göttingen (Germany)

    2017-01-23

    "Gossip algorithms" is a common term describing protocols for unreliable information dissemination in natural networks, which are not optimally designed for efficient communication between network entities. We consider the application of gossip algorithms to quantum networks and show that any quantum network can be updated to an optimal configuration with local operations and classical communication. This allows the quantum information dissemination to be sped up, in the best case exponentially. Irrespective of the initial configuration of the quantum network, the update requires at most a polynomial number of local operations and classical communication. - Highlights: • We analyze the performance of gossip algorithms in quantum networks. • Local operations and classical communication (LOCC) can speed the performance up. • The speed-up is exponential in the best case; the number of LOCC is polynomial.

  3. Network reconstruction via density sampling

    CERN Document Server

    Squartini, Tiziano; Gabrielli, Andrea; Garlaschelli, Diego

    2016-01-01

    Reconstructing weighted networks from partial information is necessary in many important circumstances, e.g. for a correct estimation of systemic risk. It has been shown that, in order to achieve an accurate reconstruction, it is crucial to reliably replicate the empirical degree sequence, which is however unknown in many realistic situations. More recently, it has been found that the knowledge of the degree sequence can be replaced by the knowledge of the strength sequence, which is typically accessible, complemented by that of the total number of links, thus considerably relaxing the observational requirements. Here we further relax these requirements and devise a procedure valid when even the total number of links is unavailable. We assume that, apart from the heterogeneity induced by the degree sequence itself, the network is homogeneous, so that its link density can be estimated by sampling subsets of nodes with representative density. We show that the best way of sampling nodes is the random selecti...
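
    A minimal sketch of the sampling step itself: estimate the global link density from induced subgraphs on randomly chosen node subsets. The Erdős–Rényi test network is a synthetic stand-in:

    import numpy as np

    rng = np.random.default_rng(7)
    n, p_true = 500, 0.04
    A = (rng.random((n, n)) < p_true).astype(int)
    A = np.triu(A, 1)
    A = A + A.T  # undirected adjacency matrix, no self-loops

    def sampled_density(A, sample_size=40, n_samples=200):
        n = A.shape[0]
        pairs = sample_size * (sample_size - 1) / 2
        estimates = []
        for _ in range(n_samples):
            idx = rng.choice(n, size=sample_size, replace=False)
            links = A[np.ix_(idx, idx)].sum() / 2  # links inside the sample
            estimates.append(links / pairs)
        return float(np.mean(estimates))

    print(sampled_density(A))  # close to the true density of about 0.04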

  4. Greedy algorithms for diffuse optical tomography reconstruction

    Science.gov (United States)

    Dileep, B. P. V.; Das, Tapan; Dutta, Pranab K.

    2018-03-01

    Diffuse optical tomography (DOT) is a noninvasive imaging modality that reconstructs the optical parameters of a highly scattering medium. However, the inverse problem of DOT is ill-posed and highly nonlinear due to the zig-zag propagation of photons diffusing through the cross section of tissue. The conventional DOT imaging methods iteratively compute the solution of a forward diffusion equation solver, which makes the problem computationally expensive. Also, these methods fail when the geometry is complex. Recently, the theory of compressive sensing (CS) has received considerable attention because of its efficient use in biomedical imaging applications. The objective of this paper is to solve a given DOT inverse problem by using the compressive sensing framework; various greedy algorithms, such as orthogonal matching pursuit (OMP), compressive sampling matching pursuit (CoSaMP), stagewise orthogonal matching pursuit (StOMP), regularized orthogonal matching pursuit (ROMP) and simultaneous orthogonal matching pursuit (S-OMP), have been studied to reconstruct the change in the absorption parameter, i.e., Δα, from the boundary data. The greedy algorithms have also been validated experimentally on a paraffin wax rectangular phantom through a well designed experimental set up. We have also studied the conventional DOT methods, the least square method and truncated singular value decomposition (TSVD), for comparison. One of the main features of this work is the use of a smaller number of source-detector pairs, which can facilitate the use of DOT in routine screening applications. Performance metrics such as mean square error (MSE), normalized mean square error (NMSE), structural similarity index (SSIM), and peak signal to noise ratio (PSNR) have been used to evaluate the performance of the algorithms mentioned in this paper. Extensive simulation results confirm that CS based DOT reconstruction outperforms the conventional DOT imaging methods in terms of ...
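
    A minimal sketch of the first greedy method on the list, orthogonal matching pursuit (OMP), applied to a generic sparse problem y = Phi x; the random sensing matrix stands in for a real DOT forward model:

    import numpy as np

    def omp(Phi, y, sparsity):
        residual, support = y.copy(), []
        for _ in range(sparsity):
            j = int(np.argmax(np.abs(Phi.T @ residual)))  # best-matching column
            if j not in support:
                support.append(j)
            coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            residual = y - Phi[:, support] @ coef  # orthogonal re-fit
        x_hat = np.zeros(Phi.shape[1])
        x_hat[support] = coef
        return x_hat

    rng = np.random.default_rng(3)
    Phi = rng.normal(size=(40, 100)) / np.sqrt(40)
    x = np.zeros(100)
    x[[5, 37, 81]] = [1.0, -0.7, 0.4]
    print(np.nonzero(omp(Phi, Phi @ x, 3))[0])  # recovers indices 5, 37, 81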

  5. Advanced reconstruction algorithms for electron tomography: From comparison to combination

    Energy Technology Data Exchange (ETDEWEB)

    Goris, B. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Roelandts, T. [Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Batenburg, K.J. [Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, NL-1098XG Amsterdam (Netherlands); Heidari Mezerji, H. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Bals, S., E-mail: sara.bals@ua.ac.be [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium)

    2013-04-15

    In this work, the simultaneous iterative reconstruction technique (SIRT), the total variation minimization (TVM) reconstruction technique and the discrete algebraic reconstruction technique (DART) for electron tomography are compared and the advantages and disadvantages are discussed. Furthermore, we describe how the result of a three dimensional (3D) reconstruction based on TVM can provide objective information that is needed as the input for a DART reconstruction. This approach results in a tomographic reconstruction of which the segmentation is carried out in an objective manner. - Highlights: ► A comparative study between different reconstruction algorithms for tomography is performed. ► Reconstruction algorithms that uses prior knowledge about the specimen have a superior result. ► One reconstruction algorithm can provide the prior knowledge for a second algorithm.

  6. Filtered refocusing: a volumetric reconstruction algorithm for plenoptic-PIV

    Science.gov (United States)

    Fahringer, Timothy W.; Thurow, Brian S.

    2016-09-01

    A new algorithm for reconstruction of 3D particle fields from plenoptic image data is presented. The algorithm is based on the technique of computational refocusing with the addition of a post-reconstruction filter to remove the out-of-focus particles. This new algorithm is tested in terms of reconstruction quality on synthetic particle fields as well as a synthetically generated 3D Gaussian ring vortex. Preliminary results indicate that the new algorithm performs as well as the MART algorithm (used in previous work) in terms of the reconstructed particle position accuracy, but produces more elongated particles. The major advantage of the new algorithm is the dramatic reduction in the computational cost required to reconstruct a volume. It is shown that the new algorithm takes 1/9th the time to reconstruct the same volume as MART while using minimal resources. Experimental results are presented in the form of the wake behind a cylinder at a Reynolds number of 185.

  7. Fast reconstruction of compact context-specific metabolic network models.

    Directory of Open Access Journals (Sweden)

    Nikos Vlassis

    2014-01-01

    Systemic approaches to the study of a biological cell or tissue rely increasingly on the use of context-specific metabolic network models. The reconstruction of such a model from high-throughput data can routinely involve large numbers of tests under different conditions and extensive parameter tuning, which calls for fast algorithms. We present fastcore, a generic algorithm for reconstructing context-specific metabolic network models from global genome-wide metabolic network models such as Recon X. fastcore takes as input a core set of reactions that are known to be active in the context of interest (e.g., cell or tissue), and it searches for a flux-consistent subnetwork of the global network that contains all reactions from the core set and a minimal set of additional reactions. Our key observation is that a minimal consistent reconstruction can be defined via a set of sparse modes of the global network, and fastcore iteratively computes such a set via a series of linear programs. Experiments on liver data demonstrate speedups of several orders of magnitude, and significantly more compact reconstructions, over a rival method. Given its simplicity and its excellent performance, fastcore can form the backbone of many future metabolic network reconstruction algorithms.
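
    A minimal sketch of the flux-consistency notion underlying fastcore: a reaction is consistent if some steady-state flux vector (S v = 0, within bounds) can push flux through it. The brute-force reaction-by-reaction LP test below only illustrates the concept; fastcore itself reaches the same goal with far fewer linear programs. The toy stoichiometry is an assumption:

    import numpy as np
    from scipy.optimize import linprog

    def consistent_reactions(S, bounds, eps=1e-6):
        keep = []
        for i in range(S.shape[1]):
            for sign in (+1.0, -1.0):
                c = np.zeros(S.shape[1])
                c[i] = -sign  # minimize -sign*v_i, i.e. maximize sign*v_i
                res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]),
                              bounds=bounds, method="highs")
                if res.status == 0 and abs(res.x[i]) > eps:
                    keep.append(i)
                    break
        return keep

    # Toy network: -> A, A -> B, B -> , plus a blocked drain A -> .
    S = np.array([[1, -1, 0, -1],
                  [0, 1, -1, 0]])
    bounds = [(0, 10), (0, 10), (0, 10), (0, 0)]  # last reaction blocked
    print(consistent_reactions(S, bounds))  # -> [0, 1, 2]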

  8. Detail displaying difference of the digital holographic reconstructed image between the convolution algorithm and Fresnel algorithm.

    Science.gov (United States)

    Zhong, Liyun; Li, Hongyan; Tao, Tao; Zhang, Zhun; Lu, Xiaoxu

    2011-11-07

    To reach the limiting resolution of a digital holographic system and improve the displaying quality of the reconstructed image, the subdivision convolution algorithm and the subdivision Fresnel algorithm are presented, respectively. The obtained results show that the lateral size of the reconstructed image obtained by the two kinds of subdivision algorithms is the same in the central region of the reconstructed image plane; moreover, the size of the central region is proportional to the recording distance. Importantly, in the central region of the reconstructed image plane, the reconstruction can be performed effectively by the subdivision Fresnel algorithm instead of the subdivision convolution algorithm, and, based on these subdivision approaches, both the displaying quality and the resolution of the reconstructed image can be improved significantly. Furthermore, in the reconstruction of digital holograms with large numerical aperture, the computer memory consumed and the calculation time required by the subdivision Fresnel algorithm are significantly less than those of the subdivision convolution algorithm.

  9. Methods of graph network reconstruction in personalized medicine.

    Science.gov (United States)

    Danilov, A; Ivanov, Yu; Pryamonosov, R; Vassilevski, Yu

    2016-08-01

    The paper addresses methods for generation of individualized computational domains on the basis of medical imaging datasets. The computational domains will be used in one-dimensional (1D) and three-dimensional (3D)-1D coupled hemodynamic models. A 1D hemodynamic model employs a 1D network representation of a patient-specific vasculature with a large number of vessels. The 1D network is a graph with nodes in 3D space that carries additional geometric data such as the length and radius of vessels. A 3D hemodynamic model requires a detailed 3D reconstruction of local parts of the vascular network. We propose algorithms which extend the automated segmentation of vascular and tubular structures, generation of centerlines, 1D network reconstruction, correction, and local adaptation. We consider two modes of centerline representation: (i) skeletal segments or sets of connected voxels and (ii) curved paths with corresponding radii. Individualized reconstruction of 1D networks depends on the mode of centerline representation. The efficiency of the proposed algorithms is demonstrated on several examples of 1D network reconstruction. The networks can be used in modeling of blood flows as well as other physiological processes in tubular structures. Copyright © 2015 John Wiley & Sons, Ltd.

  10. Evolutionary optimization of network reconstruction from derivative-variable correlations

    Science.gov (United States)

    Leguia, Marc G.; Andrzejak, Ralph G.; Levnajić, Zoran

    2017-08-01

    Topologies of real-world complex networks are rarely accessible, but can often be reconstructed from experimentally obtained time series via suitable network reconstruction methods. Extending our earlier work on methods based on statistics of derivative-variable correlations, we here present a new method built on integrating an evolutionary optimization algorithm into the derivative-variable correlation method. Results obtained from our modification of the method in general outperform the original results, demonstrating the suitability of evolutionary optimization logic in network reconstruction problems. We show the method’s usefulness in realistic scenarios where the reconstruction precision can be limited by the nature of the time series. We also discuss important limitations coming from various dynamical regimes that time series can belong to.

  11. PARALLEL ALGORITHM FOR BAYESIAN NETWORK STRUCTURE LEARNING

    Directory of Open Access Journals (Sweden)

    S. A. Arustamov

    2013-03-01

    The article deals with the implementation of a scalable parallel algorithm for structure learning of a Bayesian network. A comparative analysis of the sequential and parallel algorithms is given.

  12. Application of Neural Networks for Energy Reconstruction

    CERN Document Server

    Damgov, Jordan

    2002-01-01

    The possibility to use neural networks for reconstruction of the energy deposited in the calorimetry system of the CMS detector is investigated. It is shown that using a feed-forward neural network, good linearity, a Gaussian energy distribution and good energy resolution can be achieved. Significant improvement of the energy resolution and linearity is reached in comparison with other weighting methods for energy reconstruction.

  13. Array antenna diagnostics with the 3D reconstruction algorithm

    DEFF Research Database (Denmark)

    Cappellin, Cecilia; Meincke, Peter; Pivnenko, Sergey

    2012-01-01

    The 3D reconstruction algorithm is applied to a slotted waveguide array measured at the DTU-ESA Spherical Near-Field Antenna Test Facility. One slot of the array is covered by conductive tape and an error is present in the array excitation. Results show the accuracy obtainable by the 3D reconstruction algorithm. Considerations on the measurement sampling, the obtainable spatial resolution, and the possibility of taking full advantage of the reconstruction geometry are provided.

  14. A Novel Iterative CT Reconstruction Approach Based on FBP Algorithm.

    Directory of Open Access Journals (Sweden)

    Hongli Shi

    The Filtered Back-Projection (FBP) algorithm and its modified versions are the most important techniques for CT (computerized tomography) reconstruction; however, they may produce aliasing degradation in the reconstructed images due to projection discretization. The general iterative reconstruction (IR) algorithms suffer from their heavy calculation burden and other drawbacks. In this paper, an iterative FBP approach is proposed to reduce the aliasing degradation. In the approach, the image reconstructed by the FBP algorithm is treated as the intermediate image and projected along the original projection directions to produce the reprojection data. The difference between the original and reprojection data is filtered by a special digital filter, and then is reconstructed by FBP to produce a correction term. The correction term is added to the intermediate image to update it. This procedure can be performed iteratively to improve the reconstruction performance gradually until a certain stopping criterion is satisfied. Simulations and tests on real data show the proposed approach is better than the FBP algorithm or some IR algorithms in terms of some general image criteria. The calculation burden is several times that of FBP, which is much less than that of general IR algorithms and acceptable in most situations. Therefore, the proposed algorithm has potential applications in practical CT systems.
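
    A minimal sketch of the reconstruct-reproject-correct loop the paper describes, built on scikit-image's radon/iradon; note that the paper's special correction filter is replaced here by iradon's standard ramp filtering, so this only shows the loop structure:

    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon, rescale

    image = rescale(shepp_logan_phantom(), 0.25)  # small, fast test image
    theta = np.linspace(0.0, 180.0, 60, endpoint=False)
    sino = radon(image, theta=theta, circle=True)  # "measured" projections

    recon = iradon(sino, theta=theta, circle=True)  # initial FBP image
    for _ in range(5):  # iterative refinement of the intermediate image
        residual = sino - radon(recon, theta=theta, circle=True)
        recon = recon + iradon(residual, theta=theta, circle=True)

    print(float(np.sqrt(np.mean((recon - image) ** 2))))  # RMSE vs. phantom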

  15. Optimization of Cone Beam CT Reconstruction Algorithm Based on CUDA

    National Research Council Canada - National Science Library

    Wang Li-Fang; Zhang Shu-Hai

    2013-01-01

    ... This paper optimizes a cone beam CT reconstruction algorithm with CUDA, improving the speed of weighted back-projection and filtering and shortening the data access time by using the texture memory...

  16. Application aspects of advanced antenna diagnostics with the 3D reconstruction algorithm

    DEFF Research Database (Denmark)

    Cappellin, Cecilia; Pivnenko, Sergey

    2015-01-01

    This paper focuses on two important applications of the 3D reconstruction algorithm of the commercial software DIATOOL for antenna diagnostics. The first one is the accurate and detailed identification of array malfunctioning, thanks to the available enhanced spatial resolution of the reconstructed fields and currents. The second one is the filtering of the scattering from support structures and feed network leakage. Representative experimental results are presented and guidelines on the recommended measurement parameters for obtaining the best diagnostics results are provided.

  17. Performance of the ATLAS primary vertex reconstruction algorithms

    CERN Document Server

    Zhang, Matt

    2017-01-01

    The reconstruction of primary vertices in the busy, high pile-up environment of the LHC is a challenging task. The challenges and novel methods developed by the ATLAS experiment to reconstruct vertices in such environments will be presented. Advances in vertex seeding, including methods taken from medical imaging that allow for the reconstruction of very nearby vertices, will be highlighted. The performance of the current vertexing algorithms using early Run-2 data will be presented and compared to results from simulation.

  18. Convergence of iterative image reconstruction algorithms for Digital Breast Tomosynthesis

    DEFF Research Database (Denmark)

    Sidky, Emil; Jørgensen, Jakob Heide; Pan, Xiaochuan

    2012-01-01

    Most iterative image reconstruction algorithms are based on some form of optimization, such as minimization of a data-fidelity term plus an image regularizing penalty term. While achieving the solution of these optimization problems may not directly be clinically relevant, accurate optimization solutions can aid in iterative image reconstruction algorithm design. This issue is particularly acute for iterative image reconstruction in Digital Breast Tomosynthesis (DBT), where the corresponding data model is particularly poorly conditioned. The impact of this poor conditioning is that iterative algorithms applied to this system can be slow to converge. Recent developments in first-order algorithms are now beginning to allow for accurate solutions to optimization problems of interest to tomographic imaging in general. In particular, we investigate an algorithm developed by Chambolle and Pock (2011 J...

  19. Novel quantum inspired binary neural network algorithm

    Indian Academy of Sciences (India)

    In this paper, a quantum based binary neural network algorithm is proposed, named the novel quantum binary neural network algorithm (NQ-BNN). It forms a neural network structure by deciding weights and a separability parameter in a quantum based manner. The quantum computing concept represents solutions probabilistically ...

  20. A new algorithm for 3D reconstruction from support functions

    DEFF Research Database (Denmark)

    Gardner, Richard; Kiderlen, Markus

    2009-01-01

    We introduce a new algorithm for reconstructing an unknown shape from a finite number of noisy measurements of its support function. The algorithm, based on a least squares procedure, is very easy to program in standard software such as Matlab and allows, for the first time, good 3D reconstructions...

  1. Reconstruction of stochastic temporal networks through diffusive arrival times

    Science.gov (United States)

    Li, Xun; Li, Xiang

    2017-01-01

    Temporal networks have opened a new dimension in defining and quantification of complex interacting systems. Our ability to identify and reproduce time-resolved interaction patterns is, however, limited by the restricted access to empirical individual-level data. Here we propose an inverse modelling method based on first-arrival observations of the diffusion process taking place on temporal networks. We describe an efficient coordinate-ascent implementation for inferring stochastic temporal networks that builds in particular but not exclusively on the null model assumption of mutually independent interaction sequences at the dyadic level. The results of benchmark tests applied on both synthesized and empirical network data sets confirm the validity of our algorithm, showing the feasibility of statistically accurate inference of temporal networks only from moderate-sized samples of diffusion cascades. Our approach provides an effective and flexible scheme for the temporally augmented inverse problems of network reconstruction and has potential in a broad variety of applications. PMID:28604687

  2. A new iterative algorithm to reconstruct the refractive index.

    Science.gov (United States)

    Liu, Y J; Zhu, P P; Chen, B; Wang, J Y; Yuan, Q X; Huang, W X; Shu, H; Li, E R; Liu, X S; Zhang, K; Ming, H; Wu, Z Y

    2007-06-21

    The latest developments in x-ray imaging are associated with techniques based on phase contrast. However, the image reconstruction procedures demand significant improvements of the traditional methods, and/or new algorithms have to be introduced to take advantage of the high contrast and sensitivity of the new experimental techniques. In this letter, an improved iterative reconstruction algorithm based on the maximum likelihood expectation maximization technique is presented and discussed in order to reconstruct the distribution of the refractive index from data collected by an analyzer-based imaging setup. The technique considered probes the partial derivative of the refractive index with respect to an axis lying in the meridional plane and perpendicular to the propagation direction. Computer simulations confirm the reliability of the proposed algorithm. In addition, the comparison between an analytical reconstruction algorithm and the iterative method is also discussed, together with the convergence characteristics of the latter algorithm. Finally, we show how the proposed algorithm may be applied to reconstruct the distribution of the refractive index of an epoxy cylinder containing small air bubbles of about 300 μm in diameter.

  3. A new jet reconstruction algorithm for lepton colliders

    CERN Document Server

    Boronat, Marça; Vos, Marcel

    2014-01-01

    We propose a new sequential jet reconstruction algorithm for future lepton colliders at the energy frontier. The Valencia algorithm combines the natural distance criterion for lepton colliders with the greater robustness against backgrounds of algorithms adapted to hadron colliders. Results from a detailed Monte Carlo simulation of $t\bar{t}$ and $ZZ$ production at future linear $e^+e^-$ colliders (ILC and CLIC) with a realistic level of background overlaid show that it achieves better performance in the presence of background.

  4. Accelerating Popular Tomographic Reconstruction Algorithms on Commodity PC Graphics Hardware

    Science.gov (United States)

    Xu, Fang; Mueller, K.

    2005-06-01

    The task of reconstructing an object from its projections via tomographic methods is a time-consuming process due to the vast complexity of the data. For this reason, manufacturers of equipment for medical computed tomography (CT) rely mostly on application-specific integrated circuits (ASICs) to obtain the fast reconstruction times required in clinical settings. Although modern CPUs have gained sufficient power in recent years to be competitive for two-dimensional (2D) reconstruction, this is not the case for three-dimensional (3D) reconstructions, especially not when iterative algorithms must be applied. The recent evolution of commodity PC computer graphics boards (GPUs) has the potential to change this picture in a very dramatic way. In this paper we will show how the new floating point GPUs can be exploited to perform both analytical and iterative reconstruction from X-ray and functional imaging data. For this purpose, we decompose three popular three-dimensional (3D) reconstruction algorithms (Feldkamp filtered backprojection, the simultaneous algebraic reconstruction technique, and expectation maximization) into a common set of base modules, which all can be executed on the GPU and their output linked internally. Visualization of the reconstructed object is easily achieved since the object already resides in the graphics hardware, allowing one to run a visualization module at any time to view the reconstruction results. Our implementation allows speedups of over an order of magnitude with respect to CPU implementations, at comparable image quality.

  5. Scalp reconstruction: an algorithmic approach and systematic review.

    Science.gov (United States)

    Desai, Shaun C; Sand, Jordan P; Sharon, Jeffrey D; Branham, Gregory; Nussenbaum, Brian

    2015-01-01

    Reconstruction of the scalp after acquired defects remains a common challenge for the reconstructive surgeon, especially in a patient with a history of radiation to the area. The objective is to review the current literature and describe a novel algorithm to help guide the reconstructive surgeon in determining the optimal reconstruction from a cosmetic and functional standpoint. Pertinent surgical anatomy, considerations for patient and technique selection, reconstructive goals, as well as the reconstructive ladder, are also discussed. A PubMed and Medline search was performed of the entire English-language literature with respect to scalp reconstruction. Priority of review was given to those studies with higher-quality levels of evidence. Size, location, radiation history, and potential for hairline distortion are important factors in determining the ideal reconstruction. The tighter and looser areas of the scalp play a major role in the potential for primary or local flap closure. Patients with medium to large defects and a history of radiation will likely benefit from free tissue transfer. Ideal reconstruction of scalp defects relies on a comprehensive understanding of scalp anatomy, a full consideration of the armamentarium of surgical techniques, and a detailed appraisal of patient factors and expectations. The simplest reconstruction should be used whenever possible to provide the most functional and aesthetic scalp reconstruction, with the least amount of complexity.

  6. Scalable Virtual Network Mapping Algorithm for Internet-Scale Networks

    Science.gov (United States)

    Yang, Qiang; Wu, Chunming; Zhang, Min

    The proper allocation of network resources from a common physical substrate to a set of virtual networks (VNs) is one of the key technical challenges of network virtualization. While a variety of state-of-the-art algorithms have been proposed in an attempt to address this issue from different facets, the challenge still remains in the context of large-scale networks, as the existing solutions mainly perform in a centralized manner which requires maintaining the overall and up-to-date information of the underlying substrate network. This implies restricted scalability and computational efficiency when the network scale becomes large. This paper tackles the virtual network mapping problem and proposes a novel hierarchical algorithm in conjunction with a substrate network decomposition approach. By appropriately transforming the underlying substrate network into a collection of sub-networks, the hierarchical virtual network mapping algorithm can be carried out through a global virtual network mapping algorithm (GVNMA) and a local virtual network mapping algorithm (LVNMA) operated in the network central server and within individual sub-networks respectively, with their cooperation and coordination as necessary. The proposed algorithm is assessed against the centralized approaches through a set of numerical simulation experiments for a range of network scenarios. The results show that the proposed hierarchical approach can be about 5-20 times faster for VN mapping tasks than conventional centralized approaches, with acceptable communication overhead between GVNMA and LVNMA for all examined networks, whilst performing almost as well as the centralized solutions.

  7. New vertex reconstruction algorithms for CMS

    CERN Document Server

    Frühwirth, R; Prokofiev, Kirill; Speer, T.; Vanlaer, P.; Chabanat, E.; Estre, N.

    2003-01-01

    The reconstruction of interaction vertices can be decomposed into a pattern recognition problem ("vertex finding") and a statistical problem ("vertex fitting"). We briefly review classical methods. We introduce novel approaches and motivate them in the framework of high-luminosity experiments like those at the LHC. We then show comparisons with the classical methods in relevant physics channels.

  8. Reconstruction Algorithms in Undersampled AFM Imaging

    DEFF Research Database (Denmark)

    Arildsen, Thomas; Oxvig, Christian Schou; Pedersen, Patrick Steffen

    2016-01-01

    This paper provides a study of spatial undersampling in atomic force microscopy (AFM) imaging followed by different image reconstruction techniques based on sparse approximation as well as interpolation. The main reason for using undersampling is that it reduces the path length and thereby the s...

  9. Objective performance assessment of five computed tomography iterative reconstruction algorithms.

    Science.gov (United States)

    Omotayo, Azeez; Elbakri, Idris

    2016-11-22

    Iterative algorithms are gaining clinical acceptance in CT. We performed an objective phantom-based image quality evaluation of five commercial iterative reconstruction algorithms available on four different multi-detector CT (MDCT) scanners at different dose levels, as well as the conventional filtered back-projection (FBP) reconstruction. Using the Catphan500 phantom, we evaluated image noise, contrast-to-noise ratio (CNR), modulation transfer function (MTF) and noise-power spectrum (NPS). The algorithms were evaluated over a CTDIvol range of 0.75-18.7 mGy on four major MDCT scanners: GE DiscoveryCT750HD (algorithms: ASIR™ and VEO™); Siemens Somatom Definition AS+ (algorithm: SAFIRE™); Toshiba Aquilion64 (algorithm: AIDR3D™); and Philips Ingenuity iCT256 (algorithm: iDose4™). Images were reconstructed using FBP and the respective iterative algorithms on the four scanners. Use of iterative algorithms decreased image noise and increased CNR, relative to FBP. In the dose range of 1.3-1.5 mGy, noise reduction using iterative algorithms was in the range of 11%-51% on the GE DiscoveryCT750HD, 10%-52% on the Siemens Somatom Definition AS+, 49%-62% on the Toshiba Aquilion64, and 13%-44% on the Philips Ingenuity iCT256. The corresponding CNR increase was in the range of 11%-105% on GE, 11%-106% on Siemens, 85%-145% on Toshiba and 13%-77% on Philips, respectively. Most algorithms did not affect the MTF, except for VEO™, which produced an increase in the limiting resolution of up to 30%. A shift in the peak of the NPS curve towards lower frequencies and a decrease in NPS amplitude were obtained with all iterative algorithms. VEO™ required long reconstruction times, while all other algorithms produced reconstructions in real time. Compared to FBP, iterative algorithms reduced image noise and increased CNR. The iterative algorithms available on different scanners achieved different levels of noise reduction and CNR increase, while spatial resolution improvements were obtained only with ...
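
    A minimal sketch of how the two simplest metrics above (noise and CNR) are read off a phantom image; the synthetic image and ROI coordinates are assumptions for illustration:

    import numpy as np

    rng = np.random.default_rng(42)
    image = rng.normal(40.0, 5.0, (256, 256))  # uniform background region
    image[100:140, 100:140] += 30.0  # a higher-attenuation contrast insert

    insert = image[105:135, 105:135]  # ROI inside the insert
    background = image[20:80, 20:80]  # ROI in the uniform background

    noise = background.std()  # image noise = std. dev. in a uniform ROI
    cnr = abs(insert.mean() - background.mean()) / noise
    print(f"noise = {noise:.1f}, CNR = {cnr:.1f}")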

  10. Evolutionary algorithms for mobile ad hoc networks

    CERN Document Server

    Dorronsoro, Bernabé; Danoy, Grégoire; Pigné, Yoann; Bouvry, Pascal

    2014-01-01

    Describes how evolutionary algorithms (EAs) can be used to identify, model, and minimize day-to-day problems that arise for researchers in optimization and mobile networking. Mobile ad hoc networks (MANETs), vehicular networks (VANETs), sensor networks (SNs), and hybrid networks—each of these require a designer’s keen sense and knowledge of evolutionary algorithms in order to help with the common issues that plague professionals involved in optimization and mobile networking. This book introduces readers to both mobile ad hoc networks and evolutionary algorithms, presenting basic concepts as well as detailed descriptions of each. It demonstrates how metaheuristics and evolutionary algorithms (EAs) can be used to help provide low-cost operations in the optimization process—allowing designers to put some “intelligence” or sophistication into the design. It also offers efficient and accurate information on dissemination algorithms, topology management, and mobility models to address challenges in the ...

  11. Algorithms for radio networks with dynamic topology

    Science.gov (United States)

    Shacham, Nachum; Ogier, Richard; Rutenburg, Vladislav V.; Garcia-Luna-Aceves, Jose

    1991-08-01

    The objective of this project was the development of advanced algorithms and protocols that efficiently use network resources to provide optimal or nearly optimal performance in future communication networks with highly dynamic topologies and subject to frequent link failures. As reflected by this report, we have achieved our objective and have significantly advanced the state-of-the-art in this area. The research topics of the papers summarized include the following: efficient distributed algorithms for computing shortest pairs of disjoint paths; minimum-expected-delay alternate routing algorithms for highly dynamic unreliable networks; algorithms for loop-free routing; multipoint communication by hierarchically encoded data; efficient algorithms for extracting the maximum information from event-driven topology updates; methods for the neural network solution of link scheduling and other difficult problems arising in communication networks; and methods for robust routing in networks subject to sophisticated attacks.

  12. Reconstructing complex networks without time series

    Science.gov (United States)

    Ma, Chuang; Zhang, Hai-Feng; Lai, Ying-Cheng

    2017-08-01

    In the real world there are situations where the network dynamics are transient (e.g., various spreading processes) and the final nodal states represent the available data. Can the network topology be reconstructed based on data that are not time series? Assuming that an ensemble of the final nodal states resulting from statistically independent initial triggers (signals) of the spreading dynamics is available, we develop a maximum likelihood estimation-based framework to accurately infer the interaction topology. For dynamical processes that result in a binary final state, the framework enables network reconstruction based solely on the final nodal states. Additional information, such as the first arrival time of each signal at each node, can improve the reconstruction accuracy. For processes with a uniform final state, the first arrival times can be exploited to reconstruct the network. We derive a mathematical theory for our framework and validate its performance and robustness using various combinations of spreading dynamics and real-world network topologies.
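
    A toy version of the binary-final-state setting makes the idea concrete: across many independent cascades, node pairs whose final states co-occur more often than chance are edge candidates. The conditional-probability score below is a simplified stand-in for the paper's maximum-likelihood framework and assumes a (cascades × nodes) 0/1 matrix of final states.

        import numpy as np

        def edge_scores(final_states):
            # final_states: (num_cascades, num_nodes) 0/1 matrix of final states
            S = np.asarray(final_states, dtype=float)
            n_active = np.maximum(S.sum(axis=0), 1.0)       # times node i ended active
            p_j_given_i = (S.T @ S) / n_active[:, None]     # P(j active | i active)
            score = p_j_given_i - S.mean(axis=0)[None, :]   # excess over base rate
            np.fill_diagonal(score, 0.0)
            return score                                    # large score => likely edge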

  13. An imaging algorithm for vertex reconstruction for ATLAS Run-2

    CERN Document Server

    The ATLAS collaboration

    2015-01-01

    The reconstruction of vertices corresponding to proton--proton collisions in ATLAS is an essential element of event reconstruction used in many performance studies and physics analyses. During Run-1 of the LHC, ATLAS has employed an iterative approach to vertex finding. In order to improve the flexibility of the algorithm and ensure continued performance for very high numbers of simultaneous collisions in future LHC data taking, a new approach to seeding vertex finding is being developed inspired by image reconstruction techniques. This note provides a brief outline of how reconstructed tracks are used to create an image of likely vertex collisions in an event and presents some preliminary results of the performance of the algorithm in simulation approximating early Run-2 conditions.

  14. Efficient iterative image reconstruction algorithm for dedicated breast CT

    Science.gov (United States)

    Antropova, Natalia; Sanchez, Adrian; Reiser, Ingrid S.; Sidky, Emil Y.; Boone, John; Pan, Xiaochuan

    2016-03-01

    Dedicated breast computed tomography (bCT) is currently being studied as a potential screening method for breast cancer. The X-ray exposure is set low to achieve an average glandular dose comparable to that of mammography, yielding projection data that contain high levels of noise. Iterative image reconstruction (IIR) algorithms may be well-suited for the system since they can potentially reduce the effects of noise in the reconstructed images. However, IIR outcomes can be difficult to control since the algorithm parameters do not directly correspond to the image properties. Also, IIR algorithms are computationally demanding and have optimal parameter settings that depend on the size and shape of the breast and the positioning of the patient. In this work, we design an efficient IIR algorithm with meaningful parameter specifications that can be used on a large, diverse sample of bCT cases. The flexibility and efficiency of this method come from having the final image produced by a linear combination of two separately reconstructed images: one containing gray level information and the other with enhanced high frequency components. Both images result from a few iterations of separate IIR algorithms. The proposed algorithm depends on two parameters, both of which have a well-defined impact on image quality. The algorithm is applied to numerous bCT cases from a dedicated bCT prototype system developed at the University of California, Davis.

  15. Unmatched Projector/Backprojector Pairs in an Iterative Reconstruction Algorithm

    OpenAIRE

    Zeng, Gengsheng L.; Gullberg, Grant T.

    2000-01-01

    Computational burden is a major concern when an iterative algorithm is used to reconstruct a three-dimensional (3-D) image with attenuation, detector response, and scatter corrections. Most of the computation time is spent executing the projector and backprojector of an iterative algorithm. Usually, the projector and the backprojector are transposed operators of each other. The projector should model the imaging geometry and physics as accurately as possible. Some researchers have used backpr...

  16. Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction.

    Science.gov (United States)

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-11-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV-norm and the constraint involved in the problem. This characterization of the solution via proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce into the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove convergence of the preconditioned alternating projection algorithm. In numerical experiments, the performance of our algorithm, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images, and image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality.

  17. Concluding Report: Quantitative Tomography Simulations and Reconstruction Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Aufderheide, M B; Martz, H E; Slone, D M; Jackson, J A; Schach von Wittenau, A E; Goodman, D M; Logan, C M; Hall, J M

    2002-02-01

    In this report we describe the original goals and final achievements of this Laboratory Directed Research and Development project. The Quantitative Tomography Simulations and Reconstruction Algorithms project (99-ERD-015) was funded as a multi-directorate, three-year effort to advance the state of the art in radiographic simulation and tomographic reconstruction by improving simulation and including this simulation in the tomographic reconstruction process. The goals were to improve the accuracy of radiographic simulation, and to couple advanced radiographic simulation tools with a robust, many-variable optimization algorithm. In this project, we were able to demonstrate accuracy in X-ray simulation at the 2% level, which is an improvement of roughly a factor of 5, and we successfully coupled our simulation tools with the CCG (Constrained Conjugate Gradient) optimization algorithm, allowing reconstructions that include spectral effects and blurring. Another result of the project was the assembly of a low-scatter X-ray imaging facility for use in nondestructive evaluation applications. We conclude with a discussion of future work.

  18. A CUDA-based reverse gridding algorithm for MR reconstruction.

    Science.gov (United States)

    Yang, Jingzhu; Feng, Chaolu; Zhao, Dazhe

    2013-02-01

    MR raw data collected using non-Cartesian methods can be transformed onto Cartesian grids by the traditional gridding algorithm (GA) and reconstructed by Fourier transform. However, its runtime complexity is O(K×N²), where the resolution of the raw data is N×N and the size of the convolution window (CW) is K, and it involves a large amount of matrix calculation, including modulus, addition, multiplication, and convolution. Therefore, a Compute Unified Device Architecture (CUDA)-based algorithm is proposed to improve the reconstruction efficiency of PROPELLER (a globally recognized non-Cartesian sampling method). Experiments show a write-write conflict among multiple CUDA threads, which induces inconsistent results when multiple k-space data are synchronously convolved onto the same grid. To overcome this problem, a reverse gridding algorithm (RGA) was developed. Different from the method of generating a grid window for each trajectory as in the traditional GA, RGA calculates a trajectory window for each grid; this is what "reverse" means. For each k-space point in the CW, its contribution is accumulated to this grid. Although this algorithm can easily be extended to reconstruct other non-Cartesian sampled raw data, we only implement it based on PROPELLER. Experiments show that this CUDA-based RGA has successfully solved the write-write conflict and that its reconstruction speed is 7.5 times higher than that of the traditional GA. Copyright © 2013 Elsevier Inc. All rights reserved.
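
    The "reverse" (grid-centric) idea can be sketched in a few lines: iterate over output cells and gather nearby samples, so no two writers ever touch the same cell. This sequential NumPy sketch uses a toy triangular kernel and assumes trajectory coordinates already expressed in grid units; a real implementation would use a Kaiser-Bessel kernel and one CUDA thread per grid cell.

        import numpy as np

        def reverse_gridding(kx, ky, data, N, K=4.0):
            # Grid-centric gather: one output cell at a time, so there is
            # never more than one writer per cell (race-free by construction).
            grid = np.zeros((N, N), dtype=complex)
            half = K / 2.0
            for gy in range(N):
                for gx in range(N):
                    d = np.hypot(kx - gx, ky - gy)   # distances to this cell
                    near = d < half                  # samples inside the window
                    if near.any():
                        w = 1.0 - d[near] / half     # toy triangular kernel
                        grid[gy, gx] = np.sum(w * data[near])
            return grid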

  19. Measuring the performance of super-resolution reconstruction algorithms

    NARCIS (Netherlands)

    Dijk, J.; Schutte, K.; Eekeren, A.W.M. van; Bijl, P.

    2012-01-01

    For many military operations situational awareness is of great importance. This situational awareness and related tasks such as Target Acquisition can be acquired using cameras, of which the resolution is an important characteristic. Super resolution reconstruction algorithms can be used to improve

  20. Algorithms and networking for computer games

    CERN Document Server

    Smed, Jouni

    2006-01-01

    Algorithms and Networking for Computer Games is an essential guide to solving the algorithmic and networking problems of modern commercial computer games, written from the perspective of a computer scientist. Combining algorithmic knowledge and game-related problems, the authors discuss all the common difficulties encountered in game programming. The first part of the book tackles algorithmic problems by presenting how they can be solved practically. As well as "classical" topics such as random numbers, tournaments and game trees, the authors focus on how to find a path in, create the terrai

  1. Limited angle C-arm tomosynthesis reconstruction algorithms

    Science.gov (United States)

    Malalla, Nuhad A. Y.; Xu, Shiyu; Chen, Ying

    2015-03-01

    In this paper, C-arm tomosynthesis with a digital detector was investigated as a novel three-dimensional (3D) imaging technique. Digital tomosynthesis is an imaging technique that provides 3D information of an object by reconstructing slices passing through it, based on a series of angular projection views of the object. C-arm tomosynthesis provides two-dimensional (2D) X-ray projection images with rotation (±20° angular range) of both the X-ray source and the detector. In this paper, four representative reconstruction algorithms, including point-by-point back projection (BP), filtered back projection (FBP), simultaneous algebraic reconstruction technique (SART) and maximum likelihood expectation maximization (MLEM), were investigated. A dataset of 25 projection views of a 3D spherical object located at the center of the C-arm imaging space was simulated from 25 angular locations over a total view angle of 40 degrees. With the reconstructed images, a 3D mesh plot and a 2D line profile of normalized pixel intensities on the in-focus reconstruction plane crossing the center of the object were studied for each reconstruction algorithm. Results demonstrated the capability to generate 3D information from limited-angle C-arm tomosynthesis. Since C-arm tomosynthesis is relatively compact, portable and can avoid moving patients, it has been investigated for different clinical applications ranging from tumor surgery to interventional radiology, and it is very important to evaluate it for valuable applications.
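
    Of the four algorithms compared, plain back projection is the simplest to sketch. The parallel-beam approximation below smears each 1-D projection across the image and rotates it into place; it is a generic illustration, not the paper's actual C-arm geometry, and FBP would additionally filter each projection first.

        import numpy as np
        from scipy.ndimage import rotate

        def backproject(sinogram, angles_deg, N):
            # Unfiltered back projection: smear each projection uniformly
            # across the image, rotate into place, and accumulate.
            recon = np.zeros((N, N))
            for proj, theta in zip(sinogram, angles_deg):
                smear = np.tile(proj, (N, 1))          # constant along each ray
                recon += rotate(smear, theta, reshape=False, order=1)
            return recon / len(angles_deg)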

  2. Datasets for radiation network algorithm development and testing

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Nageswara S [ORNL; Sen, Satyabrata [ORNL; Berry, M. L.. [New Jersey Institute of Technology; Wu, Qishi [University of Memphis; Grieme, M. [New Jersey Institute of Technology; Brooks, Richard R [ORNL; Cordone, G. [Clemson University

    2016-01-01

    The Domestic Nuclear Detection Office's (DNDO) Intelligence Radiation Sensors Systems (IRSS) program supported the development of networks of commercial-off-the-shelf (COTS) radiation counters for detecting, localizing, and identifying low-level radiation sources. Under this program, a series of indoor and outdoor tests were conducted with multiple source strengths and types, different background profiles, and various types of source and detector movements. Following the tests, network algorithms were replayed in various reconstructed scenarios using sub-networks. These measurements and algorithm traces together provide a rich collection of highly valuable datasets for testing current and next-generation radiation network algorithms, including those (to be) developed by broader R&D communities such as distributed detection, information fusion, and sensor networks. From this multi-terabyte IRSS database, we distilled and packaged the first batch of canonical datasets for public release. They include measurements from ten indoor and two outdoor tests, which represent increasingly challenging baseline scenarios for robustly testing radiation network algorithms.

  3. Filtered gradient reconstruction algorithm for compressive spectral imaging

    Science.gov (United States)

    Mejia, Yuri; Arguello, Henry

    2017-04-01

    Compressive sensing matrices are traditionally based on random Gaussian and Bernoulli entries. Nevertheless, they are subject to physical constraints, and their structure usually follows a dense matrix distribution, as in the case of the matrix related to compressive spectral imaging (CSI). The CSI matrix represents the integration of coded and shifted versions of the spectral bands. A spectral image can be recovered from CSI measurements by using iterative algorithms for linear inverse problems that minimize an objective function including a quadratic error term combined with a sparsity regularization term. However, current algorithms are slow because they do not exploit the structure and sparse characteristics of the CSI matrices. A gradient-based CSI reconstruction algorithm is proposed, which introduces a filtering step in each iteration of a conventional CSI reconstruction algorithm and yields improved image quality. Motivated by the structure of the CSI matrix Φ, this algorithm modifies the iterative solution such that it is forced to converge to a filtered version of the residual Φᵀy, where y is the compressive measurement vector. We show that the filter-based algorithm converges to better quality results than the unfiltered version. Simulation results highlight the relative performance gain over existing iterative algorithms.
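
    The abstract's core idea, a gradient/thresholding iteration with an added filtering step, can be sketched as a modified ISTA loop. The uniform smoothing filter below is a placeholder assumption for the paper's CSI-specific filter, and the step size and threshold are arbitrary.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def filtered_ista(Phi, y, shape, lam=0.01, step=1.0, iters=100):
            # ISTA with an extra smoothing ("filtering") step each iteration.
            x = np.zeros(Phi.shape[1])
            for _ in range(iters):
                x = x - step * (Phi.T @ (Phi @ x - y))               # gradient step
                x = uniform_filter(x.reshape(shape), size=3).ravel() # filtering step
                x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # shrinkage
            return x.reshape(shape)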

  4. Genetic Algorithm Optimized Neural Networks Ensemble as ...

    African Journals Online (AJOL)

    NJD

    Genetic Algorithm Optimized Neural Networks Ensemble as. Calibration Model for Simultaneous Spectrophotometric. Estimation of Atenolol and Losartan Potassium in Tablets. Dondeti Satyanarayana*, Kamarajan Kannan and Rajappan Manavalan. Department of Pharmacy, Annamalai University, Annamalainagar, Tamil ...

  5. Flow enforcement algorithms for ATM networks

    DEFF Research Database (Denmark)

    Dittmann, Lars; Jacobsen, Søren B.; Moth, Klaus

    1991-01-01

    Four measurement algorithms for flow enforcement in asynchronous transfer mode (ATM) networks are presented: the leaky bucket, the rectangular sliding window, the triangular sliding window, and the exponentially weighted moving average. A comparison, based partly on teletraffic theory and partly on signal processing theory, is carried out. It is seen that the time constant involved increases with the increasing burstiness of the connection. It is suggested that the RMS measurement bandwidth be used to dimension linear algorithms for equal flow enforcement characteristics. Implementations are proposed on the block diagram level, and dimensioning examples are carried out when flow enforcing a renewal-type connection using the four algorithms. The corresponding hardware demands are estimated and compared.
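
    The leaky bucket, the first of the four algorithms, is simple enough to sketch directly. The drain rate, bucket depth, and per-cell increment below are illustrative parameters, not values from the paper.

        def leaky_bucket(arrivals, rate, depth):
            # Each cell adds one unit to the bucket, which drains at a fixed
            # rate; a cell that would overflow the bucket is non-conforming.
            level, last_t, verdicts = 0.0, 0.0, []
            for t in arrivals:                      # arrival times, sorted
                level = max(0.0, level - (t - last_t) * rate)  # drain
                last_t = t
                if level + 1 <= depth:
                    level += 1                      # conforming cell
                    verdicts.append(True)
                else:
                    verdicts.append(False)          # policed / dropped
            return verdicts

        print(leaky_bucket([0.0, 0.1, 0.2, 0.3, 2.0], rate=1.0, depth=2))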

  6. Principal component analysis networks and algorithms

    CERN Document Server

    Kong, Xiangyu; Duan, Zhansheng

    2017-01-01

    This book not only provides a comprehensive introduction to neural-based PCA methods in control science, but also presents many novel PCA algorithms and their extensions and generalizations, e.g., dual purpose, coupled PCA, GED, neural based SVD algorithms, etc. It also discusses in detail various methods for analyzing the convergence, stability, and self-stabilizing properties of algorithms, and introduces the deterministic discrete-time systems method to analyze the convergence of PCA/MCA algorithms. Readers should be familiar with numerical analysis and the fundamentals of statistics, such as the basics of least squares and stochastic algorithms. Although it focuses on neural networks, the book only presents their learning law, which is simply an iterative algorithm. Therefore, no a priori knowledge of neural networks is required. This book will be of interest and serve as a reference source to researchers and students in applied mathematics, statistics, engineering, and other related fields.

  7. A simple algorithm for analyzing uncertainty of accident reconstruction results.

    Science.gov (United States)

    Zou, Tiefang; Hu, Lin; Li, Pingfan; Wu, Hequan

    2015-12-01

    In order to analyze the uncertainty in accident reconstruction, the uncertainty analysis problem is turned into an extreme value problem, based on extreme value theory and convex model theory. To calculate the range of the dependent variable, the extreme values in the interior of the definition domain and on its boundary are calculated independently, and the upper and lower bounds of the dependent variable are then given by these extreme values. Based on this idea, and through analysis of five numerical cases, a simple algorithm for calculating the range of an accident reconstruction result is given; appropriate results are obtained through the proposed algorithm in these cases. Finally, a real-world vehicle-motorcycle accident is presented; the range of the reconstructed velocity of the vehicle was calculated by employing Pc-Crash, the response surface methodology, and the newly proposed algorithm, yielding a range of [66.1-67.3] km/h. This research provides another choice for uncertainty analysis in accident reconstruction. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
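
    The spirit of the method, bounding a reconstruction result by extreme values of the model over an uncertainty box, can be illustrated by brute force: evaluate the model on a grid covering the interior and boundary of the parameter domain and take the min and max. The skid-to-speed model in the demo is a textbook formula used here only as a stand-in, not the paper's model.

        import numpy as np

        def result_range(f, bounds, n=101):
            # Grid over the uncertain-parameter box; the grid covers both the
            # interior and the boundary, mirroring the two cases in the paper.
            axes = [np.linspace(lo, hi, n) for lo, hi in bounds]
            grids = np.meshgrid(*axes, indexing="ij")
            values = f(*grids)
            return values.min(), values.max()

        # toy model: impact speed from friction mu and skid length s, in km/h
        v = lambda mu, s: np.sqrt(2 * 9.81 * mu * s) * 3.6
        print(result_range(v, [(0.6, 0.8), (20.0, 25.0)]))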

  8. Fast Tomographic Reconstruction From Limited Data Using Artificial Neural Networks

    NARCIS (Netherlands)

    D.M. Pelt (Daniel); K.J. Batenburg (Joost)

    2013-01-01

    htmlabstractImage reconstruction from a small number of projections is a challenging problem in tomography. Advanced algorithms that incorporate prior knowledge can sometimes produce accurate reconstructions, but they typically require long computation times. Furthermore, the required prior

  9. Gene expression network reconstruction by convex feature selection when incorporating genetic perturbations.

    Directory of Open Access Journals (Sweden)

    Benjamin A Logsdon

    Full Text Available Cellular gene expression measurements contain regulatory information that can be used to discover novel network relationships. Here, we present a new algorithm for network reconstruction powered by the adaptive lasso, a theoretically and empirically well-behaved method for selecting the regulatory features of a network. Any algorithms designed for network discovery that make use of directed probabilistic graphs require perturbations, produced by either experiments or naturally occurring genetic variation, to successfully infer unique regulatory relationships from gene expression data. Our approach makes use of appropriately selected cis-expression Quantitative Trait Loci (cis-eQTL), which provide a sufficient set of independent perturbations for maximum network resolution. We compare the performance of our network reconstruction algorithm to four other approaches: the PC-algorithm, QTLnet, the QDG algorithm, and the NEO algorithm, all of which have been used to reconstruct directed networks among phenotypes leveraging QTL. We show that the adaptive lasso can outperform these algorithms for networks of ten genes and ten cis-eQTL, and is competitive with the QDG algorithm for networks with thirty genes and thirty cis-eQTL, with rich topologies and hundreds of samples. Using this novel approach, we identify unique sets of directed relationships in Saccharomyces cerevisiae when analyzing genome-wide gene expression data for an intercross between a wild strain and a lab strain. We recover novel putative network relationships between a tyrosine biosynthesis gene (TYR1) and genes involved in endocytosis (RCY1), the spindle checkpoint (BUB2), sulfonate catabolism (JLP1), and cell-cell communication (PRM7). Our algorithm provides a synthesis of feature selection methods and graphical model theory that has the potential to reveal new directed regulatory relationships from the analysis of population level genetic and gene expression data.
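
    The adaptive lasso itself can be implemented with a standard rescaling trick: weight each predictor by an initial estimate, fit an ordinary lasso on the rescaled design, and undo the scaling. This generic sketch (using scikit-learn) omits the paper's eQTL-specific machinery; gamma and alpha are arbitrary illustrative values.

        import numpy as np
        from sklearn.linear_model import LinearRegression, Lasso

        def adaptive_lasso(X, y, gamma=1.0, alpha=0.1):
            # initial OLS fit supplies the adaptive weights
            beta0 = LinearRegression().fit(X, y).coef_
            w = 1.0 / np.maximum(np.abs(beta0), 1e-8) ** gamma
            Xw = X / w                        # rescale columns by the weights
            lasso = Lasso(alpha=alpha).fit(Xw, y)
            return lasso.coef_ / w            # map back to coefficients on X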

  10. A digitally reconstructed radiograph algorithm calculated from first principles

    Science.gov (United States)

    Staub, David; Murphy, Martin J.

    2013-01-01

    Purpose: To develop an algorithm for computing realistic digitally reconstructed radiographs (DRRs) that match real cone-beam CT (CBCT) projections with no artificial adjustments. Methods: The authors used measured attenuation data from cone-beam CT projection radiographs of different materials to obtain a function to convert CT number to linear attenuation coefficient (LAC). The effects of scatter, beam hardening, and veiling glare were first removed from the attenuation data. Using this conversion function, the authors calculated the line integral of LAC through a CT along rays connecting the radiation source and detector pixels with a ray-tracing algorithm, producing raw DRRs. The effects of scatter, beam hardening, and veiling glare were then included in the DRRs through postprocessing. Results: The authors compared actual CBCT projections to DRRs produced with all corrections (scatter, beam hardening, and veiling glare) and to uncorrected DRRs. Algorithm accuracy was assessed through visual comparison of projections and DRRs, pixel intensity comparisons, intensity histogram comparisons, and correlation plots of DRR-to-projection pixel intensities. In general, the fully corrected algorithm provided a small but nontrivial improvement in accuracy over the uncorrected algorithm. The authors also investigated both measurement- and computation-based methods for determining the beam hardening correction, and found the computation-based method to be superior, as it accounted for nonuniform bowtie filter thickness. The authors benchmarked the algorithm for speed and found that it produced DRRs in about 0.35 s for full detector and CT resolution at a ray step-size of 0.5 mm. Conclusions: The authors have demonstrated a DRR algorithm calculated from first principles that accounts for scatter, beam hardening, and veiling glare in order to produce accurate DRRs. The algorithm is computationally efficient, making it a good candidate for iterative CT reconstruction techniques.
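
    The raw-DRR step, a line integral of the linear attenuation coefficient along each source-to-pixel ray, can be sketched with uniform ray stepping. The conversion function, voxel-coordinate conventions, and step size below are assumptions; the scatter, beam-hardening, and veiling-glare corrections described in the abstract would be applied separately in postprocessing.

        import numpy as np
        from scipy.ndimage import map_coordinates

        def raw_drr_pixel(volume, lac_of_ct, source, pixel, step=0.5):
            # Sample the CT volume at uniform steps along the ray, convert CT
            # numbers to LAC, integrate, and exponentiate (Beer-Lambert law).
            src, pix = np.asarray(source, float), np.asarray(pixel, float)
            length = np.linalg.norm(pix - src)
            n = max(int(length / step), 2)
            pts = src[:, None] + (pix - src)[:, None] * np.linspace(0, 1, n)[None, :]
            ct = map_coordinates(volume, pts, order=1, mode="constant", cval=-1000.0)
            mu = lac_of_ct(ct)                      # CT number -> LAC (1/mm), assumed callable
            integral = np.trapz(mu, dx=length / (n - 1))
            return np.exp(-integral)                # raw DRR pixel intensity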

  11. A Multidomain Survivable Virtual Network Mapping Algorithm

    Directory of Open Access Journals (Sweden)

    Xiancui Xiao

    2017-01-01

    Full Text Available Although existing networks are increasingly deployed in multidomain environments, most existing research focuses on single-domain networks, and there are no appropriate solutions for the multidomain virtual network mapping problem. In fact, most studies assume that the underlying network can operate without any interruption. However, physical networks cannot guarantee the normal provision of network services owing to external causes, and traditional single-domain networks have difficulty meeting user needs, especially the high security requirements of network transmission. In order to solve the above problems, this paper proposes a survivable virtual network mapping algorithm (IntD-GRC-SVNE) that implements multidomain mapping in network virtualization. IntD-GRC-SVNE maps the virtual communication networks onto different domain networks and provides backup resources for virtual links, which improves the survivability of the special networks. Simulation results show that IntD-GRC-SVNE can not only improve the survivability of multidomain communications networks but also render the network load more balanced and greatly improve the network acceptance rate due to the employment of GRC (global resource capacity).

  12. Genetic algorithm for neural networks optimization

    Science.gov (United States)

    Setyawati, Bina R.; Creese, Robert C.; Sahirman, Sidharta

    2004-11-01

    This paper examines the forecasting performance of multi-layer feed-forward neural networks in modeling a particular foreign exchange rate, i.e. Japanese Yen/US Dollar. The effects of two learning methods, Back Propagation and Genetic Algorithm, with the neural network topology and other parameters held fixed, were investigated. The early results indicate that the application of this hybrid system seems to be well suited for the forecasting of foreign exchange rates. The Neural Networks and Genetic Algorithm were programmed using MATLAB®.

  13. Vectorized algorithms for spiking neural network simulation.

    Science.gov (United States)

    Brette, Romain; Goodman, Dan F M

    2011-06-01

    High-level languages (Matlab, Python) are popular in neuroscience because they are flexible and accelerate development. However, for simulating spiking neural networks, the cost of interpretation is a bottleneck. We describe a set of algorithms to simulate large spiking neural networks efficiently with high-level languages using vector-based operations. These algorithms constitute the core of Brian, a spiking neural network simulator written in the Python language. Vectorized simulation makes it possible to combine the flexibility of high-level languages with the computational efficiency usually associated with compiled languages.
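
    The vectorization idea is easy to demonstrate with a leaky integrate-and-fire population: one NumPy state vector per variable and no per-neuron Python loop. The model constants and the all-to-all toy weights below are illustrative assumptions, not Brian's internals.

        import numpy as np

        N, dt, tau, v_th, v_reset = 1000, 1e-4, 20e-3, 1.0, 0.0
        v = np.zeros(N)                        # membrane potentials
        W = np.random.rand(N, N) * 0.001       # toy all-to-all weights

        for _ in range(1000):                  # simulation steps
            i_ext = np.random.rand(N) * 2.0    # toy external drive
            v += dt / tau * (i_ext - v)        # leaky integration, all neurons at once
            spiked = v >= v_th                 # boolean spike vector
            v[spiked] = v_reset                # vectorized reset
            v += W[:, spiked].sum(axis=1)      # propagate spikes through weights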

  14. Inferring gene regulatory networks by singular value decomposition and gravitation field algorithm.

    Science.gov (United States)

    Zheng, Ming; Wu, Jia-nan; Huang, Yan-xin; Liu, Gui-xia; Zhou, You; Zhou, Chun-guang

    2012-01-01

    Reconstruction of gene regulatory networks (GRNs) is of utmost interest and has become a challenging computational problem in systems biology. However, every existing inference algorithm based on gene expression profiles has its own advantages and disadvantages; in particular, the effectiveness and efficiency of previous algorithms remain limited. In this work, we propose a novel inference algorithm for gene expression data based on a differential equation model, combining two methods for inferring GRNs. Before reconstructing GRNs, the singular value decomposition method is used to decompose the gene expression data, determine the algorithm's solution space, and obtain all candidate solutions of GRNs. Within this generated family of candidate solutions, a modified gravitation field algorithm is used to infer GRNs, optimizing the criteria of the differential equation model and searching for the best network structure. The proposed algorithm is validated on both a simulated scale-free network and a real benchmark gene regulatory network from a networks database. Both the Bayesian method and the traditional differential equation model were also used to infer GRNs, and their results were compared with the proposed algorithm. Genetic algorithm and simulated annealing were likewise used to evaluate the gravitation field algorithm. The cross-validation results confirmed the effectiveness of our algorithm, which significantly outperforms previous algorithms.

  15. Harnessing diversity towards the reconstructing of large scale gene regulatory networks.

    Directory of Open Access Journals (Sweden)

    Takeshi Hase

    Full Text Available Elucidating gene regulatory networks (GRNs) from large scale experimental data remains a central challenge in systems biology. Recently, numerous techniques, particularly consensus driven approaches combining different algorithms, have become a potentially promising strategy to infer accurate GRNs. Here, we develop a novel consensus inference algorithm, TopkNet, that can integrate multiple algorithms to infer GRNs. Comprehensive performance benchmarking on a cloud computing framework demonstrated that (i) a simple strategy to combine many algorithms does not always lead to performance improvement compared to the cost of consensus and (ii) TopkNet, integrating only high-performance algorithms, provides significant performance improvement compared to the best individual algorithms and community prediction. These results suggest that a priori determination of high-performance algorithms is a key to reconstructing an unknown regulatory network. Similarity among gene-expression datasets can be useful to determine potential optimal algorithms for reconstruction of unknown regulatory networks, i.e., if expression data associated with a known regulatory network are similar to those associated with an unknown regulatory network, optimal algorithms determined for the known regulatory network can be repurposed to infer the unknown regulatory network. Based on this observation, we developed a quantitative measure of similarity among gene-expression datasets and demonstrated that, if similarity between the two expression datasets is high, TopkNet integrating algorithms that are optimal for the known dataset performs well on the unknown dataset. The consensus framework, TopkNet, together with the similarity measure proposed in this study, provides a powerful strategy towards harnessing the wisdom of the crowds in reconstruction of unknown regulatory networks.

  16. Harnessing diversity towards the reconstructing of large scale gene regulatory networks.

    Science.gov (United States)

    Hase, Takeshi; Ghosh, Samik; Yamanaka, Ryota; Kitano, Hiroaki

    2013-01-01

    Elucidating gene regulatory networks (GRNs) from large scale experimental data remains a central challenge in systems biology. Recently, numerous techniques, particularly consensus driven approaches combining different algorithms, have become a potentially promising strategy to infer accurate GRNs. Here, we develop a novel consensus inference algorithm, TopkNet, that can integrate multiple algorithms to infer GRNs. Comprehensive performance benchmarking on a cloud computing framework demonstrated that (i) a simple strategy to combine many algorithms does not always lead to performance improvement compared to the cost of consensus and (ii) TopkNet, integrating only high-performance algorithms, provides significant performance improvement compared to the best individual algorithms and community prediction. These results suggest that a priori determination of high-performance algorithms is a key to reconstructing an unknown regulatory network. Similarity among gene-expression datasets can be useful to determine potential optimal algorithms for reconstruction of unknown regulatory networks, i.e., if expression data associated with a known regulatory network are similar to those associated with an unknown regulatory network, optimal algorithms determined for the known regulatory network can be repurposed to infer the unknown regulatory network. Based on this observation, we developed a quantitative measure of similarity among gene-expression datasets and demonstrated that, if similarity between the two expression datasets is high, TopkNet integrating algorithms that are optimal for the known dataset performs well on the unknown dataset. The consensus framework, TopkNet, together with the similarity measure proposed in this study, provides a powerful strategy towards harnessing the wisdom of the crowds in reconstruction of unknown regulatory networks.

  17. Ekofisk chalk: core measurements, stochastic reconstruction, network modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Talukdar, Saifullah

    2002-07-01

    This dissertation deals with (1) experimental measurements on petrophysical, reservoir engineering and morphological properties of Ekofisk chalk, (2) numerical simulation of core flood experiments to analyze and improve relative permeability data, (3) stochastic reconstruction of chalk samples from limited morphological information, (4) extraction of pore space parameters from the reconstructed samples, development of a network model using pore space information, and computation of petrophysical and reservoir engineering properties from the network model, and (5) development of 2D and 3D idealized fractured reservoir models and verification of the applicability of several widely used conventional upscaling techniques in fractured reservoir simulation. Experiments have been conducted on eight Ekofisk chalk samples and porosity, absolute permeability, formation factor, and oil-water relative permeability, capillary pressure and resistivity index are measured at laboratory conditions. Mercury porosimetry data and backscatter scanning electron microscope images have also been acquired for the samples. A numerical simulation technique involving history matching of the production profiles is employed to improve the relative permeability curves and to analyze hysteresis of the Ekofisk chalk samples. The technique was found to be a powerful tool to supplement the uncertainties in experimental measurements. Porosity and correlation statistics obtained from backscatter scanning electron microscope images are used to reconstruct microstructures of chalk and particulate media. The reconstruction technique involves a simulated annealing algorithm, which can be constrained by an arbitrary number of morphological parameters. This flexibility of the algorithm is exploited to successfully reconstruct particulate media and chalk samples using more than one correlation functions. A technique based on conditional simulated annealing has been introduced for exact reproduction of vuggy

  18. Energy reconstruction and calibration algorithms for the ATLAS electromagnetic calorimeter

    CERN Document Server

    Delmastro, M

    2003-01-01

    The work of this thesis is devoted to the study, development and optimization of the algorithms of energy reconstruction and calibration for the electromagnetic calorimeter (EMC) of the ATLAS experiment, presently under installation and commissioning at the CERN Large Hadron Collider in Geneva (Switzerland). A deep study of the electrical characteristics of the detector and of the formation and propagation of its signals is conducted: an electrical model of the detector is developed and analyzed through simulations; a hardware model (mock-up) of a group of the EMC readout cells has been built, allowing direct collection and study of the properties of the signals emerging from the EMC cells. We analyze the existing multiple-sampled signal reconstruction strategy, showing the need for an improvement in order to reach the advertised performance of the detector. The optimal filtering reconstruction technique is studied and implemented, taking into account the differences between the ionization and calibration waveforms as e...

  19. New Algorithm of Seed Finding for Track Reconstruction

    OpenAIRE

    Baranov Dmitry; Merts Sergei; Ososkov Gennady; Rogachevsky Oleg

    2016-01-01

    Event reconstruction is a fundamental problem in high energy physics experiments. It consists of track finding and track fitting procedures in the experiment's tracking detectors. This requires a tremendous search of detector responses belonging to each track, aimed at obtaining so-called “seeds”, i.e. initial approximations of the track parameters of charged particles. In the paper we propose a new algorithm for the seed-finding procedure for the BM@N experiment.

  20. New Algorithm of Seed Finding for Track Reconstruction

    Directory of Open Access Journals (Sweden)

    Baranov Dmitry

    2016-01-01

    Full Text Available Event reconstruction is a fundamental problem in high energy physics experiments. It consists of track finding and track fitting procedures in the experiment's tracking detectors. This requires a tremendous search of detector responses belonging to each track, aimed at obtaining so-called “seeds”, i.e. initial approximations of the track parameters of charged particles. In the paper we propose a new algorithm for the seed-finding procedure for the BM@N experiment.

  1. Genome-scale reconstruction of the Saccharomyces cerevisiae metabolic network

    DEFF Research Database (Denmark)

    Förster, Jochen; Famili, I.; Fu, P.

    2003-01-01

    and the environment were included. A total of 708 structural open reading frames (ORFs) were accounted for in the reconstructed network, corresponding to 1035 metabolic reactions. Further, 140 reactions were included on the basis of biochemical evidence resulting in a genome-scale reconstructed metabolic network...... with Escherichia coli. The reconstructed metabolic network is the first comprehensive network for a eukaryotic organism, and it may be used as the basis for in silico analysis of phenotypic functions....

  2. Reconstructing Causal Biological Networks through Active Learning.

    Directory of Open Access Journals (Sweden)

    Hyunghoon Cho

    Full Text Available Reverse-engineering of biological networks is a central problem in systems biology. Intervention data, such as gene knockouts or knockdowns, are typically used for teasing apart causal relationships among genes. Under time or resource constraints, one needs to carefully choose which intervention experiments to carry out. Previous approaches for selecting the most informative interventions have largely been focused on discrete Bayesian networks. However, continuous Bayesian networks are of great practical interest, especially in the study of complex biological systems and their quantitative properties. In this work, we present an efficient, information-theoretic active learning algorithm for Gaussian Bayesian networks (GBNs), which serve as important models for gene regulatory networks. In addition to providing linear-algebraic insights unique to GBNs, leading to significant runtime improvements, we demonstrate the effectiveness of our method on data simulated with GBNs and the DREAM4 network inference challenge data sets. Our method generally leads to faster recovery of the underlying network structure and faster convergence to the final distribution of confidence scores over candidate graph structures using the full data, in comparison to random selection of intervention experiments.

  3. Convergence of Algorithms for Reconstructing Convex Bodies and Directional Measures

    DEFF Research Database (Denmark)

    Gardner, Richard; Kiderlen, Markus; Milanfar, Peyman

    2006-01-01

    We investigate algorithms for reconstructing a convex body K in R^n from noisy measurements of its support function or its brightness function in k directions u1, . . . , uk. The key idea of these algorithms is to construct a convex polytope Pk whose support function (or brightness function) best approximates the given measurements in the directions u1, . . . , uk (in the least squares sense). The measurement errors are assumed to be stochastically independent and Gaussian. It is shown that this procedure is (strongly) consistent, meaning that almost surely, Pk tends to K in the Hausdorff metric as k → ∞ ... in k directions u1, . . . , uk. Here the Dudley and Prohorov metrics are used. The methods are linked to those employed for the support and brightness function algorithms via the fact that the rose of intersections is the support function of a projection body.

  4. Impact of Reconstruction Algorithms on CT Radiomic Features of Pulmonary Tumors: Analysis of Intra- and Inter-Reader Variability and Inter-Reconstruction Algorithm Variability.

    Science.gov (United States)

    Kim, Hyungjin; Park, Chang Min; Lee, Myunghee; Park, Sang Joon; Song, Yong Sub; Lee, Jong Hyuk; Hwang, Eui Jin; Goo, Jin Mo

    2016-01-01

    To identify the impact of reconstruction algorithms on CT radiomic features of pulmonary tumors and to reveal and compare the intra- and inter-reader and inter-reconstruction algorithm variability of each feature. Forty-two patients (M:F = 19:23; mean age, 60.43±10.56 years) with 42 pulmonary tumors (22.56±8.51 mm) underwent contrast-enhanced CT scans, which were reconstructed with filtered back projection and a commercial iterative reconstruction algorithm (levels 3 and 5). Two readers independently segmented the whole tumor volume. Fifteen radiomic features were extracted and compared among reconstruction algorithms. Intra- and inter-reader variability and inter-reconstruction algorithm variability were calculated using coefficients of variation (CVs) and then compared. Among the 15 features, 5 first-order tumor intensity features and 4 gray level co-occurrence matrix (GLCM)-based features showed significant differences (p < 0.05) among the reconstruction algorithms. As for the variability, effective diameter, sphericity, entropy, and GLCM entropy were the most robust features (CV ≤ 5%). Inter-reader variability was larger than intra-reader or inter-reconstruction algorithm variability in 9 features. However, for entropy, homogeneity, and 4 GLCM-based features, inter-reconstruction algorithm variability was significantly greater than inter-reader variability (p < 0.05), indicating that these radiomic features are substantially affected by the reconstruction algorithms. Inter-reconstruction algorithm variability was greater than inter-reader variability for entropy, homogeneity, and GLCM-based features.

  5. Angle Statistics Reconstruction: a robust reconstruction algorithm for Muon Scattering Tomography

    Science.gov (United States)

    Stapleton, M.; Burns, J.; Quillin, S.; Steer, C.

    2014-11-01

    Muon Scattering Tomography (MST) is a technique for using the scattering of cosmic ray muons to probe the contents of enclosed volumes. As a muon passes through material it undergoes multiple Coulomb scattering, where the amount of scattering is dependent on the density and atomic number of the material as well as the path length. Hence, MST has been proposed as a means of imaging dense materials, for instance to detect special nuclear material in cargo containers. Algorithms are required to generate an accurate reconstruction of the material density inside the volume from the muon scattering information and some have already been proposed, most notably the Point of Closest Approach (PoCA) and Maximum Likelihood/Expectation Maximisation (MLEM) algorithms. However, whilst PoCA-based algorithms are easy to implement, they perform rather poorly in practice. Conversely, MLEM is a complicated algorithm to implement and computationally intensive, and there is currently no published, fast, easily implementable algorithm that performs well in practice. In this paper, we first provide a detailed analysis of the source of inaccuracy in PoCA-based algorithms. We then motivate an alternative method, based on ideas first laid out by Morris et al., presenting and fully specifying an algorithm that performs well against simulations of realistic scenarios. We argue this new algorithm should be adopted by developers of Muon Scattering Tomography as an alternative to PoCA.
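
    The PoCA baseline that the paper analyzes reduces to a classical geometry computation: the midpoint of the shortest segment between the incoming and outgoing track lines. A minimal sketch of that computation follows; the degenerate-case handling is an illustrative choice.

        import numpy as np

        def poca(p1, d1, p2, d2):
            # Point of closest approach between lines p1 + t*d1 and p2 + s*d2.
            d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
            n = np.cross(d1, d2)
            denom = np.dot(n, n)
            if denom < 1e-12:                  # nearly parallel tracks
                return p1
            r = p2 - p1
            t = np.dot(np.cross(r, d2), n) / denom
            s = np.dot(np.cross(r, d1), n) / denom
            c1, c2 = p1 + t * d1, p2 + s * d2  # closest points on each line
            return 0.5 * (c1 + c2)             # midpoint = PoCA estimate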

  6. Quantifying the multi-scale performance of network inference algorithms.

    Science.gov (United States)

    Oates, Chris J; Amos, Richard; Spencer, Simon E F

    2014-10-01

    Graphical models are widely used to study complex multivariate biological systems. Network inference algorithms aim to reverse-engineer such models from noisy experimental data. It is common to assess such algorithms using techniques from classifier analysis. These metrics, based on the ability to correctly infer individual edges, possess a number of appealing features, including invariance to rank-preserving transformation. However, regulation in biological systems occurs on multiple scales and existing metrics do not take into account the correctness of higher-order network structure. In this paper novel performance scores are presented that share the appealing properties of existing scores, whilst capturing the ability to uncover regulation on multiple scales. Theoretical results confirm that the performance of a network inference algorithm depends crucially on the scale at which inferences are to be made; in particular, strong local performance does not guarantee accurate reconstruction of higher-order topology. Applying these scores to a large corpus of data from the DREAM5 challenge, we undertake a data-driven assessment of estimator performance. We find that the "wisdom of crowds" network, which demonstrated superior local performance in the DREAM5 challenge, is also among the best performing methodologies for inference of regulation on multiple length scales.

  7. Timing Analysis with INTEGRAL: Comparing Different Reconstruction Algorithms

    Science.gov (United States)

    Grinberg, V.; Kreykenboehm, I.; Fuerst, F.; Wilms, J.; Pottschmidt, K.; Bel, M. Cadolle; Rodriquez, J.; Marcu, D. M.; Suchy, S.; Markowitz, A.; hide

    2010-01-01

    INTEGRAL is one of the few instruments capable of detecting X-rays above 20 keV. It is therefore in principle well suited for studying X-ray variability in this regime. Because INTEGRAL uses coded mask instruments for imaging, the reconstruction of light curves of X-ray sources is highly non-trivial. We present results from the comparison of two commonly employed algorithms, which primarily measure flux from mask deconvolution (ii-lc-extract) and from calculating the pixel illuminated fraction (ii-light). Both methods agree well for timescales above about 10 s, the highest time resolution for which image reconstruction is possible. For higher time resolution, ii-light produces meaningful results, although the overall variance of the light curves is not preserved.

  8. An improved Bayesian network method for reconstructing gene regulatory network based on candidate auto selection.

    Science.gov (United States)

    Xing, Linlin; Guo, Maozu; Liu, Xiaoyan; Wang, Chunyu; Wang, Lei; Zhang, Yin

    2017-11-17

    The reconstruction of gene regulatory networks (GRNs) from gene expression data can discover regulatory relationships among genes and provide deep insights into the complicated regulation mechanisms of life. However, it is still a great challenge in systems biology and bioinformatics. During the past years, numerous computational approaches have been developed for this goal, and Bayesian network (BN) methods draw the most attention among them because of their inherent probability characteristics. However, Bayesian network methods are time consuming and cannot handle large-scale networks due to their high computational complexity, while mutual information-based methods are highly effective but directionless and have a high false-positive rate. To solve these problems, we propose a Candidate Auto Selection algorithm (CAS) based on mutual information and breakpoint detection to restrict the search space in order to accelerate the learning process of the Bayesian network. First, the proposed CAS algorithm automatically selects the neighbor candidates of each node before searching for the best structure of the GRN. Then, based on the CAS algorithm, we propose a globally optimal greedy search method (CAS + G), which focuses on finding the highest-rated network structure, and a local learning method (CAS + L), which focuses on faster learning of the structure with little loss of quality. Results show that the proposed CAS algorithm can effectively reduce the search space of Bayesian networks through identifying the neighbor candidates of each node. In our experiments, the CAS + G method outperforms the state-of-the-art method on simulation data for inferring GRNs, and the CAS + L method is significantly faster than the state-of-the-art method with little loss of accuracy. Hence, the CAS-based methods effectively decrease the computational complexity of Bayesian network learning and are more suitable for GRN inference.
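
    The candidate-selection idea (without the breakpoint-detection part of CAS, which is omitted here) can be sketched as: compute mutual information between each gene and all others, and keep the top-k as candidate neighbors to restrict the structure search. The use of scikit-learn's estimator and the choice of k are assumptions for illustration.

        import numpy as np
        from sklearn.feature_selection import mutual_info_regression

        def candidate_neighbors(expr, k=5):
            # expr: (samples, genes) expression matrix.
            # Returns, for each gene, the k highest-MI genes as candidates.
            n_genes = expr.shape[1]
            candidates = {}
            for j in range(n_genes):
                others = [i for i in range(n_genes) if i != j]
                mi = mutual_info_regression(expr[:, others], expr[:, j])
                top = np.argsort(mi)[::-1][:k]
                candidates[j] = [others[i] for i in top]
            return candidates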

  9. Double and multiple knockout simulations for genome-scale metabolic network reconstructions.

    Science.gov (United States)

    Goldstein, Yaron Ab; Bockmayr, Alexander

    2015-01-01

    Constraint-based modeling of genome-scale metabolic network reconstructions has become a widely used approach in computational biology. Flux coupling analysis is a constraint-based method that analyses the impact of single reaction knockouts on other reactions in the network. We present an extension of flux coupling analysis for double and multiple gene or reaction knockouts, and develop corresponding algorithms for an in silico simulation. To evaluate our method, we perform a full single and double knockout analysis on a selection of genome-scale metabolic network reconstructions and compare the results. A prototype implementation of double knockout simulation is available at http://hoverboard.io/L4FC.

  10. Routing algorithms in networks-on-chip

    CERN Document Server

    Daneshtalab, Masoud

    2014-01-01

    This book provides a single-source reference to routing algorithms for Networks-on-Chip (NoCs), as well as in-depth discussions of advanced solutions applied to current and next generation, many core NoC-based Systems-on-Chip (SoCs). After a basic introduction to the NoC design paradigm and architectures, routing algorithms for NoC architectures are presented and discussed at all abstraction levels, from the algorithmic level to actual implementation. Coverage emphasizes the role played by the routing algorithm and is organized around key problems affecting current and next generation, many-core SoCs. A selection of routing algorithms is included, specifically designed to address key issues faced by designers in the ultra-deep sub-micron (UDSM) era, including performance improvement, power, energy, and thermal issues, fault tolerance and reliability. · Provides a comprehensive overview of routing algorithms for Networks-on-Chip and NoC-based, manycore systems; · Describe...

  11. Shape reconstruction from apparent contours theory and algorithms

    CERN Document Server

    Bellettini, Giovanni; Paolini, Maurizio

    2015-01-01

    Motivated by a variational model concerning the depth of the objects in a picture and the problem of hidden and illusory contours, this book investigates one of the central problems of computer vision: the topological and algorithmic reconstruction of a smooth three dimensional scene starting from the visible part of an apparent contour. The authors focus their attention on the manipulation of apparent contours using a finite set of elementary moves, which correspond to diffeomorphic deformations of three dimensional scenes. A large part of the book is devoted to the algorithmic part, with implementations, experiments, and computed examples. The book is intended also as a user's guide to the software code appcontour, written for the manipulation of apparent contours and their invariants. This book is addressed to theoretical and applied scientists working in the field of mathematical models of image segmentation.

  12. Reconstruction of network topology using status-time-series data

    Science.gov (United States)

    Pandey, Pradumn Kumar; Badarla, Venkataramana

    2018-01-01

    Uncovering the heterogeneous connection pattern of a networked system from available status-time-series (STS) data of a dynamical process on the network is of great interest in network science and is known as a reverse engineering problem. Dynamical processes on a network are affected by the structure of the network, and this dependency between the diffusion dynamics and the network structure can be utilized to retrieve the connection pattern from the diffusion data. Information about the network structure can, in turn, help to devise control of the dynamics on the network. In this paper, we consider the problem of network reconstruction from available STS data using matrix analysis. The proposed method is tested successfully under susceptible-infected-susceptible (SIS) diffusion dynamics on real-world and computer-generated benchmark networks. The high accuracy and efficiency of the proposed reconstruction procedure define the novelty of the method, and it outperforms the compressed sensing theory (CST) based method of network reconstruction using STS data. Further, the same reconstruction procedure is applied to weighted networks, where the ordering of the edges is identified with high accuracy.

  13. Statistical reconstruction algorithms for continuous wave electron spin resonance imaging

    Science.gov (United States)

    Kissos, Imry; Levit, Michael; Feuer, Arie; Blank, Aharon

    2013-06-01

    Electron spin resonance imaging (ESRI) is an important branch of ESR that deals with heterogeneous samples ranging from semiconductor materials to small live animals and even humans. ESRI can produce either spatial images (providing information about the spatially dependent radical concentration) or spectral-spatial images, where an extra dimension is added to describe the absorption spectrum of the sample (which can also be spatially dependent). The mapping of oxygen in biological samples, often referred to as oximetry, is a prime example of an ESRI application. ESRI suffers frequently from a low signal-to-noise ratio (SNR), which results in long acquisition times and poor image quality. A broader use of ESRI is hampered by this slow acquisition, which can also be an obstacle for many biological applications where conditions may change relatively quickly over time. The objective of this work is to develop an image reconstruction scheme for continuous wave (CW) ESRI that would make it possible to reduce the data acquisition time without degrading the reconstruction quality. This is achieved by adapting the so-called "statistical reconstruction" method, recently developed for other medical imaging modalities, to the specific case of CW ESRI. Our new algorithm accounts for unique ESRI aspects such as field modulation, spectral-spatial imaging, and possible limitation on the gradient magnitude (the so-called "limited angle" problem). The reconstruction method shows improved SNR and contrast recovery vs. commonly used back-projection-based methods, for a variety of simulated synthetic samples as well as in actual CW ESRI experiments.

  14. Network-based recommendation algorithms: A review

    Science.gov (United States)

    Yu, Fei; Zeng, An; Gillard, Sébastien; Medo, Matúš

    2016-06-01

    Recommender systems are a vital tool that helps us to overcome the information overload problem. They are used by most e-commerce web sites and attract the interest of a broad scientific community. A recommender system uses data on users' past preferences to choose new items that might be appreciated by a given individual user. While many approaches to recommendation exist, the approach based on a network representation of the input data has gained considerable attention in the past. We review here a broad range of network-based recommendation algorithms and, for the first time, compare their performance on three distinct real datasets. We present recommendation topics that go beyond the mere question of which algorithm to use, such as the possible influence of recommendation on the evolution of systems that use it, and finally discuss open research directions and challenges.

  15. Efficient network reconstruction from dynamical cascades identifies small-world topology of neuronal avalanches.

    Directory of Open Access Journals (Sweden)

    Sinisa Pajevic

    2009-01-01

    Cascading activity is commonly found in complex systems with directed interactions, such as metabolic networks, neuronal networks, or disease spreading in social networks. Substantial insight into a system's organization can be obtained by reconstructing the underlying functional network architecture from the observed activity cascades. Here we focus on Bayesian approaches and reduce their computational demands by introducing the Iterative Bayesian (IB) and Posterior Weighted Averaging (PWA) methods. We introduce a special case of PWA, cast in nonparametric form, which we call the normalized count (NC) algorithm. NC efficiently reconstructs random and small-world functional network topologies and architectures from subcritical, critical, and supercritical cascading dynamics and yields significant improvements over commonly used correlation methods. With experimental data, NC identified a functional and structural small-world topology and its corresponding traffic in cortical networks with neuronal avalanche dynamics.
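
    The record does not give the NC formula; the following is a minimal sketch of one plausible reading of a normalized-count scheme, in which each newly active node splits unit credit equally among the nodes active in the preceding frame. All names and the exact normalization are assumptions, not the paper's definition.

```python
from collections import defaultdict

def normalized_count(frames):
    """frames: list of sets of node ids active in successive time bins of a cascade.
    Returns a dict (src, dst) -> accumulated normalized-count score."""
    w = defaultdict(float)
    for prev, curr in zip(frames[:-1], frames[1:]):
        if not prev:
            continue
        share = 1.0 / len(prev)          # split unit credit among candidate parents
        for dst in curr:
            for src in prev:
                if src != dst:
                    w[(src, dst)] += share
    return dict(w)
```

    Scores accumulated over many cascades would then be thresholded to obtain the predicted functional network.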

  16. Double and multiple knockout simulations for genome-scale metabolic network reconstructions

    OpenAIRE

    Goldstein, Yaron AB; Bockmayr, Alexander

    2015-01-01

    Background Constraint-based modeling of genome-scale metabolic network reconstructions has become a widely used approach in computational biology. Flux coupling analysis is a constraint-based method that analyses the impact of single reaction knockouts on other reactions in the network. Results We present an extension of flux coupling analysis for double and multiple gene or reaction knockouts, and develop corresponding algorithms for an in silico simulation. To evaluate our method, we perfor...

  17. Algorithm For A Self-Growing Neural Network

    Science.gov (United States)

    Cios, Krzysztof J.

    1996-01-01

    The CID3 algorithm simulates a self-growing neural network. It constructs decision trees equivalent to the hidden layers of a neural network and is based on the ID3 algorithm, which dynamically generates a decision tree while minimizing the entropy of information. CID3 generates a feedforward neural network by use of either a crisp or a fuzzy measure of entropy.

  18. Reconstruction of chalk pore networks from 2D backscatter electron micrographs using a simulated annealing technique

    Energy Technology Data Exchange (ETDEWEB)

    Talukdar, M.S.; Torsaeter, O. [Department of Petroleum Engineering and Applied Geophysics, Norwegian University of Science and Technology, Trondheim (Norway)

    2002-05-01

    We report the stochastic reconstruction of chalk pore networks from limited morphological information that may be readily extracted from 2D backscatter electron (BSE) images of the pore space. The reconstruction technique employs a simulated annealing (SA) algorithm, which can be constrained by an arbitrary number of morphological descriptors. Backscatter electron images of a high-porosity North Sea chalk sample are analyzed and the morphological descriptors of the pore space are determined. The descriptors considered are the void-phase two-point probability function and the void-phase lineal path function, computed with or without the application of periodic boundary conditions (PBC). 2D and 3D samples have been reconstructed with different combinations of the descriptors, and the reconstructed pore networks have been analyzed quantitatively to evaluate the quality of the reconstructions. The results demonstrate that the simulated annealing technique may be used to reconstruct chalk pore networks with reasonable accuracy using the void-phase two-point probability function and/or the void-phase lineal path function. The two-point probability function produces slightly better reconstructions than the lineal path function, and imposing the lineal path function yields only a slight improvement over using the two-point probability function as the sole constraint. The application of periodic boundary conditions appears not to be critically important when reasonably large samples are reconstructed.
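
    As a hedged illustration of this kind of descriptor-constrained annealing (not the paper's exact implementation), the sketch below reconstructs a 2D binary image whose void-phase two-point probability function along one axis, computed with periodic boundaries via np.roll, matches a target curve; porosity is preserved by swapping pairs of unlike pixels, and moves are accepted with a Metropolis rule.

```python
import numpy as np

def s2_x(img, rmax):
    """Void-phase two-point probability along x (void encoded as 1), periodic BC."""
    return np.array([(img * np.roll(img, -r, axis=1)).mean() for r in range(rmax)])

def anneal(target_s2, shape, steps=20000, T0=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    img = (rng.random(shape) < target_s2[0]).astype(int)   # S2(0) equals the porosity
    E = ((s2_x(img, len(target_s2)) - target_s2) ** 2).sum()
    for k in range(steps):
        T = T0 * (1 - k / steps)                           # linear cooling schedule
        i1, j1 = rng.integers(shape[0]), rng.integers(shape[1])
        i2, j2 = rng.integers(shape[0]), rng.integers(shape[1])
        if img[i1, j1] == img[i2, j2]:
            continue
        img[i1, j1], img[i2, j2] = img[i2, j2], img[i1, j1]   # porosity-preserving swap
        E_new = ((s2_x(img, len(target_s2)) - target_s2) ** 2).sum()
        if E_new > E and rng.random() >= np.exp((E - E_new) / max(T, 1e-12)):
            img[i1, j1], img[i2, j2] = img[i2, j2], img[i1, j1]   # reject: swap back
        else:
            E = E_new
    return img
```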

  19. Quantitatively assessed CT imaging measures of pulmonary interstitial pneumonia: Effects of reconstruction algorithms on histogram parameters

    Energy Technology Data Exchange (ETDEWEB)

    Koyama, Hisanobu [Department of Radiology, Hyogo Kaibara Hospital, 5208-1 Kaibara, Kaibara-cho, Tanba 669-3395 (Japan)], E-mail: hisanobu19760104@yahoo.co.jp; Ohno, Yoshiharu [Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunoki-cho, Chuo-ku, Kobe 650-0017 (Japan)], E-mail: yosirad@kobe-u.ac.jp; Yamazaki, Youichi [Department of Medical Physics and Engineering, Faculty of Health Sciences, Graduate School of Medicine, Osaka University, 1-7 Yamadaoka, Suita 565-0871 (Japan)], E-mail: y.yamazk@sahs.med.osaka-u.ac.jp; Nogami, Munenobu [Division of PET, Institute of Biomedical Research and Innovation, 2-2 Minamimachi, Minatojima, Chuo-ku, Kobe 650-0047 (Japan)], E-mail: aznogami@fbri.org; Kusaka, Akiko [Division of Radiology, Kobe University Hospital, 7-5-2 Kusunoki-cho, Chuo-ku, Kobe 650-0017 (Japan)], E-mail: a.kusaka@hosp.kobe-u.ac.jp; Murase, Kenya [Department of Medical Physics and Engineering, Faculty of Health Sciences, Graduate School of Medicine, Osaka University, 1-7 Yamadaoka, Suita 565-0871 (Japan)], E-mail: murase@sahs.med.osaka-u.ac.jp; Sugimura, Kazuro [Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunoki-cho, Chuo-ku, Kobe 650-0017 (Japan)], E-mail: sugimura@med.kobe-u.ac.jp

    2010-04-15

    This study aimed to assess the influence of the reconstruction algorithm on quantitative assessments in interstitial pneumonia patients. A total of 25 collagen vascular disease patients (nine male and 16 female; mean age, 57.2 years; age range, 32-77 years) underwent thin-section MDCT examinations, and the MDCT data were reconstructed with three reconstruction algorithms (two high-frequency [A and B] and one standard [C]). In reconstruction algorithm B, the effect of low- and middle-frequency space was suppressed compared with reconstruction algorithm A. As quantitative CT parameters, kurtosis, skewness, and mean lung density (MLD) were acquired from a frequency histogram of the whole lung parenchyma for each reconstruction algorithm. To determine how these quantitative CT parameters are affected by the reconstruction algorithm, they were compared statistically; to determine their relationship with disease severity, they were correlated with pulmonary function tests (PFTs). All histogram parameter values differed significantly from each other (p < 0.0001), with those of reconstruction algorithm C being the highest. All MLDs had fair or moderate correlation with all parameters of PFT (-0.64 < r < -0.45, p < 0.05). Although kurtosis and skewness in high-frequency reconstruction algorithm A correlated significantly with all parameters of PFT (-0.61 < r < -0.45, p < 0.05), significant correlations were found only with diffusing capacity of carbon monoxide (DLco) and total lung capacity (TLC) in reconstruction algorithm C, and with forced expiratory volume in 1 s (FEV1), DLco and TLC in reconstruction algorithm B. In conclusion, the reconstruction algorithm influences quantitative assessments on chest thin-section MDCT examination in interstitial pneumonia patients.
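
    For concreteness, the three histogram parameters named above can be computed from the attenuation values of lung-parenchyma voxels as in the hedged sketch below; scipy's sample statistics stand in for whatever exact definitions the study used.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def histogram_params(hu_values):
    """hu_values: 1-D array of attenuation values (HU) of lung-parenchyma voxels.
    Returns the three parameters derived from the density histogram."""
    hu = np.asarray(hu_values, dtype=float)
    return {
        "kurtosis": float(kurtosis(hu)),   # peakedness of the density histogram
        "skewness": float(skew(hu)),       # asymmetry of the density histogram
        "MLD": float(np.mean(hu)),         # mean lung density
    }
```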

  20. Synthetic Event Reconstruction Experiments for Defining Sensor Network Characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Lundquist, J K; Kosovic, B; Belles, R

    2005-12-15

    An event reconstruction technology system has been designed and implemented at Lawrence Livermore National Laboratory (LLNL). This system integrates sensor observations, which may be sparse and/or conflicting, with transport and dispersion models via Bayesian stochastic sampling methodologies to characterize the sources of atmospheric releases of hazardous materials. We demonstrate the application of this event reconstruction technology system to designing sensor networks for detecting and responding to atmospheric releases of hazardous materials. The quantitative measure of the reduction in uncertainty, or benefit of a given network, can be utilized by policy makers to determine the cost/benefit of certain networks. Herein we present two numerical experiments demonstrating the utility of the event reconstruction methodology for sensor network design. In the first set of experiments, only the time resolution of the sensors varies between three candidate networks. The most "expensive" sensor network offers few advantages over the moderately priced network for reconstructing the release examined here. The second set of experiments explores the significance of the sensors' detection limit, which can have a significant impact on sensor cost. In this experiment, the expensive network can most clearly define the source location and source release rate; the other networks provide data insufficient for distinguishing between two possible clusters of source locations. When the reconstructions from all networks are aggregated into a composite plume, a decision-maker can distinguish the utility of the expensive sensor network.

  1. A parallel stereo reconstruction algorithm with applications in entomology (APSRA)

    Science.gov (United States)

    Bhasin, Rajesh; Jang, Won Jun; Hart, John C.

    2012-03-01

    We propose a fast parallel algorithm for the reconstruction of 3D point clouds of insects from binocular stereo image pairs, using a hierarchical approach for disparity estimation. Entomologists study various features of insects to classify them, build their distribution maps, and discover genetic links between specimens, among various other essential tasks. This information is important to the pesticide and pharmaceutical industries, among others. Given the large collections entomologists analyze, it becomes difficult to physically handle the entire collection and share the data with researchers across the world. With the method presented in our work, entomologists can create an image database for their collections and use the 3D models for studying the shape and structure of the insects, making the collections easier to maintain and share. Initial feedback shows that the reconstructed 3D models preserve the shape and size of the specimen. We further optimize our results to incorporate multiview stereo, which produces better overall structure of the insects. Our main contribution is applying stereoscopic vision techniques to entomology to solve the problems faced by entomologists.
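
    The paper's hierarchical, parallel disparity estimator is not spelled out in the abstract; the sketch below shows only the basic block-matching step that such methods refine, computing a windowed mean absolute difference over candidate disparities for a rectified pair. The function name and parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def disparity_sad(left, right, max_disp=32, win=5):
    """left, right: rectified (H, W) grayscale arrays. Returns an (H, W) disparity map."""
    H, W = left.shape
    cost = np.full((max_disp, H, W), np.inf)
    for d in range(max_disp):
        diff = np.abs(left[:, d:] - right[:, : W - d])     # shifted photometric error
        cost[d, :, d:] = uniform_filter(diff, size=win)    # windowed mean abs. difference
    return np.argmin(cost, axis=0).astype(float)           # winner-take-all disparity
```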

  2. Stereo Matching Based on Immune Neural Network in Abdomen Reconstruction

    Directory of Open Access Journals (Sweden)

    Huan Liu

    2015-01-01

    Stereo feature matching is a technique that finds an optimal match in two images from the same entity in the three-dimensional world. The stereo correspondence problem is formulated as an optimization task in which an energy function, representing the constraints on the solution, is to be minimized. A novel intelligent biological network (Bio-Net), which incorporates the human B-T cell immune system into a neural network, is proposed in this study in order to learn the robust relationship between the input feature points and the output matched points. A model is established from input-output data (left reference point-right target point). In the experiments, abdomen reconstructions for different-shape mannequins are performed by means of the proposed method. The final results are compared and analyzed, demonstrating that the proposed approach greatly outperforms a single neural network and the conventional matching algorithm in precision. In terms of time cost and efficiency in particular, the proposed method shows significant promise, and it can be considered an effective and feasible alternative for stereo matching.

  3. Reconstruction of extended Petri nets from time series data and its application to signal transduction and to gene regulatory networks

    Directory of Open Access Journals (Sweden)

    Marwan Wolfgang

    2011-07-01

    Background Network inference methods reconstruct mathematical models of molecular or genetic networks directly from experimental data sets. We have previously reported a mathematical method which is exclusively data-driven, does not involve any heuristic decisions within the reconstruction process, and delivers all possible alternative minimal networks in terms of simple place/transition Petri nets that are consistent with a given discrete time series data set. Results We fundamentally extended the previously published algorithm to consider catalysis and inhibition of the reactions that occur in the underlying network. The results of the reconstruction algorithm are encoded in the form of an extended Petri net involving control arcs. This allows the consideration of processes involving mass flow and/or regulatory interactions. As a non-trivial test case, the phosphate regulatory network of enterobacteria was reconstructed using in silico-generated time-series data sets on wild-type and in silico mutants. Conclusions The new exact algorithm reconstructs extended Petri nets from time series data sets by finding all alternative minimal networks that are consistent with the data. It suggested alternative molecular mechanisms for certain reactions in the network. The algorithm is useful to combine data from wild-type and mutant cells and may potentially integrate physiological, biochemical, pharmacological, and genetic data in the form of a single model.

  4. A fast and efficient gene-network reconstruction method from multiple over-expression experiments

    Directory of Open Access Journals (Sweden)

    Thurner Stefan

    2009-08-01

    Background Reverse engineering of gene regulatory networks presents one of the big challenges in systems biology. Gene regulatory networks are usually inferred from a set of single-gene over-expression and/or knockout experiments. Functional relationships between genes are retrieved either from the steady-state gene expressions or from the respective time series. Results We present a novel algorithm for gene network reconstruction on the basis of steady-state gene-chip data from over-expression experiments. The algorithm is based on a straightforward solution of a linear gene-dynamics equation, where experimental data is fed in as a first predictor for the solution. We compare the algorithm's performance with the NIR algorithm, both on the well-known E. coli experimental data and on in-silico experiments. Conclusion We show the superiority of the proposed algorithm in the number of correctly reconstructed links and discuss computational time and robustness. The proposed algorithm is not limited by combinatorial explosion problems and can in principle be used for large networks.
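
    A minimal sketch of the underlying idea, assuming the linear gene-dynamics model dx/dt = A x + p: at the steady state of each over-expression experiment, A x_k = -p_k, so the connectivity matrix can be recovered row-wise by least squares. Variable names and the exact experimental encoding are assumptions, not the paper's formulation.

```python
import numpy as np

def infer_linear_network(X, P):
    """X: (n_genes, n_exps) steady-state expression; P: (n_genes, n_exps)
    applied perturbations. Returns a least-squares estimate of A."""
    # A X = -P  <=>  X^T A^T = -P^T: one least-squares problem per gene row.
    A_T, *_ = np.linalg.lstsq(X.T, -P.T, rcond=None)
    return A_T.T
```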

  5. Robust Reconstruction of Complex Networks from Sparse Data

    Science.gov (United States)

    Han, Xiao; Shen, Zhesi; Wang, Wen-Xu; Di, Zengru

    2015-01-01

    Reconstructing complex networks from measurable data is a fundamental problem for understanding and controlling collective dynamics of complex networked systems. However, a significant challenge arises when we attempt to decode structural information hidden in limited amounts of data accompanied by noise and in the presence of inaccessible nodes. Here, we develop a general framework for robust reconstruction of complex networks from sparse and noisy data. Specifically, we decompose the task of reconstructing the whole network into recovering local structures centered at each node. Thus, the natural sparsity of complex networks ensures a conversion from the local structure reconstruction into a sparse signal reconstruction problem that can be addressed by using the lasso, a convex optimization method. We apply our method to evolutionary games, transportation, and communication processes taking place in a variety of model and real complex networks, finding that universal high reconstruction accuracy can be achieved from sparse data in spite of noise in time series and missing data of partial nodes. Our approach opens new routes to the network reconstruction problem and has potential applications in a wide range of fields.
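
    A hedged sketch of the node-by-node decomposition described above, assuming linear local dynamics observable as time series; scikit-learn's Lasso stands in for the convex solver, and the regression design is illustrative rather than the authors' exact formulation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def reconstruct_local(X, alpha=0.05):
    """X: (T, N) time series. For each node i, regress its increments on the
    states of all other nodes; nonzero coefficients mark predicted neighbors."""
    T, N = X.shape
    dX = X[1:] - X[:-1]
    A = np.zeros((N, N))
    for i in range(N):
        others = [j for j in range(N) if j != i]
        model = Lasso(alpha=alpha, max_iter=10000).fit(X[:-1][:, others], dX[:, i])
        A[i, others] = model.coef_       # sparse local structure centered at node i
    return A
```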

  6. A Super-resolution Reconstruction Algorithm for Surveillance Video

    Directory of Open Access Journals (Sweden)

    Jian Shao

    2017-01-01

    Recent technological developments have made surveillance video a primary method of preserving public security. Many city crimes are observed in surveillance video, and the most abundant evidence collected by the police is also acquired through surveillance video sources. Surveillance video footage offers very strong support for solving criminal cases; creating effective policy and applying useful methods to the retrieval of additional evidence is therefore becoming increasingly important. However, surveillance video has its failings, namely footage captured in low resolution (LR) and of poor visual quality. In this paper, we discuss the characteristics of surveillance video and combine manual feature registration, maximum a posteriori estimation, and projection onto convex sets to develop a super-resolution reconstruction method that improves the quality of surveillance video. With this method we can make optimal use of the information contained in the LR video image, while also controlling image edges clearly as well as the convergence of the algorithm. Finally, we suggest how to adjust the algorithm's adaptability by analyzing the prior information of the target image.

  7. Reconstructing cancer drug response networks using multitask learning.

    Science.gov (United States)

    Ruffalo, Matthew; Stojanov, Petar; Pillutla, Venkata Krishna; Varma, Rohan; Bar-Joseph, Ziv

    2017-10-10

    Translating in vitro results to clinical tests is a major challenge in systems biology. Here we present a new multitask learning framework that integrates thousands of cell line expression experiments to reconstruct drug-specific response networks in cancer. The reconstructed networks correctly identify several shared key proteins and pathways while simultaneously highlighting many cell type-specific proteins. We used the top proteins from each drug network to predict survival for patients prescribed the drug. Predictions based on proteins from the in vitro-derived networks significantly outperformed predictions based on known cancer genes, indicating that multitask learning can indeed identify accurate drug response networks.

  8. Localization Algorithms of Underwater Wireless Sensor Networks: A Survey

    Directory of Open Access Journals (Sweden)

    Yongjun Xu

    2012-02-01

    In Underwater Wireless Sensor Networks (UWSNs), localization is one of the most important technologies since it plays a critical role in many applications. Motivated by the widespread adoption of localization, in this paper we present a comprehensive survey of localization algorithms. First, we classify localization algorithms into three categories based on sensor nodes' mobility: stationary localization algorithms, mobile localization algorithms and hybrid localization algorithms. Moreover, we compare the localization algorithms in detail and analyze future research directions of localization algorithms in UWSNs.

  9. Localization algorithms of Underwater Wireless Sensor Networks: a survey.

    Science.gov (United States)

    Han, Guangjie; Jiang, Jinfang; Shu, Lei; Xu, Yongjun; Wang, Feng

    2012-01-01

    In Underwater Wireless Sensor Networks (UWSNs), localization is one of the most important technologies since it plays a critical role in many applications. Motivated by the widespread adoption of localization, in this paper we present a comprehensive survey of localization algorithms. First, we classify localization algorithms into three categories based on sensor nodes' mobility: stationary localization algorithms, mobile localization algorithms and hybrid localization algorithms. Moreover, we compare the localization algorithms in detail and analyze future research directions of localization algorithms in UWSNs.

  10. An Improved Harmony Search Algorithm for Power Distribution Network Planning

    Directory of Open Access Journals (Sweden)

    Wei Sun

    2015-01-01

    Distribution network planning, because it involves many variables and constraints, is a multiobjective, discrete, nonlinear, and large-scale optimization problem. Harmony search (HS) is a metaheuristic algorithm inspired by the improvisation process of music players. The HS algorithm has several impressive advantages, such as easy implementation, few adjustable parameters, and quick convergence, but it still has defects such as premature convergence and slow convergence speed. Addressing the defects of the standard algorithm and the characteristics of distribution network planning, an improved harmony search (IHS) algorithm is proposed in this paper. We set up a mathematical model of distribution network structure planning, whose objective function is to minimize annual cost under overload and radial-network constraints, and apply the IHS algorithm to solve this complex optimization model. The empirical results strongly indicate that the IHS algorithm can provide better results for the distribution network planning problem than other optimization algorithms.
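
    For reference, a plain harmony search loop for continuous minimization looks roughly as below; the paper's IHS variant additionally adapts parameters such as the pitch-adjusting rate over the run, which is not reproduced here. All parameter values are illustrative defaults.

```python
import numpy as np

def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=5000, seed=0):
    """Plain harmony search for min f(x); bounds: (dim, 2) array of [low, high]."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    hm = lo + rng.random((hms, dim)) * (hi - lo)           # harmony memory
    fit = np.apply_along_axis(f, 1, hm)
    for _ in range(iters):
        new = np.empty(dim)
        for d in range(dim):
            if rng.random() < hmcr:                        # memory consideration
                new[d] = hm[rng.integers(hms), d]
                if rng.random() < par:                     # pitch adjustment
                    new[d] += bw * (hi[d] - lo[d]) * (2 * rng.random() - 1)
            else:                                          # random selection
                new[d] = lo[d] + rng.random() * (hi[d] - lo[d])
        new = np.clip(new, lo, hi)
        worst = np.argmax(fit)
        fn = f(new)
        if fn < fit[worst]:                                # replace the worst harmony
            hm[worst], fit[worst] = new, fn
    best = np.argmin(fit)
    return hm[best], fit[best]
```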

  11. Motion tolerant iterative reconstruction algorithm for cone-beam helical CT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Takahashi, Hisashi; Goto, Taiga; Hirokawa, Koichi; Miyazaki, Osamu [Hitachi Medical Corporation, Chiba-ken (Japan). CT System Div.

    2011-07-01

    We have developed a new advanced iterative reconstruction algorithm for cone-beam helical CT. The features of this algorithm are: (a) it uses the separable paraboloidal surrogate (SPS) technique as a foundation for reconstruction to reduce noise and cone-beam artifacts, and (b) it uses a view weight in the back-projection process to reduce motion artifacts. To confirm the improvement of our proposed algorithm over existing algorithms, such as Feldkamp-Davis-Kress (FDK) or SPS, we compared motion artifact reduction, image noise reduction (standard deviation of CT number), and cone-beam artifact reduction on simulated and clinical data sets. Our results demonstrate that the proposed algorithm dramatically reduces motion artifacts compared with the SPS algorithm, and decreases image noise compared with the FDK algorithm. In addition, the proposed algorithm potentially improves the time resolution of iterative reconstruction. (orig.)

  12. Evolutionary Algorithms For Neural Networks Binary And Real Data Classification

    Directory of Open Access Journals (Sweden)

    Dr. Hanan A.R. Akkar

    2015-08-01

    Artificial neural networks are complex networks emulating the way neurons in the human brain process data. They have been widely used in prediction, clustering, classification, and association. The training algorithms used to determine the network weights are among the most important factors influencing neural network performance. Recently, many meta-heuristic and evolutionary algorithms have been employed to optimize neural network weights in order to achieve better performance. This paper uses recently proposed algorithms for optimizing neural network weights and compares their performance with that of classical meta-heuristic algorithms used for the same purpose. To evaluate the performance of such algorithms for training neural networks, we examine them on the classification of four opposite binary XOR clusters and on the classification of continuous real data sets such as Iris and Ecoli.

  13. Reconstructing Generalized Logical Networks of Transcriptional Regulation in Mouse Brain from Temporal Gene Expression Data

    Energy Technology Data Exchange (ETDEWEB)

    Song, Mingzhou (Joe) [New Mexico State University, Las Cruces; Lewis, Chris K. [New Mexico State University, Las Cruces; Lance, Eric [New Mexico State University, Las Cruces; Chesler, Elissa J [ORNL; Kirova, Roumyana [Bristol-Myers Squibb Pharmaceutical Research & Development, NJ; Langston, Michael A [University of Tennessee, Knoxville (UTK); Bergeson, Susan [Texas Tech University, Lubbock

    2009-01-01

    The problem of reconstructing generalized logical networks to account for temporal dependencies among genes and environmental stimuli from high-throughput transcriptomic data is addressed. A network reconstruction algorithm was developed that uses the statistical significance as a criterion for network selection to avoid false-positive interactions arising from pure chance. Using temporal gene expression data collected from the brains of alcohol-treated mice in an analysis of the molecular response to alcohol, this algorithm identified genes from a major neuronal pathway as putative components of the alcohol response mechanism. Three of these genes have known associations with alcohol in the literature. Several other potentially relevant genes, highlighted and agreeing with independent results from literature mining, may play a role in the response to alcohol. Additional, previously-unknown gene interactions were discovered that, subject to biological verification, may offer new clues in the search for the elusive molecular mechanisms of alcoholism.

  14. Efficient discrete cosine transform model-based algorithm for photoacoustic image reconstruction

    Science.gov (United States)

    Zhang, Yan; Wang, Yuanyuan; Zhang, Chen

    2013-06-01

    The model-based algorithm is an effective reconstruction method for photoacoustic imaging (PAI). Compared with analytical reconstruction algorithms, the model-based algorithm is able to provide a more accurate, higher-resolution reconstructed image. However, its relatively heavy computational complexity and huge memory storage requirement often restrict its applications. We incorporate the discrete cosine transform (DCT) into PAI reconstruction and establish a new photoacoustic model, with which an efficient reconstruction algorithm is proposed. Only the relatively significant DCT coefficients of the measured signals are used to reconstruct the image, so that computation can be saved. The theoretical computational complexity of the proposed algorithm is derived, showing that the proposed method is computationally efficient. The proposed algorithm is also verified through numerical simulations and in vitro experiments. Compared with previously developed model-based methods, the proposed algorithm provides an equivalent reconstruction at a much lower time cost. From the theoretical analysis and the experimental results, it can be concluded that model-based PAI reconstruction can be accelerated by the proposed algorithm, enhancing the practical applicability of PAI.

  15. The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation.

    Science.gov (United States)

    Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut

    2014-06-01

    Analysis methods for electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome, and a maneuver with step increases of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton-Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) algorithm with a circular forward model (GR(C)); and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from routine computed tomography images. Five indices were calculated on the resulting EIT images: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index; and (e) ventilation delay in mid-dorsal regions. No significant differences were found in any of the examined indices among the four reconstruction algorithms (p > 0.2, Kruskal-Wallis test). The examined reconstruction algorithms thus do not influence the selected indices derived from EIT image analysis, and indices validated for images from one reconstruction algorithm are also valid for the other reconstruction algorithms.
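
    Of the five indices, (d) has a commonly used closed form: the global inhomogeneity (GI) index is the sum of absolute deviations of the tidal impedance values from their median, normalized by the total tidal impedance. A sketch with illustrative argument names:

```python
import numpy as np

def gi_index(tidal_image, lung_mask):
    """tidal_image: (H, W) tidal impedance change; lung_mask: boolean (H, W).
    GI = sum(|TV - median(TV)|) / sum(TV) over lung pixels."""
    tv = tidal_image[lung_mask]
    return float(np.sum(np.abs(tv - np.median(tv))) / np.sum(tv))
```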

  16. Reconstruction of a random phase dynamics network from observations

    Science.gov (United States)

    Pikovsky, A.

    2018-01-01

    We consider networks of coupled phase oscillators of different complexity: Kuramoto-Daido-type networks, generalized Winfree networks, and hypernetworks with triple interactions. For these setups an inverse problem of reconstruction of the network connections and of the coupling function from the observations of the phase dynamics is addressed. We show how a reconstruction based on the minimization of the squared error can be implemented in all these cases. Examples include random networks with full disorder both in the connections and in the coupling functions, as well as networks where the coupling functions are taken from experimental data of electrochemical oscillators. The method can be directly applied to asynchronous dynamics of units, while in the case of synchrony, additional phase resettings are necessary for reconstruction.
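
    A hedged sketch of the squared-error reconstruction idea for a Kuramoto-type network: regress each oscillator's phase velocity on first-harmonic functions of the pairwise phase differences and read coupling strengths off the fitted coefficients. The first-harmonic restriction and all names are simplifying assumptions; the paper handles richer coupling functions and hypernetworks.

```python
import numpy as np

def reconstruct_phase_network(phi, dt):
    """phi: (T, N) unwrapped phases; dt: sampling step.
    Returns K, an (N, N) matrix of estimated coupling amplitudes j -> i."""
    dphi = np.gradient(phi, dt, axis=0)            # phase velocities
    T, N = phi.shape
    K = np.zeros((N, N))
    for i in range(N):
        diffs = phi - phi[:, [i]]                  # phi_j - phi_i for all j, shape (T, N)
        F = np.hstack([np.sin(diffs), np.cos(diffs), np.ones((T, 1))])
        c, *_ = np.linalg.lstsq(F, dphi[:, i], rcond=None)   # squared-error fit
        K[i] = np.hypot(c[:N], c[N:2 * N])         # amplitude of the j -> i coupling term
        K[i, i] = 0.0                              # drop the degenerate self term
    return K
```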

  17. A Decomposition Algorithm for Learning Bayesian Network Structures from Data

    DEFF Research Database (Denmark)

    Zeng, Yifeng; Cordero Hernandez, Jorge

    2008-01-01

    Learning a large Bayesian network from a small data set is a challenging task. Most conventional structural learning approaches run into computational as well as statistical problems. We propose a decomposition algorithm for structure construction without having to learn the complete network. The new learning algorithm first finds local components from the data, and then recovers the complete network by joining the learned components. We show the empirical performance of the decomposition algorithm on several benchmark networks.

  18. Quantitative analysis of emphysema and airway measurements according to iterative reconstruction algorithms: comparison of filtered back projection, adaptive statistical iterative reconstruction and model-based iterative reconstruction.

    Science.gov (United States)

    Choo, Ji Yung; Goo, Jin Mo; Lee, Chang Hyun; Park, Chang Min; Park, Sang Joon; Shim, Mi-Suk

    2014-04-01

    To evaluate filtered back projection (FBP) and two iterative reconstruction (IR) algorithms and their effects on the quantitative analysis of lung parenchyma and airway measurements on computed tomography (CT) images. Low-dose chest CT scans obtained in 281 adult patients were reconstructed using three algorithms: FBP, adaptive statistical IR (ASIR) and model-based IR (MBIR). Measurements of each dataset were compared: total lung volume, emphysema index (EI), and airway measurements of the lumen and wall area as well as average wall thickness. The accuracy of the airway measurements of each algorithm was also evaluated using an airway phantom. EI using a threshold of -950 HU was significantly different among the three algorithms, in decreasing order of FBP (2.30 %), ASIR (1.49 %) and MBIR (1.20 %). Average wall thickness also differed significantly among the algorithms, with FBP (2.09 mm) demonstrating thicker walls than ASIR (2.00 mm) and MBIR (1.88 mm). Phantom analysis revealed that MBIR showed the most accurate values for the airway measurements. The three algorithms presented different EIs and wall thicknesses, decreasing in the order of FBP, ASIR and MBIR. Thus, care should be taken in selecting the appropriate IR algorithm for quantitative analysis of the lung. • Computed tomography is increasingly used to provide objective measurements of intra-thoracic structures. • Iterative reconstruction algorithms can affect quantitative measurements of lung and airways. • Care should be taken in selecting reconstruction algorithms in longitudinal analysis. • Model-based iterative reconstruction seems to provide the most accurate airway measurements.

  19. MR Image Reconstruction Based on Iterative Split Bregman Algorithm and Nonlocal Total Variation

    Directory of Open Access Journals (Sweden)

    Varun P. Gopi

    2013-01-01

    A reconstruction method is presented that combines a nonlocal total variation regularization term and a least-squares data-fitting term to reconstruct MR images from undersampled k-space data. The nonlocal total variation is taken as the L1-regularization functional and solved using Split Bregman iteration. The proposed algorithm is compared with previous methods in terms of reconstruction accuracy and computational complexity. The comparison results demonstrate the superiority of the proposed algorithm for compressed MR image reconstruction.

  20. X-Ray Dose Reduction in Abdominal Computed Tomography Using Advanced Iterative Reconstruction Algorithms

    OpenAIRE

    Peigang Ning; Shaocheng Zhu; Dapeng Shi; Ying Guo; Minghua Sun

    2014-01-01

    OBJECTIVE: This work aims to explore the effects of adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR) algorithms in reducing computed tomography (CT) radiation dosages in abdominal imaging. METHODS: CT scans on a standard male phantom were performed at different tube currents. Images at the different tube currents were reconstructed with the filtered back-projection (FBP), 50% ASiR and MBIR algorithms and compared. The CT value, image noise a...

  1. Time Reversal Reconstruction Algorithm Based on PSO Optimized SVM Interpolation for Photoacoustic Imaging

    Directory of Open Access Journals (Sweden)

    Mingjian Sun

    2015-01-01

    Photoacoustic imaging is an innovative technique for imaging biomedical tissues. The time reversal reconstruction algorithm, in which a numerical model of the acoustic forward problem is run backwards in time, is widely used. In this paper, a time reversal reconstruction algorithm based on particle swarm optimization (PSO) optimized support vector machine (SVM) interpolation is proposed for photoacoustic imaging. Numerical results show that the reconstructed images of the proposed algorithm are more accurate than those of time reversal algorithms based on nearest neighbor, linear, or cubic convolution interpolation, providing higher imaging quality from significantly fewer measurement positions or scanning times.

  2. CMIP: a software package capable of reconstructing genome-wide regulatory networks using gene expression data.

    Science.gov (United States)

    Zheng, Guangyong; Xu, Yaochen; Zhang, Xiujun; Liu, Zhi-Ping; Wang, Zhuo; Chen, Luonan; Zhu, Xin-Guang

    2016-12-23

    A gene regulatory network (GRN) represents the interactions of genes inside a cell or tissue, in which vertices and edges stand for genes and their regulatory interactions, respectively. Reconstruction of gene regulatory networks, in particular genome-scale networks, is essential for comparative exploration of different species and mechanistic investigation of biological processes. Currently, most network inference methods are computationally intensive: they are usually effective for small-scale tasks (e.g., networks with a few hundred genes) but struggle to construct GRNs at genome scale. Here, we present a software package for gene regulatory network reconstruction at a genomic level, in which gene interaction is measured by conditional mutual information within a parallel computing framework (hence the package is named CMIP). The package is a greatly improved implementation of our previous PCA-CMI algorithm. In CMIP, we provide not only an automatic threshold determination method but also an effective parallel computing framework for network inference. Performance tests on benchmark datasets show that the accuracy of CMIP is comparable to most current network inference methods. Moreover, running tests on synthetic datasets demonstrate that CMIP can handle large datasets, especially genome-wide datasets, within an acceptable time period. In addition, successful application to a real genomic dataset confirms the practical applicability of the package. This new software package provides a powerful tool for genomic network reconstruction for the biological community. The software can be accessed at http://www.picb.ac.cn/CMIP/ .

  3. SA-SOM algorithm for detecting communities in complex networks

    Science.gov (United States)

    Chen, Luogeng; Wang, Yanran; Huang, Xiaoming; Hu, Mengyu; Hu, Fang

    2017-10-01

    Community detection is currently a hot topic. Based on the self-organizing map (SOM) algorithm, this paper introduces the idea of self-adaptation (SA), by which the number of communities can be identified automatically, and proposes a novel algorithm, SA-SOM, for detecting communities in complex networks. Several representative real-world networks and a set of computer-generated networks produced by the LFR benchmark are utilized to verify the accuracy and efficiency of this algorithm. The experimental findings demonstrate that the algorithm can identify communities automatically, accurately and efficiently, and that it also achieves higher values of modularity, NMI and density than the SOM algorithm does.

  4. Reconstructing Generalized Logical Networks of Transcriptional Regulation in Mouse Brain from Temporal Gene Expression Data

    Directory of Open Access Journals (Sweden)

    Lodowski Kerrie H

    2009-01-01

    Gene expression time course data can be used not only to detect differentially expressed genes but also to find temporal associations among genes. The problem of reconstructing generalized logical networks to account for temporal dependencies among genes and environmental stimuli from transcriptomic data is addressed. A network reconstruction algorithm was developed that uses statistical significance as a criterion for network selection to avoid false-positive interactions arising from pure chance. The multinomial hypothesis testing-based network reconstruction allows for explicit specification of the false-positive rate, unique from all extant network inference algorithms. The method is superior to dynamic Bayesian network modeling in a simulation study. Temporal gene expression data from the brains of alcohol-treated mice in an analysis of the molecular response to alcohol are used for modeling. Genes from major neuronal pathways are identified as putative components of the alcohol response mechanism. Nine of these genes have associations with alcohol reported in literature. Several other potentially relevant genes, compatible with independent results from literature mining, may play a role in the response to alcohol. Additional, previously unknown gene interactions were discovered that, subject to biological verification, may offer new clues in the search for the elusive molecular mechanisms of alcoholism.

  5. A Superresolution Image Reconstruction Algorithm Based on Landweber in Electrical Capacitance Tomography

    Directory of Open Access Journals (Sweden)

    Chen Deyun

    2013-01-01

    Because image reconstruction accuracy in electrical capacitance tomography is limited by the "soft field" nature of the sensing field and the ill-conditioning of the inverse problem, a superresolution image reconstruction algorithm based on Landweber iteration is proposed in this paper, building on the working principle of the electrical capacitance tomography system. The method derives a closed-form solution by regularization and a fast Fourier transform of the convolution kernel, which ensures the certainty of the solution and improves the stability and quality of the reconstruction results. Simulation results show that the imaging precision and real-time performance of the algorithm are better than those of the Landweber algorithm, and the work offers a new approach to electrical capacitance tomography image reconstruction.
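
    For context, the basic Landweber iteration that the proposed superresolution algorithm builds on can be sketched for a linearized ECT model y = S g as follows; the matrix names, the starting point and the projection onto [0, 1] are conventional assumptions, not details from the paper.

```python
import numpy as np

def landweber(S, y, n_iter=200, alpha=None):
    """S: (m, n) sensitivity matrix; y: (m,) normalized capacitance data."""
    if alpha is None:
        alpha = 1.0 / np.linalg.norm(S, 2) ** 2   # step size below 2 / ||S||_2^2
    g = S.T @ y                                   # common back-projection start
    for _ in range(n_iter):
        g = g + alpha * S.T @ (y - S @ g)         # gradient step on ||y - S g||^2
        g = np.clip(g, 0.0, 1.0)                  # physical bounds on permittivity
    return g
```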

  6. PARALLELISATION OF THE MODEL-BASED ITERATIVE RECONSTRUCTION ALGORITHM DIRA.

    Science.gov (United States)

    Örtenberg, A; Magnusson, M; Sandborg, M; Alm Carlsson, G; Malusek, A

    2016-06-01

    New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPU). Despite their obvious benefits, the parallelisation of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelisation of the model-based iterative reconstruction algorithm DIRA, with the aim of significantly shortening the code's execution time. Selected routines were parallelised using the OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelisation of the code with OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelisation with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures; the resulting speedup was substantially lower than the theoretical peak performance of the GPU, and the cause is explained.

  7. Reconstructing Networks from Profit Sequences in Evolutionary Games via a Multiobjective Optimization Approach with Lasso Initialization

    Science.gov (United States)

    Wu, Kai; Liu, Jing; Wang, Shuai

    2016-11-01

    Evolutionary games (EG) model a common type of interactions in various complex, networked, natural and social systems. Given such a system with only profit sequences being available, reconstructing the interacting structure of EG networks is fundamental to understand and control its collective dynamics. Existing approaches used to handle this problem, such as the lasso, a convex optimization method, need a user-defined constant to control the tradeoff between the natural sparsity of networks and measurement error (the difference between observed data and simulated data). However, a shortcoming of these approaches is that it is not easy to determine these key parameters which can maximize the performance. In contrast to these approaches, we first model the EG network reconstruction problem as a multiobjective optimization problem (MOP), and then develop a framework which involves multiobjective evolutionary algorithm (MOEA), followed by solution selection based on knee regions, termed as MOEANet, to solve this MOP. We also design an effective initialization operator based on the lasso for MOEA. We apply the proposed method to reconstruct various types of synthetic and real-world networks, and the results show that our approach is effective to avoid the above parameter selecting problem and can reconstruct EG networks with high accuracy.

  8. Multiview and light-field reconstruction algorithms for 360° multiple-projector-type 3D display.

    Science.gov (United States)

    Zhong, Qing; Peng, Yifan; Li, Haifeng; Su, Chen; Shen, Weidong; Liu, Xu

    2013-07-01

    Both multiview and light-field reconstructions are proposed for a multiple-projector 3D display system. To compare the performance of the reconstruction algorithms in the same system, an optimized multiview reconstruction algorithm with sub-view-zones (SVZs) is proposed. The algorithm divided the conventional view zones in multiview display into several SVZs and allocates more view images. The optimized reconstruction algorithm unifies the conventional multiview reconstruction and light-field reconstruction algorithms, which can indicate the difference in performance when multiview reconstruction is changed to light-field reconstruction. A prototype consisting of 60 projectors with an arc diffuser as its screen is constructed to verify the algorithms. Comparison of different configurations of SVZs shows that light-field reconstruction provides large-scale 3D images with the smoothest motion parallax; thus it may provide better overall performance for large-scale 360° display than multiview reconstruction.

  9. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment

    Science.gov (United States)

    2014-01-01

    Background To improve the tedious task of reconstructing gene networks by experimentally testing the possible interactions between genes, it has become a trend to adopt an automated reverse engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks with an evolutionary algorithm, it is necessary to address two important issues: premature convergence and high computational cost. To tackle the former problem and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and to speed up the computation, the mechanism of cloud computing is a promising solution, the most popular method being the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms that infer large gene networks. Results This work presents a practical framework to infer large gene networks, by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results analyzed; they show that our parallel approach can successfully be used to infer networks with desired behaviors and that the computation time can be largely reduced. Conclusions Parallel population-based algorithms can effectively determine network parameters and they perform better than the widely-used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel model population-based optimization method and the parallel

  10. Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle-Pock algorithm.

    Science.gov (United States)

    Sidky, Emil Y; Jørgensen, Jakob H; Pan, Xiaochuan

    2012-05-21

    The primal-dual optimization algorithm developed in Chambolle and Pock (CP) (2011 J. Math. Imag. Vis. 40 1-26) is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction. This algorithm allows for rapid prototyping of optimization problems for the purpose of designing iterative image reconstruction algorithms for CT. The primal-dual algorithm is briefly summarized in this paper, and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT. An example application modeling breast CT with low-intensity x-ray illumination is presented.
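
    As a hedged illustration of what deriving a CP algorithm instance involves, the sketch below instantiates the primal-dual update for one simple problem, min_x 0.5||Ax - b||^2 subject to x >= 0, using the proximal steps for F(y) = 0.5||y - b||^2 and the nonnegativity indicator G. This is a toy instance, not any specific instance derived in the paper.

```python
import numpy as np

def chambolle_pock_nonneg_ls(A, b, n_iter=500):
    """CP iteration for min_x 0.5||Ax - b||^2 s.t. x >= 0."""
    L = np.linalg.norm(A, 2)                 # ||A||_2; tau*sigma*L^2 <= 1 required
    tau = sigma = 1.0 / L
    x = np.zeros(A.shape[1]); x_bar = x.copy()
    y = np.zeros(A.shape[0])
    for _ in range(n_iter):
        y = (y + sigma * (A @ x_bar - b)) / (1.0 + sigma)   # prox of sigma*F*
        x_new = np.maximum(x - tau * (A.T @ y), 0.0)        # prox of tau*G: projection
        x_bar = 2 * x_new - x                               # over-relaxation, theta = 1
        x = x_new
    return x
```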

  11. Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle–Pock algorithm

    DEFF Research Database (Denmark)

    Sidky, Emil Y.; Jørgensen, Jakob Heide; Pan, Xiaochuan

    2012-01-01

    The primal–dual optimization algorithm developed in Chambolle and Pock (CP) (2011 J. Math. Imag. Vis. 40 1–26) is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction. This algorithm allows for rapid prototyping of optimization problems for the purpose of designing iterative image reconstruction algorithms for CT. The primal–dual algorithm is briefly summarized in this paper, and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT. An example application modeling breast CT with low-intensity x-ray illumination is presented.

  12. A simple and efficient algorithm for modeling modular complex networks

    Science.gov (United States)

    Kowalczyk, Mateusz; Fronczak, Piotr; Fronczak, Agata

    2017-09-01

    In this paper we introduce a new algorithm to generate networks in which node degrees and community sizes can follow any arbitrary distribution. We compare the quality and efficiency of the proposed algorithm with those of the well-known algorithm by Lancichinetti et al. In contrast to the latter, the new algorithm, at some cost in accuracy, can generate networks two orders of magnitude larger in a reasonable time, and it can be easily described analytically.
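
    The paper's algorithm is not reproduced in the abstract; the toy generator below only illustrates the task it solves, wiring a configuration-model-style graph in which community sizes and node degrees come from user-supplied samplers and a mixing fraction mu of each node's stubs leaves its community. Everything here, including the samplers, is an assumption.

```python
import numpy as np

def modular_network(size_sampler, degree_sampler, n_nodes, mu=0.1, seed=0):
    """Returns (community labels, undirected edge set) for a toy modular graph."""
    rng = np.random.default_rng(seed)
    comm, cid = [], 0
    while len(comm) < n_nodes:                  # assign nodes to communities
        comm += [cid] * int(size_sampler(rng))
        cid += 1
    comm = np.array(comm[:n_nodes])
    deg = np.array([int(degree_sampler(rng)) for _ in range(n_nodes)])
    edges = set()
    for u in range(n_nodes):
        inside = np.flatnonzero(comm == comm[u])
        outside = np.flatnonzero(comm != comm[u])
        for _ in range(deg[u]):                 # wire stubs, mu of them outside
            pool = outside if (rng.random() < mu and len(outside)) else inside
            v = int(pool[rng.integers(len(pool))])
            if v != u:
                edges.add((min(u, v), max(u, v)))
    return comm, edges

# Example with heavy-tailed samplers (hypothetical parameter choices):
# comm, edges = modular_network(lambda r: 20 + r.pareto(2.5) * 10,
#                               lambda r: 3 + r.pareto(2.0) * 2, 500)
```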

  13. Recurrent neural networks training with stable bounding ellipsoid algorithm.

    Science.gov (United States)

    Yu, Wen; de Jesús Rubio, José

    2009-06-01

    Bounding ellipsoid (BE) algorithms offer an attractive alternative to traditional training algorithms for neural networks, such as backpropagation and least squares methods. The benefits include high computational efficiency and fast convergence speed. In this paper, we propose an ellipsoid propagation algorithm to train the weights of recurrent neural networks for nonlinear system identification. Both hidden layers and output layers can be updated. The stability of the BE algorithm is proven.

  14. Training product unit neural networks with genetic algorithms

    Science.gov (United States)

    Janson, D. J.; Frenzel, J. F.; Thelen, D. C.

    1991-01-01

    The training of product unit neural networks using genetic algorithms is discussed. Two unusual neural network techniques are combined: product units are employed instead of the traditional summing units, and genetic algorithms train the network rather than backpropagation. As an example, a neural network is trained to calculate the optimum width of transistors in a CMOS switch. It is shown how local minima affect the performance of a genetic algorithm, and one method of overcoming this is presented.
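
    A toy sketch of the combination described above: product units compute prod_j x_j^(w_ij) (implemented via exp/log, so inputs are assumed positive), and a simple genetic algorithm with truncation selection, uniform crossover, and Gaussian mutation evolves the exponents and output weights instead of backpropagation. All hyperparameters and names are illustrative.

```python
import numpy as np

def predict(params, X, n_units):
    """Forward pass of a one-hidden-layer product unit network."""
    d = X.shape[1]
    W = params[: n_units * d].reshape(n_units, d)    # product-unit exponents
    v = params[n_units * d :]                        # linear output weights
    H = np.exp(np.log(np.abs(X) + 1e-9) @ W.T)       # prod_j x_j^(w_ij), x > 0 assumed
    return H @ v

def ga_train(X, y, n_units=3, pop=60, gens=300, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n_par = n_units * X.shape[1] + n_units
    P = rng.normal(0, 1, (pop, n_par))               # initial population
    for _ in range(gens):
        err = np.array([np.mean((predict(p, X, n_units) - y) ** 2) for p in P])
        elite = P[np.argsort(err)[: pop // 2]]       # truncation selection
        parents = elite[rng.integers(len(elite), size=(pop, 2))]
        mask = rng.random((pop, n_par)) < 0.5        # uniform crossover
        P = np.where(mask, parents[:, 0], parents[:, 1])
        P += rng.normal(0, sigma, P.shape)           # Gaussian mutation
        P[0] = elite[0]                              # elitism: keep the best
    err = np.array([np.mean((predict(p, X, n_units) - y) ** 2) for p in P])
    return P[np.argmin(err)]
```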

  15. Total variation based gradient descent algorithm for sparse-view photoacoustic image reconstruction.

    Science.gov (United States)

    Zhang, Yan; Wang, Yuanyuan; Zhang, Chen

    2012-12-01

    In photoacoustic imaging (PAI), reconstruction from sparse-view sampling data remains a challenge for fast or real-time imaging. In this paper, we present a total variation based gradient descent (TV-GD) algorithm for sparse-view PAI reconstruction. The algorithm draws on the total variation (TV) method from compressed sensing (CS) theory: the objective function is modified by adding the TV value of the reconstructed image, bringing the reconstruction closer to the real optical energy distribution map. Additionally, in the proposed algorithm the photoacoustic data are processed and the image is updated individually at each detection point; in this way, calculations with large matrices can be avoided and the image can be updated more frequently. Through numerical simulations, the proposed algorithm is verified and compared with other reconstruction algorithms widely used in PAI. The peak signal-to-noise ratio (PSNR) of the image reconstructed by this algorithm is higher than those of the other algorithms. The convergence of the algorithm, its robustness to noise and its tunable parameter are further discussed. The TV-based algorithm is also implemented in an in vitro experiment, where the better performance of the proposed method is again revealed. From these results, the TV-GD algorithm appears to be a practical and efficient algorithm for sparse-view PAI reconstruction.
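
    The per-detection-point update scheme is specific to the paper; the sketch below shows only a generic TV-penalized gradient descent for a linear model y = Ax, which is the core idea that the TV-GD algorithm refines. The step size, the forward-difference TV gradient and all names are assumptions.

```python
import numpy as np

def tv_grad(x2d, eps=1e-8):
    """Gradient of a smoothed isotropic TV of a 2-D image, via forward differences."""
    gx = np.diff(x2d, axis=1, append=x2d[:, -1:])
    gy = np.diff(x2d, axis=0, append=x2d[-1:, :])
    mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
    div_x = np.diff(gx / mag, axis=1, prepend=np.zeros((x2d.shape[0], 1)))
    div_y = np.diff(gy / mag, axis=0, prepend=np.zeros((1, x2d.shape[1])))
    return -(div_x + div_y)                         # minus divergence of unit gradient

def tv_gd(A, y, shape, lam=0.01, step=1e-3, n_iter=300):
    """Minimize 0.5||Ax - y||^2 + lam * TV(x) by plain gradient descent."""
    x = np.zeros(np.prod(shape))
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y) + lam * tv_grad(x.reshape(shape)).ravel()
        x -= step * grad
    return x.reshape(shape)
```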

  16. The Convolutional Visual Network for Identification and Reconstruction of NOvA Events

    Energy Technology Data Exchange (ETDEWEB)

    Psihas, Fernanda [Indiana U.

    2017-11-22

    In 2016 the NOvA experiment released results for the observation of oscillations in the νμ and νe channels as well as νe cross section measurements using neutrinos from Fermilab's NuMI beam. These and other measurements in progress rely on the accurate identification and reconstruction of the neutrino flavor and energy recorded by our detectors. This presentation describes the first application of convolutional neural network technology for event identification and reconstruction in particle detectors like NOvA. The Convolutional Visual Network (CVN) Algorithm was developed for identification, categorization, and reconstruction of NOvA events. It increased the selection efficiency of the νe appearance signal by 40% and studies show potential impact to the νμ disappearance analysis.

  17. CVN: A Convolutional Visual Network for Identification and Reconstruction of NOvA Events

    Science.gov (United States)

    Psihas, Fernanda; NOvA Collaboration

    2017-09-01

    In the past year, the NOvA experiment released results for the observation of neutrino oscillations in the νμ and νe channels as well as νe cross section measurements using neutrinos from Fermilab’s NuMI beam. These and other measurements in progress rely on the accurate identification and reconstruction of the neutrino flavor and energy recorded by our detectors. This presentation describes the first application of convolutional neural network technology for event identification and reconstruction in particle detectors such as NOvA. The Convolutional Visual Network (CVN) Algorithm was developed for identification, categorization, and reconstruction of NOvA events. It increased the selection efficiency of the νe appearance signal by 40% and studies show potential impact on the νμ disappearance analysis.

  18. The Convolutional Visual Network for Identification and Reconstruction of NOvA Events

    Science.gov (United States)

    Psihas, Fernanda; NOvA Collaboration

    2017-10-01

    In 2016 the NOvA experiment released results for the observation of oscillations in the νμ and νe channels as well as νe cross section measurements using neutrinos from Fermilab’s NuMI beam. These and other measurements in progress rely on the accurate identification and reconstruction of the neutrino flavor and energy recorded by our detectors. This presentation describes the first application of convolutional neural network technology for event identification and reconstruction in particle detectors like NOvA. The Convolutional Visual Network (CVN) Algorithm was developed for identification, categorization, and reconstruction of NOvA events. It increased the selection efficiency of the νe appearance signal by 40% and studies show potential impact on the νμ disappearance analysis.

  19. A Comparative Analysis of Community Detection Algorithms on Artificial Networks.

    Science.gov (United States)

    Yang, Zhao; Algesheimer, René; Tessone, Claudio J

    2016-08-01

    Many community detection algorithms have been developed to uncover the mesoscopic properties of complex networks. However, how good an algorithm is, in terms of accuracy and computing time, remains an open question. Testing algorithms on real-world networks has certain restrictions which make their insights potentially biased: the networks are usually small, and the underlying communities are not defined objectively. In this study, we employ the Lancichinetti-Fortunato-Radicchi benchmark graph to test eight state-of-the-art algorithms. We quantify the accuracy using complementary measures, along with the algorithms' computing time. Based on simple network properties and the aforementioned results, we provide guidelines that help to choose the most adequate community detection algorithm for a given network. Moreover, these rules allow uncovering limitations in the use of specific algorithms given macroscopic network properties. Our contribution is threefold: firstly, we provide actual techniques to determine which is the most suited algorithm in most circumstances based on observable properties of the network under consideration. Secondly, we use the mixing parameter as an easily measurable indicator for finding the ranges of reliability of the different algorithms. Finally, we study the dependency on network size, focusing on both the algorithms' predictive power and the effective computing time.
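
    This kind of benchmark-driven comparison can be reproduced in a few lines with NetworkX, which ships an LFR generator; the sketch below scores two built-in community detection algorithms against the planted partition using normalized mutual information (the parameter values follow the NetworkX documentation example, not the paper).

    ```python
    import networkx as nx
    from networkx.algorithms import community
    from sklearn.metrics import normalized_mutual_info_score

    # LFR benchmark graph with planted communities (illustrative parameters).
    G = nx.LFR_benchmark_graph(n=250, tau1=3, tau2=1.5, mu=0.1,
                               average_degree=5, min_community=20, seed=10)
    nodes = sorted(G)
    # Each node carries its planted community as a frozenset attribute.
    truth = [min(G.nodes[v]["community"]) for v in nodes]

    def labels_from_partition(partition, nodes):
        lab = {}
        for cid, comm in enumerate(partition):
            for v in comm:
                lab[v] = cid
        return [lab[v] for v in nodes]

    for name, algo in [("greedy modularity", community.greedy_modularity_communities),
                       ("label propagation", community.label_propagation_communities)]:
        found = labels_from_partition(list(algo(G)), nodes)
        print(name, normalized_mutual_info_score(truth, found))
    ```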

  20. Research on super-resolution image reconstruction based on an improved POCS algorithm

    Science.gov (United States)

    Xu, Haiming; Miao, Hong; Yang, Chong; Xiong, Cheng

    2015-07-01

    Super-resolution image reconstruction (SRIR) can improve the resolution of blurred images, addressing insufficient spatial resolution, excessive noise, and low image quality. Firstly, we introduce the image degradation model to reveal that the essence of the super-resolution reconstruction process is an ill-posed inverse problem in mathematics. Secondly, we analyze the causes of blurring in the optical imaging process, where light diffraction and small-angle scattering are the main contributors, and propose an image point spread function estimation method together with an improved projection onto convex sets (POCS) algorithm; its effectiveness is indicated by analyzing the changes between the time domain and the frequency domain during the reconstruction process, and we point out that the improved POCS algorithm, based on prior knowledge, is able to restore and approach the high-frequency content of the original scene. Finally, we apply the algorithm to reconstruct synchrotron radiation computed tomography (SRCT) images, and then use these images to reconstruct three-dimensional slice images. Comparing the original method with the super-resolution algorithm, it is evident that the improved POCS algorithm can restrain noise and enhance image resolution, indicating that the algorithm is effective. This study of super-resolution image reconstruction by the improved POCS algorithm proves it to be an effective method, with important significance and broad application prospects, for example in CT medical image processing and SRCT analysis of microstructure evolution mechanisms in ceramic sintering.
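
    For readers unfamiliar with the method, a generic POCS iteration alternates projections onto convex constraint sets until the estimate converges. The sketch below (NumPy) uses two illustrative sets, consistency with known Fourier samples and an amplitude bound; the paper's actual constraint sets built from the estimated point spread function are not reproduced.

    ```python
    import numpy as np

    def pocs_restore(observed_fft, known_mask, shape, iters=100):
        """Alternating projections: (1) enforce the measured Fourier samples
        (data-consistency set), (2) enforce nonnegativity and an amplitude
        bound (prior-knowledge set). Illustrative only."""
        x = np.zeros(shape)
        for _ in range(iters):
            X = np.fft.fft2(x)
            X[known_mask] = observed_fft[known_mask]   # projection onto set 1
            x = np.real(np.fft.ifft2(X))
            x = np.clip(x, 0.0, 1.0)                   # projection onto set 2
        return x

    # Tiny demo: recover an image from 30% of its Fourier samples.
    rng = np.random.default_rng(0)
    truth = np.clip(rng.random((64, 64)), 0, 1)
    mask = rng.random((64, 64)) < 0.3
    est = pocs_restore(np.fft.fft2(truth), mask, truth.shape)
    ```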

  1. Human metabolic network: reconstruction, simulation, and applications in systems biology.

    Science.gov (United States)

    Wu, Ming; Chan, Christina

    2012-03-02

    Metabolism is crucial to cell growth and proliferation. Deficiency or alterations in metabolic functions are known to be involved in many human diseases. Therefore, understanding the human metabolic system is important for the study and treatment of complex diseases. Current reconstructions of the global human metabolic network provide a computational platform to integrate genome-scale information on metabolism. The platform enables a systematic study of the regulation and is applicable to a wide variety of cases, wherein one could rely on in silico perturbations to predict novel targets, interpret systemic effects, and identify alterations in the metabolic states to better understand the genotype-phenotype relationships. In this review, we describe the reconstruction of the human metabolic network, introduce the constraint based modeling approach to analyze metabolic networks, and discuss systems biology applications to study human physiology and pathology. We highlight the challenges and opportunities in network reconstruction and systems modeling of the human metabolic system.
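
    Constraint-based modeling of the kind reviewed here typically reduces to flux balance analysis, a linear program over the stoichiometric matrix. Below is a minimal sketch with SciPy and a made-up two-metabolite toy network (the actual human reconstruction involves thousands of reactions).

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy stoichiometric matrix S (metabolites x reactions) and flux bounds.
    S = np.array([[1, -1,  0,  0],
                  [0,  1, -1, -1]], dtype=float)   # 2 metabolites, 4 reactions
    bounds = [(0, 10)] * 4                          # irreversible, capped fluxes

    # Maximize flux through the "biomass" reaction v3 (index 3):
    # linprog minimizes, so negate the objective.
    c = np.zeros(4); c[3] = -1.0

    res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]),
                  bounds=bounds, method="highs")    # steady state: S v = 0
    print("optimal biomass flux:", -res.fun, "flux vector:", res.x)
    ```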

  2. Local ROI Reconstruction via Generalized FBP and BPF Algorithms along More Flexible Curves

    Directory of Open Access Journals (Sweden)

    2006-01-01

    Full Text Available We study the local region-of-interest (ROI) reconstruction problem, also referred to as the local CT problem. Our scheme includes two steps: (a) the local truncated normal-dose projections are extended to a global dataset by combining a few global low-dose projections; (b) the ROI is reconstructed by either the generalized filtered backprojection (FBP) or backprojection-filtration (BPF) algorithms. The simulation results show that both the FBP and BPF algorithms can reconstruct satisfactory results with image quality in the ROI comparable to that of the corresponding global CT reconstruction.
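
    Step (b) reduces, in the simplest global case, to standard filtered backprojection. A minimal sketch with scikit-image is shown below; the projection-extension step (a) is only indicated by a comment, and the ROI coordinates are arbitrary.

    ```python
    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon

    phantom = shepp_logan_phantom()
    theta = np.linspace(0., 180., 180, endpoint=False)
    sinogram = radon(phantom, theta=theta)

    # In the paper's scheme, truncated normal-dose ROI projections would
    # first be extended with a few global low-dose projections before this
    # global filtered backprojection step.
    reco = iradon(sinogram, theta=theta, filter_name="ramp")
    roi = reco[140:260, 140:260]    # inspect an arbitrary region of interest
    ```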

  3. Reconstruction of metabolic networks from high-throughput metabolite profiling data: in silico analysis of red blood cell metabolism

    OpenAIRE

    Nemenman, Ilya; Escola, G. Sean; Hlavacek, William S.; Unkefer, Pat J.; Unkefer, Clifford J.; Wall, Michael E.

    2007-01-01

    We investigate the ability of algorithms developed for reverse engineering of transcriptional regulatory networks to reconstruct metabolic networks from high-throughput metabolite profiling data. For this, we generate synthetic metabolic profiles for benchmarking purposes based on a well-established model for red blood cell metabolism. A variety of data sets is generated, accounting for different properties of real metabolic networks, such as experimental noise, metabolite correlations, and t...
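
    The simplest baseline for this kind of reverse engineering is a relevance network, in which edges connect strongly correlated metabolites. A small NumPy sketch on synthetic profiles follows (threshold and data are illustrative; the paper benchmarks more sophisticated algorithms).

    ```python
    import numpy as np

    def correlation_network(profiles, threshold=0.8):
        """Infer an undirected network from metabolite profiles.

        profiles: array of shape (n_samples, n_metabolites); an edge links
        metabolites whose absolute Pearson correlation exceeds the threshold.
        """
        corr = np.corrcoef(profiles, rowvar=False)
        adj = np.abs(corr) > threshold
        np.fill_diagonal(adj, False)
        return adj

    # Synthetic example: metabolite 1 tracks metabolite 0, metabolite 2 is noise.
    rng = np.random.default_rng(1)
    m0 = rng.normal(size=200)
    profiles = np.column_stack([m0, m0 + 0.1 * rng.normal(size=200),
                                rng.normal(size=200)])
    print(correlation_network(profiles).astype(int))
    ```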

  4. An Efficient Hierarchy Algorithm for Community Detection in Complex Networks

    Directory of Open Access Journals (Sweden)

    Lili Zhang

    2014-01-01

    Full Text Available Community structure is one of the most fundamental and important topology characteristics of complex networks. The research on community structure has wide applications and is very important for analyzing the topology structure, understanding the functions, finding the hidden properties, and forecasting the time-varying behavior of networks. This paper analyzes some related algorithms and proposes a new CN agglomerative algorithm, based on graph theory and the local connectedness of the network, to find communities in networks. We show this algorithm is distributed and polynomial; meanwhile, the simulations show it is accurate and fine-grained. Furthermore, we modify this algorithm to obtain a modified CN algorithm and apply it to dynamic complex networks; the simulations verify that the modified CN algorithm also has high accuracy.

  5. Reconstruction of LGT networks from tri-LGT-nets.

    Science.gov (United States)

    Cardona, Gabriel; Pons, Joan Carles

    2017-12-01

    Phylogenetic networks have gained attention from the scientific community due to the evidence of the existence of evolutionary events that cannot be represented using trees. A variant of phylogenetic networks, called LGT networks, specifically models lateral gene transfer events, which cannot be properly represented with generic phylogenetic networks. In this paper we treat the problem of the reconstruction of LGT networks from substructures induced by three leaves, which we call tri-LGT-nets. We first restrict ourselves to a class of LGT networks that are both mathematically tractable and biologically significant, called BAN-LGT networks. Then, we study the decomposition of such networks into subnetworks with three leaves and ask whether or not this decomposition determines the network. The answer to this question is negative, but if we further impose time-consistency (species involved in a later gene transfer must coexist) the answer is affirmative, up to some redundancy that can never be recovered but is fully characterized.

  6. Missing and spurious interactions and the reconstruction of complex networks

    CERN Document Server

    Guimera, R; 10.1073/pnas.0908366106

    2010-01-01

    Network analysis is currently used in a myriad of contexts: from identifying potential drug targets to predicting the spread of epidemics and designing vaccination strategies, and from finding friends to uncovering criminal activity. Despite the promise of the network approach, the reliability of network data is a source of great concern in all fields where complex networks are studied. Here, we present a general mathematical and computational framework to deal with the problem of data reliability in complex networks. In particular, we are able to reliably identify both missing and spurious interactions in noisy network observations. Remarkably, our approach also enables us to obtain, from those noisy observations, network reconstructions that yield estimates of the true network properties that are more accurate than those provided by the observations themselves. Our approach has the potential to guide experiments, to better characterize network data sets, and to drive new discoveries.

  7. Comparison study of reconstruction algorithms for prototype digital breast tomosynthesis using various breast phantoms.

    Science.gov (United States)

    Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung

    2016-02-01

    Digital breast tomosynthesis (DBT) is a recently developed system for three-dimensional imaging that offers the potential to reduce the false positives of mammography by preventing tissue overlap. Many qualitative evaluations of digital breast tomosynthesis were previously performed by using a phantom with an unrealistic model and with heterogeneous background and noise, which is not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT by using various breast phantoms; validation was also performed by using patient images. DBT was performed by using a prototype unit that was optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered BP (FBP) algorithm, and an iterative expectation maximization (EM) algorithm. To compare the algorithms, three types of breast phantoms (homogeneous background phantom, heterogeneous background phantom, and anthropomorphic breast phantom) were evaluated, and clinical images were also reconstructed by using the different reconstruction algorithms. The in-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed based on reconstructed images of an anthropomorphic breast phantom. The clinical images were studied to validate the effect of reconstruction algorithms. The results showed that the CNRs of masses reconstructed by using the EM algorithm were slightly higher than those obtained by using the BP algorithm, whereas the FBP algorithm yielded much lower CNR due to its high fluctuations of background noise. The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, in the case of small-size and low
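
    The iterative EM algorithm compared in such studies is usually the maximum-likelihood EM (MLEM) update for a Poisson emission model. A generic NumPy sketch with an explicit system matrix is given below; real tomosynthesis systems use on-the-fly projectors rather than a stored matrix.

    ```python
    import numpy as np

    def mlem(A, y, n_iter=50):
        """MLEM reconstruction for a linear model y ~ Poisson(A x).

        The multiplicative update x <- x * A^T(y / Ax) / A^T 1 preserves
        nonnegativity and increases the Poisson likelihood at every step.
        """
        x = np.ones(A.shape[1])
        sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
        for _ in range(n_iter):
            proj = A @ x
            ratio = np.where(proj > 0, y / proj, 0.0)
            x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)
        return x

    # Tiny demo with a random system matrix and Poisson-sampled data.
    rng = np.random.default_rng(0)
    A = rng.random((120, 64))
    x_true = rng.random(64)
    x_rec = mlem(A, rng.poisson(A @ x_true).astype(float))
    ```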

  8. A practical local tomography reconstruction algorithm based on known subregion

    CERN Document Server

    Paleo, Pierre; Mirone, Alessandro

    2016-01-01

    We propose a new method to reconstruct data acquired in a local tomography setup. This method uses an initial reconstruction and refines it by correcting the low-frequency artifacts known as the cupping effect. A basis of Gaussian functions is used to correct the initial reconstruction. The coefficients of this basis are iteratively optimized under the constraint of a known subregion. Using a coarse basis reduces the degrees of freedom of the problem while actually correcting the cupping effect. Simulations show that the known-region constraint yields an unbiased reconstruction, in accordance with uniqueness theorems stated in local tomography.

  9. The reconstruction and analysis of tissue specific human metabolic networks.

    Science.gov (United States)

    Hao, Tong; Ma, Hong-Wu; Zhao, Xue-Ming; Goryanin, Igor

    2012-02-01

    Human tissues have distinct biological functions. Many proteins/enzymes are known to be expressed only in specific tissues, and therefore the metabolic networks in various tissues are different. Though high-quality global human metabolic networks and metabolic networks for certain tissues such as liver have already been studied, a systematic study of tissue specific metabolic networks for all main tissues is still missing. In this work, we reconstruct the tissue specific metabolic networks for 15 main tissues in humans based on the previously reconstructed Edinburgh Human Metabolic Network (EHMN). The tissue information is first obtained for enzymes from the Human Protein Reference Database (HPRD) and UniprotKB databases and then transferred to reactions through the enzyme-reaction relationships in EHMN. As our knowledge of the tissue distribution of proteins is still very limited, we replenish the tissue information of the metabolic network based on network connectivity analysis and a thorough examination of the literature. Finally, about 80% of proteins and reactions in EHMN are determined to be in at least one of the 15 tissues. To validate the quality of the tissue specific network, the brain specific metabolic network is taken as an example for functional module analysis, and the results reveal that the function of the brain metabolic network is closely related to its role as the centre of the human nervous system. The tissue specific human metabolic networks are available at .

  10. Study of Vivaldi Algorithm in Energy Constraint Networks

    Directory of Open Access Journals (Sweden)

    Tomas Handl

    2011-01-01

    Full Text Available The presented paper discusses the viability of the Vivaldi localization algorithm, and of synthetic coordinate systems in general, for localization purposes in energy-constrained networks. Synthetic coordinate systems achieve good results in IP-based networks and could thus be a promising way of localizing nodes in other types of networks. However, transferring the Vivaldi algorithm to a different kind of network is a difficult task because of the fundamentally different characteristics of the network and its nodes. In this paper we focus on the differing aspects of IP-based networks and wireless sensor networks, which suffer from strict energy limitations. In the course of our work we propose a modified version of the two-dimensional Vivaldi localization algorithm with a height system and develop a simulator tool for an initial investigation of its behavior in ad hoc energy-constrained networks.
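
    For context, the core Vivaldi update is a simple spring relaxation of a node's synthetic coordinate after each latency measurement. A 2D NumPy sketch without the height component follows (constants and data are illustrative, not the paper's modified algorithm).

    ```python
    import numpy as np

    def vivaldi_step(coords, i, j, rtt_ij, delta=0.25):
        """One Vivaldi update: node i adjusts its coordinate after measuring
        round-trip time to node j (plain 2D version, no height vector)."""
        diff = coords[i] - coords[j]
        dist = np.linalg.norm(diff) + 1e-12
        error = rtt_ij - dist                    # positive: nodes too close
        direction = diff / dist
        coords[i] += delta * error * direction   # spring relaxation
        return coords

    # Toy run: three nodes converging toward pairwise RTTs of a triangle.
    rng = np.random.default_rng(0)
    coords = rng.normal(size=(3, 2))
    rtt = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
    for _ in range(500):
        i, j = rng.integers(0, 3, size=2)
        if i != j:
            coords = vivaldi_step(coords, i, j, rtt[i, j])
    ```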

  11. A Location-Aware Vertical Handoff Algorithm for Hybrid Networks

    KAUST Repository

    Mehbodniya, Abolfazl

    2010-07-01

    One of the main objectives of wireless networking is to provide mobile users with a robust connection to different networks so that they can move freely between heterogeneous networks while running their computing applications with no interruption. Horizontal handoff, or generally speaking handoff, is a process which maintains a mobile user's active connection as it moves within a wireless network, whereas vertical handoff (VHO) refers to handover between different types of networks or different network layers. Optimizing the VHO process is an important issue, required to reduce network signalling and mobile device power consumption as well as to improve network quality of service (QoS) and grade of service (GoS). In this paper, a VHO algorithm in multitier (overlay) networks is proposed. This algorithm uses pattern recognition to estimate the user's position, and decides on the handoff based on this information. For the pattern recognition algorithm structure, the probabilistic neural network (PNN), which has considerable simplicity and efficiency over existing pattern classifiers, is used. Further optimization is proposed to improve the performance of the PNN algorithm. Performance analysis and comparisons with the existing VHO algorithm are provided and demonstrate a significant improvement with the proposed algorithm. Furthermore, incorporating the proposed algorithm, a structure is proposed for VHO from the medium access control (MAC) layer point of view.

  12. A CT Reconstruction Algorithm Based on L1/2 Regularization

    Directory of Open Access Journals (Sweden)

    Mianyi Chen

    2014-01-01

    Full Text Available Computed tomography (CT) reconstruction with low radiation dose is a significant research topic in the current medical CT field. Compressed sensing has shown great potential to reconstruct high-quality CT images from few-view or sparse-view data. In this paper, we use the sparser L1/2 regularization operator to replace the traditional L1 regularization, combined with the Split Bregman method, to reconstruct CT images; this approach has good unbiasedness and can accelerate iterative convergence. In reconstruction experiments with simulated and real projection data, we analyze the quality of images reconstructed using different methods at different projection angles and iteration numbers. Compared with algebraic reconstruction technique (ART) and total variation (TV) based approaches, the proposed reconstruction algorithm not only obtains better images of higher quality from few-view data but also needs fewer iterations.

  13. Filtering of measurement noise with the 3D reconstruction algorithm

    DEFF Research Database (Denmark)

    Cappellin, Cecilia; Pivnenko, Sergey

    2014-01-01

    Two different antenna models are set up in GRASP and CHAMP, and noise is added to the radiated field. The noisy field is then given as input to the 3D reconstruction of DIATOOL and the SWE coefficients and the far-field radiated by the reconstructed currents are compared with the noise-free results...

  14. An Approximate Cone Beam Reconstruction Algorithm for Gantry-Tilted CT Using Tangential Filtering

    Directory of Open Access Journals (Sweden)

    Ming Yan

    2006-01-01

    Full Text Available The FDK algorithm is a well-known 3D (three-dimensional) approximate algorithm for CT (computed tomography) image reconstruction and is also known to suffer from considerable artifacts when the scanning cone angle is large. Recently, it has been improved by performing the ramp filtering along the tangential direction of the X-ray source helix to deal with the large cone angle problem. In this paper, we present an FDK-type approximate reconstruction algorithm for gantry-tilted CT imaging. The proposed method improves the image reconstruction by filtering the projection data along a proper direction which is determined by the CT parameters and the gantry-tilted angle. As a result, the proposed algorithm for gantry-tilted CT reconstruction can provide more scanning flexibility in clinical CT scanning and is computationally efficient. The performance of the proposed algorithm is evaluated with the Turbell clock phantom and a thorax phantom and compared with the FDK algorithm and a popular 2D (two-dimensional) approximate algorithm. The results show that the proposed algorithm can achieve better image quality for gantry-tilted CT image reconstruction.

  15. A High-Efficiency Uneven Cluster Deployment Algorithm Based on Network Layered for Event Coverage in UWSNs

    Directory of Open Access Journals (Sweden)

    Shanen Yu

    2016-12-01

    Full Text Available Most existing deployment algorithms for event coverage in underwater wireless sensor networks (UWSNs) do not consider that network communication has non-uniform characteristics in three-dimensional underwater environments. Such deployment algorithms ignore that the nodes are distributed at different depths and have different probabilities of data acquisition, thereby leading to imbalances in the overall network energy consumption, decreasing the network performance, and resulting in poor and unreliable late network operation. Therefore, in this study, we propose an uneven cluster deployment algorithm based on network layering for event coverage. First, according to the energy consumption requirement of the communication load at different depths of the underwater network, we obtain the expected number of deployed nodes and the distribution density of each network layer after theoretical analysis and deduction. Afterward, the network is divided into multiple layers based on uneven clusters, and the heterogeneous communication radii of the nodes improve the network connectivity rate. A recovery strategy is used to balance the energy consumption of nodes in the cluster and can efficiently reconstruct the network topology, which ensures that the network maintains high coverage and connectivity rates over a long period of data acquisition. Simulation results show that the proposed algorithm improves network reliability and prolongs network lifetime by significantly reducing the blind movement of overall network nodes while maintaining high network coverage and connectivity rates.

  16. DART: a robust algorithm for fast reconstruction of three-dimensional grain maps

    DEFF Research Database (Denmark)

    Batenburg, K.J.; Sijbers, J.; Poulsen, Henning Friis

    2010-01-01

    A novel algorithm is introduced for fast and nondestructive reconstruction of grain maps from X-ray diffraction data. The discrete algebraic reconstruction technique (DART) takes advantage of the intrinsic discrete nature of grain maps, while being based on iterative algebraic methods known from classical tomography. To test the properties of the algorithm, three-dimensional X-ray diffraction microscopy data are simulated and reconstructed with DART as well as by a conventional iterative technique, namely SIRT (simultaneous iterative reconstruction technique). For 100 × 100 pixel reconstructions and moderate noise levels, DART is shown to generate essentially perfect two-dimensional grain maps for as few as three projections per grain, with running times on a PC in the range of less than a second. This is seen as opening up the possibility for fast reconstructions in connection with in situ studies.
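
    A much-simplified rendering of the DART idea, alternating algebraic (SIRT) updates with segmentation to the known discrete grey levels, is sketched below in NumPy; the real algorithm additionally restricts continuous updates to boundary pixels, which is omitted here for brevity.

    ```python
    import numpy as np

    def sirt_step(A, x, y, relax=0.5):
        # One SIRT update for the linear system A x = y.
        row_sums = np.maximum(A.sum(axis=1), 1e-12)
        col_sums = np.maximum(A.sum(axis=0), 1e-12)
        return x + relax * (A.T @ ((y - A @ x) / row_sums)) / col_sums

    def dart(A, y, grey_values, n_outer=10, n_inner=5):
        """Simplified DART loop: continuous SIRT iterations alternated with
        segmentation to the known discrete grey values."""
        levels = np.asarray(grey_values, dtype=float)
        x = np.zeros(A.shape[1])
        for _ in range(n_inner):
            x = sirt_step(A, x, y)
        for _ in range(n_outer):
            x = levels[np.argmin(np.abs(x[:, None] - levels), axis=1)]
            for _ in range(n_inner):
                x = sirt_step(A, x, y)
        return levels[np.argmin(np.abs(x[:, None] - levels), axis=1)]

    # Tiny demo: random binary object, random projection matrix.
    rng = np.random.default_rng(0)
    A = rng.random((80, 50))
    x_true = rng.integers(0, 2, 50).astype(float)
    x_hat = dart(A, A @ x_true, grey_values=[0.0, 1.0])
    ```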

  17. Enhanced Handover Decision Algorithm in Heterogeneous Wireless Network.

    Science.gov (United States)

    Abdullah, Radhwan Mohamed; Zukarnain, Zuriati Ahmad

    2017-07-14

    Transferring a huge amount of data between different network locations over the network links depends on the network's traffic capacity and data rate. Traditionally, vertical handover operations have considered only one criterion, namely the Received Signal Strength (RSS). The use of a single criterion may cause service interruption, an unbalanced network load and an inefficient vertical handover. In this paper, we propose an enhanced vertical handover decision algorithm based on multiple criteria in the heterogeneous wireless network. The algorithm considers three technology interfaces: Long-Term Evolution (LTE), Worldwide interoperability for Microwave Access (WiMAX) and Wireless Local Area Network (WLAN). It also employs three types of vertical handover decision algorithms: equal priority, mobile priority and network priority. The simulation results illustrate that the three types of decision algorithms outperform the traditional network decision algorithm in terms of the probability of handovers and the handover failure probability. In addition, it is noticed that the network priority handover decision algorithm produces better results compared to the equal priority and the mobile priority handover decision algorithms. Finally, the simulation results are validated by the analytical model.

  18. An Energy Efficient Multipath Routing Algorithm for Wireless Sensor Networks

    NARCIS (Netherlands)

    Dulman, S.O.; Wu Jian, W.J.; Havinga, Paul J.M.

    In this paper we introduce a new routing algorithm for wireless sensor networks. The aim of this algorithm is to provide on-demand multiple disjoint paths between a data source and a destination. Our Multipath On-Demand Routing Algorithm (MDR) improves the reliability of data routing in a wireless

  19. Array diagnostics, spatial resolution, and filtering of undesired radiation with the 3D reconstruction algorithm

    DEFF Research Database (Denmark)

    Cappellin, C.; Pivnenko, Sergey; Jørgensen, E.

    2013-01-01

    This paper focuses on three important features of the 3D reconstruction algorithm of DIATOOL: the identification of array elements improper functioning and failure, the obtainable spatial resolution of the reconstructed fields and currents, and the filtering of undesired radiation and scattering...

  20. Spectrum Assignment Algorithm for Cognitive Machine-to-Machine Networks

    Directory of Open Access Journals (Sweden)

    Soheil Rostami

    2016-01-01

    Full Text Available A novel aggregation-based spectrum assignment algorithm for Cognitive Machine-To-Machine (CM2M networks is proposed. The introduced algorithm takes practical constraints including interference to the Licensed Users (LUs, co-channel interference (CCI among CM2M devices, and Maximum Aggregation Span (MAS into consideration. Simulation results show clearly that the proposed algorithm outperforms State-Of-The-Art (SOTA algorithms in terms of spectrum utilisation and network capacity. Furthermore, the convergence analysis of the proposed algorithm verifies its high convergence rate.

  1. Adaptive clustering algorithm for community detection in complex networks

    Science.gov (United States)

    Ye, Zhenqing; Hu, Songnian; Yu, Jun

    2008-10-01

    Community structure is common in various real-world networks; methods or algorithms for detecting such communities in complex networks have attracted great attention in recent years. We introduce an adaptive clustering algorithm capable of extracting modules from complex networks with considerable accuracy and robustness. In this approach, each node in a network acts as an autonomous agent demonstrating flocking behavior where vertices always travel toward their preferable neighboring groups. An optimal modular structure can emerge from a collection of these active nodes during a self-organization process where vertices constantly regroup. In addition, we show that our algorithm appears advantageous over other competing methods (e.g., the Newman-fast algorithm) through intensive evaluation. The applications in three real-world networks demonstrate the superiority of our algorithm in finding communities that parallel the actual organization in reality.

  2. Damage detection and localization algorithm using a dense sensor network of thin film sensors

    Science.gov (United States)

    Downey, Austin; Ubertini, Filippo; Laflamme, Simon

    2017-04-01

    The authors have recently proposed a hybrid dense sensor network consisting of a novel, capacitive-based thin-film electronic sensor for monitoring strain on mesosurfaces and fiber Bragg grating sensors for enforcing boundary conditions on the perimeter of the monitored area. The thin-film sensor monitors local strain over a global area through transducing a change in strain into a change in capacitance. In the case of bidirectional in-plane strain, the sensor output contains the additive measurement of both principal strain components. When combined with the mature technology of fiber Bragg grating sensors, the hybrid dense sensor network shows potential for the monitoring of mesoscale systems. In this paper, we present an algorithm for the detection, quantification, and localization of strain within a hybrid dense sensor network. The algorithm leverages the advantages of a hybrid dense sensor network for the monitoring of large scale systems. The thin film sensor is used to monitor strain over a large area while the fiber Bragg grating sensors are used to enforce the uni-directional strain along the perimeter of the hybrid dense sensor network. Orthogonal strain maps are reconstructed by assuming different bidirectional shape functions and are solved using the least squares estimator to reconstruct the planar strain maps within the hybrid dense sensor network. Error between the estimated strain maps and measured strains is extracted to derive damage detecting features, dependent on the selected shape functions. Results from numerical simulations show good performance of the proposed algorithm.
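
    The least-squares step can be illustrated directly: fit a bidirectional polynomial shape function to the pointwise sensor strains and take the residual as a damage-sensitive feature. A NumPy sketch with a made-up monomial basis follows (the paper's shape functions and fiber Bragg grating boundary conditions are not reproduced).

    ```python
    import numpy as np

    def reconstruct_strain_map(xy, strains, order=2):
        """Least-squares fit of a bidirectional polynomial shape function
        to pointwise strain readings (illustrative form of the approach)."""
        x, y = xy[:, 0], xy[:, 1]
        # Design matrix of monomials x^p * y^q with p + q <= order.
        cols = [x**p * y**q for p in range(order + 1)
                            for q in range(order + 1 - p)]
        B = np.column_stack(cols)
        coeffs, *_ = np.linalg.lstsq(B, strains, rcond=None)
        return coeffs, B @ coeffs     # coefficients and fitted strains

    # Residual between fitted and measured strains is the damage feature.
    rng = np.random.default_rng(0)
    xy = rng.uniform(0, 1, size=(30, 2))
    strain = 5 + 2 * xy[:, 0] - 3 * xy[:, 1] ** 2 + 0.01 * rng.normal(size=30)
    coeffs, fitted = reconstruct_strain_map(xy, strain)
    residual = strain - fitted
    ```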

  3. A regularization-free Young's modulus reconstruction algorithm for ultrasound elasticity imaging.

    Science.gov (United States)

    Pan, Xiaochang; Gao, Jing; Shao, Jinhua; Luo, Jianwen; Bai, Jing

    2013-01-01

    Ultrasound elasticity imaging aims to reconstruct the distribution of elastic modulus (e.g., Young's modulus) within biological tissues, since the value of the elastic modulus is often related to pathological changes. Currently, most elasticity imaging algorithms face the challenge of choosing the value of the regularization constant. We propose a more applicable algorithm without the need for any regularization. This algorithm is not only simple to use, but also has a relatively high accuracy. Our method comprises a nonrigid registration technique and a tissue incompressibility assumption to estimate the two-dimensional (2D) displacement field, and the finite element method (FEM) to reconstruct the Young's modulus distribution. Simulation and phantom experiments are performed to evaluate the algorithm; the results show that the proposed algorithm can reconstruct the Young's modulus with an accuracy of 63∼85%.

  4. A Total Variation Regularization Based Super-Resolution Reconstruction Algorithm for Digital Video

    Directory of Open Access Journals (Sweden)

    Zhang Liangpei

    2007-01-01

    Full Text Available Super-resolution (SR reconstruction technique is capable of producing a high-resolution image from a sequence of low-resolution images. In this paper, we study an efficient SR algorithm for digital video. To effectively deal with the intractable problems in SR video reconstruction, such as inevitable motion estimation errors, noise, blurring, missing regions, and compression artifacts, the total variation (TV regularization is employed in the reconstruction model. We use the fixed-point iteration method and preconditioning techniques to efficiently solve the associated nonlinear Euler-Lagrange equations of the corresponding variational problem in SR. The proposed algorithm has been tested in several cases of motion and degradation. It is also compared with the Laplacian regularization-based SR algorithm and other TV-based SR algorithms. Experimental results are presented to illustrate the effectiveness of the proposed algorithm.

  5. A pre-identification for electron reconstruction in the CMS particle-flow algorithm

    CERN Document Server

    Pioppi, Michele

    2008-01-01

    In the CMS software, a dedicated electron track reconstruction algorithm, based on a Gaussian Sum Filter (GSF), is used. This algorithm is able to follow an electron along its complete path up to the electromagnetic calorimeter, even in the case of a large amount of Bremsstrahlung emission. Because of the significant CPU consumption of this algorithm, it can, however, only be run on a limited number of candidates. The standard GSF electron track reconstruction is triggered by the presence of high energy isolated electromagnetic clusters, but it is not suited for electrons in jets (usually soft and not isolated). A pre-identification algorithm based on both the tracker and the calorimeter was therefore recently developed. It allows electron tracks within jets to be efficiently reconstructed even for small electron transverse momentum. This algorithm, as well as its performance in terms of efficiency and mis-identification probability, is presented.

  6. A generic algorithm for layout of biological networks.

    Science.gov (United States)

    Schreiber, Falk; Dwyer, Tim; Marriott, Kim; Wybrow, Michael

    2009-11-12

    Biological networks are widely used to represent processes in biological systems and to capture interactions and dependencies between biological entities. Their size and complexity is steadily increasing due to the ongoing growth of knowledge in the life sciences. To aid understanding of biological networks several algorithms for laying out and graphically representing networks and network analysis results have been developed. However, current algorithms are specialized to particular layout styles and therefore different algorithms are required for each kind of network and/or style of layout. This increases implementation effort and means that new algorithms must be developed for new layout styles. Furthermore, additional effort is necessary to compose different layout conventions in the same diagram. Also the user cannot usually customize the placement of nodes to tailor the layout to their particular need or task and there is little support for interactive network exploration. We present a novel algorithm to visualize different biological networks and network analysis results in meaningful ways depending on network types and analysis outcome. Our method is based on constrained graph layout and we demonstrate how it can handle the drawing conventions used in biological networks. The presented algorithm offers the ability to produce many of the fundamental popular drawing styles while allowing the flexibility of constraints to further tailor these layouts.

  7. A generic algorithm for layout of biological networks

    Directory of Open Access Journals (Sweden)

    Dwyer Tim

    2009-11-01

    Full Text Available Abstract Background Biological networks are widely used to represent processes in biological systems and to capture interactions and dependencies between biological entities. Their size and complexity is steadily increasing due to the ongoing growth of knowledge in the life sciences. To aid understanding of biological networks several algorithms for laying out and graphically representing networks and network analysis results have been developed. However, current algorithms are specialized to particular layout styles and therefore different algorithms are required for each kind of network and/or style of layout. This increases implementation effort and means that new algorithms must be developed for new layout styles. Furthermore, additional effort is necessary to compose different layout conventions in the same diagram. Also the user cannot usually customize the placement of nodes to tailor the layout to their particular need or task and there is little support for interactive network exploration. Results We present a novel algorithm to visualize different biological networks and network analysis results in meaningful ways depending on network types and analysis outcome. Our method is based on constrained graph layout and we demonstrate how it can handle the drawing conventions used in biological networks. Conclusion The presented algorithm offers the ability to produce many of the fundamental popular drawing styles while allowing the flexibility of constraints to further tailor these layouts.

  8. Learning algorithms for feedforward networks based on finite samples

    Energy Technology Data Exchange (ETDEWEB)

    Rao, N.S.V.; Protopopescu, V.; Mann, R.C.; Oblow, E.M.; Iyengar, S.S.

    1994-09-01

    Two classes of convergent algorithms for learning continuous functions (and also regression functions) that are represented by feedforward networks, are discussed. The first class of algorithms, applicable to networks with unknown weights located only in the output layer, is obtained by utilizing the potential function methods of Aizerman et al. The second class, applicable to general feedforward networks, is obtained by utilizing the classical Robbins-Monro style stochastic approximation methods. Conditions relating the sample sizes to the error bounds are derived for both classes of algorithms using martingale-type inequalities. For concreteness, the discussion is presented in terms of neural networks, but the results are applicable to general feedforward networks, in particular to wavelet networks. The algorithms can be directly adapted to concept learning problems.
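
    The first class of algorithms, with unknown weights only in the output layer, admits a particularly simple stochastic-approximation update. Below is a NumPy sketch in that spirit, with fixed random hidden units and a 1/t step-size schedule satisfying the classical Robbins-Monro conditions (architecture and constants are illustrative, not the paper's exact scheme).

    ```python
    import numpy as np

    def train_output_layer(phi, X, y, n_steps=5000, a=1.0):
        """Stochastic approximation for f(x) = w . phi(x), where phi is a
        fixed vector of hidden-unit activations and only w is learned."""
        rng = np.random.default_rng(0)
        w = np.zeros(len(phi(X[0])))
        for t in range(1, n_steps + 1):
            i = rng.integers(len(X))
            feats = phi(X[i])
            err = w @ feats - y[i]
            # Step sizes a/t satisfy sum = infinity, sum of squares < infinity.
            w -= (a / t) * err * feats
        return w

    # Example with fixed random sigmoidal hidden units.
    rng = np.random.default_rng(1)
    W_h = rng.normal(size=(10, 2))
    phi = lambda x: np.tanh(W_h @ x)
    X = rng.normal(size=(500, 2))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)
    w = train_output_layer(phi, X, y)
    ```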

  9. ISTA-Net: Iterative Shrinkage-Thresholding Algorithm Inspired Deep Network for Image Compressive Sensing

    KAUST Repository

    Zhang, Jian

    2017-06-24

    Traditional methods for image compressive sensing (CS) reconstruction solve a well-defined inverse problem that is based on a predefined CS model, which defines the underlying structure of the problem and is generally solved by employing convergent iterative solvers. These optimization-based CS methods face the challenge of choosing optimal transforms and tuning parameters in their solvers, while also suffering from high computational complexity in most cases. Recently, some deep network based CS algorithms have been proposed to improve CS reconstruction performance, while dramatically reducing time complexity as compared to optimization-based methods. Despite their impressive results, the proposed networks (either with fully-connected or repetitive convolutional layers) lack any structural diversity and they are trained as a black box, void of any insights from the CS domain. In this paper, we combine the merits of both types of CS methods: the structure insights of optimization-based method and the performance/speed of network-based ones. We propose a novel structured deep network, dubbed ISTA-Net, which is inspired by the Iterative Shrinkage-Thresholding Algorithm (ISTA) for optimizing a general $l_1$ norm CS reconstruction model. ISTA-Net essentially implements a truncated form of ISTA, where all ISTA-Net parameters are learned end-to-end to minimize a reconstruction error in training. Borrowing more insights from the optimization realm, we propose an accelerated version of ISTA-Net, dubbed FISTA-Net, which is inspired by the fast iterative shrinkage-thresholding algorithm (FISTA). Interestingly, this acceleration naturally leads to skip connections in the underlying network design. Extensive CS experiments demonstrate that the proposed ISTA-Net and FISTA-Net outperform existing optimization-based and network-based CS methods by large margins, while maintaining a fast runtime.
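
    For reference, the classical ISTA iteration that ISTA-Net unrolls is only a few lines. Below is a NumPy sketch solving the l1-regularized least-squares model on a random sparse-recovery instance (problem sizes and the regularization weight are illustrative).

    ```python
    import numpy as np

    def soft_threshold(v, tau):
        # Proximal operator of the l1 norm.
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def ista(A, y, lam=0.1, n_iter=500):
        """Classical ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1;
        this is the iteration that ISTA-Net unrolls into a network."""
        L = np.linalg.norm(A, 2) ** 2     # Lipschitz constant of grad: ||A||^2
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = soft_threshold(x - (A.T @ (A @ x - y)) / L, lam / L)
        return x

    # Recover a sparse vector from random Gaussian measurements.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(60, 200)) / np.sqrt(60)
    x_true = np.zeros(200)
    x_true[rng.choice(200, 8, replace=False)] = rng.normal(size=8)
    x_hat = ista(A, A @ x_true, lam=0.01)
    ```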

  10. An artificial immune system algorithm approach for reconfiguring distribution network

    Science.gov (United States)

    Syahputra, Ramadoni; Soesanti, Indah

    2017-08-01

    This paper proposes an artificial immune system (AIS) algorithm approach for reconfiguring a distribution network in the presence of distributed generators (DG). A high-performance distribution network is one with low power loss, a good voltage profile, and balanced loading among feeders. The task of improving the performance of the distribution network is the optimization of the network configuration. This optimization has become a necessary study with the presence of DG throughout networks. In this work, optimization of the network configuration is based on an AIS algorithm. The methodology has been tested on a model of the IEEE 33-bus radial distribution network with and without DG integration. The results show that the optimal configuration of the distribution network is able to reduce power loss and to improve the voltage profile of the distribution network significantly.

  11. Mean Field Theory for Nonequilibrium Network Reconstruction

    DEFF Research Database (Denmark)

    Roudi, Yasser; Hertz, John

    2011-01-01

    We consider, as an example, the question of recovering the interactions in an asymmetrically-coupled, synchronously-updated SK model. We derive an exact iterative inversion algorithm and develop efficient approximations based on dynamical mean-field and TAP equations that express the interactions in terms of equal-time and time-delayed correlation functions.

  12. FAST ZEROX ALGORITHM FOR ROUTING IN OPTICAL MULTISTAGE INTERCONNECTION NETWORKS

    Directory of Open Access Journals (Sweden)

    T. D. Shahida

    2010-05-01

    Full Text Available Based on the ZeroX algorithm, a fast and efficient crosstalk-free time-domain algorithm called the Fast ZeroX, or shortly FastZ_X, algorithm is proposed for solving the optical crosstalk problem in optical Omega multistage interconnection networks. A new pre-routing technique called the inverse Conflict Matrix (iCM) is also introduced to map all possible conflicts identified between each node in the network, as another representation of the standard conflict matrix commonly used in previous Zero-based algorithms. It is shown that using the new iCM, the original ZeroX algorithm is simplified, thus improving the algorithm by reducing the time to complete the routing process. Through simulation modeling, the new approach yields the best performance in terms of minimal routing time in comparison to the original ZeroX algorithm as well as previous algorithms tested for comparison in this paper.

  13. Quasi Gradient Projection Algorithm for Sparse Reconstruction in Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Xin Meng

    2014-02-01

    Full Text Available Compressed sensing is a novel signal sampling theory under the condition that the signal is sparse or compressible. The existing recovery algorithms based on gradient projection either require prior knowledge or recover the signal poorly. In this paper, a new algorithm based on gradient projection is proposed, which is referred to as Quasi Gradient Projection. The algorithm presents a quasi-gradient direction and two step-size schemes along this direction. The algorithm does not need any prior knowledge of the original signal. Simulation results demonstrate that the presented algorithm can recover the signal more accurately than GPSR, which also needs no prior knowledge. Meanwhile, the algorithm has a lower computational complexity.

  14. Power control algorithms for mobile ad hoc networks

    Directory of Open Access Journals (Sweden)

    Nuraj L. Pradhan

    2011-07-01

    We will also focus on an adaptive distributed power management (DISPOW) algorithm as an example of the multi-parameter optimization approach, which manages the transmit power of nodes in a wireless ad hoc network to preserve network connectivity and cooperatively reduce interference. We will show that the algorithm, in a distributed manner, builds a unique stable network topology tailored to its surrounding node density and propagation environment over random topologies in a dynamic mobile wireless channel.

  15. A Network Selection Algorithm Considering Power Consumption in Hybrid Wireless Networks

    Science.gov (United States)

    Joe, Inwhee; Kim, Won-Tae; Hong, Seokjoon

    In this paper, we propose a novel network selection algorithm considering power consumption in hybrid wireless networks for vertical handover. CDMA, WiBro, and WLAN networks are candidate networks for this selection algorithm. The algorithm is composed of a power consumption prediction algorithm and a final network selection algorithm. The power consumption prediction algorithm estimates the expected lifetime of the mobile station based on the current battery level, traffic class and power consumption for each network interface card of the mobile station. If the expected lifetime of the mobile station in a certain network is not long enough compared with the handover delay, this particular network will be removed from the candidate network list, thereby preventing unnecessary handovers in the preprocessing procedure. On the other hand, the final network selection algorithm consists of AHP (Analytic Hierarchical Process) and GRA (Grey Relational Analysis). The global factors of the network selection structure are QoS, cost and lifetime. If the user preference is lifetime, our selection algorithm selects the network that offers the longest service duration due to low power consumption. Also, we conduct simulations using the OPNET simulation tool. The simulation results show that the proposed algorithm provides a longer lifetime in the hybrid wireless network environment.
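
    The AHP stage of such a selection algorithm reduces to extracting criteria weights from a pairwise-comparison matrix, conventionally via its principal eigenvector. A NumPy sketch with invented comparison values for the three global factors (QoS, cost, lifetime) follows.

    ```python
    import numpy as np

    def ahp_weights(pairwise):
        """Derive criteria weights from an AHP pairwise-comparison matrix as
        the normalized principal eigenvector (a standard AHP step; the
        comparison values below are made up for illustration)."""
        vals, vecs = np.linalg.eig(pairwise)
        principal = np.real(vecs[:, np.argmax(np.real(vals))])
        w = np.abs(principal)
        return w / w.sum()

    # Criteria order: QoS, cost, lifetime. Entry [i, j] says how much more
    # important criterion i is than criterion j for this (hypothetical) user.
    P = np.array([[1.0, 2.0, 0.5],
                  [0.5, 1.0, 1/3],
                  [2.0, 3.0, 1.0]])
    print(ahp_weights(P))   # lifetime receives the largest weight here
    ```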

  16. A low memory cost model based reconstruction algorithm exploiting translational symmetry for photoacoustic microscopy.

    Science.gov (United States)

    Aguirre, Juan; Giannoula, Alexia; Minagawa, Taisuke; Funk, Lutz; Turon, Pau; Durduran, Turgut

    2013-01-01

    A model based reconstruction algorithm that exploits translational symmetries for photoacoustic microscopy to drastically reduce the memory cost is presented. The memory size needed to store the model matrix is independent of the number of acquisitions at different positions. This helps us to overcome one of the main limitations of previous algorithms. Furthermore, using the algebraic reconstruction technique and building the model matrix "on the fly", we have obtained fast reconstructions of simulated and experimental data on both two- and three-dimensional grids using a traditional dark field photoacoustic microscope and a standard personal computer.

  17. Feed Forward Neural Network Algorithm for Frequent Patterns Mining

    OpenAIRE

    Dr. K.R.Pardasani; Sanjay Sharma; Amit Bhagat

    2010-01-01

    Association rule mining is used to find relationships among items in large data sets. Frequent pattern mining is an important aspect of association rule mining. In this paper, an efficient algorithm named Apriori-Feed Forward (AFF), based on the Apriori algorithm and the Feed Forward Neural Network, is presented to mine frequent patterns. The Apriori algorithm scans the database many times to generate frequent itemsets, whereas the Apriori-Feed Forward (AFF) algorithm scans the database only once. Computational resu...

  18. Implementation of a local principal curves algorithm for neutrino interaction reconstruction in a liquid argon volume

    Science.gov (United States)

    Back, J. J.; Barker, G. J.; Boyd, S. B.; Einbeck, J.; Haigh, M.; Morgan, B.; Oakley, B.; Ramachers, Y. A.; Roythorne, D.

    2014-03-01

    A local principal curve algorithm has been implemented in three dimensions for automated track and shower reconstruction of neutrino interactions in a liquid argon time projection chamber. We present details of the algorithm and characterise its performance on simulated data sets.

  19. Algorithms and software for total variation image reconstruction via first-order methods

    DEFF Research Database (Denmark)

    Dahl, Joahim; Hansen, Per Christian; Jensen, Søren Holdt

    2010-01-01

    This paper describes new algorithms and related software for total variation (TV) image reconstruction, more specifically: denoising, inpainting, and deblurring. The algorithms are based on one of Nesterov's first-order methods, tailored to the image processing applications in such a way that...

  20. SCENERY: a web application for (causal) network reconstruction from cytometry data

    KAUST Repository

    Papoutsoglou, Georgios

    2017-05-08

    Flow and mass cytometry technologies can probe proteins as biological markers in thousands of individual cells simultaneously, providing unprecedented opportunities for reconstructing networks of protein interactions through machine learning algorithms. The network reconstruction (NR) problem has been well-studied by the machine learning community. However, the potentials of available methods remain largely unknown to the cytometry community, mainly due to their intrinsic complexity and the lack of comprehensive, powerful and easy-to-use NR software implementations specific for cytometry data. To bridge this gap, we present Single CEll NEtwork Reconstruction sYstem (SCENERY), a web server featuring several standard and advanced cytometry data analysis methods coupled with NR algorithms in a user-friendly, on-line environment. In SCENERY, users may upload their data and set their own study design. The server offers several data analysis options categorized into three classes of methods: data (pre)processing, statistical analysis and NR. The server also provides interactive visualization and download of results as ready-to-publish images or multimedia reports. Its core is modular and based on the widely-used and robust R platform allowing power users to extend its functionalities by submitting their own NR methods. SCENERY is available at scenery.csd.uoc.gr or http://mensxmachina.org/en/software/.

  1. A novel weighted total difference based image reconstruction algorithm for few-view computed tomography.

    Directory of Open Access Journals (Sweden)

    Wei Yu

    Full Text Available In practical applications of computed tomography (CT) imaging, due to the risk of high radiation dose imposed on the patients, it is desired that high quality CT images can be accurately reconstructed from limited projection data. With limited projections, however, the reconstructed images often suffer from severe artifacts and the edges of the objects are blurred. In recent years, compressed sensing based reconstruction algorithms have attracted major attention for CT reconstruction from a limited number of projections. In this paper, to eliminate the streak artifacts and preserve the edge structure information of the object, we present a novel iterative reconstruction algorithm based on weighted total difference (WTD) minimization, and demonstrate the superior performance of this algorithm. The WTD measure enforces both the sparsity and the directional continuity in the gradient domain, while the conventional total difference (TD) measure simply enforces the gradient sparsity horizontally and vertically. To solve our WTD-based few-view CT reconstruction model, we use the soft-threshold filtering approach. Numerical experiments are performed to validate the efficiency and the feasibility of our algorithm. For a typical slice of the FORBILD head phantom, using 40 projections in the experiments, our algorithm outperforms the TD-based algorithm with more than 60% gains in terms of the root-mean-square error (RMSE), normalized root mean square distance (NRMSD) and normalized mean absolute distance (NMAD) measures and with more than 10% gains in terms of the peak signal-to-noise ratio (PSNR) measure. For the experiments with noisy projections, our algorithm outperforms the TD-based algorithm with more than 15% gains in terms of the RMSE, NRMSD and NMAD measures and with more than 4% gains in terms of the PSNR measure. The experimental results indicate that our algorithm achieves better performance in terms of suppressing streak artifacts and preserving the edge

  2. Congested Link Inference Algorithms in Dynamic Routing IP Network

    Directory of Open Access Journals (Sweden)

    Yu Chen

    2017-01-01

    Full Text Available The performance of current congested link inference algorithms, such as the classical CLINK algorithm, degrades noticeably in dynamic routing IP networks. To overcome this problem, based on the assumptions of the Markov property and time homogeneity, we build a Variable Structure Discrete Dynamic Bayesian (VSDDB) network simplified model of a dynamic routing IP network. Under the simplified VSDDB model, based on the Bayesian Maximum A Posteriori (BMAP) and Rest Bayesian Network Model (RBNM), we propose an Improved CLINK (ICLINK) algorithm. Considering that congestion of multiple links often occurs concurrently, we also propose the CLILRS (Congested Link Inference based on Lagrangian Relaxation Subgradient) algorithm to infer the set of congested links. We validate our results through experiments of analogy, simulation, and the actual Internet.

  3. Solving Hub Network Problem Using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Mursyid Hasan Basri

    2012-01-01

    Full Text Available This paper addresses a network problem described as follows. There are n ports that interact, and p of those will be designated as hubs. All hubs are fully interconnected. Each spoke will be allocated to only one of the available hubs. Direct connection between two spokes is allowed only if they are allocated to the same hub. The latter is a distinct characteristic that differs from a pure hub-and-spoke system, in which direct connection between two spokes is not allowed. The problem is where to locate the hub ports and to which hub each spoke should be allocated so that the total transportation cost is minimized. In the first model, some additional aspects are taken into consideration in order to achieve a better representation of the problem. First, weekly service should be accomplished. Second, various vessel types should be considered. Last, the concept of an inter-hub discount factor is introduced. Regarding the last aspect, it represents a cost reduction factor at hub ports due to economies of scale. In practice, it is common that the cost rate for inter-hub movement is less than the cost rate for movement between a hub and an origin/destination. In this first model, the inter-hub discount factor is assumed independent of the amount of flow on inter-hub links (denoted as the flow-independent discount policy). The results indicated that the patterns of enlargement of container ship size, to some degree, are similar to those in the Kurokawa study. However, with regard to hub locations, the results have not represented real practice. In the proposed model, the unsatisfactory result on hub locations is addressed. One aspect that could possibly be improved to find better hub locations is the inter-hub discount factor. The inter-hub discount factor is then assumed to depend on the amount of inter-hub flow (denoted as the flow-dependent discount policy). There are two discount functions examined in this paper. Both functions are characterized by

  4. Reconstruction and Application of Protein–Protein Interaction Network

    Directory of Open Access Journals (Sweden)

    Tong Hao

    2016-06-01

    Full Text Available The protein-protein interaction network (PIN) is a useful tool for systematic investigation of the complex biological activities in the cell. With increasing interest in proteome-wide interaction networks, PINs have been reconstructed for many species, including viruses, bacteria, plants, animals, and humans. With the development of biological techniques, the reconstruction methods for PINs have been further improved. The PIN has gradually penetrated many fields in biological research. In this work we systematically review the development of PINs over the past fifteen years, with respect to their reconstruction and their applications in function annotation, subsystem investigation, evolution analysis, hub protein analysis, and regulation mechanism analysis. Due to the significant role of PINs in the in-depth exploration of biological process mechanisms, PINs will be preferred by more and more researchers for the systematic study of protein systems in various kinds of organisms.

  5. High-performance computing and networking as tools for accurate emission computed tomography reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Passeri, A. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); Formiconi, A.R. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); De Cristofaro, M.T.E.R. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); Pupi, A. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); Meldolesi, U. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy)

    1997-04-01

    It is well known that the quantitative potential of emission computed tomography (ECT) relies on the ability to compensate for resolution, attenuation and scatter effects. Reconstruction algorithms which are able to take these effects into account are highly demanding in terms of computing resources. The reported work aimed to investigate the use of a parallel high-performance computing platform for ECT reconstruction taking into account an accurate model of the acquisition of single-photon emission tomographic (SPET) data. An iterative algorithm with an accurate model of the variable system response was ported on the MIMD (Multiple Instruction Multiple Data) parallel architecture of a 64-node Cray T3D massively parallel computer. The system was organized to make it easily accessible even from low-cost PC-based workstations through standard TCP/IP networking. A complete brain study of 30 (64 x 64) slices could be reconstructed from a set of 90 (64 x 64) projections with ten iterations of the conjugate gradients algorithm in 9 s, corresponding to an actual speed-up factor of 135. This work demonstrated the possibility of exploiting remote high-performance computing and networking resources from hospital sites by means of low-cost workstations using standard communication protocols without particular problems for routine use. The achievable speed-up factors allow the assessment of the clinical benefit of advanced reconstruction techniques which require a heavy computational burden for the compensation effects such as variable spatial resolution, scatter and attenuation. The possibility of using the same software on the same hardware platform with data acquired in different laboratories with various kinds of SPET instrumentation is appealing for software quality control and for the evaluation of the clinical impact of the reconstruction methods. (orig.). With 4 figs., 1 tab.

  6. Analysis of Community Detection Algorithms for Large Scale Cyber Networks

    Energy Technology Data Exchange (ETDEWEB)

    Mane, Prachita; Shanbhag, Sunanda; Kamath, Tanmayee; Mackey, Patrick S.; Springer, John

    2016-09-30

    The aim of this project is to use existing community detection algorithms on an IP network dataset to create supernodes within the network, and to compare the performance of the different algorithms in terms of running time. The paper begins with an introduction to clustering and community detection, followed by the research question the team aimed to address. It then describes the graph metrics that were considered in order to shortlist algorithms, with a brief explanation of each algorithm with respect to the graph metric on which it is based. The next section describes the methodology used to run the algorithms and determine which is most efficient with respect to running time. The last section presents the results obtained by the team, a conclusion based on those results, and future work.

  7. Bayesian network structure learning using chaos hybrid genetic algorithm

    Science.gov (United States)

    Shen, Jiajie; Lin, Feng; Sun, Wei; Chang, KC

    2012-06-01

    A new Bayesian network (BN) structure learning method using a hybrid algorithm and chaos theory is proposed. The principles of mutation and crossover from genetic algorithms, together with a cloud-based adaptive inertia weight, are incorporated into a simple particle swarm optimization (sPSO) algorithm to achieve better diversity and to improve the convergence speed. Exploiting the ergodicity and randomicity of chaotic maps, the initial population of network structures is generated by chaotic mapping with uniform search under structure constraints. When the algorithm converges to a local minimum, a chaotic search is started to escape the local minimum and to identify a potentially better network structure. Experimental results show that this algorithm can be used effectively for BN structure learning.
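
    A minimal sketch of the chaotic-initialization idea, assuming a standard logistic map and an illustrative encoding (one bit per candidate edge of the structure); the paper's exact gene layout and constraint handling are not reproduced here:

        import numpy as np

        def chaotic_population(pop_size, n_genes, x0=0.37):
            # The logistic map x_{k+1} = 4 x (1 - x) is ergodic on (0, 1),
            # so successive iterates spread candidates widely over the space.
            pop = np.empty((pop_size, n_genes))
            x = x0                      # any seed avoiding the map's fixed points
            for i in range(pop_size):
                for j in range(n_genes):
                    x = 4.0 * x * (1.0 - x)
                    pop[i, j] = x
            # Threshold to bits: one edge-present/absent gene per candidate.
            return (pop > 0.5).astype(int)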

  8. Hybrid iterative reconstruction algorithm improves image quality in craniocervical CT angiography.

    Science.gov (United States)

    Löve, Askell; Siemund, Roger; Höglund, Peter; Ramgren, Birgitta; Undrén, Per; Björkman-Burtscher, Isabella M

    2013-12-01

    The purpose of this study was to evaluate the potential of a hybrid iterative reconstruction algorithm for improving image quality in craniocervical CT angiography (CTA) and to assess observer performance. Thirty patients (mean age, 58 years; range, 16-80 years) underwent standard craniocervical CTA (volume CT dose index, 6.8 mGy; 2.8 mSv). Images were reconstructed using both filtered back projection (FBP) and a hybrid iterative reconstruction algorithm. Five neuroradiologists assessed general image quality and delineation of the vessel lumen in seven arterial segments using a 4-grade scale. Interobserver and intraobserver variability were determined. Mean attenuation and noise were measured, and signal-to-noise and contrast-to-noise ratios calculated. Descriptive statistics are presented and data analyzed using linear mixed-effects models. In pooled data, image quality with iterative reconstruction was graded significantly superior to FBP on all five quality criteria, and iterative reconstruction eliminated the arterial segments graded poor. Interobserver percentage agreement was significantly better (p = 0.024) for iterative reconstruction (69%) than for FBP (66%) but worse than intraobserver percentage agreement (mean, 79%). Noise levels, signal-to-noise ratio, and contrast-to-noise ratio were all significantly improved with iterative reconstruction at all measured levels. The iterative reconstruction algorithm significantly improves image quality in craniocervical CTA, especially at the thoracic inlet. Despite careful study design, considerable interobserver and intraobserver variability was noted.

  9. Optimization of digital breast tomosynthesis (DBT) acquisition parameters for human observers: effect of reconstruction algorithms

    Science.gov (United States)

    Zeng, Rongping; Badano, Aldo; Myers, Kyle J.

    2017-04-01

    We showed in our earlier work that the choice of reconstruction methods does not affect the optimization of DBT acquisition parameters (angular span and number of views) using simulated breast phantom images in detecting lesions with a channelized Hotelling observer (CHO). In this work we investigate whether the model-observer based conclusion is valid when using humans to interpret images. We used previously generated DBT breast phantom images and recruited human readers to find the optimal geometry settings associated with two reconstruction algorithms, filtered back projection (FBP) and simultaneous algebraic reconstruction technique (SART). The human reader results show that image quality trends as a function of the acquisition parameters are consistent between FBP and SART reconstructions. The consistent trends confirm that the optimization of DBT system geometry is insensitive to the choice of reconstruction algorithm. The results also show that humans perform better in SART reconstructed images than in FBP reconstructed images. In addition, we applied CHOs with three commonly used channel models, Laguerre-Gauss (LG) channels, square (SQR) channels and sparse difference-of-Gaussian (sDOG) channels. We found that LG channels predict human performance trends better than SQR and sDOG channel models for the task of detecting lesions in tomosynthesis backgrounds. Overall, this work confirms that the choice of reconstruction algorithm is not critical for optimizing DBT system acquisition parameters.

  10. The Vital Network: An Algorithmic Milieu of Communication and Control

    Directory of Open Access Journals (Sweden)

    Sandra Robinson

    2016-09-01

    Full Text Available The biological turn in computing has influenced the development of algorithmic control and what I call the vital network: a dynamic, relational, and generative assemblage that is self-organizing in response to the heterogeneity of contemporary network processes, connections, and communication. I discuss this biological turn in computation and control for communication alongside historically significant developments in cybernetics that set out the foundation for the development of self-regulating computer systems. Control is shifting away from models that historically relied on the human-animal model of cognition to govern communication and control, as in early cybernetics and computer science, to a decentred, nonhuman model of control by algorithm for communication and networks. To illustrate the rise of contemporary algorithmic control, I outline a particular example, that of the biologically-inspired routing algorithm known as a ‘quorum sensing’ algorithm. The increasing expansion of algorithms as a sense-making apparatus is important in the context of social media, but also in the subsystems that coordinate networked flows of information. In that domain, algorithms are not inferring categories of identity, sociality, and practice associated with Internet consumers, rather, these algorithms are designed to act on information flows as they are transmitted along the network. The development of autonomous control realized through the power of the algorithm to monitor, sort, organize, determine, and transmit communication is the form of control emerging as a postscript to Gilles Deleuze’s ‘postscript on societies of control.’

  11. Computing autocatalytic sets to unravel inconsistencies in metabolic network reconstructions

    DEFF Research Database (Denmark)

    Schmidt, R.; Waschina, S.; Boettger-Schmidt, D.

    2015-01-01

    Metabolic network reconstructions are frequently hampered by inherent inconsistencies and gaps. RESULTS: Here we present a novel method to validate metabolic network reconstructions based on the concept of autocatalytic sets. Autocatalytic sets correspond to collections of metabolites that, besides enzymes and a growth medium, are required to produce all biomass...

  12. Sparse time series chain graphical models for reconstructing genetic networks

    NARCIS (Netherlands)

    Abegaz, Fentaw; Wit, Ernst

    We propose a sparse high-dimensional time series chain graphical model for reconstructing genetic networks from gene expression data, parametrized by a precision matrix and an autoregressive coefficient matrix. We consider the time steps as blocks or chains. The proposed approach explores patterns of
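
    The record is truncated, but the model class it names can be illustrated. A hedged sketch that estimates a sparse autoregressive coefficient matrix with one lasso regression per gene; the paper's actual penalized-likelihood estimator, and its precision-matrix component, may differ:

        import numpy as np
        from sklearn.linear_model import Lasso

        def sparse_var_coefficients(X, alpha=0.1):
            """X: (T, p) expression matrix with rows ordered in time.
            Returns a sparse (p, p) matrix A with X[t] ~ A @ X[t-1]."""
            past, present = X[:-1], X[1:]
            p = X.shape[1]
            A = np.zeros((p, p))
            for j in range(p):          # one sparse regression per gene
                fit = Lasso(alpha=alpha, max_iter=10000).fit(past, present[:, j])
                A[j, :] = fit.coef_
            return A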

  13. TauFinder: A Reconstruction Algorithm for τ Leptons at Linear Colliders

    CERN Document Server

    Muennich, A

    2010-01-01

    An algorithm to find and reconstruct τ leptons was developed, targeting τs that produce highly energetic, low-multiplicity jets, as can be observed at multi-TeV e+e− collisions. However, it makes no assumption about the decay of the τ candidate, thus finding hadronic as well as leptonic decays. The algorithm delivers a reconstructed τ as seen by the detector. This note provides an overview of the algorithm and the cuts used, and gives some evaluation of the performance. A first implementation is available within the ILC software framework as a MARLIN processor. Appendix A is intended as a short user manual.

  14. A novel wavefront reconstruction algorithm based on interpolation coefficient matrix for radial shearing interferometry

    Science.gov (United States)

    Zhang, Chen; Li, Dahai; Li, Mengyang; E, Kewei

    2017-10-01

    A novel wavefront reconstruction algorithm for the radial shearing interferometer (RSI) is proposed in this paper. Based on the shearing relationship of the RSI, an interpolation coefficient matrix is established from the radial shearing ratio and the number of discrete points of the test wavefront. The expanded wavefront is then characterized by the interpolation coefficient matrix and the test wavefront, so the test wavefront can be calculated from the phase-difference wavefront. A numerical simulation confirms the correctness of the proposed algorithm. Compared with previous wavefront reconstruction methods, the proposed algorithm is more accurate and stable.
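
    A minimal 1-D sketch of the linear-system view described above, assuming linear interpolation, a shear ratio s > 1, and a measured difference d = W(r) - W(r/s); the piston constraint and matrix layout are illustrative:

        import numpy as np

        def interpolation_matrix(n, s):
            # Row i of S linearly interpolates a uniform radial profile w
            # at the contracted position r_i / s, so (S @ w)[i] ~ w(r_i / s).
            S = np.zeros((n, n))
            for i in range(n):
                x = i / s
                k = min(int(np.floor(x)), n - 2)
                t = x - k
                S[i, k], S[i, k + 1] = 1.0 - t, t
            return S

        def reconstruct_profile(diff, s):
            # Solve (I - S) w = d; pin w[0] = 0 to remove the piston ambiguity.
            n = len(diff)
            A = np.eye(n) - interpolation_matrix(n, s)
            A[0, :] = 0.0
            A[0, 0] = 1.0
            d = diff.copy()
            d[0] = 0.0
            return np.linalg.lstsq(A, d, rcond=None)[0]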

  15. Fast algorithms for nonconvex compressive sensing: MRI reconstruction from very few data

    Energy Technology Data Exchange (ETDEWEB)

    Chartrand, Rick [Los Alamos National Laboratory

    2009-01-01

    Compressive sensing is the reconstruction of sparse images or signals from very few samples, by means of solving a tractable optimization problem. In the context of MRI, this can allow reconstruction from many fewer k-space samples, thereby reducing scanning time. Previous work has shown that nonconvex optimization reduces still further the number of samples required for reconstruction, while still being tractable. In this work, we extend recent Fourier-based algorithms for convex optimization to the nonconvex setting, and obtain methods that combine the reconstruction abilities of previous nonconvex approaches with the computational speed of state-of-the-art convex methods.
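
    A hedged sketch of the nonconvex-thresholding idea in a Fourier setting, assuming sparsity directly in the image domain (practical MRI reconstructions penalize wavelet or gradient coefficients instead) and a generalized p-shrinkage step; all parameters are illustrative:

        import numpy as np

        def p_shrink(u, lam, p):
            # Reduces to soft thresholding at p = 1; for p < 1 large
            # coefficients are penalized less, mimicking the l^p penalty.
            mag = np.abs(u)
            safe = np.maximum(mag, 1e-12)
            scale = np.maximum(mag - lam * safe ** (p - 1.0), 0.0)
            return u * scale / safe

        def cs_mri(y, mask, p=0.5, lam=0.02, iters=100):
            """y: k-space array sampled where `mask` is True.
            Alternates image-domain shrinkage with hard data consistency."""
            x = np.fft.ifft2(np.where(mask, y, 0.0))
            for _ in range(iters):
                x = p_shrink(x, lam, p)        # promote sparsity
                k = np.fft.fft2(x)
                k[mask] = y[mask]              # re-impose the measurements
                x = np.fft.ifft2(k)
            return np.abs(x)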

  16. DRR medical image reconstruction algorithm; Algoritmo de construccion de imagenes medicas DRR

    Energy Technology Data Exchange (ETDEWEB)

    Estrada Espinosa, J. C.

    2013-07-01

    The digitally reconstructed radiograph (DRR) method is based on two orthogonal images, in the dorsal and lateral decubitus positions of the simulation. DRR images are reconstructed with an algorithm that simulates a conventional X-ray exposure; the emitted beam is not divergent, so in this case the rays are considered parallel in the DRR image reconstruction. For this purpose it is necessary to use the Hounsfield unit (HU) values of each voxel in all the axial slices that form the CT study, finally obtaining the reconstructed DRR image by performing a transformation from 3D to 2D. (Author)
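
    A minimal numpy sketch of the parallel-ray summation just described; the axis conventions and the HU-to-attenuation proxy are assumptions for illustration:

        import numpy as np

        def drr_pair(ct_hu):
            """ct_hu: 3-D CT volume in Hounsfield units, axes (z, y, x).
            With a non-divergent beam the rays are parallel, so each DRR
            pixel is one ray's attenuation summed through the volume."""
            mu = np.clip(ct_hu + 1000.0, 0.0, None)   # crude HU -> attenuation
            ap = mu.sum(axis=1)                       # dorsal (AP) projection
            lat = mu.sum(axis=2)                      # lateral projection
            def to_grey(img):
                span = np.ptp(img)
                return np.uint8(255.0 * (img - img.min()) / (span if span else 1.0))
            return to_grey(ap), to_grey(lat)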

  17. Practical algorithms for simulation and reconstruction of digital in-line holograms.

    Science.gov (United States)

    Latychevskaia, Tatiana; Fink, Hans-Werner

    2015-03-20

    Here we present practical methods for simulation and reconstruction of in-line digital holograms recorded with plane and spherical waves. The algorithms described here are applicable to holographic imaging of an object exhibiting absorption as well as phase-shifting properties. Optimal parameters, related to distances, sampling rate, and other factors for successful simulation and reconstruction of holograms are evaluated and criteria for the achievable resolution are worked out. Moreover, we show that the numerical procedures for the reconstruction of holograms recorded with plane and spherical waves are identical under certain conditions. Experimental examples of holograms and their reconstructions are also discussed.
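
    A hedged sketch of one standard plane-wave reconstruction route, the angular spectrum method; the mean subtraction (to suppress the zero-order term) and the sign convention for back-propagation are simplifying assumptions, not the paper's exact procedure:

        import numpy as np

        def angular_spectrum(field, wavelength, dx, z):
            # Propagate a 2-D complex field by distance z (z < 0 back-propagates);
            # dx is the pixel pitch, all lengths in the same units.
            n, m = field.shape
            fy = np.fft.fftfreq(n, d=dx)
            fx = np.fft.fftfreq(m, d=dx)
            FX, FY = np.meshgrid(fx, fy)
            arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
            kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
            H = np.where(arg >= 0.0, np.exp(1j * kz * z), 0.0)  # cut evanescent waves
            return np.fft.ifft2(np.fft.fft2(field) * H)

        def reconstruct_inline_hologram(hologram, wavelength, dx, z):
            contrast = hologram - hologram.mean()   # crude zero-order suppression
            return np.abs(angular_spectrum(contrast, wavelength, dx, -z))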

  18. A BPF-FBP tandem algorithm for image reconstruction in reverse helical cone-beam CT.

    Science.gov (United States)

    Cho, Seungryong; Xia, Dan; Pellizzari, Charles A; Pan, Xiaochuan

    2010-01-01

    Reverse helical cone-beam computed tomography (CBCT) is a scanning configuration for potential applications in image-guided radiation therapy in which an accurate anatomic image of the patient is needed for image-guidance procedures. The authors previously developed an algorithm for image reconstruction from nontruncated data of an object that is completely within the reverse helix. The purpose of this work is to develop an image reconstruction approach for reverse helical CBCT of a long object that extends out of the reverse helix and therefore entails data truncation. The proposed approach comprises two reconstruction steps. In the first step, a chord-based backprojection-filtration (BPF) algorithm reconstructs a volumetric image of an object from the original cone-beam data. Because there exists a chordless region in the middle of the reverse helix, the image obtained in the first step contains an unreconstructed central-gap region. In the second step, the gap region is reconstructed by use of a Pack-Noo-formula-based filtered back-projection (FBP) algorithm from the modified cone-beam data obtained by subtracting from the original cone-beam data the reprojection of the image reconstructed in the first step. The authors have performed numerical studies to validate the proposed approach in image reconstruction from reverse helical cone-beam data. The results confirm that the proposed approach can reconstruct accurate images of a long object without suffering from data-truncation artifacts or cone-angle artifacts. They developed and validated a BPF-FBP tandem algorithm to reconstruct images of a long object from reverse helical cone-beam data. The chord-based BPF algorithm was utilized for converting the long-object problem into a short-object problem. The proposed approach is applicable to other scanning configurations such as reduced circular sinusoidal trajectories.

  19. Accelerating electron tomography reconstruction algorithm ICON with GPU.

    Science.gov (United States)

    Chen, Yu; Wang, Zihao; Zhang, Jingrong; Li, Lun; Wan, Xiaohua; Sun, Fei; Zhang, Fa

    2017-01-01

    Electron tomography (ET) plays an important role in studying in situ cell ultrastructure in three-dimensional space. Due to limited tilt angles, ET reconstruction always suffers from the "missing wedge" problem. With a validation procedure, iterative compressed-sensing optimized NUFFT reconstruction (ICON) demonstrates its power in restoring validated missing information for low-SNR biological ET datasets. However, the huge computational demand has become a major obstacle to the application of ICON. In this work, we analyzed the framework of ICON and classified the operations of the major steps of ICON reconstruction into three types. Accordingly, we designed parallel strategies and implemented them on graphics processing units (GPU) to produce a parallel program, ICON-GPU. With high accuracy, ICON-GPU achieves a great acceleration over its CPU version, up to 83.7×, greatly relieving ICON's dependence on computing resources.

  20. Optimization, evaluation, and comparison of standard algorithms for image reconstruction with the VIP-PET

    Science.gov (United States)

    Mikhaylova, E.; Kolstein, M.; De Lorenzo, G.; Chmeissani, M.

    2014-01-01

    A novel positron emission tomography (PET) scanner design based on a room-temperature pixelated CdTe solid-state detector is being developed within the framework of the Voxel Imaging PET (VIP) Pathfinder project [1]. Simulation results show a great potential of the VIP to produce high-resolution images even in extremely challenging conditions such as the screening of a human head [2]. With an unprecedentedly high channel density (450 channels/cm³), image reconstruction is a challenge, and optimization is needed to find the algorithm that correctly exploits the promising detector potential. The following reconstruction algorithms are evaluated: 2-D Filtered Backprojection (FBP), Ordered Subset Expectation Maximization (OSEM), List-Mode OSEM (LM-OSEM), and the Origin Ensemble (OE) algorithm. The evaluation is based on the comparison of a true image phantom with a set of reconstructed images obtained by each algorithm, achieved by calculating image-quality merit parameters such as the bias, the variance and the mean square error (MSE). A systematic optimization of each algorithm is performed by varying the reconstruction parameters, such as the cutoff frequency of the noise filters and the number of iterations. A region-of-interest (ROI) analysis of the reconstructed phantom is also performed for each algorithm and the results are compared. Additionally, the performance of the image reconstruction methods is compared by calculating the modulation transfer function (MTF). The reconstruction time is also taken into account to choose the optimal algorithm. The analysis is based on GAMOS [3] simulation including the expected CdTe and electronic specifics. PMID:25018777
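
    A minimal sketch of how such merit parameters can be computed against the known phantom, assuming a stack of reconstructions from repeated noise realizations; the paper's exact voxel weighting may differ:

        import numpy as np

        def image_quality_merits(recons, phantom):
            """recons: iterable of reconstructed images of the same phantom;
            phantom: the ground-truth image. Returns (bias, variance, MSE)."""
            stack = np.stack([np.asarray(r, dtype=float) for r in recons])
            mean_img = stack.mean(axis=0)
            bias = float(np.mean(mean_img - phantom))
            variance = float(stack.var(axis=0).mean())
            mse = float(np.mean((stack - phantom) ** 2))
            return bias, variance, mse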

  1. Optimizing convergence rates of alternating minimization reconstruction algorithms for real-time explosive detection applications

    Science.gov (United States)

    Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.

    2016-05-01

    X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse-view scanners in nuclear medicine, low-data-rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and, more recently, to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed, based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however, is that strict adherence to the correction limits of convergent algorithms extends the number of iterations and the ultimate timeline needed to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an alternating minimization (AM) reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream-of-commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15 when comparing images from accelerated and strictly convergent algorithms.

  2. A Survey of the Use of Iterative Reconstruction Algorithms in Electron Microscopy

    Directory of Open Access Journals (Sweden)

    C. O. S. Sorzano

    2017-01-01

    Full Text Available One of the key steps in Electron Microscopy is the tomographic reconstruction of a three-dimensional (3D map of the specimen being studied from a set of two-dimensional (2D projections acquired at the microscope. This tomographic reconstruction may be performed with different reconstruction algorithms that can be grouped into several large families: direct Fourier inversion methods, back-projection methods, Radon methods, or iterative algorithms. In this review, we focus on the latter family of algorithms, explaining the mathematical rationale behind the different algorithms in this family as they have been introduced in the field of Electron Microscopy. We cover their use in Single Particle Analysis (SPA as well as in Electron Tomography (ET.
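
    As a concrete member of the iterative family, a minimal sketch of the classic additive ART (Kaczmarz) update; a dense system matrix is assumed purely for clarity:

        import numpy as np

        def art(A, b, iters=10, relax=1.0):
            """A: (rays x voxels) projection matrix, b: measured projections.
            Each inner step projects the estimate onto one ray's hyperplane."""
            x = np.zeros(A.shape[1])
            row_norms = (A ** 2).sum(axis=1)
            for _ in range(iters):
                for i in range(A.shape[0]):
                    if row_norms[i] == 0.0:
                        continue
                    x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
            return x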

  3. Reconstruction of a digital core containing clay minerals based on a clustering algorithm.

    Science.gov (United States)

    He, Yanlong; Pu, Chunsheng; Jing, Cheng; Gu, Xiaoyu; Chen, Qingdong; Liu, Hongzhi; Khan, Nasir; Dong, Qiaoling

    2017-10-01

    It is difficult to obtain a core sample and information for digital core reconstruction of mature sandstone reservoirs around the world, especially for unconsolidated sandstone reservoirs. Meanwhile, the reconstruction and division of clay minerals play a vital role in the reconstruction of digital cores, as two-dimensional data-based reconstruction methods are specifically applicable as microstructure simulation methods for sandstone reservoirs. However, the reconstruction of various clay minerals in digital cores remains challenging from a research viewpoint. In the present work, the content of clay minerals was considered on the basis of two-dimensional information about the reservoir. After applying the hybrid method, the output was a digital core containing clay clusters without labels for the clusters' number, size, and texture; compared with a model reconstructed by the process-based method, the statistics and geometry of the reconstructed model were similar to those of the reference model. The Hoshen-Kopelman algorithm was then used to label the connected, unclassified clay clusters in the initial model, recording the number and size of the clay clusters, and the K-means clustering algorithm was applied to divide the labeled, large connected clusters into smaller clusters on the basis of differences in the clusters' characteristics. According to the clay minerals' characteristics, such as types, textures, and distributions, the digital core containing clay minerals was reconstructed by means of the clustering algorithm and a judgment of the clay clusters' structure. The distributions and textures of the clay minerals in the digital core were reasonable. The clustering algorithm improved the digital core reconstruction and provides an alternative method for simulating different clay minerals in digital cores.
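
    A minimal sketch of the label-then-split pipeline, using scipy's connected-component labeling in place of Hoshen-Kopelman (the labelings are equivalent) and an illustrative size threshold to decide how many k-means subclusters to create:

        import numpy as np
        from scipy.ndimage import label
        from sklearn.cluster import KMeans

        def split_clay_clusters(clay_mask, max_voxels=5000):
            """clay_mask: 3-D boolean array of unclassified clay voxels.
            Returns an integer volume in which oversized connected clusters
            have been divided by k-means on their voxel coordinates."""
            labels, n = label(clay_mask)
            out = np.zeros_like(labels)
            next_id = 1
            for cid in range(1, n + 1):
                coords = np.argwhere(labels == cid)
                k = max(1, int(np.ceil(len(coords) / max_voxels)))
                if k == 1:
                    out[tuple(coords.T)] = next_id
                    next_id += 1
                    continue
                sub = KMeans(n_clusters=k, n_init=10).fit_predict(coords)
                for s in range(k):
                    out[tuple(coords[sub == s].T)] = next_id
                    next_id += 1
            return out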

  4. Multidimensional Scaling Localization Algorithm in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Zhang Dongyang

    2014-02-01

    Full Text Available Because localization algorithms for large-scale wireless sensor networks have shortcomings in both positioning accuracy and time complexity compared with traditional localization algorithms, this paper presents a fast multidimensional scaling (MDS) localization algorithm. The algorithm proceeds in four steps: fast mapping initialization, fast mapping, and coordinate transformation yield the schematic coordinates of the nodes and initialize the MDS coordinates; an accurate estimate of the node coordinates is then computed; and a Procrustes analysis aligns the coordinates to obtain the final node positions. The paper gives the specific implementation steps of the algorithm and applies it to concrete examples, comparing it experimentally with stochastic algorithms and the classical MDS algorithm. Experimental results show that the proposed fast multidimensional scaling localization algorithm maintains positioning accuracy while greatly improving the speed of operation.
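
    A compact sketch of the classical-MDS core step on a matrix of estimated pairwise distances; the Procrustes alignment against anchor nodes, which turns this relative map into absolute positions, is left out:

        import numpy as np

        def classical_mds(D, dim=2):
            """D: (n, n) matrix of estimated inter-node distances.
            Double-center the squared distances and keep the top eigenpairs."""
            n = D.shape[0]
            J = np.eye(n) - np.ones((n, n)) / n
            B = -0.5 * J @ (D ** 2) @ J
            w, V = np.linalg.eigh(B)
            idx = np.argsort(w)[::-1][:dim]        # largest eigenvalues first
            return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))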

  5. Efficient reconstruction of biological networks via transitive reduction on general purpose graphics processors.

    Science.gov (United States)

    Bošnački, Dragan; Odenbrett, Maximilian R; Wijs, Anton; Ligtenberg, Willem; Hilbers, Peter

    2012-10-30

    Techniques for reconstruction of biological networks which are based on perturbation experiments often predict direct interactions between nodes that do not exist. Transitive reduction removes such relations if they can be explained by an indirect path of influences. The existing algorithms for transitive reduction are sequential and might suffer from too long run times for large networks. They also exhibit the anomaly that some existing direct interactions are also removed. We develop efficient scalable parallel algorithms for transitive reduction on general purpose graphics processing units for both standard (unweighted) and weighted graphs. Edge weights are regarded as uncertainties of interactions. A direct interaction is removed only if there exists an indirect interaction path between the same nodes which is strictly more certain than the direct one. This is a refinement of the removal condition for the unweighted graphs and avoids to a great extent the erroneous elimination of direct edges. Parallel implementations of these algorithms can achieve speed-ups of two orders of magnitude compared to their sequential counterparts. Our experiments show that: i) taking into account the edge weights improves the reconstruction quality compared to the unweighted case; ii) it is advantageous not to distinguish between positive and negative interactions since this lowers the complexity of the algorithms from NP-complete to polynomial without loss of quality.
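
    A deliberately naive sketch of the weighted removal rule described above (bottleneck confidence over simple paths), assuming a networkx digraph with a 'conf' edge attribute; the paper's contribution is an efficient GPU-parallel formulation, which this illustration does not attempt:

        import networkx as nx

        def weighted_transitive_reduction(G):
            # Keep a direct edge unless some indirect path is strictly more
            # certain, where a path's confidence is that of its weakest edge.
            R = G.copy()
            for u, v, data in list(G.edges(data=True)):
                H = G.copy()
                H.remove_edge(u, v)               # paths must avoid the edge itself
                best = 0.0
                if nx.has_path(H, u, v):
                    for path in nx.all_simple_paths(H, u, v):  # exponential; sketch only
                        conf = min(H[a][b]['conf'] for a, b in zip(path, path[1:]))
                        best = max(best, conf)
                if best > data['conf']:
                    R.remove_edge(u, v)
            return R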

  6. An improved localization algorithm based on genetic algorithm in wireless sensor networks.

    Science.gov (United States)

    Peng, Bo; Li, Lei

    2015-04-01

    Wireless sensor networks (WSNs) are widely used in many applications. A WSN is a decentralized wireless network comprised of nodes that autonomously set up a network. Node localization, that is, determining the position of a node in the network, is an essential part of many sensor network operations and applications. Existing localization algorithms can be classified into two categories: range-based and range-free. Range-based localization algorithms place requirements on the hardware and are thus expensive to implement in practice; range-free localization algorithms reduce the hardware cost. Because of the hardware limitations of WSN devices, range-free solutions are being pursued as a cost-effective alternative to the more expensive range-based approaches, although these techniques usually have higher localization error than the range-based algorithms. DV-Hop is a typical range-free localization algorithm utilizing hop-distance estimation. In this paper, we propose an improved DV-Hop algorithm based on a genetic algorithm. Simulation results show that our proposed algorithm improves the localization accuracy compared with previous algorithms.
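
    A minimal sketch of the DV-Hop distance-estimation stage on a connected connectivity graph; the GA refinement of node positions, which is the paper's contribution, would operate on these estimates and is not shown:

        import networkx as nx

        def dv_hop_estimates(G, anchors):
            """G: connectivity graph of the WSN; anchors: {node: (x, y)} with
            known positions. Returns estimated anchor-to-node distances."""
            hops = {a: nx.single_source_shortest_path_length(G, a) for a in anchors}
            hop_size = {}
            for a, (ax, ay) in anchors.items():
                # Average real distance per hop, learned from the other anchors.
                num = den = 0.0
                for b, (bx, by) in anchors.items():
                    if a != b:
                        num += ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
                        den += hops[a][b]
                hop_size[a] = num / den
            unknowns = [v for v in G if v not in anchors]
            return {u: {a: hop_size[a] * hops[a][u] for a in anchors} for u in unknowns}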

  7. Projection learning algorithm for threshold-controlled neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Reznik, A.M.

    1995-03-01

    The projection learning algorithm proposed in [1, 2] and further developed in [3] substantially improves the efficiency of memorizing information and accelerates the learning process in neural networks. This algorithm is compatible with the completely connected neural network architecture (the Hopfield network [4]), but its application to other networks involves a number of difficulties. The main difficulties include constraints on interconnection structure and the need to eliminate the state uncertainty of latent neurons if such are present in the network. Despite the encouraging preliminary results of [3], further extension of the applications of the projection algorithm therefore remains problematic. In this paper, which is a continuation of the work begun in [3], we consider threshold-controlled neural networks. Networks of this type are quite common. They represent the receptor neuron layers in some neurocomputer designs. A similar structure is observed in the lower divisions of biological sensory systems [5]. In multilayer projection neural networks with lateral interconnections, the neuron layers or parts of these layers may also have the structure of a threshold-controlled completely connected network. Here the thresholds are the potentials delivered through the projection connections from other parts of the network. The extension of the projection algorithm to the class of threshold-controlled networks may accordingly prove to be useful both for extending its technical applications and for better understanding of the operation of the nervous system in living organisms.

  8. SPECIAL LIBRARIES OF FRAGMENTS OF ALGORITHMIC NETWORKS TO AUTOMATE THE DEVELOPMENT OF ALGORITHMIC MODELS

    Directory of Open Access Journals (Sweden)

    V. E. Marley

    2015-01-01

    Full Text Available Summary. The concept of algorithmic models arose from the algorithmic approach, in which the simulated object or phenomenon is represented as a process that follows the strict rules of an algorithm describing the operation of the modeled facility. An algorithmic model is a formalized description of a subject specialist's scenario for the simulated process, whose structure matches the structure of the causal and temporal relationships between the events of the modeled process, together with all the information necessary for its software implementation. Algorithmic networks are used to represent the structure of algorithmic models. Normally they are defined as loaded finite directed graphs whose vertices are mapped to operators and whose arcs are the variables bound by the operators. The language of algorithmic networks is highly expressive: the algorithms it can display cover the class of all random algorithms. Existing modeling-automation systems based on algorithmic networks mainly use operators working with real numbers; although this reduces their power, it is enough for modeling a wide class of problems related to the economy, the environment, transport, and technical processes. The task of modeling the execution of schedules and network diagrams is relevant and useful. There are many systems for computing network graphs; however, their monitoring is based on analysing the gaps and terms of the graphs, with no predictive analysis of schedule execution. The library described here is designed to build such predictive models: specifying the source data yields a set of projections, from which one is chosen and taken as the new plan.

  9. Algorithms For Phylogeny Reconstruction In a New Mathematical Model

    NARCIS (Netherlands)

    Lenzini, Gabriele; Marianelli, Silvia

    1997-01-01

    The evolutionary history of a set of species is represented by a tree called phylogenetic tree or phylogeny. Its structure depends on precise biological assumptions about the evolution of species. Problems related to phylogeny reconstruction (i.e., finding a tree representation of information

  10. 3D noise power spectrum applied on clinical MDCT scanners: effects of reconstruction algorithms and reconstruction filters

    Science.gov (United States)

    Miéville, Frédéric A.; Bolard, Gregory; Benkreira, Mohamed; Ayestaran, Paul; Gudinchet, François; Bochud, François; Verdun, Francis R.

    2011-03-01

    The noise power spectrum (NPS) is the reference metric for understanding the noise content in computed tomography (CT) images. To evaluate the noise properties of clinical multidetector (MDCT) scanners, local 2D and 3D NPSs were computed for different acquisition and reconstruction parameters. A 64-slice and a 128-slice MDCT scanner were employed. Measurements were performed on a water phantom in axial and helical acquisition modes. The CT dose index was identical for both installations. The influence of parameters such as the pitch, the reconstruction filter (soft, standard and bone) and the reconstruction algorithm (filtered-back projection (FBP), adaptive statistical iterative reconstruction (ASIR)) was investigated. Images were also reconstructed in the coronal plane using a reformat process, and the 2D and 3D NPSs were then computed. In axial acquisition mode, the 2D axial NPS showed an important magnitude variation as a function of the z-direction when measured at the phantom center. In helical mode, a directional dependency with lobular shape was observed while the magnitude of the NPS was kept constant. Important effects of the reconstruction filter, pitch and reconstruction algorithm were observed in the 3D NPS results for both MDCTs. With ASIR, a reduction of the NPS magnitude and a shift of the NPS peak toward the low-frequency range were visible. The 2D coronal NPS obtained from the reformatted images was affected by the interpolation when compared with the 2D coronal NPS obtained from 3D measurements. The noise properties of volumes measured on last-generation MDCTs were studied using the local 3D NPS metric; however, the impact of noise non-stationarity may need further investigation.
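
    A minimal sketch of the local 2-D NPS estimate underlying such measurements, assuming noise-only ROIs (for example, from subtracting repeated water-phantom acquisitions); normalization conventions vary between groups:

        import numpy as np

        def local_nps_2d(noise_rois, dx, dy):
            """noise_rois: iterable of 2-D noise-only ROIs; dx, dy: pixel
            spacing in mm. Returns the ensemble-averaged 2-D NPS."""
            rois = np.stack([r - r.mean() for r in noise_rois])  # detrend each ROI
            ny, nx = rois.shape[1:]
            spectra = np.abs(np.fft.fftshift(np.fft.fft2(rois), axes=(-2, -1))) ** 2
            return spectra.mean(axis=0) * dx * dy / (nx * ny)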

  11. Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data

    Science.gov (United States)

    Martins, Fabio J. W. A.; Foucaut, Jean-Marc; Thomas, Lionel; Azevedo, Luis F. A.; Stanislas, Michel

    2015-08-01

    Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique are highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of the velocity results obtained by the tomo-PIV technique. The proposed methodology can potentially lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction while also yielding better-quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics obtained by the optimized BIMART, SMART and MART algorithms were compared with hot-wire anemometer data, and the velocity measurement uncertainties were computed. Results indicated that the BIMART and SMART algorithms produced reconstructed volumes of quality equivalent to standard MART, with the benefit of reduced computational time.

  12. Improved Degree Search Algorithms in Unstructured P2P Networks

    Directory of Open Access Journals (Sweden)

    Guole Liu

    2012-01-01

    Full Text Available Searching for and retrieving the demanded correct information is an important problem in networks; in particular, designing an efficient search algorithm is a key challenge in unstructured peer-to-peer (P2P) networks. Breadth-first search (BFS) and depth-first search (DFS) are the two typical current search methods. BFS-based algorithms show excellent performance in terms of the search success rate for network resources, but generate a huge number of search messages. On the contrary, DFS-based algorithms reduce the quantity of search messages but also lower the search success ratio. To address the problem that only one of these performance measures can be excellent at a time, we propose two memory-function degree search algorithms: the memory-function maximum degree algorithm (MD) and the memory-function preference degree algorithm (PD). We study their performance, including the search success rate and the search message quantity, in different networks: scale-free networks, random-graph networks, and small-world networks. Simulations show that both performance measures are excellent at the same time and are improved by at least a factor of 10.
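
    A hedged single-walker sketch of the maximum-degree idea with memory (visited nodes are never re-queried); the published MD and PD algorithms forward multiple messages and differ in detail. Assumes a networkx graph whose nodes carry a 'keys' set of stored resources:

        import networkx as nx

        def max_degree_search(G, start, target_key, ttl=50):
            # Always forward to the highest-degree unvisited neighbor, so hubs
            # are probed first; the memory keeps messages from being re-sent.
            visited, current = {start}, start
            for _ in range(ttl):
                if target_key in G.nodes[current].get('keys', ()):
                    return current
                candidates = [v for v in G.neighbors(current) if v not in visited]
                if not candidates:
                    return None        # dead end; a full protocol would backtrack
                current = max(candidates, key=G.degree)
                visited.add(current)
            return None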

  13. An Algorithmic Approach for the Reconstruction of Nasal Skin Defects: Retrospective Analysis of 130 Cases

    Directory of Open Access Journals (Sweden)

    Berrak Akşam

    2016-06-01

    Full Text Available Objective: Most malignant cutaneous carcinomas are seen in the nasal region, and reconstruction of nasal defects is challenging because of the unique anatomic properties and complex structure of this region. In this study, we present our algorithm for nasal skin defects arising after the excision of malignant skin tumors. Material and Methods: Patients whose nasal skin was reconstructed after malignant skin tumor excision were included in the study and evaluated by age, gender, comorbidities, tumor location, tumor size, reconstruction type, histopathological diagnosis, and tumor recurrence. Results: A total of 130 patients (70 female, 60 male were evaluated. The average age of the patients was 67.8 years. Tumors were located mostly at the dorsum, alar region, and tip of the nose. Regarding reconstruction methods, primary closure was preferred in 14.6% of patients, full-thickness skin grafts were used in 25.3% of patients, and reconstruction with flaps was the choice in 60% of patients. Different flaps were used according to the subunits: mostly dorsal nasal flaps, bilobed flaps, nasolabial flaps, and forehead flaps. Conclusion: The defect-only reconstruction principle was followed in this study. The previously described subunits of the nose, such as the dorsum, tip, alar region, lateral wall, columella, and soft triangles, were further divided into subregions by their anatomical relations, and an algorithm was planned with these subregions. In nasal skin reconstruction, this algorithm helps in selecting the method with the best results and in minimizing complications.

  14. Spectral CT metal artifact reduction with an optimization-based reconstruction algorithm

    Science.gov (United States)

    Gilat Schmidt, Taly; Barber, Rina F.; Sidky, Emil Y.

    2017-03-01

    Metal objects cause artifacts in computed tomography (CT) images. This work investigated the feasibility of a spectral CT method to reduce metal artifacts. Spectral CT acquisition combined with optimization-based reconstruction is proposed to reduce artifacts by modeling the physical effects that cause metal artifacts and by providing the flexibility to selectively remove corrupted spectral measurements in the spectral-sinogram space. The proposed Constrained 'One-Step' Spectral CT Image Reconstruction (cOSSCIR) algorithm directly estimates the basis material maps while enforcing convex constraints. The incorporation of constraints on the reconstructed basis material maps is expected to mitigate undersampling effects that occur when corrupted data is excluded from reconstruction. The feasibility of the cOSSCIR algorithm to reduce metal artifacts was investigated through simulations of a pelvis phantom. The cOSSCIR algorithm was investigated with and without the use of a third basis material representing metal. The effects of excluding data corrupted by metal were also investigated. The results demonstrated that the proposed cOSSCIR algorithm reduced metal artifacts and improved CT number accuracy. For example, CT number error in a bright shading artifact region was reduced from 403 HU in the reference filtered backprojection reconstruction to 33 HU using the proposed algorithm in simulation. In the dark shading regions, the error was reduced from 1141 HU to 25 HU. Of the investigated approaches, decomposing the data into three basis material maps and excluding the corrupted data demonstrated the greatest reduction in metal artifacts.

  15. A New Optimized GA-RBF Neural Network Algorithm

    Directory of Open Access Journals (Sweden)

    Weikuan Jia

    2014-01-01

    Full Text Available When confronting complex problems, the radial basis function (RBF) neural network has the advantages of adaptivity and self-learning, but it is difficult to determine the number of hidden-layer neurons, and the ability to learn the weights from the hidden layer to the output layer is low; these deficiencies easily lead to decreased learning ability and recognition precision. Aiming at this problem, we propose a new optimized RBF neural network algorithm based on a genetic algorithm (the GA-RBF algorithm), which uses the genetic algorithm to optimize the weights and structure of the RBF neural network, choosing a new way of hybrid encoding and simultaneous optimization. Binary encoding encodes the number of hidden-layer neurons and real encoding encodes the connection weights; the hidden-layer neuron number and the connection weights are optimized simultaneously in the new algorithm. However, the connection-weight optimization is not complete, so the least mean square (LMS) algorithm is used for further learning, finally yielding the new algorithm model. Tests of the new algorithm on two UCI standard data sets show that it improves operating efficiency in dealing with complex problems and also improves recognition precision, which proves that the new algorithm is valid.

  16. A new optimized GA-RBF neural network algorithm.

    Science.gov (United States)

    Jia, Weikuan; Zhao, Dean; Shen, Tian; Su, Chunyang; Hu, Chanli; Zhao, Yuyan

    2014-01-01

    When confronting complex problems, the radial basis function (RBF) neural network has the advantages of adaptivity and self-learning, but it is difficult to determine the number of hidden-layer neurons, and the ability to learn the weights from the hidden layer to the output layer is low; these deficiencies easily lead to decreased learning ability and recognition precision. Aiming at this problem, we propose a new optimized RBF neural network algorithm based on a genetic algorithm (the GA-RBF algorithm), which uses the genetic algorithm to optimize the weights and structure of the RBF neural network, choosing a new way of hybrid encoding and simultaneous optimization. Binary encoding encodes the number of the hidden layer's neurons and real encoding encodes the connection weights; the hidden-layer neuron number and the connection weights are optimized simultaneously in the new algorithm. However, the connection-weight optimization is not complete, so the least mean square (LMS) algorithm is used for further learning, finally yielding the new algorithm model. Tests of the new algorithm on two UCI standard data sets show that it improves operating efficiency in dealing with complex problems and also improves recognition precision, which proves that the new algorithm is valid.
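
    A minimal sketch of the hybrid chromosome described in both records, with an assumed layout (binary on/off genes for up to max_hidden neurons, followed by real-valued output weights); the paper's precise gene ordering and GA operators may differ:

        import numpy as np

        def decode_chromosome(chrom, max_hidden, n_out):
            # Binary part switches individual hidden RBF neurons on or off;
            # the real-valued tail holds the hidden-to-output weights.
            mask = chrom[:max_hidden] > 0.5
            weights = chrom[max_hidden:].reshape(max_hidden, n_out)
            return mask, weights[mask]           # active neurons and their weights

        # Example: a random individual with up to 12 hidden neurons, 3 outputs.
        rng = np.random.default_rng(0)
        chrom = np.concatenate([rng.integers(0, 2, 12), rng.normal(size=12 * 3)])
        mask, w = decode_chromosome(chrom, 12, 3)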

  17. Energy Aware Clustering Algorithms for Wireless Sensor Networks

    Science.gov (United States)

    Rakhshan, Noushin; Rafsanjani, Marjan Kuchaki; Liu, Chenglian

    2011-09-01

    The sensor nodes deployed in wireless sensor networks (WSNs) are extremely power constrained, so maximizing the lifetime of the entire network is a main design consideration. In wireless sensor networks, hierarchical network structures have the advantage of providing scalable and energy-efficient solutions. In this paper, we investigate different clustering algorithms for WSNs and compare them on metrics such as clustering distribution, cluster load balancing, cluster-head (CH) selection strategy, CH role rotation, node mobility, cluster overlap, intra-cluster communications, reliability, security, and location awareness.

  18. Hybrid Wireless Sensor Network Coverage Holes Restoring Algorithm

    Directory of Open Access Journals (Sweden)

    Liu Zhouzhou

    2016-01-01

    Full Text Available Aiming at the sensing holes caused by the necessary movement or failure of nodes in a wireless sensor-actuator network, this paper proposes a coverage-restoring scheme based on a hybrid particle swarm optimization algorithm. The scheme first introduces a grid-based measure of network coverage and transforms the coverage-restoring problem into an unconstrained optimization problem with network coverage as the optimization target, and then solves the optimization problem with a hybrid particle swarm optimization algorithm incorporating the idea of simulated annealing. Simulation results show that the probabilistic jumping property of simulated annealing makes up for the tendency of particle swarm optimization to fall into premature convergence, and that the hybrid algorithm can effectively solve the coverage-restoring problem.

  19. Algorithms for Finding Small Attractors in Boolean Networks

    Directory of Open Access Journals (Sweden)

    Hayashida Morihiro

    2007-01-01

    Full Text Available A Boolean network is a model used to study the interactions between different genes in genetic regulatory networks. In this paper, we present several algorithms that use gene ordering and feedback vertex sets to identify singleton attractors and small attractors in Boolean networks, and we analyze the average-case time complexities of some of the proposed algorithms. For instance, the outdegree-based ordering algorithm for finding singleton attractors is shown to be much faster than the naive algorithm, which examines all 2^n states of the n genes; the precise bounds also involve the maximum indegree. We performed extensive computational experiments on these algorithms, which resulted in good agreement with the theoretical results. In contrast, we give a simple and complete proof that finding an attractor with the shortest period is NP-hard.
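
    A tiny, deliberately naive illustration of the object being sought, enumerating the fixed points f(x) = x over all 2^n states; the paper's gene-ordering and feedback-vertex-set algorithms prune exactly this search:

        from itertools import product

        def singleton_attractors(update_funcs):
            # update_funcs[i] maps the full state tuple to gene i's next value.
            n = len(update_funcs)
            return [s for s in product((0, 1), repeat=n)
                    if all(f(s) == s[i] for i, f in enumerate(update_funcs))]

        # Example: x0' = x1 AND x2, x1' = x0, x2' = NOT x1.
        funcs = [lambda s: s[1] & s[2], lambda s: s[0], lambda s: 1 - s[1]]
        print(singleton_attractors(funcs))   # -> [(0, 0, 1)]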

  1. Neural Network Algorithm for Prediction of Secondary Protein Structure

    National Research Council Canada - National Science Library

    Zikrija Avdagic; Elvir Purisevic; Emir Buza; Zlatan Coralic

    2009-01-01

    .... In this paper we describe the method and results of using CB513 as a dataset suitable for development of artificial neural network algorithms for prediction of secondary protein structure with MATLAB...

  2. Systemic risk analysis in reconstructed economic and financial networks

    CERN Document Server

    Cimini, Giulio; Gabrielli, Andrea; Garlaschelli, Diego

    2014-01-01

    The assessment of fundamental properties of economic and financial systems, such as systemic risk, is systematically hindered by privacy issues that put severe limitations on the available information. Here we introduce a novel method to reconstruct partially accessible networked systems of this kind. The method is based on knowledge of the fitnesses, i.e., intrinsic node-specific properties, and of the number of connections of only a limited subset of nodes. This information is used to calibrate a directed configuration model which can generate ensembles of networks intended to represent the real system, so that the real network properties can be estimated within the generated ensemble in terms of mean values of the observables. Here we focus on estimating those properties that are commonly used to measure the network's resilience to shocks and crashes. Tests on both artificial and empirical networks show that the method is remarkably robust with respect to the limitedness of the information available...
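
    A hedged sketch in the spirit of the fitness-calibrated configuration model: link probabilities grow with the nodes' fitnesses, and the single parameter z is tuned so the ensemble reproduces the observed number of connections of the accessible nodes (the calibration step is not shown):

        import numpy as np

        def link_probabilities(fitness, z):
            # p_ij = z x_i x_j / (1 + z x_i x_j), zero on the diagonal.
            x = np.asarray(fitness, dtype=float)
            zx = z * np.outer(x, x)
            P = zx / (1.0 + zx)
            np.fill_diagonal(P, 0.0)
            return P

        def sample_network(P, rng=None):
            # Draw one member of the reconstructed ensemble.
            rng = rng or np.random.default_rng()
            return (rng.random(P.shape) < P).astype(int)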

  3. Regular Network Class Features Enhancement Using an Evolutionary Synthesis Algorithm

    Directory of Open Access Journals (Sweden)

    O. G. Monahov

    2014-01-01

    Full Text Available This paper investigates a solution of the optimization problem concerning the construction of diameter-optimal regular networks (graphs. Regular networks are of practical interest as graph-theoretical models of reliable communication networks for parallel supercomputer systems and as a basis of the small-world model structure in optical and neural networks. A new class of parametrically described regular networks, hypercirculant networks (graphs, is presented, and an approach that uses evolutionary algorithms for the automatic generation of parametric descriptions of optimal hypercirculant networks is developed. The synthesis of optimal hypercirculant networks builds on optimal circulant networks with a smaller node degree: a template circulant network from the known optimal families with the desired number of nodes and a smaller node degree is used, its generating set serves as a generating subset of the hypercirculant network, and the missing generators are synthesized by the evolutionary algorithm, which minimizes the diameter (or average diameter of the network. A comparative analysis of the structural characteristics of hypercirculant, toroidal, and circulant networks is conducted, demonstrating the advantage of hypercirculant networks in such structural characteristics as diameter, average diameter, and bisection width at comparable costs in the number of nodes and connections. Notably, hypercirculant networks of dimension three outperform higher-dimensional tori, so optimizing hypercirculant networks of dimension three is more efficient than introducing an additional dimension in the corresponding toroidal structures. The paper also notes the better structural parameters of hypercirculant networks in comparison with iBT-networks previously
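
    A small sketch of the fitness evaluation such an evolutionary loop needs: the diameter of a circulant graph C_n(s1, ..., sk) by breadth-first search from a single node (vertex-transitivity makes one source sufficient); hypercirculant specifics are not modeled:

        from collections import deque

        def circulant_diameter(n, generators):
            dist = [-1] * n
            dist[0] = 0
            queue = deque([0])
            while queue:
                u = queue.popleft()
                for s in generators:
                    for v in ((u + s) % n, (u - s) % n):
                        if dist[v] < 0:
                            dist[v] = dist[u] + 1
                            queue.append(v)
            # inf signals a generating set that does not connect the ring
            return max(dist) if min(dist) >= 0 else float('inf')

        print(circulant_diameter(16, (1, 4)))   # -> 3 for C_16(1, 4)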

  4. MART-type CT algorithms for the reconstruction of multidirectional interferometric data

    Science.gov (United States)

    Verhoeven, Dean D.

    1992-01-01

    There has been much recent interest in the application of optical tomography to the study of transport phenomena and chemical reactions in transparent fluid flows. An example is the use of multidirectional holographic interferometry and computed tomography for the study of crystal growth from solution under microgravity conditions. A critical part of any such measurement system is the computed tomography program used to convert the measured interferometric data to refractive index distributions in the object under study. Several of the most promising CT algorithms for this application are presented and compared here. Because of the practical difficulty of making multidirectional interferometric measurements, these measurements generally provide only limited amounts of data. Recent studies have indicated that of the several classes of reconstruction algorithms applicable in the limited-data situation, those based on the Multiplicative Algebraic Reconstruction Technique (MART) are the fastest, most flexible, and most accurate. Several MART-type algorithms have been proposed in the literature. In this paper we compare the performance of state-of-the-art implementations of four such algorithms under conditions of interest to those reconstructing multidirectional interferometric data. The algorithms are tested using numerically-generated data from two phantom objects, with two levels of added noise and with two different imaging geometries. A reconstruction of real data from a multidirectional holographic interferometer using the best of the algorithms is shown.
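
    A minimal sketch of the multiplicative update shared by MART-type algorithms, assuming positive measured path integrals (interferometric data are offset accordingly in practice); relaxation and weighting details vary among the implementations compared:

        import numpy as np

        def mart(A, b, iters=20, relax=1.0):
            """A: (rays x cells) weight matrix, b: measured path integrals (> 0).
            The multiplicative correction keeps the field nonnegative."""
            x = np.ones(A.shape[1])
            for _ in range(iters):
                for i in range(A.shape[0]):
                    proj = A[i] @ x
                    if proj > 0.0 and b[i] > 0.0:
                        x *= (b[i] / proj) ** (relax * A[i])
            return x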

  5. Quantitative analysis of emphysema and airway measurements according to iterative reconstruction algorithms: comparison of filtered back projection, adaptive statistical iterative reconstruction and model-based iterative reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Choo, Ji Yung [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Korea University Ansan Hospital, Ansan-si, Department of Radiology, Gyeonggi-do (Korea, Republic of); Goo, Jin Mo; Park, Chang Min; Park, Sang Joon [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Seoul National University, Cancer Research Institute, Seoul (Korea, Republic of); Lee, Chang Hyun; Shim, Mi-Suk [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of)

    2014-04-15

    To evaluate filtered back projection (FBP) and two iterative reconstruction (IR) algorithms and their effects on the quantitative analysis of lung parenchyma and airway measurements on computed tomography (CT) images. Low-dose chest CT examinations obtained in 281 adult patients were reconstructed using three algorithms: FBP, adaptive statistical IR (ASIR) and model-based IR (MBIR). Measurements from each dataset were compared: total lung volume, emphysema index (EI), and airway measurements of lumen and wall area as well as average wall thickness. The accuracy of the airway measurements for each algorithm was also evaluated using an airway phantom. EI using a threshold of -950 HU was significantly different among the three algorithms, in decreasing order of FBP (2.30 %), ASIR (1.49 %) and MBIR (1.20 %) (P < 0.01). Wall thickness was also significantly different among the three algorithms, with FBP (2.09 mm) demonstrating thicker walls than ASIR (2.00 mm) and MBIR (1.88 mm) (P < 0.01). Airway phantom analysis revealed that MBIR gave the most accurate airway measurements. The three algorithms presented different EIs and wall thicknesses, decreasing in the order of FBP, ASIR and MBIR. Thus, care should be taken in selecting the appropriate IR algorithm for quantitative analysis of the lung. (orig.)
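
    The emphysema index quoted above is conceptually simple to compute once the lungs are segmented. A minimal sketch, assuming a pre-computed lung mask and the -950 HU threshold used in the study:

```python
import numpy as np

def emphysema_index(hu_volume, lung_mask, threshold=-950):
    """Percentage of lung voxels below `threshold` Hounsfield units."""
    lung = hu_volume[lung_mask]
    return 100.0 * np.count_nonzero(lung < threshold) / lung.size
```

    Because IR algorithms shift the noise distribution of the CT numbers, the same fixed threshold yields different indices for FBP, ASIR and MBIR, which is exactly the effect the study quantifies.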

  6. An algorithm for link restoration in wavelength translating networks

    DEFF Research Database (Denmark)

    Limal, Emmanuel; Gliese, Ulrik Bo

    1999-01-01

    We propose the BONRA, a new and innovative algorithm for dynamic allocation of working and spare channel capacity for single link restoration in wavelength translating optical networks. The BONRA has very low calculation complexity yet gives high capacity utilisation.

  7. Engineering Algorithms for Route Planning in Multimodal Transportation Networks

    OpenAIRE

    Dibbelt, Julian Matthias

    2016-01-01

    Practical algorithms for route planning in transportation networks are a showpiece of successful Algorithm Engineering. This has produced many speedup techniques, varying in preprocessing time, space, query performance, simplicity, and ease of implementation. This thesis explores solutions to more realistic scenarios, taking into account, e.g., traffic, user preferences, public transit schedules, and the options offered by the many modalities of modern transportation networks.

  8. An algorithm for link restoration of wavelength routing optical networks

    DEFF Research Database (Denmark)

    Limal, Emmanuel; Stubkjær, Kristian

    1999-01-01

    We present an algorithm for restoration of single link failure in wavelength routing multihop optical networks. The algorithm is based on an innovative study of networks using graph theory. It has the following original features: it (i) assigns working and spare channels simultaneously, (ii) prevents the search for unacceptable routing paths by pointing out channels required for restoration, (iii) offers a high utilization of the capacity resources and (iv) allows a trivial search for the restoration paths. The algorithm is for link restoration of networks without wavelength translation. Its low complexity is studied in detail and compared to the complexity of a classical path assignment algorithm. Finally, we explain how to use the algorithm to control the restoration path lengths.

  10. Recommending Learning Activities in Social Network Using Data Mining Algorithms

    Science.gov (United States)

    Mahnane, Lamia

    2017-01-01

    In this paper, we show how data mining algorithms (e.g. the Apriori Algorithm (AP) and Collaborative Filtering (CF)) are useful in a New Social Network (NSN-AP-CF). "NSN-AP-CF" processes the clusters based on different learning styles. Next, it analyzes the habits and the interests of the users through mining the frequent episodes by the…

  12. A Generalized Clustering Algorithm for Dynamic Wireless Sensor Networks

    NARCIS (Netherlands)

    Marin Perianu, Raluca; Hurink, Johann L.; Hartel, Pieter H.

    2008-01-01

    We propose a general clustering algorithm for dynamic sensor networks that makes localized decisions (1-hop neighbourhood) and produces disjoint clusters. The purpose is to extract and emphasise the essential clustering mechanisms common to a set of state-of-the-art algorithms, which allows for a

  13. Practical Algorithms for Subgroup Detection in Covert Networks

    DEFF Research Database (Denmark)

    Memon, Nasrullah; Wiil, Uffe Kock; Qureshi, Pir Abdul Rasool

    2010-01-01

    In this paper, we present algorithms for subgroup detection and demonstrate them with a real-time case study of the USS Cole bombing terrorist network, using a prototype application system. The system finds associations between terrorists and terrorist organisations...

  14. A stand-alone track reconstruction algorithm for the scintillating fibre tracker at the LHCb upgrade

    CERN Multimedia

    Quagliani, Renato

    2017-01-01

    The LHCb upgrade detector project foresees the presence of a scintillating fibre tracker (SciFi) to be used during the LHC Run III, starting in 2020. The instantaneous luminosity will be increased up to $2\times10^{33}$ cm$^{-2}$ s$^{-1}$, five times larger than in Run II, and a full software event reconstruction will be performed at the full bunch crossing rate by the trigger. The new running conditions, and the tighter timing constraints in the software trigger, represent a big challenge for track reconstruction. This poster presents the design and performance of a novel algorithm that has been developed to reconstruct track segments using solely hits from the SciFi. This algorithm is crucial for the reconstruction of tracks originating from long-lived particles such as $K_{S}^{0}$ and $\Lambda$, and greatly enhances the physics potential and capabilities of the LHCb upgrade compared to the previous implementation.

  15. A robust jet reconstruction algorithm for high-energy lepton colliders

    Directory of Open Access Journals (Sweden)

    M. Boronat

    2015-11-01

    Full Text Available We propose a new sequential jet reconstruction algorithm for future lepton colliders at the energy frontier. The Valencia algorithm combines the natural distance criterion for lepton colliders with the greater robustness against backgrounds of algorithms adapted to hadron colliders. Results on a detailed Monte Carlo simulation of tt¯ and ZZ production at future linear e+e− colliders (ILC and CLIC), with a realistic level of background overlaid, show that it achieves better performance in the presence of background than the classical algorithms used at previous e+e− colliders.

  16. Jet Energy Scale and its Uncertainties using the Heavy Ion Jet Reconstruction Algorithm in pp Collisions

    CERN Document Server

    Puri, Akshat; The ATLAS collaboration

    2017-01-01

    ATLAS uses a jet reconstruction algorithm in heavy ion collisions that takes as input calorimeter towers of size $0.1 \times \pi/32$ in $\Delta\eta \times \Delta\phi$ and iteratively determines the underlying event background. This algorithm, which is different from the standard jet reconstruction used in ATLAS, is also used for the proton-proton collisions that serve as reference data for Pb+Pb and p+Pb. This poster provides details of the heavy ion jet reconstruction algorithm and its performance in pp collisions. The calibration procedure is described in detail and cross checks using photon-jet balance are shown. The uncertainties on the jet energy scale and the jet energy resolution are described.

  17. Experimental study of stochastic noise propagation in SPECT images reconstructed using the conjugate gradient algorithm.

    Science.gov (United States)

    Mariano-Goulart, D; Fourcade, M; Bernon, J L; Rossi, M; Zanca, M

    2003-01-01

    In an experimental study based on simulated and physical phantoms, the propagation of stochastic noise in slices reconstructed using the conjugate gradient algorithm was analysed as a function of the iteration number. After a first increase corresponding to the reconstruction of the signal, the noise stabilises on a plateau before increasing linearly with iterations. The level of the plateau as well as the slope of the subsequent linear increase depends on the noise in the projection data.
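
    To make the iteration dependence concrete, here is a minimal conjugate-gradient least-squares (CGLS) loop for a linear tomography model. Recording each iterate lets one plot the noise level of the reconstruction against the iteration number, as in the study; the system matrix `A` and sinogram `b` are placeholders:

```python
import numpy as np

def cgls(A, b, n_iter):
    """Conjugate gradient on the normal equations A^T A x = A^T b."""
    x = np.zeros(A.shape[1])
    r = b - A @ x                      # data residual
    s = A.T @ r                        # gradient of the least-squares cost
    p, gamma = s.copy(), s @ s
    iterates = []
    for _ in range(n_iter):
        q = A @ p
        alpha = gamma / (q @ q)        # exact line search along p
        x = x + alpha * p
        r = r - alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p   # new conjugate direction
        gamma = gamma_new
        iterates.append(x.copy())      # keep every iterate for noise analysis
    return iterates
```

    The semi-convergence behaviour described above (signal first, then noise) is the usual argument for stopping such iterations early on noisy projection data.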

  18. Pharyngoesophageal reconstruction after resection of hypopharyngeal carcinoma: a new algorithm after analysis of 142 cases

    OpenAIRE

    Denewer, Adel; Khater, Ashraf; Hafez, Mohamed T; Hussein, Osama; Roshdy, Sameh; Shahatto, Fayez; Elnahas, Waleed; Kotb, Sherif; Mowafy, Khaled

    2014-01-01

    Background: The aim of this study is to define an algorithm for the choice of reconstructive method for defects after laryngo-pharyngo-esophagectomy for hypopharyngeal carcinoma. Methods: One hundred and forty-two cases of hypopharyngeal carcinoma were included and operated on by either partial pharyngectomy, total pharyngectomy or esophagectomy. The reconstructive method was tailored according to the resected segment. Results: Pectoralis flap was used in 48 cases, free jejunal flap in 28 cases,...

  19. HAWC Energy Reconstruction via Neural Network

    Science.gov (United States)

    Marinelli, Samuel; HAWC Collaboration

    2016-03-01

    The High-Altitude Water-Cherenkov (HAWC) γ-ray observatory is located at 4100 m above sea level on the Sierra Negra mountain in the state of Puebla, Mexico. Its 300 water-filled tanks are instrumented with PMTs that detect Cherenkov light produced by charged particles in atmospheric air showers induced by TeV γ-rays. The detector became fully operational in March of 2015. With a 2-sr field of view and duty cycle exceeding 90%, HAWC is a survey instrument sensitive to diverse γ-ray sources, including supernova remnants, pulsar wind nebulae, active galactic nuclei, and others. Particle-acceleration mechanisms at these sources can be inferred by studying their energy spectra, particularly at high energies. We have developed a technique for estimating primary- γ-ray energies using an artificial neural network (ANN). Input variables to the ANN are selected to characterize shower multiplicity in the detector, the fraction of the shower contained in the detector, and atmospheric attenuation of the shower. Monte Carlo simulations show that the new estimator has superior performance to the current estimator used in HAWC publications. This work was supported by the National Science Foundation.

  20. Patient-centred decision making in breast reconstruction utilising the delayed-immediate algorithm.

    Science.gov (United States)

    Ter Louw, Ryan P; Patel, Ketan M; Sosin, Michael; Weissler, Jason M; Nahabedian, Maurice Y

    2014-04-01

    Delayed-immediate reconstruction is an increasingly valuable algorithm for patients anticipating post-mastectomy radiation therapy. Despite the cosmetic and long-term advantages of autologous tissue repair, a subset of patients choose implant-based reconstruction after their initial preference for autologous reconstruction. A critical evaluation of patients who initially planned to undergo delayed-immediate reconstruction but later chose to continue with implant-based reconstruction has not been previously reported. A retrospective analysis of the senior author's (M.Y.N.) patients who initially intended to undergo delayed-immediate autologous breast reconstruction following mastectomy and chose to abandon autologous reconstruction in favour of prosthetic reconstruction was completed from 2005 to 2011. Seven patients (10 breasts) met the inclusion criteria. The mean patient age and body mass index were 50.2 years and 32.1 kg/m², respectively. Expansion required an average of 4.4 office visits to achieve adequate expansion volume, mean 483 ml (240-600 ml). The mean time from expander placement to definitive reconstruction was 14.6 months. Mean follow-up time was 20.4 months. Complications included infection (1/7), incisional dehiscence (1/7) and capsular contracture (2/7), and late revision surgery was performed in two patients. Successful reconstruction was achieved in 100% of patients (7/7) with a patient-reported satisfaction of 100%. Patient motivations for changing the reconstructive algorithm included a faster post-operative recovery in four patients (4/7) and potential donor-site morbidity in three patients (3/7). Depression or cancer-related fatigue symptoms were self-reported in 4/7. Avoiding donor-site morbidity and a simpler recovery are the main factors that influence patients to change their desire for autologous reconstruction to an implant-based reconstruction. Cancer-related fatigue and depression are prevalent in this population and may be implicated

  1. Image reconstruction and scan configurations enabled by optimization-based algorithms in multispectral CT

    Science.gov (United States)

    Chen, Buxin; Zhang, Zheng; Sidky, Emil Y.; Xia, Dan; Pan, Xiaochuan

    2017-11-01

    Optimization-based algorithms for image reconstruction in multispectral (or photon-counting) computed tomography (MCT) remain a topic of active research. The challenge of optimization-based image reconstruction in MCT stems from the inherently non-linear data model that can lead to a non-convex optimization program for which no mathematically exact solver seems to exist for achieving globally optimal solutions. In this work, based upon a non-linear data model, we design a non-convex optimization program, derive its first-order-optimality conditions, and propose an algorithm to solve the program for image reconstruction in MCT. In addition to consideration of image reconstruction for the standard scan configuration, the emphasis is on investigating the algorithm’s potential for enabling non-standard scan configurations with no or minimum hardware modification to existing CT systems, which has potential practical implications for lowered hardware cost, enhanced scanning flexibility, and reduced imaging dose/time in MCT. Numerical studies are carried out for verification of the algorithm and its implementation, and for a preliminary demonstration and characterization of the algorithm in reconstructing images and in enabling non-standard configurations with varying scanning angular range and/or x-ray illumination coverage in MCT.

  2. A stochastic local search algorithm for distance-based phylogeny reconstruction.

    Science.gov (United States)

    Tria, Francesca; Caglioti, Emanuele; Loreto, Vittorio; Pagnani, Andrea

    2010-11-01

    In many interesting cases, the reconstruction of a correct phylogeny is blurred by high mutation rates and/or horizontal transfer events. As a consequence, a divergence arises between the true evolutionary distances and the differences between pairs of taxa as inferred from available data, making phylogenetic reconstruction a challenging problem. Mathematically, this divergence translates into a loss of additivity of the actual distances between taxa. In distance-based reconstruction methods, two properties of additive distances have been extensively exploited as antagonist criteria to drive phylogeny reconstruction: on the one hand, a local property of quartets, that is, sets of four taxa in a tree, the four-points condition; on the other hand, a recently proposed formula that allows the tree length to be written as a function of the distances between taxa, Pauplin's formula. Here, we introduce a new reconstruction scheme that exploits both the four-points condition and Pauplin's formula in a unified framework. We propose, in particular, a new general class of distance-based Stochastic Local Search algorithms, which reduces in a limit case to the minimization of Pauplin's length. When tested on artificially generated phylogenies, our Stochastic Big-Quartet Swapping algorithmic scheme significantly outperforms state-of-the-art distance-based algorithms in cases of deviation from additivity due to a high rate of back mutations. A significant improvement over the state-of-the-art algorithms is also observed in the case of a high rate of horizontal transfer.
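
    As a small illustration of the quartet criterion driving the local search, the sketch below scores how far a distance matrix deviates from the four-points condition over all quartets; the function names are illustrative, not taken from the paper:

```python
from itertools import combinations

def four_point_violation(d, i, j, k, l):
    """Deviation of one quartet from additivity.

    For an additive (tree) metric, the two largest of the three sums
    d(i,j)+d(k,l), d(i,k)+d(j,l), d(i,l)+d(j,k) are equal; the returned
    gap between them is zero exactly in that case.
    """
    s = sorted([d[i][j] + d[k][l], d[i][k] + d[j][l], d[i][l] + d[j][k]])
    return s[2] - s[1]   # >= 0; zero iff the quartet satisfies the condition

def total_quartet_score(d, taxa):
    # Aggregate violation over all quartets; a local search move (e.g. a
    # subtree swap) is accepted if it lowers a score of this kind, possibly
    # combined with a Pauplin-length term.
    return sum(four_point_violation(d, *q) for q in combinations(taxa, 4))
```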

  3. Incorporation of local dependent reliability information into the Prior Image Constrained Compressed Sensing (PICCS) reconstruction algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Vaegler, Sven; Sauer, Otto [Wuerzburg Univ. (Germany). Dept. of Radiation Oncology; Stsepankou, Dzmitry; Hesser, Juergen [University Medical Center Mannheim, Mannheim (Germany). Dept. of Experimental Radiation Oncology

    2015-07-01

    The reduction of dose in cone beam computed tomography (CBCT) arises from decreasing the tube current for each projection as well as from reducing the number of projections. In order to maintain good image quality, sophisticated image reconstruction techniques are required. Prior Image Constrained Compressed Sensing (PICCS) incorporates prior images into the reconstruction algorithm and outperforms the widely used Feldkamp-Davis-Kress algorithm (FDK) when the number of projections is reduced. However, prior images that contain major variations are so far not appropriately considered in PICCS. We therefore propose the partial-PICCS (pPICCS) algorithm. This framework is a problem-specific extension of PICCS that additionally enables the incorporation of the reliability of the prior images. We assumed that the prior images are composed of areas with large and small deviations. Accordingly, a weighting matrix accounted for the assigned areas in the objective function. We applied our algorithm to the problem of image reconstruction from few views in simulations with a computer phantom as well as on clinical CBCT projections from a head-and-neck case. All prior images contained large local variations. The reconstructed images were compared to the reconstruction results of the FDK algorithm, Compressed Sensing (CS) and PICCS. To show the gain in image quality we compared image details with the reference image and used quantitative metrics (root-mean-square error (RMSE) and contrast-to-noise ratio (CNR)). The pPICCS reconstruction framework yields images with substantially improved quality even when the number of projections is very small. The images contained less streaking, blurring and fewer inaccurately reconstructed structures compared to the images reconstructed by FDK, CS and conventional PICCS. The increased image quality is also reflected in large RMSE differences. We proposed a modification of the original PICCS algorithm. The pPICCS algorithm

  4. Insertion algorithms for network model database management systems

    Science.gov (United States)

    Mamadolimov, Abdurashid; Khikmat, Saburov

    2017-12-01

    The network model is a database model conceived as a flexible way of representing objects and their relationships. Its distinguishing feature is that the schema, viewed as a graph in which object types are nodes and relationship types are arcs, forms a partial order. When a database is large and query comparisons are expensive, efficient management algorithms must minimize the number of query comparisons. We consider the update operation for network model database management systems and develop a new sequential algorithm for it. We also suggest a distributed version of the algorithm.

  5. New Heuristic Algorithm for Dynamic Traffic in WDM Optical Networks

    Directory of Open Access Journals (Sweden)

    Arturo Benito Rodríguez Garcia

    2015-12-01

    Full Text Available Simulation results for a new heuristic algorithm called Snake One are presented and compared with three heuristic algorithms, Genetic Algorithms, Simulated Annealing, and Tabu Search, using blocking probability and network utilization as standard indicators. The simulation was run on the WDM NSFNET under dynamic traffic conditions. The results show a substantial decrease in blocking, at the cost of a relative growth in network utilization. There are also load intervals at which its performance improves further, decreasing the number of blocked requests.

  6. Genetic Algorithm Optimized Neural Networks Ensemble as ...

    African Journals Online (AJOL)

    Improvements in neural network calibration models by a novel approach using neural network ensemble (NNE) for the simultaneous spectrophotometric multicomponent analysis are suggested, with a study on the estimation of the components of an antihypertensive combination, namely, atenolol and losartan potassium.

  7. Wireless Sensor Networks : Structure and Algorithms

    NARCIS (Netherlands)

    van Dijk, T.C.|info:eu-repo/dai/nl/304841293

    2014-01-01

    In this thesis we look at various problems in wireless networking. First we consider two problems in physical-model networks. We introduce a new model for localisation, based on a range-free model of radio transmissions. The first scheme is randomised and we analyse its expected

  8. District Heating Network Design and Configuration Optimization with Genetic Algorithm

    DEFF Research Database (Denmark)

    Li, Hongwei; Svendsen, Svend

    2013-01-01

    and the pipe friction and heat loss formulations are non-linear. In order to find the optimal district heating network configuration, a genetic algorithm that handles the mixed-integer nonlinear programming problem is chosen. The network configuration is represented with binary and integer encoding...

  9. Gene Expression Network Reconstruction by LEP Method Using Microarray Data

    Directory of Open Access Journals (Sweden)

    Na You

    2012-01-01

    Full Text Available Gene expression network reconstruction using microarray data is widely studied, with the aim of investigating the behavior of a cluster of genes simultaneously. Under the Gaussian assumption, the conditional dependence between genes in the network is fully described by the partial correlation coefficient matrix. Owing to the high dimensionality and sparsity, we utilize the LEP method to estimate it in this paper. Compared to existing methods, LEP reaches the highest PPV with the sensitivity controlled at a satisfactory level. A set of gene expression data from the HapMap project is analyzed for illustration.
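
    For intuition, under the Gaussian assumption the partial correlation matrix is obtained by normalizing the precision (inverse covariance) matrix. The naive dense sketch below breaks down exactly in the high-dimensional, sparse regime that motivates penalized estimators such as LEP; it is shown only to make the target quantity concrete:

```python
import numpy as np

def partial_correlations(data):
    """Partial correlations of a Gaussian graphical model (dense sketch).

    data : (n_samples, n_genes) expression matrix.
    """
    cov = np.cov(data, rowvar=False)
    prec = np.linalg.pinv(cov)            # precision = inverse covariance
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)        # rho_ij = -p_ij / sqrt(p_ii * p_jj)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr
```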

  10. Cosmic web reconstruction through density ridges: method and algorithm

    Science.gov (United States)

    Chen, Yen-Chi; Ho, Shirley; Freeman, Peter E.; Genovese, Christopher R.; Wasserman, Larry

    2015-11-01

    The detection and characterization of filamentary structures in the cosmic web allows cosmologists to constrain parameters that dictate the evolution of the Universe. While many filament estimators have been proposed, they generally lack estimates of uncertainty, reducing their inferential power. In this paper, we demonstrate how one may apply the subspace constrained mean shift (SCMS) algorithm (Ozertem & Erdogmus 2011; Genovese et al. 2014) to uncover filamentary structure in galaxy data. The SCMS algorithm is a gradient ascent method that models filaments as density ridges, one-dimensional smooth curves that trace high-density regions within the point cloud. We also demonstrate how augmenting the SCMS algorithm with bootstrap-based methods of uncertainty estimation allows one to place uncertainty bands around putative filaments. We first apply the SCMS to a data set generated from the Voronoi model; the density ridges show strong agreement with the filaments from the Voronoi method. We then apply the SCMS method to data sets sampled from a P3M N-body simulation, with galaxy number densities consistent with SDSS and WFIRST-AFTA, and to LOWZ and CMASS data from the Baryon Oscillation Spectroscopic Survey (BOSS). To further assess the efficacy of SCMS, we compare the relative locations of BOSS filaments with galaxy clusters in the redMaPPer catalogue, and find that redMaPPer clusters are significantly closer to filaments than to randomly selected galaxies.

  11. A computational study of routing algorithms for realistic transportation networks

    Energy Technology Data Exchange (ETDEWEB)

    Jacob, R.; Marathe, M.V.; Nagel, K.

    1998-12-01

    The authors carry out an experimental analysis of a number of shortest path (routing) algorithms investigated in the context of the TRANSIMS (Transportation Analysis and Simulation System) project. The main focus of the paper is to study how various heuristic and exact solutions and the associated data structures affect the computational performance of software developed for realistic transportation networks. For this purpose the authors used the Dallas Fort-Worth road network at a very high degree of resolution. The following general results are obtained: (1) they discuss and experimentally analyze various one-one shortest path algorithms, including classical exact algorithms studied in the literature as well as heuristic solutions that are designed to take into account the geometric structure of the input instances; (2) they describe a number of extensions to the basic shortest path algorithm. These extensions were primarily motivated by practical problems arising in TRANSIMS and ITS (Intelligent Transportation Systems) related technologies. Extensions discussed include: (i) time-dependent networks, (ii) multi-modal networks, (iii) networks with public transportation and associated schedules. Computational results are provided to empirically compare the efficiency of various algorithms. The studies indicate that a modified Dijkstra's algorithm is computationally fast and an excellent candidate for use in various transportation planning applications as well as ITS related technologies.
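
    The baseline one-one algorithm in such studies is Dijkstra's algorithm with early termination at the target; the heuristic variants discussed above prune or reorder this same search. A minimal sketch using a binary heap (graph representation and names are illustrative):

```python
import heapq

def one_to_one_dijkstra(adj, source, target):
    """Shortest-path length from source to target, or None if unreachable.

    adj : dict mapping node -> list of (neighbour, non-negative edge weight)
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:                       # target settled: stop early
            return d
        if d > dist.get(u, float("inf")):
            continue                          # stale heap entry, skip it
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return None
```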

  12. Slow update stochastic simulation algorithms for modeling complex biochemical networks.

    Science.gov (United States)

    Ghosh, Debraj; De, Rajat K

    2017-10-30

    The stochastic simulation algorithm (SSA) based modeling is a well recognized approach to predict the stochastic behavior of biological networks. The stochastic simulation of large complex biochemical networks is a challenge, as it takes a large amount of time due to the high propensity update cost. In order to reduce this cost, we propose two algorithms: the slow update exact stochastic simulation algorithm (SUESSA) and the slow update exact sorting stochastic simulation algorithm (SUESSSA). We apply cache-based linear search (CBLS) in these two algorithms to improve the search operation for finding the reactions to be executed. The data structure used for incorporating CBLS is very simple, and the cost of maintaining it during propensity updates is very low. Hence, propensity updates for strongly coupled networks are very fast, which reduces the total simulation time. SUESSA and SUESSSA are not restricted to elementary reactions; they support higher-order reactions too. We used a linear chain model and a colloidal aggregation model to perform a comparative analysis of the performance of our methods with existing algorithms. We also compared the performance of our methods with existing ones for large biochemical networks, including the B cell receptor and FcϵRI signaling networks. Copyright © 2017 Elsevier B.V. All rights reserved.
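
    For context, the direct-method SSA that these algorithms accelerate can be sketched as below. The per-event recomputation of all propensities in this naive version is precisely the update cost that SUESSA/SUESSSA reduce; all names are illustrative:

```python
import numpy as np

def ssa_direct(x0, stoich, propensity, t_end, rng=None):
    """Gillespie's direct-method SSA (naive reference sketch).

    x0         : initial copy numbers, shape (n_species,)
    stoich     : state-change vectors, shape (n_reactions, n_species)
    propensity : function x -> array of reaction propensities a_j(x)
    """
    if rng is None:
        rng = np.random.default_rng(0)
    t, x, path = 0.0, np.array(x0, dtype=float), []
    while t < t_end:
        a = propensity(x)          # naive: recompute every propensity per event
        a0 = a.sum()
        if a0 <= 0:
            break                  # no reaction can fire any more
        t += rng.exponential(1.0 / a0)        # waiting time to next reaction
        j = rng.choice(len(a), p=a / a0)      # which reaction fires
        x = x + stoich[j]
        path.append((t, x.copy()))
    return path
```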

  13. Missing texture reconstruction method based on error reduction algorithm using Fourier transform magnitude estimation scheme.

    Science.gov (United States)

    Ogawa, Takahiro; Haseyama, Miki

    2013-03-01

    A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme for Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring the errors to which the ER algorithm converges, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. The Fourier transform magnitude of the target patch is then estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.
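
    A minimal sketch of the ER iteration in this setting, assuming the Fourier magnitude of the target patch has already been estimated: the magnitude is imposed in the frequency domain and the known intensities are re-imposed in the image domain, while the residual error can be monitored for patch selection. Variable names are hypothetical:

```python
import numpy as np

def error_reduction(magnitude, known_mask, known_vals, n_iter=200, rng=None):
    """Fill missing pixels of a patch given an estimated Fourier magnitude.

    known_mask : boolean image marking the known pixels
    known_vals : intensities of the known pixels (values at the True entries)
    """
    if rng is None:
        rng = np.random.default_rng(0)
    x = rng.random(magnitude.shape)
    x[known_mask] = known_vals                     # keep the known intensities
    for _ in range(n_iter):
        X = np.fft.fft2(x)
        X = magnitude * np.exp(1j * np.angle(X))   # impose estimated magnitude
        x = np.real(np.fft.ifft2(X))               # phase-retrieval step
        x[known_mask] = known_vals                 # image-domain constraint
    err = np.linalg.norm(np.abs(np.fft.fft2(x)) - magnitude)
    return x, err                                  # err can drive patch selection
```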

  14. Performance Comparison of Reconstruction Algorithms in Discrete Blind Multi-Coset Sampling

    DEFF Research Database (Denmark)

    Grigoryan, Ruben; Arildsen, Thomas; Tandur, Deepaknath

    2012-01-01

    This paper investigates the performance of different reconstruction algorithms in discrete blind multi-coset sampling. The multi-coset scheme is a promising compressed sensing architecture that can replace traditional Nyquist-rate sampling in applications with multi-band frequency-sparse signals. The performance of existing compressed sensing reconstruction algorithms has not yet been investigated for discrete multi-coset sampling. We compare the following algorithms: orthogonal matching pursuit, multiple signal classification, subspace-augmented multiple signal classification, focal under-determined system solver and basis pursuit denoising. The comparison is performed via numerical simulations for different sampling conditions. According to the simulations, the focal under-determined system solver outperforms all other algorithms for signals with low signal-to-noise ratio. In other cases, the multiple...

  15. A clustering algorithm for determining community structure in complex networks

    Science.gov (United States)

    Jin, Hong; Yu, Wei; Li, ShiJun

    2018-02-01

    Clustering algorithms are attractive for the task of community detection in complex networks. DENCLUE is a representative density-based clustering algorithm which has a firm mathematical basis and good clustering properties, allowing for arbitrarily shaped clusters in high-dimensional datasets. However, this method cannot be directly applied to community discovery due to its inability to deal with network data. Moreover, it requires a careful selection of the density parameter and the noise threshold. To solve these issues, a new community detection method is proposed in this paper. First, we use a spectral analysis technique to map the network data into a low-dimensional Euclidean space which preserves node structural characteristics. Then, DENCLUE is applied to detect the communities in the network. A mathematical method named the Sheather-Jones plug-in is chosen to select the density parameter, which describes the intrinsic clustering structure accurately. Moreover, every node in the network is meaningful, so there are no noise nodes and the noise threshold can be ignored. We test our algorithm on both benchmark and real-life networks, and the results demonstrate the effectiveness of our algorithm over other popular density-based clustering algorithms adapted to community detection.
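
    The first step of the method, mapping nodes into a Euclidean space that a density-based clusterer can consume, can be sketched with a normalized-Laplacian eigendecomposition; DENCLUE itself and the Sheather-Jones bandwidth selection are omitted here, and the function name is illustrative:

```python
import numpy as np

def spectral_embedding(adjacency, dim=2):
    """Embed network nodes via eigenvectors of the normalized graph Laplacian.

    Returns one `dim`-dimensional coordinate vector per node; a density-based
    clustering algorithm can then be run on these points.
    """
    A = np.asarray(adjacency, dtype=float)
    d = np.maximum(A.sum(axis=1), 1e-12)              # node degrees
    d_is = 1.0 / np.sqrt(d)
    L = np.eye(len(A)) - d_is[:, None] * A * d_is[None, :]   # I - D^-1/2 A D^-1/2
    vals, vecs = np.linalg.eigh(L)                    # ascending eigenvalues
    return vecs[:, 1:dim + 1]                         # drop the trivial eigenvector
```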

  16. Optimizing of Passive Optical Network Deployment Using Algorithm with Metrics

    Directory of Open Access Journals (Sweden)

    Tomas Pehnelt

    2017-01-01

    Full Text Available Various approaches and methods are used for designing optimal deployments of Passive Optical Networks (PON) according to selected optimization criteria, such as trenching distance, endpoint attenuation and overall installed fibre length. This article describes the ideas behind, and possibilities of, an algorithm that applies graph algorithms to find the shortest path from the Optical Line Termination to each Optical Network Terminal unit. The algorithm uses a combination of different methods for generating an optimal metric, thus creating an optimized tree topology mainly focused on total trenching distance. Furthermore, it deals with algorithms for finding an optimal placement of the optical splitters with the help of the K-Means clustering method and a hierarchical clustering technique. The results of the proposed algorithm are compared with existing methods.
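
    The splitter-placement step can be illustrated with plain Lloyd's K-Means over ONT coordinates. Real deployments would measure distance along the duct graph rather than straight lines, so this is only a geometric sketch with illustrative names:

```python
import numpy as np

def place_splitters(ont_xy, k, n_iter=50, rng=None):
    """Lloyd's K-Means sketch: one splitter at each cluster centroid.

    ont_xy : (n_onts, 2) coordinates of the optical network terminals
    """
    if rng is None:
        rng = np.random.default_rng(0)
    centers = ont_xy[rng.choice(len(ont_xy), size=k, replace=False)]
    for _ in range(n_iter):
        # assign every ONT to its nearest splitter candidate
        labels = np.argmin(((ont_xy[:, None] - centers) ** 2).sum(-1), axis=1)
        # move each splitter to the centroid of its assigned ONTs
        centers = np.array([ont_xy[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    labels = np.argmin(((ont_xy[:, None] - centers) ** 2).sum(-1), axis=1)
    return centers, labels
```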

  17. ANOMALY DETECTION IN NETWORKING USING HYBRID ARTIFICIAL IMMUNE ALGORITHM

    Directory of Open Access Journals (Sweden)

    D. Amutha Guka

    2012-01-01

    Full Text Available Especially in today's network scenario, where computers are interconnected through the Internet, the security of an information system is a very important issue. Because no system can be absolutely secure, the timely and accurate detection of anomalies is necessary. The main aim of this research paper is to improve anomaly detection by using a Hybrid Artificial Immune Algorithm (HAIA), which is based on Artificial Immune Systems (AIS) and Genetic Algorithms (GA). In this research work, the HAIA approach is used to develop a Network Anomaly Detection System (NADS). The detector set is generated using GA, and the anomalies are identified using the Negative Selection Algorithm (NSA), which is based on AIS. The HAIA algorithm is tested with the KDD Cup 99 benchmark dataset. The detection rate is used to measure the effectiveness of the NADS. The results and consistency of the HAIA are compared with earlier approaches and the results are presented. The proposed algorithm gives the best results when compared to the earlier approaches.

  18. A Distributed Algorithm for Energy Optimization in Hydraulic Networks

    DEFF Research Database (Denmark)

    Kallesøe, Carsten; Wisniewski, Rafal; Jensen, Tom Nørgaard

    2014-01-01

    An industrial case study in the form of a large-scale hydraulic network underlying a district heating system is considered. A distributed control is developed that minimizes the aggregated electrical energy consumption of the pumps in the network without violating the control demands. The algorithm...... a Plug & Play control system as most commissioning can be done during the manufacture of the pumps. Only information on the graph-structure of the hydraulic network is needed during installation....

  19. Aggregation algorithm towards large-scale Boolean network analysis

    OpenAIRE

    Zhao, Y.; Kim, J.; Filippone, M.

    2013-01-01

    The analysis of large-scale Boolean network dynamics is of great importance in understanding complex phenomena where systems are characterized by a large number of components. The computational cost to reveal the number of attractors and the period of each attractor increases exponentially as the number of nodes in the networks increases. This paper presents an efficient algorithm to find attractors for medium to large-scale networks. This is achieved by analyzing subnetworks within the netwo...

  20. High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications.

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez, Edward S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Orr, Laurel J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Thompson, Kyle R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-09-01

    The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPUs) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General-purpose graphics processing (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize since performance bottlenecks occur that are non-existent in single-threaded algorithms, such as memory latencies. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e. fast texture memory, pixel-pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.

  1. Data-Driven Neural Network Model for Robust Reconstruction of Automobile Casting

    Science.gov (United States)

    Lin, Jinhua; Wang, Yanjie; Li, Xin; Wang, Lu

    2017-09-01

    In computer vision systems, it is a challenging task to robustly reconstruct the complex 3D geometries of automobile castings. 3D scanning data are usually corrupted by noise and the scanning resolution is low; these effects normally lead to incomplete matching and drift. In order to solve these problems, a data-driven local geometric learning model is proposed to achieve robust reconstruction of automobile castings. To relieve the interference of sensor noise and to be compatible with incomplete scanning data, a 3D convolutional neural network is established to match the local geometric features of automobile castings. The proposed neural network combines the geometric feature representation with a correlation metric function to robustly match local correspondences. We use the truncated distance field (TDF) around a key point to represent the 3D surface of the casting geometry, so that the model can be directly embedded into the 3D space to learn the geometric feature representation. Finally, the training labels are automatically generated for deep learning based on an existing RGB-D reconstruction algorithm, which provides the same global key matching descriptors. The experimental results show that the matching accuracy of our network is 92.2% for automobile castings, and the closed-loop rate is about 74.0% when the matching tolerance threshold τ is 0.2. The matching descriptors performed well, retaining 81.6% matching accuracy at 95% closed loop. For sparse geometric castings with initial matching failure, the 3D matching object can be reconstructed robustly by training the key descriptors. Our method performs 3D reconstruction robustly for complex automobile castings.

  2. Online Algorithms for Adaptive Optimization in Heterogeneous Delay Tolerant Networks

    Directory of Open Access Journals (Sweden)

    Wissam Chahin

    2013-12-01

    Full Text Available Delay Tolerant Networks (DTNs) are an emerging type of network which does not need a predefined infrastructure. In fact, data forwarding in DTNs relies on the contacts among nodes, which may possess different features, radio ranges, battery consumption and radio interfaces. On the other hand, efficient message delivery under limited resources, e.g., battery or storage, requires optimized forwarding policies. We tackle optimal forwarding control for a DTN composed of nodes of different types, forming a so-called heterogeneous network. Using our model, we characterize the optimal policies and provide a suitable framework to design a new class of multi-dimensional stochastic approximation algorithms working for heterogeneous DTNs. Crucially, our proposed algorithms drive the source node online to the optimal operating point without requiring explicit estimation of network parameters. A thorough analysis of the convergence properties and stability of our algorithms is presented.

  3. Node-Dependence-Based Dynamic Incentive Algorithm in Opportunistic Networks

    Directory of Open Access Journals (Sweden)

    Ruiyun Yu

    2014-01-01

    Full Text Available Opportunistic networks lack end-to-end paths between source nodes and destination nodes, so communications are mainly carried out by the “store-carry-forward” strategy. Selfish behaviors of rejecting packet relay requests will severely worsen network performance. Incentives are an efficient way to reduce selfish behaviors and hence improve the reliability and robustness of the networks. In this paper, we propose the node-dependence-based dynamic gaming incentive (NDI) algorithm, which exploits dynamic repeated gaming to motivate nodes to relay packets for other nodes. The NDI algorithm presents a mechanism for tolerating selfish behaviors of nodes. Reward and punishment methods are also designed based on the node dependence degree. Simulation results show that the NDI algorithm is effective in increasing the delivery ratio and decreasing average latency when there are many selfish nodes in the opportunistic networks.

  4. Real-world experimentation of distributed DSA network algorithms

    DEFF Research Database (Denmark)

    Tonelli, Oscar; Berardinelli, Gilberto; Tavares, Fernando Menezes Leitão

    2013-01-01

    The problem of spectrum scarcity in uncoordinated and/or heterogeneous wireless networks is the key aspect driving the research in the field of flexible management of frequency resources. In particular, distributed dynamic spectrum access (DSA) algorithms enable an efficient sharing of the available spectrum by nodes in a network, without centralized coordination. While proof-of-concept and statistical validation of such algorithms is typically achieved by using system level simulations, experimental activities are valuable contributions for the investigation of particular aspects such as a dynamic propagation environment, human presence impact and terminals mobility. This chapter focuses on the practical aspects related to the real-world experimentation with distributed DSA network algorithms over a testbed network. Challenges and solutions are extensively discussed, from the testbed design...

  5. Community Clustering Algorithm in Complex Networks Based on Microcommunity Fusion

    Directory of Open Access Journals (Sweden)

    Jin Qi

    2015-01-01

    Full Text Available With further research in recent years on the physical meaning and numerical features of community structure in complex networks, improving the effectiveness and efficiency of community mining algorithms has become an important subject in this area. This paper puts forward the concept of the microcommunity and obtains the final community mining results by fusing different microcommunities. The paper starts with the basic definition of the network community and applies expansion to microcommunity clustering, which provides the prerequisites for microcommunity fusion. Analysis of test results based on network data sets shows that the proposed algorithm is more efficient and has higher solution quality compared with other similar algorithms.

  6. Using network properties to evaluate targeted immunization algorithms

    Directory of Open Access Journals (Sweden)

    Bita Shams

    2014-09-01

    Full Text Available Immunization of complex networks with a minimal or limited budget is a challenging issue for the research community. In spite of a large literature on network immunization, no comprehensive research has been conducted on the evaluation and comparison of immunization algorithms. In this paper, we propose an evaluation framework for immunization algorithms regarding the available amount of vaccination resources, the goal of the immunization program, and time complexity. The evaluation framework is designed based on network topological metrics, which is extensible to all epidemic spreading models. Applying the evaluation framework to well-known targeted immunization algorithms shows that, in general, immunization based on PageRank centrality outperforms other targeting strategies in various types of networks, whereas closeness and eigenvector centrality exhibit the worst-case performance.
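
    As an illustration of the strategy the framework found strongest, the sketch below ranks nodes by PageRank via power iteration and returns the top ones as the immunization set; the budget and damping values are placeholders:

```python
import numpy as np

def pagerank_immunize(adjacency, budget, damping=0.85, n_iter=100):
    """Rank nodes by PageRank and select the top `budget` for vaccination."""
    A = np.asarray(adjacency, dtype=float)
    n = len(A)
    out = A.sum(axis=1)
    # column-stochastic transition matrix; dangling nodes jump uniformly
    P = np.where(out[:, None] > 0,
                 A / np.maximum(out[:, None], 1e-12),
                 1.0 / n).T
    r = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        r = damping * P @ r + (1 - damping) / n   # power-iteration update
    return np.argsort(r)[::-1][:budget]           # node indices to immunize
```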

  7. A scale-space curvature matching algorithm for the reconstruction of complex proximal humeral fractures.

    Science.gov (United States)

    Vlachopoulos, Lazaros; Székely, Gábor; Gerber, Christian; Fürnstahl, Philipp

    2018-01-01

    The optimal surgical treatment of complex fractures of the proximal humerus is controversial. It is proven that best results are obtained if an anatomical reduction of the fragments is achieved and, therefore, computer-assisted methods have been proposed for the reconstruction of the fractures. However, complex fractures of the proximal humerus are commonly accompanied with a relevant displacement of the fragments and, therefore, algorithms relying on the initial position of the fragments might fail. The state-of-the-art algorithm for complex fractures of the proximal humerus requires the acquisition of a CT scan of the (healthy) contralateral anatomy as a reconstruction template to address the displacement of the fragments. Pose-invariant fracture line based reconstruction algorithms have been applied successful for reassembling broken vessels in archaeology. Nevertheless, the extraction of the fracture lines and the necessary computation of their curvature are susceptible to noise and make the application of previous approaches difficult or even impossible for bone fractures close to the joints, where the cortical layer is thin. We present a novel scale-space representation of the curvature, permitting to calculate the correct alignment between bone fragments solely based on corresponding regions of the fracture lines. The fractures of the proximal humerus are automatically reconstructed based on iterative pairwise reduction of the fragments. The validation of the presented method was performed on twelve clinical cases, surgically treated after complex proximal humeral fracture, and by cadaver experiments. The accuracy of our approach was compared to the state-of-the-art algorithm for complex fractures of the proximal humerus. All reconstructions of the clinical cases resulted in an accurate approximation of the pre-traumatic anatomy. The accuracy of the reconstructed cadaver cases outperformed the current state-of-the-art algorithm. Copyright © 2017 Elsevier B

  8. Properties of healthcare teaming networks as a function of network construction algorithms.

    Science.gov (United States)

    Zand, Martin S; Trayhan, Melissa; Farooq, Samir A; Fucile, Christopher; Ghoshal, Gourab; White, Robert J; Quill, Caroline M; Rosenberg, Alexander; Barbosa, Hugo Serrano; Bush, Kristen; Chafi, Hassan; Boudreau, Timothy

    2017-01-01

    Network models of healthcare systems can be used to examine how providers collaborate, communicate, refer patients to each other, and to map how patients traverse the network of providers. Most healthcare service network models have been constructed from patient claims data, using billing claims to link a patient with a specific provider in time. The data sets can be quite large (10^6 to 10^8 individual claims per year), making standard methods for network construction computationally challenging and thus requiring the use of alternate construction algorithms. While these alternate methods have seen increasing use in generating healthcare networks, there is little to no literature comparing the differences in the structural properties of the generated networks, which, as we demonstrate, can be dramatically different. To address this issue, we compared the properties of healthcare networks constructed using different algorithms from 2013 Medicare Part B outpatient claims data. Three different algorithms were compared: binning, sliding frame, and trace-route. Unipartite networks linking either providers or healthcare organizations by shared patients were built using each method. We find that each algorithm produced networks with substantially different topological properties, as reflected by numbers of edges, network density, assortativity, clustering coefficients and other structural measures. Provider networks adhered to a power law, while organization networks were best fit by a power law with exponential cutoff. Censoring networks to exclude edges with fewer than 11 shared patients, a common de-identification practice for healthcare network data, markedly reduced edge numbers and network density, and greatly altered measures of vertex prominence such as the betweenness centrality. Data analysis identified patterns in the distance patients travel between network providers, and a striking set of teaming relationships between providers in the Northeast United States and
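
    A simplified reading of the binning construction, assuming claims are reduced to (patient, provider, day) tuples: providers are linked when they both have a claim for the same patient inside the same fixed time window. The sliding-frame and trace-route variants differ in how this temporal co-occurrence is defined, which is one source of the structural differences the study reports; all names below are illustrative:

```python
from collections import defaultdict
from itertools import combinations

def binned_teaming_network(claims, bin_days=30):
    """Edge weights = number of distinct shared patients per provider pair.

    claims : iterable of (patient_id, provider_id, day) tuples,
             with `day` as an integer day index.
    """
    seen = defaultdict(set)                 # (patient, time bin) -> providers
    for patient, provider, day in claims:
        seen[(patient, day // bin_days)].add(provider)
    shared = defaultdict(set)               # provider pair -> shared patients
    for (patient, _), providers in seen.items():
        for u, v in combinations(sorted(providers), 2):
            shared[(u, v)].add(patient)
    return {edge: len(p) for edge, p in shared.items()}
```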

  9. Performance evaluation of wavelet-based ECG compression algorithms for telecardiology application over CDMA network.

    Science.gov (United States)

    Kim, Byung S; Yoo, Sun K

    2007-09-01

    The use of wireless networks bears great practical importance in instantaneous transmission of ECG signals during movement. In this paper, three typical wavelet-based ECG compression algorithms, Rajoub (RA), Embedded Zerotree Wavelet (EZ), and Wavelet Transform Higher-Order Statistics Coding (WH), were evaluated to find an appropriate ECG compression algorithm for scalable and reliable wireless tele-cardiology applications, particularly over a CDMA network. The short-term and long-term performance characteristics of the three algorithms were analyzed using normal, abnormal, and measurement noise-contaminated ECG signals from the MIT-BIH database. In addition to the processing delay measurement, compression efficiency and reconstruction sensitivity to error were also evaluated via simulation models including the noise-free channel model, random noise channel model, and CDMA channel model, as well as over an actual CDMA network currently operating in Korea. This study found that the EZ algorithm achieves the best compression efficiency within a low-noise environment, and that the WH algorithm is competitive for use in high-error environments with degraded short-term performance with abnormal or contaminated ECG signals.
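
    The common skeleton of such wavelet codecs is transform, threshold, encode; the sketch below (using the PyWavelets package, with an arbitrary db4 / 5-level / keep-5% setting, all assumptions) shows only the transform-and-threshold part, since the three compared algorithms differ mainly in how the surviving coefficients are encoded:

```python
import numpy as np
import pywt  # PyWavelets, an assumed third-party dependency

def wavelet_compress(ecg, wavelet="db4", level=5, keep=0.05):
    """Keep the largest `keep` fraction of coefficients, zero the rest."""
    coeffs = pywt.wavedec(ecg, wavelet, level=level)
    flat = np.concatenate([np.abs(c) for c in coeffs])
    thresh = np.quantile(flat, 1.0 - keep)                 # magnitude cut-off
    kept = [np.where(np.abs(c) >= thresh, c, 0.0) for c in coeffs]
    recon = pywt.waverec(kept, wavelet)
    return recon[:len(ecg)]     # waverec may pad odd-length signals by one
```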

  10. Information Dynamics in Networks: Models and Algorithms

    Science.gov (United States)

    2016-09-13

    Value-based network externalities and optimal auction design. In Web and Internet Economics - 10th International Conference, WINE 2014, Beijing, China, December 14-17, 2014, pages 147-160.

  11. Fast direct fourier reconstruction of radial and PROPELLER MRI data using the chirp transform algorithm on graphics hardware.

    Science.gov (United States)

    Feng, Yanqiu; Song, Yanli; Wang, Cong; Xin, Xuegang; Feng, Qianjin; Chen, Wufan

    2013-10-01

    To develop and test a new algorithm for fast direct Fourier transform (DrFT) reconstruction of MR data on non-Cartesian trajectories composed of lines with equally spaced points. The DrFT, which is normally used as a reference in evaluating the accuracy of other reconstruction methods, can reconstruct images directly from non-Cartesian MR data without interpolation. However, DrFT reconstruction involves substantially intensive computation, which makes the DrFT impractical for routine clinical applications. In this article, the chirp transform algorithm (CTA) is introduced to accelerate the DrFT reconstruction of radial and Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction (PROPELLER) MRI data located on trajectories that are composed of lines with equally spaced points. The performance of the proposed CTA-DrFT algorithm was evaluated using simulation and in vivo MRI data. After implementing the algorithm on a graphics processing unit, the proposed CTA-DrFT algorithm achieved an acceleration of approximately one order of magnitude, and the speed-up factor was further increased to approximately three orders of magnitude compared with the traditional single-thread DrFT reconstruction. Implementing the CTA-DrFT algorithm on the graphics processing unit enables efficient DrFT reconstruction of radial and PROPELLER MRI data. Copyright © 2012 Wiley Periodicals, Inc.
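
    The chirp transform trick re-expresses the frequency-domain product nk as (n^2 + k^2 - (k-n)^2)/2, turning each line of equally spaced samples into a convolution computable with three FFTs. A NumPy sketch with illustrative parameter names (recent SciPy versions also ship scipy.signal.czt, which can serve as a cross-check):

```python
import numpy as np

def chirp_transform(x, f0, df, m):
    """Evaluate X[k] = sum_n x[n] * exp(-2j*pi*(f0 + k*df)*n), k = 0..m-1."""
    def w_half(j):                      # w^(j^2 / 2) with w = exp(-2j*pi*df)
        return np.exp(-1j * np.pi * df * np.asarray(j, dtype=float) ** 2)
    n = len(x)
    nn, kk = np.arange(n), np.arange(m)
    g = x * np.exp(-2j * np.pi * f0 * nn) * w_half(nn)   # chirp premodulation
    L = 1 << int(np.ceil(np.log2(n + m - 1)))            # pad for linear conv.
    h = np.zeros(L, dtype=complex)
    h[:m] = w_half(kk).conj()                            # kernel w^(-j^2/2), j >= 0
    h[L - n + 1:] = w_half(np.arange(n - 1, 0, -1)).conj()   # negative lags
    conv = np.fft.ifft(np.fft.fft(g, L) * np.fft.fft(h))
    return w_half(kk) * conv[:m]                         # chirp postmodulation
```

    With f0 = 0, df = 1/len(x) and m = len(x) this reproduces np.fft.fft(x), which is a convenient correctness check.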

  12. Fast Parallel Algorithms for Graphs and Networks

    Science.gov (United States)

    1987-12-01


  13. Incremental Centrality Algorithms for Dynamic Network Analysis

    Science.gov (United States)

    2013-08-01

    A run-time of O(m + n log n) can be achieved by implementing the priority queue using a Fibonacci heap [127]. When Dijkstra's algorithm is invoked

  14. Optimization of Proton CT Detector System and Image Reconstruction Algorithm for On-Line Proton Therapy.

    Directory of Open Access Journals (Sweden)

    Chae Young Lee

    Full Text Available The purposes of this study were to optimize a proton computed tomography (pCT) system for proton range verification and to confirm the pCT image reconstruction algorithm based on projection images generated with the optimized parameters. For this purpose, we developed a new pCT scanner using the Geometry and Tracking (GEANT) 4.9.6 simulation toolkit. GEANT4 simulations were performed to optimize the geometric parameters representing the detector thickness and the distance between the detectors. The system consisted of four silicon strip detectors for particle tracking and a calorimeter to measure the residual energies of the individual protons. The optimized pCT system design was then adjusted to ensure that the solution to a CS-based convex optimization problem would converge to yield the desired pCT images after a reasonable number of iterative corrections. In particular, we used a total variation-based formulation that has been useful in exploiting prior knowledge about the minimal variations of proton attenuation characteristics in the human body. Examinations performed using our CS algorithm showed that high-quality pCT images could be reconstructed using sets of 72 projections within 20 iterations and without any streaks or noise, which can be caused by under-sampling and proton starvation. Moreover, the images yielded by this CS algorithm were of higher quality than those obtained using other reconstruction algorithms. The optimized pCT scanner system demonstrated the potential to perform high-quality pCT during on-line image-guided proton therapy, without increasing the imaging dose, by applying our CS-based proton CT reconstruction algorithm, making the optimized detector system and reconstruction algorithm potentially useful in on-line proton therapy.

  15. An Energy Consumption Optimized Clustering Algorithm for Radar Sensor Networks Based on an Ant Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Jiang Ting

    2010-01-01

    Full Text Available We optimize the cluster structure to solve problems such as the uneven energy consumption of the radar sensor nodes and random cluster head selection in the traditional clustering routing algorithm. According to the defined cost function for clusters, we present a clustering algorithm that is based on free-space radio path loss. In addition, we propose energy and distance pheromones based on the residual energy and aggregation of the radar sensor nodes. Building on this bionic heuristic, a new ant colony-based clustering algorithm for radar sensor networks is also proposed. Simulation results show that this algorithm achieves a better balance of energy consumption and thus remarkably prolongs the lifetime of the radar sensor network.

  16. The guitar chord-generating algorithm based on complex network

    Science.gov (United States)

    Ren, Tao; Wang, Yi-fan; Du, Dan; Liu, Miao-miao; Siddiqi, Awais

    2016-02-01

    This paper aims to generate chords for popular songs automatically based on complex networks. Firstly, according to the characteristics of guitar tablature, six chord networks of popular songs by six pop singers are constructed and the properties of all networks are summarized. By analyzing the diverse chord networks, the accompaniment rules and features are revealed, with which chords can be generated automatically. Secondly, in terms of the characteristics of popular songs, a two-tiered network containing a verse network and a chorus network is constructed. With this network, the verse and chorus can be composed separately with the random walk algorithm. Thirdly, the musical motif is considered when generating chords, with which bad chord progressions can be revised. This makes the accompaniments sound more melodious. Finally, a popular song is chosen for chord generation, and the newly generated accompaniment sounds better than the original composers' version.
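
    A minimal sketch of the random-walk idea: estimate a weighted chord-transition network from chord bigrams in a corpus, then walk it to emit a progression. The toy corpus and chord names below are placeholders, not data from the paper.

        import random
        from collections import defaultdict

        def build_transition_network(progressions):
            # count chord bigrams; counts act as edge weights of the chord network
            edges = defaultdict(lambda: defaultdict(int))
            for prog in progressions:
                for a, b in zip(prog, prog[1:]):
                    edges[a][b] += 1
            return edges

        def random_walk(edges, start, length):
            chord, out = start, [start]
            for _ in range(length - 1):
                nxt = list(edges[chord].keys())
                wts = list(edges[chord].values())
                chord = random.choices(nxt, weights=wts)[0]  # weighted step
                out.append(chord)
            return out

        songs = [["C", "Am", "F", "G", "C"], ["C", "F", "G", "C"]]  # toy corpus
        print(random_walk(build_transition_network(songs), "C", 8))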

  17. Engine cylinder pressure reconstruction using crank kinematics and recurrently-trained neural networks

    Science.gov (United States)

    Bennett, C.; Dunne, J. F.; Trimby, S.; Richardson, D.

    2017-02-01

    A recurrent non-linear autoregressive with exogenous input (NARX) neural network is proposed, and a suitable fully-recurrent training methodology is adapted and tuned, for reconstructing cylinder pressure in multi-cylinder IC engines using measured crank kinematics. This type of indirect sensing is important for cost-effective closed-loop combustion control and for On-Board Diagnostics. The challenge addressed is to accurately predict cylinder pressure traces within the cycle under generalisation conditions: i.e. using data not previously seen by the network during training. This involves direct construction and calibration of a suitable inverse crank dynamic model, which, owing to singular behaviour at top-dead-centre (TDC), has proved difficult via physical model construction, calibration, and inversion. The NARX architecture is specialised and adapted to cylinder pressure reconstruction, using a fully-recurrent training methodology which is needed because the alternatives are too slow and unreliable for practical network training on production engines. The fully-recurrent Robust Adaptive Gradient Descent (RAGD) algorithm is tuned initially using synthesised crank kinematics, and then tested on real engine data to assess the reconstruction capability. Real data were obtained from a 1.125 l, 3-cylinder, in-line, direct injection spark ignition (DISI) engine involving synchronised measurements of crank kinematics and cylinder pressure across a range of steady-state speed and load conditions. The paper shows that a RAGD-trained NARX network using both crank velocity and crank acceleration as input information provides fast and robust training. By using the optimum epoch identified during RAGD training, acceptably accurate cylinder pressures, and especially accurate location-of-peak-pressure, can be reconstructed robustly under generalisation conditions, making it the most practical NARX configuration and recurrent training methodology for use on production engines.

  18. A novel hybrid reconstruction algorithm for first generation incoherent scatter CT (ISCT) of large objects with potential medical imaging applications.

    Science.gov (United States)

    Alpuche Aviles, Jorge E; Pistorius, Stephen; Gordon, Richard; Elbakri, Idris A

    2011-01-01

    This work presents a first-generation incoherent scatter CT (ISCT) hybrid (analytic-iterative) reconstruction algorithm for accurate electron density (ρe) imaging of objects with clinically relevant sizes. The algorithm reconstructs quantitative images of ρe within a few iterations, avoiding the challenges of optimization-based reconstruction algorithms while addressing the limitations of current analytical algorithms. A 4π detector is conceptualized in order to address the issue of directional dependency and is then replaced with a ring of detectors which detect a constant fraction of the scattered photons. The ISCT algorithm corrects for the attenuation of photons using a limited number of iterations and filtered back projection (FBP) for image reconstruction. This results in a hybrid reconstruction algorithm that was tested with sinograms generated by Monte Carlo (MC) and analytical (AN) simulations. Results show that the ISCT algorithm is weakly dependent on the initial ρe estimate. Simulation results show that the proposed algorithm reconstructs ρe images with a mean error of -1% ± 3% for the AN model and from -6% to -8% for the MC model. Finally, the algorithm is capable of reconstructing qualitatively good images even in the presence of multiple scatter. The proposed algorithm would be suitable for in-vivo medical imaging as long as practical limitations can be addressed. © 2011 – IOS Press and the authors. All rights reserved

  19. Rapid 2D phase-contrast magnetic resonance angiography reconstruction algorithm via compressed sensing

    Science.gov (United States)

    Lee, Dong-Hoon; Hong, Cheol-Pyo; Lee, Man-Woo; Han, Bong-Soo

    2013-09-01

    Phase-contrast magnetic resonance angiography (PC MRA) is an excellent technique for visualization of venous vessels. However, the scan time of PC MRA is long compared with that of other MRA techniques. Recently, the potential of compressed sensing (CS) reconstruction to reduce the scan time in MR image acquisition using a sparse sampling dataset has become an active field of study. In this study, we propose a combination method applying CS reconstruction to 2D PC MRA. This work was performed to enable faster 2D PC MRA acquisition and to demonstrate its feasibility. We used a 0.32 T MR imaging (MRI) system and a total variation (TV)-based CS reconstruction algorithm. To validate the usefulness of the proposed reconstruction method, we used visual assessment of the reconstructed images and measured quantitative image metrics for sampling rates from 12.5 to 75.0%. Based on our results, as the sampling ratio increases, images reconstructed with the CS method reach a level of image quality similar to that of fully sampled reconstructions. The signal-to-noise ratio (SNR) and the contrast-to-noise ratio (CNR) were also closer to the reference values when the sampling ratio was increased. We confirmed the feasibility of 2D PC MRA with the CS reconstruction method. Our results provide evidence that this method can improve the time resolution of 2D PC MRA.

  20. Near-infrared optical imaging of human brain based on the semi-3D reconstruction algorithm

    Science.gov (United States)

    Liu, Ming; Meng, Wei; Qin, Zhuanping; Zhou, Xiaoqing; Zhao, Huijuan; Gao, Feng

    2013-03-01

    In non-invasive brain imaging with near-infrared light, a precise head model is of great significance to the forward model and the image reconstruction. To deal with the individual differences of human head tissues and the problem of irregular curvature, in this paper we extracted the head structure from the MRI image of a volunteer with the Mimics software. This scheme makes it possible to assign optical parameters to every layer of the head tissues reasonably and to solve the diffusion equation with finite-element analysis. In the solution of the inverse problem, a semi-3D reconstruction algorithm is adopted to trade off computation cost and accuracy between the full 3-D and the 2-D reconstructions. In this scheme, the changes in the optical properties of the inclusions are assumed either axially invariant or confined to the imaging plane, while the 3-D nature of the photon migration is still retained. This leads to a 2-D inverse problem with a matched 3-D forward model. Simulation results show that, compared to the full 3-D reconstruction algorithm, the semi-3D reconstruction algorithm cuts the computation time by 27%.

  1. Reconstructing networks of pathways via significance analysis of their intersections

    Directory of Open Access Journals (Sweden)

    Francesconi Mirko

    2008-04-01

    Full Text Available Abstract Background: Significance analysis at the single-gene level may suffer from the limited number of samples and from experimental noise, which can severely reduce the power of the chosen statistical test. This problem is typically approached by applying post hoc corrections to control the false discovery rate, without taking into account prior biological knowledge. Pathway or gene ontology analysis can provide an alternative way to relax the significance threshold applied to single genes and may lead to a better biological interpretation. Results: Here we propose a new analysis method based on the study of networks of pathways. These networks are reconstructed considering both the significance of single pathways (network nodes) and the intersections between them (links). We apply this method for the reconstruction of networks of pathways to two gene expression datasets: the first obtained from a c-Myc rat fibroblast cell line expressing a conditional Myc-estrogen receptor oncoprotein; the second obtained from the comparison of Acute Myeloid Leukemia and Acute Lymphoblastic Leukemia derived from bone marrow samples. Conclusion: Our method extends statistical models that have recently been adopted for the significance analysis of functional groups of genes to infer links between these groups. We show that groups of genes at the interface between different pathways can be considered relevant even if the pathways they belong to are not significant by themselves.

  2. Computed Tomography Radiation Dose Reduction: Effect of Different Iterative Reconstruction Algorithms on Image Quality

    NARCIS (Netherlands)

    Willemink, M.J.; Takx, R.A.P.; Jong, P.A. de; Budde, R.P.; Bleys, R.L.; Das, M.; Wildberger, J.E.; Prokop, M.; Buls, N.; Mey, J. de; Leiner, T.; Schilham, A.M.

    2014-01-01

    We evaluated the effects of hybrid and model-based iterative reconstruction (IR) algorithms from different vendors at multiple radiation dose levels on the image quality of chest phantom scans. A chest phantom was scanned on state-of-the-art computed tomography scanners from 4 vendors at 4 dose levels.

  3. Reconstruction of two interfering wavefronts using Zernike polynomials and stochastic parallel gradient descent algorithm.

    Science.gov (United States)

    Yazdani, Roghayeh; Fallah, Hamid R; Hajimahmoodzadeh, Morteza

    2014-03-15

    We numerically and experimentally demonstrate an iterative method to simultaneously reconstruct two unknown interfering wavefronts. A three-dimensional interference pattern is analyzed and then Zernike polynomials and the stochastic parallel gradient descent algorithm are used to expand and calculate wavefronts.
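
    The SPGD update at the heart of this approach is compact enough to sketch. Below, all coefficients are perturbed in parallel and the change in an error metric J drives the update; this is a schematic of the generic SPGD iteration, not the authors' implementation, and the metric J, gain, and coefficient count are placeholders.

        import numpy as np

        def spgd(J, n_coeffs, gain=0.5, delta=1e-3, iters=2000):
            # J: callable mapping a coefficient vector (e.g., Zernike coefficients)
            # to a scalar error metric; this loop minimizes J
            a = np.zeros(n_coeffs)
            for _ in range(iters):
                da = delta * np.random.choice([-1.0, 1.0], n_coeffs)  # parallel perturbation
                dJ = J(a + da) - J(a - da)     # two-sided metric change
                a -= gain * dJ * da            # step along the stochastic gradient estimate
            return a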

  4. Invariant mass determination using the output from a specialized electron track reconstruction algorithm

    OpenAIRE

    Gjersdal, Håvard

    2008-01-01

    The Gaussian sum filter is a track reconstruction algorithm specialized on dealing with the non-Gaussian radiative energy loss of electrons. This thesis deals with invariant mass determination from the non-Gaussian track estimates produced by the Gaussian sum filter.

  5. Quantum Google algorithm. Construction and application to complex networks

    Science.gov (United States)

    Paparo, G. D.; Müller, M.; Comellas, F.; Martin-Delgado, M. A.

    2014-07-01

    We review the main findings on the ranking capabilities of the recently proposed Quantum PageRank algorithm (G.D. Paparo et al., Sci. Rep. 2, 444 (2012) and G.D. Paparo et al., Sci. Rep. 3, 2773 (2013)) applied to large complex networks. The algorithm has been shown to identify unambiguously the underlying topology of the network and to be capable of clearly highlighting the structure of secondary hubs of networks. Furthermore, it can resolve the degeneracy in importance of the low-lying part of the list of rankings. Examples of applications include real-world instances from the WWW, which typically display a scale-free network structure and models of hierarchical networks. The quantum algorithm has been shown to display an increased stability with respect to a variation of the damping parameter, present in the Google algorithm, and a more clearly pronounced power-law behaviour in the distribution of importance among the nodes, as compared to the classical algorithm.
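
    For reference, the classical counterpart is compact: the PageRank power iteration with damping parameter alpha, the same parameter whose variation is discussed above. A minimal NumPy sketch, assuming a dense 0/1 adjacency matrix:

        import numpy as np

        def pagerank(adj, alpha=0.85, tol=1e-10):
            # adj[i, j] = 1 if node i links to node j
            P = adj.astype(float)
            dangling = P.sum(axis=1) == 0
            P[dangling] = 1.0                      # dangling nodes jump uniformly
            P /= P.sum(axis=1, keepdims=True)      # row-stochastic transition matrix
            n = P.shape[0]
            r = np.full(n, 1.0 / n)
            while True:
                r_new = alpha * (r @ P) + (1 - alpha) / n
                if np.abs(r_new - r).sum() < tol:
                    return r_new
                r = r_new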

  6. Performance evaluation of power control algorithms in wireless cellular networks

    Science.gov (United States)

    Temaneh-Nyah, C.; Iita, V.

    2014-10-01

    Power control in a mobile communication network aims to set the transmission power levels in such a way that the required quality of service (QoS) for the users is guaranteed with the lowest possible transmission powers. Most studies of power control algorithms in the literature are based on simplifying assumptions that compromise the validity of the results when applied in a real environment. In this paper, a CDMA network was simulated. The real environment was accounted for by defining the analysis area; the base stations and mobile stations were specified by their geographical coordinates, and the mobility of the mobile stations was taken into account. The simulation also allowed a number of network parameters, including the network traffic and the wireless channel models, to be modified. Finally, we present the simulation results of a convergence-speed-based comparative analysis of three uplink power control algorithms.
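
    For context, a minimal sketch of one classic distributed uplink power-control iteration (the Foschini-Miljanic update) of the kind such comparisons typically cover; the abstract does not name the three algorithms it evaluates, so the link gains, noise levels, and SINR target below are purely illustrative.

        import numpy as np

        def distributed_power_control(G, noise, gamma_target, iters=50):
            # G[i, j]: link gain from transmitter j to receiver i (G[i, i] is the own link)
            n = len(noise)
            p = np.ones(n)                                  # initial transmit powers
            for _ in range(iters):
                interference = G @ p - np.diag(G) * p + noise
                sinr = np.diag(G) * p / interference
                p = (gamma_target / sinr) * p               # Foschini-Miljanic update
            return p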

  7. Optical correlation algorithm for reconstructing phase skeleton of complex optical fields for solving the phase problem

    DEFF Research Database (Denmark)

    Angelsky, O. V.; Gorsky, M. P.; Hanson, Steen Grüner

    2014-01-01

    We propose an optical correlation algorithm illustrating a new general method for reconstructing the phase skeleton of complex optical fields from the measured two-dimensional intensity distribution. The core of the algorithm consists in locating the saddle points of the intensity distribution and connecting such points into nets by the lines of intensity gradient that are closely associated with the equi-phase lines of the field. This algorithm provides a new partial solution to the inverse problem in optics commonly referred to as the phase problem.

  8. TV-constrained incremental algorithms for low-intensity CT image reconstruction

    DEFF Research Database (Denmark)

    Rose, Sean D.; Andersen, Martin S.; Sidky, Emil Y.

    2015-01-01

    Low-dose X-ray computed tomography (CT) has garnered much recent interest as it provides a method to lower patient dose and simultaneously reduce scan time. In non-medical applications the possibility of preventing sample damage makes low-dose CT desirable. Reconstruction in low-dose CT poses a significant challenge due to the high level of noise in the data. Here we propose an iterative method for reconstruction which minimizes the transmission Poisson likelihood subject to a total-variation constraint. This formulation accommodates efficient methods of parameter selection because the choice of TV constraint can be guided by an image reconstructed by filtered backprojection (FBP). We apply our algorithm to low-dose synchrotron X-ray CT data from the Advanced Photon Source (APS) at Argonne National Labs (ANL) to demonstrate its potential utility. We find that the algorithm provides a means of edge...

  9. Algorithmic and analytical methods in network biology

    OpenAIRE

    Koyutürk, Mehmet

    2010-01-01

    During the genomic revolution, algorithmic and analytical methods for organizing, integrating, analyzing, and querying biological sequence data proved invaluable. Today, the increasing availability of high-throughput data pertaining to the functional states of biomolecules, as well as their interactions, enables genome-scale studies of the cell from a systems perspective. The past decade witnessed significant efforts on the development of computational infrastructure for large-scale modeling and analysis of...

  10. Algorithms for Scheduling and Network Problems

    Science.gov (United States)

    1991-09-01

  11. Protein complexes predictions within protein interaction networks using genetic algorithms.

    Science.gov (United States)

    Ramadan, Emad; Naef, Ahmed; Ahmed, Moataz

    2016-07-25

    Protein-protein interaction networks are receiving increased attention due to their importance in understanding life at the cellular level. A major challenge in systems biology is to understand the modular structure of such biological networks. Although clustering techniques have been proposed for clustering protein-protein interaction networks, those techniques suffer from some drawbacks. The application of earlier clustering techniques to protein-protein interaction networks in order to predict protein complexes within the networks does not yield good results due to the small-world and power-law properties of these networks. In this paper, we construct a new clustering algorithm for predicting protein complexes through the use of genetic algorithms. We design an objective function for exclusive clustering and overlapping clustering. We assess the quality of our proposed clustering algorithm using two gold-standard data sets. Our algorithm can identify protein complexes that are significantly enriched in the gold-standard data sets. Furthermore, our method surpasses three competing methods: MCL, ClusterOne, and MCODE in terms of the quality of the predicted complexes. The source code and accompanying examples are freely available at http://faculty.kfupm.edu.sa/ics/eramadan/GACluster.zip .

  12. Integrated Approach to Reconstruction of Microbial Regulatory Networks

    Energy Technology Data Exchange (ETDEWEB)

    Rodionov, Dmitry A [Sanford-Burnham Medical Research Institute; Novichkov, Pavel S [Lawrence Berkeley National Laboratory

    2013-11-04

    This project had the goal of developing an integrated bioinformatics platform for genome-scale inference and visualization of transcriptional regulatory networks (TRNs) in bacterial genomes. The work was done at the Sanford-Burnham Medical Research Institute (SBMRI, P.I. D.A. Rodionov) and Lawrence Berkeley National Laboratory (LBNL, co-P.I. P.S. Novichkov). The developed computational resources include: (1) the RegPredict web platform for TRN inference and regulon reconstruction in microbial genomes, and (2) the RegPrecise database for collection, visualization and comparative analysis of transcriptional regulons reconstructed by comparative genomics. These analytical resources were selected as key components in the DOE Systems Biology KnowledgeBase (SBKB). The high-quality data accumulated in RegPrecise will provide essential datasets of reference regulons in diverse microbes to enable automatic reconstruction of draft TRNs in newly sequenced genomes. We outline our progress toward the three aims of this grant proposal, which were: to develop an integrated platform for genome-scale regulon reconstruction; to infer regulatory annotations in several groups of bacteria and build reference collections of microbial regulons; and to develop a knowledgebase on microbial transcriptional regulation.

  13. Pre-clinical Positron Emission Tomography Reconstruction Algorithm Effect on Cu-64 ATSM Lesion Hypoxia

    Directory of Open Access Journals (Sweden)

    Bal Sanghera

    2016-02-01

    Full Text Available Objective: The application of distinct positron emission tomography (PET) scan reconstruction algorithms can lead to statistically significant differences in measured lesion functional properties. We examined the influence of two-dimensional filtered back projection (2D FBP), two-dimensional ordered subset expectation maximization (2D OSEM), and three-dimensional ordered subset expectation maximization without (3D OSEM) and with 3D maximum a posteriori (3D OSEM MAP) on lesion hypoxia tracer uptake using a pre-clinical PET scanner. Methods: Reconstructed images of a rodent tumor model bearing P22 carcinosarcoma injected with the hypoxia tracer Copper-64-diacetyl-bis(N4-methylthiosemicarbazone) (Cu-64 ATSM) were analyzed at 10-minute intervals up to 60 minutes post injection. Lesion maximum standardized uptake values (SUVmax) and SUVmax/background SUVmean (T/B) were recorded and investigated after application of multiple algorithm and reconstruction parameters to assess their influence on Cu-64 ATSM measurements and associated trends over time. Results: SUVmax exhibited convergence for OSEM reconstructions, while ANOVA results showed a significant difference in SUVmax or T/B between 2D FBP, 2D OSEM, 3D OSEM and 3D OSEM MAP reconstructions across all time frames. SUVmax and T/B were greatest in magnitude for 2D OSEM, followed by 3D OSEM MAP, 3D OSEM and then 2D FBP, at all time frames. Similarly, SUVmax and T/B standard deviations (SD) were lowest for 2D OSEM in comparison with the other algorithms. Conclusion: Significantly higher-magnitude lesion SUVmax and T/B combined with lower SD were observed using 2D OSEM reconstruction in comparison with the 2D FBP, 3D OSEM and 3D OSEM MAP algorithms at all time frames. The results are consistent with other published studies; however, more specimens are required for full validation.

  14. An Adaptive Filtering Algorithm Based on Genetic Algorithm-Backpropagation Network

    Directory of Open Access Journals (Sweden)

    Kai Hu

    2013-01-01

    Full Text Available A new image filtering algorithm, GA-BPN, is proposed; it uses a genetic algorithm (GA) to decide the weights in a backpropagation neural network (BPN). It has better global optimization characteristics than traditional optimization algorithms. In this paper, we use GA-BPN for image noise filtering. Firstly, training samples are used to train the GA-BPN as a noise detector. Then, the well-trained GA-BPN is utilized to recognize noise pixels in the target image. Finally, an adaptive weighted average algorithm is used to restore the noise pixels recognized by the GA-BPN. Experimental data show that this algorithm performs better than other filters.
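
    A minimal sketch of the GA component described above: evolving a population of candidate weight vectors under a fitness function that would, in the paper's setting, be the BPN's noise-detection error. The population size, mutation rate, and selection scheme here are illustrative choices, not the authors'.

        import numpy as np

        def ga_optimize(fitness, n_weights, pop=40, gens=100, mut=0.1):
            # fitness: callable on a weight vector, lower is better
            P = np.random.randn(pop, n_weights)
            for _ in range(gens):
                scores = np.array([fitness(w) for w in P])
                P = P[np.argsort(scores)][: pop // 2]          # selection: keep best half
                kids = []
                for _ in range(pop - len(P)):
                    a, b = P[np.random.randint(len(P), size=2)]
                    mask = np.random.rand(n_weights) < 0.5
                    child = np.where(mask, a, b)               # uniform crossover
                    child += mut * np.random.randn(n_weights)  # Gaussian mutation
                    kids.append(child)
                P = np.vstack([P, kids])
            return P[np.argmin([fitness(w) for w in P])]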

  15. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A., E-mail: anastasio@wustl.edu [Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri 63130 (United States); Yang, Deshan [Department of Radiation Oncology, School of Medicine, Washington University in St. Louis, St. Louis, Missouri 63110 (United States); Tan, Jun [Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, Texas 75390 (United States)

    2016-04-15

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves an O(1/k²) convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated
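
    For orientation, the underlying FISTA recursion is short; the sketch below applies it to a generic L1-regularized least-squares problem rather than to the paper's OS-SART-preconditioned CBCT subproblems. A is a matrix, L an upper bound on the largest eigenvalue of A^T A, and lam a placeholder regularization weight.

        import numpy as np

        def fista(A, b, lam, L, iters=100):
            # minimize 0.5 * ||A x - b||^2 + lam * ||x||_1
            x = np.zeros(A.shape[1])
            y, t = x.copy(), 1.0
            for _ in range(iters):
                grad = A.T @ (A @ y - b)
                z = y - grad / L
                x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)  # soft threshold
                t_new = (1 + np.sqrt(1 + 4 * t**2)) / 2
                y = x_new + ((t - 1) / t_new) * (x_new - x)              # momentum step
                x, t = x_new, t_new
            return x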

  16. Block Least Mean Squares Algorithm over Distributed Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    T. Panigrahi

    2012-01-01

    Full Text Available In a distributed parameter estimation problem, during each sampling instant, a typical sensor node communicates its estimate either by the diffusion algorithm or by the incremental algorithm. Both these conventional distributed algorithms involve significant communication overheads and, consequently, defeat the basic purpose of wireless sensor networks. In the present paper, we therefore propose two new distributed algorithms, namely, block diffusion least mean square (BDLMS and block incremental least mean square (BILMS by extending the concept of block adaptive filtering techniques to the distributed adaptation scenario. The performance analysis of the proposed BDLMS and BILMS algorithms has been carried out and found to have similar performances to those offered by conventional diffusion LMS and incremental LMS algorithms, respectively. The convergence analyses of the proposed algorithms obtained from the simulation study are also found to be in agreement with the theoretical analysis. The remarkable and interesting aspect of the proposed block-based algorithms is that their communication overheads per node and latencies are less than those of the conventional algorithms by a factor as high as the block size used in the algorithms.
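
    A minimal sketch of the block idea underlying BDLMS/BILMS: instead of updating after every sample, a node accumulates the LMS gradient over a block of B samples and applies one averaged update per block, so per-node communication drops roughly by the block size. The array shapes and step size mu are illustrative.

        import numpy as np

        def block_lms(X, d, mu=0.01, B=8):
            # X: (N, M) input regressors, d: (N,) desired signal, B: block size
            N, M = X.shape
            w = np.zeros(M)
            for start in range(0, N - B + 1, B):
                Xb, db = X[start:start + B], d[start:start + B]
                e = db - Xb @ w              # block of errors under current weights
                w += (mu / B) * Xb.T @ e     # one averaged-gradient update per block
            return w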

  17. Implementation and evaluation of two helical CT reconstruction algorithms in CIVA

    Science.gov (United States)

    Banjak, H.; Costin, M.; Vienne, C.; Kaftandjian, V.

    2016-02-01

    The large majority of industrial CT systems reconstruct the 3D volume by using an acquisition on a circular trajectory. However, when inspecting long objects which are highly anisotropic, this scanning geometry creates severe artifacts in the reconstruction. For this reason, the use of an advanced CT scanning method like helical data acquisition is an efficient way to address this aspect, known as the long-object problem. Recently, several analytically exact and quasi-exact inversion formulas for helical cone-beam reconstruction have been proposed. Among them, we identified two algorithms of interest for our case. These algorithms are exact and of filtered back-projection structure. In this work we implemented the filtered-backprojection (FBP) and backprojection-filtration (BPF) algorithms of Zou and Pan (2004). For performance evaluation, we present a numerical comparison of the two selected algorithms with the helical FDK algorithm using both complete (noiseless and noisy) and truncated data generated by CIVA (the simulation platform for non-destructive testing techniques developed at CEA).

  18. Modeling gene regulatory networks: A network simplification algorithm

    Science.gov (United States)

    Ferreira, Luiz Henrique O.; de Castro, Maria Clicia S.; da Silva, Fabricio A. B.

    2016-12-01

    Boolean networks have been used for some time to model Gene Regulatory Networks (GRNs), which describe cell functions. These models can help biologists make predictions and prognoses, and even design specialized treatments when some disturbance of the GRN leads to a disease condition. However, the amount of information related to a GRN can be huge, making the task of inferring its Boolean network representation quite a challenge. The method shown here takes into account information about the interactome to build a network where each node represents a protein, and uses the entropy of each node as a key to reduce the size of the network, allowing the subsequent inference process to focus only on the main protein hubs, the ones with the most potential to interfere in overall network behavior.

  19. Algorithmic Complexity and Reprogrammability of Chemical Structure Networks

    KAUST Repository

    Zenil, Hector

    2018-02-16

    Here we address the challenge of profiling causal properties and tracking the transformation of chemical compounds from an algorithmic perspective. We explore the potential of applying a computational interventional calculus based on the principles of algorithmic probability to chemical structure networks. We profile the sensitivity of the elements and covalent bonds in a chemical structure network algorithmically, asking whether reprogrammability affords information about the thermodynamic and chemical processes involved in the transformation of different compound classes. We arrive at numerical results suggesting a correspondence between some physical, structural and functional properties. Our methods are capable of separating chemical classes that reflect functional and natural differences without considering any information about atomic and molecular properties. We conclude that these methods, with their links to chemoinformatics via algorithmic probability, hold promise for future research.

  20. Extension of the modal wave-front reconstruction algorithm to non-uniform illumination.

    Science.gov (United States)

    Ma, Xiaoyu; Mu, Jie; Rao, ChangHui; Yang, Jinsheng; Rao, XueJun; Tian, Yu

    2014-06-30

    Attempts are made to eliminate the effects of non-uniform illumination on the precision of wave-front measurement. To achieve this, the relationship between the wave-front slope at a single sub-aperture and the distributions of the phase and light intensity of the wave-front were first analyzed to obtain the relevant theoretical formulae. Then, based on the principle of modal wave-front reconstruction, the influence of the light intensity distribution on the wave-front slope is introduced into the calculation of the reconstruction matrix. Experiments were conducted to prove that the corrected modal wave-front reconstruction algorithm improved the accuracy of wave-front reconstruction. Moreover, the correction is conducive to high-precision wave-front measurement using a Hartmann wave-front sensor in the presence of non-uniform illumination.

  1. Genetic algorithms applied to reconstructing coded imaging of neutrons and analysis of residual watermark.

    Science.gov (United States)

    Zhang, Tiankui; Hu, Huasi; Jia, Qinggang; Zhang, Fengna; Chen, Da; Li, Zhenghong; Wu, Yuelei; Liu, Zhihua; Hu, Guang; Guo, Wei

    2012-11-01

    Monte Carlo simulation of neutron coded imaging based on an encoding aperture, for a Z-pinch with a large field of view of 5 mm radius, has been investigated, and the coded image has been obtained. A reconstruction method for the source image based on genetic algorithms (GA) has been established. A "residual watermark", which emerges unavoidably in the reconstructed image when peak normalization is employed in the GA fitness calculation because of its amplification of statistical fluctuations, has been discovered and studied. The residual watermark is primarily related to the shape and other parameters of the encoding aperture cross section. The properties and essential causes of the residual watermark were analyzed, and a method for identifying the equivalent radius of the aperture was provided. By using the equivalent radius, the reconstruction can also be accomplished without knowing the point spread function (PSF) of the actual aperture. The reconstruction result is close to that obtained using the PSF of the actual aperture.

  2. Classifying epilepsy diseases using artificial neural networks and genetic algorithm.

    Science.gov (United States)

    Koçer, Sabri; Canal, M Rahmi

    2011-08-01

    In this study, FFT analysis is applied to the EEG signals of normal and patient subjects, and the obtained FFT coefficients are used as inputs to an Artificial Neural Network (ANN). The differences shown by non-stationary random signals such as EEG signals in health and sickness (epilepsy) were evaluated and analyzed under computer-supported conditions by using artificial neural networks. A Multi-Layer Perceptron (MLP) architecture is used with the Levenberg-Marquardt (LM), Quickprop (QP), Delta-bar-delta (DBD), Momentum, and Conjugate Gradient (CG) learning algorithms, and the best performance was sought by using genetic algorithms to optimize the weights, learning rates, and number of hidden-layer neurons in the training process. This study shows that the artificial neural network's classification performance increases when genetic algorithms are used.

  3. Algorithm for queueing networks with multi-rate traffic

    DEFF Research Database (Denmark)

    Iversen, Villy Bæk; Ko, King-Tim

    2011-01-01

    In this paper we present a new algorithm for evaluating queueing networks with multi-rate traffic. The detailed state space of a node is evaluated by explicit formulæ. We consider reversible nodes with multi-rate traffic and find the state probabilities by taking advantage of local balance. The theoretical basis is reversibility, which implies that the arrival process and the departure process are identical processes, for example state-dependent Poisson processes. Due to product form, an open network with multi-rate traffic is easy to evaluate by convolution algorithms because the nodes behave as independent nodes. For closed queueing networks with multiple servers in every node and multi-rate services we may apply a multidimensional convolution algorithm to aggregate the nodes so that we end up with two nodes, the aggregated node and a single node, for which we can calculate

  4. A Compressed Sensing-based Image Reconstruction Algorithm for Solar Flare X-Ray Observations

    Science.gov (United States)

    Felix, Simon; Bolzern, Roman; Battaglia, Marina

    2017-11-01

    One way of imaging X-ray emission from solar flares is to measure Fourier components of the spatial X-ray source distribution. We present a new compressed sensing-based algorithm named VIS_CS, which reconstructs the spatial distribution from such Fourier components. We demonstrate the application of the algorithm on synthetic and observed solar flare X-ray data from the Reuven Ramaty High Energy Solar Spectroscopic Imager satellite and compare its performance with existing algorithms. VIS_CS produces competitive results with accurate photometry and morphology, without requiring any algorithm- and X-ray-source-specific parameter tuning. Its robustness and performance make this algorithm ideally suited for the generation of quicklook images or large image cubes without user intervention, such as for imaging spectroscopy analysis.

  5. Parameter selection in limited data cone-beam CT reconstruction using edge-preserving total variation algorithms

    Science.gov (United States)

    Lohvithee, Manasavee; Biguri, Ander; Soleimani, Manuchehr

    2017-12-01

    There are a number of powerful total variation (TV) regularization methods that have great promise in limited data cone-beam CT reconstruction with an enhancement of image quality. These promising TV methods require careful selection of the image reconstruction parameters, for which there are no well-established criteria. This paper presents a comprehensive evaluation of parameter selection in a number of major TV-based reconstruction algorithms. An appropriate way of selecting the values for each individual parameter has been suggested. Finally, a new adaptive-weighted projection-controlled steepest descent (AwPCSD) algorithm is presented, which implements the edge-preserving function for CBCT reconstruction with limited data. The proposed algorithm shows significant robustness compared to three other existing algorithms: ASD-POCS, AwASD-POCS and PCSD. The proposed AwPCSD algorithm is able to preserve the edges of the reconstructed images better with fewer sensitive parameters to tune.

  6. Reconstruction of cellular forces in fibrous biopolymer network

    CERN Document Server

    Zhang, Yunsong; Heizler, Shay; Levine, Herbert

    2016-01-01

    How cells move through 3d extracellular matrix (ECM) is of increasing interest in attempts to understand important biological processes such as cancer metastasis. Just as in motion on 2d surfaces, it is expected that experimental measurements of cell-generated forces will provide valuable information for uncovering the mechanisms of cell migration. Here, we use a lattice-based mechanical model of ECM to study the cellular force reconstruction issue. We conceptually propose an efficient computational scheme to reconstruct cellular forces from the deformation and explore the performance of our scheme in presence of noise, varying marker bead distribution, varying bond stiffnesses and changing cell morphology. Our results show that micromechanical information, rather than merely the bulk rheology of the biopolymer networks, is essential for a precise recovery of cellular forces.

  7. Models and algorithms for biomolecules and molecular networks

    CERN Document Server

    DasGupta, Bhaskar

    2016-01-01

    By providing expositions to modeling principles, theories, computational solutions, and open problems, this reference presents a full scope on relevant biological phenomena, modeling frameworks, technical challenges, and algorithms:
    * Up-to-date developments of structures of biomolecules, systems biology, advanced models, and algorithms
    * Sampling techniques for estimating evolutionary rates and generating molecular structures
    * Accurate computation of probability landscape of stochastic networks, solving discrete chemical master equations

  8. Technical Note: Evaluation of an iterative reconstruction algorithm for optical CT radiation dosimetry.

    Science.gov (United States)

    Dekker, Kurtis H; Battista, Jerry J; Jordan, Kevin J

    2017-10-26

    Iterative CT reconstruction algorithms are gaining popularity as GPU-based computation becomes more accessible. These algorithms are desirable in x-ray CT for their ability to achieve similar image quality at a fraction of the dose required for standard filtered backprojection reconstructions. In optical CT dosimetry, the noise reduction capability of such algorithms is similarly desirable because noise has a detrimental effect on the precision of dosimetric analysis, and can create misleading test results. In this study, we evaluate an iterative CT reconstruction algorithm for gel dosimetry, with special attention to the challenging dosimetry of small fields. An existing ordered subsets convex algorithm using total variation minimization regularization (OSC-TV) was implemented. Three datasets, which represent the extreme cases of gel dosimetry, were examined: a large, 15 cm diameter uniform phantom, a 1.35 cm diameter finger phantom, and a 15 cm gel dosimeter irradiated with 3x3, 2x2, 1x1 and 0.6x0.6 cm fields. These were scanned on an in-house scanning laser system, and reconstructed with both filtered backprojection and OSC-TV with a range of regularization constants. The contrast to artifact + noise ratio (CANR) and penumbra width measurements (80% to 20% and 95% to 5% distances) were used to compare reconstructions. Our results showed that OSC-TV can achieve 3-5x improvement in contrast to artifact + noise ratio compared to filtered backprojection, while preserving the shape of steep dose gradients. For very small objects (≤ 0.6 x 0.6 cm fields in a 16x16 cm field of view), the mean value in the center of the object can be suppressed if the regularization constant is improperly set, which must be avoided. Overall, the results indicate that OSC-TV is a suitable reconstruction algorithm for gel dosimetry, provided care is taken in setting the regularization parameter when reconstructing objects that are small compared to the scanner field of view. This article

  9. Fast, Distributed Algorithms in Deep Networks

    Science.gov (United States)

    2016-05-11

    The dataset consists of images of house numbers taken from the Google Streetview car; each data point consists of a cropped image of a single digit.

  10. Quantum-based algorithm for optimizing artificial neural networks.

    Science.gov (United States)

    Tzyy-Chyang Lu; Gwo-Ruey Yu; Jyh-Ching Juang

    2013-08-01

    This paper presents a quantum-based algorithm for evolving artificial neural networks (ANNs). The aim is to design an ANN with few connections and high classification performance by simultaneously optimizing the network structure and the connection weights. Unlike most previous studies, the proposed algorithm uses quantum bit representation to codify the network. As a result, the connectivity bits do not indicate the actual links but the probability of the existence of the connections, thus alleviating mapping problems and reducing the risk of throwing away a potential candidate. In addition, in the proposed model, each weight space is decomposed into subspaces in terms of quantum bits. Thus, the algorithm performs a region by region exploration, and evolves gradually to find promising subspaces for further exploitation. This is helpful to provide a set of appropriate weights when evolving the network structure and to alleviate the noisy fitness evaluation problem. The proposed model is tested on four benchmark problems, namely breast cancer and iris, heart, and diabetes problems. The experimental results show that the proposed algorithm can produce compact ANN structures with good generalization ability compared to other algorithms.

  11. Evaluation of Topology-Aware Broadcast Algorithms for Dragonfly Networks

    Energy Technology Data Exchange (ETDEWEB)

    Dorier, Matthieu; Mubarak, Misbah; Ross, Rob; Li, Jianping Kelvin; Carothers, Christopher D.; Ma, Kwan-Liu

    2016-09-12

    Two-tiered direct network topologies such as Dragonflies have been proposed for future post-petascale and exascale machines, since they provide a high-radix, low-diameter, fast interconnection network. Such topologies call for redesigning MPI collective communication algorithms in order to attain the best performance. Yet as increasingly more applications share a machine, it is not clear how these topology-aware algorithms will react to interference with concurrent jobs accessing the same network. In this paper, we study three topology-aware broadcast algorithms, including one designed by ourselves. We evaluate their performance through event-driven simulation for small- and large-sized broadcasts (in terms of both data size and number of processes). We study the effect of different routing mechanisms on the topology-aware collective algorithms, as well as their sensitivity to network contention with other jobs. Our results show that while topology-aware algorithms dramatically reduce link utilization, their advantage in terms of latency is more limited.

  12. A source location algorithm of lightning detection networks in China

    Directory of Open Access Journals (Sweden)

    Z. X. Hu

    2010-10-01

    Full Text Available Fast and accurate retrieval of lightning sources is crucial for early warning and the quick repair of lightning damage. An algorithm for computing the location and onset time of cloud-to-ground lightning using time-of-arrival (TOA) and azimuth-of-arrival (AOA) data is introduced in this paper. The algorithm can iteratively calculate the least-squares solution of a lightning source on an oblate spheroidal Earth. It contains a set of unique formulas to compute the geodesic distance and azimuth and an explicit method to compute the initial position using the TOA data of only three sensors. Since the method accounts for the effects of the oblateness of the Earth, it provides a more accurate solution than algorithms based on planar or spherical surface models. Numerical simulations are presented to test this algorithm and evaluate the performance of a lightning detection network in the Hubei province of China. Since the 1990s, the proposed algorithm has been used in many regional lightning detection networks installed by the electric power system in China. It is expected that the proposed algorithm will be used in more lightning detection networks and other location systems.

  13. Gene Regulatory Network Reconstruction Using Conditional Mutual Information

    Directory of Open Access Journals (Sweden)

    Xiaodong Wang

    2008-06-01

    Full Text Available The inference of gene regulatory networks from expression data is an important area of research that provides insight into the inner workings of a biological system. Relevance-network-based approaches provide a simple and easily scalable solution for understanding interactions between genes. Up until now, most work based on relevance networks has focused on the discovery of direct regulation using the correlation coefficient or mutual information. However, some of the more complicated interactions, such as interactive regulation and coregulation, are not easily detected. In this work, we propose a relevance network model for gene regulatory network inference which employs both mutual information and conditional mutual information to determine the interactions between genes. For this purpose, we propose a conditional mutual information estimator based on adaptive partitioning which allows us to condition on both discrete and continuous random variables. We provide experimental results that demonstrate that the proposed regulatory network inference algorithm can provide better performance when the target network contains coregulated and interactively regulated genes.
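
    A minimal discrete estimator of the conditional mutual information I(X;Y|Z) on which such tests rest, assuming the expression values have already been binned; the paper's adaptive-partitioning estimator, which also handles continuous conditioning variables, is more sophisticated.

        import numpy as np
        from collections import Counter

        def cond_mutual_info(x, y, z):
            # I(X;Y|Z) = sum_{x,y,z} p(x,y,z) * log[ p(z) p(x,y,z) / (p(x,z) p(y,z)) ]
            n = len(x)
            pxyz = Counter(zip(x, y, z))
            pxz, pyz, pz = Counter(zip(x, z)), Counter(zip(y, z)), Counter(z)
            cmi = 0.0
            for (a, b, c), k in pxyz.items():
                p_abc = k / n
                cmi += p_abc * np.log((pz[c] / n) * p_abc /
                                      ((pxz[(a, c)] / n) * (pyz[(b, c)] / n)))
            return cmi  # in nats; zero iff X and Y are independent given Z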

  15. Real-Coded Quantum-Inspired Genetic Algorithm-Based BP Neural Network Algorithm

    Directory of Open Access Journals (Sweden)

    Jianyong Liu

    2015-01-01

    Full Text Available A method that uses the real-coded quantum-inspired genetic algorithm (RQGA) to optimize the weights and thresholds of a BP neural network is proposed, to overcome the defect that gradient descent makes the algorithm easily fall into local optima during learning. The quantum genetic algorithm (QGA) has good directed global optimization ability, but the conventional QGA is based on binary coding, and the speed of calculation is reduced by the coding and decoding processes. Therefore, RQGA is introduced to explore the search space, and an improved variable learning rate is adopted to train the BP neural network. Simulation tests show that the proposed algorithm converges rapidly to solutions that satisfy the constraint conditions.

  16. An Optimal Routing Algorithm in Service Customized 5G Networks

    Directory of Open Access Journals (Sweden)

    Haipeng Yao

    2016-01-01

    Full Text Available With the widespread use of the Internet, the scale of mobile data traffic is growing explosively, which makes 5G cellular networks a growing concern. Recently, ideas related to future networks, for example Software Defined Networking (SDN), Content-Centric Networking (CCN), and Big Data, have drawn more and more attention. In this paper, we propose a service-customized 5G network architecture by introducing the ideas of separation between control plane and data plane, in-network caching, and Big Data processing and analysis to resolve the problems traditional cellular radio networks face. Moreover, we design an optimal routing algorithm for this architecture, which can minimize the average response hops in the network. Simulation results reveal that, by introducing the cache, the network performance can be obviously improved in different network conditions compared to the scenario without a cache. In addition, we explore the change of cache hit rate and average response hops under different cache replacement policies, cache sizes, content popularity, and network topologies, respectively.

  17. High time resolution reconstruction of electron temperature profiles with a neural network in C-2U

    Science.gov (United States)

    Player, Gabriel; Magee, Richard; Trask, Erik; Korepanov, Sergey; Clary, Ryan; Tri Alpha Energy Team

    2017-10-01

    One of the most important parameters governing fast ion dynamics in a plasma is the electron temperature, as the fast ion-electron collision rate scales as ν_ei ∝ T_e^(-3/2). Unfortunately, the electron temperature is difficult to measure directly: methods relying on high-powered laser pulses or fragile probes lead to limited time resolution or measurements restricted to the edge. In order to rectify the lack of time resolution of the Thomson scattering data in the core, a learning algorithm, specifically a neural network, was implemented. This network uses 3 hidden layers to correlate information from nearly 250 signals, including magnetics, interferometers, and several arrays of bolometers, with Thomson scattering data over the entire C-2U database, totalling nearly 20,000 samples. The network uses the Levenberg-Marquardt algorithm with Bayesian regularization to learn from the large number of samples and inputs how to accurately reconstruct the entire electron temperature time history at a resolution of 500 kHz, a huge improvement over the 2 time points per shot provided by Thomson scattering. These results can be used in many types of analysis and plasma characterization; in this work, we use the network to quantify electron heating.

  18. A fast method to emulate an iterative POCS image reconstruction algorithm.

    Science.gov (United States)

    Zeng, Gengsheng L

    2017-10-01

    Iterative image reconstruction algorithms are commonly used to optimize an objective function, especially when the objective function is nonquadratic. Generally speaking, iterative algorithms are computationally inefficient. This paper presents a fast algorithm that has one backprojection and no forward projection. The paper derives a new method to solve an optimization problem in which a nonquadratic constraint, for example an edge-preserving denoising constraint, is implemented as a nonlinear filter. The algorithm is derived based on the POCS (projection onto convex sets) approach. A windowed FBP (filtered backprojection) algorithm enforces data fidelity. An iterative procedure, divided into segments, enforces edge-enhancing denoising; each segment performs nonlinear filtering. The derived iterative algorithm is computationally efficient: it contains only one backprojection and no forward projection. Low-dose CT data are used for algorithm feasibility studies, with the nonlinearity implemented as an edge-enhancing noise-smoothing filter. The patient study results demonstrate its effectiveness in processing low-dose x-ray CT data. This fast algorithm can be used to replace many iterative algorithms. © 2017 American Association of Physicists in Medicine.

  19. Training Feedforward Neural Networks Using Symbiotic Organisms Search Algorithm.

    Science.gov (United States)

    Wu, Haizhou; Zhou, Yongquan; Luo, Qifang; Basset, Mohamed Abdel

    2016-01-01

    Symbiotic organisms search (SOS) is a new robust and powerful metaheuristic algorithm which simulates the symbiotic interaction strategies adopted by organisms to survive and propagate in an ecosystem. In the supervised learning area, it is a challenging task to present a satisfactory and efficient training algorithm for feedforward neural networks (FNNs). In this paper, SOS is employed as a new method for training FNNs. To investigate the performance of this method, eight different datasets selected from the UCI machine learning repository are employed for experiments, and the results are compared across seven metaheuristic algorithms. The results show that SOS performs better than the other algorithms for training FNNs in terms of convergence speed. It is also shown that an FNN trained by SOS has better accuracy than most of the compared algorithms.

  20. A Vehicle Detection Algorithm Based on Deep Belief Network

    Directory of Open Access Journals (Sweden)

    Hai Wang

    2014-01-01

    Full Text Available Vision-based vehicle detection is a critical technology that plays an important role not only in vehicle active safety but also in road video surveillance applications. Traditional shallow-model-based vehicle detection algorithms still cannot meet the requirement of accurate vehicle detection in these applications. In this work, a novel deep-learning-based vehicle detection algorithm with a 2D deep belief network (2D-DBN) is proposed. In the algorithm, the proposed 2D-DBN architecture uses second-order planes instead of first-order vectors as input and uses bilinear projection to retain discriminative information, so as to determine the size of the deep architecture, which enhances the success rate of vehicle detection. On-road experimental results demonstrate that the algorithm performs better than state-of-the-art vehicle detection algorithms on the test data sets.

  1. Hybrid Algorithms for Fuzzy Reverse Supply Chain Network Design

    Directory of Open Access Journals (Sweden)

    Z. H. Che

    2014-01-01

    Full Text Available In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper attempted to establish an optimized decision model for production planning and distribution of a multiphase, multiproduct reverse supply chain, which addresses defects returned to original manufacturers, and in addition, develops hybrid algorithms such as Particle Swarm Optimization-Genetic Algorithm (PSO-GA, Genetic Algorithm-Simulated Annealing (GA-SA, and Particle Swarm Optimization-Simulated Annealing (PSO-SA for solving the optimized model. During a case study of a multi-phase, multi-product reverse supply chain network, this paper explained the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with original GA and PSO methods.

  2. Hybrid Algorithms for Fuzzy Reverse Supply Chain Network Design

    Science.gov (United States)

    Che, Z. H.; Chiang, Tzu-An; Kuo, Y. C.

    2014-01-01

    In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper establishes an optimized decision model for production planning and distribution of a multiphase, multiproduct reverse supply chain that addresses defects returned to original manufacturers, and develops hybrid algorithms such as Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA) for solving the optimized model. A case study of a multiphase, multiproduct reverse supply chain network illustrates the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with the original GA and PSO methods. PMID:24892057

  3. Hybrid algorithms for fuzzy reverse supply chain network design.

    Science.gov (United States)

    Che, Z H; Chiang, Tzu-An; Kuo, Y C; Cui, Zhihua

    2014-01-01

    In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper establishes an optimized decision model for production planning and distribution of a multiphase, multiproduct reverse supply chain that addresses defects returned to original manufacturers, and develops hybrid algorithms such as Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA) for solving the optimized model. A case study of a multiphase, multiproduct reverse supply chain network illustrates the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with the original GA and PSO methods.

  4. Comparison and evaluation of network clustering algorithms applied to genetic interaction networks.

    Science.gov (United States)

    Hou, Lin; Wang, Lin; Berg, Arthur; Qian, Minping; Zhu, Yunping; Li, Fangting; Deng, Minghua

    2012-01-01

    The goal of network clustering algorithms is to detect dense clusters in a network, providing a first step towards the understanding of large-scale biological networks. With numerous recent advances in biotechnologies, large-scale genetic interaction data are widely available, but there is limited understanding of which clustering algorithms may be most effective. To address this problem, we conducted a systematic study to compare and evaluate six clustering algorithms for analyzing genetic interaction networks, and investigated the factors that influence the choice of algorithm. The algorithms considered in this comparison include hierarchical clustering, topological overlap matrix, bi-clustering, Markov clustering, Bayesian discriminant analysis based community detection, and the variational Bayes approach to modularity. Both experimentally identified and synthetically constructed networks were used in this comparison. The accuracy of the algorithms is measured by the Jaccard index when comparing predicted gene modules with benchmark gene sets. The results suggest that the best choice differs according to the network topology and evaluation criteria: hierarchical clustering proved best at predicting protein complexes; Bayesian discriminant analysis based community detection proved best on epistatic miniarray profile (EMAP) datasets; and the variational Bayes approach to modularity was noticeably better than the other algorithms on genome-scale networks.

  5. AdaBoost-based algorithm for network intrusion detection.

    Science.gov (United States)

    Hu, Weiming; Hu, Wei; Maybank, Steve

    2008-04-01

    Network intrusion detection aims at distinguishing attacks on the Internet from normal use of the Internet. It is an indispensable part of the information security system. Due to the variety of network behaviors and the rapid development of attack patterns, it is necessary to develop fast machine-learning-based intrusion detection algorithms with high detection rates and low false-alarm rates. In this correspondence, we propose an intrusion detection algorithm based on the AdaBoost algorithm. In the algorithm, decision stumps are used as weak classifiers, with decision rules provided for both categorical and continuous features. By combining the weak classifiers for continuous features and the weak classifiers for categorical features into a strong classifier, the relations between these two different types of features are handled naturally, without any forced conversions between continuous and categorical features. Adaptable initial weights and a simple strategy for avoiding overfitting are adopted to improve the performance of the algorithm. Experimental results show that our algorithm has low computational complexity and error rates, as compared with algorithms of higher computational complexity, as tested on the benchmark sample data.
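
    The core construction described here, boosting threshold decision stumps into a strong classifier, can be sketched as follows for continuous features; the paper's adaptable initial weights, categorical-feature rules, and overfitting safeguard are omitted.

    ```python
    import numpy as np

    def train_stump(X, y, w):
        """Best threshold stump under sample weights w; labels y in {-1, +1}."""
        best = (0, 0.0, 1, np.inf)  # (feature, threshold, polarity, weighted error)
        for f in range(X.shape[1]):
            for thr in np.unique(X[:, f]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, f] - thr) > 0, 1, -1)
                    err = w[pred != y].sum()
                    if err < best[3]:
                        best = (f, thr, pol, err)
        return best

    def adaboost(X, y, rounds=20):
        n = len(y)
        w = np.full(n, 1.0 / n)          # uniform initial sample weights
        ensemble = []
        for _ in range(rounds):
            f, thr, pol, err = train_stump(X, y, w)
            err = max(err, 1e-10)
            alpha = 0.5 * np.log((1 - err) / err)
            pred = np.where(pol * (X[:, f] - thr) > 0, 1, -1)
            w *= np.exp(-alpha * y * pred)   # up-weight misclassified samples
            w /= w.sum()
            ensemble.append((alpha, f, thr, pol))
        return ensemble

    def predict(ensemble, X):
        score = sum(a * np.where(p * (X[:, f] - t) > 0, 1, -1)
                    for a, f, t, p in ensemble)
        return np.sign(score)
    ```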

  6. Reconstruction of metabolic networks from high-throughput metabolite profiling data: in silico analysis of red blood cell metabolism.

    Science.gov (United States)

    Nemenman, Ilya; Escola, G Sean; Hlavacek, William S; Unkefer, Pat J; Unkefer, Clifford J; Wall, Michael E

    2007-12-01

    We investigate the ability of algorithms developed for reverse engineering of transcriptional regulatory networks to reconstruct metabolic networks from high-throughput metabolite profiling data. For benchmarking purposes, we generate synthetic metabolic profiles based on a well-established model for red blood cell metabolism. A variety of data sets are generated, accounting for different properties of real metabolic networks, such as experimental noise, metabolite correlations, and temporal dynamics. These data sets are made available online. We use ARACNE, a mainstream algorithm for reverse engineering of transcriptional regulatory networks from gene expression data, to predict metabolic interactions from these data sets. We find that the performance of ARACNE on metabolic data is comparable to that on gene expression data.
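
    For illustration only, an ARACNE-style pass over metabolite profiles might look like the sketch below: histogram-based mutual information on all pairs, followed by data-processing-inequality pruning of triangles. The bin count, MI threshold, and the simplification of pruning against the already-thresholded matrix are all assumptions of this toy version.

    ```python
    import numpy as np
    from itertools import combinations

    def mutual_info(x, y, bins=10):
        """Histogram estimate of mutual information between two profiles."""
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy /= pxy.sum()
        px, py = pxy.sum(1), pxy.sum(0)
        nz = pxy > 0
        return (pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum()

    def aracne_like(profiles, threshold=0.05):
        """profiles: (n_metabolites, n_samples). Returns an MI adjacency
        pruned with an ARACNE-style data processing inequality (DPI)."""
        n = profiles.shape[0]
        mi = np.zeros((n, n))
        for i, j in combinations(range(n), 2):
            mi[i, j] = mi[j, i] = mutual_info(profiles[i], profiles[j])
        adj = np.where(mi > threshold, mi, 0.0)
        # DPI: in every fully connected triangle, drop the weakest edge
        for i, j, k in combinations(range(n), 3):
            edges = [(adj[i, j], i, j), (adj[j, k], j, k), (adj[i, k], i, k)]
            if all(e[0] > 0 for e in edges):
                _, a, b = min(edges)
                adj[a, b] = adj[b, a] = 0.0
        return adj
    ```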

  7. Spectral algorithms for heterogeneous biological networks.

    Science.gov (United States)

    McDonald, Martin; Higham, Desmond J; Vass, J Keith

    2012-11-01

    Spectral methods, which use information relating to eigenvectors, singular vectors and generalized singular vectors, help us to visualize and summarize sets of pairwise interactions. In this work, we motivate and discuss the use of spectral methods by taking a matrix computation view and applying concepts from applied linear algebra. We show that this unified approach is sufficiently flexible to allow multiple sources of network information to be combined. We illustrate the methods on microarray data arising from a large population-based study in human adipose tissue, combined with related information concerning metabolic pathways.
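
    As a minimal example of the matrix-computation view, the sketch below orders the rows and columns of an interaction matrix by its leading singular vectors, one of the simplest spectral summaries of pairwise interaction data.

    ```python
    import numpy as np

    def spectral_reorder(A):
        """Order rows and columns of an interaction matrix by the leading
        singular vectors, a basic spectral summary of pairwise interactions."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        row_order = np.argsort(U[:, 0])
        col_order = np.argsort(Vt[0])
        return A[np.ix_(row_order, col_order)], row_order, col_order
    ```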

  8. X-ray dose reduction in abdominal computed tomography using advanced iterative reconstruction algorithms.

    Directory of Open Access Journals (Sweden)

    Peigang Ning

    Full Text Available OBJECTIVE: This work aims to explore the effects of the adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR) algorithms in reducing computed tomography (CT) radiation dosages in abdominal imaging. METHODS: CT scans on a standard male phantom were performed at different tube currents. Images at the different tube currents were reconstructed with the filtered back-projection (FBP), 50% ASiR, and MBIR algorithms and compared. The CT value, image noise, and contrast-to-noise ratios (CNRs) of the reconstructed abdominal images were measured. Volumetric CT dose indexes (CTDIvol) were recorded. RESULTS: At different tube currents, 50% ASiR and MBIR significantly reduced image noise and increased the CNR when compared with FBP. The minimal tube current values required by FBP, 50% ASiR, and MBIR to achieve acceptable image quality using this phantom were 200, 140, and 80 mA, respectively. At identical image quality, 50% ASiR and MBIR reduced the radiation dose by 35.9% and 59.9%, respectively, when compared with FBP. CONCLUSIONS: Advanced iterative reconstruction techniques are able to reduce image noise and increase image CNRs. Compared with FBP, 50% ASiR and MBIR reduced radiation doses by 35.9% and 59.9%, respectively.

  9. X-Ray Dose Reduction in Abdominal Computed Tomography Using Advanced Iterative Reconstruction Algorithms

    Science.gov (United States)

    Ning, Peigang; Zhu, Shaocheng; Shi, Dapeng; Guo, Ying; Sun, Minghua

    2014-01-01

    Objective This work aims to explore the effects of adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR) algorithms in reducing computed tomography (CT) radiation dosages in abdominal imaging. Methods CT scans on a standard male phantom were performed at different tube currents. Images at the different tube currents were reconstructed with the filtered back-projection (FBP), 50% ASiR and MBIR algorithms and compared. The CT value, image noise and contrast-to-noise ratios (CNRs) of the reconstructed abdominal images were measured. Volumetric CT dose indexes (CTDIvol) were recorded. Results At different tube currents, 50% ASiR and MBIR significantly reduced image noise and increased the CNR when compared with FBP. The minimal tube current values required by FBP, 50% ASiR, and MBIR to achieve acceptable image quality using this phantom were 200, 140, and 80 mA, respectively. At the identical image quality, 50% ASiR and MBIR reduced the radiation dose by 35.9% and 59.9% respectively when compared with FBP. Conclusions Advanced iterative reconstruction techniques are able to reduce image noise and increase image CNRs. Compared with FBP, 50% ASiR and MBIR reduced radiation doses by 35.9% and 59.9%, respectively. PMID:24664174
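
    For reference, the CNR figures reported in studies like this are computed from region-of-interest statistics; a minimal sketch of one common definition (conventions vary, so treat this as an assumption rather than the authors' exact formula):

    ```python
    import numpy as np

    def contrast_to_noise_ratio(image, roi_mask, background_mask):
        """One common CNR definition: (mean ROI - mean background)
        divided by the background standard deviation."""
        signal = image[roi_mask].mean() - image[background_mask].mean()
        return signal / image[background_mask].std()
    ```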

  10. AN INTELLIGENT VERTICAL HANDOVER DECISION ALGORITHM FOR WIRELESS HETEROGENEOUS NETWORKS

    OpenAIRE

    V. Anantha Narayanan; Rajeswari, A; Sureshkumar, V.

    2014-01-01

    The Next Generation Wireless Networks (NGWN) should be compatible with other communication technologies to offer the best connectivity to mobile terminals that can access any IP-based service at any time from any network without the knowledge of the user. This requires an intelligent vertical handover decision-making algorithm to migrate between technologies, enabling seamless mobility, the always-best connection, and minimal terminal power consumption. Currently existing decision engines are...

  11. Genetic Algorithms in Wireless Networking: Techniques, Applications, and Issues

    OpenAIRE

    Mehboob, Usama; Qadir, Junaid; Ali, Salman; Vasilakos, Athanasios

    2014-01-01

    In recent times, wireless access technology is becoming increasingly commonplace due to the ease of operation and installation of untethered wireless media. The design of wireless networking is challenging due to the highly dynamic environmental condition that makes parameter optimization a complex task. Due to the dynamic, and often unknown, operating conditions, modern wireless networking standards increasingly rely on machine learning and artificial intelligence algorithms. Genetic algorit...

  12. Sensor and ad-hoc networks theoretical and algorithmic aspects

    CERN Document Server

    Makki, S Kami; Pissinou, Niki; Makki, Shamila; Karimi, Masoumeh

    2008-01-01

    This book brings together leading researchers and developers in the field of wireless sensor networks to explain the special problems and challenges of the algorithmic aspects of sensor and ad-hoc networks. The book also fosters communication not only between the different sensor and ad-hoc communities, but also between those communities and the distributed systems and information systems communities. The topics addressed pertain to the sensors and mobile environment.

  13. Efficient Algorithms for Bayesian Network Parameter Learning from Incomplete Data

    Science.gov (United States)

    2015-07-01

    ... algorithms like EM (which require inference). When learning the parameters of a Bayesian network from data with missing values, the ... missing at random assumption (MAR), but also for a broad class of data that is not MAR. Their analysis is based on a graphical representation for

  14. Access Network Selection Based on Fuzzy Logic and Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Mohammed Alkhawlani

    2008-01-01

    Full Text Available In the next generation of heterogeneous wireless networks (HWNs), a large number of different radio access technologies (RATs) will be integrated into a common network. In this type of network, selecting the most optimal and promising access network (AN) is an important consideration for overall network stability, resource utilization, user satisfaction, and quality of service (QoS) provisioning. This paper proposes a general scheme to solve the access network selection (ANS) problem in the HWN. The proposed scheme has been used to present and design a general multicriteria software assistant (SA) that can consider the user, operator, and/or QoS viewpoints. Combined fuzzy logic (FL) and genetic algorithms (GAs) give the proposed scheme the required scalability, flexibility, and simplicity. The simulation results show that the proposed scheme and SA have better and more robust performance than random-based selection.

  15. Applied Graph-Mining Algorithms to Study Biomolecular Interaction Networks

    Science.gov (United States)

    2014-01-01

    Protein-protein interaction (PPI) networks carry vital information on the organization of molecular interactions in cellular systems. The identification of functionally relevant modules in PPI networks is one of the most important applications of biological network analysis. Computational analysis is becoming an indispensable tool to understand large-scale biomolecular interaction networks. Several types of computational methods have been developed and employed for the analysis of PPI networks. Of these computational methods, graph comparison and module detection are the two most commonly used strategies. This review summarizes current literature on graph kernel and graph alignment methods for graph comparison strategies, as well as module detection approaches including seed-and-extend, hierarchical clustering, optimization-based, probabilistic, and frequent subgraph methods. Herein, we provide a comprehensive review of the major algorithms employed under each theme, including our recently published frequent subgraph method, for detecting functional modules commonly shared across multiple cancer PPI networks. PMID:24800226

  16. District Heating Network Design and Configuration Optimization with Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Hongwei Li

    2013-12-01

    Full Text Available In this paper, the configuration of a district heating network connecting the heating plant to the end users is optimized. Each end user in the network represents a building block. The connections between the heat generation plant and the end users are represented with mixed-integer variables, and the pipe friction and heat loss formulations are non-linear. To find the optimal district heating network configuration, a genetic algorithm that handles the mixed-integer nonlinear programming problem is chosen. The network configuration is represented with binary and integer encoding and is optimized in terms of the net present cost. The optimization results indicate that the optimal DH network configuration is determined by multiple factors, such as the consumer heating load, the distance from the heating plant to the consumer, the design criteria regarding the pressure and temperature limitations, as well as the corresponding network heat loss.

  17. Fast grid layout algorithm for biological networks with sweep calculation.

    Science.gov (United States)

    Kojima, Kaname; Nagasaki, Masao; Miyano, Satoru

    2008-06-15

    Properly drawn biological networks are of great help in the comprehension of their characteristics. The quality of the layouts of retrieved biological networks is critical for pathway databases. However, since it is unrealistic to manually draw biological networks for every retrieval, automatic drawing algorithms are essential. Grid layout algorithms handle various biological properties, such as aligning vertices that have the same attributes and complicated positional constraints according to their subcellular localizations; thus, they succeed in providing biologically comprehensible layouts. However, existing grid layout algorithms are not suitable for real-time drawing, which is one of the requisites for applications to pathway databases, due to their high computational cost. In addition, they do not consider edge directions, so their resulting layouts lack traceability for biochemical reactions and gene regulations, which are the most important features in biological networks. We devise a new calculation method termed sweep calculation and reduce the time complexity of current grid layout algorithms through its encoding and decoding processes. We conduct practical experiments using 95 pathway models of various sizes from TRANSPATH and show that our new grid layout algorithm is much faster than existing grid layout algorithms. For the cost function, we introduce a new component that penalizes undesirable edge directions to avoid the lack of traceability in pathways due to the differences in direction between in-edges and out-edges of each vertex. Java implementations of our layout algorithms are available in Cell Illustrator. masao@ims.u-tokyo.ac.jp Supplementary data are available at Bioinformatics online.

  18. A Differentiated Anonymity Algorithm for Social Network Privacy Preservation

    Directory of Open Access Journals (Sweden)

    Yuqin Xie

    2016-12-01

    Full Text Available Devising methods to publish social network data in a form that affords utility without compromising privacy remains a longstanding challenge. Many existing methods based on k-anonymity algorithms for social networks may result in nontrivial utility loss because they analyze neither the network's topological structure nor its sparsely distributed attributes. Toward this objective, we explore the impact of sparsely distributed attributes on data utility. First, we propose a new utility metric that emphasizes network structure distortion and attribute value loss. Furthermore, we design and implement a differentiated k-anonymity l-diversity social network anonymization algorithm, which seeks to protect users' privacy in social networks and increase the usability of the published anonymized data. Its key idea is to divide a node into two child nodes and anonymize only the sensitive values needed to satisfy the anonymity requirements. The evaluation results show that our method can effectively improve data utility compared to generalized anonymization algorithms.

  19. Improved iterative image reconstruction algorithm for the exterior problem of computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Yumeng [Chongqing University, College of Mathematics and Statistics, Chongqing 401331 (China); Chongqing University, ICT Research Center, Key Laboratory of Optoelectronic Technology and System of the Education Ministry of China, Chongqing 400044 (China); Zeng, Li, E-mail: drlizeng@cqu.edu.cn [Chongqing University, College of Mathematics and Statistics, Chongqing 401331 (China); Chongqing University, ICT Research Center, Key Laboratory of Optoelectronic Technology and System of the Education Ministry of China, Chongqing 400044 (China)

    2017-01-11

    In industrial applications that are limited by the angle of a fan-beam and the length of a detector, the exterior problem of computed tomography (CT) uses only the projection data that correspond to the external annulus of the objects to reconstruct an image. Because the reconstructions are not affected by the projection data that correspond to the interior of the objects, the exterior problem is widely applied to detect cracks in the outer wall of large-sized objects, such as in-service pipelines. However, image reconstruction in the exterior problem is still a challenging problem due to truncated projection data and beam-hardening, both of which can lead to distortions and artifacts. Thus, developing an effective algorithm and adopting a scanning trajectory suited for the exterior problem may be valuable. In this study, an improved iterative algorithm that combines total variation minimization (TVM) with a region scalable fitting (RSF) model was developed for a unilateral off-centered scanning trajectory and can be utilized to inspect large-sized objects for defects. Experiments involving simulated phantoms and real projection data were conducted to validate the practicality of our algorithm. Furthermore, comparative experiments show that our algorithm outperforms others in suppressing the artifacts caused by truncated projection data and beam-hardening.

  20. A new mutually reinforcing network node and link ranking algorithm.

    Science.gov (United States)

    Wang, Zhenghua; Dueñas-Osorio, Leonardo; Padgett, Jamie E

    2015-10-23

    This study proposes a novel Normalized Wide network Ranking algorithm (NWRank) that has the advantage of ranking nodes and links of a network simultaneously. This algorithm combines the mutual reinforcement feature of Hypertext Induced Topic Selection (HITS) and the weight normalization feature of PageRank. Relative weights are assigned to links based on the degree of the adjacent neighbors and the Betweenness Centrality instead of assigning the same weight to every link as assumed in PageRank. Numerical experiment results show that NWRank performs consistently better than HITS, PageRank, eigenvector centrality, and edge betweenness from the perspective of network connectivity and approximate network flow, which is also supported by comparisons with the expensive N-1 benchmark removal criteria based on network efficiency. Furthermore, it can avoid some problems, such as the Tightly Knit Community effect, which exists in HITS. NWRank provides a new inexpensive way to rank nodes and links of a network, which has practical applications, particularly to prioritize resource allocation for upgrade of hierarchical and distributed networks, as well as to support decision making in the design of networks, where node and link importance depend on a balance of local and global integrity.

  1. A new mutually reinforcing network node and link ranking algorithm

    Science.gov (United States)

    Wang, Zhenghua; Dueñas-Osorio, Leonardo; Padgett, Jamie E.

    2015-10-01

    This study proposes a novel Normalized Wide network Ranking algorithm (NWRank) that has the advantage of ranking nodes and links of a network simultaneously. This algorithm combines the mutual reinforcement feature of Hypertext Induced Topic Selection (HITS) and the weight normalization feature of PageRank. Relative weights are assigned to links based on the degree of the adjacent neighbors and the Betweenness Centrality instead of assigning the same weight to every link as assumed in PageRank. Numerical experiment results show that NWRank performs consistently better than HITS, PageRank, eigenvector centrality, and edge betweenness from the perspective of network connectivity and approximate network flow, which is also supported by comparisons with the expensive N-1 benchmark removal criteria based on network efficiency. Furthermore, it can avoid some problems, such as the Tightly Knit Community effect, which exists in HITS. NWRank provides a new inexpensive way to rank nodes and links of a network, which has practical applications, particularly to prioritize resource allocation for upgrade of hierarchical and distributed networks, as well as to support decision making in the design of networks, where node and link importance depend on a balance of local and global integrity.
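
    A toy sketch of the mutual-reinforcement idea follows: node scores and link scores are updated from each other under PageRank-style normalization. For brevity the link weights here use only endpoint degrees; the published NWRank also incorporates Betweenness Centrality.

    ```python
    import numpy as np

    def nwrank_like(adj, n_iter=100):
        """Jointly rank nodes and links: a node's score is the sum of its
        incident link scores, and a link's score is the normalized sum of
        its endpoint scores (HITS-style mutual reinforcement)."""
        edges = np.argwhere(np.triu(adj) > 0)
        deg = adj.sum(1)
        # weight each link by its endpoints' degrees, then normalize
        w = np.array([deg[u] + deg[v] for u, v in edges], dtype=float)
        w /= w.sum()
        node = np.full(adj.shape[0], 1.0 / adj.shape[0])
        for _ in range(n_iter):
            link = np.array([node[u] + node[v] for u, v in edges]) * w
            link /= link.sum()
            node = np.zeros_like(node)
            for (u, v), s in zip(edges, link):   # reinforce endpoint nodes
                node[u] += s
                node[v] += s
            node /= node.sum()
        return node, dict(zip(map(tuple, edges), link))
    ```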

  2. Mitigate Cascading Failures on Networks using a Memetic Algorithm.

    Science.gov (United States)

    Tang, Xianglong; Liu, Jing; Hao, Xingxing

    2016-12-09

    Research concerning cascading failures in complex networks has become a hot topic. However, most existing studies have focused on modelling the cascading phenomenon on networks and analysing network robustness from a theoretical point of view that considers only the damage incurred by the failure of one or several nodes; such a theoretical approach may not be useful in practical situations. Thus, we first design a much more practical measure to evaluate the robustness of networks against cascading failures, termed Rcf. Then, adopting Rcf as the objective function, we propose a new memetic algorithm (MA) named MA-Rcf to enhance network robustness against cascading failures. Moreover, we design a new local search operator that considers the characteristics of cascading failures and operates by connecting nodes with a high probability of having similar loads. In experiments, both synthetic scale-free networks and real-world networks are used to test the efficiency and effectiveness of MA-Rcf. We systematically investigate the effects of parameters on the performance of MA-Rcf and validate the performance of the newly designed local search operator. The results show that the local search operator is effective, that MA-Rcf can efficiently enhance network robustness against cascading failures, and that it outperforms existing algorithms.

  3. Computed Tomography Image Quality Evaluation of a New Iterative Reconstruction Algorithm in the Abdomen (Adaptive Statistical Iterative Reconstruction-V) a Comparison With Model-Based Iterative Reconstruction, Adaptive Statistical Iterative Reconstruction, and Filtered Back Projection Reconstructions.

    Science.gov (United States)

    Goodenberger, Martin H; Wagner-Bartak, Nicolaus A; Gupta, Shiva; Liu, Xinming; Yap, Ramon Q; Sun, Jia; Tamm, Eric P; Jensen, Corey T

    2017-08-12

    The purpose of this study was to compare abdominopelvic computed tomography images reconstructed with adaptive statistical iterative reconstruction-V (ASIR-V), model-based iterative reconstruction (Veo 3.0), ASIR, and filtered back projection (FBP). Abdominopelvic computed tomography scans for 36 patients (26 males and 10 females) were reconstructed using FBP, ASIR (80%), Veo 3.0, and ASIR-V (30%, 60%, 90%). Mean ± SD patient age was 32 ± 10 years, with mean ± SD body mass index of 26.9 ± 4.4 kg/m². Images were reviewed by 2 independent readers in a blinded, randomized fashion. Hounsfield unit, noise, and contrast-to-noise ratio (CNR) values were calculated for each reconstruction algorithm for further comparison. Phantom evaluation of low-contrast detectability (LCD) and high-contrast resolution was performed. Adaptive statistical iterative reconstruction-V 30%, ASIR-V 60%, and ASIR 80% were generally superior qualitatively compared with ASIR-V 90%, Veo 3.0, and FBP (differences statistically significant). ASIR-V 90% showed superior LCD and had the highest CNR in the liver, aorta, and pancreas, measuring 7.32 ± 3.22, 11.60 ± 4.25, and 4.60 ± 2.31, respectively, compared with the next best series, ASIR-V 60%, with respective CNR values of 5.54 ± 2.39, 8.78 ± 3.15, and 3.49 ± 1.77 (differences statistically significant). ASIR-V 30% and ASIR-V 60% provided the best combination of qualitative and quantitative performance. Adaptive statistical iterative reconstruction 80% was equivalent qualitatively, but demonstrated inferior spatial resolution and LCD.

  4. Study on beam geometry and image reconstruction algorithm in fast neutron computerized tomography at NECTAR facility

    Energy Technology Data Exchange (ETDEWEB)

    Guo, J. [State Key Laboratory of Nuclear Physics and Technology and School of Physics, Peking University, 5 Yiheyuan Lu, Beijing 100871 (China); Lehrstuhl fuer Radiochemie, Technische Universitaet Muenchen, Garching 80748 (Germany); Buecherl, T. [Lehrstuhl fuer Radiochemie, Technische Universitaet Muenchen, Garching 80748 (Germany); Zou, Y., E-mail: zouyubin@pku.edu.cn [State Key Laboratory of Nuclear Physics and Technology and School of Physics, Peking University, 5 Yiheyuan Lu, Beijing 100871 (China); Guo, Z. [State Key Laboratory of Nuclear Physics and Technology and School of Physics, Peking University, 5 Yiheyuan Lu, Beijing 100871 (China)

    2011-09-21

    Investigations of the fast neutron beam geometry for the NECTAR facility are presented. The results of MCNP simulations and experimental measurements of the beam distributions at NECTAR are compared. Boltzmann functions are used to describe the beam profile in the detection plane, assuming the area source to be made up of a large number of single neutron point sources. An iterative algebraic reconstruction algorithm is developed, realized, and verified with both simulated and measured projection data. The feasibility of improved reconstruction in fast neutron computerized tomography at the NECTAR facility is demonstrated.

  5. A new ionospheric tomographic algorithm — constrained multiplicative algebraic reconstruction technique (CMART)

    Science.gov (United States)

    Wen, Debao; Liu, Sanzhi

    2010-08-01

    To overcome the limitations of the conventional multiplicative algebraic reconstruction technique (MART), a constrained MART (CMART) is proposed in this paper. In the new tomographic algorithm, a popular two-dimensional multi-point finite difference approximation of the second-order Laplacian operator is used to smooth the electron density field. The feasibility and superiority of the new method are demonstrated by a numerical simulation experiment. Finally, CMART is used to reconstruct the regional electron density field from actual GNSS data on geomagnetically quiet and disturbed days. The available ionosonde data from the Beijing station further validate the superiority of the new method.
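
    For context, the underlying MART iteration updates each pixel multiplicatively from the ratio of measured to predicted projections, and a smoothing constraint of the kind described can be written as a relaxation toward a local Laplacian-style average. A sketch of the two steps, with the relaxation parameters lambda and mu as generic placeholders rather than the paper's exact formulation:

    ```latex
    x_j^{(k+1)} = x_j^{(k)} \left( \frac{y_i}{\sum_l a_{il}\, x_l^{(k)}} \right)^{\lambda a_{ij}},
    \qquad
    x_j \;\leftarrow\; (1-\mu)\, x_j + \mu\, \bar{x}_j ,
    ```

    where \(y_i\) is the i-th measured slant TEC value, \(a_{il}\) the length of ray i within pixel l, and \(\bar{x}_j\) a multi-point finite-difference (Laplacian) average of the electron density around pixel j.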

  6. Ripple-Spreading Network Model Optimization by Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Xiao-Bing Hu

    2013-01-01

    Full Text Available Small-world and scale-free properties are widely acknowledged in many real-world complex network systems, and many network models have been developed to capture these network properties. The ripple-spreading network model (RSNM) is a newly reported complex network model, which is inspired by the natural ripple-spreading phenomenon on a calm water surface. The RSNM exhibits good potential for describing both spatial and temporal features in the development of many real-world networks, where the influence of a few local events spreads out through nodes and then largely determines the final network topology. However, the relationships between the ripple-spreading related parameters (RSRPs) of the RSNM and small-world and scale-free topologies are not as obvious or straightforward as in many other network models. This paper attempts to apply a genetic algorithm (GA) to tune the values of the RSRPs, so that the RSNM may generate these two most important network topologies. The study demonstrates that, once the RSRPs are properly tuned by the GA, the RSNM is capable of generating both network topologies and therefore has great flexibility for studying many real-world complex network systems.

  7. Clinical evaluation of a novel CT image reconstruction algorithm for direct dose calculations

    Directory of Open Access Journals (Sweden)

    Brent van der Heyden

    2017-03-01

    Full Text Available Background and purpose: Computed tomography (CT) imaging is frequently used in radiation oncology to calculate radiation dose distributions. In order to calculate doses, the CT numbers must be converted into densities by an energy-dependent conversion curve. A recently developed algorithm directly reconstructs CT projection data into relative electron densities, which eliminates the use of separate conversion curves for different X-ray tube potentials. Our work evaluates this algorithm for various cancer sites and shows its applicability in a clinical workflow. Materials and methods: The Gammex phantom with tissue-mimicking inserts was scanned to characterize the CT number to density conversion curves. In total, 33 patients with various cancer sites were scanned using multiple tube potentials. All CT acquisitions were reconstructed with the standard filtered back-projection (FBP) and the newly developed DirectDensity™ (DD) algorithm. The mean tumor doses and the volume percentage that receives more than 95% of the prescribed dose were calculated for the planning target volume. Relevant parameters for the organs at risk for each tumor site were also calculated. Results: The relative mean dose differences between the standard 120 kVp FBP CT scan workflow and the DD CT scans (80, 100, 120, and 140 kVp) were in general less than 1% for the planning target volume and organs at risk. Conclusion: The energy-independent DD algorithm allows for accurate dose calculations over a variety of body sites. This novel algorithm eliminates the tube-potential-specific calibration procedure and thereby simplifies the clinical radiotherapy workflow. Keywords: CT imaging, Image reconstruction, Dose calculations, Electron density reconstruction

  8. Interlog protein network: an evolutionary benchmark of protein interaction networks for the evaluation of clustering algorithms.

    Science.gov (United States)

    Jafari, Mohieddin; Mirzaie, Mehdi; Sadeghi, Mehdi

    2015-10-05

    In the field of network science, exploring principal and crucial modules or communities is critical in deducing the relationships and organization of complex networks. This approach opens a wide arena for the further study of biological functions in the field of network biology. As the clustering algorithms currently employed in finding modules have innate uncertainties, external and internal validations are necessary. Sequence and network structure alignment has been used to define the Interlog Protein Network (IPN). This network is an evolutionarily conserved network with communal nodes and fewer false-positive links. In the current study, the IPN is employed as an evolution-based benchmark for the validation of module finding methods. The clustering results of five algorithms, namely Markov Clustering (MCL), Restricted Neighborhood Search Clustering (RNSC), Cartographic Representation (CR), Laplacian Dynamics (LD), and Genetic Algorithm to find communities in Protein-Protein Interaction networks (GAPPI), are assessed by the IPN in four distinct Protein-Protein Interaction Networks (PPINs). MCL proves the more accurate algorithm under this evolutionary benchmarking approach. Also, the biological relevance of proteins in the IPN modules generated by MCL is compatible with standard biological databases such as Gene Ontology, KEGG and Reactome. In this study, the IPN shows its potential for the validation of clustering algorithms due to its biological logic and straightforward implementation.

  9. DS+: Reliable Distributed Snapshot Algorithm for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Gamze Uslu

    2013-01-01

    Full Text Available Acquiring a snapshot of a distributed system helps in gathering system-related global state. In wireless sensor networks (WSNs), the global state shows whether a node has terminated or a deadlock has occurred, along with many other situations that can prevent a WSN from fully functioning. In this paper, we present a fully distributed snapshot acquisition algorithm adapted to tree-topology wireless sensor networks (WSNs). Since snapshot acquisition is through control messages sent over highly lossy wireless channels and congested nodes, we enhanced the snapshot algorithm with a sink-based reliability suite to achieve robustness. We analyzed the performance of the algorithm in terms of snapshot success ratio and response time, both in simulation and in a small experimental test bed. The results reveal that the proposed tailor-made reliability model improves snapshot acquisition performance by a factor of seven and response time by a factor of two in a 30-node network. We have also shown that the proposed algorithm outperforms its counterparts in the specified network setting.

  10. The power-series algorithm for Markovian queueing networks

    NARCIS (Netherlands)

    van den Hout, W.B.; Blanc, J.P.C.

    1994-01-01

    A new version of the Power-Series Algorithm is developed to compute the steady-state distribution of a rich class of Markovian queueing networks. The arrival process is a Multi-queue Markovian Arrival Process, which is a multi-queue generalization of the BMAP. It includes Poisson, fork and

  11. Thermal depth profiling of vascular lesions: automated regularization of reconstruction algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Verkruysse, Wim; Choi, Bernard; Zhang, Jenny R; Kim, Jeehyun; Nelson, J Stuart [Beckman Laser Institute and Medical Clinic, University of California, Irvine, CA 92612 (United States)], E-mail: wverkruy@uci.edu

    2008-03-07

    Pulsed photo-thermal radiometry (PPTR) is a non-invasive, non-contact diagnostic technique used to locate cutaneous chromophores such as melanin (epidermis) and hemoglobin (vascular structures). The clinical utility of PPTR is limited because it typically requires trained user intervention to regularize the inversion solution. Herein, the feasibility of automated regularization was studied. A second objective of this study was to depart from modeling port wine stain (PWS), a vascular skin lesion frequently studied with PPTR, as a strictly layered structure, since this may influence conclusions regarding PPTR reconstruction quality. Average blood vessel depths, diameters, and densities derived from the histology of 30 PWS patients were used to generate 15 randomized lesion geometries for which we simulated PPTR signals. Reconstruction accuracy for subjective regularization was compared with that for automated regularization methods. The automated, objective regularization approach performed better. However, the average difference was much smaller than the variation between the 15 simulated profiles. Reconstruction quality depended more on the actual profile to be reconstructed than on the reconstruction algorithm or regularization method. Similar, or better, reconstruction accuracy can be achieved with an automated regularization procedure, which enhances the prospects for a user-friendly implementation of PPTR to optimize laser therapy on an individual patient basis.

  12. Multimedia over cognitive radio networks algorithms, protocols, and experiments

    CERN Document Server

    Hu, Fei

    2014-01-01

    Preface. About the Editors. Contributors. Network Architecture to Support Multimedia over CRN: A Management Architecture for Multimedia Communication in Cognitive Radio Networks (Alexandru O. Popescu, Yong Yao, Markus Fiedler, and Adrian P. Popescu). Paving a Wider Way for Multimedia over Cognitive Radios: An Overview of Wideband Spectrum Sensing Algorithms (Bashar I. Ahmad, Hongjian Sun, Cong Ling, and Arumugam Nallanathan). Bargaining-Based Spectrum Sharing for Broadband Multimedia Services in Cognitive Radio Network (Yang Yan, Xiang Chen, Xiaofeng Zhong, Ming Zhao, and Jing Wang). Physical Layer Mobility Challen

  13. Neural-Network-Biased Genetic Algorithms for Materials Design: Evolutionary Algorithms That Learn.

    Science.gov (United States)

    Patra, Tarak K; Meenakshisundaram, Venkatesh; Hung, Jui-Hsiang; Simmons, David S

    2017-02-13

    Machine learning has the potential to dramatically accelerate high-throughput approaches to materials design, as demonstrated by successes in biomolecular design and hard materials design. However, in the search for new soft materials exhibiting properties and performance beyond those previously achieved, machine learning approaches are frequently limited by two shortcomings. First, because they are intrinsically interpolative, they are better suited to the optimization of properties within the known range of accessible behavior than to the discovery of new materials with extremal behavior. Second, they require large pre-existing data sets, which are frequently unavailable and prohibitively expensive to produce. Here we describe a new strategy, the neural-network-biased genetic algorithm (NBGA), for combining genetic algorithms, machine learning, and high-throughput computation or experiment to discover materials with extremal properties in the absence of pre-existing data. Within this strategy, predictions from a progressively constructed artificial neural network are employed to bias the evolution of a genetic algorithm, with fitness evaluations performed via direct simulation or experiment. In effect, this strategy gives the evolutionary algorithm the ability to "learn" and draw inferences from its experience to accelerate the evolutionary process. We test this algorithm against several standard optimization problems and polymer design problems and demonstrate that it matches and typically exceeds the efficiency and reproducibility of standard approaches including a direct-evaluation genetic algorithm and a neural-network-evaluated genetic algorithm. The success of this algorithm in a range of test problems indicates that the NBGA provides a robust strategy for employing informatics-accelerated high-throughput methods to accelerate materials design in the absence of pre-existing data.
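
    A compact sketch of the strategy under stated assumptions: a stand-in function plays the role of the expensive simulation or experiment, scikit-learn's MLPRegressor serves as the progressively retrained surrogate network, the surrogate only biases which offspring survive, and fitness is still assigned by direct evaluation; every hyperparameter below is illustrative.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)

    def expensive_fitness(x):
        """Stand-in for a direct simulation or experiment (lower is better)."""
        return float(np.sum((x - 0.7) ** 2))

    def nbga(dim=8, pop=24, gens=15):
        X = rng.random((pop, dim))
        f = np.array([expensive_fitness(x) for x in X])
        seen_X, seen_f = X.copy(), f.copy()   # archive of all direct evaluations
        for _ in range(gens):
            # progressively retrained surrogate network
            net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000)
            net.fit(seen_X, seen_f)
            # breed an oversized candidate pool by uniform crossover + mutation
            parents = X[np.argsort(f)[:pop // 2]]
            pool = []
            for _ in range(3 * pop):
                a, b = parents[rng.integers(len(parents), size=2)]
                child = np.where(rng.random(dim) < 0.5, a, b)
                pool.append(np.clip(child + rng.normal(0, 0.05, dim), 0, 1))
            pool = np.array(pool)
            # neural-network bias: the surrogate picks which offspring survive...
            X = pool[np.argsort(net.predict(pool))[:pop]]
            # ...while fitness is still assigned by direct evaluation
            f = np.array([expensive_fitness(x) for x in X])
            seen_X = np.vstack([seen_X, X])
            seen_f = np.concatenate([seen_f, f])
        return X[f.argmin()], f.min()
    ```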

  14. The No-Prop algorithm: a new learning algorithm for multilayer neural networks.

    Science.gov (United States)

    Widrow, Bernard; Greenblatt, Aaron; Kim, Youngsik; Park, Dookun

    2013-01-01

    A new learning algorithm for multilayer neural networks that we have named No-Propagation (No-Prop) is hereby introduced. With this algorithm, the weights of the hidden-layer neurons are set and fixed with random values. Only the weights of the output-layer neurons are trained, using steepest descent to minimize mean square error, with the LMS algorithm of Widrow and Hoff. The purpose of introducing nonlinearity with the hidden layers is examined from the point of view of Least Mean Square Error Capacity (LMS Capacity), which is defined as the maximum number of distinct patterns that can be trained into the network with zero error. This is shown to be equal to the number of weights of each of the output-layer neurons. The No-Prop algorithm and the Back-Prop algorithm are compared. Our experience with No-Prop is limited, but from the several examples presented here, it seems that the performance regarding training and generalization of both algorithms is essentially the same when the number of training patterns is less than or equal to LMS Capacity. When the number of training patterns exceeds Capacity, Back-Prop is generally the better performer. But equivalent performance can be obtained with No-Prop by increasing the network Capacity by increasing the number of neurons in the hidden layer that drives the output layer. The No-Prop algorithm is much simpler and easier to implement than Back-Prop. Also, it converges much faster. It is too early to definitively say where to use one or the other of these algorithms. This is still a work in progress. Copyright © 2012 Elsevier Ltd. All rights reserved.
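
    Because the abstract fully specifies the training structure, a faithful miniature is straightforward: a random, fixed hidden layer and Widrow-Hoff LMS training of the output weights only. The tanh nonlinearity, learning rate, and epoch count are assumptions of this sketch.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def noprop_train(X, y, n_hidden=100, lr=0.01, epochs=50):
        """No-Prop: hidden weights are set randomly and never trained;
        only the output weights learn, via the Widrow-Hoff LMS rule."""
        W_hidden = rng.standard_normal((X.shape[1], n_hidden))  # fixed at random
        H = np.tanh(X @ W_hidden)                               # hidden activations
        w_out = np.zeros(n_hidden)
        for _ in range(epochs):
            for h, target in zip(H, y):
                err = target - h @ w_out
                w_out += lr * err * h     # steepest descent on mean square error
        return W_hidden, w_out

    def noprop_predict(W_hidden, w_out, X):
        return np.tanh(X @ W_hidden) @ w_out
    ```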

  15. A robust regularization algorithm for polynomial networks for machine learning

    Science.gov (United States)

    Jaenisch, Holger M.; Handley, James W.

    2011-06-01

    We present an improvement to the fundamental Group Method of Data Handling (GMDH) Data Modeling algorithm that overcomes the parameter sensitivity to novel cases presented to derived networks. We achieve this result by regularization of the output and using a genetic weighting that selects intermediate models that do not exhibit divergence. The result is the derivation of multi-nested polynomial networks following the Kolmogorov-Gabor polynomial that are robust to mean estimators as well as novel exemplars for input. The full details of the algorithm are presented. We also introduce a new method for approximating GMDH in a single regression model using F, H, and G terms that automatically exports the answers as ordinary differential equations. The MathCAD 15 source code for all algorithms and results are provided.

  16. A Compression Algorithm in Wireless Sensor Networks of Bearing Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Zheng Bin; Meng Qingfeng; Wang Nan [Theory of Lubrication and Bearing Institute, Xi'an Jiaotong University, Xi'an 710049 (China); Li Zhi, E-mail: rthree.zhengbin@stu.xjtu.edu.cn [Dalian Machine Tool Group Corp. Dalian, 116620 (China)]

    2011-07-19

    The energy consumption of wireless sensor networks (WSNs) is always an important problem in the application of wireless sensor networks. This paper proposes a data compression algorithm to reduce the amount of data and the energy consumption during the data transmission process in an on-line WSN-based bearing monitoring system. The proposed compression algorithm is based on lifting wavelets, zerotree coding, and Huffman coding. Within it, the 5/3 lifting wavelet is used to divide the data into different frequency bands to extract signal characteristics. Zerotree coding is applied to calculate the dynamic thresholds used to retain the attribute data. The attribute data are then encoded by Huffman coding to further enhance the compression ratio. In order to validate the algorithm, a simulation is carried out using Matlab. The simulation result shows that the proposed algorithm is very suitable for the compression of bearing monitoring data. The algorithm has been successfully used in an online WSN-based bearing monitoring system, in which a TI DSP TMS320F2812 is used to realize the algorithm.
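
    The band-splitting step can be sketched as one level of the integer 5/3 (LeGall) lifting transform, shown below for an even-length signal with symmetric boundary extension; the zerotree thresholding and Huffman stages are omitted.

    ```python
    import numpy as np

    def legall53_forward(x):
        """One level of the integer 5/3 (LeGall) lifting transform.
        Assumes an even-length signal; symmetric boundary extension."""
        x = np.asarray(x, dtype=np.int64)
        even, odd = x[0::2], x[1::2]
        even_r = np.append(even[1:], even[-1])   # even neighbour to the right
        d = odd - ((even + even_r) >> 1)         # predict step -> detail band
        d_l = np.insert(d[:-1], 0, d[0])         # detail neighbour to the left
        s = even + ((d_l + d + 2) >> 2)          # update step -> smooth band
        return s, d
    ```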

  17. A new asynchronous parallel algorithm for inferring large-scale gene regulatory networks.

    Directory of Open Access Journals (Sweden)

    Xiangyun Xiao

    Full Text Available The reconstruction of gene regulatory networks (GRNs) from high-throughput experimental data has been considered one of the most important issues in systems biology research. With the development of high-throughput technology and the complexity of biological problems, we need to reconstruct GRNs that contain thousands of genes. However, when many existing algorithms are used to handle these large-scale problems, they encounter two important issues: low accuracy and high computational cost. To overcome these difficulties, the main goal of this study is to design an effective parallel algorithm to infer large-scale GRNs based on high-performance parallel computing environments. In this study, we proposed a novel asynchronous parallel framework to improve the accuracy and lower the time complexity of large-scale GRN inference by combining splitting technology and ordinary differential equation (ODE)-based optimization. The presented algorithm uses the sparsity and modularity of GRNs to split whole large-scale GRNs into many small-scale modular subnetworks. Through the ODE-based optimization of all subnetworks in parallel and their asynchronous communications, we can easily obtain the parameters of the whole network. To test the performance of the proposed approach, we used well-known benchmark datasets from the Dialogue for Reverse Engineering Assessments and Methods challenge (DREAM), the experimentally determined GRN of Escherichia coli, and one published dataset that contains more than 10 thousand genes, to compare the proposed approach with several popular algorithms in the same high-performance computing environments in terms of both accuracy and time complexity. The numerical results demonstrate that our parallel algorithm exhibits obvious superiority in inferring large-scale GRNs.

  18. Evaluation of digital tomosynthesis reconstruction algorithms used to reduce metal artifacts for arthroplasty: A phantom study.

    Science.gov (United States)

    Gomi, Tsutomu; Sakai, Rina; Goto, Masami; Hara, Hidetake; Watanabe, Yusuke; Umeda, Tokuo

    2017-10-01

    To investigate methods to reduce metal artifacts during digital tomosynthesis for arthroplasty, we evaluated five algorithms, with and without metal artifact reduction (MAR) processing, tested under different radiation doses (0.54, 0.47, and 0.33 mSv): adaptive steepest descent projection onto convex sets (ASD-POCS), simultaneous algebraic reconstruction technique total variation (SART-TV), filtered back projection (FBP), maximum likelihood expectation maximization (MLEM), and SART. The algorithms were assessed by determining the artifact index (AI) and artifact spread function (ASF) on a prosthesis phantom. The AI data were statistically analyzed by two-way analysis of variance. Without MAR-processing, the MLEM algorithm was the most effective at reducing prosthetic phantom-related metal artifacts, as quantified by the AI (MLEM vs. ASD-POCS, SART-TV, SART, and FBP; all differences statistically significant). With MAR-processing, the ASD-POCS, SART-TV, and SART algorithms were the most effective at reducing prosthetic phantom-related metal artifacts, as quantified by the AI (MLEM, ASD-POCS, SART-TV, and SART vs. FBP; all differences statistically significant). In terms of the ASF, the effect of metal artifact reduction was always greater at reduced radiation doses, regardless of which reconstruction algorithm, with or without MAR-processing, was used. In this phantom study, the MLEM algorithm without MAR-processing and the ASD-POCS, SART-TV, and SART algorithms with MAR-processing gave improved metal artifact reduction. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  19. Metaheuristic Algorithms for Convolution Neural Network.

    Science.gov (United States)

    Rere, L M Rasdi; Fanany, Mohamad Ivan; Arymurthy, Aniati Murni

    2016-01-01

    A typical modern optimization technique is usually either heuristic or metaheuristic. Such techniques have managed to solve some optimization problems in the research areas of science, engineering, and industry. However, implementation strategies of metaheuristics for accuracy improvement on convolution neural networks (CNN), a famous deep learning method, are still rarely investigated. Deep learning relates to a type of machine learning technique whose aim is to move closer to the artificial-intelligence goal of creating a machine that can successfully perform any intellectual task that a human can carry out. In this paper, we propose implementation strategies for three popular metaheuristic approaches, namely simulated annealing, differential evolution, and harmony search, to optimize CNNs. The performances of these metaheuristic methods in optimizing CNNs for classifying the MNIST and CIFAR datasets were evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in computation time, their accuracy is also improved (by up to 7.14 percent).

  20. Metaheuristic Algorithms for Convolution Neural Network

    Directory of Open Access Journals (Sweden)

    L. M. Rasdi Rere

    2016-01-01

    Full Text Available A typical modern optimization technique is usually either heuristic or metaheuristic. Such techniques have managed to solve some optimization problems in the research areas of science, engineering, and industry. However, implementation strategies of metaheuristics for accuracy improvement on convolution neural networks (CNN), a famous deep learning method, are still rarely investigated. Deep learning relates to a type of machine learning technique whose aim is to move closer to the artificial-intelligence goal of creating a machine that can successfully perform any intellectual task that a human can carry out. In this paper, we propose implementation strategies for three popular metaheuristic approaches, namely simulated annealing, differential evolution, and harmony search, to optimize CNNs. The performances of these metaheuristic methods in optimizing CNNs for classifying the MNIST and CIFAR datasets were evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in computation time, their accuracy is also improved (by up to 7.14 percent).
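
    A generic sketch of the simulated-annealing loop, one of the three metaheuristics discussed; in a real run `loss` would wrap CNN training and validation, whereas here it is a stand-in objective over a hypothetical (learning_rate, dropout) pair, and the cooling schedule is illustrative.

    ```python
    import math
    import random

    random.seed(0)

    def simulated_annealing(loss, init, neighbor, T0=1.0, cooling=0.95, steps=200):
        """Generic SA loop; `loss` would wrap a CNN train/validate run."""
        x, fx = init, loss(init)
        best, fbest = x, fx
        T = T0
        for _ in range(steps):
            cand = neighbor(x)
            fc = loss(cand)
            # accept improvements always, worse moves with Boltzmann probability
            if fc < fx or random.random() < math.exp((fx - fc) / T):
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = x, fx
            T *= cooling
        return best, fbest

    # toy usage: tune a hypothetical (learning_rate, dropout) pair
    loss = lambda p: (p[0] - 0.01) ** 2 + (p[1] - 0.5) ** 2   # stand-in objective
    neighbor = lambda p: (abs(p[0] + random.gauss(0, 0.005)),
                          min(max(p[1] + random.gauss(0, 0.05), 0.0), 1.0))
    print(simulated_annealing(loss, (0.1, 0.2), neighbor))
    ```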

  1. Open-source algorithm for automatic choroid segmentation of OCT volume reconstructions

    Science.gov (United States)

    Mazzaferri, Javier; Beaton, Luke; Hounye, Gisèle; Sayah, Diane N.; Costantino, Santiago

    2017-02-01

    The use of optical coherence tomography (OCT) to study ocular diseases associated with choroidal physiology is sharply limited by the lack of available automated segmentation tools. Current research largely relies on hand-traced, single B-Scan segmentations because commercially available programs require high quality images, and the existing implementations are closed, scarce and not freely available. We developed and implemented a robust algorithm for segmenting and quantifying the choroidal layer from 3-dimensional OCT reconstructions. Here, we describe the algorithm, validate and benchmark the results, and provide an open-source implementation under the General Public License for any researcher to use (https://www.mathworks.com/matlabcentral/fileexchange/61275-choroidsegmentation).

  2. An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks.

    Science.gov (United States)

    Xie, Xiurui; Qu, Hong; Liu, Guisong; Zhang, Malu; Kurths, Jürgen

    2016-01-01

    The spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time, respectively, which reduces training efficiency significantly. For training hierarchical SNNs, most existing methods are based on the traditional back-propagation algorithm, inheriting its drawbacks of gradient diffusion and sensitivity to parameters. To keep the powerful computational capability of the hierarchical structure and temporal encoding mechanism, while overcoming the low efficiency of existing algorithms, a new training algorithm, Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are calculated by solving the quadratic function in the spike response model instead of detecting postsynaptic voltage states at all time points as in traditional algorithms. In the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise training. Furthermore, our algorithm investigates the mathematical relation between the weight variation and the voltage error change, which makes normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms traditional SNN multi-layer algorithms in terms of learning efficiency and parameter sensitivity, as is also demonstrated by the comprehensive experimental results in this paper.

  3. Convergence of SART + OS + TV iterative reconstruction algorithm for optical CT imaging of gel dosimeters

    Science.gov (United States)

    Du, Yi; Yu, Gongyi; Xiang, Xincheng; Wang, Xiangang; De Deene, Yves

    2017-05-01

    Computational simulations are used to investigate the convergence of a hybrid iterative algorithm for optical CT reconstruction, i.e. the simultaneous algebraic reconstruction technique (SART) integrated with ordered subsets (OS) iteration and total variation (TV) minimization regularization, or SART+OS+TV for short. The influence of parameter selection on convergence, spatial dose gradient integrity, MTF, and convergence speed is discussed. It is shown that the results of the SART+OS+TV algorithm converge to the true values without significant bias, and that the MTF and convergence speed are affected by the parameter sets used for the iterative calculation. In conclusion, the performance of SART+OS+TV depends on parameter selection, which also implies that careful parameter tuning is required and necessary for proper spatial performance and fast convergence.
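
    A minimal numpy sketch of the SART+TV alternation (the ordered-subsets acceleration is omitted for brevity): one SART sweep enforcing data fidelity, followed by a few steepest-descent steps on a smoothed total-variation term. The system matrix, relaxation factor, and TV step sizes are placeholders, not the authors' settings.

    ```python
    import numpy as np

    def tv_gradient(x, eps=1e-8):
        """Gradient of a smoothed isotropic total-variation term of a 2-D image."""
        dx = np.diff(x, axis=1, append=x[:, -1:])
        dy = np.diff(x, axis=0, append=x[-1:, :])
        mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
        div_x = np.diff(dx / mag, axis=1, prepend=(dx / mag)[:, :1])
        div_y = np.diff(dy / mag, axis=0, prepend=(dy / mag)[:1, :])
        return -(div_x + div_y)

    def sart_tv(A, b, shape, n_iter=50, lam=0.25, tv_steps=10, tv_alpha=0.002):
        """Alternate one SART sweep with a few TV-descent steps.
        A: (n_rays, n_pixels) system matrix, b: measured projections."""
        x = np.zeros(A.shape[1])
        row_sum = A.sum(1); row_sum[row_sum == 0] = 1
        col_sum = A.sum(0); col_sum[col_sum == 0] = 1
        for _ in range(n_iter):
            # SART: simultaneous update weighted by ray and pixel sums
            x += lam * (A.T @ ((b - A @ x) / row_sum)) / col_sum
            x = np.clip(x, 0, None)               # nonnegativity constraint
            img = x.reshape(shape)
            for _ in range(tv_steps):             # TV regularization
                img -= tv_alpha * tv_gradient(img)
            x = img.ravel()
        return x.reshape(shape)
    ```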

  4. Calibration of muon reconstruction algorithms using an external muon tracking system at the Sudbury Neutrino Observatory

    Energy Technology Data Exchange (ETDEWEB)

    Sonley, T.J. [Laboratory for Nuclear Science, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Department of Physics, Queen's University, Kingston, Ontario, Canada K7L 3N6 (Canada); Abruzzio, R. [Laboratory for Nuclear Science, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Chan, Y.D.; Currat, C.A. [Institute for Nuclear and Particle Astrophysics and Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Duncan, F.A. [SNOLAB, Sudbury, ON, P3Y 1M3 (Canada); Department of Physics, Queen's University, Kingston, Ontario, K7L 3N6 (Canada); Farine, J. [Department of Physics and Astronomy, Laurentian University, Sudbury, Ontario, P3E 2C6 (Canada); Ford, R.J. [SNOLAB, Sudbury, ON, P3Y 1M3 (Canada); Formaggio, J.A., E-mail: josephf@mit.edu [Laboratory for Nuclear Science, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Center for Experimental Nuclear Physics and Astrophysics, and Department of Physics, University of Washington, Seattle, WA 98195 (United States); Gagnon, N. [Institute for Nuclear and Particle Astrophysics and Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Department of Physics, Queen's University, Kingston, Ontario, K7L 3N6 (Canada); Center for Experimental Nuclear Physics and Astrophysics, and Department of Physics, University of Washington, Seattle, WA 98195 (United States); Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Hallin, A.L. [Department of Physics, Queen's University, Kingston, Ontario, K7L 3N6 (Canada); Department of Physics, University of Alberta, Edmonton, Alberta, T6G 2R3 (Canada)

    2011-08-21

    To help constrain the algorithms used in reconstructing high-energy muon events incident on the Sudbury Neutrino Observatory (SNO), a muon tracking system was installed. The system consisted of four planes of wire chambers, which were triggered by scintillator panels. The system was integrated with SNO's main data acquisition system and took data for a total of 95 live days. Using cosmic-ray events reconstructed in both the wire chambers and in SNO's water Cherenkov detector, the external muon tracking system was able to constrain the uncertainty on the muon direction to better than 0.6°. - Highlights: • This paper describes a novel technique for calibrating tracking algorithms. • The experimental accuracy achieved by this system was better than 1°. • The principle behind the technique can be used in future underground experiments.

  5. Study on infrared image super-resolution reconstruction based on an improved POCS algorithm

    Science.gov (United States)

    Dai, Shaosheng; Cui, Junjie; Zhang, Dezhou; Liu, Qin; Zhang, Xiaoxiao

    2017-04-01

    Aiming at the disadvantages of the traditional projection onto convex sets method, namely blurry edges and lack of image detail, this paper proposes an improved projection onto convex sets (POCS) method to enhance the quality of image super-resolution reconstruction (SRR). In the traditional POCS method, bilinear interpolation easily blurs the image. In order to improve the initial estimate of the high-resolution image (HRI) during POCS reconstruction, the initial estimate of the HRI is obtained through iterative curvature-based interpolation (ICBI) instead of bilinear interpolation. Compared with the traditional POCS algorithm, experimental results in both subjective and objective evaluation demonstrate the effectiveness of the proposed method: the visual effect is improved significantly and image detail is better preserved. Project supported by the National Natural Science Foundation of China (Nos. 61275099, 61671094) and the Natural Science Foundation of Chongqing Science and Technology Commission (No. CSTC2015JCYJA40032).

  6. Quantum noise properties of CT images with anatomical textured backgrounds across reconstruction algorithms: FBP and SAFIRE

    Energy Technology Data Exchange (ETDEWEB)

    Solomon, Justin, E-mail: justin.solomon@duke.edu [Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University Medical Center, Durham, North Carolina 27705 (United States); Samei, Ehsan [Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University Medical Center, Durham, North Carolina 27705 and Departments of Biomedical Engineering and Electrical and Computer Engineering, Pratt School of Engineering, Duke University, Durham, North Carolina 27705 (United States)

    2014-09-15

    Purpose: Quantum noise properties of CT images are generally assessed using simple geometric phantoms with uniform backgrounds. Such phantoms may be inadequate when assessing nonlinear reconstruction or postprocessing algorithms. The purpose of this study was to design anatomically informed textured phantoms and use the phantoms to assess quantum noise properties across two clinically available reconstruction algorithms, filtered back projection (FBP) and sinogram affirmed iterative reconstruction (SAFIRE). Methods: Two phantoms were designed to represent lung and soft-tissue textures. The lung phantom included intricate vessel-like structures along with embedded nodules (spherical, lobulated, and spiculated). The soft-tissue phantom was designed based on a three-dimensional clustered lumpy background with included low-contrast lesions (spherical and anthropomorphic). The phantoms were built using rapid prototyping (3D printing) technology and, along with a uniform phantom of similar size, were imaged on a Siemens SOMATOM Definition Flash CT scanner and reconstructed with FBP and SAFIRE. Fifty repeated acquisitions were performed for each background type, and noise was assessed by estimating pixel-value statistics, such as standard deviation (i.e., noise magnitude), autocorrelation, and noise power spectrum. Noise stationarity was also assessed by examining the spatial distribution of noise magnitude. The noise properties were compared across background types and between the two reconstruction algorithms. Results: In FBP and SAFIRE images, noise was globally nonstationary for all phantoms. In FBP images of all phantoms, and in SAFIRE images of the uniform phantom, noise appeared to be locally stationary (within a reasonably small region of interest). Noise was locally nonstationary in SAFIRE images of the textured phantoms, with edge pixels showing higher noise magnitude compared to pixels in more homogeneous regions. For pixels in uniform regions, noise magnitude was
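
    In its simplest form, the repeated-acquisition analysis described here reduces to ensemble statistics over co-registered scans. A minimal NumPy sketch (array shapes and the normalization convention are assumptions, not the authors' exact pipeline):

        import numpy as np

        def noise_stats(repeats, pixel_mm):
            # repeats: (n_scans, H, W) stack of the same slice from repeated scans
            noise = repeats - repeats.mean(axis=0, keepdims=True)  # remove ensemble mean
            noise_map = repeats.std(axis=0, ddof=1)                # local noise magnitude
            h, w = noise.shape[1:]
            dft = np.fft.fftshift(np.fft.fft2(noise), axes=(-2, -1))
            nps = (np.abs(dft) ** 2).mean(axis=0) * pixel_mm**2 / (h * w)  # 2D NPS
            freqs = np.fft.fftshift(np.fft.fftfreq(w, d=pixel_mm))  # cycles/mm axis
            return noise_map, nps, freqs

    A stationarity check then amounts to comparing noise_map statistics between edge and homogeneous regions.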

  7. IJA: An Efficient Algorithm for Query Processing in Sensor Networks

    Directory of Open Access Journals (Sweden)

    Dong Hwa Kim

    2011-01-01

    One of the main features of sensor networks is the ability to process real-time state information after gathering the needed data from many domains. The component technologies making up each node, called a sensor node (including physical sensors, processors, actuators, and power), have advanced significantly over the last decade. Thanks to this advanced technology, sensor networks have over time been adopted across industry for sensing physical phenomena. However, sensor nodes in sensor networks are considerably constrained: with their limited energy and memory resources, they have a very limited ability to process information compared to conventional computer systems, so query processing over the nodes is constrained by these limitations. For this reason, join operations in sensor networks are typically processed in a distributed manner over a set of nodes and have been studied accordingly. While simple queries, such as select and aggregate queries, in sensor networks have been addressed in the literature, the processing of join queries in sensor networks remains to be investigated. Therefore, in this paper, we propose and describe an Incremental Join Algorithm (IJA) in sensor networks to reduce the overhead caused by moving a join pair to the final join node, i.e., to minimize the communication cost that is the main consumer of the battery when processing distributed queries in sensor network environments. Simulation results show that the proposed IJA algorithm significantly reduces the number of bytes to be moved to join nodes compared to the popular synopsis join algorithm.

  8. IJA: An Efficient Algorithm for Query Processing in Sensor Networks

    Science.gov (United States)

    Lee, Hyun Chang; Lee, Young Jae; Lim, Ji Hyang; Kim, Dong Hwa

    2011-01-01

    One of the main features of sensor networks is the ability to process real-time state information after gathering the needed data from many domains. The component technologies making up each node, called a sensor node (including physical sensors, processors, actuators, and power), have advanced significantly over the last decade. Thanks to this advanced technology, sensor networks have over time been adopted across industry for sensing physical phenomena. However, sensor nodes in sensor networks are considerably constrained: with their limited energy and memory resources, they have a very limited ability to process information compared to conventional computer systems, so query processing over the nodes is constrained by these limitations. For this reason, join operations in sensor networks are typically processed in a distributed manner over a set of nodes and have been studied accordingly. While simple queries, such as select and aggregate queries, in sensor networks have been addressed in the literature, the processing of join queries in sensor networks remains to be investigated. Therefore, in this paper, we propose and describe an Incremental Join Algorithm (IJA) in sensor networks to reduce the overhead caused by moving a join pair to the final join node, i.e., to minimize the communication cost that is the main consumer of the battery when processing distributed queries in sensor network environments. Simulation results show that the proposed IJA algorithm significantly reduces the number of bytes to be moved to join nodes compared to the popular synopsis join algorithm. PMID:22319375

  9. Genetic algorithm application in optimization of wireless sensor networks.

    Science.gov (United States)

    Norouzi, Ali; Zaim, A Halim

    2014-01-01

    There are several known applications for wireless sensor networks (WSN), and such variety demands improvement of the currently available protocols and their specific parameters. Notable parameters are the lifetime of the network and the energy consumption of routing, which play a key role in every application. The genetic algorithm is one of the nonlinear optimization methods and a relatively good option thanks to its efficiency for large-scale applications and the fact that the final formula can be modified by operators. The present survey attempts a comprehensive improvement in all operational stages of a WSN, including node placement, network coverage, clustering, and data aggregation, to achieve an ideal set of parameters for routing and application-based WSNs. Using a genetic algorithm and based on the results of simulations in NS, a specific fitness function was achieved, optimized, and customized for all the operational stages of WSNs.
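
    As a toy illustration of how a GA can drive one of these stages, the sketch below evolves sets of k cluster heads against a distance-based fitness; the chromosome encoding and the fitness are simplified stand-ins for the simulation-derived fitness function described above.

        import random

        def ga_cluster_heads(dist, k, pop=30, gens=200, pm=0.2):
            # dist: symmetric matrix of pairwise node distances (n >= k nodes)
            n = len(dist)
            def cost(heads):  # total node-to-nearest-head distance (lower is better)
                return sum(min(dist[i][h] for h in heads) for i in range(n))
            P = [set(random.sample(range(n), k)) for _ in range(pop)]
            for _ in range(gens):
                P.sort(key=cost)
                P = P[:pop // 2]                                  # truncation selection
                while len(P) < pop:
                    a, b = random.sample(P[:10], 2)               # pick two good parents
                    child = set(random.sample(sorted(a | b), k))  # uniform crossover
                    if random.random() < pm:                      # mutation: swap a head
                        child.discard(random.choice(sorted(child)))
                        while len(child) < k:
                            child.add(random.randrange(n))
                    P.append(child)
            return min(P, key=cost)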

  10. A blind matching algorithm for cognitive radio networks

    KAUST Repository

    Hamza, Doha R.

    2016-08-15

    We consider a cognitive radio network where secondary users (SUs) are allowed access time to the spectrum belonging to the primary users (PUs) provided that they relay primary messages. PUs and SUs negotiate over allocations of the secondary power that will be used to relay PU data. We formulate the problem as a generalized assignment market to find an epsilon pairwise stable matching. We propose a distributed blind matching algorithm (BLMA) to produce the pairwise-stable matching plus the associated power allocations. We stipulate a limited information exchange in the network so that agents only calculate their own utilities but no information is available about the utilities of any other users in the network. We establish convergence to epsilon pairwise stable matchings in finite time. Finally we show that our algorithm exhibits a limited degradation in PU utility when compared with the Pareto optimal results attained using perfect information assumptions. © 2016 IEEE.

  11. Chemiomics: network reconstruction and kinetics of port wine aging.

    Science.gov (United States)

    Monforte, Ana Rita; Jacobson, Dan; Silva Ferreira, A C

    2015-03-11

    Network reconstruction (NR) has proven to be useful in the detection and visualization of relationships among the compounds present in a Port wine aging data set. This view of the data provides a considerable amount of information with which to understand the kinetic contexts of the molecules represented by peaks in each chromatogram. The aim of this study was to use NR together with the determination of kinetic parameters to extract more information about the mechanisms involved in Port wine aging. The volatile compounds present in samples of Port wines spanning 128 years in age were measured with the use of GC-MS. After chromatogram alignment, a peak matrix was created, and all peak vectors were compared to one another to determine their Pearson correlations over time. A correlation network was created and filtered on the basis of the resulting correlation values. Some nodes in the network were further studied in experiments on Port wines stored under different conditions of oxygen and temperature in order to determine their kinetic parameters. The resulting network can be divided into three main branches. The first branch is related to compounds that do not directly correlate with age, the second branch contains compounds affected by temperature, and the third branch contains compounds associated with oxygen. Compounds clustered in the same branch of the network have similar expression patterns over time as well as the same kinetic order, and thus are likely to depend on the same technological parameters. Network construction and visualization provide more information with which to understand the probable kinetic contexts of the molecules represented by peaks in each chromatogram. The approach described here is a powerful tool for the study of mechanisms and kinetics in complex systems and should aid in the understanding and monitoring of wine quality.
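
    At its core, the NR step described here thresholds pairwise Pearson correlations between aligned peak vectors. A minimal sketch, assuming the aligned peak matrix has already been built (variable names are hypothetical):

        import numpy as np

        def correlation_network(peaks, names, r_min=0.9):
            # peaks: (n_samples, n_peaks) matrix of aligned peak areas over the series
            r = np.corrcoef(peaks, rowvar=False)      # Pearson correlations over time
            n = r.shape[0]
            return [(names[i], names[j], r[i, j])     # keep strongly correlated pairs
                    for i in range(n) for j in range(i + 1, n)
                    if abs(r[i, j]) >= r_min]

    Connected components of the resulting edge list then correspond to branches such as the temperature- and oxygen-driven clusters discussed above.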

  12. GENETIC ALGORITHM BASED CONCEPT DESIGN TO OPTIMIZE NETWORK LOAD BALANCE

    Directory of Open Access Journals (Sweden)

    Ashish Jain

    2012-07-01

    Multi-constraint optimal network load balancing is an NP-hard problem and an important part of traffic engineering. In this research we first balance the network load using classical methods, a brute-force approach and dynamic programming, but the results show the limitations of these methods: as the number of nodes and demands increases, the solution set grows exponentially and optimizing the balanced network load becomes intractable. In such cases, optimization techniques such as evolutionary methods can be employed for optimizing network load balance. In this paper the proposed classical algorithm is analyzed, and an evolutionary, genetic-based approach is devised and proposed for optimizing the balanced network load.

  13. Resistive Network Optimal Power Flow: Uniqueness and Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Tan, CW; Cai, DWH; Lou, X

    2015-01-01

    The optimal power flow (OPF) problem minimizes the power loss in an electrical network by optimizing the voltage and power delivered at the network buses, and is a nonconvex problem that is generally hard to solve. By leveraging a recent development on the zero duality gap of OPF, we propose a second-order cone programming convex relaxation of the resistive network OPF, and study the uniqueness of the optimal solution using differential topology, especially the Poincaré-Hopf Index Theorem. We characterize the global uniqueness for different network topologies, e.g., line, radial, and mesh networks. This serves as a starting point to design distributed local algorithms with global behaviors that have low complexity, are computationally fast, and can run under synchronous and asynchronous settings in practical power grids.

  14. Intelligent Control of Urban Road Networks: Algorithms, Systems and Communications

    Science.gov (United States)

    Smith, Mike

    This paper considers control in road networks. Using a simple example based on the well-known Braess network [1] the paper shows that reducing delay for traffic, assuming that the traffic distribution is fixed, may increase delay when travellers change their travel choices in light of changes in control settings and hence delays. It is shown that a similar effect occurs within signal controlled networks. In this case the effect appears at first sight to be much stronger: the overall capacity of a network may be substantially reduced by utilising standard responsive signal control algorithms. In seeking to reduce delays for existing flows, these policies do not allow properly for consequent routeing changes by travellers. Control methods for signal-controlled networks that do take proper account of the reactions of users are suggested; these require further research, development, and careful real-life trials.
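
    The Braess effect is easy to verify numerically with the textbook instance of the network; the numbers below are the standard illustrative ones, not data from this paper.

        # 4000 drivers, two fixed 45-minute links, two congestible links (delay = flow/100)
        N = 4000
        t_before = N / 2 / 100 + 45        # traffic splits evenly over two routes: 65.0 min
        t_after = N / 100 + 0 + N / 100    # a zero-delay shortcut lures everyone onto
                                           # both congestible links: 80.0 min
        print(t_before, t_after)           # 65.0 80.0 -> the "improvement" adds 15 minutes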

  15. Evaluation of a commercial Model Based Iterative reconstruction algorithm in computed tomography.

    Science.gov (United States)

    Paruccini, Nicoletta; Villa, Raffaele; Pasquali, Claudia; Spadavecchia, Chiara; Baglivi, Antonia; Crespi, Andrea

    2017-09-01

    Iterative reconstruction algorithms have been introduced into clinical practice to obtain dose reduction without compromising diagnostic performance. The aim was to investigate the commercial Model-Based IMR algorithm in terms of patient dose and image quality, with standard Fourier and alternative metrics. A Catphan phantom, a commercial density phantom and a cylindrical water-filled phantom were scanned, varying both CTDIvol and reconstruction thickness. Images were then reconstructed with Filtered Back Projection and both statistical (iDose) and Model-Based (IMR) iterative reconstruction algorithms. Spatial resolution was evaluated with the Modulation Transfer Function and the Target Transfer Function. Noise reduction was investigated with the Standard Deviation, and its behaviour was further analysed with the 3D and 2D Noise Power Spectrum. Blur and Low Contrast Detectability were investigated. Patient dose indexes were collected and analysed. All image quality results have been compared to FBP standard reconstructions. Model-Based IMR significantly improves the Modulation Transfer Function, with an increase between 12% and 64%. Target Transfer Function curves confirm this trend for high-density objects, while Blur shows a sharpness reduction for low-density details. Model-Based IMR shows a noise reduction between 44% and 66% and a change in noise power spectrum behaviour. Low Contrast Detectability curves show an average improvement of 35-45%; these results are compatible with an achievable reduction of 50% of CTDIvol. A dose reduction between 25% and 35% is confirmed by median values of CTDIvol. IMR produces an improvement in image quality and a dose reduction. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  16. Acoustical source reconstruction from non-synchronous sequential measurements by Fast Iterative Shrinkage Thresholding Algorithm

    Science.gov (United States)

    Yu, Liang; Antoni, Jerome; Leclere, Quentin; Jiang, Weikang

    2017-11-01

    Acoustical source reconstruction is a typical inverse problem, whose minimum reconstruction frequency hinges on the size of the array and whose maximum frequency depends on the spacing between the microphones. For the sake of enlarging the frequency range of reconstruction and reducing the cost of the acquisition system, Cyclic Projection (CP), a method of sequential measurements without reference, was recently investigated (JSV, 2016, 372:31-49). In this paper, the Propagation-based Fast Iterative Shrinkage Thresholding Algorithm (Propagation-FISTA) is introduced, which improves on CP in two aspects: (1) the number of acoustic sources is no longer needed, the only assumption made being that of a "weakly sparse" eigenvalue spectrum; (2) the construction of the spatial basis is much easier and adaptive to practical scenarios of acoustical measurement, benefiting from the introduction of a propagation-based spatial basis. The proposed Propagation-FISTA is first investigated with different simulations and experimental setups and is then illustrated with an industrial case.
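
    For orientation, a generic dense-matrix FISTA for the l1-regularized least-squares problem is sketched below. The paper's Propagation-FISTA additionally builds its operator from a propagation-based spatial basis and handles complex acoustic data, which this real-valued sketch omits.

        import numpy as np

        def fista(A, b, lam, n_iter=200):
            # solves min_x 0.5 * ||A x - b||^2 + lam * ||x||_1
            L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1]); y = x.copy(); t = 1.0
            for _ in range(n_iter):
                g = y - (A.T @ (A @ y - b)) / L        # gradient step on the smooth term
                x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
                t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
                y = x_new + ((t - 1) / t_new) * (x_new - x)  # Nesterov-style momentum
                x, t = x_new, t_new
            return x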

  17. Development and performance of track reconstruction algorithms at the energy frontier with the ATLAS detector

    Science.gov (United States)

    Gagnon, Louis-Guillaume; ATLAS Collaboration

    2017-10-01

    ATLAS track reconstruction software is continuously evolving to match the demands from the increasing instantaneous luminosity of the LHC, as well as the increased center-of-mass energy. These conditions result in a higher abundance of events with dense track environments, such as the core of jets or boosted tau leptons undergoing three-prong decays. These environments are characterised by charged particle separations on the order of the ATLAS inner detector sensor dimensions and are created by the decay of boosted objects. Significant upgrades were made to the track reconstruction software to cope with the expected conditions during LHC Run 2. In particular, new algorithms targeting dense environments were developed. These changes lead to a substantial reduction of reconstruction time while at the same time improving physics performance. The employed methods are presented and physics performance studies are shown, including a measurement of the fraction of lost tracks in jets with high transverse momentum.

  18. Development and performance of track reconstruction algorithms at the energy frontier with the ATLAS detector

    CERN Document Server

    Gagnon, Louis-Guillaume; The ATLAS collaboration

    2017-01-01

    ATLAS track reconstruction software is continuously evolving to match the demands from the increasing instantaneous luminosity of the LHC, as well as the increased center-of-mass energy. These conditions result in a higher abundance of events with dense track environments, such as the core of jets or boosted tau leptons undergoing three-prong decays. These environments are characterised by charged particle separations on the order of the ATLAS inner detector sensor dimensions and are created by the decay of boosted objects. Significant upgrades were made to the track reconstruction software to cope with the expected conditions during LHC Run 2. In particular, new algorithms targeting dense environments were developed. These changes lead to a substantial reduction of reconstruction time while at the same time improving physics performance. The employed methods are presented and physics performance studies are shown, including a measurement of the fraction of lost tracks in jets with high transverse momentum.

  19. Development and performance of track reconstruction algorithms at the energy frontier with the ATLAS detector

    CERN Document Server

    Gagnon, Louis-Guillaume; The ATLAS collaboration

    2016-01-01

    ATLAS track reconstruction code is continuously evolving to match the demands from the increasing instantaneous luminosity of LHC, as well as the increased centre-of-mass energy. With the increase in energy, events with dense environments, e.g. the cores of jets or boosted tau leptons, become much more abundant. These environments are characterised by charged particle separations on the order of ATLAS inner detector sensor dimensions and are created by the decay of boosted objects. Significant upgrades were made to the track reconstruction code to cope with the expected conditions during LHC Run 2. In particular, new algorithms targeting dense environments were developed. These changes lead to a substantial reduction of reconstruction time while at the same time improving physics performance. The employed methods are presented. In addition, physics performance studies are shown, e.g. a measurement of the fraction of lost tracks in jets with high transverse momentum.

  20. Adaptive Load-Balancing Algorithms using Symmetric Broadcast Networks

    Science.gov (United States)

    Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    In a distributed computing environment, it is important to ensure that the processor workloads are adequately balanced. Among numerous load-balancing algorithms, a unique approach due to Das and Prasad defines a symmetric broadcast network (SBN) that provides a robust communication pattern among the processors in a topology-independent manner. In this paper, we propose and analyze three efficient SBN-based dynamic load-balancing algorithms, and implement them on an SGI Origin2000. A thorough experimental study with Poisson-distributed synthetic loads demonstrates that our algorithms are effective in balancing system load. By optimizing completion time and idle time, the proposed algorithms are shown to compare favorably with several existing approaches.

  1. A Message-Passing Algorithm for Wireless Network Scheduling.

    Science.gov (United States)

    Paschalidis, Ioannis Ch; Huang, Fuzhuo; Lai, Wei

    2015-10-01

    We consider scheduling in wireless networks and formulate it as Maximum Weighted Independent Set (MWIS) problem on a "conflict" graph that captures interference among simultaneous transmissions. We propose a novel, low-complexity, and fully distributed algorithm that yields high-quality feasible solutions. Our proposed algorithm consists of two phases, each of which requires only local information and is based on message-passing. The first phase solves a relaxation of the MWIS problem using a gradient projection method. The relaxation we consider is tighter than the simple linear programming relaxation and incorporates constraints on all cliques in the graph. The second phase of the algorithm starts from the solution of the relaxation and constructs a feasible solution to the MWIS problem. We show that our algorithm always outputs an optimal solution to the MWIS problem for perfect graphs. Simulation results compare our policies against Carrier Sense Multiple Access (CSMA) and other alternatives and show excellent performance.
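
    The second phase, turning the relaxed solution into a feasible independent set, can be caricatured with a centralized greedy rounding; the real algorithm achieves this with local message passing, which the sketch below does not capture (weights, adj, and frac are assumed dicts keyed by vertex).

        def greedy_mwis(weights, adj, frac):
            # visit vertices by (relaxation value x weight); keep a vertex only if
            # none of its conflict-graph neighbours has already been kept
            chosen = set()
            for v in sorted(weights, key=lambda u: frac[u] * weights[u], reverse=True):
                if not adj[v] & chosen:               # adj[v]: set of neighbours of v
                    chosen.add(v)
            return chosen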

  2. Resource-Aware Data Fusion Algorithms for Wireless Sensor Networks

    CERN Document Server

    Abdelgawad, Ahmed

    2012-01-01

    This book introduces resource-aware data fusion algorithms to gather and combine data from multiple sources (e.g., sensors) in order to achieve inferences.  These techniques can be used in centralized and distributed systems to overcome sensor failure, technological limitation, and spatial and temporal coverage problems. The algorithms described in this book are evaluated with simulation and experimental results to show they will maintain data integrity and make data useful and informative.   Describes techniques to overcome real problems posed by wireless sensor networks deployed in circumstances that might interfere with measurements provided, such as strong variations of pressure, temperature, radiation, and electromagnetic noise; Uses simulation and experimental results to evaluate algorithms presented and includes real test-bed; Includes case study implementing data fusion algorithms on a remote monitoring framework for sand production in oil pipelines.

  3. Algorithm for bionic hand reconstruction in patients with global brachial plexopathies.

    Science.gov (United States)

    Hruby, Laura A; Sturma, Agnes; Mayer, Johannes A; Pittermann, Anna; Salminger, Stefan; Aszmann, Oskar C

    2017-11-01

    OBJECTIVE Global brachial plexus lesions with multiple root avulsions are among the most severe nerve injuries, leading to lifelong disability. Fortunately, in most cases primary and secondary reconstructions provide a stable shoulder and restore sufficient arm function. Restoration of biological hand function, however, remains a reconstructive goal that is difficult to reach. The recently introduced concept of bionic reconstruction overcomes biological limitations of classic reconstructive surgery to restore hand function by combining selective nerve and muscle transfers with elective amputation of the functionless hand and its replacement with a prosthetic device. The authors present their treatment algorithm for bionic hand reconstruction and report on the management and long-term functional outcomes of patients with global brachial plexopathies who have undergone this innovative treatment. METHODS Thirty-four patients with posttraumatic global brachial plexopathies leading to loss of hand function consulted the Center for Advanced Restoration of Extremity Function between 2011 and 2015. Of these patients, 16 (47%) qualified for bionic reconstruction due to lack of treatment alternatives. The treatment algorithm included progressive steps with the intent of improving the biotechnological interface to allow optimal prosthetic hand replacement. In 5 patients, final functional outcome measurements were obtained with the Action Research Arm Test (ARAT), the Southampton Hand Assessment Procedure (SHAP), and the Disabilities of the Arm, Shoulder, and Hand (DASH) questionnaire. RESULTS In all 5 patients who completed functional assessments, partial hand function was restored with bionic reconstruction. ARAT scores improved from 3.4 ± 4.3 to 25.4 ± 12.7 (p = 0.043; mean ± SD) and SHAP scores improved from 10.0 ± 1.6 to 55 ± 19.7 (p = 0.042). DASH scores decreased from 57.9 ± 20.6 to 32 ± 28.6 (p = 0.042), indicating decreased disability. CONCLUSIONS The authors

  4. An adaptive L1/2 sparse regularization algorithm for super-resolution image reconstruction

    Science.gov (United States)

    Xiong, Jiongtao; Liu, Yijun; Ye, Xiangrong

    2017-05-01

    In order to solve the ill-posed problem in super-resolution image reconstruction, this paper proposes an adaptive regularization approach using sparse representation. We build a new L1/2 non-convex optimization model and apply a reweighted L2 norm in the adaptive algorithm. Experimental results show a significant effect in denoising and in preserving edge details; the method outperforms some traditional methods in terms of peak signal-to-noise ratio and structural similarity.

  5. Imaging reconstruction based on improved wavelet denoising combined with parallel-beam filtered back-projection algorithm

    Science.gov (United States)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2012-11-01

    Image reconstruction is a key step in medical imaging (MI), and the performance of the reconstruction algorithm determines the quality and resolution of the reconstructed image. Although other algorithms are in use, the filtered back-projection (FBP) algorithm is still the classical and most commonly used algorithm in clinical MI. In the FBP algorithm, filtering of the original projection data is a key step for overcoming artifacts in the reconstructed image. Since simple use of classical filters, such as the Shepp-Logan (SL) and Ram-Lak (RL) filters, has drawbacks and limitations in practice, especially for projection data polluted by non-stationary random noise, an improved wavelet denoising combined with a parallel-beam FBP algorithm is used to enhance the quality of the reconstructed image in this paper. In the experiments, the reconstruction results were compared between the improved wavelet denoising and other approaches (direct FBP, mean-filter FBP, and median-filter FBP). To determine the optimum reconstruction, different algorithms and different wavelet bases combined with three filters were each tested. Experimental results show that the reconstruction quality of the improved FBP algorithm is better than that of the others. Comparing the results of the different algorithms on two evaluation measures, mean-square error (MSE) and peak signal-to-noise ratio (PSNR), it was found that the improved FBP based on db2 and the Hanning filter at decomposition scale 2 performed best: its MSE was lower and its PSNR higher than the others. Therefore, this improved FBP algorithm has potential value in medical imaging.
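
    Assuming PyWavelets and scikit-image, a condensed version of this pipeline, soft-thresholding the sinogram's wavelet coefficients and then applying parallel-beam FBP, could read as follows; the universal threshold is a common default rather than the paper's tuned setting.

        import numpy as np
        import pywt
        from skimage.transform import iradon

        def denoise_then_fbp(sinogram, theta, wavelet='db2', level=2):
            # sinogram: rows = detector bins, columns = projection angles (degrees in theta)
            coeffs = pywt.wavedec2(sinogram, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745  # noise level, finest detail
            thr = sigma * np.sqrt(2 * np.log(sinogram.size))    # universal threshold
            den = [coeffs[0]] + [tuple(pywt.threshold(c, thr, mode='soft') for c in lvl)
                                 for lvl in coeffs[1:]]
            clean = pywt.waverec2(den, wavelet)[:sinogram.shape[0], :sinogram.shape[1]]
            return iradon(clean, theta=theta, filter_name='hann')  # FBP, Hanning window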

  6. A Survey of Linear Network Coding and Network Error Correction Code Constructions and Algorithms

    Directory of Open Access Journals (Sweden)

    Michele Sanna

    2011-01-01

    Network coding was introduced by Ahlswede et al. in a pioneering work in 2000. This paradigm encompasses coding and retransmission of messages at the intermediate nodes of the network. In contrast with traditional store-and-forward networking, network coding increases the throughput and the robustness of the transmission. Linear network coding is a practical implementation of this new paradigm covered by several research works that include rate characterization, error-protection coding, and construction of codes. Determining the coding characteristics is especially important in providing the premise for an efficient transmission. In this paper, we review the recent breakthroughs in linear network coding for acyclic networks with a survey of the code construction literature. Deterministic construction algorithms and randomized procedures are presented for traditional network coding and for network error-correction coding.

  7. Evaluation of the OSC-TV iterative reconstruction algorithm for cone-beam optical CT.

    Science.gov (United States)

    Matenine, Dmitri; Mascolo-Fortin, Julia; Goussard, Yves; Després, Philippe

    2015-11-01

    The present work evaluates an iterative reconstruction approach, namely, the ordered subsets convex (OSC) algorithm with regularization via total variation (TV) minimization in the field of cone-beam optical computed tomography (optical CT). One of the uses of optical CT is gel-based 3D dosimetry for radiation therapy, where it is employed to map dose distributions in radiosensitive gels. Model-based iterative reconstruction may improve optical CT image quality and contribute to a wider use of optical CT in clinical gel dosimetry. This algorithm was evaluated using experimental data acquired by a cone-beam optical CT system, as well as complementary numerical simulations. A fast GPU implementation of OSC-TV was used to achieve reconstruction times comparable to those of conventional filtered backprojection. Images obtained via OSC-TV were compared with the corresponding filtered backprojections. Spatial resolution and uniformity phantoms were scanned and respective reconstructions were subject to evaluation of the modulation transfer function, image uniformity, and accuracy. The artifacts due to refraction and total signal loss from opaque objects were also studied. The cone-beam optical CT data reconstructions showed that OSC-TV outperforms filtered backprojection in terms of image quality, thanks to a model-based simulation of the photon attenuation process. It was shown to significantly improve the image spatial resolution and reduce image noise. The accuracy of the estimation of linear attenuation coefficients remained similar to that obtained via filtered backprojection. Certain image artifacts due to opaque objects were reduced. Nevertheless, the common artifact due to the gel container walls could not be eliminated. The use of iterative reconstruction improves cone-beam optical CT image quality in many ways. The comparisons between OSC-TV and filtered backprojection presented in this paper demonstrate that OSC-TV can potentially improve the rendering of

  8. Coral Reef environment reconstruction using small drones, new generation photogrammetry algorithms and satellite imagery

    Science.gov (United States)

    Elisa, Casella; Rovere, Alessio; Harris, Daniel; Parravicini, Valeriano

    2016-04-01

    Surveys based on Remotely Piloted Aircraft Systems (RPAS), together with new-generation Structure from Motion (SfM) and Multi-View Stereo (MVS) reconstruction algorithms, have been employed to reconstruct the shallow bathymetry of the inner lagoon of a coral reef in Moorea, French Polynesia. This technique has already been used with a high rate of success on coastal environments (e.g. sandy beaches and rocky shorelines), reaching accuracies of the final Digital Elevation Model on the order of a few centimeters. The application of such techniques to reconstruct shallow underwater environments is, however, still seldom reported. We then used the bathymetric dataset obtained from aerial pictures as ground truth for relative bathymetry obtained from satellite imagery (WorldView-2) of a larger area within the same study site. The first results of our work suggest that RPAS coupled with SfM and MVS algorithms can be used to reconstruct shallow-water environments under favorable weather conditions, and can be employed as ground truth for satellite imagery.

  9. Procedure and algorithm of 3D reconstruction of large-scale ancient architecture

    Science.gov (United States)

    Xia, Song; Zhu, Yixuan; Li, Xin

    2006-02-01

    3D reconstruction plays an essential role in the documentation and protection of ancient architecture. In our work, 3D reconstruction and photogrammetry are mainly used to preserve the data and restore 3D models of large-scale ancient architecture. The whole procedure and an algorithm for space polyhedra are investigated in this paper. First, many conspicuous feature points are laid around the huge granite structure in order to construct a local, temporary 3D control field with sufficiently high precision, and feature points on the granite are obtained by means of photogrammetry. We use the Direct Linear Transform (DLT) to calculate the coordinates of the feature points, and an accuracy evaluation of all feature points is obtained simultaneously. A new generation algorithm for spatial convex polyhedra is presented and realized efficiently in our research, yielding a 3D model of the granite. In order to reduce duplicate storage of the points and edges of the model, model connection and optimization are performed to complete the modeling process. Realistic material can then be attached to the 3D model in 3DMAX. Finally, rendering and animation of the 3D model are completed, giving the reconstructed model of the granite. We use the approach described above to realize the 3D reconstruction of large-scale ancient architecture successfully.

  10. Reconstruction of the irradiated perineum following extended abdomino-perineal excision for cancer: an algorithmic approach.

    Science.gov (United States)

    Saleh, D B; Liddington, M I; Loughenbury, P; Fenn, C W; Baker, R; Burke, D

    2012-11-01

    Our unit has implemented an algorithm for irradiated perineal reconstruction incorporating current evidence and a new technique in line with the advent of laparoscopic tumour excision. Our approach attempts to maintain the benefits patients derive from minimally invasive oncological surgery. Four consecutive patients had uterine retroversion to obturate pelvic dead space and reconstruct the posterior vaginal wall. The age range was 41-84 years, with a mean follow-up of 21 months and a mean in-patient stay of 7 days. All patients had neoadjuvant radiotherapy or chemoradiation for low rectal/anorectal adenocarcinoma. All patients had laparoscopic Extended APER and contiguous posterior vaginal wall excision, with reconstruction by uterine retroversion and z-plasty skin closure. One patient required ultrasound aspiration of a pre-sacral seroma at two months. No patients returned to theatre for major complications. We highlight one minor and no major complications associated with an algorithmic approach incorporating our method of uterine retroversion and z-plasty alongside traditional flap reconstruction methods. Copyright © 2012. Published by Elsevier Ltd.

  11. Free Pulp Transfer for Fingertip Reconstruction-The Algorithm for Complicated Allen Fingertip Defect.

    Science.gov (United States)

    Spyropoulou, Georgia-Alexandra; Shih, Hsiang-Shun; Jeng, Seng-Feng

    2015-12-01

    We present a review of all our cases of free toe pulp transfer and an algorithm for the application of free pulp transfer in complicated Allen fingertip defects. Seventeen patients underwent free toe pulp transfer for fingertip reconstruction by the senior author. Twelve cases were Allen type II with an oblique pulp defect, 4 were Allen type III, and 1 patient had 2 fingertip injuries, both classified as type IV. According to the algorithm presented, for type III defects where the germinal matrix is still preserved, we use free pulp transfer and nail bed graft to preserve nail growth instead of toe-to-hand transfer. For type IV injuries with multiple defects, a combination of a web flap from both the big toe and the second toe allows 1-stage reconstruction. All pulp flaps survived completely. Static 2-point discrimination ranged from 6 to 15 mm (mean: 10.5 mm). No patient presented dysesthesia, hyperesthesia, pain at rest, or cold intolerance. The donor site did not present any problems apart from partial skin graft loss in 3 cases. We classified the defects and tailored their reconstruction according to the Allen classification. Free toe pulp transfer is a "like with like" reconstruction that provides sensate, glabrous skin with good color and texture match for fingertip trauma, and minimal donor site morbidity compared with traditional toe-to-hand transfer.

  12. Consensus algorithm in smart grid and communication networks

    Science.gov (United States)

    Alfagee, Husain Abdulaziz

    Consensus theory attracts more and more research from different areas of interest seeking to apply its techniques to solve technical problems in ways that are faster, more reliable, and even more precise than before. Power system networks are one field in which consensus theory is employed extensively; the use of the consensus algorithm to solve the Economic Dispatch and Load Restoration Problems is a good example. Instead of a conventional central controller, some researchers have explored algorithms that solve the above-mentioned problems in a distributed manner using the consensus algorithm, based on calculation (i.e., non-estimation) methods for updating the information consensus matrix. Starting from this point, we have implemented a new, advanced consensus algorithm based on adaptive estimation techniques, such as the Gradient Algorithm and the Recursive Least Squares Algorithm, to solve the same problems. This work was tested on different case studies that had formerly been explored, as seen in references 5, 7, and 18: three and five generators (agents) with different topologies for the Economic Dispatch Problem, and the IEEE 16-Bus power system for the Load Restoration Problem. In all the cases we studied, the results met our expectations with high accuracy and completely matched the results of the previous researchers. There is little question that this research demonstrates the capability and dependability of using the consensus algorithm, based on estimation methods such as the Gradient Algorithm and the Recursive Least Squares Algorithm, to solve such power problems.
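
    For reference, the plain (non-adaptive) average-consensus iteration that such schemes build on fits in a few lines; the Laplacian step size and the interpretation of the state vector (e.g. generator incremental costs) are illustrative assumptions.

        import numpy as np

        def average_consensus(x0, A, eps=0.1, n_iter=200):
            # A: 0/1 symmetric adjacency matrix of the communication graph;
            # eps must be below 1/max_degree for convergence to the average of x0
            deg = A.sum(axis=1)
            W = np.eye(len(x0)) - eps * (np.diag(deg) - A)   # W = I - eps * Laplacian
            x = np.asarray(x0, dtype=float)
            for _ in range(n_iter):
                x = W @ x                             # each agent mixes with neighbours
            return x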

  13. BoostGAPFILL: improving the fidelity of metabolic network reconstructions through integrated constraint and pattern-based methods.

    Science.gov (United States)

    Oyetunde, Tolutola; Zhang, Muhan; Chen, Yixin; Tang, Yinjie; Lo, Cynthia

    2017-02-15

    Metabolic network reconstructions are often incomplete. Constraint-based and pattern-based methodologies have been used for automated gap filling of these networks, each with its own strengths and weaknesses. Moreover, since validation of the hypotheses made by gap-filling tools requires experimentation, it is challenging to benchmark performance and make improvements other than those related to speed and scalability. We present BoostGAPFILL, an open-source tool that leverages both constraint-based and machine-learning methodologies for hypothesis generation in gap filling and metabolic model refinement. BoostGAPFILL uses metabolite patterns in the incomplete network, captured using a matrix factorization formulation, to constrain the set of reactions used to fill gaps in a metabolic network. We formulate a testing framework based on the available metabolic reconstructions and demonstrate the superiority of BoostGAPFILL to state-of-the-art gap-filling tools. We randomly delete a number of reactions from a metabolic network and rate the different algorithms on their ability both to predict the deleted reactions from a universal set and to fill gaps. For most metabolic network reconstructions tested, BoostGAPFILL shows above 60% precision and recall, which is more than twice that of other existing tools. MATLAB open-source implementation (https://github.com/Tolutola/BoostGAPFILL). Contact: toyetunde@wustl.edu or muhan@wustl.edu. Supplementary data are available at Bioinformatics online.

  14. A framelet-based iterative maximum-likelihood reconstruction algorithm for spectral CT

    Science.gov (United States)

    Wang, Yingmei; Wang, Ge; Mao, Shuwei; Cong, Wenxiang; Ji, Zhilong; Cai, Jian-Feng; Ye, Yangbo

    2016-11-01

    Standard computed tomography (CT) cannot reproduce spectral information of an object. Hardware solutions include dual-energy CT, which scans the object twice at different x-ray energy levels, and energy-discriminating detectors, which can separate lower and higher energy levels from a single x-ray scan. In this paper, we propose a software solution and give an iterative algorithm that reconstructs an image with spectral information from just one scan with a standard energy-integrating detector. The spectral information obtained can be used to produce color CT images, spectral curves of the attenuation coefficient μ(r,E) at points inside the object, and photoelectric images, which are all valuable imaging tools in cancer diagnosis. Our software solution requires no change to the hardware of a CT machine. With the Shepp-Logan phantom, we have found that although the photoelectric and Compton components were not perfectly reconstructed, their composite effect was very accurately reconstructed as compared to the ground truth and the dual-energy CT counterpart. This means that our proposed method has an intrinsic benefit in beam hardening correction and metal artifact reduction. The algorithm is based on a nonlinear polychromatic acquisition model for x-ray CT. The key technique is a sparse representation of iterations in a framelet system. Convergence of the algorithm is studied. This is believed to be the first application of framelet imaging tools to a nonlinear inverse problem.

  15. Fast hybrid CPU- and GPU-based CT reconstruction algorithm using air skipping technique.

    Science.gov (United States)

    Lee, Byeonghun; Lee, Ho; Shin, Yeong Gil

    2010-01-01

    This paper presents a fast hybrid CPU- and GPU-based CT reconstruction algorithm that reduces the amount of back-projection computation using air skipping with polygon clipping. The algorithm easily and rapidly selects air areas, which have significantly higher contrast in each projection image, by applying the K-means clustering method on the CPU, and then generates boundary tables for verifying the valid region using the segmented air areas. Based on these boundary tables for each projection image, a clipped polygon that indicates the active region for GPU back-projection is determined on each volume slice. This polygon clipping makes it possible to back-project a smaller number of voxels, which leads to a faster GPU-based reconstruction method. The approach has been applied to a clinical data set and Shepp-Logan phantom data sets with various ratios of air region for quantitative and qualitative comparison and analysis of our method against conventional GPU-based reconstruction methods. The algorithm has been shown to halve computation time without losing any diagnostic information, compared to conventional GPU-based approaches.

  16. Improved Wallis Dodging Algorithm for Large-Scale Super-Resolution Reconstruction Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Chong Fan

    2017-03-01

    A sub-block algorithm is usually applied in the super-resolution (SR) reconstruction of images because of limitations in computer memory. However, the sub-block SR images can hardly achieve seamless image mosaicking because of the uneven distribution of brightness and contrast among these sub-blocks. An effectively improved weighted Wallis dodging algorithm is proposed, aimed at the characteristic that SR reconstructed images are gray images with the same size and overlapping region. This algorithm can achieve consistency of image brightness and contrast. Meanwhile, a weighted adjustment sequence is presented to avoid the spatial propagation and accumulation of errors and the loss of image information caused by excessive computation. A seam line elimination method can share the partial dislocation along the seam line across the entire overlapping region with a smooth transition effect. Subsequently, the improved method is employed to remove the uneven illumination of 900 SR reconstructed images of ZY-3. Then, the overlapping image mosaic method is adopted to accomplish a seamless image mosaic based on the optimal seam line.

  17. Improved Wallis Dodging Algorithm for Large-Scale Super-Resolution Reconstruction Remote Sensing Images.

    Science.gov (United States)

    Fan, Chong; Chen, Xushuai; Zhong, Lei; Zhou, Min; Shi, Yun; Duan, Yulin

    2017-03-18

    A sub-block algorithm is usually applied in the super-resolution (SR) reconstruction of images because of limitations in computer memory. However, the sub-block SR images can hardly achieve seamless image mosaicking because of the uneven distribution of brightness and contrast among these sub-blocks. An effectively improved weighted Wallis dodging algorithm is proposed, aimed at the characteristic that SR reconstructed images are gray images with the same size and overlapping region. This algorithm can achieve consistency of image brightness and contrast. Meanwhile, a weighted adjustment sequence is presented to avoid the spatial propagation and accumulation of errors and the loss of image information caused by excessive computation. A seam line elimination method can share the partial dislocation along the seam line across the entire overlapping region with a smooth transition effect. Subsequently, the improved method is employed to remove the uneven illumination of 900 SR reconstructed images of ZY-3. Then, the overlapping image mosaic method is adopted to accomplish a seamless image mosaic based on the optimal seam line.
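
    The Wallis transform at the heart of dodging pulls each sub-block's local mean and standard deviation toward reference values. A condensed sketch (b and c are the usual Wallis brightness and contrast weights; the names and default values here are assumptions):

        import numpy as np

        def wallis_adjust(block, ref_mean, ref_std, b=1.0, c=1.0):
            # map the block's local statistics toward the reference statistics
            m, s = block.mean(), block.std()
            gain = c * ref_std / (c * s + (1 - c) * ref_std + 1e-12)
            return gain * (block - m) + b * ref_mean + (1 - b) * m

    With b = c = 1 the block is fully standardized to the reference mean and standard deviation; smaller values blend the original statistics back in, which is what a weighted adjustment sequence modulates across sub-blocks.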

  18. Effect of filters and reconstruction algorithms on I-124 PET in Siemens Inveon PET scanner

    Science.gov (United States)

    Ram Yu, A.; Kim, Jin Su

    2015-10-01

    Purpose: To assess the effects of filtering and reconstruction on Siemens I-124 PET data. Methods: A Siemens Inveon PET scanner was used. The spatial resolution of I-124 was measured out to a transverse offset of 50 mm from the center. FBP, 2D ordered-subset expectation maximization (OSEM2D), the 3D re-projection algorithm (3DRP), and maximum a posteriori (MAP) methods were tested. Non-uniformity (NU), recovery coefficient (RC), and spillover ratio (SOR) parameterized image quality. Mini Deluxe phantom data for I-124 were also assessed. Results: Volumetric resolution was 7.3 mm³ at the transverse FOV center when the FBP reconstruction algorithm with a ramp filter was used. MAP yielded minimal NU with β = 1.5. OSEM2D yielded maximal RC. SOR was below 4% for FBP with ramp, Hamming, Hanning, or Shepp-Logan filters. Based on the Mini Deluxe phantom results, FBP with Hanning or Parzen filters, or 3DRP with a Hanning filter, yielded feasible I-124 PET data. Conclusions: Reconstruction algorithms and filters were compared. FBP with Hanning or Parzen filters, or 3DRP with a Hanning filter, yielded feasible data for quantifying I-124 PET.

  19. Automated reconstruction algorithm for identification of 3D architectures of cribriform ductal carcinoma in situ.

    Directory of Open Access Journals (Sweden)

    Kerri-Ann Norton

    Ductal carcinoma in situ (DCIS) is a pre-invasive carcinoma of the breast that exhibits several distinct morphologies, but the link between morphology and patient outcome is not clear. We hypothesize that different mechanisms of growth may still result in similar 2D morphologies, which may look different in 3D. To elucidate the connection between growth and 3D morphology, we reconstruct the 3D architecture of cribriform DCIS from resected patient material. We produce a fully automated algorithm that aligns, segments, and reconstructs 3D architectures from microscopy images of 2D serial sections from human specimens. The alignment algorithm is based on normalized cross-correlation; the segmentation algorithm uses histogram equalization, Otsu's thresholding, and morphology techniques to segment the duct and cribra. The reconstruction method combines these images in 3D. We show that two distinct 3D architectures are indeed found in samples whose 2D histological sections are similarly identified as cribriform DCIS. These differences in architecture support the hypothesis that luminal spaces may form due to different mechanisms, either isolated cell death or merging fronds, leading to the different architectures. We find that out of 15 samples, 6 were found to have 'bubble-like' cribra, 6 were found to have 'tube-like' cribra, and 3 were 'unknown.' We propose that the 3D architectures found, 'bubbles' and 'tubes', account for some of the heterogeneity of the disease and may be prognostic indicators of different patient outcomes.
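
    Two stages of this pipeline translate naturally into short routines: a translation estimate between consecutive sections via FFT cross-correlation of mean-subtracted images (a simplification of full normalized cross-correlation) and an Otsu-based segmentation. A sketch assuming scikit-image:

        import numpy as np
        from skimage.filters import threshold_otsu

        def align_shift(fixed, moving):
            # peak of the cross-correlation surface gives the (dy, dx) translation
            f = np.fft.fft2(fixed - fixed.mean())
            g = np.fft.fft2(moving - moving.mean())
            xc = np.real(np.fft.ifft2(f * np.conj(g)))
            dy, dx = np.unravel_index(np.argmax(xc), xc.shape)
            h, w = fixed.shape
            return (dy + h // 2) % h - h // 2, (dx + w // 2) % w - w // 2

        def segment(section):
            # stand-in for the histogram equalization + Otsu + morphology stage
            return section > threshold_otsu(section)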

  20. Properties of healthcare teaming networks as a function of network construction algorithms

    Science.gov (United States)

    Trayhan, Melissa; Farooq, Samir A.; Fucile, Christopher; Ghoshal, Gourab; White, Robert J.; Quill, Caroline M.; Rosenberg, Alexander; Barbosa, Hugo Serrano; Bush, Kristen; Chafi, Hassan; Boudreau, Timothy

    2017-01-01

    Network models of healthcare systems can be used to examine how providers collaborate, communicate, refer patients to each other, and to map how patients traverse the network of providers. Most healthcare service network models have been constructed from patient claims data, using billing claims to link a patient with a specific provider in time. The data sets can be quite large (10⁶–10⁸ individual claims per year), making standard methods for network construction computationally challenging and thus requiring the use of alternate construction algorithms. While these alternate methods have seen increasing use in generating healthcare networks, there is little to no literature comparing the differences in the structural properties of the generated networks, which as we demonstrate, can be dramatically different. To address this issue, we compared the properties of healthcare networks constructed using different algorithms from 2013 Medicare Part B outpatient claims data. Three different algorithms were compared: binning, sliding frame, and trace-route. Unipartite networks linking either providers or healthcare organizations by shared patients were built using each method. We find that each algorithm produced networks with substantially different topological properties, as reflected by numbers of edges, network density, assortativity, clustering coefficients and other structural measures. Provider networks adhered to a power law, while organization networks were best fit by a power law with exponential cutoff. Censoring networks to exclude edges with less than 11 shared patients, a common de-identification practice for healthcare network data, markedly reduced edge numbers and network density, and greatly altered measures of vertex prominence such as the betweenness centrality. Data analysis identified patterns in the distance patients travel between network providers, and a striking set of teaming relationships between providers in the Northeast United States and
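
    As an illustration of the simplest of the three strategies, a binning-style construction links two providers whenever they bill the same patient within the same time bin; the claims layout below is a hypothetical simplification.

        from collections import defaultdict
        from itertools import combinations

        def binning_network(claims, bin_days=30):
            # claims: iterable of (patient_id, provider_id, day_of_year) tuples
            seen = defaultdict(set)                    # (patient, bin) -> providers
            for patient, provider, day in claims:
                seen[(patient, day // bin_days)].add(provider)
            edges = defaultdict(int)                   # provider pair -> co-occurrence count
            for providers in seen.values():
                for a, b in combinations(sorted(providers), 2):
                    edges[(a, b)] += 1
            return edges   # censoring: {e: c for e, c in edges.items() if c >= 11}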

  1. Vertex Reconstructing Neural Network at the ZEUS Central Tracking Detector

    CERN Document Server

    Dror, G; Dror, Gideon; Etzion, Erez

    2001-01-01

    An unconventional solution for finding the location of event creation is presented. It is based on two feed-forward neural networks with fixed architecture, whose parameters are chosen so as to reach a high accuracy. The interaction point location is a parameter that can be used to select events of interest from the very high rate of events created at the current experiments in High Energy Physics. The system suggested here is tested on simulated data sets of the ZEUS Central Tracking Detector, and is shown to perform better than conventional algorithms.

  2. GPS-free localization algorithm for wireless sensor networks.

    Science.gov (United States)

    Wang, Lei; Xu, Qingzheng

    2010-01-01

    Localization is one of the most fundamental problems in wireless sensor networks, since the locations of the sensor nodes are critical to both network operations and most application-level tasks. A GPS-free localization scheme for wireless sensor networks is presented in this paper. First, we develop a standardized clustering-based approach for local coordinate system formation, wherein a multiplication factor is introduced to regulate the number of master and slave nodes and the degree of connectivity among master nodes. Second, using homogeneous coordinates, we derive a transformation matrix between two Cartesian coordinate systems to efficiently merge them into a global coordinate system and effectively overcome the flip ambiguity problem. The algorithm operates asynchronously without a centralized controller and does not require that the locations of the sensors be known a priori. A set of parameter-setting guidelines for the proposed algorithm is derived based on a probability model, and the energy requirements are also investigated. A simulation analysis on a specific numerical example is conducted to validate the mathematical analytical results. We also compare the performance of the proposed algorithm under a variety of multiplication factor, node density and node communication radius scenarios. Experiments show that our algorithm outperforms existing mechanisms in terms of accuracy and convergence time.
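
    The coordinate-merging step lends itself to a short sketch: given the same anchor nodes expressed in two local frames, a least-squares rigid transform in homogeneous coordinates aligns them, with a determinant check guarding against reflections (the flip ambiguity mentioned above). This is a generic Procrustes-style estimate under assumed 2D coordinates, not the paper's exact derivation.

```python
import numpy as np

def rigid_transform_2d(src, dst):
    """Least-squares rotation + translation mapping src points onto dst.
    src, dst: (N, 2) arrays of the same anchor nodes in two local frames.
    Returns a 3x3 homogeneous transformation matrix."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # reject reflections (flip ambiguity)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    T = np.eye(3)
    T[:2, :2], T[:2, 2] = R, t
    return T

# Merging points pts from one frame into the other:
# merged = (T @ np.c_[pts, np.ones(len(pts))].T).T[:, :2]
```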

  3. Distributed interference alignment iterative algorithms in symmetric wireless network

    Directory of Open Access Journals (Sweden)

    YANG Jingwen

    2015-02-01

    Full Text Available Interference alignment is a novel interference management technique that has attracted worldwide attention. Interference alignment overlaps interference in the same signal subspace at the receiving terminal by precoding, so as to thoroughly eliminate the influence of interference on the expected signals, thus allowing the desired user to achieve the maximum degrees of freedom. In this paper we study three typical algorithms for realizing interference alignment: minimizing the leakage interference, maximizing the Signal to Interference plus Noise Ratio (SINR), and minimizing the mean square error (MSE). All of these algorithms utilize the reciprocity of the wireless network, iterating the precoders between the original network and the reverse network so as to achieve interference alignment. We use the uplink transmit rate to analyze the performance of these three algorithms. Numerical simulation results show the advantages of these algorithms, which lays a foundation for further study. The feasibility and future of interference alignment are also discussed.

  4. From Tls Point Clouds to 3d Models of Trees: a Comparison of Existing Algorithms for 3d Tree Reconstruction

    Science.gov (United States)

    Bournez, E.; Landes, T.; Saudreau, M.; Kastendeuch, P.; Najjar, G.

    2017-02-01

    3D models of tree geometry are important for numerous studies, such as urban planning or agricultural studies. In climatology, tree models can be necessary for simulating the cooling effect of trees by estimating their evapotranspiration. The literature shows that the more accurate the 3D structure of a tree is, the more accurate microclimate models are. This is the reason why, since 2013, we have been developing an algorithm for the reconstruction of trees from terrestrial laser scanner (TLS) data, which we call TreeArchitecture. Meanwhile, new promising algorithms dedicated to tree reconstruction have emerged in the literature. In this paper, we assess the capacity of our algorithm and of two others (PlantScan3D and SimpleTree) to reconstruct the 3D structure of trees. The aim of this reconstruction is to be able to characterize the geometric complexity of trees with different heights, sizes and shapes of branches. Based on a specific surveying workflow with a TLS, we acquired dense point clouds of six different urban trees with specific architectures, before reconstructing them with each algorithm. Finally, qualitative and quantitative assessments of the models are performed using reference tree reconstructions and field measurements. Based on this assessment, the advantages and the limits of every reconstruction algorithm are highlighted. Nevertheless, very satisfactory results can be reached for 3D reconstructions of tree topology as well as of tree volume.

  5. Content-Based Multi-Channel Network Coding Algorithm in the Millimeter-Wave Sensor Network.

    Science.gov (United States)

    Lin, Kai; Wang, Di; Hu, Long

    2016-07-01

    With the development of wireless technology, the widespread use of 5G is already an irreversible trend, and millimeter-wave sensor networks are becoming more and more common. However, due to the high degree of complexity and bandwidth bottlenecks, the millimeter-wave sensor network still faces numerous problems. In this paper, we propose a novel content-based multi-channel network coding algorithm, which uses the functions of data fusion, multi-channel transmission and network coding to improve data transmission; the algorithm is referred to as content-based multi-channel network coding (CMNC). The CMNC algorithm provides a fusion-driven model based on the Dempster-Shafer (D-S) evidence theory to classify the sensor nodes into different classes according to the data content. Using the result of the classification, the CMNC algorithm also provides a channel assignment strategy and uses network coding to further improve the quality of data transmission in the millimeter-wave sensor network. Extensive simulations are carried out and the results are compared with other methods. Our simulation results show that the proposed CMNC algorithm can effectively improve the quality of data transmission and has better performance than the compared methods.
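
    The Dempster-Shafer combination at the heart of the fusion-driven model can be made concrete in a few lines. The sketch below implements the generic Dempster rule over mass functions; the two-class frame of discernment and the mass values are invented for illustration.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions whose focal elements are frozensets."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: masses cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two sensors' evidence over illustrative data-content classes {'hi', 'lo'}:
m1 = {frozenset({'hi'}): 0.7, frozenset({'hi', 'lo'}): 0.3}
m2 = {frozenset({'hi'}): 0.6, frozenset({'lo'}): 0.1, frozenset({'hi', 'lo'}): 0.3}
print(dempster_combine(m1, m2))
```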

  7. New Scheduling Algorithms for Agile All-Photonic Networks

    Science.gov (United States)

    Mehri, Mohammad Saleh; Ghaffarpour Rahbar, Akbar

    2017-12-01

    An optical overlaid star network is a class of agile all-photonic networks that consists of one or more core node(s) at the center of the star network and a number of edge nodes around the core node. In this architecture, a core node may use a scheduling algorithm for the transmission of traffic through the network. A core node is responsible for scheduling the optical packets that arrive from edge nodes and switching them toward their destinations. Nowadays, most edge nodes use a virtual output queue (VOQ) architecture for buffering client packets to achieve high throughput. This paper presents two efficient scheduling algorithms called discretionary iterative matching (DIM) and adaptive DIM. These schedulers find a maximum matching in a small number of iterations, provide high throughput, and incur low delay. The number of arbiters in these schedulers and the number of messages exchanged between the inputs and outputs of a core node are reduced. We show that DIM and adaptive DIM can provide better performance in comparison with iterative round-robin matching with SLIP (iSLIP), whose name refers to the round-robin pointers sliding a short distance to select one of the requested connections according to the scheduling algorithm.
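
    For context, the iSLIP baseline mentioned above reduces to a request-grant-accept round with round-robin pointers that advance only on acceptance. The sketch below is a textbook illustration of that baseline with assumed data structures; it is not the DIM scheduler proposed in the paper.

```python
def islip_round(requests, grant_ptr, accept_ptr):
    """One request-grant-accept round of round-robin (iSLIP-style) matching.
    requests[i]: set of outputs input i has queued cells for.
    grant_ptr[o], accept_ptr[i]: round-robin pointers, advanced on acceptance."""
    n = len(requests)
    grants = {}                                    # output -> granted input
    for o in range(n):
        reqs = [i for i in range(n) if o in requests[i]]
        if reqs:                                   # grant the requester at/after the pointer
            grants[o] = min(reqs, key=lambda i: (i - grant_ptr[o]) % n)
    accepted = {}                                  # input -> accepted output
    for i in range(n):
        offers = [o for o, g in grants.items() if g == i]
        if offers:                                 # accept the grant at/after the pointer
            o = min(offers, key=lambda k: (k - accept_ptr[i]) % n)
            accepted[i] = o
            grant_ptr[o] = (i + 1) % n             # pointers slip past the match
            accept_ptr[i] = (o + 1) % n
    return accepted

# Example: 3x3 switch, every input requesting outputs 0 and 1.
# print(islip_round([{0, 1}] * 3, [0, 0, 0], [0, 0, 0]))
```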

  8. Sparse Coding Algorithm with Negentropy and Weighted ℓ1-Norm for Signal Reconstruction

    Directory of Open Access Journals (Sweden)

    Yingxin Zhao

    2017-11-01

    Full Text Available Compressive sensing theory has attracted widespread attention in recent years and sparse signal reconstruction has been widely used in signal processing and communication. This paper addresses the problem of sparse signal recovery, especially with non-Gaussian noise. The main contribution of this paper is the proposal of an algorithm in which the negentropy and reweighted schemes represent the core of the approach to the solution of the problem. The signal reconstruction problem is formalized as a constrained minimization problem, where the objective function is the sum of a term measuring the statistical characteristics of the error, the negentropy, and a sparse regularization term, the ℓp-norm, for 0 < p < 1. The ℓp-norm, however, leads to a non-convex optimization problem which is difficult to solve efficiently. Herein we treat the ℓp-norm as a series of weighted ℓ1-norms so that the sub-problems become convex. We propose an optimized algorithm based on forward-backward splitting. The algorithm is fast and succeeds in exactly recovering sparse signals with Gaussian and non-Gaussian noise. Several numerical experiments and comparisons demonstrate the superiority of the proposed algorithm.
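
    The reweighting idea is easy to show in isolation. The sketch below approximates the ℓp penalty by a series of weighted ℓ1 problems, each solved with a plain forward-backward (ISTA) loop over a least-squares data term; the paper's negentropy-based data term is omitted, and all parameter values are illustrative.

```python
import numpy as np

def soft(x, t):
    """Elementwise soft-thresholding (proximal map of the weighted l1 term)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def reweighted_l1(A, y, lam=0.1, p=0.5, outer=5, inner=100, eps=1e-3):
    """Approximate lp (0 < p < 1) recovery as a series of weighted l1 problems,
    each solved by forward-backward splitting (ISTA) on a least-squares term."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
    for _ in range(outer):
        w = p / (np.abs(x) + eps) ** (1 - p)  # weights from the previous iterate
        for _ in range(inner):
            grad = A.T @ (A @ x - y)             # forward (gradient) step
            x = soft(x - grad / L, lam * w / L)  # backward (proximal) step
    return x
```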

  9. Reconstruction of the metabolic network of Pseudomonas aeruginosa to interrogate virulence factor synthesis

    DEFF Research Database (Denmark)

    Bartell, Jennifer; Blazier, Anna S; Yen, Phillip

    2017-01-01

    to metabolism. We evaluate the complex interrelationships between growth and virulence-linked pathways using a genome-scale metabolic network reconstruction of Pseudomonas aeruginosa strain PA14 and an updated, expanded reconstruction of P. aeruginosa strain PAO1. The PA14 reconstruction accounts...

  10. An algorithmic approach to perineal reconstruction after cancer resection--experience from two international centers.

    Science.gov (United States)

    John, Hannah Eliza; Jessop, Zita Maria; Di Candia, Michele; Simcock, Jeremy; Durrani, Amer J; Malata, Charles M

    2013-07-01

    This paper aims to simplify the approach to reconstruction of the perineum after resection of malignancies of the anal canal, lower rectum, vulva, and vagina. The data were collected from 2 centers, namely Addenbrooke's Hospital, University of Cambridge, United Kingdom, and Christchurch Hospital, University of Otago, New Zealand. All patients who underwent perineal reconstruction from 1997 to 2009 at Christchurch Hospital (13 years) and from 2001 to 2009 at Addenbrooke's Hospital (9 years) were included. The diagnosis (indication), primary surgery, reconstructive surgery, complications, tumor outcomes (recurrence and survival), and follow-up were entered into a database (Microsoft Excel; Redmond, Wash). The incidence of previous radiotherapy, the requirement for adjuvant radiotherapy, and the length of inpatient stay were also recorded. Forty-six patients were identified for this study: 13 in New Zealand and 33 in Cambridge. Indications for perineal reconstruction included resection of anal and rectal malignancies (24), vulval and vaginal malignancy (19), perineal sarcoma (1), and perineal squamous cell carcinoma arising in an enterocutaneous fistula (Table 1). The reconstructive strategies adopted included rectus abdominis myocutaneous flaps (26), gluteal fold flaps (9), gracilis V-Y or advancement flaps (7), and others (4): gluteal rotation flaps (1), local flaps (2), and a free latissimus dorsi flap (1). Although various surgeons performed the reconstructive surgeries at the 2 centers, the essential approach remained the same. Smaller defects were best treated by local flaps, whereas the rectus abdominis flap remained the standard option for larger defects that additionally required closure of dead space. On the basis of our 2-center experience, we propose a simple algorithm to facilitate the planning of reconstructive surgery for the perineum.

  11. Performance of 3DOSEM and MAP algorithms for reconstructing low count SPECT acquisitions

    Energy Technology Data Exchange (ETDEWEB)

    Grootjans, Willem [Radboud Univ. Medical Center, Nijmegen (Netherlands). Dept. of Radiology and Nuclear Medicine; Leiden Univ. Medical Center (Netherlands). Dept. of Radiology; Meeuwis, Antoi P.W.; Gotthardt, Martin; Visser, Eric P. [Radboud Univ. Medical Center, Nijmegen (Netherlands). Dept. of Radiology and Nuclear Medicine; Slump, Cornelis H. [Univ. Twente, Enschede (Netherlands). MIRA Inst. for Biomedical Technology and Technical Medicine; Geus-Oei, Lioe-Fee de [Radboud Univ. Medical Center, Nijmegen (Netherlands). Dept. of Radiology and Nuclear Medicine; Univ. Twente, Enschede (Netherlands). MIRA Inst. for Biomedical Technology and Technical Medicine; Leiden Univ. Medical Center (Netherlands). Dept. of Radiology

    2016-07-01

    Low count single photon emission computed tomography (SPECT) is becoming more important in view of whole body SPECT and reduction of radiation dose. In this study, we investigated the performance of several 3D ordered subset expectation maximization (3DOSEM) and maximum a posteriori (MAP) algorithms for reconstructing low count SPECT images. Phantom experiments were conducted using the National Electrical Manufacturers Association (NEMA) NU2 image quality (IQ) phantom. The background compartment of the phantom was filled with varying concentrations of pertechnetate and indium chloride, simulating various clinical imaging conditions. Images were acquired using a hybrid SPECT/CT scanner and reconstructed with 3DOSEM and MAP reconstruction algorithms implemented in Siemens Syngo MI.SPECT (Flash3D) and Hermes Hybrid Recon Oncology (Hybrid Recon 3DOSEM and MAP). Image analysis was performed by calculating the contrast recovery coefficient (CRC), percentage background variability (N%), and contrast-to-noise ratio (CNR), defined as the ratio between CRC and N%. Furthermore, image distortion was characterized by calculating the aspect ratio (AR) of ellipses fitted to the hot spheres. Additionally, the performance of these algorithms in reconstructing clinical images was investigated. Images reconstructed with 3DOSEM algorithms demonstrated superior image quality in terms of contrast and resolution recovery when compared to images reconstructed with filtered-back-projection (FBP), OSEM, and 2DOSEM. However, the occurrence of correlated noise patterns and image distortions significantly deteriorated the quality of 3DOSEM reconstructed images. The mean AR for the 37, 28, 22, and 17 mm spheres was 1.3, 1.3, 1.6, and 1.7, respectively. The mean N% increased in high and low count Flash3D and Hybrid Recon 3DOSEM from 5.9% and 4.0% to 11.1% and 9.0%, respectively. Similarly, the mean CNR decreased in high and low count Flash3D and Hybrid Recon 3DOSEM from 8.7 and 8.8 to 3.6 and 4
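
    Since the record above compares several EM-type reconstructions, the generic MLEM update shared by OSEM and MAP variants is sketched below. The dense system matrix and iteration count are illustrative simplifications, not the vendor implementations being benchmarked.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Maximum-likelihood expectation maximization for y ~ Poisson(A @ x).
    A: (n_bins, n_voxels) system matrix; y: measured counts per bin."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                      # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                          # forward projection
        ratio = np.divide(y, proj, out=np.zeros_like(proj), where=proj > 0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)  # multiplicative update
    return x
```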

  12. A Fast local Reconstruction algorithm by selective backprojection for Low-Dose in Dental Computed Tomography

    CERN Document Server

    Bin, Yan; Yu, Han; Feng, Zhang; Chao, Wang Xian; Lei, Li

    2013-01-01

    The high radiation dose in computed tomography (CT) scans increases the lifetime risk of cancer, which has become a major clinical concern. The backprojection-filtration (BPF) algorithm can reduce the radiation dose by reconstructing images from truncated data in a short scan. In dental CT, it can reduce the radiation dose to the teeth by using projections acquired in a short scan, and can avoid irradiating other parts by using truncated projections. However, the limit of integration for backprojection varies per PI-line, resulting in low calculation efficiency and poor parallel performance. Recently, a tent BPF (T-BPF) has been proposed to improve calculation efficiency by rearranging projections. However, it includes a memory-consuming data rebinning process. Accordingly, the chose-BPF (C-BPF) algorithm is proposed in this paper. In this algorithm, the derivative of the projection is backprojected to the points whose x coordinate is less than that of the source focal spot to obtain the differentiated backprojection...

  13. Validation of simultaneous reverse optimization reconstruction algorithm in a practical circular subaperture stitching interferometer

    Science.gov (United States)

    Zhang, Lei; Li, Dong; Liu, Yu; Liu, Jingxiao; Li, Jingsong; Yu, Benli

    2017-11-01

    We demonstrate the validity of the simultaneous reverse optimization reconstruction (SROR) algorithm in circular subaperture stitching interferometry (CSSI); the algorithm was previously proposed for non-null aspheric annular subaperture stitching interferometry (ASSI). The merits of the modified SROR algorithm in CSSI, such as automatic retrace error correction, no need for overlap, and even tolerance of missed coverage, are analyzed in detail in simulations and experiments. Meanwhile, a practical CSSI system is proposed for this demonstration. An optical wedge is employed to deflect the incident beam for subaperture scanning by its rotation and shift, instead of a six-axis motion-control system. The reference path can also provide a variable Zernike defocus for each subaperture test, which decreases the fringe density. Experiments validating the SROR algorithm in this CSSI system were implemented, with cross validation by testing a paraboloidal mirror, a flat mirror, and an astigmatism mirror. This is an indispensable supplement to the application of SROR in general subaperture stitching interferometry.

  14. A reconstruction algorithm for electrical impedance tomography based on sparsity regularization

    KAUST Repository

    Jin, Bangti

    2011-08-24

    This paper develops a novel sparse reconstruction algorithm for the electrical impedance tomography problem of determining a conductivity parameter from boundary measurements. The sparsity of the 'inhomogeneity' with respect to a certain basis is a priori assumed. The proposed approach is motivated by a Tikhonov functional incorporating a sparsity-promoting ℓ1-penalty term, and it allows us to obtain quantitative results when the assumption is valid. A novel iterative algorithm of soft shrinkage type is proposed. Numerical results for several two-dimensional problems with both single and multiple convex and nonconvex inclusions are presented to illustrate the features of the proposed algorithm and are compared with a conventional approach based on smoothness regularization. © 2011 John Wiley & Sons, Ltd.

  15. Transmission network expansion planning based on hybridization model of neural networks and harmony search algorithm

    Directory of Open Access Journals (Sweden)

    Mohammad Taghi Ameli

    2012-01-01

    Full Text Available Transmission Network Expansion Planning (TNEP) is a basic part of power network planning that determines where, when and how many new transmission lines should be added to the network. Thus, TNEP is an optimization problem in which the expansion purposes are optimized. Artificial Intelligence (AI) tools such as the Genetic Algorithm (GA), Simulated Annealing (SA), Tabu Search (TS) and Artificial Neural Networks (ANNs) are methods used for solving the TNEP problem. Today, by using hybridization models of AI tools, we can solve the TNEP problem for large-scale systems, which shows the effectiveness of utilizing such models. In this paper, a new approach to the hybridization model of Probabilistic Neural Networks (PNNs) and the Harmony Search Algorithm (HSA) was used to solve the TNEP problem. Finally, by considering the uncertain role of the load based on a scenario technique, this proposed model was tested on Garver's 6-bus network.

  16. Comparing algorithms that reconstruct cell lineage trees utilizing information on microsatellite mutations.

    Science.gov (United States)

    Chapal-Ilani, Noa; Maruvka, Yosef E; Spiro, Adam; Reizel, Yitzhak; Adar, Rivka; Shlush, Liran I; Shapiro, Ehud

    2013-01-01

    An organism's cells proliferate and die to build, maintain, renew and repair it. The cellular history of an organism up to any point in time can be captured by a cell lineage tree in which vertices represent all organism cells, past and present, and directed edges represent progeny relations among them. The root represents the fertilized egg, and the leaves represent extant and dead cells. Somatic mutations accumulated during cell division endow each organism cell with a genomic signature that is unique with very high probability. Distances between such genomic signatures can be used to reconstruct an organism's cell lineage tree. Cell populations possess unique features that are absent or rare in organism populations (e.g., the presence of stem cells and a small number of generations since the zygote) and do not undergo sexual reproduction; hence the reconstruction of cell lineage trees calls for careful examination and adaptation of the standard tools of population genetics. Our lab developed a method for reconstructing cell lineage trees by examining only mutations in highly variable microsatellite loci (MS, also called short tandem repeats, STR). In this study we use experimental data on somatic mutations in the MS of individual cells in humans and mice in order to validate and quantify the utility of known lineage tree reconstruction algorithms in this context. We employed extensive measurements of somatic mutations in individual cells isolated from healthy and diseased tissues of mice and humans. The validation was done by analyzing the ability to infer known and clear biological scenarios. In general, we found that if the biological scenario is simple, almost all algorithms tested can infer it. Another somewhat surprising conclusion is that the best algorithm among those tested is Neighbor Joining with the normalized absolute distance as the distance measure. We include our full dataset in Tables S1, S2, S3, S4, S5 to enable further analysis of this
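
    The distance measure singled out above, normalized absolute distance, is simple to state in code: the mean absolute difference in repeat counts over loci typed in both cells. The NaN convention for failed loci and the toy signatures are assumptions for illustration; the resulting matrix would be fed to a Neighbor Joining routine.

```python
import numpy as np

def normalized_absolute_distance(sig_a, sig_b):
    """Mean absolute difference in repeat counts over loci typed in both cells.
    NaN marks loci that failed to amplify in a cell (illustrative convention)."""
    both = ~np.isnan(sig_a) & ~np.isnan(sig_b)
    if not both.any():
        return np.nan
    return np.abs(sig_a[both] - sig_b[both]).mean()

# Toy microsatellite signatures for three cells; D is the input to Neighbor Joining.
cells = np.array([[12, 8, np.nan], [13, 8, 21], [12, 9, 22]], dtype=float)
D = np.array([[normalized_absolute_distance(a, b) for b in cells] for a in cells])
```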

  17. A fast stereo matching algorithm for 3D reconstruction of internal organs in laparoscopic surgery

    Science.gov (United States)

    Okada, Yoshimichi; Koishi, Takeshi; Ushiki, Suguru; Nakaguchi, Toshiya; Tsumura, Norimichi; Miyake, Yoichi

    2008-03-01

    We propose a fast stereo matching algorithm for 3D reconstruction of internal organs using a stereoscopic laparoscope. Stoyanov et al. have proposed a technique for recovering the 3D depth of internal organs from images taken by a stereoscopic laparoscope. In their technique, the dense stereo correspondence is solved by registration of the entire image. However, the computational cost is very high because registration of the entire image requires multidimensional optimization. In this paper, we propose a new algorithm based on a local area registration method that requires only low-dimensional optimization, reducing the computational cost. We evaluated the computational cost of the proposed algorithm using a stereoscopic laparoscope. We also evaluated the accuracy of the proposed algorithm using three types of images of abdominal models taken by a 3D laser scanner. In the matching step, the size of the template used to calculate the correlation coefficient, on which the computational cost strongly depends, was reduced by a factor of 16 compared with the conventional algorithm. The average depth errors were 4.68 mm, 7.18 mm, and 7.44 mm, respectively, and the accuracy was approximately the same as that of the conventional algorithm.

  18. VSMURF: A Novel Sliding Window Cleaning Algorithm for RFID Networks

    Directory of Open Access Journals (Sweden)

    He Xu

    2017-01-01

    Full Text Available Radio Frequency Identification (RFID) is one of the key technologies of the Internet of Things (IoT) and is used in many areas, such as mobile payments, public transportation, smart locks, and environment protection. However, the performance of RFID equipment can be easily affected by the surrounding environment, such as electronic products and metal appliances. These can degrade the RF signal, which makes the collection of RFID data unreliable. Usually, the unreliability of RFID source data has three aspects: false negatives, false positives, and dirty data. False negatives are the key problem, as the probability of false positives and dirty data occurring is relatively small. This paper proposes a novel sliding window cleaning algorithm called VSMURF, which is based on the traditional SMURF algorithm and combines the dynamic change of tags with a value analysis of confidence. Experimental results show that the VSMURF algorithm performs better in most conditions, whether the tag's speed is low or high. In particular, if the velocity parameter is set to 2 m/epoch, our proposed VSMURF algorithm performs better than SMURF. The results also show that the VSMURF algorithm has better performance than other algorithms in solving the problem of false negatives in RFID networks.

  19. Human matching behavior in social networks: an algorithmic perspective.

    Science.gov (United States)

    Coviello, Lorenzo; Franceschetti, Massimo; McCubbins, Mathew D; Paturi, Ramamohan; Vattani, Andrea

    2012-01-01

    We argue that algorithmic modeling is a powerful approach to understanding the collective dynamics of human behavior. We consider the task of pairing up individuals connected over a network, according to the following model: each individual is able to propose to match with and accept a proposal from a neighbor in the network; if a matched individual proposes to another neighbor or accepts another proposal, the current match will be broken; individuals can only observe whether their neighbors are currently matched but have no knowledge of the network topology or the status of other individuals; and all individuals have the common goal of maximizing the total number of matches. By examining the experimental data, we identify a behavioral principle called prudence, develop an algorithmic model, analyze its properties mathematically and by simulations, and validate the model with human subject experiments for various network sizes and topologies. Our results include i) a 1/2-approximate maximum matching is obtained in logarithmic time in the network size for bounded degree networks; ii) for any constant ε > 0, a (1 - ε)-approximate maximum matching is obtained in polynomial time, while obtaining a maximum matching can require an exponential time; and iii) convergence to a maximum matching is slower on preferential attachment networks than on small-world networks. These results allow us to predict that while humans can find a "good quality" matching quickly, they may be unable to find a maximum matching in feasible time. We show that the human subjects largely abide by prudence, and their collective behavior is closely tracked by the above predictions.

  20. General asymmetric neural networks and structure design by genetic algorithms: A learning rule for temporal patterns

    Energy Technology Data Exchange (ETDEWEB)

    Bornholdt, S. [Heidelberg Univ., (Germany). Inst., fuer Theoretische Physik; Graudenz, D. [Lawrence Berkeley Lab., CA (United States)

    1993-07-01

    A learning algorithm based on genetic algorithms for asymmetric neural networks with an arbitrary structure is presented. It is suited for the learning of temporal patterns and leads to stable neural networks with feedback.
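
    A minimal sketch of the idea, under stated assumptions: a tanh recurrent network with an arbitrary (asymmetric) weight matrix is evolved by truncation selection and Gaussian mutation to reproduce a temporal pattern. The fitness function and mutation-only variation are illustrative simplifications of a full genetic algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_network(W, x0, steps):
    """Iterate a small asymmetric recurrent network: x <- tanh(W @ x)."""
    x, traj = x0.copy(), [x0.copy()]
    for _ in range(steps):
        x = np.tanh(W @ x)
        traj.append(x.copy())
    return np.array(traj)

def fitness(W, x0, target):
    """Negative squared error between the produced and target temporal pattern."""
    return -np.mean((run_network(W, x0, len(target) - 1) - target) ** 2)

def evolve(n=4, pop=40, gens=200, sigma=0.2):
    x0 = rng.normal(size=n)
    target = run_network(rng.normal(size=(n, n)), x0, 20)  # a reachable pattern
    pool = [rng.normal(size=(n, n)) for _ in range(pop)]
    for _ in range(gens):
        pool.sort(key=lambda W: -fitness(W, x0, target))   # best first
        elite = pool[: pop // 4]                           # truncation selection
        pool = elite + [elite[rng.integers(len(elite))]    # Gaussian mutation
                        + rng.normal(scale=sigma, size=(n, n))
                        for _ in range(pop - len(elite))]
    return max(pool, key=lambda W: fitness(W, x0, target))
```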

  1. CN: a consensus algorithm for inferring gene regulatory networks using the SORDER algorithm and conditional mutual information test.

    Science.gov (United States)

    Aghdam, Rosa; Ganjali, Mojtaba; Zhang, Xiujun; Eslahchi, Changiz

    2015-03-01

    Inferring Gene Regulatory Networks (GRNs) from gene expression data is a major challenge in systems biology. The Path Consistency (PC) algorithm is one of the popular methods in this field. However, as an order-dependent algorithm, the PC algorithm is not robust, because it yields different network topologies if the gene order is permuted. In addition, the performance of this algorithm depends on the threshold value used for the independence tests. Consequently, selecting a suitable sequential ordering of nodes and an appropriate threshold value for the inputs of the PC algorithm are challenges in inferring a good GRN. In this work, we propose a heuristic algorithm, namely SORDER, to find a suitable sequential ordering of nodes. Based on the SORDER algorithm and a suitable interval threshold for Conditional Mutual Information (CMI) tests, a network inference method, namely the Consensus Network (CN), has been developed. In the proposed method, for each edge of the complete graph, a weighted value is defined. This value is considered the reliability value of the dependency between two nodes. The final inferred network, obtained using the CN algorithm, contains edges whose reliability value of dependency exceeds a defined threshold. The effectiveness of this method is benchmarked on several networks from the DREAM challenge and the widely used SOS DNA repair network in Escherichia coli. The results indicate that the CN algorithm is suitable for learning GRNs and considerably improves the precision of network inference. The source data sets and codes are available at .
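
    Under a Gaussian assumption, the conditional independence test used by PC-style inference is commonly implemented through partial correlation. The sketch below is that common variant, offered for illustration; it is not necessarily the exact CMI estimator used by CN.

```python
import numpy as np

def cmi_gaussian(x, y, z):
    """Gaussian conditional mutual information I(X;Y|Z) via partial correlation.
    x, y: (n,) samples; z: (n, k) conditioning variables (k may be 0)."""
    x, y = x - x.mean(), y - y.mean()
    if z.size:
        z = z - z.mean(axis=0)
        x = x - z @ np.linalg.lstsq(z, x, rcond=None)[0]   # regress out Z
        y = y - z @ np.linalg.lstsq(z, y, rcond=None)[0]
    r = np.clip(np.corrcoef(x, y)[0, 1], -0.9999, 0.9999)
    return -0.5 * np.log(1.0 - r ** 2)

# Hypothetical edge test: keep the edge between genes g1 and g2 only if
# cmi_gaussian(expr_g1, expr_g2, expr_conditioning_set) exceeds a threshold.
```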

  2. Influence of different path length computation models and iterative reconstruction algorithms on the quality of transmission reconstruction in Tomographic Gamma Scanning

    Science.gov (United States)

    Han, Miaomiao; Guo, Zhirong; Liu, Haifeng; Li, Qinghua

    2017-07-01

    This paper studies the influence of different path length computation models and iterative reconstruction algorithms on the quality of transmission reconstruction in Tomographic Gamma Scanning. The research purpose is to quantify and localize heterogeneous matrices while investigating the recovery of linear attenuation coefficient (LAC) maps in 200-liter drums. Two different path length computation models, the so-called "point to point" (PP) model and "point to detector" (PD) model, are coupled with two different transmission reconstruction algorithms: the Algebraic Reconstruction Technique (ART) with a non-negativity constraint, and Maximum Likelihood Expectation Maximization (MLEM). Thus 4 modes are formed: ART-PP, ART-PD, MLEM-PP, MLEM-PD. The transmission reconstruction qualities of these 4 modes are compared for heterogeneous matrices in radioactive waste drums. Results illustrate that the reconstruction quality of the MLEM algorithm is better than that of the ART algorithm, yielding the most accurate LAC maps, in good agreement with reference data simulated by Monte Carlo. Moreover, the PD model can be used to assay higher-density waste drums and has a greater scope of application than the PP model in TGS.

  3. Bioinspired evolutionary algorithm based for improving network coverage in wireless sensor networks.

    Science.gov (United States)

    Abbasi, Mohammadjavad; Bin Abd Latiff, Muhammad Shafie; Chizari, Hassan

    2014-01-01

    Wireless sensor networks (WSNs) consist of sensor nodes, each of which is able to monitor the physical area and send the collected information to the base station for further analysis. A key task in WSNs is the detection and coverage of the target area, which is provided by random deployment. This paper reviews and addresses various area detection and coverage problems in sensor networks. It organizes several scenarios for applying sensor node movement to improve network coverage based on a bioinspired evolutionary algorithm, and explains the concerns and objectives of controlling sensor node coverage. We discuss area coverage and target detection models based on the evolutionary algorithm.

  4. Research on Joint Handoff Algorithm in Vehicles Networks

    Directory of Open Access Journals (Sweden)

    Yuming Bi

    2016-01-01

    Full Text Available With the evolution of communication services from the fourth generation (4G) to the fifth generation (5G), we are going to face diverse challenges from the new network systems. On the one hand, seamless handoff is expected to integrate universal access among various network mechanisms. On the other hand, a variety of 5G technologies will complement each other to provide ubiquitous high speed wireless connectivity. Because the current wireless network cannot support handoff among Wireless Access for Vehicular Environments (WAVE), WiMAX, and LTE flexibly, this paper provides an advanced handoff algorithm to solve this problem. Firstly, the received signal strength is classified, and the vehicle speed and data rate under different channel conditions are optimized. Then, the optimal network is selected for handoff. Simulation results show that the proposed algorithm adapts well to high-speed environments, guarantees flexible and reasonable vehicle access to a variety of networks, and effectively prevents ping-pong handoff and link access failure.

  5. Neural-network-based voice-tracking algorithm

    Science.gov (United States)

    Baker, Mary; Stevens, Charise; Chaparro, Brennen; Paschall, Dwayne

    2002-11-01

    A voice-tracking algorithm was developed and tested for the purposes of electronically separating the voice signals of simultaneous talkers. Many individuals suffer from hearing disorders that often inhibit their ability to focus on a single speaker in a multiple speaker environment (the cocktail party effect). Digital hearing aid technology makes it possible to implement complex algorithms for speech processing in both the time and frequency domains. In this work, an average magnitude difference function (AMDF) was performed on mixed voice signals in order to determine the fundamental frequencies present in the signals. A time prediction neural network was trained to recognize normal human voice inflection patterns, including rising, falling, rising-falling, and falling-rising patterns. The neural network was designed to track the fundamental frequency of a single talker based on the training procedure. The output of the neural network can be used to design an active filter for speaker segregation. Tests were done using audio mixing of two to three speakers uttering short phrases. The AMDF function accurately identified the fundamental frequencies present in the signal. The neural network was tested using a single speaker uttering a short sentence. The network accurately tracked the fundamental frequency of the speaker.
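
    The AMDF step described above is compact enough to show directly. The sketch below scores candidate pitch lags by the average magnitude difference and picks the minimum within a plausible voice range; the frame length, sampling rate, and 80-400 Hz search band are illustrative choices.

```python
import numpy as np

def amdf(frame, lag):
    """Average magnitude difference of a frame against a lagged copy of itself."""
    return np.mean(np.abs(frame[lag:] - frame[:-lag]))

def fundamental_frequency(frame, fs, fmin=80.0, fmax=400.0):
    """Pick the lag minimizing the AMDF inside the plausible pitch range.
    The frame must be longer than the largest candidate lag, fs / fmin."""
    lags = np.arange(int(fs / fmax), int(fs / fmin) + 1)
    d = np.array([amdf(frame, k) for k in lags])
    return fs / lags[np.argmin(d)]

# fs = 16000; t = np.arange(1024) / fs
# frame = np.sin(2 * np.pi * 220 * t)        # 220 Hz test tone
# print(fundamental_frequency(frame, fs))    # ~220
```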

  6. District Heating Network Design and Configuration Optimization with Genetic Algorithm

    DEFF Research Database (Denmark)

    Li, Hongwei; Svendsen, Svend

    2011-01-01

    the heating plant location is allowed to vary. The connection between the heat generation plant and the end users can be represented with mixed integers, and the pipe friction and heat loss formulations are non-linear. In order to find the optimal DH distribution pipeline configuration, a genetic algorithm which handles the mixed integer nonlinear programming problem was chosen. The network configuration was represented through binary and integer encoding and was optimized in terms of the net present cost (NPC). The optimization results indicated that the optimal DH network configuration is determined

  7. Distributed Multitarget Probabilistic Coverage Control Algorithm for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Ying Tian

    2014-01-01

    Full Text Available This paper is concerned with the problem of multitarget coverage based on a probabilistic detection model. Coverage configuration is an effective method to alleviate the energy-limitation problem of sensors. Firstly, considering the attenuation of a node's sensing ability, the target probabilistic coverage problem is defined and formalized, based on the Neyman-Pearson probabilistic detection model. Secondly, in order to turn off redundant sensors, a simplified judging rule is derived, which allows the probabilistic coverage judgment to be executed locally on each node. Thirdly, a distributed node scheduling scheme is proposed for implementing the distributed algorithm. Simulation results show that this algorithm is robust to changes in network size, and when compared with the physical coverage algorithm, it can effectively minimize the number of active sensors while guaranteeing that all targets are γ-covered.

  8. Energy Efficient Distributed Fault Identification Algorithm in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Meenakshi Panda

    2014-01-01

    Full Text Available A distributed fault identification algorithm is proposed here to find both hard and soft faulty sensor nodes present in wireless sensor networks. The algorithm is distributed and self-detectable, and can detect the most common byzantine faults such as stuck at zero, stuck at one, and random data. In the proposed approach, each sensor node gathers the observed data from its neighbors and computes the mean to check whether a faulty sensor node is present. If a node detects the presence of a faulty sensor node, it compares its observed data with the data of its neighbors and predicts a probable fault status. The final fault status is determined by diffusing the fault information from the neighbors. The accuracy and completeness of the algorithm are verified with the help of a statistical model of the sensors' data. The performance is evaluated in terms of detection accuracy, false alarm rate, detection latency, and message complexity.
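
    A sketch of the local test described above, assuming fault-free neighbor readings cluster around a common mean: a node is suspected faulty when a reading strays too far from the neighborhood mean. The 3-sigma rule is an illustrative stand-in for the paper's comparison rule.

```python
import numpy as np

def local_fault_test(own_reading, neighbor_readings, tol=3.0):
    """Tentative fault status from the neighborhood (illustrative rule):
    suspect a fault if the node's own reading deviates from the neighbor
    mean by more than `tol` standard deviations."""
    r = np.asarray(neighbor_readings, dtype=float)
    return abs(own_reading - r.mean()) > tol * (r.std() + 1e-12)

# The tentative statuses would then be exchanged among neighbors and the
# final status fixed by diffusion/majority, mirroring the step described above.
```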

  9. Functional clustering algorithm for the analysis of dynamic network data

    Science.gov (United States)

    Feldt, S.; Waddell, J.; Hetrick, V. L.; Berke, J. D.; Żochowski, M.

    2009-05-01

    We formulate a technique for the detection of functional clusters in discrete event data. The advantage of this algorithm is that no prior knowledge of the number of functional groups is needed, as our procedure progressively combines data traces and derives the optimal clustering cutoff in a simple and intuitive manner through the use of surrogate data sets. In order to demonstrate the power of this algorithm to detect changes in network dynamics and connectivity, we apply it to both simulated neural spike train data and real neural data obtained from the mouse hippocampus during exploration and slow-wave sleep. Using the simulated data, we show that our algorithm performs better than existing methods. In the experimental data, we observe state-dependent clustering patterns consistent with known neurophysiological processes involved in memory consolidation.

  10. NML Computation Algorithms for Tree-Structured Multinomial Bayesian Networks

    Directory of Open Access Journals (Sweden)

    Kontkanen Petri

    2007-01-01

    Full Text Available Typical problems in bioinformatics involve large discrete datasets. Therefore, in order to apply statistical methods in such domains, it is important to develop efficient algorithms suitable for discrete data. The minimum description length (MDL) principle is a theoretically well-founded, general framework for performing statistical inference. The mathematical formalization of MDL is based on the normalized maximum likelihood (NML) distribution, which has several desirable theoretical properties. In the case of discrete data, straightforward computation of the NML distribution requires exponential time with respect to the sample size, since the definition involves a sum over all the possible data samples of a fixed size. In this paper, we first review some existing algorithms for efficient NML computation in the case of multinomial and naive Bayes model families. Then we proceed by extending these algorithms to more complex, tree-structured Bayesian networks.

  11. Improved Differential Evolution Algorithm for Wireless Sensor Network Coverage Optimization

    Directory of Open Access Journals (Sweden)

    Xing Xu

    2014-04-01

    Full Text Available In order to improve the efficiency of ecological monitoring of Poyang Lake, an improved hybrid algorithm, mixing differential evolution and particle swarm optimization, is proposed and applied to optimize the coverage problem of a wireless sensor network. The effects of the population size and the number of iterations on the coverage performance are then discussed and analyzed. Four kinds of statistical results on the coverage rate are obtained through extensive simulation experiments.
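
    Only the differential evolution half of the proposed hybrid is sketched below (the particle swarm component is omitted): a DE/rand/1/bin loop over flattened sensor coordinates that maximizes a user-supplied coverage function. The parameter values and the coverage interface are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def de_optimize(coverage, n_sensors=10, bounds=(0.0, 100.0),
                pop=30, gens=100, F=0.5, CR=0.9):
    """DE/rand/1/bin over flattened (x, y) sensor coordinates.
    `coverage` maps an (n_sensors, 2) layout to a covered-area fraction."""
    dim, (lo, hi) = 2 * n_sensors, bounds
    X = rng.uniform(lo, hi, size=(pop, dim))
    fit = np.array([coverage(x.reshape(-1, 2)) for x in X])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            v = np.clip(X[a] + F * (X[b] - X[c]), lo, hi)     # mutation
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True                    # binomial crossover
            u = np.where(mask, v, X[i])
            fu = coverage(u.reshape(-1, 2))
            if fu >= fit[i]:                                  # greedy selection
                X[i], fit[i] = u, fu
    return X[np.argmax(fit)].reshape(-1, 2), fit.max()
```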

  12. Reconstruction of social group networks from friendship networks using a tag-based model

    Science.gov (United States)

    Guan, Yuan-Pan; You, Zhi-Qiang; Han, Xiao-Pu

    2016-12-01

    Social group is a type of mesoscopic structure that connects human individuals at the microscopic level with the global structure of society. In this paper, we propose a tag-based model in which social groups expand along the edges connecting two neighbors that share a similar tag of interest. The model runs on a real-world friendship network, and its simulation results show that various properties of the simulated group network fit well with empirical analysis of real-world social groups, indicating that the model captures the major mechanism driving the evolution of social groups, successfully reconstructs the social group network from a friendship network, and sheds light on uncovering the relationships between social functional organizations.

  13. Maximum-entropy networks pattern detection, network reconstruction and graph combinatorics

    CERN Document Server

    Squartini, Tiziano

    2017-01-01

    This book is an introduction to maximum-entropy models of random graphs with given topological properties and their applications. Its original contribution is the reformulation of many seemingly different problems in the study of both real networks and graph theory within the unified framework of maximum entropy. Particular emphasis is put on the detection of structural patterns in real networks, on the reconstruction of the properties of networks from partial information, and on the enumeration and sampling of graphs with given properties.  After a first introductory chapter explaining the motivation, focus, aim and message of the book, chapter 2 introduces the formal construction of maximum-entropy ensembles of graphs with local topological constraints. Chapter 3 focuses on the problem of pattern detection in real networks and provides a powerful way to disentangle nontrivial higher-order structural features from those that can be traced back to simpler local constraints. Chapter 4 focuses on the problem o...

  14. Reconstructing context-specific gene regulatory network and identifying modules and network rewiring through data integration.

    Science.gov (United States)

    Ma, Tianle; Zhang, Aidong

    2017-07-15

    Reconstructing context-specific transcriptional regulatory networks is crucial for deciphering the principles of regulatory mechanisms underlying various conditions. Recent studies reconstructing transcriptional networks have focused on individual organisms or cell types and relied on data repositories of context-free regulatory relationships. Here we present a comprehensive framework to systematically derive putative regulator-target pairs in any given context by integrating context-specific transcriptional profiling and public data repositories of gene regulatory networks. Moreover, our framework can identify core regulatory modules and signature genes underlying the global regulatory circuitry, and detect network rewiring and core rewired modules in different contexts by considering gene modules and edge (gene interaction) modules collaboratively. We applied our methods to analyzing Autism RNA-seq experiment data and produced biologically meaningful results. In particular, all 11 hub genes in a predicted rewired autistic regulatory subnetwork have been linked to autism based on literature review. The predicted rewired autistic regulatory network may shed new insight into the disease mechanism. Published by Elsevier Inc.

  15. MAC Protocol for Ad Hoc Networks Using a Genetic Algorithm

    Science.gov (United States)

    Elizarraras, Omar; Panduro, Marco; Méndez, Aldo L.

    2014-01-01

    The problem of obtaining the transmission rate in an ad hoc network consists of adjusting the power of each node so that the signal to interference ratio (SIR) is ensured and, at the same time, the energy required to transmit from one node to another is obtained. Therefore, an optimal transmission rate for each node in a medium access control (MAC) protocol based on CSMA-CDMA (carrier sense multiple access-code division multiple access) for ad hoc networks can be obtained using evolutionary optimization. This work proposes a genetic algorithm for the transmission rate election considering perfect power control, and our proposal achieves an improvement of 10% compared with the scheme that handles the handshaking phase to adjust the transmission rate. Furthermore, this paper proposes a genetic algorithm that solves the problem of power combining, interference, data rate, and energy while ensuring the signal to interference ratio in an ad hoc network. The proposed genetic algorithm performs better (by 15%) compared to the CSMA-CDMA protocol without optimization. Therefore, we show by simulation the effectiveness of the proposed protocol in terms of throughput. PMID:25140339

  16. Performance Assessment of Different Pulse Reconstruction Algorithms for the ATHENA X-Ray Integral Field Unit

    Science.gov (United States)

    Peille, Philippe; Ceballos, Maria Teresa; Cobo, Beatriz; Wilms, Joern; Bandler, Simon; Smith, Stephen J.; Dauser, Thomas; Brand, Thorsten; den Hartog, Roland; de Plaa, Jelle; et al.

    2016-01-01

    The X-ray Integral Field Unit (X-IFU) microcalorimeter on board Athena, with its focal plane comprising 3840 Transition Edge Sensors (TESs) operating at 90 mK, will provide unprecedented spectral-imaging capability in the 0.2-12 keV energy range. It will rely on the on-board digital processing of current pulses induced by the heat deposited in the TES absorber, so as to recover the energy of each individual event. Assessing the capabilities of the pulse reconstruction is required to understand the overall scientific performance of the X-IFU, notably in terms of energy resolution degradation with both increasing energies and count rates. Using synthetic data streams generated by the X-IFU End-to-End simulator, we present here a comprehensive benchmark of various pulse reconstruction techniques, ranging from standard optimal filtering to more advanced algorithms based on noise covariance matrices. Besides deriving the spectral resolution achieved by the different algorithms, a first assessment of the computing power and ground calibration needs is presented. Overall, all methods show similar performance, with the reconstruction based on noise covariance matrices showing the best improvement with respect to the standard optimal filtering technique. Due to prohibitive calibration needs, this method might however not be applicable to the X-IFU, and the best compromise currently appears to be the so-called resistance space analysis, which also features very promising high count rate capabilities.
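
    The standard optimal filtering baseline mentioned above reduces to a short frequency-domain computation. The sketch below assumes a known pulse template and a noise power spectral density sampled on the same rfft grid, with detector calibration folded into a single scale factor; it is a generic illustration, not the X-IFU on-board implementation.

```python
import numpy as np

def optimal_filter(template, noise_psd):
    """Frequency-domain optimal filter from a pulse template and a noise PSD
    (both sampled on the same rfft grid). Normalized so that a noise-free
    template yields a filtered amplitude of exactly 1."""
    S = np.fft.rfft(template)
    H = np.conj(S) / noise_psd
    return H / np.sum(np.abs(S) ** 2 / noise_psd)

def pulse_energy(record, H, scale=1.0):
    """Energy estimate: filtered amplitude times a calibration scale factor."""
    return scale * np.real(np.sum(H * np.fft.rfft(record)))
```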

  17. Classification of ETM+ Remote Sensing Image Based on Hybrid Algorithm of Genetic Algorithm and Back Propagation Neural Network

    Directory of Open Access Journals (Sweden)

    Haisheng Song

    2013-01-01

    Full Text Available The back propagation neural network (BPNN) algorithm can be used for supervised classification in the processing of remote sensing image classification, but its defects are obvious: it easily falls into local minima, converges slowly, and makes it difficult to determine the number of intermediate hidden layer nodes. The genetic algorithm (GA) has the advantages of global optimization and resistance to local minima, but it has the disadvantage of poor local search capability. This paper uses a GA to generate the initial structure of the BPNN. Then, a stable, efficient, and fast BP classification network is obtained through fine adjustments to the improved BP algorithm. Finally, we use the hybrid algorithm to perform classification on a remote sensing image and compare it with the improved BP algorithm and the traditional maximum likelihood classification (MLC) algorithm. Results of experiments show that the hybrid algorithm outperforms the improved BP algorithm and the MLC algorithm.

  18. Compressive sensing reconstruction of feed-forward connectivity in pulse-coupled nonlinear networks

    Science.gov (United States)

    Barranca, Victor J.; Zhou, Douglas; Cai, David

    2016-06-01

    Utilizing the sparsity ubiquitous in real-world network connectivity, we develop a theoretical framework for efficiently reconstructing sparse feed-forward connections in a pulse-coupled nonlinear network through its output activities. Using only a small ensemble of random inputs, we solve this inverse problem through the compressive sensing theory based on a hidden linear structure intrinsic to the nonlinear network dynamics. The accuracy of the reconstruction is further verified by the fact that complex inputs can be well recovered using the reconstructed connectivity. We expect this Rapid Communication provides a new perspective for understanding the structure-function relationship as well as compressive sensing principle in nonlinear network dynamics.

  19. Single-Cell Tracking with PET using a Novel Trajectory Reconstruction Algorithm

    Science.gov (United States)

    Lee, Keum Sil; Kim, Tae Jin

    2015-01-01

    Virtually all biomedical applications of positron emission tomography (PET) use images to represent the distribution of a radiotracer. However, PET is increasingly used in cell tracking applications, for which the “imaging” paradigm may not be optimal. Here we investigate an alternative approach, which consists in reconstructing the time-varying position of individual radiolabeled cells directly from PET measurements. As a proof of concept, we formulate a new algorithm for reconstructing the trajectory of one single moving cell directly from list-mode PET data. We model the trajectory as a 3D B-spline function of the temporal variable and use non-linear optimization to minimize the mean-square distance between the trajectory and the recorded list-mode coincidence events. Using Monte Carlo simulations (GATE), we show that this new algorithm can track a single source moving within a small-animal PET system with <3 mm accuracy provided that the activity of the cell [Bq] is greater than four times its velocity [mm/s]. The algorithm outperforms conventional ML-EM as well as the “minimum distance” method used for positron emission particle tracking (PEPT). The new method was also successfully validated using experimentally acquired PET data. In conclusion, we demonstrated the feasibility of a new method for tracking a single moving cell directly from PET list-mode data, at the whole-body level, for physiologically relevant activities and velocities. PMID:25423651

  20. MODA: an efficient algorithm for network motif discovery in biological networks.

    Science.gov (United States)

    Omidi, Saeed; Schreiber, Falk; Masoudi-Nejad, Ali

    2009-10-01

    In recent years, interest has been growing in the study of complex networks. Since Erdös and Rényi (1960) proposed their random graph model about 50 years ago, many researchers have investigated and shaped this field. Many indicators have been proposed to assess the global features of networks. Recently, an active research area has developed in studying local features named motifs as the building blocks of networks. Unfortunately, network motif discovery is a computationally hard problem and finding rather large motifs (larger than 8 nodes) by means of current algorithms is impractical as it demands too much computational effort. In this paper, we present a new algorithm (MODA) that incorporates techniques such as a pattern growth approach for extracting larger motifs efficiently. We have tested our algorithm and found it able to identify larger motifs with more than 8 nodes more efficiently than most of the current state-of-the-art motif discovery algorithms. While most of the algorithms rely on induced subgraphs as motifs of the networks, MODA is able to extract both induced and non-induced subgraphs simultaneously. The MODA source code is freely available at: http://LBB.ut.ac.ir/Download/LBBsoft/MODA/

  1. A reverse engineering algorithm for neural networks, applied to the subthalamopallidal network of basal ganglia.

    Science.gov (United States)

    Floares, Alexandru George

    2008-01-01

    Modeling neural networks with systems of ordinary differential equations is a sensible approach, but also a very difficult one. This paper describes a new algorithm based on linear genetic programming which can be used to reverse engineer neural networks. The RODES algorithm automatically discovers the structure of the network, including neural connections, their signs and strengths, estimates its parameters, and can even be used to identify the biophysical mechanisms involved. The algorithm is tested on simulated time series data, generated using a realistic model of the subthalamopallidal network of the basal ganglia. The resulting ODE system is highly accurate, and results are obtained in a matter of minutes. This is because the problem of reverse engineering a system of coupled differential equations is reduced to one of reverse engineering individual algebraic equations. The algorithm allows the incorporation of common domain knowledge to restrict the solution space. To our knowledge, this is the first time a realistic reverse engineering algorithm based on linear genetic programming has been applied to neural networks.

  2. Influence of adaptive statistical iterative reconstruction algorithm on image quality in coronary computed tomography angiography.

    Science.gov (United States)

    Precht, Helle; Thygesen, Jesper; Gerke, Oke; Egstrup, Kenneth; Waaler, Dag; Lambrechtsen, Jess

    2016-12-01

    Coronary computed tomography angiography (CCTA) requires high spatial and temporal resolution, and increased low contrast resolution, for the assessment of coronary artery stenosis, plaque detection, and/or non-coronary pathology. Therefore, new reconstruction algorithms, particularly iterative reconstruction (IR) techniques, have been developed in an attempt to improve image quality at no cost in radiation exposure. The aim was to evaluate whether adaptive statistical iterative reconstruction (ASIR) enhances perceived image quality in CCTA compared to filtered back projection (FBP). Thirty patients underwent CCTA due to suspected coronary artery disease. Images were reconstructed using FBP, 30% ASIR, and 60% ASIR. Ninety image sets were evaluated by five observers using subjective visual grading analysis (VGA) and assessed by proportional odds modeling. Objective quality assessment (contrast, noise, and the contrast-to-noise ratio [CNR]) was analyzed with linear mixed effects modeling on log-transformed data. The need for ethical approval was waived by the local ethics committee as the study only involved anonymously collected clinical data. VGA showed significant improvements in sharpness when comparing FBP with ASIR, resulting in odds ratios of 1.54 for 30% ASIR and 1.89 for 60% ASIR (P = 0.004). The objective measures showed significant differences between FBP and 60% ASIR. ASIR improved the subjective image quality parameter of sharpness and, objectively, reduced noise and increased CNR.

  3. Pharyngoesophageal reconstruction after resection of hypopharyngeal carcinoma: a new algorithm after analysis of 142 cases.

    Science.gov (United States)

    Denewer, Adel; Khater, Ashraf; Hafez, Mohamed T; Hussein, Osama; Roshdy, Sameh; Shahatto, Fayez; Elnahas, Waleed; Kotb, Sherif; Mowafy, Khaled

    2014-06-09

    The aim of this study is to define an algorithm for the choice of reconstructive method for defects after laryngo-pharyngo-esophagectomy for hypopharyngeal carcinoma. One hundred and forty-two cases of hypopharyngeal carcinoma were included and operated on by either partial pharyngectomy, total pharyngectomy, or esophagectomy. The reconstructive method was tailored to the resected segment. A pectoralis flap was used in 48 cases, a free jejunal flap in 28 cases, augmented colon bypass in 4 cases, gastric pull-up in 32 cases, and a gastric tube in 30 cases. Mean hospital stay was 12 days. The mortality rate was 10.6% and the morbidity rate was 31.7%. Total flap failure occurred in 3 cases of free flap and in one case of pectoralis flap. There were 23 cases of early fistula. Late stricture occurred in 19 cases, the rates being highest with the myocutaneous flap (early fistula 12/50 and late stricture 13/50). The free jejunal flap was the flap of choice for reconstruction when the safety margin was still above the clavicle. In cases with added esophagectomy, we recommend the gastric tube as the method of choice for reconstruction.

  4. A fast reconstruction algorithm for bioluminescence tomography based on smoothed l0 norm regularization

    Science.gov (United States)

    He, Xiaowei; Yu, Jingjing; Geng, Guohua; Guo, Hongbo

    2013-10-01

    As an important optical molecular imaging technique, bioluminescence tomography (BLT) offers an inexpensive and sensitive means of non-invasively imaging a variety of physiological and pathological activities at the cellular and molecular levels in living small animals. The key problem of BLT is to recover the distribution of the internal bioluminescence sources from limited measurements on the surface. Considering the sparsity of the light source distribution, we directly formulate the inverse problem of BLT as an l0-norm minimization model and present a smoothed l0-norm (SL0) based reconstruction algorithm. By approximating the discontinuous l0 norm with a suitable continuous function, the SL0 method solves both the intractable computational load of the minimal-l0 search and the high sensitivity of the l0 norm to noise. Numerical experiments on a mouse atlas demonstrate that the proposed SL0-based reconstruction method can obtain whole-domain reconstruction without any a priori knowledge of the source permissible region, yielding almost the same reconstruction results as l1-norm methods.
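
    For readers unfamiliar with SL0, a minimal generic sketch of the smoothed-l0 iteration follows; it handles a toy underdetermined system, not the paper's BLT forward model, and the step-size and schedule parameters are illustrative assumptions.

```python
import numpy as np

def sl0(A, b, sigma_min=1e-3, sigma_decrease=0.7, mu=2.0, inner_iters=3):
    """Sketch of smoothed-l0 (SL0) for a generic underdetermined A x = b;
    a simplification, not the paper's BLT-specific implementation."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ b                          # minimum-l2-norm starting point
    sigma = 2 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner_iters):
            # Gradient step on the smooth surrogate sum(1 - exp(-x^2/2sigma^2))
            delta = x * np.exp(-x ** 2 / (2 * sigma ** 2))
            x = x - mu * delta
            x = x - A_pinv @ (A @ x - b)    # project back onto {x : A x = b}
        sigma *= sigma_decrease             # gradually sharpen the surrogate
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 100))          # toy underdetermined system
x_true = np.zeros(100); x_true[[5, 40, 77]] = [1.0, -2.0, 0.5]
x_hat = sl0(A, A @ x_true)
print(np.round(x_hat[[5, 40, 77]], 2))      # spikes should land near true values
```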

  5. Layout Optimization of Sensor-based Reconstruction of Explosion Overpressure Field Based on the Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Miaomiao Bai

    2014-11-01

    In underwater blasting experiments, the layout of the sensors has always been of great concern. From the perspective of reconstructing the explosion overpressure field, this paper presents four indicators which, combined with a genetic algorithm with global search, yield an optimal sensor layout scheme and can guide sensor placement in practical experiments. A multi-scale model of every subregion of the underwater blasting field was then established for use in simulation experiments. Using Matlab, the variation of these four indicators under different sensor layouts, as well as the reconstruction accuracy, was analyzed and discussed. Finally, analysis and comparison of the simulation results show that the program obtains a better sensor layout: it requires fewer sensors to achieve good results with high accuracy. In actual test explosions, sensors can be laid out by reference to this scheme.
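
    A generic skeleton of such a GA-driven layout search might look as follows; the fitness function here is a hypothetical stand-in (it merely rewards spread-out sensors), whereas the paper scores layouts with its four reconstruction-quality indicators.

```python
# Generic GA skeleton for choosing k sensor positions from a candidate grid.
import random

CANDIDATES = [(x, y) for x in range(10) for y in range(10)]  # candidate grid
K, POP, GENS = 5, 40, 60

def fitness(layout):
    # Placeholder objective: maximize the minimum pairwise squared distance.
    return min((ax - bx) ** 2 + (ay - by) ** 2
               for i, (ax, ay) in enumerate(layout)
               for (bx, by) in layout[i + 1:])

def crossover(a, b):
    # Merge parents, dedupe while preserving order, truncate to K positions.
    return list(dict.fromkeys(a[:K // 2] + b))[:K]

def mutate(layout):
    if random.random() < 0.3:
        layout[random.randrange(K)] = random.choice(CANDIDATES)
    return layout

random.seed(0)
pop = [random.sample(CANDIDATES, K) for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)        # elitist selection
    elite = pop[:POP // 2]
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(POP - len(elite))]
print(max(pop, key=fitness))
```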

  6. Secure Multicast Routing Algorithm for Wireless Mesh Networks

    Directory of Open Access Journals (Sweden)

    Rakesh Matam

    2016-01-01

    Multicast is an indispensable communication technique in wireless mesh networks (WMNs). Many applications in WMNs, including multicast TV, audio and video conferencing, and multiplayer social gaming, use multicast transmission. On the other hand, security in multicast transmission is crucial; without it, network services are significantly disrupted. Existing secure routing protocols that address different active attacks remain vulnerable due to the subtle nature of flaws in protocol design. Moreover, existing secure routing protocols assume that adversarial nodes cannot share an out-of-band communication channel, which rules out the possibility of a wormhole attack. In this paper, we propose SEMRAW (SEcure Multicast Routing Algorithm for Wireless mesh networks), which is resistant to all known active threats, including the wormhole attack. SEMRAW employs digital signatures to prevent a malicious node from gaining illegitimate access to the message contents. The security of SEMRAW is evaluated using the simulation paradigm.

  7. Evolving neural networks using a genetic algorithm for heartbeat classification.

    Science.gov (United States)

    Sekkal, Mansouria; Chikh, Mohamed Amine; Settouti, Nesma

    2011-07-01

    This study investigates the effectiveness of a genetic algorithm (GA)-evolved neural network (NN) classifier and its application to the classification of premature ventricular contraction (PVC) beats. As there is no standard procedure to determine the network structure for complicated cases, the design of the NN generally depends on the user's experience. To avoid this problem, we propose a neural classifier that uses a GA to determine the optimal connections between neurons for better recognition. The MIT-BIH arrhythmia database is employed to evaluate its accuracy. First, the topology of the NN was determined using the trial-and-error method. Second, the genetic operators were carefully designed to optimize the neural network structure. The performance and accuracy of the two techniques are presented and compared. Copyright © 2011 Informa UK, Ltd.

  8. Analysis of convergence performance of neural networks ranking algorithm.

    Science.gov (United States)

    Zhang, Yongquan; Cao, Feilong

    2012-10-01

    The ranking problem is to learn a real-valued function which gives rise to a ranking over an instance space; it has gained much attention in machine learning in recent years. This article analyzes the convergence performance of a neural network ranking algorithm by means of the given samples and the approximation properties of neural networks. The upper bounds on the convergence rate provided by our results can be considerably tight and independent of the dimension of the input space when the target function satisfies some smoothness condition. The obtained results imply that neural networks are able to adapt to the ranking function in the instance space and are hence able to circumvent the curse of dimensionality under some smoothness condition. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.

  9. The reconstruction algorithm used for [68Ga]PSMA-HBED-CC PET/CT reconstruction significantly influences the number of detected lymph node metastases and coeliac ganglia.

    Science.gov (United States)

    Krohn, Thomas; Birmes, Anita; Winz, Oliver H; Drude, Natascha I; Mottaghy, Felix M; Behrendt, Florian F; Verburg, Frederik A

    2017-04-01

    To investigate whether the numbers of lymph node metastases and coeliac ganglia delineated on [68Ga]PSMA-HBED-CC PET/CT scans differ among datasets generated using different reconstruction algorithms. Data were reconstructed using the BLOB-OS-TF, BLOB-OS and 3D-RAMLA algorithms. All reconstructions were assessed by two nuclear medicine physicians for the number of pelvic/paraaortal lymph node metastases as well as the number of coeliac ganglia. Standardized uptake values (SUV) were also calculated in different regions. At least one [68Ga]PSMA-HBED-CC PET/CT-positive pelvic or paraaortal lymph node metastasis was found in 49 and 35 patients using the BLOB-OS-TF algorithm, in 42 and 33 patients using the BLOB-OS algorithm, and in 41 and 31 patients using the 3D-RAMLA algorithm, respectively, and a positive ganglion was found in 92, 59 and 24 of 100 patients using the three algorithms, respectively. Quantitatively, the SUVmean and SUVmax were significantly higher with the BLOB-OS algorithm than with either the BLOB-OS-TF or the 3D-RAMLA algorithm in all measured regions (p < 0.05). The differences between the BLOB-OS-TF and 3D-RAMLA algorithms were not significant in the aorta (SUVmean, p = 0.93; SUVmax, p = 0.97) but were significant in all other regions (p < 0.05). The SUVmax of the ganglia was significantly higher with the BLOB-OS-TF algorithm than with either the BLOB-OS or the 3D-RAMLA algorithm, and significantly higher with the BLOB-OS than with the 3D-RAMLA algorithm (p < 0.05). The numbers of lymph node metastases and coeliac ganglia detected thus depend on the reconstruction algorithm used. The highest number of lesions and physiological structures will be visualized using a modern algorithm employing time-of-flight information.

  10. Algorithms for energy efficiency in wireless sensor networks

    Energy Technology Data Exchange (ETDEWEB)

    Busse, M.

    2007-01-21

    The recent advances in microsensor and semiconductor technology have opened a new field within computer science: the networking of small-sized sensors which are capable of sensing, processing, and communicating. Such wireless sensor networks offer new applications in the areas of habitat and environment monitoring, disaster control and operation, military and intelligence control, object tracking, video surveillance, traffic control, as well as in health care and home automation. It is likely that the deployed sensors will be battery-powered, which will limit their energy capacity significantly. Thus, energy efficiency becomes one of the main challenges that need to be taken into account, and the design of energy-efficient algorithms is a major contribution of this thesis. As the wireless communication in the network is one of the main energy consumers, we first consider in detail the characteristics of wireless communication. Using the embedded sensor board (ESB) platform recently developed by the Free University of Berlin, we analyze the means of forward error correction and propose an appropriate resync mechanism, which improves the communication between two ESB nodes substantially. Afterwards, we focus on the forwarding of data packets through the network. We present the algorithms energy-efficient forwarding (EEF), lifetime-efficient forwarding (LEF), and energy-efficient aggregation forwarding (EEAF). While EEF is designed to maximize the number of data bytes delivered per energy unit, LEF additionally takes into account the residual energy of forwarding nodes. In so doing, LEF further prolongs the lifetime of the network. Energy savings due to data aggregation and in-network processing are exploited by EEAF. Besides single-link forwarding, in which data packets are sent to only one forwarding node, we also study the impact of multi-link forwarding, which exploits the broadcast characteristics of the wireless medium by sending packets to several (potential) forwarding nodes.

  11. Universal data-based method for reconstructing complex networks with binary-state dynamics

    Science.gov (United States)

    Li, Jingwen; Shen, Zhesi; Wang, Wen-Xu; Grebogi, Celso; Lai, Ying-Cheng

    2017-03-01

    To understand, predict, and control complex networked systems, a prerequisite is to reconstruct the network structure from observable data. Despite recent progress in network reconstruction, binary-state dynamics that are ubiquitous in nature, technology, and society still present an outstanding challenge in this field. Here we offer a framework for reconstructing complex networks with binary-state dynamics by developing a universal data-based linearization approach that is applicable to systems with linear, nonlinear, discontinuous, or stochastic dynamics governed by monotonic functions. The linearization procedure enables us to convert the network reconstruction into a sparse signal reconstruction problem that can be resolved through convex optimization. We demonstrate generally high reconstruction accuracy for a number of complex networks associated with distinct binary-state dynamics, using binary data contaminated by noise and missing values. Our framework is completely data driven, efficient, and robust, and does not require any a priori knowledge about the detailed dynamical process on the network. The framework represents a general paradigm for reconstructing, understanding, and exploiting complex networked systems with binary-state dynamics.
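
    A much-simplified sketch of the per-node sparse recovery this framework reduces to: each node's incoming links are estimated by an l1-regularized regression, here with a toy surrogate activation signal rather than the paper's linearization of binary-state dynamics.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, T = 20, 400
W = (rng.random((n, n)) < 0.15).astype(float)    # hidden directed network
np.fill_diagonal(W, 0)

S = rng.integers(0, 2, size=(T, n)).astype(float)  # observed binary states
# Toy surrogate for each node's linearized activation signal: a weighted sum
# of neighbour states plus noise (stand-in for the paper's linearization).
Y = S @ W.T + 0.1 * rng.standard_normal((T, n))

W_hat = np.zeros_like(W)
for i in range(n):                      # one sparse signal recovery per node
    model = Lasso(alpha=0.05).fit(S, Y[:, i])
    W_hat[i] = model.coef_

recovered = (W_hat > 0.5).astype(float)   # threshold small shrunken weights
print("edge recovery accuracy:", (recovered == W).mean())
```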

  12. Subpixel boundary backward substitution reconstruction algorithm for non-uniform microscan to FPA and blind micromotion matching

    Science.gov (United States)

    Chen, Yi-nan; Jin, Wei-qi; Zhao, Lei; Gao, Mei-jing; Zhao, Lin

    2008-03-01

    For subpixel micro-scanning imaging, we propose a reconstruction algorithm based neither on interpolation nor on super-resolution, but on a block-by-block method that recurses from the boundary to the centre, given an additional narrowband boundary field-of-view diaphragm whose radiation is known a priori. The purpose of the predicted boundary values is to add constraints that resolve the non-uniqueness of the ill-posed inversion of the transition matrix describing the degradation process. For non-uniform scan factors, an improved algorithm incorporating the relevant non-uniform motion variables is proposed. Additionally, attention is focused on the case of unknown subpixel motion, where the reconstructed images are blurred by motion-parameter modulation and neighbouring-point aliasing because the assumed micro-motion value is incorrect. Unlike methods in which image registration is performed before multi-frame restoration of undersampled sequences frame by frame, here the 2-D motion vector is estimated from a single frame, directly from the blur characteristics of the reconstructed grids. We demonstrate that as the estimated motion approaches the true one, the sum of squares over all pixels of the unmatched image descends approximately to its minimum. The matching track, based on recursive Newton secant approximation, is optimized for high matching speed and precision by several strategies, including matching-region hunting, matching-direction selection, and convergence prejudgement. All iterative step lengths with respect to the motion parameters are replaced by suitable values derived from the statistical process and one- or multi-secant solutions. Simulations demonstrate the feasibility of the matching algorithm and an obvious resolution enhancement compared with the directly oversampled image.

  13. Enhanced temporal resolution at cardiac CT with a novel CT image reconstruction algorithm: Initial patient experience

    Energy Technology Data Exchange (ETDEWEB)

    Apfaltrer, Paul, E-mail: paul.apfaltrer@medma.uni-heidelberg.de [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Institute of Clinical Radiology and Nuclear Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, D-68167 Mannheim (Germany); Schoendube, Harald, E-mail: harald.schoendube@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); Schoepf, U. Joseph, E-mail: schoepf@musc.edu [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Allmendinger, Thomas, E-mail: thomas.allmendinger@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); Tricarico, Francesco, E-mail: francescotricarico82@gmail.com [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Department of Bioimaging and Radiological Sciences, Catholic University of the Sacred Heart, “A. Gemelli” Hospital, Largo A. Gemelli 8, Rome (Italy); Schindler, Andreas, E-mail: andreas.schindler@campus.lmu.de [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Vogt, Sebastian, E-mail: sebastian.vogt@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); Sunnegårdh, Johan, E-mail: johan.sunnegardh@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); and others

    2013-02-15

    Objective: To evaluate the effect of a temporal resolution improvement method (TRIM) for cardiac CT on diagnostic image quality for coronary artery assessment. Materials and methods: The TRIM-algorithm employs an iterative approach to reconstruct images from less than 180° of projections and uses a histogram constraint to prevent the occurrence of limited-angle artifacts. This algorithm was applied in 11 obese patients (7 men, 67.2 ± 9.8 years) who had undergone second generation dual-source cardiac CT with 120 kV, 175–426 mAs, and 500 ms gantry rotation. All data were reconstructed with a temporal resolution of 250 ms using traditional filtered-back projection (FBP) and of 200 ms using the TRIM-algorithm. Contrast attenuation and contrast-to-noise-ratio (CNR) were measured in the ascending aorta. The presence and severity of coronary motion artifacts was rated on a 4-point Likert scale. Results: All scans were considered of diagnostic quality. Mean BMI was 36 ± 3.6 kg/m². Average heart rate was 60 ± 9 bpm. Mean effective dose was 13.5 ± 4.6 mSv. When comparing FBP- and TRIM-reconstructed series, the attenuation within the ascending aorta (392 ± 70.7 vs. 396.8 ± 70.1 HU, p > 0.05) and CNR (13.2 ± 3.2 vs. 11.7 ± 3.1, p > 0.05) were not significantly different. A total of 110 coronary segments were evaluated. All studies were deemed diagnostic; however, there was a significant (p < 0.05) difference in the severity score distribution of coronary motion artifacts between FBP (median = 2.5) and TRIM (median = 2.0) reconstructions. Conclusion: The algorithm evaluated here delivers diagnostic imaging quality of the coronary arteries despite 500 ms gantry rotation. Possible applications include improvement of cardiac imaging on slower gantry rotation systems or mitigation of the trade-off between temporal resolution and CNR in obese patients.

  14. Research on wind field algorithm of wind lidar based on BP neural network and grey prediction

    Science.gov (United States)

    Chen, Yong; Chen, Chun-Li; Luo, Xiong; Zhang, Yan; Yang, Ze-hou; Zhou, Jie; Shi, Xiao-ding; Wang, Lei

    2018-01-01

    This paper uses a BP neural network and a grey algorithm to forecast and study the wind field measured by wind lidar. To reduce the residual error of the grey-algorithm wind field prediction, the minimum of the residual error function is sought: a BP neural network is trained on the residuals of the grey algorithm, the trained network model is used to forecast the residual sequence, and the predicted residual sequence is then used to correct the forecast sequence of the grey algorithm. The test data show that the grey algorithm modified by the BP neural network can effectively reduce the residual value and improve the prediction precision.
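
    A minimal sketch of this residual-correction scheme, assuming a textbook GM(1,1) grey model and a small scikit-learn network on synthetic data (the paper's lidar data and exact network settings are not given).

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def gm11(x0, m):
    """Classical GM(1,1) grey model: fit on x0, return fitted values plus
    an m-step-ahead forecast (textbook implementation, assumed here)."""
    x1 = np.cumsum(x0)                            # accumulated series (AGO)
    z = 0.5 * (x1[1:] + x1[:-1])                  # background values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + m)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x0[0]], np.diff(x1_hat)])

t = np.arange(60.0)
wind = 8 + 0.05 * t + np.sin(0.4 * t)             # synthetic wind-speed series

base = gm11(wind[:50], 10)                        # grey forecast
resid = wind[:50] - base[:50]                     # in-sample residuals

# BP-style network forecasts the residual sequence from a window of lags.
L = 5
Xr = np.array([resid[i:i + L] for i in range(len(resid) - L)])
yr = resid[L:]
nn = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                  random_state=0).fit(Xr, yr)

window = list(resid[-L:])
corrected = base[50:].copy()
for j in range(10):                               # roll the window forward
    r_hat = nn.predict([window])[0]
    corrected[j] += r_hat                         # grey forecast + residual
    window = window[1:] + [r_hat]

print(round(float(np.abs(wind[50:] - base[50:]).mean()), 3),   # grey only
      round(float(np.abs(wind[50:] - corrected).mean()), 3))   # corrected
```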

  15. Prolonging the Lifetime of Wireless Sensor Networks Interconnected to Fixed Network Using Hierarchical Energy Tree Based Routing Algorithm

    Directory of Open Access Journals (Sweden)

    M. Kalpana

    2014-01-01

    This research work proposes a mathematical model for the lifetime of wireless sensor networks (WSN). It also proposes an energy-efficient routing algorithm for WSN called the hierarchical energy tree based routing algorithm (HETRA), based on a hierarchical energy tree constructed using the available energy in each node. The energy efficiency is further augmented by reducing packet drops using an exponential congestion control algorithm (TCP/EXP). The algorithms are evaluated in WSNs interconnected to a fixed network with seven distribution patterns, simulated in ns2 and compared with existing algorithms on parameters such as the number of data packets, throughput, network lifetime, and the data packets average network lifetime product. Evaluation and simulation results show that the combination of HETRA and TCP/EXP maximizes network lifetime in all the patterns. The lifetime of the network with the HETRA algorithm increased to approximately 3.2 times that of the network implemented with AODV.

  16. Prolonging the lifetime of wireless sensor networks interconnected to fixed network using hierarchical energy tree based routing algorithm.

    Science.gov (United States)

    Kalpana, M; Dhanalakshmi, R; Parthiban, P

    2014-01-01

    This research work proposes a mathematical model for the lifetime of wireless sensor networks (WSN). It also proposes an energy-efficient routing algorithm for WSN called the hierarchical energy tree based routing algorithm (HETRA), based on a hierarchical energy tree constructed using the available energy in each node. The energy efficiency is further augmented by reducing packet drops using an exponential congestion control algorithm (TCP/EXP). The algorithms are evaluated in WSNs interconnected to a fixed network with seven distribution patterns, simulated in ns2 and compared with existing algorithms on parameters such as the number of data packets, throughput, network lifetime, and the data packets average network lifetime product. Evaluation and simulation results show that the combination of HETRA and TCP/EXP maximizes network lifetime in all the patterns. The lifetime of the network with the HETRA algorithm increased to approximately 3.2 times that of the network implemented with AODV.

  17. Distributed MLEM: an iterative tomographic image reconstruction algorithm for distributed memory architectures.

    Science.gov (United States)

    Cui, Jingyu; Pratx, Guillem; Meng, Bowen; Levin, Craig S

    2013-05-01

    The processing speed for positron emission tomography (PET) image reconstruction has been greatly improved in recent years by simply dividing the workload among the multiple processors of a graphics processing unit (GPU). However, if this strategy is generalized to a multi-GPU cluster, the processing speed does not improve linearly with the number of GPUs. This is because large data transfers are required between the GPUs after each iteration, effectively reducing the parallelism. This paper proposes a novel approach that reformulates the maximum likelihood expectation maximization (MLEM) algorithm so that it can scale up to many GPU nodes with less frequent inter-node communication. While being mathematically different, the new algorithm maximizes the same convex likelihood function as MLEM and thus converges to the same solution. Experiments on a multi-GPU cluster demonstrate the effectiveness of the proposed approach.
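
    For reference, a single-node MLEM iteration is compact; the sketch below uses a random toy system matrix and does not reproduce the paper's communication-avoiding reformulation, which is its actual contribution.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.random((200, 64))                 # system matrix: 200 LORs x 64 voxels
x_true = rng.random(64)
y = rng.poisson(A @ x_true * 50) / 50.0   # noisy projection data

x = np.ones(64)                           # uniform initial image
sens = A.sum(axis=0)                      # sensitivity image A^T 1
for _ in range(100):
    ratio = y / np.maximum(A @ x, 1e-12)  # measured / expected counts
    x *= (A.T @ ratio) / sens             # multiplicative MLEM update
print(np.corrcoef(x, x_true)[0, 1])       # reconstruction vs. ground truth
```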

  18. SAGA: a hybrid search algorithm for Bayesian Network structure learning of transcriptional regulatory networks.

    Science.gov (United States)

    Adabor, Emmanuel S; Acquaah-Mensah, George K; Oduro, Francis T

    2015-02-01

    Bayesian Networks have been used for the inference of transcriptional regulatory relationships among genes and are valuable for obtaining biological insights. However, finding an optimal Bayesian Network (BN) is NP-hard. Thus, heuristic approaches have sought to solve this problem effectively. In this work, we develop a hybrid search method combining Simulated Annealing with a Greedy Algorithm (SAGA). SAGA explores most of the search space by undergoing a two-phase search: first with a Simulated Annealing search and then with a Greedy search. Three sets of background-corrected and normalized microarray datasets were used to test the algorithm. BN structure learning was also conducted using the datasets and other established search methods as implemented in BANJO (Bayesian Network Inference with Java Objects). The Bayesian Dirichlet Equivalence (BDe) metric was used to score the networks produced with SAGA. SAGA predicted transcriptional regulatory relationships among genes in networks that evaluated to higher BDe scores, with high sensitivities and specificities. Thus, the proposed method competes well with existing search algorithms for Bayesian Network structure learning of transcriptional regulatory networks. Copyright © 2014 Elsevier Inc. All rights reserved.
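
    A skeleton of such a two-phase search, with a placeholder scoring function standing in for the BDe metric and edge-toggle moves restricted to acyclic structures.

```python
import itertools, math, random

N = 6                                        # number of genes (toy example)
random.seed(0)

def has_cycle(edges):
    # DFS-based cycle check so candidate structures stay DAGs.
    adj = {i: [v for u, v in edges if u == i] for i in range(N)}
    state = {}
    def dfs(u):
        state[u] = 1
        for v in adj[u]:
            if state.get(v) == 1 or (state.get(v) is None and dfs(v)):
                return True
        state[u] = 2
        return False
    return any(state.get(u) is None and dfs(u) for u in range(N))

def score(edges):
    # Placeholder stand-in for the BDe score (hypothetical target structure).
    good = {(0, 1), (1, 2), (2, 3), (3, 4)}
    return len(edges & good) - 0.5 * len(edges - good)

def neighbour(edges):
    u, v = random.sample(range(N), 2)
    cand = set(edges) ^ {(u, v)}             # toggle one directed edge
    return cand if not has_cycle(cand) else edges

# Phase 1: simulated annealing explores the space broadly.
cur, best, T = set(), set(), 2.0
for _ in range(2000):
    nxt = neighbour(cur)
    d = score(nxt) - score(cur)
    if d >= 0 or random.random() < math.exp(d / T):
        cur = nxt
    if score(cur) > score(best):
        best = set(cur)
    T *= 0.995                               # cooling schedule

# Phase 2: greedy hill-climbing refines the annealed solution.
improved = True
while improved:
    improved = False
    for u, v in itertools.permutations(range(N), 2):
        cand = best ^ {(u, v)}
        if not has_cycle(cand) and score(cand) > score(best):
            best, improved = cand, True
print(sorted(best))
```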

  19. Globally accelerated reconstruction algorithm for diffusion tomography with continuous-wave source in an arbitrary convex shape domain.

    Science.gov (United States)

    Pantong, Natee; Su, Jianzhong; Shan, Hua; Klibanov, Michael V; Liu, Hanli

    2009-03-01

    A new numerical imaging algorithm is presented for the reconstruction of optical absorption coefficients from near-infrared light data with a continuous-wave source. As a continuation of our earlier efforts in developing a series of methods called "globally convergent reconstruction methods" [J. Opt. Soc. Am. A 23, 2388 (2006)], this numerical algorithm solves the inverse problem through the solution of a boundary-value problem for a Volterra-type integral partial differential equation. We deal here with the particular issues of solving the inverse problem in an arbitrary convex-shaped domain. It is demonstrated in numerical studies that this reconstruction technique is highly efficient and stable with respect to complex distributions of the actual unknown absorption coefficients. The method is particularly useful for reconstruction from large data sets obtained from a tissue or organ of a particular shape, such as the prostate. Numerical reconstructions of a simulated prostate-shaped phantom with three different settings of absorption inclusions are presented.

  20. Comparison of evolutionary algorithms in gene regulatory network model inference.

    LENUS (Irish Health Repository)

    2010-01-01

    ABSTRACT: BACKGROUND: The evolution of high-throughput technologies that measure gene expression levels has created a data base for inferring gene regulatory networks (GRNs) (a process also known as reverse engineering of GRNs). However, the nature of these data has made this process very difficult. At the moment, several methods exist for discovering qualitative causal relationships between genes with high accuracy from microarray data, but large-scale quantitative analysis on real biological datasets cannot be performed to date, as existing approaches are not suitable for real microarray data, which are noisy and insufficient. RESULTS: This paper performs an analysis of several existing evolutionary algorithms for quantitative gene regulatory network modelling. The aim is to present the techniques used and offer a comprehensive comparison of approaches, under a common framework. Algorithms are applied to both synthetic and real gene expression data from DNA microarrays, and their ability to reproduce biological behaviour, scalability, and robustness to noise are assessed and compared. CONCLUSIONS: Presented is a comparison framework for the assessment of evolutionary algorithms used to infer gene regulatory networks. Promising methods are identified, and a platform for the development of appropriate model formalisms is established.

  1. Dynamic Regulatory Network Reconstruction for Alzheimer’s Disease Based on Matrix Decomposition Techniques

    Directory of Open Access Journals (Sweden)

    Wei Kong

    2014-01-01

    Alzheimer's disease (AD) is the most common form of dementia and leads to irreversible neurodegenerative damage of the brain. Finding the dynamic responses of genes, signaling proteins, transcription factor (TF) activities, and regulatory networks over the progressively deteriorating course of AD would represent a significant advance in discovering the pathogenesis of AD. However, high-throughput technologies for measuring TF activities are not yet available on a genome-wide scale. In this study, based on DNA microarray gene expression data and a priori information on TFs, the network component analysis (NCA) algorithm is applied to determine the TF activities and their regulatory influences on target genes (TGs) in incipient, moderate, and severe AD. On that basis, the dynamical gene regulatory networks of the deteriorative courses of AD were reconstructed. To select significant genes that are differentially expressed in the different courses of AD, independent component analysis (ICA), which outperforms traditional clustering methods and can successfully group one gene into different meaningful biological processes, was used. The molecular biological analysis showed that changes in TF activities and interactions of signaling proteins in mitosis, the cell cycle, immune response, and inflammation play an important role in the deterioration of AD.

  2. Precise two-dimensional D-bar reconstructions of human chest and phantom tank via sinc-convolution algorithm

    Directory of Open Access Journals (Sweden)

    Abbasi Mahdi

    2012-06-01

    Background: Electrical impedance tomography (EIT) is used as a fast clinical imaging technique for monitoring the health of human organs such as the lungs, heart, brain, and breast. Any practical EIT reconstruction algorithm should be sufficiently efficient in terms of convergence rate and accuracy. The main objective of this study is to investigate the feasibility of precise empirical conductivity imaging using a sinc-convolution algorithm in the D-bar framework. Methods: In the first step, synthetic and experimental data were used to compute an intermediate object named the scattering transform. Next, this object was used in a two-dimensional integral equation which was precisely and rapidly solved via the sinc-convolution algorithm to find the square root of the conductivity for each pixel of the image. For the purpose of comparison, the multigrid and NOSER algorithms were implemented under a similar setting. The quality of the reconstructions of synthetic models was tested against GREIT-approved quality measures. To validate the simulation results, reconstructions of a phantom chest and a human lung were used. Results: Evaluation of the synthetic reconstructions shows that the quality of the sinc-convolution reconstructions is considerably better than that of either of its competitors in terms of amplitude response, position error, ringing, resolution, and shape deformation. In addition, the results confirm near-exponential and linear convergence rates for sinc-convolution and multigrid, respectively. Moreover, the smallest relative errors and the highest degree of truth were found in sinc-convolution reconstructions from experimental phantom data. Reconstructions of clinical lung data show that the related physiological effect is well recovered by the sinc-convolution algorithm. Conclusions: The parametric evaluation demonstrates the efficiency of sinc-convolution in reconstructing accurate conductivity images from experimental data, with excellent results on both phantom and clinical data.

  3. A compressed sensing-based iterative algorithm for CT reconstruction and its possible application to phase contrast imaging

    Directory of Open Access Journals (Sweden)

    Li Xueli

    2011-08-01

    Background: Computed tomography (CT) is a technology that obtains tomograms of observed objects. In real-world applications, especially biomedical ones, lower radiation doses have been constantly pursued. To shorten scanning time and reduce the radiation dose, one can decrease the X-ray exposure time at each projection view or decrease the number of projections. Until quite recently, the traditional filtered back projection (FBP) method has been commonly exploited in CT image reconstruction. Applying the FBP method requires a large amount of projection data. Especially when the exposure speed is limited by the mechanical characteristics of the imaging facilities, using the FBP method may prolong scanning time and accumulate a high dose of radiation, consequently damaging the biological specimens. Methods: In this paper, we present a compressed sensing-based (CS-based) iterative algorithm for CT reconstruction. The algorithm minimizes the l1-norm of the sparse image as the constraint factor for the iteration procedure. With this method, we can reconstruct images from substantially reduced projection data and reduce the impact of artifacts introduced into the CT reconstructed image by insufficient projection information. Results: To validate and evaluate the performance of this CS-based iterative algorithm, we carried out quantitative evaluation studies in imaging of both a software Shepp-Logan phantom and a real polystyrene sample. The former is completely absorption based and the latter is imaged in phase contrast. The results show that the CS-based iterative algorithm can yield images with quality comparable to that obtained with the existing FBP and traditional algebraic reconstruction technique (ART) algorithms. Discussion: Compared with the common reconstruction from 180 projection images, this algorithm completes CT reconstruction from only 60 projection images, cutting the scan time while maintaining acceptable quality of the reconstructed image.

  4. An algorithm J-SC of detecting communities in complex networks

    Science.gov (United States)

    Hu, Fang; Wang, Mingzhu; Wang, Yanran; Hong, Zhehao; Zhu, Yanhui

    2017-11-01

    Currently, community detection in complex networks has become a hot topic. In this paper, based on the spectral clustering (SC) algorithm, we introduce the idea of Jacobi iteration and propose a novel algorithm, J-SC, for community detection in complex networks. The accuracy and efficiency of this algorithm are tested on several representative real-world networks and several computer-generated networks. The experimental results indicate that the J-SC algorithm can accurately and effectively detect the community structure in these networks. Meanwhile, compared with the state-of-the-art community detection algorithms SC, SOM, K-means, Walktrap, and Fastgreedy, the J-SC algorithm performs better, achieving higher values of modularity and NMI. Moreover, the new algorithm runs faster than the SOM and Walktrap algorithms.
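
    For context, a baseline spectral-clustering community detection of the kind J-SC builds on can be sketched as follows (the paper's Jacobi-iteration refinement is not reproduced):

```python
import numpy as np
import networkx as nx
from sklearn.cluster import KMeans

G = nx.karate_club_graph()                     # standard benchmark network
A = nx.to_numpy_array(G)
d = A.sum(axis=1)
# Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
L = np.eye(len(A)) - A / np.sqrt(np.outer(d, d))

k = 2
vals, vecs = np.linalg.eigh(L)
U = vecs[:, :k]                                # k smallest eigenvectors
U /= np.linalg.norm(U, axis=1, keepdims=True)  # row-normalize embeddings
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(U)
print(labels)                                  # community assignment per node
```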

  5. Superiorized algorithm for reconstruction of CT images from sparse-view and limited-angle polyenergetic data

    Science.gov (United States)

    Humphries, T.; Winn, J.; Faridani, A.

    2017-08-01

    Recent work in CT image reconstruction has seen increasing interest in the use of total variation (TV) and related penalties to regularize problems involving reconstruction from undersampled or incomplete data. Superiorization is a recently proposed heuristic which provides an automatic procedure to ‘superiorize’ an iterative image reconstruction algorithm with respect to a chosen objective function, such as TV. Under certain conditions, the superiorized algorithm is guaranteed to find a solution that is as satisfactory as any found by the original algorithm with respect to satisfying the constraints of the problem; this solution is also expected to be superior with respect to the chosen objective. Most work on superiorization has used reconstruction algorithms which assume a linear measurement model, which in the case of CT corresponds to data generated from a monoenergetic x-ray beam. Many CT systems generate x-rays from a polyenergetic spectrum, however, in which the measured data represent an integral of object attenuation over all energies in the spectrum. This inconsistency with the linear model produces the well-known beam hardening artifacts, which impair analysis of CT images. In this work we superiorize an iterative algorithm for reconstruction from polyenergetic data, using both TV and an anisotropic TV (ATV) penalty. We apply the superiorized algorithm in numerical phantom experiments modeling both sparse-view and limited-angle scenarios. In our experiments, the superiorized algorithm successfully finds solutions which are as constraints-compatible as those found by the original algorithm, with significantly reduced TV and ATV values. The superiorized algorithm thus produces images with greatly reduced sparse-view and limited angle artifacts, which are also largely free of the beam hardening artifacts that would be present if a superiorized version of a monoenergetic algorithm were used.
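
    The interleaving that superiorization performs can be sketched in a few lines: a feasibility-seeking sweep (here, Kaczmarz/ART on a toy 1-D problem, not the paper's polyenergetic CT model) is perturbed between sweeps by TV-descent steps with shrinking step sizes.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50
x_true = np.zeros(n); x_true[15:35] = 1.0          # piecewise-constant signal
A = rng.standard_normal((30, n))
b = A @ x_true

def kaczmarz_sweep(x, A, b):
    for i in range(A.shape[0]):                     # one ART pass over all rows
        a = A[i]
        x = x + (b[i] - a @ x) / (a @ a) * a
    return x

def tv_subgradient(x):
    g = np.zeros_like(x)
    d = np.sign(np.diff(x))        # subgradient of TV = sum |x_{i+1} - x_i|
    g[:-1] -= d
    g[1:] += d
    return g

x = np.zeros(n)
beta = 1.0
for k in range(200):
    g = tv_subgradient(x)
    if np.linalg.norm(g) > 0:
        x = x - beta * g / np.linalg.norm(g)        # superiorizing perturbation
    beta *= 0.98                                    # summable (geometric) steps
    x = kaczmarz_sweep(x, A, b)                     # feasibility-seeking step
print(round(float(np.abs(x - x_true).max()), 3))
```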

  6. A new dynamical layout algorithm for complex biochemical reaction networks.

    Science.gov (United States)

    Wegner, Katja; Kummer, Ursula

    2005-08-26

    To study complex biochemical reaction networks in living cells, researchers rely more and more on databases and computational methods. In order to facilitate computational approaches, visualisation techniques are highly important. Biochemical reaction networks, e.g. metabolic pathways, are often depicted as graphs, and these graphs should be drawn dynamically to provide flexibility in the context of different data. Conventional layout algorithms are not sufficient for every kind of pathway in biochemical research. This is mainly due to certain conventions to which biochemists/biologists are accustomed and which are not met by conventional layout algorithms. A number of approaches have been developed to improve this situation. Some of these are used in the context of biochemical databases and make more or less use of the information in these databases to aid the layout process. However, visualisation is becoming more and more important in modelling and simulation tools, which mostly do not offer additional connections to databases. Therefore, layout algorithms used in these tools have to work independently of any database. In addition, all existing algorithms face limitations with respect to the number of edge crossings when it comes to larger biochemical systems, owing to the interconnectivity of these systems. Last but not least, in some cases biochemical conventions are not met properly. For these reasons we have developed a new algorithm which tackles these problems by reducing the number of edge crossings in complex systems and taking further biological conventions into account to identify and visualise cycles. Furthermore, the algorithm is independent of database information so that it can easily be adopted in any application. It can also be tested as part of the SimWiz package (free to download for academic users at [1]). The new algorithm reduces the complexity of pathways, as well as edge crossings and edge length, in the resulting graphical representation.

  7. A Flexible Reservation Algorithm for Advance Network Provisioning

    Energy Technology Data Exchange (ETDEWEB)

    Balman, Mehmet; Chaniotakis, Evangelos; Shoshani, Arie; Sim, Alex

    2010-04-12

    Many scientific applications need support from a communication infrastructure that provides predictable performance, which requires effective algorithms for bandwidth reservation. Network reservation systems such as ESnet's OSCARS establish guaranteed-bandwidth secure virtual circuits for a certain bandwidth and length of time. However, users currently cannot inquire about bandwidth availability, nor do they receive alternative suggestions when reservation requests fail. In general, the number of reservation options is exponential in the number of nodes n and the current reservation commitments. We present a novel approach for path finding in time-dependent networks that takes advantage of user-provided parameters of total volume and time constraints and produces options for earliest completion and shortest duration. The theoretical complexity is only O(n²r²) in the worst case, where r is the number of reservations in the desired time interval. We have implemented our algorithm and developed efficient methodologies for its incorporation into network reservation frameworks. Performance measurements confirm the theoretical predictions.
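
    A toy version of the underlying search problem, under an assumed discrete-slot bandwidth model (OSCARS' actual data model differs), finds the earliest window from which some path can carry the requested volume.

```python
from collections import deque

SLOTS = 6
# Residual bandwidth per (edge, slot); a hypothetical 4-node network.
cap = {
    ("A", "B"): [10, 10, 2, 10, 10, 10],
    ("B", "D"): [10, 2, 2, 10, 10, 10],
    ("A", "C"): [5, 5, 5, 5, 5, 5],
    ("C", "D"): [5, 5, 5, 5, 5, 5],
}

def path_exists(src, dst, bw, start, dur):
    # Keep only edges that sustain bw over every slot in [start, start+dur).
    ok = {e for e, caps in cap.items() if min(caps[start:start + dur]) >= bw}
    seen, q = {src}, deque([src])
    while q:                                   # BFS over the surviving edges
        u = q.popleft()
        if u == dst:
            return True
        for (a, b) in ok:
            if a == u and b not in seen:
                seen.add(b); q.append(b)
    return False

def earliest_completion(src, dst, volume, bw):
    dur = -(-volume // bw)                     # slots needed at rate bw (ceil)
    for start in range(SLOTS - dur + 1):
        if path_exists(src, dst, bw, start, dur):
            return start, start + dur
    return None

print(earliest_completion("A", "D", volume=20, bw=10))   # -> (3, 5)
```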

  8. An online input force time history reconstruction algorithm using dynamic principal component analysis

    Science.gov (United States)

    Prawin, J.; Rama Mohan Rao, A.

    2018-01-01

    Knowledge of the dynamic loads acting on a structure is required in many practical engineering problems, such as structural strength analysis, health monitoring and fault diagnosis, and vibration isolation. In this paper, we present an online input force time history reconstruction algorithm using dynamic principal component analysis (DPCA), based on acceleration time history response measurements over moving windows. We also present an optimal sensor placement algorithm to place a limited number of sensors at dynamically sensitive spatial locations. The major advantage of the proposed input force identification algorithm is that, unlike earlier formulations, it does not require a finite element idealization of the structure and is therefore free from physical modelling errors. We have considered three numerical examples to validate the accuracy of the proposed DPCA-based method. The effects of measurement noise, multiple force identification, different kinds of loading, incomplete measurements, and high noise levels are investigated in detail. Parametric studies have been carried out to arrive at the optimal window size and the percentage of window overlap. The studies presented in this paper clearly establish the merits of the proposed algorithm for online load identification.

  9. Novel Spectrum Sensing Algorithms for OFDM Cognitive Radio Networks.

    Science.gov (United States)

    Shi, Zhenguo; Wu, Zhilu; Yin, Zhendong; Cheng, Qingqing

    2015-06-15

    Spectrum sensing technology plays an increasingly important role in cognitive radio networks. Consequently, several spectrum sensing algorithms have been proposed in the literature. In this paper, we present a new spectrum sensing algorithm, "Differential Characteristics-Based OFDM (DC-OFDM)", for detecting OFDM signals on the basis of differential characteristics. Detection of the presence of a primary user rests on whether the channel gain θ is distinguishable from zero. Furthermore, using the same differential operation, we improve two traditional OFDM sensing algorithms (cyclic prefix and pilot tones detection algorithms) and propose a "Differential Characteristics-Based Cyclic Prefix (DC-CP)" detector and a "Differential Characteristics-Based Pilot Tones (DC-PT)" detector, respectively. The DC-CP detector uses the auto-correlation vector to sense the spectrum, while the DC-PT detector takes the frequency-domain cross-correlation of the pilot tones as the test statistic to detect the primary user. Moreover, the distributions of the test statistics of the three proposed methods have been derived. Simulation results illustrate that all three proposed methods achieve good performance at low signal-to-noise ratio (SNR) in the presence of timing delay. Specifically, the DC-OFDM detector achieves the best performance among the presented detectors. Moreover, both the DC-CP and DC-PT detectors achieve significant improvements over their corresponding original detectors.
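
    As background, the classical cyclic-prefix statistic that DC-CP builds on is the autocorrelation at a lag equal to the FFT size; a sketch with illustrative parameters (not the paper's detector) follows.

```python
import numpy as np

rng = np.random.default_rng(5)
N, CP, SYMBOLS = 64, 16, 50

def ofdm_signal():
    blocks = []
    for _ in range(SYMBOLS):
        X = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
        x = np.fft.ifft(X) * np.sqrt(N)
        blocks.append(np.concatenate([x[-CP:], x]))   # prepend cyclic prefix
    return np.concatenate(blocks)

def cp_statistic(r):
    # Normalized autocorrelation at lag N; large when a CP repeats x[-CP:].
    num = np.abs(np.sum(r[:-N] * np.conj(r[N:])))
    return num / np.sum(np.abs(r) ** 2)

noise = (rng.standard_normal(SYMBOLS * (N + CP)) +
         1j * rng.standard_normal(SYMBOLS * (N + CP))) / np.sqrt(2)
print("signal+noise:", round(cp_statistic(0.5 * ofdm_signal() + noise), 3))
print("noise only  :", round(cp_statistic(noise), 3))
```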

  10. Designing Artificial Neural Networks Using Particle Swarm Optimization Algorithms.

    Science.gov (United States)

    Garro, Beatriz A; Vázquez, Roberto A

    2015-01-01

    Artificial Neural Network (ANN) design is a complex task because its performance depends on the architecture, the selected transfer function, and the learning algorithm used to train the set of synaptic weights. In this paper we present a methodology that automatically designs an ANN using particle swarm optimization algorithms such as Basic Particle Swarm Optimization (PSO), Second Generation of Particle Swarm Optimization (SGPSO), and a New Model of PSO called NMPSO. The aim of these algorithms is to evolve, at the same time, the three principal components of an ANN: the set of synaptic weights, the connections or architecture, and the transfer functions for each neuron. Eight different fitness functions were proposed to evaluate the fitness of each solution and find the best design. These functions are based on the mean square error (MSE) and the classification error (CER) and implement a strategy to avoid overtraining and to reduce the number of connections in the ANN. In addition, the ANN designed with the proposed methodology is compared with those designed manually using the well-known Back-Propagation and Levenberg-Marquardt Learning Algorithms. Finally, the accuracy of the method is tested with different nonlinear pattern classification problems.
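
    A minimal PSO loop evolving only the weights of a tiny fixed-architecture network (on XOR) gives the flavor; the paper's methods additionally evolve the architecture and per-neuron transfer functions.

```python
import numpy as np

rng = np.random.default_rng(6)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0, 1, 1, 0], float)

def mse(w):
    W1, b1 = w[:6].reshape(2, 3), w[6:9]       # 2-3-1 network, tanh hidden
    W2, b2 = w[9:12], w[12]
    h = np.tanh(X @ W1 + b1)
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))     # sigmoid output neuron
    return np.mean((out - y) ** 2)

P, D = 30, 13                                  # swarm size, weight dimension
pos = rng.uniform(-1, 1, (P, D))
vel = np.zeros((P, D))
pbest = pos.copy(); pbest_f = np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(300):
    r1, r2 = rng.random((P, D)), rng.random((P, D))
    # Standard velocity update: inertia + cognitive + social terms.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("final XOR MSE:", round(mse(gbest), 4))
```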

  11. APPLICATION OF NEURAL NETWORK ALGORITHMS FOR BPM LINEARIZATION

    Energy Technology Data Exchange (ETDEWEB)

    Musson, John C. [JLAB; Seaton, Chad [JLAB; Spata, Mike F. [JLAB; Yan, Jianxun [JLAB

    2012-11-01

    Stripline BPM sensors contain inherent non-linearities as a result of field distortions from the pickup elements. Many methods have been devised to facilitate corrections, often employing polynomial fitting. The cost of computation makes real-time correction difficult, particularly when integer math is utilized. The application of neural-network technology, particularly the multi-layer perceptron algorithm, is proposed as an efficient alternative for electrode linearization. A process of supervised learning is initially used to determine the weighting coefficients, which are subsequently applied to the incoming electrode data. A non-linear layer, known as an activation layer, is responsible for the removal of saturation effects. Implementation of a perceptron in an FPGA-based software-defined radio (SDR) is presented, along with performance comparisons. In addition, efficient calculation of the sigmoidal activation function via the CORDIC algorithm is presented.

  12. Methods of information theory and algorithmic complexity for network biology.

    Science.gov (United States)

    Zenil, Hector; Kiani, Narsis A; Tegnér, Jesper

    2016-03-01

    We survey and introduce concepts and tools located at the intersection of information theory and network biology. We show that Shannon's information entropy, compressibility and algorithmic complexity quantify different local and global aspects of synthetic and biological data. We show examples such as the emergence of giant components in Erdös-Rényi random graphs, and the recovery of topological properties from numerical kinetic properties simulating gene expression data. We provide exact theoretical calculations, numerical approximations and error estimations of entropy, algorithmic probability and Kolmogorov complexity for different types of graphs, characterizing their variant and invariant properties. We introduce formal definitions of complexity for both labeled and unlabeled graphs and prove that the Kolmogorov complexity of a labeled graph is a good approximation of its unlabeled Kolmogorov complexity and thus a robust definition of graph complexity. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Using a coevolutionary algorithm on P2P networks

    Directory of Open Access Journals (Sweden)

    Rezaee Alireza

    2009-01-01

    Multicast routing is a basic requirement for providing QoS (quality of service) in multimedia streaming on peer-to-peer networks. Building multicast trees that optimize delay cost while respecting the limited bandwidth of nodes and links (load-balance constraints) is an NP-hard (nondeterministic polynomial time hard) problem. In this paper we have used a coevolutionary algorithm to build multicast trees with optimized average delay from the source to the clients, considering the limited capacity of nodes and links at the application layer. The numerical results obtained show that the costs are much improved compared with other existing non-GA (genetic algorithm) approaches. We have also used only a portion of every node's out-degree, which improves the results compared to using the entire out-degree.

  14. The Forward-Reverse Algorithm for Stochastic Reaction Networks

    KAUST Repository

    Bayer, Christian

    2015-01-07

    In this work, we present an extension of the forward-reverse algorithm by Bayer and Schoenmakers [2] to the context of stochastic reaction networks (SRNs). We then apply this bridge-generation technique to the statistical inference problem of approximating the reaction coefficients based on discretely observed data. To this end, we introduce a two-phase iterative inference method in which we solve a set of deterministic optimization problems where the SRNs are replaced by the classical ODE rates; then, during the second phase, the Monte Carlo version of the EM algorithm is applied starting from the output of the previous phase. Starting from a set of over-dispersed seeds, the output of our two-phase method is a cluster of maximum likelihood estimates obtained by using convergence assessment techniques from the theory of Markov chain Monte Carlo.

  15. Neural network implementations of data association algorithms for sensor fusion

    Science.gov (United States)

    Brown, Donald E.; Pittard, Clarence L.; Martin, Worthy N.

    1989-01-01

    The paper is concerned with locating a time-varying set of entities in a fixed field when the entities are sensed at discrete time instants. At a given time instant, a collection of bivariate Gaussian sensor reports is produced, and these reports estimate the locations of a subset of the entities present in the field. A database of reports is maintained, which ideally should contain one report for each entity sensed. Whenever a collection of sensor reports is received, the database must be updated to reflect the new information. This updating requires association processing between the database reports and the new sensor reports to determine which pairs of sensor and database reports correspond to the same entity. Algorithms for performing this association processing are presented. Neural network implementations of the algorithms, along with simulation results comparing the approaches, are provided.

  16. Characterization of adaptive statistical iterative reconstruction algorithm for dose reduction in CT: A pediatric oncology perspective

    Energy Technology Data Exchange (ETDEWEB)

    Brady, S. L.; Yee, B. S.; Kaufman, R. A. [Department of Radiological Sciences, St. Jude Children' s Research Hospital, Memphis, Tennessee 38105 (United States)

    2012-09-15

    Purpose: This study demonstrates a means of implementing an adaptive statistical iterative reconstruction (ASiR™) technique for dose reduction in computed tomography (CT) while maintaining similar noise levels in the reconstructed image. The effects on image quality and noise texture were assessed at all implementation levels of ASiR™. Empirically derived dose reduction limits were established for ASiR™ for imaging of the trunk in a pediatric oncology population ranging from 1 year old through adolescence/adulthood. Methods: Image quality was assessed using metrics established by the American College of Radiology (ACR) CT accreditation program. Each image quality metric was tested using the ACR CT phantom with 0%-100% ASiR™ blended with filtered back projection (FBP) reconstructed images. Additionally, the noise power spectrum (NPS) was calculated for three common reconstruction filters of the trunk. The empirically derived limitations on ASiR™ implementation for dose reduction were assessed using 1-, 5-, and 10-year-old and adolescent/adult anthropomorphic phantoms. To assess dose reduction limits, the phantoms were scanned in increments of increased noise index (decrementing mA using automatic tube current modulation) balanced with ASiR™ reconstruction to maintain noise equivalence with the 0% ASiR™ image. Results: The ASiR™ algorithm did not produce any unfavorable effects on image quality as assessed by ACR criteria. Conversely, low-contrast resolution was found to improve due to the reduction of noise in the reconstructed images. NPS calculations demonstrated that images had lower-frequency noise, lower noise variance, and coarser graininess at progressively higher percentages of ASiR™ reconstruction; in spite of the similar magnitudes of noise, the images reconstructed with 50% or more ASiR™ presented a noticeably coarser noise texture.

  17. MIRA: mutual information-based reporter algorithm for metabolic networks.

    Science.gov (United States)

    Cicek, A Ercument; Roeder, Kathryn; Ozsoyoglu, Gultekin

    2014-06-15

    Discovering the transcriptional regulatory architecture of the metabolism has been an important topic for understanding the implications of transcriptional fluctuations on metabolism. The reporter algorithm (RA) was proposed to determine the hot spots in metabolic networks around which transcriptional regulation is focused owing to a disease or a genetic perturbation. Using a z-score-based scoring scheme, RA calculates the average statistical change in the expression levels of genes that are neighbors to a target metabolite in the metabolic network. The RA approach has been used in numerous studies to analyze cellular responses to downstream genetic changes. In this article, we propose a mutual information-based multivariate reporter algorithm (MIRA) with the goal of eliminating the following problems in detecting reporter metabolites: (i) conventional statistical methods suffer from small sample sizes; (ii) as the z-score ranges from minus to plus infinity, averaging scores can cancel out opposite effects; and (iii) analyzing genes one by one and then aggregating the results can lead to information loss. MIRA is a multivariate and combinatorial algorithm that calculates the aggregate transcriptional response around a metabolite using mutual information. We show that MIRA's results are biologically sound, empirically significant, and more reliable than RA's. We apply MIRA to gene expression analysis of six knockout strains of Escherichia coli and show that MIRA captures the underlying metabolic dynamics of the switch from aerobic to anaerobic respiration. We also apply MIRA to an Autism Spectrum Disorder gene expression dataset. Results indicate that MIRA reports metabolites that overlap strongly with recently found metabolic biomarkers in the autism literature. Overall, MIRA is a promising algorithm for detecting metabolic drug targets and understanding the relation between gene expression and metabolic activity. The code is implemented in C#.
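
    A univariate simplification of the reporter idea can be sketched as follows; MIRA itself is multivariate and combinatorial, and the data, network, and discretization below are synthetic stand-ins.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(7)
samples = 40
condition = np.repeat([0, 1], samples // 2)      # e.g. wild type vs. knockout

genes = {f"g{i}": rng.standard_normal(samples) for i in range(6)}
genes["g0"] += condition * 2.0                   # g0 responds to the condition

# Hypothetical metabolite -> neighbouring-genes map from a metabolic network.
neighbours = {"met_A": ["g0", "g1"], "met_B": ["g2", "g3", "g4"]}

def reporter_score(gene_names):
    scores = []
    for g in gene_names:
        # Discretize expression into tertiles before computing MI with labels.
        bins = np.digitize(genes[g], np.quantile(genes[g], [0.33, 0.66]))
        scores.append(mutual_info_score(condition, bins))
    return float(np.mean(scores))

for met, gs in neighbours.items():
    print(met, round(reporter_score(gs), 3))     # met_A should score higher
```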

  18. Optimization of neural network algorithm of the land market description

    Directory of Open Access Journals (Sweden)

    M. A. Karpovich

    2016-01-01

    The advantages of neural network technology over traditional descriptions of dynamically changing systems, which include a modern land market, are shown. The basic difficulty arising in the practical implementation of neural network models of the land and construction-products market is revealed: the formation of a representative set of training and test examples. The requirements necessary for a correct description of the current economic situation are determined; they consist in the fact that the training and test set in the feature space should not contain ranges with a low density of observations. Methods of optimizing the empirical array that avoid long-range extrapolation of the data beyond the ranges where the examples are concentrated are formulated. It is shown that the radical method of optimizing the set of training and test examples, namely collecting supplementary information, involves significant costs in time and resources for economic problems, and its cost/efficiency ratio is less favourable than that of an algorithm optimizing neural network models of the land market on a fixed set of empirical data. An optimization algorithm based on transforming the information arrays, which expands the ranges where the examples are concentrated and compresses the ranges with a low density of observations, is analyzed in detail. A significant reduction of the relative error in land price description is demonstrated on the specific example of the Voronezh region market for land intended for road construction, which makes the radical method of empirical array optimization cost-effective given the significant absolute value of the land. The high economic efficiency of the proposed algorithms is demonstrated.

  19. Test of Sliding Window Algorithm for Jets Reconstruction in ATLAS Hadronic Calorimeters

    CERN Document Server

    Mehdiyev, R; Nevski, P; Salihagic, D

    1999-01-01

    Tests of the ``sliding window'' jet-finding algorithm have been performed for the reconstruction of back-to-back jets in the barrel and end-cap regions of the ATLAS hadronic calorimeters. Fully simulated events have been used for various jet energies and pseudorapidities in the range E(jet) = 20 - 2000 GeV and eta = 0.6 - 3.05. The transverse energy threshold for jet candidates was found to be the most sensitive parameter of the algorithm. The value of this parameter was selected to maximize jet reconstruction efficiency in a window of 0.3 x 0.3 in pseudorapidity and azimuthal angle. Plots are given to demonstrate the dependence of the optimal transverse energy threshold on the jet total energy, transverse energy and pseudorapidity. It is shown that a 90 - 95% efficiency of single-jet reconstruction is achievable by a proper choice of the transverse energy threshold. For this range of jet energies and pseudorapidities the value of the transverse energy threshold varies between 15 - 35% of th...
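
    A minimal sketch of a sliding-window jet finder on an eta-phi grid of transverse energies; the window size, threshold, and greedy separation rule below are illustrative, while the ATLAS study tunes the threshold against the jet energy and pseudorapidity.

```python
import numpy as np

def sliding_window_jets(et_grid, window=3, et_threshold=5.0):
    """Rank all windows by summed ET and greedily accept windows above threshold
    that are well separated from already accepted jets. phi wraps around; eta is
    clipped at the grid edges."""
    n_eta, n_phi = et_grid.shape
    half = window // 2
    candidates = []
    for i in range(n_eta):
        for j in range(n_phi):
            rows = range(max(0, i - half), min(n_eta, i + half + 1))
            cols = [(j + d) % n_phi for d in range(-half, half + 1)]
            et = sum(et_grid[r, c] for r in rows for c in cols)
            if et >= et_threshold:
                candidates.append((et, i, j))
    jets = []
    for et, i, j in sorted(candidates, reverse=True):
        if all(abs(i - i2) >= window or
               min(abs(j - j2), n_phi - abs(j - j2)) >= window
               for _, i2, j2 in jets):
            jets.append((et, i, j))
    return jets

grid = np.zeros((10, 12))
grid[4, 6], grid[4, 7] = 8.0, 3.0        # one hard two-tower deposit
print(sliding_window_jets(grid))          # one jet, summed ET 11.0
```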

  20. Evaluation of image reconstruction algorithm for near infrared topography by virtual head phantom

    Science.gov (United States)

    Kawaguchi, Hiroshi; Okada, Eiji

    2007-07-01

    The poor spatial resolution and reproducibility of the images are disadvantages of near-infrared topography. The authors previously proposed the combination of a double-density probe arrangement and an image reconstruction algorithm using a spatial sensitivity profile to improve the spatial resolution and the reproducibility. However, the proposed method was evaluated only on a simplified adult head model, and it was uncertain whether it is effective for an actual head with its complicated structure. In this study, the proposed method is evaluated on a virtual head phantom whose 3D structure is based on an MRI scan of an adult head. An absorption change whose size is roughly equivalent to the width of the brain gyri was measured by the conventional method and the proposed method to evaluate the spatial resolution of the topographic images obtained by each method. The positions of the probe arrangements were slightly changed, and the topographic images of the same brain activation measured at the two probe positions were compared to evaluate the reproducibility of the NIR topography. The results indicate that the combination of the double-density probe arrangement and the image reconstruction algorithm using the spatial sensitivity profile can improve both the spatial resolution and the reproducibility of the topographic image of brain activation in the virtual head phantom. However, the uneven thickness of the superficial tissues affects the accuracy of the position of activation in the images.
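
    A minimal sketch of image reconstruction with a spatial sensitivity profile, assuming a linear model y = S x (S maps absorption changes to measured signal changes) inverted with Tikhonov regularization; the matrix sizes and regularization weight are illustrative, and the paper's sensitivity profiles come from light-propagation modelling in the head.

```python
import numpy as np

def reconstruct(y, S, lam=1e-2):
    """Regularized least squares: x = S^T (S S^T + lam I)^-1 y."""
    m = S.shape[0]
    return S.T @ np.linalg.solve(S @ S.T + lam * np.eye(m), y)

# toy usage: 30 measurement channels, 20 image pixels, one focal absorption change
rng = np.random.default_rng(1)
S = rng.random((30, 20))
x_true = np.zeros(20)
x_true[8] = 1.0
y = S @ x_true
print(np.argmax(reconstruct(y, S)))   # the reconstruction should peak at pixel 8
```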

  1. First-order convex feasibility algorithms for iterative image reconstruction in limited angular-range X-ray CT

    CERN Document Server

    Sidky, Emil Y; Pan, Xiaochuan

    2012-01-01

    Iterative image reconstruction (IIR) algorithms in computed tomography (CT) are based on algorithms for solving a particular optimization problem. Design of an IIR algorithm, therefore, is aided by knowledge of the solution to the optimization problem on which it is based. Oftentimes, however, it is impractical to achieve an accurate solution to the optimization of interest, which complicates the design of IIR algorithms. This issue is particularly acute for CT with a limited angular-range scan, which leads to poorly conditioned system matrices and difficult-to-solve optimization problems. In this article, we develop IIR algorithms which solve a certain type of optimization problem called convex feasibility. The convex feasibility approach can provide alternatives to unconstrained optimization approaches and at the same time allow for efficient algorithms for their solution -- thereby facilitating the IIR algorithm design process. An accelerated version of the Chambolle-Pock (CP) algorithm is adapted to various convex fea...
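
    A minimal sketch of the convex feasibility idea: find a point in the intersection of convex sets by alternating projections (POCS). The two sets below, a hyperplane standing in for data consistency and a box constraint, are illustrative; the paper itself adapts an accelerated Chambolle-Pock algorithm rather than plain alternating projections.

```python
import numpy as np

def project_hyperplane(x, a, b):
    """Exact projection onto {x : a . x = b}."""
    return x - ((a @ x - b) / (a @ a)) * a

def project_box(x, lo=0.0, hi=1.0):
    """Exact projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def pocs(x, a, b, iters=200):
    """Alternate projections until x (approximately) lies in both sets."""
    for _ in range(iters):
        x = project_box(project_hyperplane(x, a, b))
    return x

a = np.array([1.0, 2.0, 3.0])
b = 3.0
x = pocs(np.array([5.0, -4.0, 2.0]), a, b)
print(x, a @ x)   # x lies in [0,1]^3 with a . x close to 3
```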

  2. Dummy source digitization algorithm for reconstruction of flexible brachytherapy catheters with biplane images.

    Science.gov (United States)

    Pálvölgyi, Jenö

    2014-03-01

    The traditional brachytherapy catheter reconstruction with biplane images is based on digitizing radio-opaque markers with a pointing device on a film or on a screen. An algorithm to automate the digitization of radio-opaque marker coordinates on biplane images is presented. To obtain the marker coordinates in a proper sequence, instead of the usual pair of reconstruction images, a series of images was taken with radio-opaque markers inserted consecutively into the catheters. The images were pre-processed to suppress the shadowing of anatomic structures. The determination of the marker coordinates is based on the detection of characteristic high-gradient variations in pre-processed image profiles. The method was tested on six endometrial insertions performed with Simon-Norman catheters using our version of Heyman packing. 28 catheters from six treatment fractions were digitized, typically 10 markers per catheter. To obtain the marker coordinates, adjustment of two threshold levels on the pre-processed images was needed. The coordinates of the radio-opaque markers on the biplane projection images were obtained without positive or negative artefacts. The dummy source coordinates on the biplane images were digitized in a proper sequence: from the catheters' tip towards the end of the catheters. After the three-dimensional reconstruction of the catheters from the digitized coordinates, the geometry file was imported into the brachytherapy planning system for dose calculation. The method has the advantage of eliminating the manual digitization of the dummy sources.
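
    A minimal sketch of the profile-based detection step, assuming markers appear as short runs of high gradient magnitude in a pre-processed profile; the threshold and the toy profile are illustrative, and the paper tunes two threshold levels on real biplane images.

```python
import numpy as np

def marker_positions(profile, grad_threshold):
    """Return centres of high-gradient runs; each radio-opaque marker produces
    a rising and a falling edge in the profile."""
    grad = np.abs(np.diff(profile))
    hits = grad > grad_threshold
    positions, start = [], None
    for i, h in enumerate(hits):
        if h and start is None:
            start = i
        elif not h and start is not None:
            positions.append((start + i) // 2)
            start = None
    if start is not None:
        positions.append((start + len(hits)) // 2)
    return positions

# toy usage: flat background with two sharp markers (four edges in total)
profile = np.array([10, 10, 10, 80, 80, 10, 10, 10, 90, 90, 10], dtype=float)
print(marker_positions(profile, grad_threshold=30))
```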

  3. A Hybrid Spectral Clustering and Deep Neural Network Ensemble Algorithm for Intrusion Detection in Sensor Networks

    Directory of Open Access Journals (Sweden)

    Tao Ma

    2016-10-01

    Full Text Available The development of intrusion detection systems (IDS) that are adapted to allow routers and network defence systems to detect malicious network traffic disguised as network protocols or normal access is a critical challenge. This paper proposes a novel approach called SCDNN, which combines spectral clustering (SC) and deep neural network (DNN) algorithms. First, the dataset is divided into k subsets based on sample similarity using cluster centres, as in SC. Next, the distance between data points in a testing set and the training set is measured based on similarity features and is fed into the deep neural network algorithm for intrusion detection. Six KDD-Cup99 and NSL-KDD datasets and a sensor network dataset were employed to test the performance of the model. The experimental results indicate that the SCDNN classifier not only performs better than backpropagation neural network (BPNN), support vector machine (SVM), random forest (RF) and Bayes tree models in detection accuracy and in the types of abnormal attacks found, but also provides an effective tool for the study and analysis of intrusion detection in large networks.
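
    A minimal sketch of the two-stage idea, assuming scikit-learn is available; KMeans and MLPClassifier are stand-ins for the paper's spectral clustering and deep neural network, and routing each test point to its nearest cluster mirrors the distance measurement described above.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

def fit_scdnn_like(X, y, k=2, seed=0):
    """Cluster the training data, then train one classifier per cluster."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
    nets = {}
    for c in range(k):
        mask = km.labels_ == c
        nets[c] = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                random_state=seed).fit(X[mask], y[mask])
    return km, nets

def predict_scdnn_like(km, nets, X):
    """Route each point to the closest cluster centre, classify there."""
    clusters = km.predict(X)
    return np.array([nets[c].predict(x.reshape(1, -1))[0]
                     for c, x in zip(clusters, X)])

# toy usage: two Gaussian blobs, each with its own two-class structure
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(6, 1, (40, 2))])
thresholds = np.r_[np.zeros(40), np.full(40, 6.0)]
y = (X[:, 0] > thresholds).astype(int)
km, nets = fit_scdnn_like(X, y)
print((predict_scdnn_like(km, nets, X) == y).mean())  # training accuracy, near 1.0
```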

  4. A Hybrid Spectral Clustering and Deep Neural Network Ensemble Algorithm for Intrusion Detection in Sensor Networks.

    Science.gov (United States)

    Ma, Tao; Wang, Fen; Cheng, Jianjun; Yu, Yang; Chen, Xiaoyun

    2016-10-13

    The development of intrusion detection systems (IDS) that are adapted to allow routers and network defence systems to detect malicious network traffic disguised as network protocols or normal access is a critical challenge. This paper proposes a novel approach called SCDNN, which combines spectral clustering (SC) and deep neural network (DNN) algorithms. First, the dataset is divided into k subsets based on sample similarity using cluster centres, as in SC. Next, the distance between data points in a testing set and the training set is measured based on similarity features and is fed into the deep neural network algorithm for intrusion detection. Six KDD-Cup99 and NSL-KDD datasets and a sensor network dataset were employed to test the performance of the model. The experimental results indicate that the SCDNN classifier not only performs better than backpropagation neural network (BPNN), support vector machine (SVM), random forest (RF) and Bayes tree models in detection accuracy and in the types of abnormal attacks found, but also provides an effective tool for the study and analysis of intrusion detection in large networks.

  5. On Evaluating Power Loss with HATSGA Algorithm for Power Network Reconfiguration in the Smart Grid

    OpenAIRE

    Calhau, Flavio Galvão; Pezzutti, Alysson; Martins, Joberto S. B.

    2017-01-01

    This paper presents the power network reconfiguration algorithm HATSGA with an "R" modeling approach and evaluates its behavior in computing new reconfiguration topologies for the power network in the Smart Grid context. The modelling of the power distribution network with the language "R" is used to represent the network and to support the computation of distinct algorithm configurations for evaluating new reconfiguration topologies. The HATSGA algorithm adopts hybrid Tabu Search and...

  6. Developpement d'algorithmes de reconstruction statistique appliques en tomographie rayons-X assistee par ordinateur [Development of statistical reconstruction algorithms applied to computer-assisted X-ray tomography]

    Science.gov (United States)

    Thibaudeau, Christian

    Computed tomography (CT) provides, non-invasively, a three-dimensional image of a subject's internal anatomy. It is the logical evolution of radiography and allows a volume to be observed in different planes (sagittal, coronal, axial, or any other plane). CT can advantageously complement positron emission tomography (PET), a tool of choice in biomedical research and in cancer diagnosis. PET provides functional, physiological and metabolic information, allowing radiotracers to be localized and quantified inside the human body. It has unmatched sensitivity, but can nevertheless suffer from low spatial resolution and a lack of anatomical landmarks, depending on the radiotracer used. The combination, or fusion, of PET and CT images provides this anatomical localization of the radiotracer distribution. The CT image represents a map of the attenuation undergone by the X-rays as they pass through the tissues. It therefore also improves the quantification of the PET image by offering the possibility of correcting for attenuation. The CT image is obtained by transforming attenuation profiles into a Cartesian image that can be interpreted by a human. While the quality of this image is strongly influenced by the performance of the scanner, it also depends greatly on the ability of the reconstruction algorithm to produce a faithful representation of the imaged medium. Standard reconstruction techniques, based on filtered back-projection (FBP), rely on a mathematically perfect model of the acquisition geometry. An alternative to this reference method is called statistical, or iterative, reconstruction. It yields better results in the presence of noise or a limited amount of information and can virtually adapt to all forms...
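
    A minimal sketch of statistical (iterative) reconstruction as contrasted with FBP above, using the classical MLEM update for a system model y = A x; the tiny system matrix and 1D "image" are illustrative.

```python
import numpy as np

def mlem(A, y, iters=50):
    """Multiplicative MLEM update: x <- x * A^T(y / Ax) / A^T 1."""
    x = np.ones(A.shape[1])            # flat initial estimate
    sens = A.sum(axis=0)               # sensitivity (column sums)
    for _ in range(iters):
        ratio = y / np.maximum(A @ x, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([2.0, 0.5, 1.0])
print(mlem(A, A @ x_true))             # iterates approach x_true = [2.0, 0.5, 1.0]
```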

  7. A constructive algorithm for unsupervised learning with incremental neural network

    Directory of Open Access Journals (Sweden)

    Jenq-Haur Wang

    2015-04-01

    In our experiment, Reuters-21578 was used as the dataset to show the effectiveness of the proposed method for text classification. The experimental results showed that our method can effectively classify texts, with a best F1-measure of 92.5%. They also showed that the learning algorithm can enhance accuracy effectively and efficiently. The framework scales well with network size, with both training and testing times showing a constant trend, which supports the feasibility of the method for practical use.

  8. BOUNDARY DETECTION ALGORITHMS IN WIRELESS SENSOR NETWORKS: A SURVEY

    Directory of Open Access Journals (Sweden)

    Lanny Sitanayah

    2009-01-01

    Full Text Available Wireless sensor networks (WSNs) comprise a large number of sensor nodes, which are spread out within a region and communicate using wireless links. In some WSN applications, recognizing boundary nodes is important for topology discovery, geographic routing and tracking. In this paper, we study the problem of recognizing the boundary nodes of a WSN. We first identify the factors that influence the design of algorithms for boundary detection. Then, we classify the existing work on boundary detection, which is vital for target tracking to detect when targets enter or leave the sensor field.

  9. The production route selection algorithm in virtual manufacturing networks

    Science.gov (United States)

    Krenczyk, D.; Skolud, B.; Olender, M.

    2017-08-01

    The increasing requirements and competition in the global market are challenges for companies' profitability in production and supply chain management. This situation became the basis for the construction of virtual organizations, which are created in response to temporary needs. The problem of production flow planning in virtual manufacturing networks is considered. In the paper, an algorithm is proposed for selecting, from the set of admissible routes, a production route that meets the technology and resource requirements under the criterion of minimum cost.
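
    A minimal sketch of the selection rule described above, under an assumed encoding of routes, technology requirements, and capacity; the names and fields are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    operations: frozenset   # technologies the route provides
    capacity: int           # units per period the route can handle
    cost: float

def select_route(routes, required_ops, demand):
    """Keep only routes meeting technology and resource requirements,
    then pick the cheapest admissible one."""
    admissible = [r for r in routes
                  if required_ops <= r.operations and r.capacity >= demand]
    return min(admissible, key=lambda r: r.cost) if admissible else None

routes = [
    Route("R1", frozenset({"mill", "drill"}), capacity=100, cost=42.0),
    Route("R2", frozenset({"mill", "drill", "paint"}), capacity=80, cost=35.0),
    Route("R3", frozenset({"mill"}), capacity=200, cost=20.0),
]
print(select_route(routes, frozenset({"mill", "drill"}), demand=60))  # -> R2
```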

  10. Finite-Size Geometric Entanglement from Tensor Network Algorithms

    OpenAIRE

    Shi, Qian-Qian; Orus, Roman; Fjaerestad, John Ove; Zhou, Huan-Qiang

    2009-01-01

    The global geometric entanglement is studied in the context of newly-developed tensor network algorithms for finite systems. For one-dimensional quantum spin systems it is found that, at criticality, the leading finite-size correction to the global geometric entanglement per site behaves as $b/n$, where $n$ is the size of the system and $b$ a given coefficient. Our conclusion is based on the computation of the geometric entanglement per spin for the quantum Ising model in a transverse magneti...
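
    A minimal numerical illustration of the finite-size form quoted above: if the geometric entanglement per site behaves as E(n) = E_inf + b/n at criticality, the coefficient b can be read off a linear fit in 1/n. The data below are synthetic, with assumed values of E_inf and b.

```python
import numpy as np

ns = np.array([16, 32, 64, 128, 256], dtype=float)
E_inf, b = 0.31, 0.77        # assumed values for this illustration only
E = E_inf + b / ns + 1e-4 * np.random.default_rng(0).normal(size=ns.size)

# linear fit of E against 1/n: slope estimates b, intercept estimates E_inf
slope, intercept = np.polyfit(1.0 / ns, E, 1)
print(f"fitted b = {slope:.3f}, fitted E_inf = {intercept:.3f}")
```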

  11. Evaluation of clustering algorithms for protein-protein interaction networks

    Directory of Open Access Journals (Sweden)

    van Helden Jacques

    2006-11-01

    Full Text Available Abstract Background: Protein interactions are crucial components of all cellular processes. Recently, high-throughput methods have been developed to obtain a global description of the interactome (the whole network of protein interactions for a given organism). In 2002, the yeast interactome was estimated to contain up to 80,000 potential interactions. This estimate is based on the integration of data sets obtained by various methods (mass spectrometry, two-hybrid methods, genetic studies). High-throughput methods are known, however, to yield a non-negligible rate of false positives, and to miss a fraction of existing interactions. The interactome can be represented as a graph where nodes correspond to proteins and edges to pairwise interactions. In recent years, clustering methods have been developed and applied to extract relevant modules from such graphs. These algorithms require the specification of parameters that may drastically affect the results. In this paper we present a comparative assessment of four algorithms: Markov Clustering (MCL), Restricted Neighborhood Search Clustering (RNSC), Super Paramagnetic Clustering (SPC), and Molecular Complex Detection (MCODE). Results: A test graph was built on the basis of 220 complexes annotated in the MIPS database. To evaluate the robustness to false positives and false negatives, we derived 41 altered graphs by randomly removing edges from or adding edges to the test graph in various proportions. Each clustering algorithm was applied to these graphs with various parameter settings, and the clusters were compared with the annotated complexes. We analyzed the sensitivity of the algorithms to the parameters and determined their optimal parameter values. We also evaluated their robustness to alterations of the test graph. We then applied the four algorithms to six graphs obtained from high-throughput experiments and compared the resulting clusters with the annotated complexes. Conclusion: This...
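
    A minimal sketch of the graph-perturbation protocol described in the Results, assuming networkx is available; connected components and a best-Jaccard score are simple stand-ins for the four clustering algorithms and the complex-matching criteria used in the paper.

```python
import itertools
import random
import networkx as nx

def alter_graph(G, remove_frac=0.2, add_frac=0.2, seed=0):
    """Derive an altered graph by randomly removing and adding edges,
    in proportions relative to the original edge count."""
    rng = random.Random(seed)
    H = G.copy()
    edges = list(H.edges())
    H.remove_edges_from(rng.sample(edges, int(remove_frac * len(edges))))
    non_edges = [e for e in itertools.combinations(G.nodes(), 2)
                 if not G.has_edge(*e)]
    H.add_edges_from(rng.sample(non_edges, int(add_frac * len(edges))))
    return H

def best_jaccard(cluster, complexes):
    """Best overlap of one cluster against the annotated complexes."""
    return max(len(cluster & c) / len(cluster | c) for c in complexes)

G = nx.Graph([(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6)])
complexes = [{1, 2, 3}, {4, 5, 6}]
H = alter_graph(G)
clusters = [set(c) for c in nx.connected_components(H)]
print([round(best_jaccard(c, complexes), 2) for c in clusters])
```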

  12. Median prior constrained TV algorithm for sparse view low-dose CT reconstruction.

    Science.gov (United States)

    Liu, Yi; Shangguan, Hong; Zhang, Quan; Zhu, Hongqing; Shu, Huazhong; Gui, Zhiguo

    2015-05-01

    It is known that lowering the X-ray tube current (mAs) or tube voltage (kVp) and simultaneously reducing the total number of X-ray views (sparse view) is an effective means to achieve a low dose in computed tomography (CT) scans. However, the associated image quality from conventional filtered back-projection (FBP) usually degrades due to excessive quantum noise. Although sparse-view CT reconstruction via total variation (TV), under a scanning protocol of reduced X-ray tube current, has been demonstrated to achieve significant radiation dose reduction while maintaining image quality, noticeable patchy artifacts still exist in the reconstructed images. In this study, to address the problem of patchy artifacts, we proposed a median prior constrained TV regularization to retain image quality by introducing an auxiliary vector m in register with the object. Specifically, the approximate action of m is to draw, in each iteration, each object voxel toward its own local median, aiming to improve low-dose image quality with sparse-view projection measurements. Subsequently, an alternating optimization algorithm is adopted to optimize the associated objective function. We refer to the median prior constrained TV regularization as "TV_MP" for simplicity. Experimental results on digital phantoms and a clinical phantom demonstrated that the proposed TV_MP with appropriate control parameters can not only ensure a higher signal-to-noise ratio (SNR) of the reconstructed image but also improve its resolution compared with the original TV method. Copyright © 2015 Elsevier Ltd. All rights reserved.
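
    A minimal 1D sketch of the TV_MP idea: alternate a data-consistency gradient step with a TV step while pulling each voxel toward its local median (the role of the auxiliary vector m). Step sizes, weights, and the trivial forward model are illustrative assumptions, not the paper's alternating optimization.

```python
import numpy as np

def local_median(x, radius=1):
    """Median of each voxel's neighborhood (the auxiliary vector m)."""
    return np.array([np.median(x[max(0, i - radius):i + radius + 1])
                     for i in range(len(x))])

def tv_grad(x, eps=1e-8):
    """Gradient of a smoothed 1D total-variation term."""
    d = np.diff(x)
    g = np.zeros_like(x)
    g[:-1] -= d / np.sqrt(d * d + eps)
    g[1:] += d / np.sqrt(d * d + eps)
    return g

def tv_mp(A, y, iters=200, step=0.1, tv_w=0.05, mp_w=0.3):
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x -= step * A.T @ (A @ x - y)              # data-consistency step
        x -= step * tv_w * tv_grad(x)              # TV regularization step
        x += mp_w * step * (local_median(x) - x)   # pull toward local median
    return x

A = np.eye(8)   # trivial forward model, just to make the demo self-contained
y = np.array([0, 0, 1, 1, 1, 0, 0, 0], dtype=float)
y += 0.1 * np.random.default_rng(2).normal(size=8)
print(np.round(tv_mp(A, y), 2))   # noisy step signal, denoised
```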

  13. Congestion Relief of Contingent Power Network with Evolutionary Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Abhinandan De

    2012-03-01

    Full Text Available This paper presents a methodology based on the differential evolution optimization technique for congestion management cost optimization of contingent power networks. In deregulated systems, line congestion can increase the cost of electricity, apart from causing stability problems. Restraining line flow to a particular level of congestion is therefore imperative from both the stability and the economy points of view. Employing the Congestion Sensitivity Index proposed in this paper, the proposed algorithm can be adopted to select the congested lines in a power network and then search for a congestion-constrained optimal generation schedule at the cost of a minimum congestion management charge, without any load curtailment or installation of FACTS devices. It is shown that the methodology, on application, can provide better operating conditions in terms of improvements in the bus voltage and loss profiles of the system. The efficiency of the proposed methodology has been tested on an IEEE 30-bus benchmark system and the results look promising.
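
    A minimal sketch of the differential evolution optimizer (DE/rand/1/bin) on which the methodology rests; the sphere objective is a stand-in for the congestion-management cost, and the control parameters are conventional defaults.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = lo + rng.random((pop_size, len(lo))) * (hi - lo)
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # mutation: combine three distinct individuals other than i
            idx = rng.choice([j for j in range(pop_size) if j != i],
                             size=3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # binomial crossover, forcing at least one gene from the mutant
            cross = rng.random(len(lo)) < CR
            cross[rng.integers(len(lo))] = True
            trial = np.where(cross, mutant, pop[i])
            # greedy selection keeps the population monotonically improving
            ft = f(trial)
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
    return pop[np.argmin(fit)], fit.min()

sphere = lambda x: float(np.sum(x ** 2))   # stand-in objective
bounds = np.array([[-5.0, 5.0]] * 3)
print(differential_evolution(sphere, bounds))   # minimum near the origin
```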

  14. One-mSv CT colonography: Effect of different iterative reconstruction algorithms on radiologists' performance.

    Science.gov (United States)

    Shin, Cheong-Il; Kim, Se Hyung; Im, Jong Pil; Kim, Sang Gyun; Yu, Mi Hye; Lee, Eun Sun; Han, Joon Koo

    2016-03-01

    To analyze the effect of different reconstruction algorithms on image noise and radiologists' performance at ultra-low-dose CT colonography (CTC) in human subjects. This retrospective study had institutional review board approval, with waiver of the need to obtain informed consent. CTC and subsequent colonoscopy were performed on the same day in 28 patients. CTC was scanned in the supine/prone positions using 120/100 kVp and a fixed 10 mAs, and reconstructed using filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR), and model-based IR (Veo) algorithms. Size-specific dose estimates (SSDE) and effective radiation doses were recorded. Image noise was compared among the three datasets using repeated-measures analysis of variance (ANOVA). Per-polyp sensitivity and figures-of-merit were compared among the datasets using the McNemar test and jackknife alternative free-response receiver operating characteristic (JAFROC) analysis, respectively, by one novice and one expert reviewer in CTC. The mean SSDE and effective radiation dose of CTC were 1.732 mGy and 1.002 mSv, respectively. Mean image noise in the supine/prone position datasets was significantly lowest with Veo (17.2/13.3), followed by ASIR (52.4/38.9) and FBP (69.9/50.8). Per-polyp sensitivity for the two readers was highest with Veo reconstruction (81.0%, 64.3%), followed by ASIR (73.8%, 54.8%) and FBP (57.1%, 50.0%), with statistical significance between Veo and FBP for reader 1 (P=0.002). JAFROC analysis revealed that the figure-of-merit for the detection of polyps was highest with Veo (0.917, 0.786), followed by ASIR (0.881, 0.750) and FBP (0.750, 0.746), with statistical significance between Veo or ASIR and FBP for reader 1 (P<0.05). One-mSv CTC was not feasible using the standard FBP algorithm. However, diagnostic performance, expressed as per-polyp sensitivity and figures-of-merit, can be improved with the application of IR algorithms, particularly Veo. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  15. Odd-graceful labeling algorithm and its implementation of generalized ring core network

    Science.gov (United States)

    Xie, Jianmin; Hong, Wenmei; Zhao, Tinggang; Yao, Bing

    2017-08-01

    The computer implementation of labeling algorithms for special networks has practical guiding significance for the design of computer communication network systems with respect to functionality, reliability and low communication cost. The generalized ring core network is a very important hybrid network topology and the basis of the generalized ring network. In this paper, motivated by the requirements of research on generalized ring network addressing, we design an odd-graceful labeling algorithm for the generalized ring core network when n1, n2, ..., nm ≡ 0 (mod 4), prove that the structure is odd-graceful, implement the corresponding software, and show the practical effectiveness of the algorithm with our experimental data.
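
    For reference, a minimal checker for the odd-graceful property itself: a labeling f of the vertices with distinct values in {0, 1, ..., 2|E|-1} is odd-graceful if the induced edge labels |f(u) - f(v)| are exactly the odd numbers 1, 3, ..., 2|E|-1. The path example is illustrative, not the generalized ring core construction of the paper.

```python
def is_odd_graceful(edges, labeling):
    """Check the odd-graceful conditions for a graph given as an edge list."""
    q = len(edges)
    vertices = {v for e in edges for v in e}
    labels = [labeling[v] for v in vertices]
    # vertex labels must be distinct and lie in {0, ..., 2q - 1}
    if len(set(labels)) != len(labels):
        return False
    if not all(0 <= l <= 2 * q - 1 for l in labels):
        return False
    # induced edge labels must be exactly the odd numbers 1, 3, ..., 2q - 1
    edge_labels = sorted(abs(labeling[u] - labeling[v]) for u, v in edges)
    return edge_labels == list(range(1, 2 * q, 2))

# toy usage: the path a-b-c-d (3 edges) with a known odd-graceful labeling
edges = [("a", "b"), ("b", "c"), ("c", "d")]
print(is_odd_graceful(edges, {"a": 0, "b": 5, "c": 2, "d": 3}))  # True
```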

  16. Validating module network learning algorithms using simulated data.

    Science.gov (United States)

    Michoel, Tom; Maere, Steven; Bonnet, Eric; Joshi, Anagha; Saeys, Yvan; Van den Bulcke, Tim; Van Leemput, Koenraad; van Remortel, Piet; Kuiper, Martin; Marchal, Kathleen; Van de Peer, Yves

    2007-05-03

    In recent years, several authors have used probabilistic graphical models to learn expression modules and their regulatory programs from gene expression data. Despite the demonstrated success of such algorithms in uncovering biologically relevant regulatory relations, further developments in the area are hampered by a lack of tools to compare the performance of alternative module network learning strategies. Here, we demonstrate the use of the synthetic data generator SynTReN for the purpose of testing and comparing module network learning algorithms. We introduce a software package for learning module networks, called LeMoNe, which incorporates a novel strategy for learning regulatory programs. Novelties include the use of a bottom-up Bayesian hierarchical clustering to construct the regulatory programs, and the use of a conditional entropy measure to assign regulators to the regulation program nodes. Using SynTReN data, we test the performance of LeMoNe in a completely controlled situation and assess the effect of the methodological changes we made with respect to an existing software package, namely Genomica. Additionally, we assess the effect of various parameters, such as the size of the data set and the amount of noise, on the inference performance. Overall, application of Genomica and LeMoNe to simulated data sets gave comparable results. However, LeMoNe offers some advantages, one of them being that the learning process is considerably faster for larger data sets. Additionally, we show that the location of the regulators in the LeMoNe regulation programs and their conditional entropy may be used to prioritize regulators for functional validation, and that the combination of the bottom-up clustering strategy with the conditional entropy-based assignment of regulators improves the handling of missing or hidden regulators. We show that data simulators such as SynTReN are very well suited for the purpose of developing, testing and improving module network learning algorithms.
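
    A minimal sketch of the conditional-entropy criterion mentioned above for assigning regulators: choose the candidate whose discretized expression minimizes H(split | regulator) for a module's condition split. Median binning and the toy data are illustrative assumptions, not LeMoNe's actual procedure.

```python
import numpy as np
from collections import Counter

def conditional_entropy(split, reg):
    """H(split | reg) in bits, for two discrete sequences of equal length."""
    n = len(split)
    h = 0.0
    for r_val in set(reg):
        idx = [i for i in range(n) if reg[i] == r_val]
        counts = Counter(split[i] for i in idx)
        p_r = len(idx) / n
        h -= p_r * sum((c / len(idx)) * np.log2(c / len(idx))
                       for c in counts.values())
    return h

def best_regulator(split, candidates):
    """candidates: dict name -> expression array; split: 0/1 per condition."""
    binned = {g: (x > np.median(x)).astype(int) for g, x in candidates.items()}
    return min(binned, key=lambda g: conditional_entropy(split, binned[g]))

split = np.array([0, 0, 0, 1, 1, 1])
cands = {"regA": np.array([1.0, 1.2, 0.9, 3.1, 3.0, 2.8]),   # tracks the split
         "regB": np.array([2.0, 3.0, 1.0, 2.5, 1.5, 3.5])}   # uninformative
print(best_regulator(split, cands))   # -> "regA"
```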

  17. A compressed sensing based reconstruction algorithm for synchrotron source propagation-based X-ray phase contrast computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Melli, Seyed Ali, E-mail: sem649@mail.usask.ca [Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK (Canada); Wahid, Khan A. [Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK (Canada); Babyn, Paul [Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK (Canada); Montgomery, James [College of Medicine, University of Saskatchewan, Saskatoon, SK (Canada); Snead, Elisabeth [Western College of Veterinary Medicine, University of Saskatchewan, Saskatoon, SK (Canada); El-Gayed, Ali [College of Medicine, University of Saskatchewan, Saskatoon, SK (Canada); Pettitt, Murray; Wolkowski, Bailey [College of Agriculture and Bioresources, University of Saskatchewan, Saskatoon, SK (Canada); Wesolowski, Michal [Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK (Canada)

    2016-01-11

    Synchrotron source propagation-based X-ray phase contrast computed tomography is increasingly used in pre-clinical imaging. However, it typically requires a large number of projections, and consequently a large radiation dose, to produce high-quality images. To improve the applicability of this imaging technique, reconstruction algorithms that can reduce the radiation dose and acquisition time without degrading image quality are needed. The proposed research focused on using a novel combination of the Douglas–Rachford splitting and randomized Kaczmarz algorithms to solve large-scale total variation based optimization in a compressed sensing framework, in order to reconstruct 2D images from a reduced number of projections. Visual assessment and quantitative performance evaluations of a synthetic abdomen phantom and a real reconstructed image of an ex-vivo slice of canine prostate tissue demonstrate that the proposed algorithm is competitive with other well-known reconstruction algorithms. An additional potential benefit of reducing the number of projections is a reduction in the time during which motion artifacts can occur if the sample moves during image acquisition. Use of this reconstruction algorithm to reduce the required number of projections in synchrotron source propagation-based X-ray phase contrast computed tomography is an effective form of dose reduction that may pave the way for imaging of in-vivo samples.
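
    A minimal sketch of the randomized Kaczmarz iteration, one ingredient of the combination described above: project the current estimate onto one randomly chosen row constraint a_i . x = y_i per step, sampling rows proportionally to ||a_i||^2. The Douglas–Rachford splitting and TV parts of the full method are omitted here.

```python
import numpy as np

def randomized_kaczmarz(A, y, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    probs = (A ** 2).sum(axis=1)
    probs = probs / probs.sum()          # row sampling ~ squared row norms
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        i = rng.choice(A.shape[0], p=probs)
        a = A[i]
        x += (y[i] - a @ x) / (a @ a) * a   # orthogonal projection onto row i
    return x

# toy usage: consistent overdetermined system, 30 "projections", 10 unknowns
rng = np.random.default_rng(1)
A = rng.normal(size=(30, 10))
x_true = rng.normal(size=10)
print(np.linalg.norm(randomized_kaczmarz(A, A @ x_true) - x_true))  # close to 0
```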

  18. Function-Oriented Networking and On-Demand Routing System in Network Using Ant Colony Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Young-Bo Sim

    2017-11-01

    Full Text Available In this paper, we propose and develop Function-Oriented Networking (FON), a platform for network users. Its philosophy differs from that of technologies aimed at network managers, such as Software-Defined Networking and OpenFlow. Unlike OpenFlow and Network Functions Virtualization (NFV), which do not directly reflect the needs of network users, FON can immediately reflect the demands of network users in the network. It allows the network user to determine network policy directly, so policy can be applied more precisely than policy set by the network manager. This is expected to increase the satisfaction of service users when network users try to provide new services. We developed a FON function that performs on-demand routing for low-delay-required services. We analyzed the characteristics of the Ant Colony Optimization (ACO) algorithm and found that the algorithm is suitable for low-delay-required services. This was also the first implementation in the world of routing software using the ACO algorithm in a real Ethernet network. In order to improve the routing performance, several algorithms of the ACO...
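
    A minimal sketch of ACO applied to low-delay on-demand routing, the use case described above: ants walk from source to destination, reinforce the edges of low-delay paths with pheromone, and evaporation keeps the colony adaptive. The graph, parameters, and delay metric are illustrative assumptions, not the FON implementation.

```python
import random

def aco_route(graph, delays, src, dst, n_ants=50, rounds=30, rho=0.3, seed=0):
    rng = random.Random(seed)
    tau = {e: 1.0 for e in delays}               # pheromone per directed edge
    best_path, best_delay = None, float("inf")
    for _ in range(rounds):
        for _ in range(n_ants):
            node, path, visited = src, [src], {src}
            while node != dst:
                choices = [n for n in graph[node] if n not in visited]
                if not choices:
                    break                         # ant is stuck, discard it
                weights = [tau[(node, n)] / delays[(node, n)] for n in choices]
                node = rng.choices(choices, weights=weights)[0]
                path.append(node)
                visited.add(node)
            else:
                d = sum(delays[(u, v)] for u, v in zip(path, path[1:]))
                if d < best_delay:
                    best_path, best_delay = path, d
                for u, v in zip(path, path[1:]):
                    tau[(u, v)] += 1.0 / d        # reinforce low-delay paths
        tau = {e: (1 - rho) * t for e, t in tau.items()}   # evaporation
    return best_path, best_delay

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
delays = {("A", "B"): 1.0, ("B", "D"): 1.0, ("A", "C"): 2.0, ("C", "D"): 2.0}
print(aco_route(graph, delays, "A", "D"))   # expect (['A', 'B', 'D'], 2.0)
```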