WorldWideScience

Sample records for network reconstruction methods

  1. Methods of graph network reconstruction in personalized medicine.

    Science.gov (United States)

    Danilov, A; Ivanov, Yu; Pryamonosov, R; Vassilevski, Yu

    2016-08-01

    The paper addresses methods for generating individualized computational domains on the basis of medical imaging datasets. The computational domains will be used in one-dimensional (1D) and three-dimensional (3D)-1D coupled hemodynamic models. A 1D hemodynamic model employs a 1D representation of a patient-specific vascular network with a large number of vessels. The 1D network is a graph with nodes in 3D space which bears additional geometric data such as the length and radius of vessels. A 3D hemodynamic model requires a detailed 3D reconstruction of local parts of the vascular network. We propose algorithms which extend the automated segmentation of vascular and tubular structures, generation of centerlines, 1D network reconstruction, correction, and local adaptation. We consider two modes of centerline representation: (i) skeletal segments or sets of connected voxels and (ii) curved paths with corresponding radii. Individualized reconstruction of 1D networks depends on the mode of centerline representation. The efficiency of the proposed algorithms is demonstrated on several examples of 1D network reconstruction. The networks can be used in modeling of blood flows as well as other physiological processes in tubular structures. Copyright © 2015 John Wiley & Sons, Ltd.
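
    A minimal sketch of the data structure such a 1D network implies: a graph whose nodes carry 3D coordinates and whose edges carry vessel radius and length. The class and field names are illustrative assumptions, not the authors' implementation.

    ```python
    # Illustrative 1D vascular network: nodes in 3D space, edges holding
    # vessel radius and length (here the straight-line distance; a curved
    # centerline length could be supplied instead).
    import math

    class VascularNetwork:
        def __init__(self):
            self.nodes = {}   # node_id -> (x, y, z)
            self.edges = {}   # (node_a, node_b) -> {"radius": ..., "length": ...}

        def add_node(self, node_id, x, y, z):
            self.nodes[node_id] = (x, y, z)

        def add_vessel(self, a, b, radius):
            length = math.dist(self.nodes[a], self.nodes[b])
            self.edges[(a, b)] = {"radius": radius, "length": length}

    net = VascularNetwork()
    net.add_node("root", 0.0, 0.0, 0.0)
    net.add_node("branch", 0.0, 0.0, 12.5)
    net.add_vessel("root", "branch", radius=1.8)
    ```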

  2. Gene Expression Network Reconstruction by LEP Method Using Microarray Data

    Directory of Open Access Journals (Sweden)

    Na You

    2012-01-01

    Gene expression network reconstruction using microarray data is widely studied, with the aim of investigating the behavior of a cluster of genes simultaneously. Under the Gaussian assumption, the conditional dependence between genes in the network is fully described by the partial correlation coefficient matrix. Due to the high dimensionality and sparsity, we utilize the LEP method to estimate it in this paper. Compared to existing methods, LEP reaches the highest PPV with the sensitivity controlled at a satisfactory level. A set of gene expression data from the HapMap project is analyzed for illustration.
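
    The partial-correlation machinery the abstract relies on is easy to illustrate. The sketch below substitutes scikit-learn's graphical lasso for the paper's LEP estimator (a generic stand-in, since LEP itself is specific to the paper) and reads network edges off the estimated precision matrix.

    ```python
    # Sparse precision matrix -> partial correlations -> network edges.
    # GraphicalLassoCV stands in for the LEP estimator; data are synthetic.
    import numpy as np
    from sklearn.covariance import GraphicalLassoCV

    X = np.random.default_rng(0).normal(size=(200, 10))   # samples x genes
    P = GraphicalLassoCV().fit(X).precision_

    # Partial correlation: rho_ij = -P_ij / sqrt(P_ii * P_jj)
    d = np.sqrt(np.diag(P))
    rho = -P / np.outer(d, d)
    np.fill_diagonal(rho, 1.0)
    edges = np.abs(rho) > 0.05            # threshold to read off the network
    ```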

  3. An improved Bayesian network method for reconstructing gene regulatory network based on candidate auto selection.

    Science.gov (United States)

    Xing, Linlin; Guo, Maozu; Liu, Xiaoyan; Wang, Chunyu; Wang, Lei; Zhang, Yin

    2017-11-17

    The reconstruction of gene regulatory networks (GRNs) from gene expression data can uncover regulatory relationships among genes and provide deeper insight into the complicated regulation mechanisms of life. However, it remains a great challenge in systems biology and bioinformatics. Over the past years, numerous computational approaches have been developed for this goal, and Bayesian network (BN) methods have drawn most of the attention because of their inherent probabilistic character. However, Bayesian network methods are time consuming and cannot handle large-scale networks due to their high computational complexity, while mutual information-based methods are highly effective but directionless and have a high false-positive rate. To solve these problems, we propose a Candidate Auto Selection (CAS) algorithm based on mutual information and breakpoint detection that restricts the search space in order to accelerate the learning process of the Bayesian network. First, the proposed CAS algorithm automatically selects the neighbor candidates of each node before searching for the best structure of the GRN. Then, based on the CAS algorithm, we propose a globally optimal greedy search method (CAS + G), which focuses on finding the highest-rated network structure, and a local learning method (CAS + L), which focuses on learning the structure faster with little loss of quality. Results show that the proposed CAS algorithm can effectively reduce the search space of Bayesian networks by identifying the neighbor candidates of each node. In our experiments, the CAS + G method outperforms the state-of-the-art method on simulation data for inferring GRNs, and the CAS + L method is significantly faster than the state-of-the-art method with little loss of accuracy. Hence, the CAS-based methods effectively decrease the computational complexity of Bayesian network learning and are more suitable for GRN inference.
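
    The candidate-selection step lends itself to a short sketch: rank each gene's potential neighbours by mutual information and keep only the strongest before the structure search. The paper couples this with breakpoint detection; the fixed top-k cutoff below is a simplification, not the CAS rule itself.

    ```python
    # Restrict each node's candidate neighbours by mutual information
    # before Bayesian-network structure learning (simplified CAS-style step).
    import numpy as np
    from sklearn.feature_selection import mutual_info_regression

    def candidate_neighbors(expr, k=5):
        """expr: samples x genes; returns gene index -> top-k candidate indices."""
        n_genes = expr.shape[1]
        candidates = {}
        for j in range(n_genes):
            others = [i for i in range(n_genes) if i != j]
            mi = mutual_info_regression(expr[:, others], expr[:, j], random_state=0)
            candidates[j] = [others[i] for i in np.argsort(mi)[::-1][:k]]
        return candidates
    ```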

  4. A method of reconstructing the spatial measurement network by mobile measurement transmitter for shipbuilding

    Science.gov (United States)

    Guo, Siyang; Lin, Jiarui; Yang, Linghui; Ren, Yongjie; Guo, Yin

    2017-07-01

    The workshop Measurement Position System (wMPS) is a distributed measurement system suitable for large-scale metrology. However, some measurement problems are inevitable in the shipbuilding industry, such as obstruction by obstacles and a limited measurement range. To deal with these problems, this paper presents a method of reconstructing the spatial measurement network using a mobile transmitter. A high-precision coordinate control network with more than six target points is established. The mobile measuring transmitter can be added to the measurement network by referencing this coordinate control network with the spatial resection method. This method reconstructs the measurement network and broadens the measurement scope efficiently. To verify the method, two comparison experiments are designed with a laser tracker as the reference. The results demonstrate that the accuracy of point-to-point length is better than 0.4 mm and the accuracy of coordinate measurement is better than 0.6 mm.
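
    The core geometric step, relocating the moved transmitter against the known control points, amounts to estimating a rigid transform between matched point sets. The SVD-based Kabsch solution below is a standard way to do this; it stands in for, and is not necessarily identical to, the paper's spatial resection procedure.

    ```python
    # Rigid transform (R, t) with dst ~ R @ src + t from N >= 3 matched
    # 3D control points, via the SVD-based Kabsch method.
    import numpy as np

    def rigid_transform(src, dst):
        cs, cd = src.mean(axis=0), dst.mean(axis=0)
        H = (src - cs).T @ (dst - cd)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:      # guard against a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, cd - R @ cs
    ```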

  5. Bayesian network reconstruction using systems genetics data: comparison of MCMC methods.

    Science.gov (United States)

    Tasaki, Shinya; Sauerwine, Ben; Hoff, Bruce; Toyoshiba, Hiroyoshi; Gaiteri, Chris; Chaibub Neto, Elias

    2015-04-01

    Reconstructing biological networks using high-throughput technologies has the potential to produce condition-specific interactomes. But are these reconstructed networks a reliable source of biological interactions? Do some network inference methods offer dramatically improved performance on certain types of networks? To facilitate the use of network inference methods in systems biology, we report a large-scale simulation study comparing the ability of Markov chain Monte Carlo (MCMC) samplers to reverse engineer Bayesian networks. The MCMC samplers we investigated included foundational and state-of-the-art Metropolis-Hastings and Gibbs sampling approaches, as well as novel samplers we have designed. To enable a comprehensive comparison, we simulated gene expression and genetics data from known network structures under a range of biologically plausible scenarios. We examine the overall quality of network inference via different methods, as well as how their performance is affected by network characteristics. Our simulations reveal that network size, edge density, and strength of gene-to-gene signaling are major parameters that differentiate the performance of various samplers. Specifically, more recent samplers including our novel methods outperform traditional samplers for highly interconnected large networks with strong gene-to-gene signaling. Our newly developed samplers show comparable or superior performance to the top existing methods. Moreover, this performance gain is strongest in networks with biologically oriented topology, which indicates that our novel samplers are suitable for inferring biological networks. The performance of MCMC samplers in this simulation framework can guide the choice of methods for network reconstruction using systems genetics data. Copyright © 2015 by the Genetics Society of America.
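
    The general shape of a Metropolis-Hastings structure sampler of the kind compared in the study can be sketched in a few lines: propose a single-edge change and accept it with the usual MH ratio on the network score. Here `score` is a user-supplied log marginal likelihood or BIC-style function, acyclicity checking is omitted, and this schematic corresponds to none of the specific samplers evaluated.

    ```python
    # One Metropolis-Hastings step over directed network structures:
    # toggle a random edge, accept with probability min(1, exp(delta_score)).
    import numpy as np

    def mh_structure_step(adj, score, rng):
        i, j = rng.integers(adj.shape[0], size=2)
        if i == j:
            return adj
        proposal = adj.copy()
        proposal[i, j] = 1 - proposal[i, j]
        if np.log(rng.random()) < score(proposal) - score(adj):
            return proposal
        return adj
    ```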

  6. Neural network CT image reconstruction method for small amount of projection data

    CERN Document Server

    Ma, X F; Takeda, T

    2000-01-01

    This paper presents a new method for two-dimensional image reconstruction using a multi-layer neural network. While the conventional objective function for such a neural network is a sum of squared errors of the output data, we define an objective function composed of a sum of squared residuals of an integral equation. By employing an appropriate numerical line integral for this integral equation, we can construct a neural network that can be used for CT image reconstruction in cases with a small amount of projection data. We applied this method to some model problems and obtained satisfactory results. This method is especially useful for analyses of laboratory experiments or field observations where only a small amount of projection data is available, in contrast to well-developed medical applications.
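
    The objective described above plausibly takes the following discretized form, where $p_i$ is the measured projection along ray $i$, $w_{ij}$ is the length of ray $i$ inside pixel $j$ (the numerical line integral), and $f_j$ is the pixel value produced by the network:

    $$E = \sum_i \Big( \sum_j w_{ij} f_j - p_i \Big)^{2},$$

    in place of the conventional sum of squared output errors $E = \sum_k (y_k - d_k)^2$. Minimizing this residual ties the network output directly to the projection data rather than to target images.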

  7. Universal data-based method for reconstructing complex networks with binary-state dynamics

    Science.gov (United States)

    Li, Jingwen; Shen, Zhesi; Wang, Wen-Xu; Grebogi, Celso; Lai, Ying-Cheng

    2017-03-01

    To understand, predict, and control complex networked systems, a prerequisite is to reconstruct the network structure from observable data. Despite recent progress in network reconstruction, binary-state dynamics that are ubiquitous in nature, technology, and society still present an outstanding challenge in this field. Here we offer a framework for reconstructing complex networks with binary-state dynamics by developing a universal data-based linearization approach that is applicable to systems with linear, nonlinear, discontinuous, or stochastic dynamics governed by monotonic functions. The linearization procedure enables us to convert the network reconstruction into a sparse signal reconstruction problem that can be resolved through convex optimization. We demonstrate generally high reconstruction accuracy for a number of complex networks associated with distinct binary-state dynamics, using binary data contaminated by noise and missing values. Our framework is completely data driven, efficient, and robust, and does not require any a priori knowledge about the detailed dynamical process on the network. The framework represents a general paradigm for reconstructing, understanding, and exploiting complex networked systems with binary-state dynamics.

  8. Indian-ink perfusion based method for reconstructing continuous vascular networks in whole mouse brain.

    Directory of Open Access Journals (Sweden)

    Songchao Xue

    The topology of the cerebral vasculature, which is the energy transport corridor of the brain, can be used to study cerebral circulatory pathways. Limited by the restrictions of vascular markers and imaging methods, studies of cerebral vascular structure have mainly focused either on observation of the macro vessels in a whole brain or on imaging of the micro vessels in a small region. Simultaneous vascular studies of arteries, veins, and capillaries have not been achieved in the whole brain of mammals. Here, we have combined an improved gelatin-Indian ink vessel perfusion process with Micro-Optical Sectioning Tomography to image the vessel network of an entire mouse brain. With 17 days of work, an integral dataset for the entire cerebral vasculature was acquired. The voxel resolution is 0.35×0.4×2.0 µm³ for the whole brain. Besides the observation of fine and complex vascular networks in the reconstructed slices and entire brain views, continuous vascular tracking is demonstrated in the deep thalamus. This study provides an effective method for studying the entire macro and micro vascular networks of the mouse brain simultaneously.

  9. Application of Neural Networks for Energy Reconstruction

    CERN Document Server

    Damgov, Jordan

    2002-01-01

    The possibility of using Neural Networks for reconstruction of the energy deposited in the calorimetry system of the CMS detector is investigated. It is shown that using a feed-forward neural network, good linearity, Gaussian energy distribution, and good energy resolution can be achieved. Significant improvement of the energy resolution and linearity is reached in comparison with other weighting methods for energy reconstruction.

  10. A fast and efficient gene-network reconstruction method from multiple over-expression experiments

    Directory of Open Access Journals (Sweden)

    Thurner Stefan

    2009-08-01

    Background: Reverse engineering of gene regulatory networks presents one of the big challenges in systems biology. Gene regulatory networks are usually inferred from a set of single-gene over-expression and/or knockout experiments. Functional relationships between genes are retrieved either from the steady-state gene expressions or from respective time series. Results: We present a novel algorithm for gene network reconstruction on the basis of steady-state gene-chip data from over-expression experiments. The algorithm is based on a straightforward solution of a linear gene-dynamics equation, where experimental data are fed in as a first predictor for the solution. We compare the algorithm's performance with the NIR algorithm, both on the well-known E. coli experimental data and on in-silico experiments. Conclusion: We show the superiority of the proposed algorithm in the number of correctly reconstructed links and discuss computational time and robustness. The proposed algorithm is not limited by combinatorial explosion problems and can in principle be used for large networks.
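
    The steady-state formulation behind such methods can be made concrete. Assuming linearized dynamics $\dot{x} = Ax + p$ near steady state, each over-expression experiment $k$ with known perturbation $p_k$ yields $A x_k = -p_k$; stacking experiments gives $AX = -P$, so $A$ follows by least squares. The numpy sketch below illustrates this NIR-style setup, not the authors' exact algorithm.

    ```python
    # Estimate the interaction matrix A from steady states of perturbation
    # experiments: A X = -P, solved by least squares on synthetic data.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 8                                   # genes = experiments here
    A_true = -np.eye(n) + 0.2 * rng.normal(size=(n, n))
    P = np.eye(n)                           # one over-expression per experiment
    X = np.linalg.solve(A_true, -P)         # columns are steady states
    A_hat = np.linalg.lstsq(X.T, -P.T, rcond=None)[0].T
    ```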

  11. Boolean regulatory network reconstruction using literature based knowledge with a genetic algorithm optimization method.

    Science.gov (United States)

    Dorier, Julien; Crespo, Isaac; Niknejad, Anne; Liechti, Robin; Ebeling, Martin; Xenarios, Ioannis

    2016-10-06

    Prior knowledge networks (PKNs) provide a framework for the development of computational biological models, including Boolean models of regulatory networks, which are the focus of this work. PKNs are created by a painstaking process of literature curation, and generally describe all relevant regulatory interactions identified using a variety of experimental conditions and systems, such as specific cell types or tissues. Certain of these regulatory interactions may not occur in all biological contexts of interest, and their presence may dramatically change the dynamical behaviour of the resulting computational model, hindering the elucidation of the underlying mechanisms and reducing the usefulness of model predictions. Methods are therefore required to generate optimized contextual network models from generic PKNs. We developed a new approach to generate and optimize Boolean networks based on a given PKN. Using a genetic algorithm, a model network is built as a sub-network of the PKN and trained against experimental data to reproduce the experimentally observed behaviour in terms of attractors and the transitions that occur between them under specific perturbations. The resulting model network is therefore contextualized to the experimental conditions and constitutes a dynamical Boolean model closer to the observed biological process used to train the model than the original PKN. Such a model can then be interrogated to simulate response under perturbation, to detect stable states and their properties, to gain insights into the underlying mechanisms and to generate new testable hypotheses. Generic PKNs attempt to synthesize knowledge of all interactions occurring in a biological process of interest, irrespective of the specific biological context. This limits their usefulness as a basis for the development of context-specific, predictive dynamical Boolean models. The optimization method presented in this article produces specific, contextualized models from generic PKNs.

  12. Study on Reverse Reconstruction Method of Vehicle Group Situation in Urban Road Network Based on Driver-Vehicle Feature Evolution

    Directory of Open Access Journals (Sweden)

    Xiaoyuan Wang

    2017-01-01

    Vehicle group situation is the status and arrangement of the dynamic formation composed of a target vehicle and its neighboring traffic entities. It is a concept frequently involved in research on traffic flow theory, especially active vehicle safety. Studying the vehicle group situation in depth is of great significance for traffic safety. Taking the three-lane condition as an example, the characteristics of the target vehicle and its neighboring vehicles were jointly considered to reconstruct the vehicle group situation in this paper. Gamma distribution theory was used to identify the vehicle group situation when the target vehicle arrived at the end of the study area. From the perspective of driver-vehicle feature evolution, a reverse reconstruction method for the vehicle group situation in an urban road network is proposed. Results of actual driving, virtual driving, and simulation experiments showed that the model established in this paper is reasonable and feasible.

  13. BoostGAPFILL: improving the fidelity of metabolic network reconstructions through integrated constraint and pattern-based methods.

    Science.gov (United States)

    Oyetunde, Tolutola; Zhang, Muhan; Chen, Yixin; Tang, Yinjie; Lo, Cynthia

    2017-02-15

    Metabolic network reconstructions are often incomplete. Constraint-based and pattern-based methodologies have been used for automated gap filling of these networks, each with its own strengths and weaknesses. Moreover, since validation of the hypotheses made by gap-filling tools requires experimentation, it is challenging to benchmark performance and make improvements other than those related to speed and scalability. We present BoostGAPFILL, an open source tool that leverages both constraint-based and machine learning methodologies for hypothesis generation in gap filling and metabolic model refinement. BoostGAPFILL uses metabolite patterns in the incomplete network, captured using a matrix factorization formulation, to constrain the set of reactions used to fill gaps in a metabolic network. We formulate a testing framework based on the available metabolic reconstructions and demonstrate the superiority of BoostGAPFILL over state-of-the-art gap-filling tools. We randomly delete a number of reactions from a metabolic network and rate the different algorithms on their ability both to predict the deleted reactions from a universal set and to fill gaps. For most metabolic network reconstructions tested, BoostGAPFILL shows above 60% precision and recall, which is more than twice that of other existing tools. A MATLAB open source implementation is available at https://github.com/Tolutola/BoostGAPFILL. Contact: toyetunde@wustl.edu or muhan@wustl.edu. Supplementary data are available at Bioinformatics online.
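
    The benchmarking protocol described above boils down to precision and recall over reaction sets; a minimal sketch with placeholder reaction identifiers:

    ```python
    # Score a gap-filling run: which of the deliberately deleted reactions
    # did the tool put back, and how much of what it added was correct?
    def precision_recall(predicted, deleted):
        predicted, deleted = set(predicted), set(deleted)
        tp = len(predicted & deleted)
        precision = tp / len(predicted) if predicted else 0.0
        recall = tp / len(deleted) if deleted else 0.0
        return precision, recall

    print(precision_recall({"R1", "R2", "R3"}, {"R1", "R3", "R9"}))  # ~(0.67, 0.67)
    ```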

  14. Network reconstruction via density sampling

    CERN Document Server

    Squartini, Tiziano; Gabrielli, Andrea; Garlaschelli, Diego

    2016-01-01

    Reconstructing weighted networks from partial information is necessary in many important circumstances, e.g. for a correct estimation of systemic risk. It has been shown that, in order to achieve an accurate reconstruction, it is crucial to reliably replicate the empirical degree sequence, which is however unknown in many realistic situations. More recently, it has been found that the knowledge of the degree sequence can be replaced by the knowledge of the strength sequence, which is typically accessible, complemented by that of the total number of links, thus considerably relaxing the observational requirements. Here we further relax these requirements and devise a procedure valid even when the total number of links is unavailable. We assume that, apart from the heterogeneity induced by the degree sequence itself, the network is homogeneous, so that its link density can be estimated by sampling subsets of nodes with representative density. We show that the best way of sampling nodes is the random selecti...
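
    The sampling idea is easy to state in code: the density of the subgraph induced by a random node subset estimates the global link density, so averaging over a few subsets substitutes for knowing the total number of links. A minimal sketch, assuming a binary adjacency matrix with zero diagonal:

    ```python
    # Estimate link density by averaging the density of induced subgraphs
    # over random node subsets.
    import numpy as np

    def sampled_density(adj, sample_size, n_trials=100, seed=0):
        rng = np.random.default_rng(seed)
        n = adj.shape[0]
        estimates = []
        for _ in range(n_trials):
            idx = rng.choice(n, size=sample_size, replace=False)
            sub = adj[np.ix_(idx, idx)]
            estimates.append(sub.sum() / (sample_size * (sample_size - 1)))
        return float(np.mean(estimates))
    ```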

  15. Reconstruction of network topology using status-time-series data

    Science.gov (United States)

    Pandey, Pradumn Kumar; Badarla, Venkataramana

    2018-01-01

    Uncovering the heterogeneous connection pattern of a networked system from available status-time-series (STS) data of a dynamical process on the network is of great interest in network science and is known as a reverse engineering problem. Dynamical processes on a network are affected by the structure of the network. The dependency between the diffusion dynamics and the structure of the network can be utilized to retrieve the connection pattern from the diffusion data. Information about the network structure can also help to devise control of the dynamics on the network. In this paper, we consider the problem of network reconstruction from available status-time-series (STS) data using matrix analysis. The proposed method of network reconstruction from STS data is tested successfully under susceptible-infected-susceptible (SIS) diffusion dynamics on real-world and computer-generated benchmark networks. The high accuracy and efficiency of the proposed reconstruction procedure define the novelty of the method. Our proposed method outperforms the compressed sensing theory (CST)-based method of network reconstruction using STS data. Further, the same reconstruction procedure is applied to weighted networks, where the ordering of the edges is identified with high accuracy.

  16. Pseudo-proxy evaluation of climate field reconstruction methods of North Atlantic climate based on an annually resolved marine proxy network

    Science.gov (United States)

    Pyrina, Maria; Wagner, Sebastian; Zorita, Eduardo

    2017-10-01

    Two statistical methods are tested to reconstruct the interannual variations in past sea surface temperatures (SSTs) of the North Atlantic (NA) Ocean over the past millennium based on annually resolved and absolutely dated marine proxy records of the bivalve mollusk Arctica islandica. The methods are tested in a pseudo-proxy experiment (PPE) setup using state-of-the-art climate models (CMIP5 Earth system models) and reanalysis data from the COBE2 SST data set. The methods were applied in the virtual reality provided by global climate simulations and reanalysis data to reconstruct the past NA SSTs using pseudo-proxy records that mimic the statistical characteristics and network of Arctica islandica. The multivariate linear regression methods evaluated here are principal component regression and canonical correlation analysis. Differences in the skill of the climate field reconstruction (CFR) are assessed according to different calibration periods and different proxy locations within the NA basin. The choice of the climate model used as a surrogate reality in the PPE has a more profound effect on the CFR skill than the calibration period and the statistical reconstruction method. The differences between the two methods are clearer for the MPI-ESM model due to its higher spatial resolution in the NA basin. The pseudo-proxy results of the CCSM4 model are closer to the pseudo-proxy results based on the reanalysis data set COBE2. Conducting PPEs using noise-contaminated pseudo-proxies instead of noise-free pseudo-proxies is important for the evaluation of the methods, as more spatial differences in the reconstruction skill are revealed. Both methods are appropriate for the reconstruction of the temporal evolution of the NA SSTs, even though they lead to a great loss of variance away from the proxy sites. Under reasonable assumptions about the characteristics of the non-climate noise in the proxy records, our results show that the marine network of Arctica islandica can

  17. Robust Reconstruction of Complex Networks from Sparse Data

    Science.gov (United States)

    Han, Xiao; Shen, Zhesi; Wang, Wen-Xu; Di, Zengru

    2015-01-01

    Reconstructing complex networks from measurable data is a fundamental problem for understanding and controlling collective dynamics of complex networked systems. However, a significant challenge arises when we attempt to decode structural information hidden in limited amounts of data accompanied by noise and in the presence of inaccessible nodes. Here, we develop a general framework for robust reconstruction of complex networks from sparse and noisy data. Specifically, we decompose the task of reconstructing the whole network into recovering local structures centered at each node. Thus, the natural sparsity of complex networks ensures a conversion from the local structure reconstruction into a sparse signal reconstruction problem that can be addressed by using the lasso, a convex optimization method. We apply our method to evolutionary games, transportation, and communication processes taking place in a variety of model and real complex networks, finding that universal high reconstruction accuracy can be achieved from sparse data in spite of noise in time series and missing data of partial nodes. Our approach opens new routes to the network reconstruction problem and has potential applications in a wide range of fields.
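
    The per-node decomposition described above has a compact generic form: each node's incoming links are recovered by a lasso regression from limited, noisy observations, and the local solutions are assembled into the full network. Data, noise level, and threshold below are purely illustrative.

    ```python
    # Node-by-node sparse recovery with the lasso in the undersampled regime
    # (fewer observations than nodes), on synthetic data.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(1)
    n_nodes, n_samples = 50, 30
    A_true = (rng.random((n_nodes, n_nodes)) < 0.05).astype(float)
    np.fill_diagonal(A_true, 0.0)

    X = rng.normal(size=(n_samples, n_nodes))        # observed nodal signals
    A_hat = np.zeros_like(A_true)
    for i in range(n_nodes):
        y = X @ A_true[i] + 0.01 * rng.normal(size=n_samples)
        A_hat[i] = Lasso(alpha=0.01).fit(X, y).coef_
    recovered = np.abs(A_hat) > 0.1                  # prune small coefficients
    ```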

  18. Evolutionary optimization of network reconstruction from derivative-variable correlations

    Science.gov (United States)

    Leguia, Marc G.; Andrzejak, Ralph G.; Levnajić, Zoran

    2017-08-01

    Topologies of real-world complex networks are rarely accessible, but can often be reconstructed from experimentally obtained time series via suitable network reconstruction methods. Extending our earlier work on methods based on statistics of derivative-variable correlations, we here present a new method built on integrating an evolutionary optimization algorithm into the derivative-variable correlation method. Results obtained from our modification of the method generally outperform the original results, demonstrating the suitability of evolutionary optimization logic for network reconstruction problems. We show the method's usefulness in realistic scenarios where the reconstruction precision can be limited by the nature of the time series. We also discuss important limitations arising from the various dynamical regimes that time series can belong to.

  19. Reconstruction of a random phase dynamics network from observations

    Science.gov (United States)

    Pikovsky, A.

    2018-01-01

    We consider networks of coupled phase oscillators of different complexity: Kuramoto-Daido-type networks, generalized Winfree networks, and hypernetworks with triple interactions. For these setups an inverse problem of reconstruction of the network connections and of the coupling function from the observations of the phase dynamics is addressed. We show how a reconstruction based on the minimization of the squared error can be implemented in all these cases. Examples include random networks with full disorder both in the connections and in the coupling functions, as well as networks where the coupling functions are taken from experimental data of electrochemical oscillators. The method can be directly applied to asynchronous dynamics of units, while in the case of synchrony, additional phase resettings are necessary for reconstruction.
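
    For the simplest Kuramoto-type ansatz, $\dot\theta_i = \omega_i + \sum_j A_{ij} \sin(\theta_j - \theta_i)$, the squared-error minimization described above reduces to one linear least-squares problem per oscillator. The sketch below assumes plain sine coupling and uniformly sampled phases; the paper handles far more general coupling functions.

    ```python
    # Least-squares reconstruction of natural frequencies and couplings from
    # a phase time series theta of shape (T, N), sampled every dt.
    import numpy as np

    def reconstruct_couplings(theta, dt):
        T, N = theta.shape
        dtheta = (theta[1:] - theta[:-1]) / dt       # finite-difference velocities
        omega, A = np.zeros(N), np.zeros((N, N))
        for i in range(N):
            others = [j for j in range(N) if j != i]
            cols = [np.ones(T - 1)] + [np.sin(theta[:-1, j] - theta[:-1, i])
                                       for j in others]
            coef, *_ = np.linalg.lstsq(np.column_stack(cols), dtheta[:, i],
                                       rcond=None)
            omega[i], A[i, others] = coef[0], coef[1:]
        return omega, A
    ```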

  1. Reconstruction of periodic signals using neural networks

    Directory of Open Access Journals (Sweden)

    José Danilo Rairán Antolines

    2014-01-01

    In this paper, we reconstruct a periodic signal using two neural networks. The first network is trained to approximate the period of a signal, and the second network estimates the corresponding coefficients of the signal's Fourier expansion. The reconstruction strategy consists in minimizing the mean-square error via backpropagation algorithms over a single neuron with a sine transfer function. Additionally, this paper presents a mathematical proof of the quality of the approximation as well as a first modification of the algorithm, which requires less data to reach the same estimation, thus making the algorithm suitable for real-time implementations.
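
    The two-stage idea, estimate the period first and the Fourier coefficients second, can be mimicked with plain numpy instead of neural networks. This is a baseline sketch assuming uniform sampling, not the paper's algorithm.

    ```python
    # Stage 1: period from the dominant FFT bin. Stage 2: least-squares fit
    # of a truncated Fourier expansion at that fundamental frequency.
    import numpy as np

    def reconstruct_periodic(t, y, n_harmonics=3):
        freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])
        spectrum = np.abs(np.fft.rfft(y - y.mean()))
        f0 = freqs[1:][np.argmax(spectrum[1:])]      # skip the DC bin
        cols = [np.ones_like(t)]
        for k in range(1, n_harmonics + 1):
            cols += [np.cos(2 * np.pi * k * f0 * t), np.sin(2 * np.pi * k * f0 * t)]
        coef, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
        return 1.0 / f0, coef                        # period and coefficients
    ```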

  2. Fast reconstruction of compact context-specific metabolic network models.

    Directory of Open Access Journals (Sweden)

    Nikos Vlassis

    2014-01-01

    Systemic approaches to the study of a biological cell or tissue rely increasingly on the use of context-specific metabolic network models. The reconstruction of such a model from high-throughput data can routinely involve large numbers of tests under different conditions and extensive parameter tuning, which calls for fast algorithms. We present fastcore, a generic algorithm for reconstructing context-specific metabolic network models from global genome-wide metabolic network models such as Recon X. fastcore takes as input a core set of reactions that are known to be active in the context of interest (e.g., cell or tissue), and it searches for a flux-consistent subnetwork of the global network that contains all reactions from the core set and a minimal set of additional reactions. Our key observation is that a minimal consistent reconstruction can be defined via a set of sparse modes of the global network, and fastcore iteratively computes such a set via a series of linear programs. Experiments on liver data demonstrate speedups of several orders of magnitude, and significantly more compact reconstructions, over a rival method. Given its simplicity and its excellent performance, fastcore can form the backbone of many future metabolic network reconstruction algorithms.

  3. Reconstructing complex networks without time series

    Science.gov (United States)

    Ma, Chuang; Zhang, Hai-Feng; Lai, Ying-Cheng

    2017-08-01

    In the real world there are situations where the network dynamics are transient (e.g., various spreading processes) and the final nodal states represent the available data. Can the network topology be reconstructed based on data that are not time series? Assuming that an ensemble of the final nodal states resulting from statistically independent initial triggers (signals) of the spreading dynamics is available, we develop a maximum likelihood estimation-based framework to accurately infer the interaction topology. For dynamical processes that result in a binary final state, the framework enables network reconstruction based solely on the final nodal states. Additional information, such as the first arrival time of each signal at each node, can improve the reconstruction accuracy. For processes with a uniform final state, the first arrival times can be exploited to reconstruct the network. We derive a mathematical theory for our framework and validate its performance and robustness using various combinations of spreading dynamics and real-world network topologies.

  4. Magnetic flux reconstruction methods for shaped tokamaks

    Energy Technology Data Exchange (ETDEWEB)

    Tsui, Chi-Wa [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)]

    1993-12-01

    The use of a variational method permits the Grad-Shafranov (GS) equation to be solved by reducing the problem of solving the 2D non-linear partial differential equation to the problem of minimizing a function of several variables. This high speed algorithm approximately solves the GS equation given a parameterization of the plasma boundary and the current profile (p′ and FF′ functions). The author treats the current profile parameters as unknowns. The goal is to reconstruct the internal magnetic flux surfaces of a tokamak plasma and the toroidal current density profile from the external magnetic measurements. This is a classic problem of inverse equilibrium determination. The current profile parameters can be evaluated by several different matching procedures. Matching of magnetic flux and field at the probe locations using the Biot-Savart law and magnetic Green's function provides a robust method of magnetic reconstruction. The matching of poloidal magnetic field on the plasma surface provides a unique method of identifying the plasma current profile. However, the power of this method is greatly compromised by the experimental errors of the magnetic signals. The Casing Principle provides a very fast way to evaluate the plasma contribution to the magnetic signals. It has the potential of being a fast matching method. The performance of this method is hindered by the accuracy of the poloidal magnetic field computed from the equilibrium solver. A flux reconstruction package has been implemented which integrates a vacuum field solver using a filament model for the plasma, a multi-layer perceptron neural network as an interface, and the volume integration of plasma current density using Green's functions as a matching method for the current profile parameters. The flux reconstruction package is applied to compare with the ASEQ and EFIT data. The results are promising.

  5. Computing autocatalytic sets to unravel inconsistencies in metabolic network reconstructions

    DEFF Research Database (Denmark)

    Schmidt, R.; Waschina, S.; Boettger-Schmidt, D.

    2015-01-01

    by inherent inconsistencies and gaps. RESULTS: Here we present a novel method to validate metabolic network reconstructions based on the concept of autocatalytic sets. Autocatalytic sets correspond to collections of metabolites that, besides enzymes and a growth medium, are required to produce all biomass...

  6. Systemic risk analysis in reconstructed economic and financial networks

    CERN Document Server

    Cimini, Giulio; Gabrielli, Andrea; Garlaschelli, Diego

    2014-01-01

    The assessment of fundamental properties of economic and financial systems, such as systemic risk, is systematically hindered by privacy issues that put severe limitations on the available information. Here we introduce a novel method to reconstruct partially accessible networked systems of this kind. The method is based on the knowledge of the fitnesses, i.e., intrinsic node-specific properties, and of the number of connections of only a limited subset of nodes. Such information is used to calibrate a directed configuration model which can generate ensembles of networks intended to represent the real system, so that the real network properties can be estimated within the generated ensemble in terms of mean values of the observables. Here we focus on estimating those properties that are commonly used to measure the network resilience to shocks and crashes. Tests on both artificial and empirical networks show that the method is remarkably robust with respect to the limitedness of the information available...

  7. Reconstruction and Application of Protein–Protein Interaction Network

    Directory of Open Access Journals (Sweden)

    Tong Hao

    2016-06-01

    The protein-protein interaction network (PIN) is a useful tool for systematic investigation of the complex biological activities in the cell. With increasing interest in proteome-wide interaction networks, PINs have been reconstructed for many species, including viruses, bacteria, plants, animals, and humans. With the development of biological techniques, the reconstruction methods for PINs have been further improved. The PIN has gradually penetrated many fields in biological research. In this work we systematically review the development of PINs over the past fifteen years, with respect to their reconstruction and applications in function annotation, subsystem investigation, evolution analysis, hub protein analysis, and regulation mechanism analysis. Due to the significant role of PINs in the in-depth exploration of biological process mechanisms, PINs will be preferred by more and more researchers for the systematic study of protein systems in various kinds of organisms.

  8. Reconstruction of stochastic temporal networks through diffusive arrival times

    Science.gov (United States)

    Li, Xun; Li, Xiang

    2017-01-01

    Temporal networks have opened a new dimension in defining and quantification of complex interacting systems. Our ability to identify and reproduce time-resolved interaction patterns is, however, limited by the restricted access to empirical individual-level data. Here we propose an inverse modelling method based on first-arrival observations of the diffusion process taking place on temporal networks. We describe an efficient coordinate-ascent implementation for inferring stochastic temporal networks that builds in particular but not exclusively on the null model assumption of mutually independent interaction sequences at the dyadic level. The results of benchmark tests applied on both synthesized and empirical network data sets confirm the validity of our algorithm, showing the feasibility of statistically accurate inference of temporal networks only from moderate-sized samples of diffusion cascades. Our approach provides an effective and flexible scheme for the temporally augmented inverse problems of network reconstruction and has potential in a broad variety of applications.

  9. Genome-scale reconstruction of the Saccharomyces cerevisiae metabolic network

    DEFF Research Database (Denmark)

    Förster, Jochen; Famili, I.; Fu, P.

    2003-01-01

    and the environment were included. A total of 708 structural open reading frames (ORFs) were accounted for in the reconstructed network, corresponding to 1035 metabolic reactions. Further, 140 reactions were included on the basis of biochemical evidence, resulting in a genome-scale reconstructed metabolic network ... with Escherichia coli. The reconstructed metabolic network is the first comprehensive network for a eukaryotic organism, and it may be used as the basis for in silico analysis of phenotypic functions...

  10. Distributed Reconstruction via Alternating Direction Method

    Directory of Open Access Journals (Sweden)

    Linyuan Wang

    2013-01-01

    With the development of compressive sensing theory, image reconstruction from few-view projections has received considerable research attention in the field of computed tomography (CT). Total-variation (TV)-based CT image reconstruction has been shown to be experimentally capable of producing accurate reconstructions from sparse-view data. In this study, a distributed reconstruction algorithm based on TV minimization has been developed. The algorithm is very simple as it uses the alternating direction method. The proposed method can accelerate the alternating direction total variation minimization (ADTVM) algorithm without losing accuracy.

  11. Method for positron emission mammography image reconstruction

    Science.gov (United States)

    Smith, Mark Frederick

    2004-10-12

    An image reconstruction method comprising accepting coincidence data from either a data file or in real time from a pair of detector heads, culling event data that is outside a desired energy range, optionally saving the desired data for each detector position or for each pair of detector pixels on the two detector heads, and then reconstructing the image either by backprojection image reconstruction or by iterative image reconstruction. In the backprojection image reconstruction mode, rays are traced between centers of lines of response (LORs), counts are then either allocated by nearest-pixel interpolation or allocated by an overlap method and then corrected for geometric effects and attenuation, and the data file is updated. If the iterative image reconstruction option is selected, one implementation is to compute a grid Siddon ray tracing, and to perform maximum likelihood expectation maximization (MLEM) computed by either: a) tracing parallel rays between subpixels on opposite detector heads; or b) tracing rays between randomized endpoint locations on opposite detector heads.

  12. Innovative rapid construction/reconstruction methods.

    Science.gov (United States)

    2005-07-01

    Innovative construction and reconstruction methods provide the opportunity to significantly reduce the time of roadway projects while maintaining the necessary quality of workmanship. The need for these rapid methods stems from the increase in ...

  13. Reconstructing networks of pathways via significance analysis of their intersections

    Directory of Open Access Journals (Sweden)

    Francesconi Mirko

    2008-04-01

    Background: Significance analysis at the single-gene level may suffer from the limited number of samples and experimental noise, which can severely limit the power of the chosen statistical test. This problem is typically approached by applying post hoc corrections to control the false discovery rate, without taking into account prior biological knowledge. Pathway or gene ontology analysis can provide an alternative way to relax the significance threshold applied to single genes and may lead to a better biological interpretation. Results: Here we propose a new analysis method based on the study of networks of pathways. These networks are reconstructed considering both the significance of single pathways (network nodes) and the intersections between them (links). We apply this method for the reconstruction of networks of pathways to two gene expression datasets: the first one obtained from a c-Myc rat fibroblast cell line expressing a conditional Myc-estrogen receptor oncoprotein; the second one obtained from the comparison of Acute Myeloid Leukemia and Acute Lymphoblastic Leukemia derived from bone marrow samples. Conclusion: Our method extends statistical models that have recently been adopted for the significance analysis of functional groups of genes to infer links between these groups. We show that groups of genes at the interface between different pathways can be considered relevant even if the pathways they belong to are not significant by themselves.
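
    A natural way to score a link between two pathways, in the spirit of the method above, is the significance of their gene overlap under a hypergeometric test. The gene sets and background size below are placeholders; the paper's exact statistical model may differ.

    ```python
    # P-value for observing at least the given overlap between two gene sets
    # drawn from a common background, via the hypergeometric survival function.
    from scipy.stats import hypergeom

    def intersection_pvalue(pathway_a, pathway_b, n_background):
        a, b = set(pathway_a), set(pathway_b)
        overlap = len(a & b)
        return hypergeom.sf(overlap - 1, n_background, len(a), len(b))

    p = intersection_pvalue({"g1", "g2", "g3"}, {"g2", "g3", "g4"}, 20000)
    ```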

  14. Reconstructing Causal Biological Networks through Active Learning.

    Directory of Open Access Journals (Sweden)

    Hyunghoon Cho

    Reverse-engineering of biological networks is a central problem in systems biology. Intervention data, such as gene knockouts or knockdowns, are typically used for teasing apart causal relationships among genes. Under time or resource constraints, one needs to carefully choose which intervention experiments to carry out. Previous approaches for selecting the most informative interventions have largely focused on discrete Bayesian networks. However, continuous Bayesian networks are of great practical interest, especially in the study of complex biological systems and their quantitative properties. In this work, we present an efficient, information-theoretic active learning algorithm for Gaussian Bayesian networks (GBNs), which serve as important models for gene regulatory networks. In addition to providing linear-algebraic insights unique to GBNs, leading to significant runtime improvements, we demonstrate the effectiveness of our method on data simulated with GBNs and on the DREAM4 network inference challenge data sets. Our method generally leads to faster recovery of the underlying network structure and faster convergence to the final distribution of confidence scores over candidate graph structures using the full data, in comparison to random selection of intervention experiments.

  15. Ekofisk chalk: core measurements, stochastic reconstruction, network modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Talukdar, Saifullah

    2002-07-01

    porosity in chalk in the form of foraminifer shells. A hybrid reconstruction technique that initializes the simulated annealing reconstruction with input generated using the Gaussian random field method has also been introduced. The technique was found to significantly accelerate the rate of convergence of the simulated annealing method. This finding is important because the main advantage of the simulated annealing method, namely its ability to impose a variety of reconstruction constraints, is usually compromised by its very slow rate of convergence. Absolute permeability, formation factor and mercury-air capillary pressure are computed from simple network models. The input parameters for the network models were extracted from a reconstructed chalk sample. The computed permeability, formation factor and mercury-air capillary pressure correspond well with the experimental data. The predictive power of a network model for chalk is further extended through incorporating important pore-level displacement phenomena and realistic description of pore space geometry and topology. Limited results show that the model may be used to compute absolute and relative permeabilities, capillary pressure, formation factor, resistivity index and saturation exponent. The above findings suggest that the network modeling technique may be used for prediction of petrophysical and reservoir engineering properties of chalk. Further work is necessary, and an outline is given in considerable detail. Two 2D models, one 3D model, and a dual-porosity fractured reservoir model have been developed, and an imbibition process involving water displacing oil is simulated at various injection rates and with different oil-to-water viscosity ratios using four widely used conventional upscaling techniques. The upscaling techniques are the Kyte and Berry, Pore Volume Weighted, Weighted Relative Permeability, and Stone. The results suggest that upscaling of fractured reservoirs may be possible using the conventional

  16. Reconstruction of Complex Network based on the Noise via QR Decomposition and Compressed Sensing.

    Science.gov (United States)

    Li, Lixiang; Xu, Dafei; Peng, Haipeng; Kurths, Jürgen; Yang, Yixian

    2017-11-08

    It is generally known that the states of network nodes are stable and have strong correlations in a linear network system. We find that without a control input, the method of compressed sensing cannot succeed in reconstructing complex networks in which the states of nodes are generated through the linear network system. However, noise can drive the dynamics between nodes to break the stability of the system state. Therefore, a new method integrating QR decomposition and compressed sensing is proposed to solve the reconstruction problem of complex networks with the assistance of input noise. The state matrix of the system is decomposed by QR decomposition. We construct the measurement matrix with the aid of Gaussian noise so that the sparse input matrix can be reconstructed by compressed sensing. We also discover that noise can build a bridge between the dynamics and the topological structure. Experiments on four model networks and six real networks show that the proposed method is more accurate and more efficient than compressed sensing alone. In addition, the proposed method can reconstruct not only sparse complex networks, but also dense complex networks.

  17. Synthetic Event Reconstruction Experiments for Defining Sensor Network Characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Lundquist, J K; Kosovic, B; Belles, R

    2005-12-15

    An event reconstruction technology system has been designed and implemented at Lawrence Livermore National Laboratory (LLNL). This system integrates sensor observations, which may be sparse and/or conflicting, with transport and dispersion models via Bayesian stochastic sampling methodologies to characterize the sources of atmospheric releases of hazardous materials. We demonstrate the application of this event reconstruction technology system to designing sensor networks for detecting and responding to atmospheric releases of hazardous materials. The quantitative measure of the reduction in uncertainty, or benefit of a given network, can be utilized by policy makers to determine the cost/benefit of certain networks. Herein we present two numerical experiments demonstrating the utility of the event reconstruction methodology for sensor network design. In the first set of experiments, only the time resolution of the sensors varies between three candidate networks. The most "expensive" sensor network offers few advantages over the moderately priced network for reconstructing the release examined here. The second set of experiments explores the significance of the sensors' detection limit, which can have a significant impact on sensor cost. In this experiment, the expensive network can most clearly define the source location and source release rate. The other networks provide data insufficient for distinguishing between two possible clusters of source locations. When the reconstructions from all networks are aggregated into a composite plume, a decision-maker can distinguish the utility of the expensive sensor network.

  18. Reconstructing cancer drug response networks using multitask learning.

    Science.gov (United States)

    Ruffalo, Matthew; Stojanov, Petar; Pillutla, Venkata Krishna; Varma, Rohan; Bar-Joseph, Ziv

    2017-10-10

    Translating in vitro results to clinical tests is a major challenge in systems biology. Here we present a new Multi-Task learning framework which integrates thousands of cell line expression experiments to reconstruct drug specific response networks in cancer. The reconstructed networks correctly identify several shared key proteins and pathways while simultaneously highlighting many cell type specific proteins. We used top proteins from each drug network to predict survival for patients prescribed the drug. Predictions based on proteins from the in-vitro derived networks significantly outperformed predictions based on known cancer genes indicating that Multi-Task learning can indeed identify accurate drug response networks.

  19. Tomographic Reconstruction Methods for Decomposing Directional Components

    DEFF Research Database (Denmark)

    Kongskov, Rasmus Dalgas; Dong, Yiqiu

    The X-ray computed tomography technique has many practical applications. In this paper, we propose two new reconstruction methods that reconstruct and decompose objects at the same time. By incorporating direction information, the proposed methods can decompose objects into various directional components...

  20. Double and multiple knockout simulations for genome-scale metabolic network reconstructions.

    Science.gov (United States)

    Goldstein, Yaron Ab; Bockmayr, Alexander

    2015-01-01

    Constraint-based modeling of genome-scale metabolic network reconstructions has become a widely used approach in computational biology. Flux coupling analysis is a constraint-based method that analyses the impact of single reaction knockouts on other reactions in the network. We present an extension of flux coupling analysis for double and multiple gene or reaction knockouts, and develop corresponding algorithms for an in silico simulation. To evaluate our method, we perform a full single and double knockout analysis on a selection of genome-scale metabolic network reconstructions and compare the results. A prototype implementation of double knockout simulation is available at http://hoverboard.io/L4FC.
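
    The shape of such a double-knockout screen is a simple pairwise loop over reactions. In the sketch below, `evaluate_growth` is a hypothetical stand-in for a flux-balance or flux-coupling evaluation (e.g., via a COBRA-style toolbox); it is not part of the authors' tool.

    ```python
    # Evaluate every reaction pair's knockout effect with a user-supplied
    # model-evaluation callback (hypothetical; shown for structure only).
    from itertools import combinations

    def double_knockout_screen(reactions, evaluate_growth):
        return {(r1, r2): evaluate_growth(disabled={r1, r2})
                for r1, r2 in combinations(reactions, 2)}
    ```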

  1. Plasmid flux in Escherichia coli ST131 sublineages, analyzed by plasmid constellation network (PLACNET), a new method for plasmid reconstruction from whole genome sequences.

    Directory of Open Access Journals (Sweden)

    Val F Lanza

    2014-12-01

    Bacterial whole genome sequence (WGS) methods are rapidly overtaking classical sequence analysis. Many bacterial sequencing projects focus on mobilome changes, since macroevolutionary events, such as the acquisition or loss of mobile genetic elements, mainly plasmids, play essential roles in adaptive evolution. Existing WGS analysis protocols do not assort contigs between plasmids and the main chromosome, thus hampering full analysis of plasmid sequences. We developed a method (called plasmid constellation networks or PLACNET) that identifies, visualizes and analyzes plasmids in WGS projects by creating a network of contig interactions, thus allowing comprehensive plasmid analysis within WGS datasets. The workflow of the method is based on three types of data: assembly information (including scaffold links and coverage), comparison to reference sequences, and plasmid-diagnostic sequence features. The resulting network is pruned by expert analysis, to eliminate confounding data, and implemented in a Cytoscape-based graphic representation. To demonstrate PLACNET sensitivity and efficacy, the plasmidome of the Escherichia coli lineage ST131 was analyzed. ST131 is a globally spread clonal group of extraintestinal pathogenic E. coli (ExPEC), comprising different sublineages with the ability to acquire and spread antibiotic resistance and virulence genes via plasmids. Results show that plasmids flux in the evolution of this lineage, which is wide open for plasmid exchange. MOBF12/IncF plasmids were pervasive, adding just by themselves more than 350 protein families to the ST131 pangenome. Nearly 50% of the most frequent γ-proteobacterial plasmid groups were found to be present in our limited sample of ten analyzed ST131 genomes, which represent the main ST131 sublineages.
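
    The graph-building step can be sketched with networkx: contigs become nodes annotated with coverage and reference hits, scaffold links become edges, and connected components yield candidate replicons for expert pruning. All identifiers and values below are placeholders; this mimics only the network-construction stage of the pipeline.

    ```python
    # Contig-interaction network in the PLACNET spirit: nodes carry coverage
    # and reference annotations, edges record scaffold links.
    import networkx as nx

    g = nx.Graph()
    g.add_node("contig_1", coverage=35.2, reference_hit="IncF plasmid")
    g.add_node("contig_2", coverage=34.8, reference_hit=None)
    g.add_node("contig_3", coverage=12.1, reference_hit="chromosome")
    g.add_edge("contig_1", "contig_2", kind="scaffold_link")
    g.add_edge("contig_2", "contig_3", kind="scaffold_link")

    # Candidate replicons = connected components, to be pruned by an expert.
    components = list(nx.connected_components(g))
    ```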

  3. Enhanced capital-asset pricing model for the reconstruction of bipartite financial networks

    Science.gov (United States)

    Squartini, Tiziano; Almog, Assaf; Caldarelli, Guido; van Lelyveld, Iman; Garlaschelli, Diego; Cimini, Giulio

    2017-09-01

    Reconstructing patterns of interconnections from partial information is one of the most important issues in the statistical physics of complex networks. A paramount example is provided by financial networks. In fact, the spreading and amplification of financial distress in capital markets are strongly affected by the interconnections among financial institutions. Yet, while the aggregate balance sheets of institutions are publicly disclosed, information on single positions is mostly confidential and, as such, unavailable. Standard approaches to reconstruct the network of financial interconnection produce unrealistically dense topologies, leading to a biased estimation of systemic risk. Moreover, reconstruction techniques are generally designed for monopartite networks of bilateral exposures between financial institutions, thus failing in reproducing bipartite networks of security holdings (e.g., investment portfolios). Here we propose a reconstruction method based on constrained entropy maximization, tailored for bipartite financial networks. Such a procedure enhances the traditional capital-asset pricing model (CAPM) and allows us to reproduce the correct topology of the network. We test this enhanced CAPM (ECAPM) method on a dataset, collected by the European Central Bank, of detailed security holdings of European institutional sectors over a period of six years (2009-2015). Our approach outperforms the traditional CAPM and the recently proposed maximum-entropy CAPM both in reproducing the network topology and in estimating systemic risk due to fire sales spillovers. In general, ECAPM can be applied to the whole class of weighted bipartite networks described by the fitness model.
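
    The core of such an entropy-based procedure is the fitness-model link probability. The following Python sketch is illustrative only (the function name and toy numbers are hypothetical, and the published ECAPM imposes further constraints): each row/column node gets a fitness, and a single parameter z is calibrated so that the expected number of links matches a target.

```python
import numpy as np
from scipy.optimize import brentq

def fitness_link_probabilities(row_strengths, col_strengths, n_links):
    """Link probabilities p_ia = z*x_i*y_a / (1 + z*x_i*y_a) for a bipartite
    fitness model, with z calibrated so the expected link count equals
    n_links (which must not exceed the number of possible links).
    Illustrative sketch only, not the authors' ECAPM code."""
    x = np.asarray(row_strengths, dtype=float)
    y = np.asarray(col_strengths, dtype=float)
    outer = np.outer(x, y)

    def excess_links(z):
        p = z * outer / (1.0 + z * outer)
        return p.sum() - n_links

    z = brentq(excess_links, 1e-12, 1e12)   # root of <L>(z) = n_links
    return z * outer / (1.0 + z * outer)

# toy usage: 4 institutions, 3 securities, target of 6 expected links
p = fitness_link_probabilities([1.0, 2.0, 0.5, 3.0], [1.5, 0.7, 2.2], 6)
print(p.round(3), p.sum())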

  4. A Practical Algorithm for Reconstructing Level-1 Phylogenetic Networks

    NARCIS (Netherlands)

    K.T. Huber; L.J.J. van Iersel (Leo); S.M. Kelk (Steven); R. Suchecki

    2010-01-01

    Recently much attention has been devoted to the construction of phylogenetic networks which generalize phylogenetic trees in order to accommodate complex evolutionary processes. Here we present an efficient, practical algorithm for reconstructing level-1 phylogenetic networks - a type of

  5. Reconstructing context-specific gene regulatory network and identifying modules and network rewiring through data integration.

    Science.gov (United States)

    Ma, Tianle; Zhang, Aidong

    2017-07-15

    Reconstructing context-specific transcriptional regulatory network is crucial for deciphering principles of regulatory mechanisms underlying various conditions. Recently studies that reconstructed transcriptional networks have focused on individual organisms or cell types and relied on data repositories of context-free regulatory relationships. Here we present a comprehensive framework to systematically derive putative regulator-target pairs in any given context by integrating context-specific transcriptional profiling and public data repositories of gene regulatory networks. Moreover, our framework can identify core regulatory modules and signature genes underlying global regulatory circuitry, and detect network rewiring and core rewired modules in different contexts by considering gene modules and edge (gene interaction) modules collaboratively. We applied our methods to analyzing Autism RNA-seq experiment data and produced biologically meaningful results. In particular, all 11 hub genes in a predicted rewired autistic regulatory subnetwork have been linked to autism based on literature review. The predicted rewired autistic regulatory network may shed some new insight into disease mechanism. Published by Elsevier Inc.

  6. Human metabolic network: reconstruction, simulation, and applications in systems biology.

    Science.gov (United States)

    Wu, Ming; Chan, Christina

    2012-03-02

    Metabolism is crucial to cell growth and proliferation. Deficiency or alterations in metabolic functions are known to be involved in many human diseases. Therefore, understanding the human metabolic system is important for the study and treatment of complex diseases. Current reconstructions of the global human metabolic network provide a computational platform to integrate genome-scale information on metabolism. The platform enables a systematic study of the regulation and is applicable to a wide variety of cases, wherein one could rely on in silico perturbations to predict novel targets, interpret systemic effects, and identify alterations in the metabolic states to better understand the genotype-phenotype relationships. In this review, we describe the reconstruction of the human metabolic network, introduce the constraint based modeling approach to analyze metabolic networks, and discuss systems biology applications to study human physiology and pathology. We highlight the challenges and opportunities in network reconstruction and systems modeling of the human metabolic system.
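
    To make the constraint-based idea concrete, here is a minimal flux balance analysis sketch on an invented three-reaction toy network; real reconstructions have thousands of reactions and use dedicated toolboxes, so this is only a schematic illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix S (rows: metabolites A, B; columns: reactions)
# R1: -> A, R2: A -> B, R3: B -> (treated as the biomass objective)
S = np.array([[ 1, -1,  0],
              [ 0,  1, -1]])
bounds = [(0, 10), (0, 10), (0, 10)]   # flux bounds for each reaction
c = np.array([0, 0, -1])               # maximize v3 == minimize -v3

# Steady-state constraint: S v = 0 (no net accumulation of metabolites)
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal fluxes:", res.x)        # expected: [10, 10, 10]
```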

  7. Stereo Matching Based on Immune Neural Network in Abdomen Reconstruction

    Directory of Open Access Journals (Sweden)

    Huan Liu

    2015-01-01

    Full Text Available Stereo feature matching is a technique that finds an optimal match in two images from the same entity in the three-dimensional world. The stereo correspondence problem is formulated as an optimization task where an energy function, which represents the constraints on the solution, is to be minimized. A novel intelligent biological network (Bio-Net), which incorporates the human B-T cell immune system into a neural network, is proposed in this study in order to learn the robust relationship between the input feature points and the output matched points. A model from input-output data (left reference point to right target point) is established. In the experiments, abdomen reconstructions for different-shaped mannequins are then performed by means of the proposed method. The final results are compared and analyzed, demonstrating that the proposed approach greatly outperforms both a single neural network and the conventional matching algorithm in precision. In terms of time cost and efficiency in particular, the proposed method shows significant promise and potential for improvement. Hence, it can be considered an effective and feasible alternative for stereo matching.

  8. Efficient network reconstruction from dynamical cascades identifies small-world topology of neuronal avalanches.

    Directory of Open Access Journals (Sweden)

    Sinisa Pajevic

    2009-01-01

    Full Text Available Cascading activity is commonly found in complex systems with directed interactions such as metabolic networks, neuronal networks, or disease spreading in social networks. Substantial insight into a system's organization can be obtained by reconstructing the underlying functional network architecture from the observed activity cascades. Here we focus on Bayesian approaches and reduce their computational demands by introducing the Iterative Bayesian (IB) and Posterior Weighted Averaging (PWA) methods. We introduce a special case of PWA, cast in nonparametric form, which we call the normalized count (NC) algorithm. NC efficiently reconstructs random and small-world functional network topologies and architectures from subcritical, critical, and supercritical cascading dynamics and yields significant improvements over commonly used correlation methods. With experimental data, NC identified a functional and structural small-world topology and its corresponding traffic in cortical networks with neuronal avalanche dynamics.
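
    The abstract does not spell out the NC algorithm in detail; the sketch below shows one plausible count-based reconstruction of this flavor (credit for each activation is split among the nodes active at the previous step) and should be read as an illustration, not the authors' implementation.

```python
import numpy as np

def normalized_count(cascades, n_nodes):
    """Count-based functional connectivity from activity cascades.
    cascades: list of binary arrays of shape (T, n_nodes).
    Each activation of node j at step t+1 distributes unit credit equally
    among the nodes active at step t (a simple normalization; the published
    NC algorithm differs in detail)."""
    w = np.zeros((n_nodes, n_nodes))
    for cascade in cascades:
        for t in range(len(cascade) - 1):
            active_now = np.flatnonzero(cascade[t])
            active_next = np.flatnonzero(cascade[t + 1])
            if len(active_now) == 0:
                continue
            for j in active_next:
                w[active_now, j] += 1.0 / len(active_now)
    return w
```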

  9. Strategy on energy saving reconstruction of distribution networks based on life cycle cost

    Science.gov (United States)

    Chen, Xiaofei; Qiu, Zejing; Xu, Zhaoyang; Xiao, Chupeng

    2017-08-01

    Because funds for actual distribution network reconstruction projects are often limited, the cost-benefit model and the decision-making method are crucial for distribution network energy-saving reconstruction projects. From the perspective of life cycle cost (LCC), the research life cycle is first determined for the energy-saving reconstruction of distribution networks with multiple devices. Then, a new life cycle cost-benefit model for energy-saving reconstruction of distribution networks is developed, in which the modification schemes include distribution transformer replacement, line replacement and reactive power compensation. For the operation loss cost and maintenance cost, an operation cost model considering the influence of seasonal load characteristics and a segmented maintenance cost model for transformers are proposed. Finally, aiming at the highest energy-saving profit per LCC, a decision-making method is developed while considering financial and technical constraints as well. The model and method are applied to a real distribution network reconstruction, and the results prove that the model and method are effective.
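
    A common life-cycle-cost decomposition that such models build on (the paper's exact cost terms may differ) discounts yearly operation, maintenance and failure costs over the life cycle:

```latex
\mathrm{LCC} \;=\; C_I \;+\; \sum_{t=1}^{T} \frac{C_O(t) + C_M(t) + C_F(t)}{(1+r)^{t}} \;+\; \frac{C_D}{(1+r)^{T}},
```

    where $C_I$ is the investment cost, $C_O$ the operation (loss) cost, $C_M$ the maintenance cost, $C_F$ the failure cost, $C_D$ the disposal cost, $r$ the discount rate, and $T$ the life cycle in years.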

  10. Reconstruction of LGT networks from tri-LGT-nets.

    Science.gov (United States)

    Cardona, Gabriel; Pons, Joan Carles

    2017-12-01

    Phylogenetic networks have gained attention from the scientific community due to the evidence of the existence of evolutionary events that cannot be represented using trees. A variant of phylogenetic networks, called LGT networks, models specifically lateral gene transfer events, which cannot be properly represented with generic phylogenetic networks. In this paper we treat the problem of the reconstruction of LGT networks from substructures induced by three leaves, which we call tri-LGT-nets. We first restrict ourselves to a class of LGT networks that are both mathematically treatable and biologically significant, called BAN-LGT networks. Then, we study the decomposition of such networks in subnetworks with three leaves and ask whether or not this decomposition determines the network. The answer to this question is negative, but if we further impose time-consistency (species involved in a later gene transfer must coexist) the answer is affirmative, up to some redundancy that can never be recovered but is fully characterized.

  11. Missing and spurious interactions and the reconstruction of complex networks

    CERN Document Server

    Guimera, R; 10.1073/pnas.0908366106

    2010-01-01

    Network analysis is currently used in a myriad of contexts: from identifying potential drug targets to predicting the spread of epidemics and designing vaccination strategies, and from finding friends to uncovering criminal activity. Despite the promise of the network approach, the reliability of network data is a source of great concern in all fields where complex networks are studied. Here, we present a general mathematical and computational framework to deal with the problem of data reliability in complex networks. In particular, we are able to reliably identify both missing and spurious interactions in noisy network observations. Remarkably, our approach also enables us to obtain, from those noisy observations, network reconstructions that yield estimates of the true network properties that are more accurate than those provided by the observations themselves. Our approach has the potential to guide experiments, to better characterize network data sets, and to drive new discoveries.

  12. Reconstructing gene-regulatory networks from time series, knock-out data, and prior knowledge

    Directory of Open Access Journals (Sweden)

    Timmer Jens

    2007-02-01

    Full Text Available Abstract Background Cellular processes are controlled by gene-regulatory networks. Several computational methods are currently used to learn the structure of gene-regulatory networks from data. This study focusses on time series gene expression and gene knock-out data in order to identify the underlying network structure. We compare the performance of different network reconstruction methods using synthetic data generated from an ensemble of reference networks. Data requirements as well as optimal experiments for the reconstruction of gene-regulatory networks are investigated. Additionally, the impact of prior knowledge on network reconstruction as well as the effect of unobserved cellular processes is studied. Results We identify linear Gaussian dynamic Bayesian networks and variable selection based on F-statistics as suitable methods for the reconstruction of gene-regulatory networks from time series data. Commonly used discrete dynamic Bayesian networks perform worse, and this result can be attributed to the inevitable information loss by discretization of expression data. It is shown that short time series generated under transcription factor knock-out are optimal experiments for revealing the structure of gene regulatory networks. Relative to the level of observational noise, we give estimates for the required amount of gene expression data in order to accurately reconstruct gene-regulatory networks. The benefit of using prior knowledge within a Bayesian learning framework is found to be limited to conditions of small gene expression data size. Unobserved processes, like protein-protein interactions, induce dependencies between gene expression levels similar to direct transcriptional regulation. We show that these dependencies cannot be distinguished from transcription factor mediated gene regulation on the basis of gene expression data alone. Conclusion Currently available data size and data quality make the reconstruction of
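
    As a minimal illustration of F-statistic-based variable selection from time series (not the benchmarked implementation), one can regress each gene's next-step expression on each candidate regulator's current expression and keep regulators whose regression test passes a significance threshold:

```python
import numpy as np
from scipy import stats

def f_select(expr, target, alpha=0.01):
    """Select putative regulators of gene `target` from time-series data.
    expr: array (T, n_genes); regress x_target(t+1) on each x_j(t) and keep
    regulators passing the significance level `alpha`. A minimal sketch of
    F-statistic-based variable selection, not the paper's full procedure."""
    X, y = expr[:-1], expr[1:, target]
    selected = []
    for j in range(expr.shape[1]):
        slope, intercept, r, p, se = stats.linregress(X[:, j], y)
        # For simple regression the F-test equals the squared slope t-test
        if j != target and p < alpha:
            selected.append(j)
    return selected
```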

  13. The reconstruction and analysis of tissue specific human metabolic networks.

    Science.gov (United States)

    Hao, Tong; Ma, Hong-Wu; Zhao, Xue-Ming; Goryanin, Igor

    2012-02-01

    Human tissues have distinct biological functions. Many proteins/enzymes are known to be expressed only in specific tissues and therefore the metabolic networks in various tissues are different. Though high quality global human metabolic networks and metabolic networks for certain tissues such as liver have already been studied, a systematic study of tissue specific metabolic networks for all main tissues is still missing. In this work, we reconstruct the tissue specific metabolic networks for 15 main human tissues based on the previously reconstructed Edinburgh Human Metabolic Network (EHMN). The tissue information is first obtained for enzymes from the Human Protein Reference Database (HPRD) and UniprotKB databases and then transferred to reactions through the enzyme-reaction relationships in EHMN. As our knowledge of the tissue distribution of proteins is still very limited, we replenish the tissue information of the metabolic network based on network connectivity analysis and thorough examination of the literature. Finally, about 80% of proteins and reactions in EHMN are determined to be in at least one of the 15 tissues. To validate the quality of the tissue specific networks, the brain specific metabolic network is taken as an example for functional module analysis and the results reveal that the function of the brain metabolic network is closely related to its role as the centre of the human nervous system. The tissue specific human metabolic networks are available at .

  14. CMIP: a software package capable of reconstructing genome-wide regulatory networks using gene expression data.

    Science.gov (United States)

    Zheng, Guangyong; Xu, Yaochen; Zhang, Xiujun; Liu, Zhi-Ping; Wang, Zhuo; Chen, Luonan; Zhu, Xin-Guang

    2016-12-23

    A gene regulatory network (GRN) represents interactions of genes inside a cell or tissue, in which vertexes and edges stand for genes and their regulatory interactions respectively. Reconstruction of gene regulatory networks, in particular genome-scale networks, is essential for comparative exploration of different species and mechanistic investigation of biological processes. Currently, most network inference methods are computationally intensive; they are usually effective for small-scale tasks (e.g., networks with a few hundred genes) but struggle to construct GRNs at genome scale. Here, we present a software package for gene regulatory network reconstruction at a genomic level, in which gene interactions are measured by conditional mutual information within a parallel computing framework (hence the package is named CMIP). The package is a greatly improved implementation of our previous PCA-CMI algorithm. In CMIP, we provide not only an automatic threshold determination method but also an effective parallel computing framework for network inference. Performance tests on benchmark datasets show that the accuracy of CMIP is comparable to most current network inference methods. Moreover, running tests on synthetic datasets demonstrate that CMIP can handle large datasets, especially genome-wide datasets, within an acceptable time period. In addition, successful application on a real genomic dataset confirms the practical applicability of the package. This new software package provides a powerful tool for genomic network reconstruction to the biological community. The software can be accessed at http://www.picb.ac.cn/CMIP/ .
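
    Under a Gaussian assumption, the conditional mutual information that drives PCA-CMI-style inference can be computed from covariance determinants; the sketch below illustrates that estimator (CMIP's exact implementation details may differ):

```python
import numpy as np

def gaussian_cmi(x, y, z):
    """Conditional mutual information I(X;Y|Z) under a Gaussian assumption,
    I = 0.5 * (ln|C_xz| + ln|C_yz| - ln|C_z| - ln|C_xyz|), the estimator
    used by PCA-CMI-style methods (sketch only).
    x, y: 1-D arrays of samples; z: array (n,) or (n, k) of conditioners."""
    def ldet(*cols):
        c = np.cov(np.column_stack(cols), rowvar=False)
        return np.linalg.slogdet(np.atleast_2d(c))[1]
    return 0.5 * (ldet(x, z) + ldet(y, z) - ldet(z) - ldet(x, y, z))
```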

  15. Two-Dimensional Impact Reconstruction Method for Rail Defect Inspection

    Directory of Open Access Journals (Sweden)

    Jie Zhao

    2014-01-01

    Full Text Available The safety of train operation is seriously threatened by rail defects, so it is of great significance to inspect for rail defects dynamically while the train is operating. This paper presents a two-dimensional impact reconstruction method to realize on-line inspection of rail defects. The proposed method utilizes preprocessing technology to convert time-domain vertical vibration signals acquired by a wireless sensor network to space signals. The modern time-frequency analysis method is improved to reconstruct the obtained multisensor information. Then, image fusion processing technology based on spectrum threshold processing and node color labeling is proposed to reduce the noise and blank out the periodic impact signal caused by rail joints and locomotive running gear. This method can convert the aperiodic impact signals caused by rail defects to partial periodic impact signals, and locate the rail defects. An application indicates that the two-dimensional impact reconstruction method displays the impact caused by rail defects clearly and is an effective on-line rail defect inspection method.

  16. Consensus-based sparse signal reconstruction algorithm for wireless sensor networks

    National Research Council Canada - National Science Library

    Peng, Bao; Zhao, Zhi; Han, Guangjie; Shen, Jian

    2016-01-01

    This article presents a distributed Bayesian reconstruction algorithm for wireless sensor networks to reconstruct the sparse signals based on variational sparse Bayesian learning and consensus filter...

  17. Iterative reconstruction of transcriptional regulatory networks: an algorithmic approach.

    Directory of Open Access Journals (Sweden)

    Christian L Barrett

    2006-05-01

    Full Text Available The number of complete, publicly available genome sequences is now greater than 200, and this number is expected to rapidly grow in the near future as metagenomic and environmental sequencing efforts escalate and the cost of sequencing drops. In order to make use of this data for understanding particular organisms and for discerning general principles about how organisms function, it will be necessary to reconstruct their various biochemical reaction networks. Principal among these will be transcriptional regulatory networks. Given the physical and logical complexity of these networks, the various sources of (often noisy) data that can be utilized for their elucidation, the monetary costs involved, and the huge number of potential experiments (approximately 10^12) that can be performed, experiment design algorithms will be necessary for synthesizing the various computational and experimental data to maximize the efficiency of regulatory network reconstruction. This paper presents an algorithm for experimental design to systematically and efficiently reconstruct transcriptional regulatory networks. It is meant to be applied iteratively in conjunction with an experimental laboratory component. The algorithm is presented here in the context of reconstructing transcriptional regulation for metabolism in Escherichia coli, and, through a retrospective analysis with previously performed experiments, we show that the produced experiment designs conform to how a human would design experiments. The algorithm is able to utilize probability estimates based on a wide range of computational and experimental sources to suggest experiments with the highest potential of discovering the greatest amount of new regulatory knowledge.

  18. Double and multiple knockout simulations for genome-scale metabolic network reconstructions

    OpenAIRE

    Goldstein, Yaron AB; Bockmayr, Alexander

    2015-01-01

    Background Constraint-based modeling of genome-scale metabolic network reconstructions has become a widely used approach in computational biology. Flux coupling analysis is a constraint-based method that analyses the impact of single reaction knockouts on other reactions in the network. Results We present an extension of flux coupling analysis for double and multiple gene or reaction knockouts, and develop corresponding algorithms for an in silico simulation. To evaluate our method, we perfor...

  19. Improving automated 3D reconstruction methods via vision metrology

    Science.gov (United States)

    Toschi, Isabella; Nocerino, Erica; Hess, Mona; Menna, Fabio; Sargeant, Ben; MacDonald, Lindsay; Remondino, Fabio; Robson, Stuart

    2015-05-01

    This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive 3D coordinates of few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.

  20. A new method of morphological comparison for bony reconstructive surgery: maxillary reconstruction using scapular tip bone

    Science.gov (United States)

    Chan, Harley; Gilbert, Ralph W.; Pagedar, Nitin A.; Daly, Michael J.; Irish, Jonathan C.; Siewerdsen, Jeffrey H.

    2010-02-01

    Esthetic appearance is one of the most important factors for reconstructive surgery. The current practice of maxillary reconstruction chooses radial forearm, fibula or iliac crest osteocutaneous flaps to recreate the three-dimensional complex structures of the palate and maxilla. However, these bone flaps lack shape similarity to the palate and result in a less satisfactory esthetic. Considering similarity factors and vasculature advantages, reconstructive surgeons recently explored the use of scapular tip myo-osseous free flaps to restore the excised site. We have developed a new method that quantitatively evaluates the morphological similarity of the scapular tip bone and palate based on a diagnostic volumetric computed tomography (CT) image. This quantitative result was further interpreted as a color map rendered on the surface of a three-dimensional computer model. For surgical planning, this color interpretation could potentially assist the surgeon to maximize the orientation of the bone flaps for the best fit at the reconstruction site. With approval from the Research Ethics Board (REB) of the University Health Network, we conducted a retrospective analysis with CT images obtained from 10 patients. Each patient had CT scans including the maxilla and chest on the same day. Based on this image set, we simulated total, subtotal and hemi palate reconstruction. The simulation procedure included volume segmentation, converting the segmented volume to a stereolithography (STL) model, manual registration, and computation of minimum geometric distances and curvature between STL models. Across the 10 patients' data, we found the overall root-mean-square (RMS) conformance was 3.71 +/- 0.16 mm

  1. Reconstructing Networks from Profit Sequences in Evolutionary Games via a Multiobjective Optimization Approach with Lasso Initialization

    Science.gov (United States)

    Wu, Kai; Liu, Jing; Wang, Shuai

    2016-11-01

    Evolutionary games (EG) model a common type of interaction in various complex, networked, natural and social systems. Given such a system with only profit sequences available, reconstructing the interacting structure of EG networks is fundamental to understanding and controlling its collective dynamics. Existing approaches to this problem, such as the lasso, a convex optimization method, need a user-defined constant to control the tradeoff between the natural sparsity of networks and measurement error (the difference between observed data and simulated data). However, a shortcoming of these approaches is that it is not easy to determine the key parameters that maximize performance. In contrast to these approaches, we first model the EG network reconstruction problem as a multiobjective optimization problem (MOP), and then develop a framework involving a multiobjective evolutionary algorithm (MOEA) followed by solution selection based on knee regions, termed MOEANet, to solve this MOP. We also design an effective initialization operator based on the lasso for the MOEA. We apply the proposed method to reconstruct various types of synthetic and real-world networks, and the results show that our approach avoids the above parameter-selection problem and can reconstruct EG networks with high accuracy.
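
    A single-node sketch of the underlying tradeoff (not the full MOEA) can be obtained by sweeping the lasso penalty, recording the two objectives, and picking the knee of the resulting curve; the function name and knee criterion below are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import Lasso

def knee_solution(X, y, alphas=np.logspace(-3, 1, 30)):
    """Sweep the lasso penalty for one node's incoming links, record the two
    objectives (measurement error vs. number of nonzero links), and return
    the model at the knee of the normalized tradeoff curve."""
    models, pts = [], []
    for a in alphas:
        m = Lasso(alpha=a, max_iter=10000).fit(X, y)
        pts.append((np.mean((m.predict(X) - y) ** 2),
                    np.count_nonzero(m.coef_)))
        models.append(m)
    p = np.asarray(pts, dtype=float)
    p = (p - p.min(axis=0)) / (np.ptp(p, axis=0) + 1e-12)  # scale objectives
    v = p[-1] - p[0]                            # chord between the extremes
    # perpendicular distance (up to a constant) of each point from the chord
    d = np.abs(v[0] * (p[:, 1] - p[0, 1]) - v[1] * (p[:, 0] - p[0, 0]))
    return models[int(d.argmax())]
```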

  2. Orthotropic conductivity reconstruction with virtual-resistive network and Faraday's law

    KAUST Repository

    Lee, Min-Gi

    2015-06-01

    We obtain the existence and the uniqueness at the same time in the reconstruction of orthotropic conductivity in two-space dimensions by using two sets of internal current densities and boundary conductivity. The curl-free equation of Faraday's law is taken instead of the elliptic equation in a divergence form that is typically used in electrical impedance tomography. A reconstruction method based on a layered bricks-type virtual-resistive network is developed to reconstruct orthotropic conductivity with up to 40% multiplicative noise.

  3. Sparse time series chain graphical models for reconstructing genetic networks

    NARCIS (Netherlands)

    Abegaz, Fentaw; Wit, Ernst

    We propose a sparse high-dimensional time series chain graphical model for reconstructing genetic networks from gene expression data parametrized by a precision matrix and autoregressive coefficient matrix. We consider the time steps as blocks or chains. The proposed approach explores patterns of

  4. Reconstruction of a ring applicator using CT imaging: impact of the reconstruction method and applicator orientation

    DEFF Research Database (Denmark)

    Hellebust, Taran Paulsen; Tanderup, Kari; Bergstrand, Eva Stabell

    2007-01-01

    A phantom, containing a fixed ring applicator set and six lead pellets representing dose points, was used. The phantom was CT scanned with the ring applicator at four different angles related to the image plane. In each scan the applicator was reconstructed by three methods: (1) direct reconstruction in each image (DR), (2) reconstruction in multiplanar reconstructed images (MPR) and (3) library plans, using pre-defined applicator geometry (LIB). The doses to the lead pellets were calculated. The relative standard deviation (SD) for all reconstruction methods was less than 3.7% in the dose points. The relative SD for the LIB method was significantly lower (p < 0.05) than for the DR and MPR methods for all but two points.

  5. Spike-Triggered Regression for Synaptic Connectivity Reconstruction in Neuronal Networks.

    Science.gov (United States)

    Zhang, Yaoyu; Xiao, Yanyang; Zhou, Douglas; Cai, David

    2017-01-01

    How neurons are connected in the brain to perform computation is a key issue in neuroscience. Recently, the development of calcium imaging and multi-electrode array techniques has greatly enhanced our ability to measure the firing activities of neuronal populations at the single-cell level. Meanwhile, the intracellular recording technique is able to measure the subthreshold voltage dynamics of a neuron. Our work addresses the issue of how to combine these measurements to reveal the underlying network structure. We propose the spike-triggered regression (STR) method, which employs both the voltage trace and firing activity of the neuronal population to reconstruct the underlying synaptic connectivity. Our numerical study of the conductance-based integrate-and-fire neuronal network shows that only short data of 20 ~ 100 s is required for an accurate recovery of network topology as well as the corresponding coupling strength. Our method can yield an accurate reconstruction of a large neuronal network even in the case of dense connectivity and nearly synchronous dynamics, which many other network reconstruction methods cannot successfully handle. In addition, we point out that, for sparse networks, the STR method can infer coupling strength between each pair of neurons with high accuracy in the absence of the global information of all other neurons.
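
    A bare-bones caricature of the regression step (the published STR method conditions on spike events and treats the conductance dynamics more carefully, so this is only an illustration) regresses a neuron's voltage increments on the population spike raster:

```python
import numpy as np

def spike_triggered_regression(voltage, spikes):
    """Estimate incoming coupling strengths of one neuron by regressing its
    voltage increments on the population spike indicators (sketch only).
    voltage: (T,) subthreshold trace of the target neuron.
    spikes:  (T, N) binary spike raster of candidate presynaptic neurons."""
    dv = np.diff(voltage)                        # voltage increments
    X = spikes[:-1]                              # spikes preceding each step
    X = np.column_stack([X, np.ones(len(X))])    # intercept absorbs leak drift
    coef, *_ = np.linalg.lstsq(X, dv, rcond=None)
    return coef[:-1]                             # one weight per input neuron
```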

  6. Some methods of estimating uncertainty in accident reconstruction

    OpenAIRE

    Batista, Milan

    2011-01-01

    In the paper four methods for estimating uncertainty in accident reconstruction are discussed: total differential method, extreme values method, Gauss statistical method, and Monte Carlo simulation method. The methods are described and the program solutions are given.
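
    As an illustration of the Monte Carlo option from this list, the sketch below propagates assumed uncertainties in friction and skid length through the textbook skid-to-stop formula v = sqrt(2*mu*g*d); all numbers are hypothetical, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
mu = rng.normal(0.7, 0.05, n)    # friction coefficient (assumed spread)
d = rng.normal(25.0, 1.0, n)     # skid length [m] (assumed measurement error)
v = np.sqrt(2 * mu * 9.81 * d)   # pre-braking speed from the skid formula
print(f"v = {v.mean():.1f} +/- {v.std():.1f} m/s")
```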

  7. Accelerated augmented Lagrangian method for few-view CT reconstruction

    Science.gov (United States)

    Wu, Junfeng; Mou, Xuanqin

    2012-03-01

    Recently, iterative reconstruction algorithms with total variation (TV) regularization have shown tremendous power in image reconstruction from few-view projection data, but they are much more computationally demanding. In this paper, we propose an accelerated augmented Lagrangian method (ALM) for few-view CT reconstruction with total variation regularization. Experimental phantom results demonstrate that the proposed method not only reconstructs high-quality images from few-view projection data but also converges quickly to the optimal solution.
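
    The generic optimization problem behind such TV-regularized methods, and the variable splitting that an augmented Lagrangian scheme exploits, can be written as follows (a standard formulation, not necessarily the authors' exact notation):

```latex
\min_{x \ge 0}\ \tfrac{1}{2}\,\|Ax - b\|_2^2 + \lambda\,\mathrm{TV}(x)
\quad\Longrightarrow\quad
\min_{x,\,d}\ \tfrac{1}{2}\,\|Ax - b\|_2^2 + \lambda\,\|d\|_1
\quad \text{s.t.}\quad d = \nabla x,
```

    with the augmented Lagrangian $\mathcal{L}_\beta(x,d,\mu) = \tfrac{1}{2}\|Ax-b\|_2^2 + \lambda\|d\|_1 + \mu^{\top}(d-\nabla x) + \tfrac{\beta}{2}\|d-\nabla x\|_2^2$ minimized alternately in $x$ and $d$, followed by a multiplier update on $\mu$.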

  8. Reconstruction of switching thresholds in piecewise-affine models of genetic regulatory networks

    OpenAIRE

    Drulhe, Samuel; Ferrari-Trecate, Giancarlo; De Jong, Hidde; Viari, Alain

    2006-01-01

    http://dx.doi.org/10.1007/11730637_16; Recent advances of experimental techniques in biology have led to the production of enormous amounts of data on the dynamics of genetic regulatory networks. In this paper, we present an approach for the identification of PieceWise-Affine (PWA) models of genetic regulatory networks from experimental data, focusing on the reconstruction of switching thresholds associated with regulatory interactions. In particular, our method takes into account geometric c...

  9. Experimental Reconstructions of Surface Temperature using the PAGES 2k Network

    Science.gov (United States)

    Wang, Jianghao; Emile-Geay, Julien; Vaccaro, Adam; Guillot, Dominique; Rajaratnam, Bala

    2014-05-01

    Climate field reconstructions (CFRs) of the Common Era provide uniquely detailed characterizations of natural, low-frequency climate variability beyond the instrumental era. However, the accuracy and robustness of global-scale CFRs remains an open question. For instance, Wang et al. (2013) showed that CFRs are greatly method-dependent, highlighting the danger of forming dynamical interpretations based on a single reconstruction (e.g. Mann et al., 2009). This study will present a set of new reconstructions of global surface temperature and compare them with existing reconstructions from the IPCC AR5. The reconstructions are derived using the PAGES 2k network, which is composed of 501 high-resolution temperature-sensitive proxies from eight continental-scale regions (PAGES2K Consortium, 2013). Four CFR techniques are used to produce reconstructions, including RegEM-TTLS, the Mann et al. (2009) implementation of RegEM-TTLS (hereinafter M09-TTLS), CCA (Smerdon et al., 2010) and GraphEM (Guillot et al., submitted). First, we show that CFRs derived from the PAGES 2k network exhibit greater inter-method similarities than the same methods applied to the proxy network of Mann et al. (2009) (hereinafter M09 network). For instance, reconstructed NH mean temperature series using the PAGES 2k network are in better agreement over the last millennium than the M09-based reconstructions. Remarkably, for the reconstructed temperature difference between the Medieval Climate Anomaly and the Little Ice Age, the spatial patterns of the M09-based reconstructions are greatly divergent amongst methods. On the other hand, not a single PAGES 2k-based CFR displays the La Niña-like pattern found in Mann et al. (2009); rather, no systematic pattern emerges between the two epochs. Next, we quantify uncertainties associated with the PAGES 2k-based CFRs via ensemble methods, and show that GraphEM and CCA are less sensitive to random noise than RegEM-TTLS and M09-TTLS, consistent with pseudoproxy

  10. Image-reconstruction methods in positron tomography

    CERN Document Server

    Townsend, David W; CERN. Geneva

    1993-01-01

    Physics and mathematics for medical imaging: In the two decades since the introduction of the X-ray scanner into radiology, medical imaging techniques have become widely established as essential tools in the diagnosis of disease. As a consequence of recent technological and mathematical advances, the non-invasive, three-dimensional imaging of internal organs such as the brain and the heart is now possible, not only for anatomical investigations using X-rays but also for studies which explore the functional status of the body using positron-emitting radioisotopes and nuclear magnetic resonance. Mathematical methods which enable three-dimensional distributions to be reconstructed from projection data acquired by radiation detectors suitably positioned around the patient will be described in detail. The lectures will trace the development of medical imaging from simple radiographs to the present-day non-invasive measurement of in vivo biochemistry. Powerful techniques to correlate anatomy and function that are cur...

  11. Dynamic Error Analysis Method for Vibration Shape Reconstruction of Smart FBG Plate Structure

    Directory of Open Access Journals (Sweden)

    Hesheng Zhang

    2016-01-01

    Full Text Available Shape reconstruction of aerospace plate structures is an important issue for the safe operation of aerospace vehicles. One way to achieve such reconstruction is by constructing a smart fiber Bragg grating (FBG) plate structure with discrete distributed FBG sensor arrays using reconstruction algorithms, in which error analysis of the reconstruction algorithm is a key link. Considering that traditional error analysis methods can only deal with static data, a new dynamic data error analysis method is proposed, based on the LMS algorithm, for shape reconstruction of smart FBG plate structures. Firstly, the smart FBG structure and the orthogonal curved network based reconstruction method are introduced. Then, a dynamic error analysis model is proposed for dynamic reconstruction error analysis. Thirdly, parameter identification is done for the proposed dynamic error analysis model based on the least mean square (LMS) algorithm. Finally, an experimental verification platform is constructed and experimental dynamic reconstruction analysis is done. Experimental results show that the dynamic characteristics of the reconstruction performance for the plate structure can be obtained accurately based on the proposed dynamic error analysis method. The proposed method can also be used for other data acquisition systems and data processing systems as a general error analysis method.
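
    The LMS update named in the abstract is the standard stochastic-gradient rule; the sketch below shows it in its generic FIR-identification form (the paper applies it to the reconstruction-error model of the FBG plate, so this is only illustrative):

```python
import numpy as np

def lms_identify(x, d, n_taps=4, mu=0.01):
    """Least-mean-squares identification of a linear model between an input
    signal x and a desired response d (generic FIR form, sketch only)."""
    w = np.zeros(n_taps)
    err = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]     # most recent samples first
        y = w @ u                     # filter output
        err[n] = d[n] - y             # instantaneous error
        w += 2 * mu * err[n] * u      # stochastic-gradient weight update
    return w, err
```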

  12. Oromandibular Reconstruction Using 3D Planned Triple Template Method

    NARCIS (Netherlands)

    Coppen, C.T.M.; Weijs, W.L.J.; Berge, S.J.; Maal, T.J.J.

    2013-01-01

    PURPOSE: Reconstruction of an oromandibular defect remains one of the most formidable surgical challenges faced by the reconstructive head and neck surgeon. The purpose of this study was to illustrate the added value of 3D imaging and planning in oromandibular reconstruction. MATERIALS AND METHODS:

  13. HAWC Energy Reconstruction via Neural Network

    Science.gov (United States)

    Marinelli, Samuel; HAWC Collaboration

    2016-03-01

    The High-Altitude Water-Cherenkov (HAWC) γ-ray observatory is located at 4100 m above sea level on the Sierra Negra mountain in the state of Puebla, Mexico. Its 300 water-filled tanks are instrumented with PMTs that detect Cherenkov light produced by charged particles in atmospheric air showers induced by TeV γ-rays. The detector became fully operational in March of 2015. With a 2-sr field of view and duty cycle exceeding 90%, HAWC is a survey instrument sensitive to diverse γ-ray sources, including supernova remnants, pulsar wind nebulae, active galactic nuclei, and others. Particle-acceleration mechanisms at these sources can be inferred by studying their energy spectra, particularly at high energies. We have developed a technique for estimating primary- γ-ray energies using an artificial neural network (ANN). Input variables to the ANN are selected to characterize shower multiplicity in the detector, the fraction of the shower contained in the detector, and atmospheric attenuation of the shower. Monte Carlo simulations show that the new estimator has superior performance to the current estimator used in HAWC publications. This work was supported by the National Science Foundation.
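
    A toy version of such an ANN energy estimator can be sketched with a small multilayer perceptron; the three inputs mirror the variables named above, but the data below are synthetic stand-ins, not HAWC simulations:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 5000
# Hypothetical stand-ins for the inputs named in the abstract:
multiplicity = rng.uniform(0, 1, n)    # shower multiplicity (scaled)
containment = rng.uniform(0, 1, n)     # fraction of shower in the detector
attenuation = rng.uniform(0, 1, n)     # atmospheric attenuation proxy
log_energy = (1.5 * multiplicity + 0.5 * containment - 0.8 * attenuation
              + rng.normal(0, 0.05, n))  # toy "true" log10(E/TeV)

X = np.column_stack([multiplicity, containment, attenuation])
ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
ann.fit(X[:4000], log_energy[:4000])
print("held-out R^2:", ann.score(X[4000:], log_energy[4000:]))
```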

  14. Reconstruction of a real world social network using the Potts model and Loopy Belief Propagation

    Directory of Open Access Journals (Sweden)

    Cristian eBisconti

    2015-11-01

    Full Text Available The scope of this paper is to test the adoption of a statistical model derived from Condensed Matter Physics, aiming at the reconstruction of a networked structure from observations of the states of the nodes in the network. The inverse Potts model, normally applied to observations of quantum states, is here addressed to observations of the node states in a network and their (anti)correlations, thus inferring interactions as links connecting the nodes. Adopting the Bethe approximation, such an inverse problem is known to be tractable. Within this operational framework, we discuss and apply this network-reconstruction method to a small real-world social network, where it is easy to track statuses of its members: the Italian parliament, adopted as a case study. The dataset is made of (co)sponsorships of law proposals by parliament members. In previous studies of similar activity-based networks, the graph structure was inferred directly from activity co-occurrences: here we compare our statistical reconstruction with standard methods, outlining discrepancies and advantages.

  15. Gene expression network reconstruction by convex feature selection when incorporating genetic perturbations.

    Directory of Open Access Journals (Sweden)

    Benjamin A Logsdon

    Full Text Available Cellular gene expression measurements contain regulatory information that can be used to discover novel network relationships. Here, we present a new algorithm for network reconstruction powered by the adaptive lasso, a theoretically and empirically well-behaved method for selecting the regulatory features of a network. Any algorithms designed for network discovery that make use of directed probabilistic graphs require perturbations, produced by either experiments or naturally occurring genetic variation, to successfully infer unique regulatory relationships from gene expression data. Our approach makes use of appropriately selected cis-expression Quantitative Trait Loci (cis-eQTL), which provide a sufficient set of independent perturbations for maximum network resolution. We compare the performance of our network reconstruction algorithm to four other approaches: the PC-algorithm, QTLnet, the QDG algorithm, and the NEO algorithm, all of which have been used to reconstruct directed networks among phenotypes leveraging QTL. We show that the adaptive lasso can outperform these algorithms for networks of ten genes and ten cis-eQTL, and is competitive with the QDG algorithm for networks with thirty genes and thirty cis-eQTL, with rich topologies and hundreds of samples. Using this novel approach, we identify unique sets of directed relationships in Saccharomyces cerevisiae when analyzing genome-wide gene expression data for an intercross between a wild strain and a lab strain. We recover novel putative network relationships between a tyrosine biosynthesis gene (TYR1), and genes involved in endocytosis (RCY1), the spindle checkpoint (BUB2), sulfonate catabolism (JLP1), and cell-cell communication (PRM7). Our algorithm provides a synthesis of feature selection methods and graphical model theory that has the potential to reveal new directed regulatory relationships from the analysis of population level genetic and gene expression data.
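
    The adaptive lasso itself is a two-step reweighted lasso; the following generic sketch (not the authors' pipeline, which additionally leverages cis-eQTL perturbations) shows the reweighting trick:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso

def adaptive_lasso(X, y, alpha=0.1, gamma=1.0):
    """Two-step adaptive lasso: weight each predictor by an initial OLS
    estimate, so strong regulators are penalized less (generic sketch;
    assumes X has more samples than predictors)."""
    beta0 = LinearRegression().fit(X, y).coef_
    w = 1.0 / (np.abs(beta0) ** gamma + 1e-8)  # adaptive penalty weights
    Xs = X / w                                  # absorb weights into features
    fit = Lasso(alpha=alpha, max_iter=10000).fit(Xs, y)
    return fit.coef_ / w                        # rescale to original space
```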

  16. Supersampling and network reconstruction of urban mobility

    CERN Document Server

    Sagarra, Oleguer; Santi, Paolo; Diaz-Guilera, Albert; Ratti, Carlo

    2015-01-01

    Understanding human mobility is of vital importance for urban planning, epidemiology, and many other fields that aim to draw policies from the activities of humans in space. Despite recent availability of large scale data sets related to human mobility such as GPS traces, mobile phone data, etc., it is still true that such data sets represent a subsample of the population of interest, and then might give an incomplete picture of the entire population in question. Notwithstanding the abundant usage of such inherently limited data sets, the impact of sampling biases on mobility patterns is unclear -- we do not have methods available to reliably infer mobility information from a limited data set. Here, we investigate the effects of sampling using a data set of millions of taxi movements in New York City. On the one hand, we show that mobility patterns are highly stable once an appropriate simple rescaling is applied to the data, implying negligible loss of information due to subsampling over long time scales. On...

  17. Reconstructing Generalized Logical Networks of Transcriptional Regulation in Mouse Brain from Temporal Gene Expression Data

    Directory of Open Access Journals (Sweden)

    Lodowski Kerrie H

    2009-01-01

    Full Text Available Gene expression time course data can be used not only to detect differentially expressed genes but also to find temporal associations among genes. The problem of reconstructing generalized logical networks to account for temporal dependencies among genes and environmental stimuli from transcriptomic data is addressed. A network reconstruction algorithm was developed that uses statistical significance as a criterion for network selection to avoid false-positive interactions arising from pure chance. The multinomial hypothesis-testing-based network reconstruction allows for explicit specification of the false-positive rate, a feature unique among extant network inference algorithms. The method is superior to dynamic Bayesian network modeling in a simulation study. Temporal gene expression data from the brains of alcohol-treated mice in an analysis of the molecular response to alcohol are used for modeling. Genes from major neuronal pathways are identified as putative components of the alcohol response mechanism. Nine of these genes have associations with alcohol reported in the literature. Several other potentially relevant genes, compatible with independent results from literature mining, may play a role in the response to alcohol. Additional, previously unknown gene interactions were discovered that, subject to biological verification, may offer new clues in the search for the elusive molecular mechanisms of alcoholism.

  18. Comparative Analysis of Yeast Metabolic Network Models Highlights Progress, Opportunities for Metabolic Reconstruction

    Science.gov (United States)

    Heavner, Benjamin D.; Price, Nathan D.

    2015-01-01

    We have compared 12 genome-scale models of the Saccharomyces cerevisiae metabolic network published since 2003 to evaluate progress in reconstruction of the yeast metabolic network. We compared the genomic coverage, overlap of annotated metabolites, predictive ability for single gene essentiality with a selection of model parameters, and biomass production predictions in simulated nutrient-limited conditions. We have also compared pairwise gene knockout essentiality predictions for 10 of these models. We found that varying approaches to model scope and annotation reflected the involvement of multiple research groups in model development; that single-gene essentiality predictions were affected by simulated medium, objective function, and the reference list of essential genes; and that predictive ability for single-gene essentiality did not correlate well with predictive ability for our reference list of synthetic lethal gene interactions (R = 0.159). We conclude that the reconstruction of the yeast metabolic network is indeed gradually improving through the iterative process of model development, and there remains great opportunity for advancing our understanding of biology through continued efforts to reconstruct the full biochemical reaction network that constitutes yeast metabolism. Additionally, we suggest that there is opportunity for refining the process of deriving a metabolic model from a metabolic network reconstruction to facilitate mechanistic investigation and discovery. This comparative study lays the groundwork for developing improved tools and formalized methods to quantitatively assess metabolic network reconstructions independently of any particular model application, which will facilitate ongoing efforts to advance our understanding of the relationship between genotype and cellular phenotype. PMID:26566239

  19. Comparative Analysis of Yeast Metabolic Network Models Highlights Progress, Opportunities for Metabolic Reconstruction.

    Directory of Open Access Journals (Sweden)

    Benjamin D Heavner

    2015-11-01

    Full Text Available We have compared 12 genome-scale models of the Saccharomyces cerevisiae metabolic network published since 2003 to evaluate progress in reconstruction of the yeast metabolic network. We compared the genomic coverage, overlap of annotated metabolites, predictive ability for single gene essentiality with a selection of model parameters, and biomass production predictions in simulated nutrient-limited conditions. We have also compared pairwise gene knockout essentiality predictions for 10 of these models. We found that varying approaches to model scope and annotation reflected the involvement of multiple research groups in model development; that single-gene essentiality predictions were affected by simulated medium, objective function, and the reference list of essential genes; and that predictive ability for single-gene essentiality did not correlate well with predictive ability for our reference list of synthetic lethal gene interactions (R = 0.159). We conclude that the reconstruction of the yeast metabolic network is indeed gradually improving through the iterative process of model development, and there remains great opportunity for advancing our understanding of biology through continued efforts to reconstruct the full biochemical reaction network that constitutes yeast metabolism. Additionally, we suggest that there is opportunity for refining the process of deriving a metabolic model from a metabolic network reconstruction to facilitate mechanistic investigation and discovery. This comparative study lays the groundwork for developing improved tools and formalized methods to quantitatively assess metabolic network reconstructions independently of any particular model application, which will facilitate ongoing efforts to advance our understanding of the relationship between genotype and cellular phenotype.

  20. New density profile reconstruction methods in X-mode reflectometry

    Science.gov (United States)

    Morales, R. B.; Hacquin, S.; Heuraux, S.; Sabot, R.

    2017-04-01

    The reconstruction method published by Bottollier-Curtet and Ichtchenko in 1987 has been the standard method of density profile reconstruction for X-mode reflectometry ever since, with only minor revision. Aiming at improved accuracy and stability of the reconstruction method, functions more complex than the linear one are evaluated here to describe the refractive index shape in each integration step. The stability and accuracy obtained when using parabolic and fixed or adaptive fractional power functions are compared to the previous method and tested against spurious events and phase noise. The derived relation between the plasma parameters and the best integration shapes allows optimization of the reconstruction for any profile shape. In addition, density profiles can be reconstructed using fewer probing frequencies without loss of accuracy, which speeds up the reconstruction algorithm and enables real-time monitoring of faster density profile evolution.

  1. A Comparative Study of Different Reconstruction Schemes for a Reconstructed Discontinuous Galerkin Method on Arbitrary Grids

    Energy Technology Data Exchange (ETDEWEB)

    Hong Luo; Hanping Xiao; Robert Nourgaliev; Chunpei Cai

    2011-06-01

    A comparative study of different reconstruction schemes for a reconstruction-based discontinuous Galerkin method, termed the RDG(P1P2) method, is performed for compressible flow problems on arbitrary grids. The RDG method is designed to enhance the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution via a reconstruction scheme commonly used in the finite volume method. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are implemented to obtain a quadratic polynomial representation of the underlying discontinuous Galerkin linear polynomial solution on each cell. These three reconstruction/recovery methods are compared for a variety of compressible flow problems on arbitrary meshes to assess their accuracy and robustness. The numerical results demonstrate that all three reconstruction methods can significantly improve the accuracy of the underlying second-order DG method, although the least-squares reconstruction method provides the best performance in terms of both accuracy and robustness.

  2. Reconstruction of a ring applicator using CT imaging: impact of the reconstruction method and applicator orientation

    Energy Technology Data Exchange (ETDEWEB)

    Hellebust, Taran Paulsen [Department of Medical Physics, Rikshospital-Radiumhospital Medical Center, Oslo (Norway); Tanderup, Kari [Department of Oncology, Aarhus University Hospital, Aarhus (Denmark); Bergstrand, Eva Stabell [Department of Medical Physics, Rikshospital-Radiumhospital Medical Center, Oslo (Norway); Knutsen, Bjoern Helge [Department of Medical Physics, Rikshospital-Radiumhospital Medical Center, Oslo (Norway); Roeislien, Jo [Section of Biostatistics, Rikshospital-Radiumhospital Medical Center, Oslo (Norway); Olsen, Dag Rune [Institute for Cancer Research, Rikshospital-Radiumhospital Medical Center, Oslo (Norway)

    2007-08-21

    The purpose of this study is to investigate whether the method of applicator reconstruction and/or the applicator orientation influence the dose calculation to points around the applicator for brachytherapy of cervical cancer with CT-based treatment planning. A phantom, containing a fixed ring applicator set and six lead pellets representing dose points, was used. The phantom was CT scanned with the ring applicator at four different angles related to the image plane. In each scan the applicator was reconstructed by three methods: (1) direct reconstruction in each image (DR) (2) reconstruction in multiplanar reconstructed images (MPR) and (3) library plans, using pre-defined applicator geometry (LIB). The doses to the lead pellets were calculated. The relative standard deviation (SD) for all reconstruction methods was less than 3.7% in the dose points. The relative SD for the LIB method was significantly lower (p < 0.05) than for the DR and MPR methods for all but two points. All applicator orientations had similar dose calculation reproducibility. Using library plans for applicator reconstruction gives the most reproducible dose calculation. However, with restrictive guidelines for applicator reconstruction the uncertainties for all methods are low compared to other factors influencing the accuracy of brachytherapy.

  3. Integrated Approach to Reconstruction of Microbial Regulatory Networks

    Energy Technology Data Exchange (ETDEWEB)

    Rodionov, Dmitry A [Sanford-Burnham Medical Research Institute; Novichkov, Pavel S [Lawrence Berkeley National Laboratory

    2013-11-04

    This project had the goal of developing an integrated bioinformatics platform for genome-scale inference and visualization of transcriptional regulatory networks (TRNs) in bacterial genomes. The work was done at the Sanford-Burnham Medical Research Institute (SBMRI, P.I. D.A. Rodionov) and Lawrence Berkeley National Laboratory (LBNL, co-P.I. P.S. Novichkov). The developed computational resources include: (1) the RegPredict web platform for TRN inference and regulon reconstruction in microbial genomes, and (2) the RegPrecise database for collection, visualization and comparative analysis of transcriptional regulons reconstructed by comparative genomics. These analytical resources were selected as key components in the DOE Systems Biology KnowledgeBase (SBKB). The high-quality data accumulated in RegPrecise will provide essential datasets of reference regulons in diverse microbes to enable automatic reconstruction of draft TRNs in newly sequenced genomes. We outline our progress toward the three aims of this grant proposal, which were: develop an integrated platform for genome-scale regulon reconstruction; infer regulatory annotations in several groups of bacteria and build reference collections of microbial regulons; and develop a KnowledgeBase on microbial transcriptional regulation.

  4. Reconstruction of cellular forces in fibrous biopolymer network

    CERN Document Server

    Zhang, Yunsong; Heizler, Shay; Levine, Herbert

    2016-01-01

    How cells move through the 3D extracellular matrix (ECM) is of increasing interest in attempts to understand important biological processes such as cancer metastasis. Just as in motion on 2D surfaces, it is expected that experimental measurements of cell-generated forces will provide valuable information for uncovering the mechanisms of cell migration. Here, we use a lattice-based mechanical model of the ECM to study the cellular force reconstruction problem. We propose an efficient computational scheme to reconstruct cellular forces from the deformation and explore the performance of our scheme in the presence of noise, varying marker bead distribution, varying bond stiffnesses and changing cell morphology. Our results show that micromechanical information, rather than merely the bulk rheology of the biopolymer networks, is essential for a precise recovery of cellular forces.

  5. Reconstruction of extended Petri nets from time series data and its application to signal transduction and to gene regulatory networks

    Directory of Open Access Journals (Sweden)

    Marwan Wolfgang

    2011-07-01

    Full Text Available Background: Network inference methods reconstruct mathematical models of molecular or genetic networks directly from experimental data sets. We have previously reported a mathematical method which is exclusively data-driven, does not involve any heuristic decisions within the reconstruction process, and delivers all possible alternative minimal networks in terms of simple place/transition Petri nets that are consistent with a given discrete time series data set. Results: We fundamentally extended the previously published algorithm to consider catalysis and inhibition of the reactions that occur in the underlying network. The results of the reconstruction algorithm are encoded in the form of an extended Petri net involving control arcs. This allows the consideration of processes involving mass flow and/or regulatory interactions. As a non-trivial test case, the phosphate regulatory network of enterobacteria was reconstructed using in silico-generated time-series data sets on the wild type and in silico mutants. Conclusions: The new exact algorithm reconstructs extended Petri nets from time series data sets by finding all alternative minimal networks that are consistent with the data. It suggested alternative molecular mechanisms for certain reactions in the network. The algorithm is useful for combining data from wild-type and mutant cells and may potentially integrate physiological, biochemical, pharmacological, and genetic data in the form of a single model.
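
    The sketch below shows what such an output object can look like: a minimal extended place/transition net with control arcs, where read arcs model catalysis and inhibitor arcs model inhibition. The class design and the example reaction are illustrative assumptions, not the paper's implementation.

    ```python
    class ExtendedPetriNet:
        """Place/transition net with read (catalysis) and inhibitor arcs."""

        def __init__(self):
            self.marking = {}        # place -> token count
            self.transitions = {}    # name -> arc sets

        def add_transition(self, name, pre, post, read=(), inhibit=()):
            self.transitions[name] = dict(pre=set(pre), post=set(post),
                                          read=set(read), inhibit=set(inhibit))

        def enabled(self, name):
            t = self.transitions[name]
            has_tokens = all(self.marking.get(p, 0) > 0
                             for p in t['pre'] | t['read'])
            not_blocked = all(self.marking.get(p, 0) == 0
                              for p in t['inhibit'])
            return has_tokens and not_blocked

        def fire(self, name):
            if not self.enabled(name):
                raise ValueError(f'{name} is not enabled')
            t = self.transitions[name]
            for p in t['pre']:       # mass flow: consume tokens...
                self.marking[p] -= 1
            for p in t['post']:      # ...and produce them
                self.marking[p] = self.marking.get(p, 0) + 1
            # control arcs only gate firing; they move no tokens

    net = ExtendedPetriNet()
    net.marking = {'PhoR': 1, 'PhoB': 1, 'phosphate': 0}
    net.add_transition('activate', pre=['PhoB'], post=['PhoB_P'],
                       read=['PhoR'], inhibit=['phosphate'])
    net.fire('activate')             # catalyst present, inhibitor absent
    print(net.marking)
    ```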

  6. The performance of diphoton primary vertex reconstruction methods in H → γγ+Met channel of ATLAS experiment

    Science.gov (United States)

    Tomiwa, K. G.

    2017-09-01

    The search for new physics in the H → γγ+Met channel relies on how well the missing transverse energy is reconstructed. The Met algorithm used by the ATLAS experiment in turn uses input variables such as photons and jets, which depend on the reconstruction of the primary vertex. This document presents the performance of di-photon vertex reconstruction algorithms (the hardest vertex method and the Neural Network method). Comparing these algorithms on the nominal Standard Model sample and the Beyond Standard Model sample, we see that the Neural Network method of primary vertex selection performs better overall than the hardest vertex method.

  7. Reconstruction of a Real World Social Network using the Potts Model and Loopy Belief Propagation.

    Science.gov (United States)

    Bisconti, Cristian; Corallo, Angelo; Fortunato, Laura; Gentile, Antonio A; Massafra, Andrea; Pellè, Piergiuseppe

    2015-01-01

    The scope of this paper is to test the adoption of a statistical model derived from condensed matter physics for the reconstruction of the structure of a social network. The inverse Potts model, traditionally applied to recursive observations of quantum states in an ensemble of particles, is here addressed to observations of the members' states in an organization and their (anti)correlations, thus inferring interactions as links among the members. Adopting proper (Bethe) approximations, such an inverse problem is shown to be tractable. Within an operational framework, this network-reconstruction method is tested for a small real-world social network, the Italian parliament. In this case study, it is easy to track the status of parliament members, using (co)sponsorships of law proposals as the initial dataset. In previous studies of similar activity-based networks, the graph structure was inferred directly from activity co-occurrences; here we compare our statistical reconstruction with such standard methods, outlining discrepancies and advantages.

  8. SCENERY: a web application for (causal) network reconstruction from cytometry data

    KAUST Repository

    Papoutsoglou, Georgios

    2017-05-08

    Flow and mass cytometry technologies can probe proteins as biological markers in thousands of individual cells simultaneously, providing unprecedented opportunities for reconstructing networks of protein interactions through machine learning algorithms. The network reconstruction (NR) problem has been well studied by the machine learning community. However, the potential of available methods remains largely unknown to the cytometry community, mainly due to their intrinsic complexity and the lack of comprehensive, powerful and easy-to-use NR software implementations specific to cytometry data. To bridge this gap, we present the Single CEll NEtwork Reconstruction sYstem (SCENERY), a web server featuring several standard and advanced cytometry data analysis methods coupled with NR algorithms in a user-friendly, on-line environment. In SCENERY, users may upload their data and set their own study design. The server offers several data analysis options categorized into three classes of methods: data (pre)processing, statistical analysis and NR. The server also provides interactive visualization and download of results as ready-to-publish images or multimedia reports. Its core is modular and based on the widely used and robust R platform, allowing power users to extend its functionalities by submitting their own NR methods. SCENERY is available at scenery.csd.uoc.gr or http://mensxmachina.org/en/software/.

  9. Integration of expression data in genome-scale metabolic network reconstructions

    Directory of Open Access Journals (Sweden)

    Anna S. Blazier

    2012-08-01

    Full Text Available With the advent of high-throughput technologies, the field of systems biology has amassed an abundance of omics data, quantifying thousands of cellular components across a variety of scales, ranging from mRNA transcript levels to metabolite quantities. Methods are needed not only to integrate this omics data but also to use it to heighten the predictive capabilities of computational models. Several recent studies have successfully demonstrated how flux balance analysis (FBA), a constraint-based modeling approach, can be used to integrate transcriptomic data into genome-scale metabolic network reconstructions to generate predictive computational models. In this review, we summarize such FBA-based methods for integrating expression data into genome-scale metabolic network reconstructions, highlighting their advantages as well as their limitations.
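
    The core idea of these methods can be shown in a few lines: at steady state FBA solves a linear program maximizing c·v subject to S v = 0 and flux bounds, and expression integration (here in the spirit of E-Flux, one of the reviewed approaches) scales the bounds by transcript levels. The three-reaction network below is invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Reactions: R1 imports A, R2 converts A -> B, R3 drains B (biomass).
    # Rows of S are metabolites A and B; steady state requires S v = 0.
    S = np.array([[1, -1,  0],    # A
                  [0,  1, -1]])   # B
    expr = np.array([1.0, 0.3, 1.0])            # relative transcript levels
    bounds = [(0.0, 10.0 * ub) for ub in expr]  # expression caps fluxes

    # linprog minimizes, so negate the biomass flux (R3) to maximize it.
    res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2),
                  bounds=bounds, method='highs')
    print(res.x)   # all fluxes pinned to 3.0 by the weak R2 transcript
    ```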

  10. Improved reconstruction of in silico gene regulatory networks by integrating knockout and perturbation data.

    Directory of Open Access Journals (Sweden)

    Kevin Y Yip

    Full Text Available We performed computational reconstruction of the in silico gene regulatory networks in the DREAM3 Challenges. Our task was to learn the networks from two types of data, namely gene expression profiles in deletion strains (the 'deletion data') and time series trajectories of gene expression after some initial perturbation (the 'perturbation data'). In the course of developing the prediction method, we observed that the two types of data contained different and complementary information about the underlying network. In particular, deletion data allow for the detection of direct regulatory activities with strong responses upon the deletion of the regulator, while perturbation data provide richer information for the identification of weaker and more complex types of regulation. We applied different techniques to learn the regulation from the two types of data. For deletion data, we learned a noise model to distinguish real signals from random fluctuations using an iterative method. For perturbation data, we used differential equations to model the change of expression levels of a gene along the trajectories due to the regulation of other genes. We tried different models, and combined their predictions. The final predictions were obtained by merging the results from the two types of data. A comparison with the actual regulatory networks suggests that our approach is effective for networks with a range of different sizes. The success of the approach demonstrates the importance of integrating heterogeneous data in network reconstruction.
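
    As a hedged sketch of the perturbation-data idea, the snippet below models the trajectories with a linear ODE dx/dt = W x and recovers W from finite-difference derivatives by least squares; the actual study used richer models and merged the result with the deletion-data noise model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, T, dt = 5, 50, 0.1
    W_true = np.zeros((n, n))
    W_true[1, 0], W_true[2, 1], W_true[3, 2] = 0.8, -0.6, 0.5
    np.fill_diagonal(W_true, -1.0)      # self-degradation terms

    x = np.zeros((T, n))
    x[0] = rng.uniform(0.5, 1.5, n)     # initial perturbation
    for t in range(T - 1):              # Euler-integrate the trajectory
        x[t + 1] = x[t] + dt * (W_true @ x[t])

    dxdt = np.gradient(x, dt, axis=0)   # finite-difference derivatives
    # dxdt ~ x @ W.T, so least squares on the trajectory recovers W.
    W_est, *_ = np.linalg.lstsq(x, dxdt, rcond=None)
    print(np.round(W_est.T - W_true, 2))   # near zero without noise
    ```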

  11. High resolution x-ray CMT: Reconstruction methods

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.K.

    1997-02-01

    This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high-accuracy tomographic reconstruction codes.

  12. Reconstruction of FY-3B/MWRI soil moisture using an artificial neural network based on reconstructed MODIS optical products over the Tibetan Plateau

    Science.gov (United States)

    Cui, Y.; Long, D.; Hong, Y.; Zeng, C.; Han, Z.

    2016-12-01

    Soil moisture is a key variable in the exchange of water and energy between the land surface and the atmosphere, especially over the Tibetan Plateau (TP), which is climatically and hydrologically sensitive as the world's third pole. Large-scale, consistent and continuous soil moisture datasets are important for meteorological and hydrological applications, such as weather forecasting and drought monitoring. The Fengyun-3B Microwave Radiation Imager (FY-3B/MWRI) soil moisture product is a relatively new passive microwave product. Here it is reconstructed using a back-propagation neural network (BP-NN) driven by MODIS products (LST, NDVI, and albedo) that were themselves reconstructed with different gap-filling methods. The reconstruction method not only considers the relationship between soil moisture and NDVI, LST, and albedo, but also its four-dimensional variation, captured through longitude, latitude, DEM and day of year (DOY). Results show that soil moisture could be well reconstructed, with R2 larger than 0.63, RMSE less than 0.1 cm3 cm-3 and bias less than 0.07 cm3 cm-3 for both frozen and unfrozen periods, compared with in-situ measurements in the central TP. The reconstruction method is subsequently applied to generate spatially consistent and temporally continuous surface soil moisture over the TP. The reconstructed FY-3B/MWRI soil moisture product could be valuable in studying meteorology, hydrology, and agriculture over the TP. Keywords: FY-3B/MWRI; Soil moisture; Reconstruction; Tibetan Plateau
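
    A minimal sketch of the regression step, with synthetic data standing in for the real FY-3B/MWRI training set: the predictors are the MODIS variables plus the four coordinates named above, and the model is a small back-propagation MLP. Layer sizes and value ranges are assumptions.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    N = 2000
    # Columns: LST, NDVI, albedo, longitude, latitude, DEM, DOY.
    X = rng.uniform([250, 0.0, 0.05, 78, 28, 3000, 1],
                    [310, 0.8, 0.40, 104, 39, 5500, 365], size=(N, 7))
    sm = 0.05 + 0.3 * X[:, 1] - 0.0005 * (X[:, 0] - 250)  # toy target
    sm += 0.01 * rng.standard_normal(N)

    Xs = StandardScaler().fit_transform(X)
    model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000,
                         random_state=0).fit(Xs[:1500], sm[:1500])
    pred = model.predict(Xs[1500:])
    rmse = np.sqrt(np.mean((pred - sm[1500:]) ** 2))
    print(f'RMSE = {rmse:.3f} cm3/cm3')   # cf. < 0.1 cm3/cm3 above
    ```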

  13. Interior reconstruction method based on rotation-translation scanning model.

    Science.gov (United States)

    Wang, Xianchao; Tang, Ziyue; Yan, Bin; Li, Lei; Bao, Shanglian

    2014-01-01

    In various applications of computed tomography (CT), it is common that the reconstructed object extends beyond the field of view (FOV), or we may intend to use a FOV which only covers the region of interest (ROI) for the sake of reducing the radiation dose. These kinds of imaging situations often lead to interior reconstruction problems, which are difficult cases in the field of CT reconstruction due to the truncated projection data at every view angle. In this paper, an interior reconstruction method is developed based on a rotation-translation (RT) scanning model. The method is implemented by first scanning the reconstructed region, and then scanning a small region outside the support of the reconstructed object after translating the rotation centre. The differentiated backprojection (DBP) images of the reconstruction region and the small region outside the object can be obtained from the two scans without a data rebinning process. Finally, the projection onto convex sets (POCS) algorithm is applied to reconstruct the interior region. Numerical simulations are conducted to validate the proposed reconstruction method.

  14. AIR Tools - A MATLAB package of algebraic iterative reconstruction methods

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Saxild-Hansen, Maria

    2012-01-01

    We present a MATLAB package with implementations of several algebraic iterative reconstruction methods for discretizations of inverse problems. These so-called row action methods rely on semi-convergence for achieving the necessary regularization of the problem. Two classes of methods are implemented: Algebraic Reconstruction Techniques (ART) and Simultaneous Iterative Reconstruction Techniques (SIRT). In addition we provide a few simplified test problems from medical and seismic tomography. For each iterative method, a number of strategies are available for choosing the relaxation parameter.
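
    For readers unfamiliar with the ART class, the essential Kaczmarz sweep is only a few lines; the NumPy version below (Python rather than the package's MATLAB, purely for illustration) shows the row-action update and the relaxation parameter the package lets you tune.

    ```python
    import numpy as np

    def art(A, b, n_sweeps=10, relax=1.0):
        """Kaczmarz/ART sweeps for A x = b with relaxation parameter."""
        x = np.zeros(A.shape[1])
        row_norms = (A ** 2).sum(axis=1)
        for _ in range(n_sweeps):
            for i in range(A.shape[0]):     # one projection per row
                if row_norms[i] > 0:
                    r = b[i] - A[i] @ x
                    x += relax * (r / row_norms[i]) * A[i]
        return x

    A = np.random.rand(60, 40)   # stand-in for a tomography system matrix
    x_true = np.random.rand(40)
    b = A @ x_true
    print(np.linalg.norm(art(A, b, n_sweeps=50) - x_true))
    ```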

  15. Adaptive multiresolution method for MAP reconstruction in electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Acar, Erman, E-mail: erman.acar@tut.fi [Department of Signal Processing, Tampere University of Technology, P.O. Box 553, FI-33101 Tampere (Finland); BioMediTech, Tampere University of Technology, Biokatu 10, 33520 Tampere (Finland); Peltonen, Sari; Ruotsalainen, Ulla [Department of Signal Processing, Tampere University of Technology, P.O. Box 553, FI-33101 Tampere (Finland); BioMediTech, Tampere University of Technology, Biokatu 10, 33520 Tampere (Finland)

    2016-11-15

    3D image reconstruction with electron tomography poses problems due to the severely limited range of projection angles and the low signal-to-noise ratio of the acquired projection images. Maximum a posteriori (MAP) reconstruction methods have been successful in compensating for the missing information and suppressing noise with their intrinsic regularization techniques. There are two major problems in MAP reconstruction methods: (1) selection of the regularization parameter that controls the balance between the data fidelity and the prior information, and (2) long computation time. One aim of this study is to provide an adaptive solution to the regularization parameter selection problem without requiring additional knowledge about the imaging environment and the sample. The other aim is to realize the reconstruction using sequences of resolution levels to shorten the computation time. The reconstructions were analyzed in terms of accuracy and computational efficiency using a simulated biological phantom and publicly available experimental datasets of electron tomography. The numerical and visual evaluations of the experiments show that the adaptive multiresolution method can provide more accurate results than weighted back projection (WBP), the simultaneous iterative reconstruction technique (SIRT), and the sequential MAP expectation maximization (sMAPEM) method. The method is superior to sMAPEM also in terms of computation time and usability, since it can reconstruct 3D images significantly faster without requiring any parameter to be set by the user. - Highlights: • An adaptive multiresolution reconstruction method is introduced for electron tomography. • The method provides more accurate results than the conventional reconstruction methods. • The missing wedge and noise problems can be compensated by the method efficiently.

  16. Geometric reconstruction methods for electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Alpers, Andreas, E-mail: alpers@ma.tum.de [Zentrum Mathematik, Technische Universität München, D-85747 Garching bei München (Germany); Gardner, Richard J., E-mail: Richard.Gardner@wwu.edu [Department of Mathematics, Western Washington University, Bellingham, WA 98225-9063 (United States); König, Stefan, E-mail: koenig@ma.tum.de [Zentrum Mathematik, Technische Universität München, D-85747 Garching bei München (Germany); Pennington, Robert S., E-mail: robert.pennington@uni-ulm.de [Center for Electron Nanoscopy, Technical University of Denmark, DK-2800 Kongens Lyngby (Denmark); Boothroyd, Chris B., E-mail: ChrisBoothroyd@cantab.net [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Houben, Lothar, E-mail: l.houben@fz-juelich.de [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Dunin-Borkowski, Rafal E., E-mail: rdb@fz-juelich.de [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Joost Batenburg, Kees, E-mail: Joost.Batenburg@cwi.nl [Centrum Wiskunde and Informatica, NL-1098XG, Amsterdam, The Netherlands and Vision Lab, Department of Physics, University of Antwerp, B-2610 Wilrijk (Belgium)

    2013-05-15

    Electron tomography is becoming an increasingly important tool in materials science for studying the three-dimensional morphologies and chemical compositions of nanostructures. The image quality obtained by many current algorithms is seriously affected by the problems of missing wedge artefacts and non-linear projection intensities due to diffraction effects. The former refers to the fact that data cannot be acquired over the full 180° tilt range; the latter implies that for some orientations, crystalline structures can show strong contrast changes. To overcome these problems we introduce and discuss several algorithms from the mathematical fields of geometric and discrete tomography. The algorithms incorporate geometric prior knowledge (mainly convexity and homogeneity), which also in principle considerably reduces the number of tilt angles required. Results are discussed for the reconstruction of an InAs nanowire. - Highlights: ► Four algorithms for electron tomography are introduced that utilize prior knowledge. ► Objects are assumed to be homogeneous; convexity and regularity is also discussed. ► We are able to reconstruct slices of a nanowire from as few as four projections. ► Algorithms should be selected based on the specific reconstruction task at hand.

  17. Chemiomics: network reconstruction and kinetics of port wine aging.

    Science.gov (United States)

    Monforte, Ana Rita; Jacobson, Dan; Silva Ferreira, A C

    2015-03-11

    Network reconstruction (NR) has proven useful in the detection and visualization of relationships among the compounds present in a Port wine aging data set. This view of the data provides a considerable amount of information with which to understand the kinetic contexts of the molecules represented by peaks in each chromatogram. The aim of this study was to use NR together with the determination of kinetic parameters to extract more information about the mechanisms involved in Port wine aging. The volatile compounds present in samples of Port wines spanning 128 years in age were measured with GC-MS. After chromatogram alignment, a peak matrix was created, and all peak vectors were compared to one another to determine their Pearson correlations over time. A correlation network was created and filtered on the basis of the resulting correlation values. Some nodes in the network were further studied in experiments on Port wines stored under different conditions of oxygen and temperature in order to determine their kinetic parameters. The resulting network can be divided into three main branches. The first branch is related to compounds that do not directly correlate with age, the second branch contains compounds affected by temperature, and the third branch contains compounds associated with oxygen. Compounds clustered in the same branch of the network have similar expression patterns over time as well as the same kinetic order, and thus are likely to depend on the same technological parameters. Network construction and visualization provide more information with which to understand the probable kinetic contexts of the molecules represented by peaks in each chromatogram. The approach described here is a powerful tool for the study of mechanisms and kinetics in complex systems and should aid in the understanding and monitoring of wine quality.
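
    A bare-bones version of the NR step reads: correlate aligned peak vectors across the age series and keep the strong edges. The synthetic peak matrix and the 0.9 threshold below are assumptions for illustration, not the paper's values.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    ages = np.linspace(0, 128, 40)                 # wine ages in years
    peaks = np.empty((6, ages.size))
    peaks[0] = 1 - np.exp(-0.02 * ages)            # oxygen-driven marker
    peaks[1] = 0.9 * peaks[0] + 0.05 * rng.standard_normal(ages.size)
    peaks[2] = np.exp(-0.01 * ages)                # first-order decay
    peaks[3:] = rng.standard_normal((3, ages.size))  # age-independent

    C = np.corrcoef(peaks)                         # Pearson correlations
    edges = [(i, j) for i in range(len(C)) for j in range(i + 1, len(C))
             if abs(C[i, j]) >= 0.9]               # filter weak links
    print(edges)   # compounds 0 and 1 fall into the same branch
    ```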

  18. Next-Generation Global Biomonitoring: Large-scale, Automated Reconstruction of Ecological Networks.

    Science.gov (United States)

    Bohan, David A; Vacher, Corinne; Tamaddoni-Nezhad, Alireza; Raybould, Alan; Dumbrell, Alex J; Woodward, Guy

    2017-07-01

    We foresee a new global-scale, ecological approach to biomonitoring emerging within the next decade that can detect ecosystem change accurately, cheaply, and generically. Next-generation sequencing of DNA sampled from the Earth's environments would provide data for the relative abundance of operational taxonomic units or ecological functions. Machine-learning methods would then be used to reconstruct the ecological networks of interactions implicit in the raw NGS data. Ultimately, we envision the development of autonomous samplers that would sample nucleic acids and upload NGS sequence data to the cloud for network reconstruction. Large numbers of these samplers, in a global array, would allow sensitive automated biomonitoring of the Earth's major ecosystems at high spatial and temporal resolution, revolutionising our understanding of ecosystem change. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. Reconstructing Genetic Regulatory Networks Using Two-Step Algorithms with the Differential Equation Models of Neural Networks.

    Science.gov (United States)

    Chen, Chi-Kan

    2017-07-26

    The identification of genetic regulatory networks (GRNs) provides insights into complex cellular processes. A class of recurrent neural networks (RNNs) captures the dynamics of GRNs. Algorithms combining the RNN and machine learning schemes were previously proposed to reconstruct small-scale GRNs using gene expression time series. We present new GRN reconstruction methods with neural networks. The RNN is extended to a class of recurrent multilayer perceptrons (RMLPs) with latent nodes. Our methods contain two steps: the edge rank assignment step and the network construction step. The former assigns ranks to all possible edges by a recursive procedure based on the estimated weights of wires of the RNN/RMLP (RE_RNN/RE_RMLP), and the latter constructs a network consisting of top-ranked edges under which the optimized RNN simulates the gene expression time series. Particle swarm optimization (PSO) is applied to optimize the parameters of the RNNs and RMLPs in a two-step algorithm. The proposed RE_RNN-RNN and RE_RMLP-RNN algorithms are tested on synthetic and experimental gene expression time series of small GRNs of about 10 genes. The experimental time series are from studies of yeast cell cycle regulated genes and E. coli DNA repair genes. The unstable estimation of an RNN using experimental time series with limited data points can lead to fairly arbitrary predicted GRNs. Our methods incorporate the RNN and RMLP into a two-step structure learning procedure. Results show that RE_RMLP, using an RMLP with a suitable number of latent nodes to reduce the parameter dimension, often yields more accurate edge ranks than RE_RNN using the regularized RNN on short simulated time series. When the networks derived by RE_RMLP-RNN using different numbers of latent nodes in step one are combined by a weighted majority voting rule to infer the GRN, the method performs consistently and outperforms published algorithms for GRN reconstruction on most benchmark time series. The framework of two
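
    A hedged sketch of the RNN building block and the edge-ranking idea: simulate gene activities with a small recurrent network and rank candidate edges by weight magnitude. The paper fits the weights with PSO against measured time series; here the ground-truth weights are ranked directly just to keep the sketch short.

    ```python
    import numpy as np

    def rnn_step(x, W, b, dt=0.1, decay=1.0):
        """One Euler step of dx/dt = sigmoid(W x + b) - decay * x."""
        return x + dt * (1.0 / (1.0 + np.exp(-(W @ x + b))) - decay * x)

    rng = np.random.default_rng(3)
    n = 6
    W = np.zeros((n, n))
    W[1, 0], W[2, 1], W[4, 3] = 3.0, -3.0, 2.5   # regulatory wiring
    b = rng.normal(0.0, 0.1, n)

    x = rng.uniform(0, 1, n)
    series = [x]
    for _ in range(100):                          # gene expression series
        series.append(rnn_step(series[-1], W, b))

    # Edge rank assignment: larger |w_ij| means a stronger candidate edge.
    ranks = sorted(((abs(W[i, j]), j, i) for i in range(n)
                    for j in range(n) if i != j), reverse=True)
    print([(src, dst) for _, src, dst in ranks[:3]])
    ```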

  20. Robustness and Optimization of Complex Networks : Reconstructability, Algorithms and Modeling

    NARCIS (Netherlands)

    Liu, D.

    2013-01-01

    The infrastructure networks, including the Internet, telecommunication networks, electrical power grids, transportation networks (road, railway, waterway, and airway networks), gas networks and water networks, are becoming more and more complex. The complex infrastructure networks are crucial to our

  1. Reconstruction of three-dimensional porous media using generative adversarial neural networks

    Science.gov (United States)

    Mosser, Lukas; Dubrule, Olivier; Blunt, Martin J.

    2017-10-01

    To evaluate the variability of multiphase flow properties of porous media at the pore scale, it is necessary to acquire a number of representative samples of the void-solid structure. While modern X-ray computed tomography has made it possible to extract three-dimensional images of the pore space, assessment of the variability in the inherent material properties is often experimentally not feasible. We present a method to reconstruct the solid-void structure of porous media by applying a generative neural network that allows an implicit description of the probability distribution represented by three-dimensional image data sets. We show, by using an adversarial learning approach for neural networks, that this method of unsupervised learning is able to generate representative samples of porous media that honor their statistics. We successfully compare measures of pore morphology, such as the Euler characteristic, two-point statistics, and directional single-phase permeability of synthetic realizations with the calculated properties of a bead pack, Berea sandstone, and Ketton limestone. Results show that generative adversarial networks can be used to reconstruct high-resolution three-dimensional images of porous media at different scales that are representative of the morphology of the images used to train the neural network. The fully convolutional nature of the trained neural network allows the generation of large samples while maintaining computational efficiency. Compared to classical stochastic methods of image reconstruction, the implicit representation of the learned data distribution can be stored and reused to generate multiple realizations of the pore structure very rapidly.
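
    A hedged sketch of the generator half of such a model: a fully convolutional 3D network mapping a latent noise volume to a grey-level porous-media realization. Layer sizes are illustrative, not the paper's architecture; PyTorch is assumed.

    ```python
    import torch
    import torch.nn as nn

    class Generator3D(nn.Module):
        def __init__(self, z_channels=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.ConvTranspose3d(z_channels, 64, 4, stride=2, padding=1),
                nn.BatchNorm3d(64), nn.ReLU(inplace=True),
                nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1),
                nn.BatchNorm3d(32), nn.ReLU(inplace=True),
                nn.ConvTranspose3d(32, 1, 4, stride=2, padding=1),
                nn.Tanh(),               # grey levels in [-1, 1]
            )

        def forward(self, z):
            return self.net(z)

    g = Generator3D()
    z = torch.randn(1, 32, 4, 4, 4)      # latent volume, not a flat vector
    print(g(z).shape)                    # (1, 1, 32, 32, 32) realization
    # Being fully convolutional, a larger latent volume (e.g. 8x8x8)
    # yields a proportionally larger sample, as the abstract notes.
    ```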

  2. Interrogation Methods and Terror Networks

    Science.gov (United States)

    Baccara, Mariagiovanna; Bar-Isaac, Heski

    We examine how the structure of terror networks varies with legal limits on interrogation and the ability of authorities to extract information from detainees. We assume that terrorist networks are designed to respond optimally to a tradeoff caused by information exchange: Diffusing information widely leads to greater internal efficiency, but it leaves the organization more vulnerable to law enforcement. The extent of this vulnerability depends on the law enforcement authority’s resources, strategy and interrogation methods. Recognizing that the structure of a terrorist network responds to the policies of law enforcement authorities allows us to begin to explore the most effective policies from the authorities’ point of view.

  3. Reconstruction-classification method for quantitative photoacoustic tomography

    CERN Document Server

    Malone, Emma; Cox, Ben T; Arridge, Simon R

    2015-01-01

    We propose a combined reconstruction-classification method for simultaneously recovering absorption and scattering in turbid media from images of absorbed optical energy. This method exploits knowledge that optical parameters are determined by a limited number of classes to iteratively improve their estimate. Numerical experiments show that the proposed approach allows for accurate recovery of absorption and scattering in 2 and 3 dimensions, and delivers superior image quality with respect to traditional reconstruction-only approaches.

  4. Data-Driven Neural Network Model for Robust Reconstruction of Automobile Casting

    Science.gov (United States)

    Lin, Jinhua; Wang, Yanjie; Li, Xin; Wang, Lu

    2017-09-01

    In computer vision systems, it is a challenging task to robustly reconstruct the complex 3D geometry of automobile castings: 3D scanning data are usually corrupted by noise and the scanning resolution is low, which normally leads to incomplete matching and drift. In order to solve these problems, a data-driven local geometric learning model is proposed to achieve robust reconstruction of automobile castings. To mitigate the interference of sensor noise and to be compatible with incomplete scanning data, a 3D convolutional neural network is established to match the local geometric features of automobile castings. The proposed neural network combines the geometric feature representation with a correlation metric function to robustly match local correspondences. We use the truncated distance field (TDF) around each key point to represent the 3D surface of the casting geometry, so that the model can be directly embedded into 3D space to learn the geometric feature representation. Finally, the training labels are automatically generated for deep learning based on an existing RGB-D reconstruction algorithm, which yields the same global key matching descriptor. The experimental results show that the matching accuracy of our network is 92.2% for automobile castings and the closed-loop rate is about 74.0% when the matching tolerance threshold τ is 0.2. The matching descriptors performed well, retaining 81.6% matching accuracy at a 95% closed-loop rate. For sparse geometric castings where initial matching fails, the 3D object can still be reconstructed robustly by training the key descriptors. Our method performs 3D reconstruction robustly for complex automobile castings.

  5. Reconstruction of the metabolic network of Pseudomonas aeruginosa to interrogate virulence factor synthesis

    DEFF Research Database (Denmark)

    Bartell, Jennifer; Blazier, Anna S; Yen, Phillip

    2017-01-01

    to metabolism. We evaluate the complex interrelationships between growth and virulence-linked pathways using a genome-scale metabolic network reconstruction of Pseudomonas aeruginosa strain PA14 and an updated, expanded reconstruction of P. aeruginosa strain PAO1. The PA14 reconstruction accounts...

  6. AIR: fused Analytical and Iterative Reconstruction method for computed tomography

    CERN Document Server

    Yang, Liu; Qi, Sharon X; Gao, Hao

    2013-01-01

    Purpose: CT image reconstruction techniques fall into two major categories: analytical reconstruction (AR) methods and iterative reconstruction (IR) methods. AR reconstructs images through analytical formulas, such as filtered backprojection (FBP) in 2D and the Feldkamp-Davis-Kress (FDK) method in 3D, which can be either mathematically exact or approximate. On the other hand, IR is often based on a discrete forward model of the X-ray transform and formulated as a minimization problem with some appropriate image regularization, so that the reconstructed image corresponds to the minimizer of the optimization problem. This work investigates the fused analytical and iterative reconstruction (AIR) method. Methods: Based on IR with L1-type image regularization, AIR is formulated with an AR-specific preconditioner in the data fidelity term, which results in a minimal change of the solution algorithm: the adjoint X-ray transform is replaced by the filtered X-ray transform. As a proof-of-concept 2D example of AIR, FB...

  7. Multiple network interface core apparatus and method

    Science.gov (United States)

    Underwood, Keith D [Albuquerque, NM; Hemmert, Karl Scott [Albuquerque, NM

    2011-04-26

    A network interface controller and network interface control method comprising providing a single integrated circuit as a network interface controller and employing a plurality of network interface cores on the single integrated circuit.

  8. Reconstruction methods for phase-contrast tomography

    Energy Technology Data Exchange (ETDEWEB)

    Raven, C.

    1997-02-01

    Phase contrast imaging with coherent X-rays can be divided into outline imaging and holography, depending on the wavelength λ, the object size d and the object-to-detector distance r. When r << d²/λ, phase contrast occurs only in regions where the refractive index changes rapidly, i.e. at interfaces and edges in the sample. With increasing object-to-detector distance we enter the regime of holographic imaging. The image contrast outside the shadow region of the object is due to interference of the direct, undiffracted beam and a beam diffracted by the object, or, in terms of holography, the interference of a reference wave with the object wave. Both outline imaging and holography offer the possibility of obtaining three-dimensional information about the sample in conjunction with a tomographic technique, but the data treatment and the kind of information one can obtain from the reconstruction differ.

  9. Reconstruction and analysis of nutrient-induced phosphorylation networks in Arabidopsis thaliana.

    Directory of Open Access Journals (Sweden)

    Guangyou eDuan

    2013-12-01

    Full Text Available Elucidating the dynamics of molecular processes in living organisms in response to external perturbations is a central goal in modern systems biology. We investigated the dynamics of protein phosphorylation events in Arabidopsis thaliana exposed to changing nutrient conditions. Phosphopeptide expression levels were detected at five consecutive time points over a time interval of 30 minutes after nutrient resupply following prior starvation. The three tested inorganic, ionic nutrients (NH4+, NO3-, PO43-) elicited similar phosphosignaling responses that were distinguishable from those invoked by the sugars (mannitol, sucrose). When embedded in the protein-protein interaction network of Arabidopsis thaliana, phosphoproteins were found to exhibit a higher degree than average proteins. Based on the time-series data, we reconstructed a network of regulatory interactions mediated by phosphorylation. The performance of different network inference methods was evaluated by the observed likelihood of physical interactions within and across different subcellular compartments and based on gene ontology semantic similarity. The dynamic phosphorylation network was then reconstructed using a Pearson correlation method with added directionality based on partial variance differences. The topology of the inferred integrated network corresponds to an information dissemination architecture, in which the phosphorylation signal is passed on to an increasing number of phosphoproteins stratified into an initiation, processing, and effector layer. Specific phosphorylation peptide motifs associated with the distinct layers were identified, indicating the action of layer-specific kinases. Despite the limited temporal resolution, combined with information on subcellular location, the available time-series data proved useful for reconstructing the dynamics of the molecular signaling cascade in response to nutrient stress conditions in the plant Arabidopsis thaliana.

  10. Multilevel method for modeling large-scale networks.

    Energy Technology Data Exchange (ETDEWEB)

    Safro, I. M. (Mathematics and Computer Science)

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of real networks, generating artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to the understanding of network modeling, investigating its structure and properties, and generating artificial networks. Most modern network generation methods are based either on various random graph models (reinforced by a set of properties such as a power law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of the network hierarchy but with the same finest elements of the network. However, in many cases methods that include randomization and replication elements on the finest relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability of algorithms on real (artificial) data that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, randomization and satisfying some attribute at the same time can abolish those topological attributes that have been undefined or hidden from

  11. Filter-based reconstruction methods for tomography

    NARCIS (Netherlands)

    Pelt, D.M.

    2016-01-01

    In X-ray tomography, a three-dimensional image of the interior of an object is computed from multiple X-ray images, acquired over a range of angles. Two types of methods are commonly used to compute such an image: analytical methods and iterative methods. Analytical methods are computationally

  12. Reconstruction of social group networks from friendship networks using a tag-based model

    Science.gov (United States)

    Guan, Yuan-Pan; You, Zhi-Qiang; Han, Xiao-Pu

    2016-12-01

    A social group is a type of mesoscopic structure that connects human individuals at the microscopic level to the global structure of society. In this paper, we propose a tag-based model in which social groups expand along the edges that connect two neighbors sharing a similar tag of interest. The model runs on a real-world friendship network, and its simulation results show that various properties of the simulated group network fit well with empirical analyses of real-world social groups, indicating that the model captures the major mechanism driving the evolution of social groups, successfully reconstructs the social group network from a friendship network, and sheds light on the mining of relationships between social functional organizations.

  13. Maximum-entropy networks pattern detection, network reconstruction and graph combinatorics

    CERN Document Server

    Squartini, Tiziano

    2017-01-01

    This book is an introduction to maximum-entropy models of random graphs with given topological properties and their applications. Its original contribution is the reformulation of many seemingly different problems in the study of both real networks and graph theory within the unified framework of maximum entropy. Particular emphasis is put on the detection of structural patterns in real networks, on the reconstruction of the properties of networks from partial information, and on the enumeration and sampling of graphs with given properties. After a first introductory chapter explaining the motivation, focus, aim and message of the book, Chapter 2 introduces the formal construction of maximum-entropy ensembles of graphs with local topological constraints. Chapter 3 focuses on the problem of pattern detection in real networks and provides a powerful way to disentangle nontrivial higher-order structural features from those that can be traced back to simpler local constraints. Chapter 4 focuses on the problem o...

  14. Novel trace norm regularization method for fluorescence molecular tomography reconstruction

    Science.gov (United States)

    Liu, Yuhao; Liu, Jie; An, Yu; Jiang, Shixin; Ye, Jinzuo; Mao, Yamin; He, Kunshan; Zhang, Guanglei; Chi, Chongwei; Tian, Jie

    2017-02-01

    Fluorescence molecular tomography (FMT) is developing rapidly in the field of molecular imaging. FMT has been used in surgical navigation for tumor resection and has many potential applications at the physiological, metabolic, and molecular levels in tissues. Due to the ill-posed nature of the problem, regularized methods are generally adopted. In this paper, we propose a region reconstruction method for FMT that employs trace norm regularization, where the trace norm penalty is defined as the sum of the singular values of the matrix. The proposed method adopts a priori information, namely the structured sparsity of the fluorescent regions, for FMT reconstruction. To improve solution efficiency, the accelerated proximal gradient algorithm is used to speed up the computation. A numerical phantom experiment was conducted to evaluate the performance of the proposed trace norm regularization method. The simulation study shows that the proposed method achieves accurate and effective image reconstruction.
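
    The computational heart of trace norm regularization is its proximal operator, singular value soft-thresholding, which an accelerated proximal gradient loop applies once per iteration. A minimal NumPy sketch on a synthetic low-rank matrix:

    ```python
    import numpy as np

    def prox_trace_norm(X, tau):
        """argmin_Z 0.5*||Z - X||_F^2 + tau*||Z||_*  (soft-threshold SVD)."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    rng = np.random.default_rng(4)
    low_rank = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 20))
    noisy = low_rank + 0.1 * rng.standard_normal((30, 20))
    denoised = prox_trace_norm(noisy, tau=1.0)
    print(np.linalg.matrix_rank(denoised, tol=1e-6))  # low rank again
    ```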

  15. Compressive sensing reconstruction of feed-forward connectivity in pulse-coupled nonlinear networks

    Science.gov (United States)

    Barranca, Victor J.; Zhou, Douglas; Cai, David

    2016-06-01

    Utilizing the sparsity ubiquitous in real-world network connectivity, we develop a theoretical framework for efficiently reconstructing sparse feed-forward connections in a pulse-coupled nonlinear network from its output activities. Using only a small ensemble of random inputs, we solve this inverse problem through compressive sensing theory, based on a hidden linear structure intrinsic to the nonlinear network dynamics. The accuracy of the reconstruction is further verified by the fact that complex inputs can be well recovered using the reconstructed connectivity. We expect this Rapid Communication to provide a new perspective for understanding the structure-function relationship as well as the compressive sensing principle in nonlinear network dynamics.
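
    A hedged sketch of the recovery step: once a hidden linear model y = A w is in hand, a sparse weight vector is recovered from far fewer measurements than unknowns by l1-regularized least squares, here via plain ISTA.

    ```python
    import numpy as np

    def ista(A, y, lam=0.05, n_iter=500):
        """Minimize 0.5*||A w - y||^2 + lam*||w||_1 by proximal gradient."""
        L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of grad
        w = np.zeros(A.shape[1])
        for _ in range(n_iter):
            g = w - A.T @ (A @ w - y) / L   # gradient step
            w = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
        return w

    rng = np.random.default_rng(5)
    n_in, n_meas = 200, 40                  # far fewer probes than inputs
    w_true = np.zeros(n_in)
    w_true[rng.choice(n_in, 8, replace=False)] = rng.normal(0, 1, 8)
    A = rng.standard_normal((n_meas, n_in))  # random input ensemble
    w_rec = ista(A, A @ w_true)
    print(np.linalg.norm(w_rec - w_true) / np.linalg.norm(w_true))
    ```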

  16. A Comparison of Methods for Ocean Reconstruction from Sparse Observations

    Science.gov (United States)

    Streletz, G. J.; Kronenberger, M.; Weber, C.; Gebbie, G.; Hagen, H.; Garth, C.; Hamann, B.; Kreylos, O.; Kellogg, L. H.; Spero, H. J.

    2014-12-01

    We present a comparison of two methods for developing reconstructions of oceanic scalar property fields from sparse scattered observations. Observed data from deep sea core samples provide valuable information regarding the properties of oceans in the past. However, because the locations of sample sites are distributed on the ocean floor in a sparse and irregular manner, developing a global ocean reconstruction is a difficult task. Our methods include a flow-based and a moving least squares-based approximation method. The flow-based method augments the process of interpolating or approximating scattered scalar data by incorporating known flow information. The scheme exploits this additional knowledge to define a non-Euclidean distance measure between points in the spatial domain. This distance measure is used to create a reconstruction of the desired scalar field on the spatial domain. The resulting reconstruction thus incorporates information from both the scattered samples and the known flow field. The second method does not assume a known flow field, but rather works solely with the observed scattered samples. It is based on a modification of the moving least squares approach, a weighted least squares approximation method that blends local approximations into a global result. The modifications target the selection of data used for these local approximations and the construction of the weighting function. The definition of distance used in the weighting function is crucial for this method, so we use a machine learning approach to determine a set of near-optimal parameters for the weighting. We have implemented both of the reconstruction methods and have tested them using several sparse oceanographic datasets. Based upon these studies, we discuss the advantages and disadvantages of each method and suggest possible ways to combine aspects of both methods in order to achieve an overall high-quality reconstruction.
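
    The second method's starting point can be sketched compactly: a locally weighted linear fit blended over evaluation points. The Gaussian weight and its bandwidth below are placeholders for the machine-learned distance parameters described above.

    ```python
    import numpy as np

    def mls(x_eval, pts, vals, h=0.3):
        """Moving least squares with a Gaussian weight of bandwidth h."""
        out = np.empty(len(x_eval))
        for k, x in enumerate(x_eval):
            w = np.exp(-np.sum((pts - x) ** 2, axis=1) / h ** 2)
            B = np.hstack([np.ones((len(pts), 1)), pts - x])  # linear basis
            sw = np.sqrt(w)                    # weighted least squares
            coef, *_ = np.linalg.lstsq(B * sw[:, None], vals * sw,
                                       rcond=None)
            out[k] = coef[0]                   # fit value at the centre
        return out

    rng = np.random.default_rng(6)
    pts = rng.uniform(-1, 1, size=(200, 2))           # sparse sample sites
    vals = np.sin(2 * pts[:, 0]) + pts[:, 1] ** 2     # observed property
    grid = np.array([[0.0, 0.0], [0.5, -0.5]])
    print(mls(grid, pts, vals))   # cf. exact values 0.0 and ~1.09
    ```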

  17. Deep Learning Methods for Particle Reconstruction in the HGCal

    CERN Document Server

    Arzi, Ofir

    2017-01-01

    The High Granularity end-cap Calorimeter is part of the phase-2 CMS upgrade (see Figure 1) [1]. Its goal is to provide measurements of high resolution in time, space and energy. Given such measurements, the purpose of this work is to discuss the use of Deep Neural Networks for the tasks of particle and trajectory reconstruction, identification and energy estimation, during my participation in the CERN Summer Students Program.

  19. Network Forensics Method Based on Evidence Graph and Vulnerability Reasoning

    Directory of Open Access Journals (Sweden)

    Jingsha He

    2016-11-01

    Full Text Available As the Internet becomes larger in scale, more complex in structure and more diversified in traffic, the number of crimes that utilize computer technologies is also increasing at a phenomenal rate. To react to the increasing number of computer crimes, the field of computer and network forensics has emerged. The general purpose of network forensics is to find malicious users or activities by gathering and dissecting firm evidence about computer crimes, e.g., hacking. However, due to the large volume of Internet traffic, not all the traffic captured and analyzed is valuable for investigation or confirmation. After analyzing some existing network forensics methods to identify common shortcomings, we propose in this paper a new network forensics method that uses a combination of network vulnerability and network evidence graphs. In our proposed method, we use vulnerability evidence and a reasoning algorithm to reconstruct attack scenarios and then backtrack the network packets to find the original evidence. Our proposed method can reconstruct attack scenarios effectively and identify multi-staged attacks through evidential reasoning. Results of experiments show that the evidence graph constructed using our method is more complete and credible while possessing reasoning capability.

  20. High-performance computing and networking as tools for accurate emission computed tomography reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Passeri, A. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); Formiconi, A.R. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); De Cristofaro, M.T.E.R. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); Pupi, A. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); Meldolesi, U. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy)

    1997-04-01

    It is well known that the quantitative potential of emission computed tomography (ECT) relies on the ability to compensate for resolution, attenuation and scatter effects. Reconstruction algorithms which are able to take these effects into account are highly demanding in terms of computing resources. The reported work aimed to investigate the use of a parallel high-performance computing platform for ECT reconstruction taking into account an accurate model of the acquisition of single-photon emission tomographic (SPET) data. An iterative algorithm with an accurate model of the variable system response was ported on the MIMD (Multiple Instruction Multiple Data) parallel architecture of a 64-node Cray T3D massively parallel computer. The system was organized to make it easily accessible even from low-cost PC-based workstations through standard TCP/IP networking. A complete brain study of 30 (64 x 64) slices could be reconstructed from a set of 90 (64 x 64) projections with ten iterations of the conjugate gradients algorithm in 9 s, corresponding to an actual speed-up factor of 135. This work demonstrated the possibility of exploiting remote high-performance computing and networking resources from hospital sites by means of low-cost workstations using standard communication protocols without particular problems for routine use. The achievable speed-up factors allow the assessment of the clinical benefit of advanced reconstruction techniques which require a heavy computational burden for the compensation effects such as variable spatial resolution, scatter and attenuation. The possibility of using the same software on the same hardware platform with data acquired in different laboratories with various kinds of SPET instrumentation is appealing for software quality control and for the evaluation of the clinical impact of the reconstruction methods. (orig.). With 4 figs., 1 tab.

  1. Neural network based real-time reconstruction of KSTAR magnetic equilibria with Bayesian-based preprocessing

    Science.gov (United States)

    Joung, Semin; Kwak, Sehyun; Ghim, Y.-C.

    2017-10-01

    Obtaining plasma shapes during tokamak discharges requires real-time estimation of the magnetic configuration using a Grad-Shafranov solver such as EFIT. Since off-line EFIT is computationally intensive and real-time reconstructions do not agree with the results of off-line EFIT within our desired accuracy, we use a neural network to generate an off-line-quality equilibrium in real time. To train the neural network (two hidden layers with 30 and 20 nodes), we create a database consisting of the magnetic signals and off-line EFIT results from KSTAR as inputs and targets, respectively. To compensate for drifts in the magnetic signals originating from electronic circuits, we develop a Bayesian-based two-step real-time correction method. Additionally, we infer missing inputs, i.e. when some of the inputs to the network are not usable, using a Gaussian process coupled with a Bayesian model whose likelihood is determined based on Maxwell's equations. We find that our network can withstand at least up to 20% input errors. Note that this real-time reconstruction scheme is not yet implemented for KSTAR operation.
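
    For concreteness, a shape-only sketch of the surrogate described above: a two-hidden-layer network (30 and 20 nodes, as stated) mapping drift-corrected magnetic signals to equilibrium outputs. Weights are random here, and the activation, signal count and output count are assumptions; in practice the weights come from training on the off-line EFIT database.

    ```python
    import numpy as np

    def forward(signals, params):
        h = signals
        for W, b in params[:-1]:
            h = np.tanh(W @ h + b)       # hidden activations (assumed tanh)
        W, b = params[-1]
        return W @ h + b                 # linear output layer

    rng = np.random.default_rng(7)
    sizes = [84, 30, 20, 12]             # signals -> 30 -> 20 -> outputs
    params = [(rng.normal(0, 0.1, (m, n)), np.zeros(m))
              for n, m in zip(sizes[:-1], sizes[1:])]
    print(forward(rng.standard_normal(sizes[0]), params).shape)   # (12,)
    ```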

  2. Fast Tomographic Reconstruction From Limited Data Using Artificial Neural Networks

    NARCIS (Netherlands)

    D.M. Pelt (Daniel); K.J. Batenburg (Joost)

    2013-01-01

    Image reconstruction from a small number of projections is a challenging problem in tomography. Advanced algorithms that incorporate prior knowledge can sometimes produce accurate reconstructions, but they typically require long computation times. Furthermore, the required prior

  3. COMPARISON OF HOLOGRAPHIC AND ITERATIVE METHODS FOR AMPLITUDE OBJECT RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    I. A. Shevkunov

    2015-01-01

    Full Text Available Experimental comparison of four methods for wavefront reconstruction is presented. We considered two iterative and two holographic methods with different mathematical models and recovery algorithms. The first two methods do not use a reference wave in the recording scheme, which reduces the stability requirements on the installation. A major role in phase reconstruction by such methods is played by a set of spatial intensity distributions recorded as the recording matrix is moved along the optical axis. The obtained data are used sequentially for wavefront reconstruction in an iterative procedure: the wavefront is numerically propagated between the planes, the phase information is retained in every plane, and the calculated amplitude distributions are replaced by the measured ones. In the first of the compared methods, a two-dimensional Fresnel transform and iterative calculation in the object plane are used as the mathematical model. In the second approach, the angular spectrum method is used for numerical wavefront propagation, and the iterative calculation is carried out only between closely located data-registration planes. Two digital holography methods, based on the use of a reference wave in the recording scheme and differing from each other in the numerical reconstruction algorithm, are compared with the first two methods. The comparison showed that the iterative method based on the 2D Fresnel transform gives results comparable with those of the common holographic method with Fourier filtering. The holographic method was found to be the best among those considered for reconstructing the complex amplitude of the object.
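
    The numerical workhorse shared by these schemes is propagation of a sampled complex field between planes; the iterative methods then keep the propagated phase while enforcing the measured amplitude in each plane. Below is a minimal angular spectrum propagator with an illustrative grid and wavelength.

    ```python
    import numpy as np

    def angular_spectrum(field, dz, wavelength=532e-9, dx=5e-6):
        """Propagate a square sampled complex field by distance dz."""
        n = field.shape[0]
        fx = np.fft.fftfreq(n, d=dx)
        FX, FY = np.meshgrid(fx, fx)
        arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
        kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # drop evanescent
        return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

    n = 256
    field = np.ones((n, n), dtype=complex)
    field[96:160, 96:160] = 0.2            # an amplitude object
    plane = angular_spectrum(field, dz=2e-3)
    measured = np.abs(plane) ** 2          # what the camera records
    # Iterative update: keep the phase, enforce the measured amplitude.
    estimate = np.sqrt(measured) * np.exp(1j * np.angle(plane))
    print(estimate.shape)
    ```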

  4. Fast alternating projection methods for constrained tomographic reconstruction.

    Science.gov (United States)

    Liu, Li; Han, Yongxin; Jin, Mingwu

    2017-01-01

    Alternating projection algorithms are easy to implement and effective for large-scale complex optimization problems, such as constrained reconstruction in X-ray computed tomography (CT). A typical method is to use projection onto convex sets (POCS) for data fidelity and nonnegativity constraints, combined with total variation (TV) minimization (so-called TV-POCS), for sparse-view CT reconstruction. However, this type of method relies on empirically selected parameters for satisfactory reconstruction, is generally slow, and lacks convergence analysis. In this work, we use a convex feasibility set approach to address the problems associated with TV-POCS and propose a framework using full sequential alternating projections, or POCS (FS-POCS), to find the solution in the intersection of the convex constraints of bounded TV function, bounded data fidelity error and non-negativity. The rationale behind FS-POCS is that the mathematically optimal solution of a constrained objective function may not be the physically optimal solution. The breakdown of constrained reconstruction into an intersection of several feasible sets can lead to faster convergence and better quantification of reconstruction parameters in a physically meaningful way, rather than by empirical trial and error. In addition, for large-scale optimization problems, first-order methods are usually used. Not only is the condition for convergence of gradient-based methods derived, but a primal-dual hybrid gradient (PDHG) method is also used for fast convergence of the bounded TV term. The newly proposed FS-POCS is evaluated and compared with TV-POCS and another convex feasibility projection method (CPTV) using both digital phantom and pseudo-real CT data, showing superior performance in reconstruction speed, image quality and quantification.
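
    The skeleton of a sequential POCS sweep is simply a cycle of projectors onto the constraint sets. The toy below uses only two easy sets, nonnegativity and a single data-consistency hyperplane; FS-POCS adds the bounded-TV and bounded-fidelity sets, the former handled with the PDHG step mentioned above.

    ```python
    import numpy as np

    def project_nonneg(x):
        return np.maximum(x, 0.0)            # projector onto {x >= 0}

    def project_hyperplane(x, a, b):
        """Projector onto the measurement set {x : a.x = b}."""
        return x + (b - a @ x) / (a @ a) * a

    rng = np.random.default_rng(8)
    a, b = rng.random(50), 3.0
    x = rng.standard_normal(50)
    for _ in range(200):                     # full sequential sweeps
        x = project_nonneg(project_hyperplane(x, a, b))
    print(x.min(), a @ x - b)                # both constraints ~satisfied
    ```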

  5. EnzDP: improved enzyme annotation for metabolic network reconstruction based on domain composition profiles.

    Science.gov (United States)

    Nguyen, Nam-Ninh; Srihari, Sriganesh; Leong, Hon Wai; Chong, Ket-Fah

    2015-10-01

    Determining the entire complement of enzymes and their enzymatic functions is a fundamental step for reconstructing the metabolic network of cells. High-quality enzyme annotation helps in enhancing metabolic networks reconstructed from the genome, especially by reducing gaps and increasing the enzyme coverage. Currently, structure-based and network-based approaches can only cover a limited number of enzyme families, and the accuracy of homology-based approaches can be further improved. The bottom-up homology-based approach improves coverage by rebuilding Hidden Markov Model (HMM) profiles for all known enzymes; however, its clustering procedure relies firmly on the BLAST similarity score, ignores protein domains/patterns, and is sensitive to changes in cut-off thresholds. Here, we use functional domain architecture to score the association between domain families and enzyme families (Domain-Enzyme Association Scoring, DEAS). The DEAS score is used to calculate the similarity between proteins, which is then used in the clustering procedure instead of the sequence similarity score. We improve the enzyme annotation protocol using a stringent classification procedure, choosing optimal threshold settings and checking for active sites. Our analysis shows that our stringent protocol EnzDP can cover up to 90% of enzyme families available in Swiss-Prot. It achieves a high accuracy of 94.5% based on five-fold cross-validation. EnzDP outperforms existing methods across several testing scenarios. Thus, EnzDP serves as a reliable automated tool for enzyme annotation and metabolic network reconstruction. Available at: www.comp.nus.edu.sg/~nguyennn/EnzDP.

  6. Comparison of advanced iterative reconstruction methods for SPECT/CT

    Energy Technology Data Exchange (ETDEWEB)

    Knoll, Peter; Koechle, Gunnar; Mirzaei, Siroos [Wilhelminenspital, Vienna (Austria). Dept. of Nuclear Medicine and PET Center; Kotalova, Daniela; Samal, Martin [Charles Univ. Prague, Prague (Czech Republic); Kuzelka, Ivan; Zadrazil, Ladislav [Hospital Havlickuv Brod (Czech Republic); Minear, Greg [Landesklinikum St. Poelten (Austria). Dept. of Internal Medicine II; Bergmann, Helmar [Medical Univ. of Vienna (Austria). Center for Medical Physics and Biomedical Engineering

    2012-07-01

    Aim: Corrective image reconstruction methods that produce reconstructed images with improved spatial resolution and decreased noise level have recently become commercially available. In this work, we tested the performance of three new software packages, with reconstruction schemes recommended by the manufacturers, using physical phantoms simulating realistic clinical settings. Methods: A specially designed resolution phantom containing three {sup 99m}Tc line sources and the NEMA NU-2 image quality phantom were acquired on three different SPECT/CT systems (General Electric Infinia, Philips BrightView, and Siemens Symbia T6). Both phantoms were measured with the trunk filled with a {sup 99m}Tc-water solution. The projection data were reconstructed using GE's Evolution for Bone®, Philips Astonish®, and Siemens Flash3D® software. The reconstruction parameters employed (number of iterations and subsets, choice of post-filtering) followed the recommendations of each vendor. These results were compared with reference reconstructions using the ordered subset expectation maximization (OSEM) reconstruction scheme. Results: The best results (smallest value for resolution, highest percent contrast values) for all three packages were found for the scatter-corrected data without applying any post-filtering. The advanced reconstruction methods improve the full width at half maximum (FWHM) of the line sources from 11.4 to 9.5 mm (GE), from 9.1 to 6.4 mm (Philips), and from 12.1 to 8.9 mm (Siemens) if no additional post-filter is applied. The total image quality control index measured for a concentration ratio of 8:1 improves from 147 to 189 for GE, from 179 to 325 for Philips, and from 217 to 320 for Siemens, using the reference method for comparison. The same trends can be observed for the 4:1 concentration ratio. The use of a post-filter reduces the background variability approximately by a factor of two.

  7. Bubble reconstruction method for wire-mesh sensors measurements

    Science.gov (United States)

    Mukin, Roman V.

    2016-08-01

    A new algorithm is presented for post-processing of void fraction measurements with wire-mesh sensors, particularly for identifying and reconstructing bubble surfaces in a two-phase flow. The method combines the bubble recognition algorithm presented in Prasser (Nuclear Eng Des 237(15):1608, 2007) and the Poisson surface reconstruction algorithm developed in Kazhdan et al. (Poisson surface reconstruction. In: Proceedings of the fourth Eurographics symposium on geometry processing 7, 2006). To verify the proposed technique, the reconstructed individual bubble shapes were compared with those obtained numerically in Sato and Ničeno (Int J Numer Methods Fluids 70(4):441, 2012). Using the difference between reconstructed and reference bubble shapes, the accuracy of the proposed algorithm was estimated. Next, the algorithm was applied to void fraction measurements performed in Ylönen (High-resolution flow structure measurements in a rod bundle. Diss., Eidgenössische Technische Hochschule ETH Zürich, Nr. 20961, 2013) by means of wire-mesh sensors in a rod bundle geometry. The reconstructed bubble shape yields the bubble surface area and volume, and hence its Sauter diameter d_{32} as well. The Sauter diameter proved more suitable for bubble size characterization than the volumetric diameter d_{30}, and proved capable of capturing the bi-disperse bubble size distribution in the flow. The effect of a spacer grid was studied as well: for the given spacer grid and the considered flow rates, the bubble size frequency distribution peaks at almost the same position in all cases, approximately at d_{32} = 3.5 mm. This finding can be related to the specific geometry of the spacer grid or the air injection device applied in the experiments, or even to more fundamental properties of the bubble breakup and coagulation processes. In addition, an application of the new algorithm to the reconstruction of a large air-water interface in a tube bundle is demonstrated.
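
    The Sauter diameter follows directly from the surface area and volume of each reconstructed bubble. A minimal sketch, assuming the area A and volume V have already been computed from the reconstructed triangulated surface:

      from math import pi

      def sauter_diameter(volume, area):
          # d32 = 6V/A: diameter of the sphere having the same
          # volume-to-surface ratio as the reconstructed bubble
          return 6.0 * volume / area

      # Sanity check: for a unit sphere this recovers the diameter 2
      print(sauter_diameter(4.0 / 3.0 * pi, 4.0 * pi))  # 2.0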

  8. Regularization Reconstruction Method for Imaging Problems in Electrical Capacitance Tomography

    Science.gov (United States)

    Chu, Pan; Lei, Jing

    2017-11-01

    Electrical capacitance tomography (ECT) is a powerful visualization measurement technique for parametric measurement in multiphase flow systems. The inversion task in ECT is an ill-posed inverse problem, and an efficient numerical method that improves the precision of the reconstructed images is important for practical measurements. By introducing the Tikhonov regularization (TR) methodology, this paper puts forward a loss function that emphasizes the robustness of the estimation and the low-rank property of the imaging targets, converting the solution of the inverse problem in the ECT reconstruction task into a minimization problem. Inspired by the split Bregman (SB) algorithm, an iteration scheme is developed for solving the proposed loss function. Numerical experiment results validate that the proposed inversion method not only reconstructs the fine structures of the imaging targets, but also improves robustness.
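
    For orientation, a generic Tikhonov-regularized ECT inversion (the baseline the paper builds on, not its split Bregman scheme) can be written in a few lines; S is the sensitivity matrix, c the normalized capacitance vector, and lam the regularization parameter, all assumed inputs.

      import numpy as np

      def tikhonov_reconstruction(S, c, lam):
          # Solve min_g ||S g - c||^2 + lam ||g||^2 via the normal equations
          n = S.shape[1]
          return np.linalg.solve(S.T @ S + lam * np.eye(n), S.T @ c)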

  9. Methods for Analyzing Pipe Networks

    DEFF Research Database (Denmark)

    Nielsen, Hans Bruun

    1989-01-01

    The governing equations for a general network are first set up and then reformulated in terms of matrices. This is developed to show that the choice of model for the flow equations is essential for the behavior of the iterative method used to solve the problem. It is shown that it is better to formulate the flow equations in terms of pipe discharges than in terms of energy heads. The behavior of some iterative methods is compared in the initial phase with large errors. It is explained why the linear theory method oscillates when the iteration gets close to the solution, and it is further demonstrated that this method offers good starting values for a Newton-Raphson iteration.
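
    To make the discharge formulation concrete, here is a hypothetical Newton-Raphson sketch for the smallest interesting case: two pipes in parallel with a quadratic head-loss law h = r*q|q|. It illustrates the general idea rather than the paper's formulation.

      import numpy as np

      def solve_parallel_pipes(r1, r2, q_total, n_iter=20):
          # Unknowns: pipe discharges q = (q1, q2). Residuals: continuity
          # (q1 + q2 = q_total) and the loop energy equation
          # (r1*q1|q1| = r2*q2|q2|) for the head-loss law h = r*q|q|.
          q = np.array([q_total / 2.0, q_total / 2.0])  # starting values
          for _ in range(n_iter):
              f = np.array([
                  q[0] + q[1] - q_total,
                  r1 * q[0] * abs(q[0]) - r2 * q[1] * abs(q[1]),
              ])
              jac = np.array([
                  [1.0, 1.0],
                  [2.0 * r1 * abs(q[0]), -2.0 * r2 * abs(q[1])],
              ])
              q = q - np.linalg.solve(jac, f)
          return q

      print(solve_parallel_pipes(r1=1.0, r2=4.0, q_total=3.0))  # -> [2. 1.]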

  10. A Splitting-based Iterative Method for Sparse Reconstruction

    Directory of Open Access Journals (Sweden)

    Liquan Kang

    2016-02-01

    In this paper, we study an ℓ1-norm regularized minimization method for sparse solution recovery in compressed sensing and X-ray CT image reconstruction. In the proposed method, an alternating minimization algorithm is employed to solve the involved ℓ1-norm regularized minimization problem. Under some suitable conditions, the proposed algorithm is shown to be globally convergent. Numerical results indicate that the presented method is effective and promising.
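
    A widely used baseline for this kind of ℓ1-norm regularized problem is the iterative soft-thresholding algorithm (ISTA), which is in the same family as, though not identical to, the paper's alternating minimization; a minimal NumPy sketch:

      import numpy as np

      def soft_threshold(x, t):
          # Proximal operator of t*||x||_1
          return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

      def ista(A, b, lam, n_iter=500):
          # Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1
          L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
          return x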

  11. A consensus yeast metabolic network reconstruction obtained from a community approach to systems biology

    Science.gov (United States)

    Herrgård, Markus J.; Swainston, Neil; Dobson, Paul; Dunn, Warwick B.; Arga, K. Yalçin; Arvas, Mikko; Blüthgen, Nils; Borger, Simon; Costenoble, Roeland; Heinemann, Matthias; Hucka, Michael; Le Novère, Nicolas; Li, Peter; Liebermeister, Wolfram; Mo, Monica L.; Oliveira, Ana Paula; Petranovic, Dina; Pettifer, Stephen; Simeonidis, Evangelos; Smallbone, Kieran; Spasić, Irena; Weichart, Dieter; Brent, Roger; Broomhead, David S.; Westerhoff, Hans V.; Kırdar, Betül; Penttilä, Merja; Klipp, Edda; Palsson, Bernhard Ø.; Sauer, Uwe; Oliver, Stephen G.; Mendes, Pedro; Nielsen, Jens; Kell, Douglas B.

    2014-01-01

    Genomic data now allow the large-scale manual or semi-automated reconstruction of metabolic networks. A network reconstruction represents a highly curated organism-specific knowledge base. A few genome-scale network reconstructions have appeared for metabolism in the baker’s yeast Saccharomyces cerevisiae. These alternative network reconstructions differ in scope and content, and have furthermore used different terminologies to describe the same chemical entities, thus making comparisons between them difficult. The formulation of a ‘community consensus’ network that collects and formalizes the ‘community knowledge’ of yeast metabolism is thus highly desirable. We describe how we have produced a consensus metabolic network reconstruction for S. cerevisiae. Special emphasis is laid on referencing molecules to persistent databases or using database-independent forms such as SMILES or InChI strings, since this permits their chemical structure to be represented unambiguously and in a manner that permits automated reasoning. The reconstruction is readily available via a publicly accessible database and in the Systems Biology Markup Language, and we describe the manner in which it can be maintained as a community resource. It should serve as a common denominator for systems biology studies of yeast. Similar strategies will be of benefit to communities studying genome-scale metabolic networks of other organisms. PMID:18846089

  12. Reconstruction of CT images by the Bayes back projection method

    CERN Document Server

    Haruyama, M; Takase, M; Tobita, H

    2002-01-01

    In the course of research on quantitative assay for non-destructive measurement of radioactive waste, we have developed a unique program based on Bayesian theory for reconstruction of transmission computed tomography (TCT) images. Reconstruction of cross-section images in CT technology usually employs the filtered back projection method. The new image reconstruction program reported here is based on the Bayesian back projection method, and it has a function of iteratively improving the image at every step of the measurement. Namely, this method has the capability of promptly displaying a cross-section image corresponding to each angled projection datum from every measurement. Hence, it is possible to observe an improved cross-section view by reflecting each projection datum in almost real time. From the basic theory of the Bayesian back projection method, it can be applied to CT of the 1st, 2nd, and 3rd generations. This report deals with a reconstruction program of cross-section images in the CT of ...

  13. A Total Variation-Based Reconstruction Method for Dynamic MRI

    Directory of Open Access Journals (Sweden)

    Germana Landi

    2008-01-01

    In recent years, total variation (TV) regularization has become a popular and powerful tool for image restoration and enhancement. In this work, we apply TV minimization to improve the quality of dynamic magnetic resonance images. Dynamic magnetic resonance imaging is an increasingly popular clinical technique used to monitor spatio-temporal changes in tissue structure. Fast data acquisition is necessary in order to capture the dynamic process. Most commonly, the requirement of high temporal resolution is fulfilled by sacrificing spatial resolution. Therefore, the numerical methods have to address the issue of image reconstruction from limited Fourier data. One of the most successful techniques for dynamic imaging applications is the reduced-encoding imaging by generalized-series reconstruction method of Liang and Lauterbur. However, even though this method utilizes a priori data for optimal image reconstruction, the produced dynamic images are degraded by truncation artifacts, most notably Gibbs ringing, due to the low spatial resolution of the data. We use a TV regularization strategy in order to reduce these truncation artifacts in the dynamic images. The resulting TV minimization problem is solved by the fixed point iteration method of Vogel and Oman. The results of test problems with simulated and real data are presented to illustrate the effectiveness of the proposed approach in reducing the truncation artifacts of the reconstructed images.
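
    A minimal sketch of the TV building block may help. The gradient-descent loop below minimizes the smoothed-TV denoising objective 0.5*||u - f||^2 + lam*TV_beta(u); the same beta-smoothing underlies lagged-diffusivity fixed-point schemes such as Vogel and Oman's, though the exact iteration used in the paper differs.

      import numpy as np

      def tv_denoise(f, lam, beta=1e-3, tau=0.1, n_iter=200):
          # Gradient descent on 0.5*||u - f||^2 + lam*TV_beta(u), where
          # TV_beta uses the smoothed magnitude sqrt(ux^2 + uy^2 + beta^2).
          u = f.astype(float).copy()
          for _ in range(n_iter):
              ux = np.diff(u, axis=1, append=u[:, -1:])  # forward differences
              uy = np.diff(u, axis=0, append=u[-1:, :])
              mag = np.sqrt(ux ** 2 + uy ** 2 + beta ** 2)
              px, py = ux / mag, uy / mag
              # divergence of (px, py) via backward differences
              div = (np.diff(px, axis=1, prepend=px[:, :1])
                     + np.diff(py, axis=0, prepend=py[:, :1]))
              u -= tau * ((u - f) - lam * div)
          return u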

  14. Reconstructing Program Theories : Methods Available and Problems to be Solved

    NARCIS (Netherlands)

    Leeuw, Frans de

    2003-01-01

    This paper discusses methods for reconstructing theories underlying programs and policies. It describes three approaches. One is empirical–analytical in nature and focuses on interviews, documents and argumentational analysis. The second has strategic assessment, group dynamics, and dialogue as its

  15. Splinting of penis following microvascular reconstruction - A simple inexpensive method

    Directory of Open Access Journals (Sweden)

    Sharma Abhishek

    2009-01-01

    We present a simple method of splintage following microvascular reconstruction of the penis. The splint is made by removing the bases of two thermocol glasses and joining them with paper adhesive tapes to form a hollow cylinder that protects and supports the penis and keeps it vertical. The splint is slid over the catheter and the reconstructed penis and fixed to the lower abdominal wall and the thighs with paper tapes for stability. A window at the base of the splint is made for the purpose of observation, while the tip is monitored from the open end at the top.

  16. A Systematic, Automated Network Planning Method

    DEFF Research Database (Denmark)

    Holm, Jens Åge; Pedersen, Jens Myrup

    2006-01-01

    This paper describes a case study conducted to evaluate the viability of a systematic, automated network planning method. The motivation for developing the network planning method was that many data networks are planned in an ad hoc manner, with no assurance of the quality of the solution with respect to consistency and long-term characteristics. The developed method gives significant improvements on these parameters. The case study was conducted as a comparison between an existing network where the traffic was known and a proposed network designed by the developed method. It turned out that the proposed network performed better than the existing network with regard to the performance measurements used, which reflected how well the traffic was routed in the networks and the cost of establishing the networks. Challenges remain to be solved before the developed method can be used to design networks.

  17. Genome-scale reconstruction of the sigma factor network in Escherichia coli: topology and functional states

    DEFF Research Database (Denmark)

    Cho, Byung-Kwan; Kim, Donghyuk; Knight, Eric M.

    2014-01-01

    …and negative regulation by alternative σ-factors. Comparison with σ-factor binding in Klebsiella pneumoniae showed that transcriptional regulation of conserved genes in closely related species is unexpectedly divergent. Conclusions: The reconstructed network reveals the regulatory complexity…

  18. Graph methods for the investigation of metabolic networks in parasitology.

    Science.gov (United States)

    Cottret, Ludovic; Jourdan, Fabien

    2010-08-01

    Recently, a way forward was opened by the development of many mathematical methods to model and analyze genome-scale metabolic networks. Among them, methods based on graph models enable us to quickly perform large-scale analyses of large metabolic networks. However, it can be difficult for parasitologists to select the graph model and methods adapted to their biological questions. In this review, after briefly addressing the problem of metabolic network reconstruction, we propose an overview of the graph-based approaches used in whole-metabolic-network analyses. Applications highlight the usefulness of this kind of approach in the field of parasitology, especially by suggesting metabolic targets for new drugs. Their development still represents a major challenge in the fight against the numerous diseases caused by parasites.
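
    As a toy illustration of graph-based queries on a metabolic network (a hypothetical compound graph, using the networkx library):

      import networkx as nx

      # Toy compound graph: nodes are metabolites; edges join substrates
      # to products of the same (invented) reactions.
      g = nx.Graph()
      g.add_edges_from([
          ("glucose", "glucose-6P"),
          ("glucose-6P", "fructose-6P"),
          ("fructose-6P", "fructose-1,6P2"),
          ("fructose-1,6P2", "pyruvate"),
      ])

      # Typical large-scale graph queries: paths and hub detection
      print(nx.shortest_path(g, "glucose", "pyruvate"))
      print(dict(g.degree()))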

  19. lpNet: a linear programming approach to reconstruct signal transduction networks.

    Science.gov (United States)

    Matos, Marta R A; Knapp, Bettina; Kaderali, Lars

    2015-10-01

    With the widespread availability of high-throughput experimental technologies, it has become possible to study hundreds to thousands of cellular factors simultaneously, such as coding or non-coding mRNA or protein concentrations. Still, extracting information about the underlying regulatory or signaling interactions from these data remains a difficult challenge. We present a flexible approach towards network inference based on linear programming. Our method reconstructs the interactions of factors from a combination of perturbation/non-perturbation and steady-state/time-series data. We show on both simulated and real data that our method is able to reconstruct the underlying networks fast and efficiently, thus shedding new light on biological processes and, in particular, on disease mechanisms of action. We have implemented the approach as an R package available through Bioconductor. This R package is freely available under the GNU Public License (GPL-3) from bioconductor.org (http://bioconductor.org/packages/release/bioc/html/lpNet.html) and is compatible with most operating systems (Windows, Linux, Mac OS) and hardware architectures. bettina.knapp@helmholtz-muenchen.de Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  20. Pantograph: A template-based method for genome-scale metabolic model reconstruction.

    Science.gov (United States)

    Loira, Nicolas; Zhukova, Anna; Sherman, David James

    2015-04-01

    Genome-scale metabolic models are a powerful tool to study the inner workings of biological systems and to guide applications. The advent of cheap sequencing has brought the opportunity to create metabolic maps of biotechnologically interesting organisms. While this drives the development of new methods and automatic tools, network reconstruction remains a time-consuming process where extensive manual curation is required. This curation introduces specific knowledge about the modeled organism, either explicitly in the form of molecular processes, or indirectly in the form of annotations of the model elements. Paradoxically, this knowledge is usually lost when reconstruction of a different organism is started. We introduce the Pantograph method for metabolic model reconstruction. This method combines a template reaction knowledge base, orthology mappings between two organisms, and experimental phenotypic evidence, to build a genome-scale metabolic model for a target organism. Our method infers implicit knowledge from annotations in the template, and rewrites these inferences to include them in the resulting model of the target organism. The generated model is well suited for manual curation. Scripts for evaluating the model with respect to experimental data are automatically generated, to aid curators in iterative improvement. We present an implementation of the Pantograph method, as a toolbox for genome-scale model reconstruction, curation and validation. This open source package can be obtained from: http://pathtastic.gforge.inria.fr.

  1. A Robust Shape Reconstruction Method for Facial Feature Point Detection.

    Science.gov (United States)

    Tan, Shuqiu; Chen, Dongyi; Guo, Chenggang; Huang, Zhiqi

    2017-01-01

    Facial feature point detection has seen great research advances in recent years. Numerous methods have been developed and applied in practical face analysis systems. However, it is still a quite challenging task because of the large variability in expressions and gestures and the existence of occlusions in real-world photographs. In this paper, we present a robust sparse reconstruction method for face alignment problems. Instead of a direct regression between the feature space and the shape space, the concept of shape increment reconstruction is introduced. Moreover, a set of coupled overcomplete dictionaries, termed the shape increment dictionary and the local appearance dictionary, are learned in a regressive manner to select robust features and fit shape increments. Additionally, to make the learned model more generalized, we select the best-matched parameter set through extensive validation tests. Experimental results on three public datasets demonstrate that the proposed method achieves better robustness than state-of-the-art methods.

  2. Validation and reconstruction of FY-3B/MWRI soil moisture using an artificial neural network based on reconstructed MODIS optical products over the Tibetan Plateau

    Science.gov (United States)

    Cui, Yaokui; Long, Di; Hong, Yang; Zeng, Chao; Zhou, Jie; Han, Zhongying; Liu, Ronghua; Wan, Wei

    2016-12-01

    Soil moisture is a key variable in the exchange of water and energy between the land surface and the atmosphere, especially over the Tibetan Plateau (TP), which is climatically and hydrologically sensitive as the Earth's 'third pole'. Large-scale, spatially consistent, and temporally continuous soil moisture datasets are of great importance to meteorological and hydrological applications, such as weather forecasting and drought monitoring. The Fengyun-3B Microwave Radiation Imager (FY-3B/MWRI) soil moisture product is a relatively new passive microwave product, the satellite having been launched on November 5, 2010. This study validates and reconstructs FY-3B/MWRI soil moisture across the TP. First, the validation is performed using in situ measurements within two in situ soil moisture measurement networks (1° × 1° and 0.25° × 0.25°), and the product is also compared with the Essential Climate Variable (ECV) soil moisture product, which merges multiple active and passive satellite soil moisture products using new merging procedures. Results show that the ascending FY-3B/MWRI product outperforms the descending product. The ascending FY-3B/MWRI product has almost the same correlation with the in situ measurements as the ECV product. The ascending FY-3B/MWRI product performs better than the ECV product in the frozen season and under lower-NDVI conditions. When the NDVI is higher in the unfrozen season, uncertainty in the ascending FY-3B/MWRI product increases with increasing NDVI, but it can still capture the variability in soil moisture. Second, the FY-3B/MWRI soil moisture product is reconstructed using a back-propagation neural network (BP-NN) based on reconstructed MODIS products, i.e., LST, NDVI, and albedo. The reconstruction method not only considers the relationship between soil moisture and NDVI, LST, and albedo, but also the relationship between soil moisture and its four-dimensional variations.
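
    The BP-NN used for the reconstruction corresponds to a standard multilayer perceptron trained by backpropagation. A hedged sketch with synthetic data (the scaling, layer sizes, and the LST/NDVI/albedo relationship below are made up for illustration):

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      # Synthetic stand-in for the training set: rows are grid cells/dates
      # with scaled MODIS predictors; targets mimic MWRI soil moisture.
      rng = np.random.default_rng(0)
      X = rng.random((1000, 3))  # columns: LST, NDVI, albedo (all scaled)
      y = 0.3 - 0.2 * X[:, 0] + 0.1 * X[:, 1] + 0.05 * rng.standard_normal(1000)

      model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                           random_state=0).fit(X, y)
      print(model.predict(X[:5]))  # would fill gaps in the real product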

  3. Sparse reconstruction methods in x-ray CT

    Science.gov (United States)

    Abascal, J. F. P. J.; Abella, M.; Mory, C.; Ducros, N.; de Molina, C.; Marinetto, E.; Peyrin, F.; Desco, M.

    2017-10-01

    Recent progress in X-ray CT is contributing to the advent of new clinical applications. A common challenge for these applications is the need for new image reconstruction methods that meet tight constraints in radiation dose and geometrical limitations in the acquisition. Recent developments in sparse reconstruction methods provide a framework that permits obtaining good-quality images from drastically reduced signal-to-noise ratio and limited-view data. In this work, we present our contributions in this field. For dynamic studies (3D+time), we explored the possibility of extending the exploitation of sparsity to the temporal dimension: a temporal operator based on modelling motion between consecutive temporal points in gated CT, and one based on experimental time curves in contrast-enhanced CT. In these cases, we also exploited sparsity by using a prior image estimated from the complete acquired dataset and assessed the effect of different sparsity operators on image quality. For limited-view CT, we evaluated total-variation regularization in different simulated limited-data scenarios from a real small-animal acquisition with a cone-beam micro-CT scanner, considering different angular spans and numbers of projections. For other emerging imaging modalities, such as spectral CT, the image reconstruction problem is nonlinear, so we explored new efficient approaches to exploit sparsity for multi-energy CT data. In conclusion, we review our approaches to challenging CT data reconstruction problems and show results that support their feasibility for new clinical applications.

  4. An Assessment of Iterative Reconstruction Methods for Sparse Ultrasound Imaging.

    Science.gov (United States)

    Valente, Solivan A; Zibetti, Marcelo V W; Pipa, Daniel R; Maia, Joaquim M; Schneider, Fabio K

    2017-03-08

    Ultrasonic image reconstruction using inverse problems has recently appeared as an alternative to beamforming methods for enhancing ultrasound imaging. This approach depends on the accuracy of the acquisition model used to represent transducers, reflectivity, and medium physics. Iterative methods, well known in general sparse signal reconstruction, are also suited for imaging. In this paper, a discrete acquisition model is assessed by solving a linear system of equations by an ℓ1-regularized least-squares minimization, where the solution sparsity may be adjusted as desired. The paper surveys 11 variants of four well-known algorithms for sparse reconstruction and assesses their optimization parameters with the goal of finding the best approach for iterative ultrasound imaging. The strategy for the model evaluation consists of using two distinct datasets. We first generate data from a synthetic phantom that mimics real targets inside a professional ultrasound phantom device. This dataset is contaminated with Gaussian noise with an estimated SNR, and all methods are assessed by their resulting images and performances. The model and methods are then assessed with real data collected by a research ultrasound platform when scanning the same phantom device, and results are compared with beamforming. A distinct real dataset is finally used to further validate the proposed modeling. Although iterative methods require high computational effort, results show that the discrete model may lead to images closer to the ground truth than traditional beamforming. However, the computing capabilities of current platforms need to evolve before the frame rates currently delivered by ultrasound equipment are achievable.

  5. Efficient ghost cell reconstruction for embedded boundary methods

    Science.gov (United States)

    Rapaka, Narsimha; Al-Marouf, Mohamad; Samtaney, Ravi

    2016-11-01

    A non-iterative linear reconstruction procedure for Cartesian grid embedded boundary methods is introduced. The method exploits the inherent geometrical advantage of the Cartesian grid and employs batch sorting of the ghost cells to eliminate the need for an iterative solution procedure. This reduces the computational cost of the reconstruction procedure significantly, especially for large-scale problems in a parallel environment that have significant communication overhead, e.g., patch-based adaptive mesh refinement (AMR) methods. In this approach, prior computation and storage of the weighting coefficients for the neighbour cells is not required, which is particularly attractive for moving boundary problems and memory-intensive stationary boundary problems. The method utilizes a compact and unique interpolation stencil while also providing second-order spatial accuracy. It provides a single-step, direct reconstruction for the ghost cells that enforces the boundary conditions on the embedded boundary. The method is extendable to higher-order interpolations as well. Examples that demonstrate the advantages of the present approach are presented. Supported by the KAUST Office of Competitive Research Funds under Award No. URF/1/1394-01.

  6. Reconstruction of chalk pore networks from 2D backscatter electron micrographs using a simulated annealing technique

    Energy Technology Data Exchange (ETDEWEB)

    Talukdar, M.S.; Torsaeter, O. [Department of Petroleum Engineering and Applied Geophysics, Norwegian University of Science and Technology, Trondheim (Norway)

    2002-05-01

    We report the stochastic reconstruction of chalk pore networks from limited morphological information that may be readily extracted from 2D backscatter electron (BSE) images of the pore space. The reconstruction technique employs a simulated annealing (SA) algorithm, which can be constrained by an arbitrary number of morphological descriptors. Backscatter electron images of a high-porosity North Sea chalk sample are analyzed and the morphological descriptors of the pore space are determined. The descriptors considered are the void-phase two-point probability function and the lineal path function, computed with or without the application of periodic boundary conditions (PBC). 2D and 3D samples have been reconstructed with different combinations of the descriptors, and the reconstructed pore networks have been analyzed quantitatively to evaluate the quality of the reconstructions. The results demonstrate that the simulated annealing technique may be used to reconstruct chalk pore networks with reasonable accuracy using the void-phase two-point probability function and/or the void-phase lineal path function. The void-phase two-point probability function produces slightly better reconstructions than the void-phase lineal path function. Imposing the void-phase lineal path function results in a slight improvement over what is achieved by using the void-phase two-point probability function as the only constraint. The application of periodic boundary conditions appears not to be critically important when reasonably large samples are reconstructed.
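
    The core of such a reconstruction is easy to sketch: flip voxels, recompute the descriptor, and accept or reject with a cooling temperature. The toy NumPy version below matches only the void-phase two-point probability function along one axis and, unlike production codes, does not conserve porosity by swapping voxel pairs.

      import numpy as np

      def s2_x(img, max_lag):
          # Void-phase two-point probability along x (periodic boundaries)
          return np.array([np.mean(img * np.roll(img, r, axis=1))
                           for r in range(max_lag)])

      def anneal(target, shape, porosity, max_lag=8, steps=20000, t0=1e-3):
          rng = np.random.default_rng(0)
          img = (rng.random(shape) < porosity).astype(float)
          energy = np.sum((s2_x(img, max_lag) - target) ** 2)
          for k in range(steps):
              i, j = rng.integers(shape[0]), rng.integers(shape[1])
              img[i, j] = 1.0 - img[i, j]              # trial flip
              e_new = np.sum((s2_x(img, max_lag) - target) ** 2)
              t = t0 * (1.0 - k / steps)               # linear cooling
              if e_new < energy or rng.random() < np.exp(-(e_new - energy) / t):
                  energy = e_new                       # accept
              else:
                  img[i, j] = 1.0 - img[i, j]          # reject: undo flip
          return img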

  7. Mouse obesity network reconstruction with a variational Bayes algorithm to employ aggressive false positive control

    Directory of Open Access Journals (Sweden)

    Logsdon Benjamin A

    2012-04-01

    Background: We propose a novel variational Bayes network reconstruction algorithm to extract the most relevant disease factors from high-throughput genomic data sets. Our algorithm is the only scalable method for regularized network recovery that employs Bayesian model averaging and that can internally estimate an appropriate level of sparsity to ensure that few false positives enter the model, without the need for cross-validation or a model selection criterion. We use our algorithm to characterize the effect of genetic markers and liver gene expression traits on mouse obesity-related phenotypes, including weight, cholesterol, glucose, and free fatty acid levels, in an experiment previously used for discovery and validation of network connections: an F2 intercross between the C57BL/6J and C3H/HeJ mouse strains, where apolipoprotein E is null on the background. Results: We identified eleven genes, Gch1, Zfp69, Dlgap1, Gna14, Yy1, Gabarapl1, Folr2, Fdft1, Cnr2, Slc24a3, and Ccl19, and a quantitative trait locus directly connected to weight, glucose, cholesterol, or free fatty acid levels in our network. None of these genes were identified by other network analyses of this mouse intercross data set, but all have been previously associated with obesity or related pathologies in independent studies. In addition, through both simulations and data analysis, we demonstrate that our algorithm achieves superior performance in terms of power and type I error control compared with other network recovery algorithms that use the lasso and have bounds on type I error control. Conclusions: Our final network contains 118 previously associated and novel genes affecting weight, cholesterol, glucose, and free fatty acid levels that are excellent obesity risk candidates.

  8. Technical Note: Evaluation of pre-reconstruction interpolation methods for iterative reconstruction of radial k-space data.

    Science.gov (United States)

    Tian, Ye; Erb, Kay Condie; Adluru, Ganesh; Likhite, Devavrat; Pedgaonkar, Apoorva; Blatt, Michael; Kamesh Iyer, Srikant; Roberts, John; DiBella, Edward

    2017-08-01

    To evaluate the use of three different pre-reconstruction interpolation methods to convert non-Cartesian k-space data to Cartesian samples such that iterative reconstructions can be performed more simply and more rapidly. Phantom and cardiac perfusion radial datasets were reconstructed by four different methods. Three of the methods used pre-reconstruction interpolation once, followed by a fast Fourier transform (FFT) at each iteration. The methods were: bilinear interpolation of nearest-neighbor points (BINN), 3-point interpolation, and a multi-coil interpolator called GRAPPA Operator Gridding (GROG). The fourth method performed a full non-uniform FFT (NUFFT) at each iteration. An iterative reconstruction with spatiotemporal total variation constraints was used with each method. Differences in the images were quantified and compared. The GROG multicoil interpolation, the 3-point interpolation, and the NUFFT-at-each-iteration approaches produced high-quality images compared to BINN, with the GROG-derived images having the fewest streaks among the three preinterpolation approaches. However, all reconstruction methods produced approximately equal results when applied to perfusion quantitation tasks. Pre-reconstruction interpolation gave approximately an 83% reduction in reconstruction time. Image quality suffers little from using a pre-reconstruction interpolation approach compared to the more accurate NUFFT-based approach. GROG-based pre-reconstruction interpolation appears to offer the best compromise by using multicoil information to perform the interpolation to Cartesian sample points prior to image reconstruction. Speed gains depend on the implementation, and relatively standard optimizations on a MATLAB platform result in preinterpolation speedups of ~6 compared to using NUFFT at every iteration, reducing the reconstruction time from around 42 min to 7 min. © 2017 American Association of Physicists in Medicine.
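
    The essence of pre-reconstruction interpolation is to spread each non-Cartesian sample onto nearby Cartesian grid points once, so that every subsequent iteration only needs an FFT. A hypothetical bilinear (BINN-like) gridding sketch:

      import numpy as np

      def grid_bilinear(kx, ky, data, n):
          # Spread non-Cartesian samples onto an n x n Cartesian grid with
          # bilinear weights; kx, ky are coordinates in grid units within
          # [0, n-1), data holds the complex k-space samples.
          grid = np.zeros((n, n), dtype=complex)
          weight = np.zeros((n, n))
          ix, iy = np.floor(kx).astype(int), np.floor(ky).astype(int)
          fx, fy = kx - ix, ky - iy
          for dx, dy, w in [(0, 0, (1 - fx) * (1 - fy)),
                            (1, 0, fx * (1 - fy)),
                            (0, 1, (1 - fx) * fy),
                            (1, 1, fx * fy)]:
              np.add.at(grid, (iy + dy, ix + dx), w * data)
              np.add.at(weight, (iy + dy, ix + dx), w)
          grid[weight > 0] /= weight[weight > 0]
          return grid  # afterwards each iteration only needs an FFT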

  9. Deep learning methods to guide CT image reconstruction and reduce metal artifacts

    Science.gov (United States)

    Gjesteby, Lars; Yang, Qingsong; Xi, Yan; Zhou, Ye; Zhang, Junping; Wang, Ge

    2017-03-01

    The rapidly-rising field of machine learning, including deep learning, has inspired applications across many disciplines. In medical imaging, deep learning has been primarily used for image processing and analysis. In this paper, we integrate a convolutional neural network (CNN) into the computed tomography (CT) image reconstruction process. Our first task is to monitor the quality of CT images during iterative reconstruction and decide when to stop the process according to an intelligent numerical observer instead of using a traditional stopping rule, such as a fixed error threshold or a maximum number of iterations. After training on ground truth images, the CNN was successful in guiding an iterative reconstruction process to yield high-quality images. Our second task is to improve a sinogram to correct for artifacts caused by metal objects. A large number of interpolation and normalization-based schemes were introduced for metal artifact reduction (MAR) over the past four decades. The NMAR algorithm is considered a state-of-the-art method, although residual errors often remain in the reconstructed images, especially in cases of multiple metal objects. Here we merge NMAR with deep learning in the projection domain to achieve additional correction in critical image regions. Our results indicate that deep learning can be a viable tool to address CT reconstruction challenges.

  10. Reconstruction method for inversion problems in an acoustic tomography based temperature distribution measurement

    Science.gov (United States)

    Liu, Sha; Liu, Shi; Tong, Guowei

    2017-11-01

    In industrial settings, temperature distribution information provides powerful data support for improving system efficiency, reducing pollutant emission, ensuring safe operation, etc. As a noninvasive measurement technology, acoustic tomography (AT) has been widely used to measure temperature distributions, where the efficiency of the reconstruction algorithm is crucial for the reliability of the measurement results. Different from traditional reconstruction techniques, in this paper a two-phase reconstruction method is proposed to improve the reconstruction accuracy (RA). In the first phase, the measurement domain is discretized by a coarse square grid to reduce the number of unknown variables and thereby mitigate the ill-posed nature of the AT inverse problem. Taking into consideration the inaccuracy of the measured time-of-flight data, a new cost function is constructed to improve the robustness of the estimation, and a grey wolf optimizer is used to solve the proposed cost function to obtain the temperature distribution on the coarse grid. In the second phase, an Adaboost.RT-based BP neural network algorithm is developed to predict the temperature distribution on the refined grid in accordance with the temperature distribution data estimated in the first phase. Numerical simulations and experimental measurement results validate the superiority of the proposed reconstruction algorithm in improving robustness and RA.

  11. Analysis of Interpolation Methods in the Image Reconstruction Tasks

    Directory of Open Access Journals (Sweden)

    V. T. Nguyen

    2017-01-01

    The article studies the interpolation methods used for image reconstruction. These methods were implemented and tested with several images to estimate their effectiveness. The considered interpolation methods are the nearest-neighbor method, the linear method, the cubic B-spline method, the cubic convolution method, and the Lanczos method. For each method, an interpolation kernel (interpolation function) and a frequency response (Fourier transform) are presented. As a result of the experiment, the following conclusions were drawn:
    - The nearest-neighbor algorithm is very simple and often used. With this method, the reconstructed images contain artifacts (blurring and haloing).
    - The linear method is quick and easy to perform. It also reduces some visual distortion caused by changing the image size. Despite these advantages, the method causes a large number of interpolation artifacts, such as blurring and haloing.
    - The cubic B-spline method provides smoothness of the reconstructed images and eliminates the apparent ramp phenomenon. But the interpolation process applies a low-pass filter and suppresses the high-frequency component, which leads to fuzzy edges and false artificial traces.
    - The cubic convolution method offers interpolation with less distortion. But its algorithm is more complicated, and more execution time is required compared to the nearest-neighbor and linear methods.
    - The Lanczos method allows a high-definition image to be achieved. In spite of this great advantage, the method requires more execution time than the other interpolation methods.
    The results obtained not only compare the considered interpolation methods in various aspects, but also enable users to select an appropriate interpolation method for their applications. It is advisable to study the existing methods further and to develop new ones.
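
    The kernels named above are standard and can be written down directly; a NumPy sketch (Keys' cubic convolution with the common choice a = -0.5, and a Lanczos-3 window) is given below for reference.

      import numpy as np

      def kernel_nearest(x):
          return np.where(np.abs(x) <= 0.5, 1.0, 0.0)

      def kernel_linear(x):
          ax = np.abs(x)
          return np.where(ax < 1.0, 1.0 - ax, 0.0)

      def kernel_cubic_convolution(x, a=-0.5):
          # Keys' cubic convolution kernel with the common choice a = -0.5
          ax = np.abs(x)
          inner = (a + 2) * ax ** 3 - (a + 3) * ax ** 2 + 1
          outer = a * ax ** 3 - 5 * a * ax ** 2 + 8 * a * ax - 4 * a
          return np.where(ax < 1.0, inner, np.where(ax < 2.0, outer, 0.0))

      def kernel_lanczos(x, a=3):
          # Windowed sinc: sinc(x) * sinc(x/a) on |x| < a (np.sinc is the
          # normalized sinc, sin(pi x)/(pi x))
          return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)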

  12. A two-way regularization method for MEG source reconstruction

    KAUST Repository

    Tian, Tian Siva

    2012-09-01

    The MEG inverse problem refers to the reconstruction of the neural activity of the brain from magnetoencephalography (MEG) measurements. We propose a two-way regularization (TWR) method to solve the MEG inverse problem under the assumptions that only a small number of locations in space are responsible for the measured signals (focality), and each source time course is smooth in time (smoothness). The focality and smoothness of the reconstructed signals are ensured respectively by imposing a sparsity-inducing penalty and a roughness penalty in the data fitting criterion. A two-stage algorithm is developed for fast computation, where a raw estimate of the source time course is obtained in the first stage and then refined in the second stage by the two-way regularization. The proposed method is shown to be effective on both synthetic and real-world examples. © Institute of Mathematical Statistics, 2012.
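
    In generic notation (ours, not necessarily the authors'), the two penalties combine into a single data-fitting criterion of the form

        \min_{S} \; \|Y - XS\|_F^2 \;+\; \lambda_1 \sum_i \|S_{i\cdot}\|_1 \;+\; \lambda_2 \|S D^{\mathsf{T}}\|_F^2,

    where Y holds the MEG measurements, X is the lead-field matrix, S the source time courses, the ℓ1 term induces spatial focality, and D is a temporal first-difference operator whose penalty enforces smoothness of each time course.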

  13. High time resolution reconstruction of electron temperature profiles with a neural network in C-2U

    Science.gov (United States)

    Player, Gabriel; Magee, Richard; Trask, Erik; Korepanov, Sergey; Clary, Ryan; Tri Alpha Energy Team

    2017-10-01

    One of the most important parameters governing fast-ion dynamics in a plasma is the electron temperature, as the fast ion-electron collision rate scales as ν_{ei} ∝ T_e^{-3/2}. Unfortunately, the electron temperature is difficult to measure directly: methods relying on high-powered laser pulses or fragile probes lead to limited time resolution or to measurements restricted to the edge. In order to rectify the lack of time resolution in the core Thomson scattering data, a learning algorithm, specifically a neural network, was implemented. This network uses 3 hidden layers to correlate information from nearly 250 signals, including magnetics, interferometers, and several arrays of bolometers, with Thomson scattering data over the entire C-2U database, totalling nearly 20,000 samples. The network uses the Levenberg-Marquardt algorithm with Bayesian regularization to learn, from the large number of samples and inputs, how to accurately reconstruct the entire electron temperature time history at a resolution of 500 kHz, a huge improvement over the 2 time points per shot provided by Thomson scattering. These results can be used in many different types of analysis and plasma characterization; in this work, we use the network to quantify electron heating.

  14. Bayesian Models for Streamflow and River Network Reconstruction using Tree Rings

    Science.gov (United States)

    Ravindranath, A.; Devineni, N.

    2016-12-01

    Water systems face non-stationary, dynamically shifting risks due to shifting societal conditions and systematic long-term variations in climate manifesting as quasi-periodic behavior on multi-decadal time scales. Water systems are thus vulnerable to long periods of wet or dry hydroclimatic conditions. Streamflow is a major component of water systems and a primary means by which water is transported to serve ecosystems' and human needs. Thus, our concern is in understanding streamflow variability. Climate variability and impacts on water resources are crucial factors affecting streamflow, and multi-scale variability increases risk to water sustainability and systems. Dam operations are necessary for collecting water brought by streamflow while maintaining downstream ecological health. Rules governing dam operations are based on streamflow records that are woefully short compared to periods of systematic variation present in the climatic factors driving streamflow variability and non-stationarity. We use hierarchical Bayesian regression methods in order to reconstruct paleo-streamflow records for dams within a basin using paleoclimate proxies (e.g. tree rings) to guide the reconstructions. The riverine flow network for the entire basin is subsequently modeled hierarchically using feeder stream and tributary flows. This is a starting point in analyzing streamflow variability and risks to water systems, and developing a scientifically-informed dynamic risk management framework for formulating dam operations and water policies to best hedge such risks. We will apply this work to the Missouri and Delaware River Basins (DRB). Preliminary results of streamflow reconstructions for eight dams in the upper DRB using standard Gaussian regression with regional tree ring chronologies give streamflow records that now span two to two and a half centuries, and modestly smoothed versions of these reconstructed flows indicate physically-justifiable trends in the time series.

  15. Sensor Network Data Fusion Methods

    Directory of Open Access Journals (Sweden)

    Martynas Vervečka

    2011-03-01

    Sensor network data fusion is widely used in warfare, in areas such as automatic target recognition, battlefield surveillance, automatic vehicle control, and multiple-target surveillance. Non-military examples include medical equipment status monitoring and intelligent homes. The paper describes sensor network topologies, the advantages of sensor networks over isolated sensors, and the most common network topologies with their advantages and disadvantages. Article in Lithuanian.

  16. Muscle Activity Map Reconstruction from High Density Surface EMG Signals With Missing Channels Using Image Inpainting and Surface Reconstruction Methods.

    Science.gov (United States)

    Ghaderi, Parviz; Marateb, Hamid R

    2017-07-01

    The aim of this study was to reconstruct low-quality high-density surface EMG (HDsEMG) signals, recorded with 2-D electrode arrays, using image inpainting and surface reconstruction methods. It is common that some fraction of the electrodes provides low-quality signals. We used a variety of image inpainting methods based on partial differential equations (PDEs), as well as surface reconstruction methods, to reconstruct the time-averaged or instantaneous muscle activity maps of those outlier channels. Two novel reconstruction algorithms were also proposed. HDsEMG signals were recorded from the biceps femoris and biceps brachii muscles during low-to-moderate-level isometric contractions, and some of the channels (5-25%) were randomly marked as outliers. The root-mean-square error (RMSE) between the original and reconstructed maps was then calculated. Overall, the proposed Poisson and wave PDE methods outperformed the other methods (average RMSE 8.7 μVrms ± 6.1 μVrms and 7.5 μVrms ± 5.9 μVrms) for time-averaged single-differential and monopolar map reconstruction, respectively. Biharmonic spline, the discrete cosine transform, and the Poisson PDE outperformed the other methods for instantaneous map reconstruction. The running times of the proposed Poisson and wave PDE methods, implemented using a vectorization package, were 4.6 ± 5.7 ms and 0.6 ± 0.5 ms, respectively, for each signal epoch or time sample in each channel. The proposed reconstruction algorithms could be promising new tools for reconstructing muscle activity maps in real-time applications. Proper reconstruction methods could recover the information of low-quality recorded channels in HDsEMG signals.
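
    Of the PDE-based inpainting families mentioned, the harmonic (Laplace) variant is the simplest to sketch: each missing channel relaxes to the average of its neighbours. A toy NumPy version (periodic boundaries via np.roll, for brevity; not the paper's Poisson or wave PDE methods):

      import numpy as np

      def inpaint_laplace(act_map, missing, n_iter=2000):
          # Harmonic inpainting: Jacobi iterations drive each missing channel
          # to the mean of its four neighbours (discrete Laplace equation).
          # act_map: 2-D float array of channel amplitudes;
          # missing: boolean mask of outlier channels to reconstruct.
          u = act_map.copy()
          u[missing] = np.mean(act_map[~missing])  # initial guess
          for _ in range(n_iter):
              nb = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                    + np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
              u[missing] = nb[missing]
          return u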

  17. Reconstruction Methods for Inverse Problems with Partial Data

    DEFF Research Database (Denmark)

    Hoffmann, Kristoffer

    …Impedance Tomography, and Ultrasound Modulated Electrical Impedance Tomography. After giving an introduction to hybrid inverse problems in impedance tomography and the mathematical tools that facilitate the related analysis, we explain in detail the stability properties associated with the classification in the case of a non-elliptic problem. To conduct a numerical analysis, we develop four iterative reconstruction methods using the Picard and Newton iterative schemes; this unified approach to the reconstruction problem encompasses several algorithms suggested in the literature. The algorithms are implemented numerically in two dimensions, and the properties of the algorithms and their implementations are investigated theoretically. Novel numerical results are presented for both the full and partial data problems, showing similarities and differences between the proposed algorithms.

  18. Reconstruction of the Sunspot Group Number: The Backbone Method

    Science.gov (United States)

    Svalgaard, Leif; Schatten, Kenneth H.

    2016-11-01

    We have reconstructed the sunspot-group count, not by comparison with other reconstructions and correcting those where they were deemed to be deficient, but by a re-assessment of original sources. The resulting series is a pure solar index and does not rely on input from other proxies, e.g. radionuclides, auroral sightings, or geomagnetic records. "Backboning" the data sets, our chosen method, provides substance and rigidity by using long-time observers as a stiffness backbone. Solar activity, as defined by the Group Number, appears to reach and sustain the same level for extended intervals of time in each of the last three centuries since 1700, and the past several decades do not seem to have been exceptionally active, contrary to what is often claimed.

  19. A New Method for Coronal Magnetic Field Reconstruction

    Science.gov (United States)

    Yi, Sibaek; Choe, Gwang-Son; Cho, Kyung-Suk; Kim, Kap-Sung

    2017-08-01

    A precise way of reconstructing (extrapolating) the coronal magnetic field is an indispensable tool for understanding various solar activities. A variety of reconstruction codes have been developed and are available to researchers nowadays, but each more or less bears its own shortcomings. In this paper, a new efficient method for coronal magnetic field reconstruction is presented. The method imposes only the normal components of the magnetic field and current density at the bottom boundary to avoid overspecification of the reconstruction problem, and employs vector potentials to guarantee divergence-freeness. In our method, the normal component of the current density is imposed not by adjusting the tangential components of A, but by adjusting its normal component. This allows us to avoid a possible numerical instability that on and off arises in codes using A. In real reconstruction problems, information on the lateral and top boundaries is absent. The arbitrariness of the boundary conditions imposed there, as well as various preprocessing, brings about a diversity of resulting solutions. We impose the source-surface condition at the top boundary to accommodate flux imbalance, which always shows up in magnetograms. To enhance the convergence rate, we equip our code with a gradient-method-type accelerator. Our code is tested on two analytical force-free solutions. When the solution is given only at the bottom boundary, our result surpasses competitors in most figures of merit devised by Schrijver et al. (2006). We have also applied our code to the real active region NOAA 11974, in which two M-class flares and a halo CME took place. EUV observations show the sudden appearance of an erupting loop before the first flare. Our numerical solutions show that two entwining flux tubes exist before the flare and that their entanglement is released after the CME, with one of them opened up. We suggest that the erupting loop is created by magnetic reconnection between the two entwining flux tubes.

  20. Comparing 3D virtual methods for hemimandibular body reconstruction.

    Science.gov (United States)

    Benazzi, Stefano; Fiorenza, Luca; Kozakowski, Stephanie; Kullmer, Ottmar

    2011-07-01

    Reconstruction of fractured, distorted, or missing parts of the human skeleton presents an equal challenge in the fields of paleoanthropology, bioarcheology, forensics, and medicine. This is particularly important within disciplines such as orthodontics and surgery, when dealing with mandibular defects due to tumors, developmental abnormalities, or trauma. In such cases, proper restoration of both form (for esthetic purposes) and function (restoration of articulation, occlusion, and mastication) is required. Several digital approaches based on three-dimensional (3D) digital modeling, computer-aided design (CAD)/computer-aided manufacturing techniques, and more recently geometric morphometric methods have been used to solve this problem. Nevertheless, comparisons among their outcomes are rarely provided. In this contribution, three methods for hemimandibular body reconstruction have been tested. Two bone defects were virtually simulated in a 3D digital model of a human hemimandible. Accordingly, 3D digital scaffolds were obtained using the mirror copy of the unaffected hemimandible (Method 1), thin plate spline (TPS) interpolation (Method 2), and the combination of TPS and CAD techniques (Method 3). The mirror copy of the unaffected hemimandible does not provide a suitable solution for bone restoration. The combination of TPS interpolation and CAD techniques (Method 3) produces an almost perfectly fitting 3D digital model that can be used for biocompatible custom-made scaffolds generated by rapid prototyping technologies. Copyright © 2011 Wiley-Liss, Inc.
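
    The TPS step of Methods 2 and 3 can be sketched with SciPy's thin plate spline interpolator: homologous landmarks define a smooth warp that carries a donor surface into the defect region. The landmark coordinates and the toy deformation below are invented for illustration.

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      # Invented homologous landmarks (mm): src on the donor (mirrored)
      # side, dst around the defect; the constant offset stands in for a
      # real anatomical deformation.
      src = np.array([[0., 0., 0.], [10., 0., 0.], [0., 12., 0.],
                      [0., 0., 8.], [9., 11., 2.], [4., 6., 7.]])
      dst = src + np.array([0.5, -0.3, 0.2])

      # Thin plate spline warp: one smooth interpolant per coordinate
      warp = RBFInterpolator(src, dst, kernel="thin_plate_spline")

      # Map donor surface-mesh vertices into the defect region
      vertices = np.array([[2., 3., 1.], [7., 5., 4.]])
      print(warp(vertices))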

  1. A Method for 3D Histopathology Reconstruction Supporting Mouse Microvasculature Analysis.

    Science.gov (United States)

    Xu, Yiwen; Pickering, J Geoffrey; Nong, Zengxuan; Gibson, Eli; Arpino, John-Michael; Yin, Hao; Ward, Aaron D

    2015-01-01

    Structural abnormalities of the microvasculature can impair perfusion and function. Conventional histology provides good spatial resolution with which to evaluate the microvascular structure but affords no 3-dimensional information; this limitation could lead to misinterpretations of the complex microvessel network in health and disease. The objective of this study was to develop and evaluate an accurate, fully automated 3D histology reconstruction method to visualize the arterioles and venules within the mouse hind-limb. Sections of the tibialis anterior muscle from C57BL/J6 mice (both normal and subjected to femoral artery excision) were reconstructed using pairwise rigid and affine registrations of 5 µm-thick, paraffin-embedded serial sections digitized at 0.25 µm/pixel. Low-resolution intensity-based rigid registration was used to initialize both the nucleus landmark-based registration and the conventional high-resolution intensity-based registration method. The affine nucleus landmark-based registration was developed in this work and was compared to the conventional affine high-resolution intensity-based registration method. Target registration errors were measured between adjacent tissue sections (pairwise error), as well as with respect to a 3D reference reconstruction (accumulated error, to capture propagation of error through the stack of sections). Accumulated error measures were lower (p < 0.01) for the nucleus landmark technique, and superior vasculature continuity was observed. These findings indicate that registration based on automatic extraction and correspondence of small, homologous landmarks may support accurate 3D histology reconstruction. This technique avoids the otherwise problematic "banana-into-cylinder" effect observed using conventional methods that optimize the pairwise alignment of salient structures, forcing them to be section-orthogonal. This approach will provide a valuable tool for high-accuracy 3D histology tissue reconstructions for analysis of diseased microvasculature.

  3. Digital 3D reconstructions using histological serial sections of lung tissue including the alveolar capillary network.

    Science.gov (United States)

    Grothausmann, Roman; Knudsen, Lars; Ochs, Matthias; Mühlfeld, Christian

    2017-02-01

    Grothausmann R, Knudsen L, Ochs M, Mühlfeld C. Digital 3D reconstructions using histological serial sections of lung tissue including the alveolar capillary network. Am J Physiol Lung Cell Mol Physiol 312: L243-L257, 2017. First published December 2, 2016; doi:10.1152/ajplung.00326.2016-The alveolar capillary network (ACN) provides an enormously large surface area that is necessary for pulmonary gas exchange. Changes of the ACN during normal or pathological development or in pulmonary diseases are of great functional impact and warrant further analysis. Due to the complexity of the three-dimensional (3D) architecture of the ACN, 2D approaches are limited in providing a comprehensive impression of the characteristics of the normal ACN or the nature of its alterations. Stereological methods offer a quantitative way to assess the ACN in 3D in terms of capillary volume, surface area, or number but lack a 3D visualization to interpret the data. Hence, the necessity arises to visualize the ACN in 3D and to correlate this visualization with quantitative data from the same samples. Such an approach requires a large sample volume combined with a high resolution. Here, we present a technically simple and cost-efficient approach to create 3D representations of lung tissue ranging from bronchioles through alveolar ducts and alveoli to the ACN, from a sample extent of more than 1 mm down to a resolution of less than 1 μm. The method is based on automated image acquisition of serially sectioned epoxy resin-embedded lung tissue fixed by vascular perfusion and subsequent automated digital reconstruction and analysis of the 3D data. This efficient method may help to better understand mechanisms of vascular development and pathology of the lung. Copyright © 2017 the American Physiological Society.

  4. Tensor-based dynamic reconstruction method for electrical capacitance tomography

    Science.gov (United States)

    Lei, J.; Mu, H. P.; Liu, Q. B.; Li, Z. H.; Liu, S.; Wang, X. Y.

    2017-03-01

    Electrical capacitance tomography (ECT) is an attractive visualization measurement method, in which the acquisition of high-quality images is beneficial for the understanding of the underlying physical or chemical mechanisms of the dynamic behaviors of the measurement objects. In real-world measurement environments, imaging objects are often in a dynamic process, and the exploitation of the spatial-temporal correlations related to the dynamic nature will contribute to improving the imaging quality. Different from existing imaging methods that are often used in ECT measurements, in this paper a dynamic image sequence is stacked into a third-order tensor that consists of a low rank tensor and a sparse tensor within the framework of the multiple measurement vectors model and the multi-way data analysis method. The low rank tensor models the similar spatial distribution information among frames, which is slowly changing over time, and the sparse tensor captures the perturbations or differences introduced in each frame, which is rapidly changing over time. With the assistance of the Tikhonov regularization theory and the tensor-based multi-way data analysis method, a new cost function, with the considerations of the multi-frames measurement data, the dynamic evolution information of a time-varying imaging object and the characteristics of the low rank tensor and the sparse tensor, is proposed to convert the imaging task in the ECT measurement into a reconstruction problem of a third-order image tensor. An effective algorithm is developed to search for the optimal solution of the proposed cost function, and the images are reconstructed via a batching pattern. The feasibility and effectiveness of the developed reconstruction method are numerically validated.
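
    The low-rank-plus-sparse idea can be illustrated on the unfolded (matricized) case, where each frame of the sequence becomes one column. The NumPy sketch below alternates a truncated SVD (low-rank step) with soft-thresholding of the residual (sparse step); the paper's method operates on a third-order tensor with Tikhonov regularization, so this is only a schematic analogue with invented data.

```python
import numpy as np

def lowrank_sparse_split(M, rank=1, lam=0.5, n_iter=50):
    """Split M ~ L + S: L low-rank (slowly varying across frames),
    S sparse (per-frame perturbations)."""
    S = np.zeros_like(M)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]            # best rank-r fit
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)   # soft-threshold
    return L, S

# 64-pixel images over 30 frames: static background + rare large changes
rng = np.random.default_rng(1)
background = np.outer(rng.normal(size=64), np.ones(30))     # rank-1 part
spikes = (rng.random((64, 30)) < 0.02) * 3.0                # sparse part
L, S = lowrank_sparse_split(background + spikes)
```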

  5. Does PET reconstruction method affect Deauville scoring in lymphoma patients?

    Science.gov (United States)

    Enilorac, Blandine; Lasnon, Charline; Nganoa, Cathy; Fruchart, Christophe; Gac, Anne Claire; Damaj, Gandhi; Aide, Nicolas

    2017-12-14

    Background: When scoring 18F-Fluorodeoxyglucose (FDG) positron emission tomography (PET) with the Deauville scale (DS), the quantification of tumor and reference organs limits the problem of optical misinterpretation. Compared to conventional reconstruction algorithms, point spread function (PSF) modeling significantly increases standardized uptake values (SUVs) in tumors but only moderately in the liver, which could affect the DS. We investigated whether the choice of the reconstruction algorithm affects the DS and whether discordances affect the capability of FDG PET to stratify lymphoma patients. Materials and Methods: Overall, 126 diffuse large B-cell Lymphoma (DLBCL) patients were included (56 females, 70 males, median (range) age: 65 (20-88) years). PET data were reconstructed with unfiltered PSF reconstruction. Additionally, a 6-mm filter was applied to PSF images to meet the European Association of Nuclear Medicine (EANM)/European Association Research Ltd (EARL) requirements (PSFEARL). One hundred interim PET (i-PET) and 95 end-of-treatment PET (EoT-PET) studies were analyzed. SUVmax in the liver and aorta were determined using automatic volumes of interest (VOIs) and compared to SUVmax of the residual mass with the highest FDG uptake. Results: For i-PET, using PSF and PSFEARL, patients were classified as responders and non-responders in 60 and 40 cases versus 63 and 37 cases, respectively. Five (5.0%) major discordances (i.e., changes from responder to non-responder) occurred. For EoT-PET, patients were classified using PSF and PSFEARL as responders and non-responders in 69 and 26 cases versus 72 and 23 cases, respectively. Three (3.2%) major discordances occurred. Concordance (Cohen's unweighted Kappa) between PSF and the PSFEARL Deauville scoring was 0.82 (95%CI: 0.73-0.91) for i-PET and 0.89 (95%CI: 0.81-0.96) for EoT-PET. The median follow-up periods were 28.4 and 27.4 months for i-PET and EoT-PET, respectively. Kaplan-Meier analysis showed
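
    For reference, the unweighted Cohen's kappa used above to quantify PSF versus PSFEARL concordance can be computed directly from the two sets of responder/non-responder calls; the toy ratings in this sketch are invented.

```python
import numpy as np

def cohens_kappa(a, b):
    """Unweighted Cohen's kappa between two categorical ratings."""
    a, b = np.asarray(a), np.asarray(b)
    cats = np.unique(np.concatenate([a, b]))
    n = len(a)
    C = np.array([[np.sum((a == r) & (b == c)) for c in cats]
                  for r in cats], dtype=float)     # confusion matrix
    po = np.trace(C) / n                           # observed agreement
    pe = (C.sum(axis=1) @ C.sum(axis=0)) / n**2    # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical responder (1) / non-responder (0) calls per patient
psf      = [1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1]
psf_earl = [1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1]
print(round(cohens_kappa(psf, psf_earl), 2))
```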

  6. Filtered Iterative Reconstruction (FIR) via Proximal Forward-Backward Splitting: A Synergy of Analytical and Iterative Reconstruction Method for CT

    CERN Document Server

    Gao, Hao

    2015-01-01

    This work is to develop a general framework, namely filtered iterative reconstruction (FIR) method, to incorporate analytical reconstruction (AR) method into iterative reconstruction (IR) method, for enhanced CT image quality. Specifically, FIR is formulated as a combination of filtered data fidelity and sparsity regularization, and then solved by proximal forward-backward splitting (PFBS) algorithm. As a result, the image reconstruction decouples data fidelity and image regularization with a two-step iterative scheme, during which an AR-projection step updates the filtered data fidelity term, while a denoising solver updates the sparsity regularization term. During the AR-projection step, the image is projected to the data domain to form the data residual, and then reconstructed by certain AR to a residual image which is in turn weighted together with previous image iterate to form next image iterate. Since the eigenvalues of AR-projection operator are close to the unity, PFBS based FIR has a fast convergenc...
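
    The two-step PFBS skeleton, a forward gradient step on the data-fidelity term followed by a backward proximal step on the sparsity regularizer, can be sketched for a plain least-squares fidelity with an l1 penalty (i.e., ISTA). FIR replaces the plain gradient step with an AR-weighted filtered-fidelity step, which this simplified sketch does not model; the test problem is invented.

```python
import numpy as np

def pfbs_l1(A, b, lam=0.1, n_iter=3000):
    """min_x 0.5 * ||Ax - b||^2 + lam * ||x||_1 via forward-backward splitting."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - b))          # forward (gradient) step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox step
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(40, 100))                      # underdetermined system
x_true = np.zeros(100)
x_true[[3, 30, 77]] = [1.0, -2.0, 1.5]
x_hat = pfbs_l1(A, A @ x_true)
print(np.flatnonzero(np.abs(x_hat) > 0.5))          # ideally [3, 30, 77]
```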

  7. Asymptotic approximation method of force reconstruction: Proof of concept

    Science.gov (United States)

    Sanchez, J.; Benaroya, H.

    2017-08-01

    An important problem in engineering is the determination of the system input based on the system response. This type of problem is difficult to solve as it is often ill-defined, and produces inaccurate or non-unique results. Current reconstruction techniques typically involve the employment of optimization methods or additional constraints to regularize the problem, but these methods are not without their flaws as they may be sub-optimally applied and produce inadequate results. An alternative approach is developed that draws upon concepts from control systems theory, the equilibrium analysis of linear dynamical systems with time-dependent inputs, and asymptotic approximation analysis. This paper presents the theoretical development of the proposed method. A simple application of the method is presented to demonstrate the procedure. A more complex application to a continuous system is performed to demonstrate the applicability of the method.

  8. Representation and Reconstruction of Triangular Irregular Networks with Vertical Walls

    NARCIS (Netherlands)

    Gorte, B.G.H.; Lesparre, J.

    2012-01-01

    Point clouds obtained by aerial laser scanning are a convenient input source for high resolution 2.5d elevation models, such as the Dutch AHN-2. More challenging is the fully automatic reconstruction of 3d city models. An actual demand for a combined 2.5d terrain and 3d city model for an urban

  9. Reconstruction and analysis of hybrid composite shells using meshless methods

    Science.gov (United States)

    Bernardo, G. M. S.; Loja, M. A. R.

    2017-06-01

    The importance of focusing on the research of viable models to predict the behaviour of structures which may possess in some cases complex geometries is an issue that is growing in different scientific areas, ranging from the civil and mechanical engineering to the architecture or biomedical devices fields. In these cases, the research effort to find an efficient approach to fit laser scanning point clouds, to the desired surface, has been increasing, leading to the possibility of modelling as-built/as-is structures and components' features. However, combining the task of surface reconstruction and the implementation of a structural analysis model is not a trivial task. Although there are works focusing those different phases in separate, there is still an effective need to find approaches able to interconnect them in an efficient way. Therefore, achieving a representative geometric model able to be subsequently submitted to a structural analysis in a similar based platform is a fundamental step to establish an effective expeditious processing workflow. With the present work, one presents an integrated methodology based on the use of meshless approaches, to reconstruct shells described by points' clouds, and to subsequently predict their static behaviour. These methods are highly appropriate on dealing with unstructured points clouds, as they do not need to have any specific spatial or geometric requirement when implemented, depending only on the distance between the points. Details on the formulation, and a set of illustrative examples focusing the reconstruction of cylindrical and double-curvature shells, and its further analysis, are presented.
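
    A minimal meshless reconstruction in this spirit fits a radial basis function interpolant to a scattered point cloud; the fit depends only on distances between points, so no mesh or connectivity is needed. The sketch below uses SciPy's RBFInterpolator on an invented cylindrical patch and does not attempt the subsequent structural analysis.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Noisy point cloud sampled from a cylindrical patch z = sqrt(1 - x^2)
rng = np.random.default_rng(3)
xy = rng.uniform(-0.8, 0.8, size=(300, 2))
z = np.sqrt(1.0 - xy[:, 0] ** 2) + rng.normal(0, 0.01, 300)

# Meshless fit: only pairwise point distances enter the interpolant
surface = RBFInterpolator(xy, z, smoothing=1e-3)

# Evaluate the reconstructed surface on a regular grid of query points
g = np.linspace(-0.8, 0.8, 21)
grid = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
z_grid = surface(grid)
```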

  10. Network Reconstruction and Systems Analysis of Cardiac Myocyte Hypertrophy Signaling*

    Science.gov (United States)

    Ryall, Karen A.; Holland, David O.; Delaney, Kyle A.; Kraeutler, Matthew J.; Parker, Audrey J.; Saucerman, Jeffrey J.

    2012-01-01

    Cardiac hypertrophy is managed by a dense web of signaling pathways with many pathways influencing myocyte growth. A quantitative understanding of the contributions of individual pathways and their interactions is needed to better understand hypertrophy signaling and to develop more effective therapies for heart failure. We developed a computational model of the cardiac myocyte hypertrophy signaling network to determine how the components and network topology lead to differential regulation of transcription factors, gene expression, and myocyte size. Our computational model of the hypertrophy signaling network contains 106 species and 193 reactions, integrating 14 established pathways regulating cardiac myocyte growth. 109 of 114 model predictions were validated using published experimental data testing the effects of receptor activation on transcription factors and myocyte phenotypic outputs. Network motif analysis revealed an enrichment of bifan and biparallel cross-talk motifs. Sensitivity analysis was used to inform clustering of the network into modules and to identify species with the greatest effects on cell growth. Many species influenced hypertrophy, but only a few nodes had large positive or negative influences. Ras, a network hub, had the greatest effect on cell area and influenced more species than any other protein in the network. We validated this model prediction in cultured cardiac myocytes. With this integrative computational model, we identified the most influential species in the cardiac hypertrophy signaling network and demonstrate how different levels of network organization affect myocyte size, transcription factors, and gene expression. PMID:23091058

  11. Reconstruction of stochastic temporal networks through diffusive arrival times

    National Research Council Canada - National Science Library

    Xun Li; Xiang Li

    2017-01-01

    We describe an efficient coordinate-ascent implementation for inferring stochastic temporal networks that builds in particular but not exclusively on the null model assumption of mutually independent...

  12. Reverse Engineering Cellular Networks with Information Theoretic Methods

    Directory of Open Access Journals (Sweden)

    Julio R. Banga

    2013-05-01

    Full Text Available Building mathematical models of cellular networks lies at the core of systems biology. It involves, among other tasks, the reconstruction of the structure of interactions between molecular components, which is known as network inference or reverse engineering. Information theory can help in the goal of extracting as much information as possible from the available data. A large number of methods founded on these concepts have been proposed in the literature, not only in biology journals, but in a wide range of areas. Their critical comparison is difficult due to the different focuses and the adoption of different terminologies. Here we attempt to review some of the existing information theoretic methodologies for network inference, and clarify their differences. While some of these methods have achieved notable success, many challenges remain, among which we can mention dealing with incomplete measurements, noisy data, counterintuitive behaviour emerging from nonlinear relations or feedback loops, and computational burden of dealing with large data sets.
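
    The basic ingredient of these methods, a plug-in estimate of the mutual information between two expression profiles, can be sketched with a 2D histogram as below; real pipelines add bias corrections, significance testing, and post-processing, and the data here are synthetic.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram (plug-in) estimate of I(X;Y) in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)             # marginal of X
    py = pxy.sum(axis=0, keepdims=True)             # marginal of Y
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(4)
x = rng.normal(size=5000)
print(mutual_information(x, x + 0.5 * rng.normal(size=5000)))  # clearly > 0
print(mutual_information(x, rng.normal(size=5000)))            # near 0
```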

  13. Dynamic Regulatory Network Reconstruction for Alzheimer’s Disease Based on Matrix Decomposition Techniques

    Directory of Open Access Journals (Sweden)

    Wei Kong

    2014-01-01

    Full Text Available Alzheimer’s disease (AD) is the most common form of dementia and leads to irreversible neurodegenerative damage of the brain. Finding the dynamic responses of genes, signaling proteins, transcription factor (TF) activities, and regulatory networks of the progressively deteriorative progress of AD would represent a significant advance in discovering the pathogenesis of AD. However, the high throughput technologies of measuring TF activities are not yet available on a genome-wide scale. In this study, based on DNA microarray gene expression data and a priori information of TFs, the network component analysis (NCA) algorithm is applied to determining the TF activities and regulatory influences on target genes (TGs) of incipient, moderate, and severe AD. Based on that, the dynamical gene regulatory networks of the deteriorative courses of AD were reconstructed. To select significant genes which are differentially expressed in different courses of AD, independent component analysis (ICA), which is better than the traditional clustering methods and can successfully group one gene in different meaningful biological processes, was used. The molecular biological analysis showed that the changes of TF activities and interactions of signaling proteins in mitosis, cell cycle, immune response, and inflammation play an important role in the deterioration of AD.

  14. Reverse optimization reconstruction method in non-null aspheric interferometry

    Science.gov (United States)

    Zhang, Lei; Liu, Dong; Shi, Tu; Yang, Yongying; Chong, Shiyao; Shen, Yibing; Bai, Jian

    2015-10-01

    Aspheric non-null test achieves more flexible measurements than the null test. However, the precision calibration for retrace error has always been difficult. A reverse optimization reconstruction (ROR) method is proposed for the retrace error calibration as well as the aspheric figure error extraction based on system modeling. An optimization function is set up with system model, in which the wavefront data from experiment is inserted as the optimization objective while the figure error under test in the model as the optimization variable. The optimization is executed by the reverse ray tracing in the system model until the test wavefront in the model is consistent with the one in experiment. At this point, the surface figure error in the model is considered to be consistent with the one in experiment. With the Zernike fitting, the aspheric surface figure error is then reconstructed in the form of Zernike polynomials. Numerical simulations verifying the high accuracy of the ROR method are presented with error considerations. A set of experiments are carried out to demonstrate the validity and repeatability of ROR method. Compared with the results of Zygo interferometer (null test), the measurement error by the ROR method achieves better than 1/10λ.
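
    The reverse-optimization loop can be caricatured as nonlinear least squares: adjust the figure-error coefficients in the system model until the modeled wavefront matches the measured one. In the sketch below, model_wavefront is a hypothetical stand-in for ray tracing through the modeled test setup, and the basis, coefficients, and noise level are invented.

```python
import numpy as np
from scipy.optimize import least_squares

def model_wavefront(coeffs, basis):
    """Stand-in for reverse ray tracing: wavefront as a function of the
    modeled figure-error coefficients (linear here only for brevity)."""
    return basis @ coeffs

rng = np.random.default_rng(5)
basis = rng.normal(size=(500, 6))       # e.g. sampled Zernike terms
true_coeffs = np.array([0.05, -0.02, 0.01, 0.0, 0.03, -0.01])
measured = model_wavefront(true_coeffs, basis) + rng.normal(0, 1e-4, 500)

fit = least_squares(lambda c: model_wavefront(c, basis) - measured,
                    x0=np.zeros(6))
print(np.round(fit.x, 3))               # recovered figure-error coefficients
```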

  15. Reconstruction of neutron spectra through neural networks; Reconstruccion de espectros de neutrones mediante redes neuronales

    Energy Technology Data Exchange (ETDEWEB)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E. [Cuerpo Academico de Radiobiologia, Estudios Nucleares, Universidad Autonoma de Zacatecas, A.P. 336, 98000 Zacatecas (Mexico)] e-mail: rvega@cantera.reduaz.mx [and others

    2003-07-01

    A neural network has been used to reconstruct neutron spectra starting from the counting rates of the detectors of a Bonner sphere spectrometer system. A group of 56 neutron spectra was selected to calculate the counting rates that they would produce in a Bonner sphere system; with these data and the spectra, the neural network was trained. To test the performance of the net, 12 spectra were used: 6 were taken from the group used for the training, 3 were obtained from mathematical functions, and the other 3 correspond to real spectra. When comparing the original spectra with those reconstructed by the net, we find that our net has a poor performance when reconstructing monoenergetic spectra; we attribute this to the characteristics of the spectra used for the training of the neural network. However, for the other groups of spectra the results of the net agree with the expected ones. (Author)
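
    A schematic version of this unfolding trains a small multilayer perceptron to map Bonner-sphere counting rates back to spectrum bins. The response matrix and training spectra below are synthetic stand-ins for the 56 spectra mentioned in the record, so only the workflow, not the physics, is represented.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
n_spheres, n_bins = 7, 20
R = np.abs(rng.normal(size=(n_spheres, n_bins)))     # stand-in response matrix

# Synthetic training spectra (smooth random shapes), normalized to unit area
spectra = np.abs(np.cumsum(rng.normal(size=(56, n_bins)), axis=1))
spectra /= spectra.sum(axis=1, keepdims=True)
rates = spectra @ R.T                                # simulated counting rates

net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0)
net.fit(rates, spectra)                              # rates -> spectrum
print(np.round(net.predict(spectra[:3] @ R.T), 3))   # reconstructed spectra
```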

  16. Probabilistic Reconstruction of Orthodox Churches from Precision Point Clouds Using Bayesian Networks and Cellular Automata

    Science.gov (United States)

    Chizhova, M.; Korovin, D.; Gurianov, A.; Brodovskii, M.; Brunn, A.; Stilla, U.; Luhmann, T.

    2017-02-01

    The interpretation of point clouds and the reconstruction of 3D buildings from them have already been treated for a few decades. There are many articles which consider the different methods and workflows of the automatic detection and reconstruction of geometrical objects from point clouds. Each method is suited to a particular object geometry or sensor type; general approaches are rare. In our work we present an algorithm which develops the optimal process sequence of the automatic search, detection and reconstruction of buildings and building components from a point cloud. It can be used for the detection of the set of geometric objects to be reconstructed, independent of their degree of destruction. In a simulated example we reconstruct a complete Russian-orthodox church starting from the set of detected structural components and reconstruct missing components with high probability.

  17. Gene Regulatory Network Reconstruction Using Conditional Mutual Information

    Directory of Open Access Journals (Sweden)

    Xiaodong Wang

    2008-06-01

    Full Text Available The inference of gene regulatory network from expression data is an important area of research that provides insight to the inner workings of a biological system. The relevance-network-based approaches provide a simple and easily-scalable solution to the understanding of interaction between genes. Up until now, most works based on relevance network focus on the discovery of direct regulation using correlation coefficient or mutual information. However, some of the more complicated interactions such as interactive regulation and coregulation are not easily detected. In this work, we propose a relevance network model for gene regulatory network inference which employs both mutual information and conditional mutual information to determine the interactions between genes. For this purpose, we propose a conditional mutual information estimator based on adaptive partitioning which allows us to condition on both discrete and continuous random variables. We provide experimental results that demonstrate that the proposed regulatory network inference algorithm can provide better performance when the target network contains coregulated and interactively regulated genes.

  18. Yeast 5 – an expanded reconstruction of the Saccharomyces cerevisiae metabolic network

    Directory of Open Access Journals (Sweden)

    Heavner Benjamin D

    2012-06-01

    Full Text Available Abstract Background Efforts to improve the computational reconstruction of the Saccharomyces cerevisiae biochemical reaction network and to refine the stoichiometrically constrained metabolic models that can be derived from such a reconstruction have continued since the first stoichiometrically constrained yeast genome scale metabolic model was published in 2003. Continuing this ongoing process, we have constructed an update to the Yeast Consensus Reconstruction, Yeast 5. The Yeast Consensus Reconstruction is a product of efforts to forge a community-based reconstruction emphasizing standards compliance and biochemical accuracy via evidence-based selection of reactions. It draws upon models published by a variety of independent research groups as well as information obtained from biochemical databases and primary literature. Results Yeast 5 refines the biochemical reactions included in the reconstruction, particularly reactions involved in sphingolipid metabolism; updates gene-reaction annotations; and emphasizes the distinction between reconstruction and stoichiometrically constrained model. Although it was not a primary goal, this update also improves the accuracy of model prediction of viability and auxotrophy phenotypes and increases the number of epistatic interactions. This update maintains an emphasis on standards compliance, unambiguous metabolite naming, and computer-readable annotations available through a structured document format. Additionally, we have developed MATLAB scripts to evaluate the model’s predictive accuracy and to demonstrate basic model applications such as simulating aerobic and anaerobic growth. These scripts, which provide an independent tool for evaluating the performance of various stoichiometrically constrained yeast metabolic models using flux balance analysis, are included as Additional files 1, 2 and 3.
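
    The flux balance analysis used by the evaluation scripts is, at bottom, a linear program: maximize an objective flux subject to steady-state mass balance S·v = 0 and flux bounds. The same idea can be sketched in Python on an invented three-reaction toy network (the paper's scripts are MATLAB/COBRA-style, not this code).

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake -> A, A -> B, B -> biomass; S is metabolites x reactions
S = np.array([
    [1, -1,  0],   # metabolite A: produced by uptake, consumed by conversion
    [0,  1, -1],   # metabolite B: produced by conversion, consumed by biomass
], dtype=float)
c = np.array([0.0, 0.0, -1.0])        # maximize biomass flux (linprog minimizes)
bounds = [(0, 10), (0, 10), (0, 10)]  # flux capacity constraints

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)                          # optimal flux distribution, here [10 10 10]
```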

  19. Biblio-MetReS: A bibliometric network reconstruction application and server

    Directory of Open Access Journals (Sweden)

    Alves Rui

    2011-10-01

    Full Text Available Abstract Background Reconstruction of genes and/or protein networks from automated analysis of the literature is one of the current targets of text mining in biomedical research. Some user-friendly tools already perform this analysis on precompiled databases of abstracts of scientific papers. Other tools allow expert users to elaborate and analyze the full content of a corpus of scientific documents. However, to our knowledge, no user friendly tool that simultaneously analyzes the latest set of scientific documents available on line and reconstructs the set of genes referenced in those documents is available. Results This article presents such a tool, Biblio-MetReS, and compares its functioning and results to those of other user-friendly applications (iHOP, STRING that are widely used. Under similar conditions, Biblio-MetReS creates networks that are comparable to those of other user friendly tools. Furthermore, analysis of full text documents provides more complete reconstructions than those that result from using only the abstract of the document. Conclusions Literature-based automated network reconstruction is still far from providing complete reconstructions of molecular networks. However, its value as an auxiliary tool is high and it will increase as standards for reporting biological entities and relationships become more widely accepted and enforced. Biblio-MetReS is an application that can be downloaded from http://metres.udl.cat/. It provides an easy to use environment for researchers to reconstruct their networks of interest from an always up to date set of scientific documents.
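
    At its simplest, this style of literature-based reconstruction reduces to counting gene co-occurrences across documents and reading the counts as weighted edges. The three-sentence corpus and gene list below are invented for illustration; Biblio-MetReS itself works on live document searches and, optionally, full text.

```python
from collections import Counter
from itertools import combinations

docs = [
    "GAL4 activates GAL1 and GAL10 transcription",
    "GAL80 represses GAL4 in the absence of galactose",
    "GAL1 and GAL10 share an upstream activating sequence",
]
genes = {"GAL1", "GAL4", "GAL10", "GAL80"}

edges = Counter()
for doc in docs:
    found = sorted(g for g in genes if g in doc.split())
    edges.update(combinations(found, 2))     # co-occurrence within one document

for (a, b), n in edges.most_common():
    print(f"{a} -- {b}: {n} co-occurrence(s)")
```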

  20. Reconstruction of floodplain sedimentation rates: a combination of methods to optimize estimates

    NARCIS (Netherlands)

    Hobo, N.; Makaske, B.; Middelkoop, H.; Wallinga, J.

    2010-01-01

    Reconstruction of overbank sedimentation rates over the past decades gives insight into floodplain dynamics, and thereby provides a basis for efficient and sustainable floodplain management. We compared the results of four independent reconstruction methods - optically stimulated luminescence (OSL)

  1. Using the reconstructed genome-scale human metabolic network to study physiology and pathology

    OpenAIRE

    Bordbar, Aarash; Palsson, Bernhard O.

    2012-01-01

    Metabolism plays a key role in many major human diseases. Generation of high-throughput omics data has ushered in a new era of systems biology. Genome-scale metabolic network reconstructions provide a platform to interpret omics data in a biochemically meaningful manner. The release of the global human metabolic network, Recon 1, in 2007 has enabled new systems biology approaches to study human physiology, pathology, and pharmacology. There are currently over 20 publications that utilize Reco...

  2. Computational methods for three-dimensional microscopy reconstruction

    CERN Document Server

    Frank, Joachim

    2014-01-01

    Approaches to the recovery of three-dimensional information on a biological object, which are often formulated or implemented initially in an intuitive way, are concisely described here based on physical models of the object and the image-formation process. Both three-dimensional electron microscopy and X-ray tomography can be captured in the same mathematical framework, leading to closely-related computational approaches, but the methodologies differ in detail and hence pose different challenges. The editors of this volume, Gabor T. Herman and Joachim Frank, are experts in the respective methodologies and present research at the forefront of biological imaging and structural biology.   Computational Methods for Three-Dimensional Microscopy Reconstruction will serve as a useful resource for scholars interested in the development of computational methods for structural biology and cell biology, particularly in the area of 3D imaging and modeling.

  3. An algebra-based method for inferring gene regulatory networks.

    Science.gov (United States)

    Vera-Licona, Paola; Jarrah, Abdul; Garcia-Puente, Luis David; McGee, John; Laubenbacher, Reinhard

    2014-03-26

    The inference of gene regulatory networks (GRNs) from experimental observations is at the heart of systems biology. This includes the inference of both the network topology and its dynamics. While there are many algorithms available to infer the network topology from experimental data, less emphasis has been placed on methods that infer network dynamics. Furthermore, since the network inference problem is typically underdetermined, it is essential to have the option of incorporating into the inference process, prior knowledge about the network, along with an effective description of the search space of dynamic models. Finally, it is also important to have an understanding of how a given inference method is affected by experimental and other noise in the data used. This paper contains a novel inference algorithm using the algebraic framework of Boolean polynomial dynamical systems (BPDS), meeting all these requirements. The algorithm takes as input time series data, including those from network perturbations, such as knock-out mutant strains and RNAi experiments. It allows for the incorporation of prior biological knowledge while being robust to significant levels of noise in the data used for inference. It uses an evolutionary algorithm for local optimization with an encoding of the mathematical models as BPDS. The BPDS framework allows an effective representation of the search space for algebraic dynamic models that improves computational performance. The algorithm is validated with both simulated and experimental microarray expression profile data. Robustness to noise is tested using a published mathematical model of the segment polarity gene network in Drosophila melanogaster. Benchmarking of the algorithm is done by comparison with a spectrum of state-of-the-art network inference methods on data from the synthetic IRMA network to demonstrate that our method has good precision and recall for the network reconstruction task, while also predicting several of the
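
    To make the idea of fitting Boolean dynamics to time series concrete, the sketch below runs a brute-force consistency search: for each gene it looks for a small regulator set whose current states determine the gene's next state across all observed transitions. This is far simpler than the paper's evolutionary search over Boolean polynomial dynamical systems, and the toy trajectory is invented.

```python
from itertools import combinations

# Invented Boolean time series; each tuple is the state of genes (x1, x2, x3)
series = [(0, 0, 1), (1, 0, 1), (1, 1, 0), (0, 1, 0), (0, 0, 1), (1, 0, 1)]

def consistent_table(target, regs):
    """Return the induced truth table if some Boolean function of `regs`
    explains every observed transition of gene `target`, else None."""
    table = {}
    for t in range(len(series) - 1):
        key = tuple(series[t][r] for r in regs)
        out = series[t + 1][target]
        if table.setdefault(key, out) != out:
            return None                      # contradiction -> reject model
    return table

for target in range(3):
    found = None
    for k in (1, 2):
        for regs in combinations(range(3), k):
            table = consistent_table(target, regs)
            if table is not None:
                found = (regs, table)
                break
        if found:
            break
    if found:
        regs, table = found
        print(f"x{target+1}(t+1) depends on {[f'x{r+1}' for r in regs]}: {table}")
```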

  4. Comparative genomic reconstruction of transcriptional networks controlling central metabolism in the Shewanella genus

    Directory of Open Access Journals (Sweden)

    Kovaleva Galina

    2011-06-01

    Full Text Available Abstract Background Genome-scale prediction of gene regulation and reconstruction of transcriptional regulatory networks in bacteria is one of the critical tasks of modern genomics. The Shewanella genus is comprised of metabolically versatile gamma-proteobacteria, whose lifestyles and natural environments are substantially different from Escherichia coli and other model bacterial species. The comparative genomics approaches and computational identification of regulatory sites are useful for the in silico reconstruction of transcriptional regulatory networks in bacteria. Results To explore conservation and variations in the Shewanella transcriptional networks we analyzed the repertoire of transcription factors and performed genomics-based reconstruction and comparative analysis of regulons in 16 Shewanella genomes. The inferred regulatory network includes 82 transcription factors and their DNA binding sites, 8 riboswitches and 6 translational attenuators. Forty five regulons were newly inferred from the genome context analysis, whereas others were propagated from previously characterized regulons in the Enterobacteria and Pseudomonas spp. Multiple variations in regulatory strategies between the Shewanella spp. and E. coli include regulon contraction and expansion (as in the case of PdhR, HexR, FadR), numerous cases of recruiting non-orthologous regulators to control equivalent pathways (e.g. PsrA for fatty acid degradation) and, conversely, orthologous regulators to control distinct pathways (e.g. TyrR, ArgR, Crp). Conclusions We tentatively defined the first reference collection of ~100 transcriptional regulons in 16 Shewanella genomes. The resulting regulatory network contains ~600 regulated genes per genome that are mostly involved in metabolism of carbohydrates, amino acids, fatty acids, vitamins, metals, and stress responses. Several reconstructed regulons including NagR for N-acetylglucosamine catabolism were experimentally validated in S

  5. Multiple Linear Regression for Reconstruction of Gene Regulatory Networks in Solving Cascade Error Problems

    Science.gov (United States)

    Zainudin, Suhaila; Arif, Shereena M.

    2017-01-01

    Gene regulatory network (GRN) reconstruction is the process of identifying regulatory gene interactions from experimental data through computational analysis. One of the main reasons for the reduced performance of previous GRN methods had been inaccurate prediction of cascade motifs. Cascade error is defined as the wrong prediction of cascade motifs, where an indirect interaction is misinterpreted as a direct interaction. Despite the active research on various GRN prediction methods, the discussion on specific methods to solve problems related to cascade errors is still lacking. In fact, the experiments conducted by the past studies were not specifically geared towards proving the ability of GRN prediction methods in avoiding the occurrences of cascade errors. Hence, this research aims to propose Multiple Linear Regression (MLR) to infer GRN from gene expression data and to avoid wrongly inferring of an indirect interaction (A → B → C) as a direct interaction (A → C). Since the number of observations of the real experiment datasets was far less than the number of predictors, some predictors were eliminated by extracting the random subnetworks from global interaction networks via an established extraction method. In addition, the experiment was extended to assess the effectiveness of MLR in dealing with cascade error by using a novel experimental procedure that had been proposed in this work. The experiment revealed that the number of cascade errors had been very minimal. Apart from that, the Belsley collinearity test proved that multicollinearity did affect the datasets used in this experiment greatly. All the tested subnetworks obtained satisfactory results, with AUROC values above 0.5. PMID:28250767

  6. Multiple Linear Regression for Reconstruction of Gene Regulatory Networks in Solving Cascade Error Problems.

    Science.gov (United States)

    Salleh, Faridah Hani Mohamed; Zainudin, Suhaila; Arif, Shereena M

    2017-01-01

    Gene regulatory network (GRN) reconstruction is the process of identifying regulatory gene interactions from experimental data through computational analysis. One of the main reasons for the reduced performance of previous GRN methods had been inaccurate prediction of cascade motifs. Cascade error is defined as the wrong prediction of cascade motifs, where an indirect interaction is misinterpreted as a direct interaction. Despite the active research on various GRN prediction methods, the discussion on specific methods to solve problems related to cascade errors is still lacking. In fact, the experiments conducted by the past studies were not specifically geared towards proving the ability of GRN prediction methods in avoiding the occurrences of cascade errors. Hence, this research aims to propose Multiple Linear Regression (MLR) to infer GRN from gene expression data and to avoid wrongly inferring of an indirect interaction (A → B → C) as a direct interaction (A → C). Since the number of observations of the real experiment datasets was far less than the number of predictors, some predictors were eliminated by extracting the random subnetworks from global interaction networks via an established extraction method. In addition, the experiment was extended to assess the effectiveness of MLR in dealing with cascade error by using a novel experimental procedure that had been proposed in this work. The experiment revealed that the number of cascade errors had been very minimal. Apart from that, the Belsley collinearity test proved that multicollinearity did affect the datasets used in this experiment greatly. All the tested subnetworks obtained satisfactory results, with AUROC values above 0.5.

  7. Multiple Linear Regression for Reconstruction of Gene Regulatory Networks in Solving Cascade Error Problems

    Directory of Open Access Journals (Sweden)

    Faridah Hani Mohamed Salleh

    2017-01-01

    Full Text Available Gene regulatory network (GRN) reconstruction is the process of identifying regulatory gene interactions from experimental data through computational analysis. One of the main reasons for the reduced performance of previous GRN methods had been inaccurate prediction of cascade motifs. Cascade error is defined as the wrong prediction of cascade motifs, where an indirect interaction is misinterpreted as a direct interaction. Despite the active research on various GRN prediction methods, the discussion on specific methods to solve problems related to cascade errors is still lacking. In fact, the experiments conducted by the past studies were not specifically geared towards proving the ability of GRN prediction methods in avoiding the occurrences of cascade errors. Hence, this research aims to propose Multiple Linear Regression (MLR) to infer GRN from gene expression data and to avoid wrongly inferring of an indirect interaction (A → B → C) as a direct interaction (A → C). Since the number of observations of the real experiment datasets was far less than the number of predictors, some predictors were eliminated by extracting the random subnetworks from global interaction networks via an established extraction method. In addition, the experiment was extended to assess the effectiveness of MLR in dealing with cascade error by using a novel experimental procedure that had been proposed in this work. The experiment revealed that the number of cascade errors had been very minimal. Apart from that, the Belsley collinearity test proved that multicollinearity did affect the datasets used in this experiment greatly. All the tested subnetworks obtained satisfactory results, with AUROC values above 0.5.
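
    The core of the MLR approach can be sketched in a few lines: regress each gene on the remaining genes and keep large coefficients as candidate direct edges. Because each regression conditions on the intermediate gene, the indirect A → C link of a cascade A → B → C receives a near-zero coefficient, which is the cascade-error behaviour discussed above. The data are synthetic, and the paper's subnetwork extraction and collinearity testing are omitted.

```python
import numpy as np

def mlr_grn(X, threshold=0.3):
    """X: (samples, genes). Regress each gene on all others; return a Boolean
    matrix whose (j, k) entry marks a candidate direct influence of k on j."""
    n_samples, n_genes = X.shape
    W = np.zeros((n_genes, n_genes))
    for j in range(n_genes):
        others = [k for k in range(n_genes) if k != j]
        A = np.column_stack([X[:, others], np.ones(n_samples)])  # + intercept
        coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        W[j, others] = coef[:-1]
    return np.abs(W) > threshold

# Cascade A -> B -> C: conditioning suppresses the spurious A -> C edge
rng = np.random.default_rng(7)
a = rng.normal(size=500)
b = 0.9 * a + 0.1 * rng.normal(size=500)
c = 0.9 * b + 0.1 * rng.normal(size=500)
print(mlr_grn(np.column_stack([a, b, c])))
```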

  8. The Convolutional Visual Network for Identification and Reconstruction of NOvA Events

    Energy Technology Data Exchange (ETDEWEB)

    Psihas, Fernanda [Indiana U.

    2017-11-22

    In 2016 the NOvA experiment released results for the observation of oscillations in the νμ and νe channels as well as νe cross section measurements using neutrinos from Fermilab’s NuMI beam. These and other measurements in progress rely on the accurate identification and reconstruction of the neutrino flavor and energy recorded by our detectors. This presentation describes the first application of convolutional neural network technology for event identification and reconstruction in particle detectors like NOvA. The Convolutional Visual Network (CVN) Algorithm was developed for identification, categorization, and reconstruction of NOvA events. It increased the selection efficiency of the νe appearance signal by 40% and studies show potential impact to the νμ disappearance analysis.

  9. CVN: A Convolutional Visual Network for Identification and Reconstruction of NOvA Events

    Science.gov (United States)

    Psihas, Fernanda; NOvA Collaboration

    2017-09-01

    In the past year, the NOvA experiment released results for the observation of neutrino oscillations in the νμ and νe channels as well as νe cross section measurements using neutrinos from Fermilab’s NuMI beam. These and other measurements in progress rely on the accurate identification and reconstruction of the neutrino flavor and energy recorded by our detectors. This presentation describes the first application of convolutional neural network technology for event identification and reconstruction in particle detectors such as NOvA. The Convolutional Visual Network (CVN) Algorithm was developed for identification, categorization, and reconstruction of NOvA events. It increased the selection efficiency of the νe appearance signal by 40% and studies show potential impact to the νμ disappearance analysis.

  10. The Convolutional Visual Network for Identification and Reconstruction of NOvA Events

    Science.gov (United States)

    Psihas, Fernanda; NOvA Collaboration

    2017-10-01

    In 2016 the NOvA experiment released results for the observation of oscillations in the νμ and νe channels as well as νe cross section measurements using neutrinos from Fermilab’s NuMI beam. These and other measurements in progress rely on the accurate identification and reconstruction of the neutrino flavor and energy recorded by our detectors. This presentation describes the first application of convolutional neural network technology for event identification and reconstruction in particle detectors like NOvA. The Convolutional Visual Network (CVN) Algorithm was developed for identification, categorization, and reconstruction of NOvA events. It increased the selection efficiency of the νe appearance signal by 40% and studies show potential impact to the νμ disappearance analysis.
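
    A minimal convolutional classifier over detector "pixel maps" conveys the flavor of the approach, though the real CVN is a much deeper, purpose-built architecture trained on simulated events. The PyTorch sketch below uses invented input dimensions and three output classes.

```python
import torch
import torch.nn as nn

# Toy stand-in for a single-view event image classifier
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 20 * 25, 3),   # hypothetical classes: nu_e CC, nu_mu CC, NC
)

x = torch.randn(8, 1, 80, 100)    # batch of 8 invented 80x100 hit maps
print(model(x).shape)             # torch.Size([8, 3]) class scores
```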

  11. Reconstructing missing daily precipitation data using regression trees and artificial neural networks

    Science.gov (United States)

    Incomplete meteorological data has been a problem in environmental modeling studies. The objective of this work was to develop a technique to reconstruct missing daily precipitation data in the central part of Chesapeake Bay Watershed using regression trees (RT) and artificial neural networks (ANN)....
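
    A minimal version of the regression-tree branch of this idea: train a tree on days where the target station reported precipitation, using nearby stations as predictors, then predict the missing days. All station data below are synthetic stand-ins.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(8)
n_days = 2000
neighbors = rng.gamma(0.4, 5.0, size=(n_days, 3))   # three nearby stations
target = (0.5 * neighbors[:, 0] + 0.3 * neighbors[:, 1]
          + rng.normal(0, 0.5, n_days))             # target station record

missing = rng.random(n_days) < 0.10                 # 10% of days unobserved
tree = DecisionTreeRegressor(max_depth=6, random_state=0)
tree.fit(neighbors[~missing], target[~missing])     # learn from complete days

reconstructed = target.copy()
reconstructed[missing] = tree.predict(neighbors[missing])  # fill the gaps
```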

  12. Overview of the neural network based technique for monitoring of road condition via reconstructed road profiles

    CSIR Research Space (South Africa)

    Ngwangwa, HM

    2008-07-01

    Full Text Available on the road and driver to assess the integrity of road and vehicle infrastructure. In this paper, vehicle vibration data are applied to an artificial neural network to reconstruct the corresponding road surface profiles. The results show that the technique...

  13. Features of the method of large-scale paleolandscape reconstructions

    Science.gov (United States)

    Nizovtsev, Vyacheslav; Erman, Natalia; Graves, Irina

    2017-04-01

    The method of paleolandscape reconstructions was tested in the key area of the basin of the Central Dubna, located at the junction of the Taldom and Sergiev Posad districts of the Moscow region. A series of maps was created showing paleoreconstructions of the original (indigenous) living environment of the initial settlers during the main time periods of the Holocene and the features of human interaction with landscapes at the early stages of economic development of the territory (in the early and middle Holocene). The sequence of these works is as follows. 1. Comprehensive analysis of topographic maps of different scales, aerial and satellite images, stock materials of geological and hydrological surveys, prospecting of peat deposits, archaeological evidence on ancient settlements, palynological and osteological analyses, and complex landscape and archaeological studies. 2. Mapping of factual material and analysis of the spatial distribution of archaeological sites. 3. Large-scale field landscape mapping (sample areas) and compilation of maps of the modern landscape structure; on this basis, edaphic properties of the main types of natural boundaries were analyzed and their resource base was determined. 4. Reconstruction of the lake-river system during the main periods of the Holocene; the boundaries of restored paleolakes were determined from the thickness and spatial extent of decay ooze deposits. 5. On the basis of the landscape-edaphic method, paleolandscape reconstructions for the main periods of the Holocene were performed; in reconstructing the original, indigenous flora, we relied on data from palynological studies conducted in the studied area or in similar landscape conditions. 6. The result was a retrospective analysis and periodization of the settlement process, economic development, and the formation of the first anthropogenically transformed landscape complexes. The reconstruction of the dynamics of the

  14. Application of Bayesian neural networks to energy reconstruction in EAS experiments for ground-based TeV astrophysics

    Science.gov (United States)

    Bai, Y.; Xu, Y.; Pan, J.; Lan, J. Q.; Gao, W. W.

    2016-07-01

    A toy detector array is designed to detect a shower generated by the interaction between a TeV cosmic ray and the atmosphere. In the present paper, the primary energies of showers detected by the detector array are reconstructed with the algorithm of Bayesian neural networks (BNNs) and with a standard method like that used in the LHAASO experiment [1], respectively. Compared to the standard method, the energy resolutions are significantly improved using the BNNs. The improvement is more pronounced for high-energy showers than for low-energy ones.

  15. A CLASS OF RECONSTRUCTED DISCONTINUOUS GALERKIN METHODS IN COMPUTATIONAL FLUID DYNAMICS

    Energy Technology Data Exchange (ETDEWEB)

    Hong Luo; Yidong Xia; Robert Nourgaliev

    2011-05-01

    A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods in finite volume methods and the accuracy of the DG methods to obtain a better numerical algorithm in computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, and contain both classical finite volume and standard DG methods as two special cases of the RDG methods, and thus allow for a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction aims to augment the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, although the least-squares reconstructed DG method provides the best performance in terms of accuracy, efficiency, and robustness.
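
    The reconstruction ingredient itself can be isolated in a small sketch: given a cell value and the values of its neighbors, a least-squares fit to the neighbor differences recovers the local solution gradient. The toy cell geometry is invented; in an RDG method the analogous step raises a linear DG polynomial to quadratic on each cell.

```python
import numpy as np

def ls_gradient(xc, uc, xn, un):
    """Least-squares gradient at a cell: minimize over g the misfit
    sum_j (u_j - u_c - g . (x_j - x_c))^2 across neighbor cells j."""
    dX = xn - xc                    # offsets to neighbor centroids, (n, dim)
    dU = un - uc                    # differences of cell values, (n,)
    g, *_ = np.linalg.lstsq(dX, dU, rcond=None)
    return g

# Cell at the origin with four neighbors sampling u(x, y) = 2x - y + 3
xc, uc = np.zeros(2), 3.0
xn = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
un = 2 * xn[:, 0] - xn[:, 1] + 3
print(ls_gradient(xc, uc, xn, un))  # approximately [ 2. -1.]
```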

  16. Sampling of temporal networks: Methods and biases

    Science.gov (United States)

    Rocha, Luis E. C.; Masuda, Naoki; Holme, Petter

    2017-11-01

    Temporal networks have been increasingly used to model a diversity of systems that evolve in time; for example, human contact structures over which dynamic processes such as epidemics take place. A fundamental aspect of real-life networks is that they are sampled within temporal and spatial frames. Furthermore, one might wish to subsample networks to reduce their size for better visualization or to perform computationally intensive simulations. The sampling method may affect the network structure and thus caution is necessary to generalize results based on samples. In this paper, we study four sampling strategies applied to a variety of real-life temporal networks. We quantify the biases generated by each sampling strategy on a number of relevant statistics such as link activity, temporal paths and epidemic spread. We find that some biases are common in a variety of networks and statistics, but one strategy, uniform sampling of nodes, shows improved performance in most scenarios. Given the particularities of temporal network data and the variety of network structures, we recommend that the choice of sampling methods be problem oriented to minimize the potential biases for the specific research questions on hand. Our results help researchers to better design network data collection protocols and to understand the limitations of sampled temporal network data.

  17. Constructing an Intelligent Patent Network Analysis Method

    OpenAIRE

    Chao-Chan Wu; Ching-Bang Yao

    2012-01-01

    Patent network analysis, an advanced method of patent analysis, is a useful tool for technology management. This method visually displays all the relationships among the patents and enables the analysts to intuitively comprehend the overview of a set of patents in the field of the technology being studied. Although patent network analysis possesses relative advantages different from traditional methods of patent analysis, it is subject to several crucial limitations. To overcome the drawbacks...

  18. Reconstruction of an engine combustion process with a neural network

    Energy Technology Data Exchange (ETDEWEB)

    Jacob, P.J.; Gu, F.; Ball, A.D. [School of Engineering, University of Manchester, Manchester (United Kingdom)

    1997-12-31

    The cylinder pressure waveform in an internal combustion engine is one of the most important parameters in describing the engine combustion process. It is used for a range of diagnostic tasks such as identification of ignition faults or mechanical wear in the cylinders. However, it is very difficult to measure this parameter directly. Nevertheless, the cylinder pressure may be inferred from other more readily obtainable parameters. In this presentation it is shown how a Radial Basis Function network, which may be regarded as a form of neural network, may be used to model the cylinder pressure as a function of the instantaneous crankshaft velocity, recorded with a simple magnetic sensor. The application of the model is demonstrated on a four cylinder DI diesel engine with data from a wide range of speed and load settings. The prediction capabilities of the model once trained are validated against measured data. (orig.) 4 refs.
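
    An RBF network of the kind described reduces to a fixed set of Gaussian centers plus linear output weights solved by least squares. The sketch below maps invented windows of instantaneous crankshaft velocity to a scalar pressure value; the real model predicts the full in-cylinder pressure waveform from measured crank speed.

```python
import numpy as np

def rbf_design(X, centers, width):
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
    return np.exp(-(d / width) ** 2)        # Gaussian basis activations

rng = np.random.default_rng(9)
speed = rng.normal(size=(400, 10))          # 10-sample crank-velocity windows
pressure = np.sin(speed.sum(axis=1)) + 0.05 * rng.normal(size=400)  # target

centers = speed[rng.choice(400, 25, replace=False)]   # data points as centers
H = rbf_design(speed, centers, width=3.0)
w, *_ = np.linalg.lstsq(H, pressure, rcond=None)      # train output weights

pred = rbf_design(speed, centers, width=3.0) @ w      # network prediction
print(float(np.mean((pred - pressure) ** 2)))         # training error
```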

  19. MO-DE-209-02: Tomosynthesis Reconstruction Methods

    Energy Technology Data Exchange (ETDEWEB)

    Mainprize, J. [Sunnybrook Health Sciences Centre, Toronto, ON (Canada)

    2016-06-15

    Digital Breast Tomosynthesis (DBT) is rapidly replacing mammography as the standard of care in breast cancer screening and diagnosis. DBT is a form of computed tomography, in which a limited set of projection images are acquired over a small angular range and reconstructed into tomographic data. The angular range varies from 15° to 50° and the number of projections varies between 9 and 25 projections, as determined by the equipment manufacturer. It is equally valid to treat DBT as the digital analog of classical tomography – that is, linear tomography. In fact, the name “tomosynthesis” stands for “synthetic tomography.” DBT shares many common features with classical tomography, including the radiographic appearance, dose, and image quality considerations. As such, both the science and practical physics of DBT systems is a hybrid between computed tomography and classical tomographic methods. In this lecture, we will explore the continuum from radiography to computed tomography to illustrate the characteristics of DBT. This lecture will consist of four presentations that will provide a complete overview of DBT, including a review of the fundamentals of DBT acquisition, a discussion of DBT reconstruction methods, an overview of dosimetry for DBT systems, and summary of the underlying image theory of DBT thereby relating image quality and dose. Learning Objectives: To understand the fundamental principles behind tomosynthesis image acquisition. To understand the fundamentals of tomosynthesis image reconstruction. To learn the determinants of image quality and dose in DBT, including measurement techniques. To learn the image theory underlying tomosynthesis, and the relationship between dose and image quality. ADM is a consultant to, and holds stock in, Real Time Tomography, LLC. ADM receives research support from Hologic Inc., Analogic Inc., and Barco NV; ADM is a member of the Scientific Advisory Board for Gamma Medica Inc.; A. Maidment, Research Support

  20. A novel mechanochemical method for reconstructing the moisture-degraded HKUST-1.

    Science.gov (United States)

    Sun, Xuejiao; Li, Hao; Li, Yujie; Xu, Feng; Xiao, Jing; Xia, Qibin; Li, Yingwei; Li, Zhong

    2015-07-11

    A novel mechanochemical method was proposed to quickly reconstruct moisture-degraded HKUST-1. The degraded HKUST-1 can be restored within minutes. The reconstructed samples were characterized and confirmed to retain 95% of the surface area and 92% of the benzene capacity of fresh HKUST-1. It is a simple and effective strategy for degraded MOF reconstruction.

  1. Reconstructing Generalized Logical Networks of Transcriptional Regulation in Mouse Brain from Temporal Gene Expression Data

    Energy Technology Data Exchange (ETDEWEB)

    Song, Mingzhou (Joe) [New Mexico State University, Las Cruces; Lewis, Chris K. [New Mexico State University, Las Cruces; Lance, Eric [New Mexico State University, Las Cruces; Chesler, Elissa J [ORNL; Kirova, Roumyana [Bristol-Myers Squibb Pharmaceutical Research & Development, NJ; Langston, Michael A [University of Tennessee, Knoxville (UTK); Bergeson, Susan [Texas Tech University, Lubbock

    2009-01-01

    The problem of reconstructing generalized logical networks to account for temporal dependencies among genes and environmental stimuli from high-throughput transcriptomic data is addressed. A network reconstruction algorithm was developed that uses the statistical significance as a criterion for network selection to avoid false-positive interactions arising from pure chance. Using temporal gene expression data collected from the brains of alcohol-treated mice in an analysis of the molecular response to alcohol, this algorithm identified genes from a major neuronal pathway as putative components of the alcohol response mechanism. Three of these genes have known associations with alcohol in the literature. Several other potentially relevant genes, highlighted and agreeing with independent results from literature mining, may play a role in the response to alcohol. Additional, previously unknown gene interactions were discovered that, subject to biological verification, may offer new clues in the search for the elusive molecular mechanisms of alcoholism.

  2. CREST (Climate REconstruction SofTware): a probability density function (PDF)-based quantitative climate reconstruction method

    Science.gov (United States)

    Chevalier, M.; Cheddadi, R.; Chase, B. M.

    2014-11-01

    Several methods currently exist to quantitatively reconstruct palaeoclimatic variables from fossil botanical data. Of these, probability density function (PDF)-based methods have proven valuable as they can be applied to a wide range of plant assemblages. Most commonly applied to fossil pollen data, their performance, however, can be limited by the taxonomic resolution of the pollen data, as many species may belong to a given pollen type. Consequently, the climate information associated with different species cannot always be precisely identified, resulting in less-accurate reconstructions. This can become particularly problematic in regions of high biodiversity. In this paper, we propose a novel PDF-based method that takes into account the different climatic requirements of each species constituting the broader pollen type. PDFs are fitted in two successive steps: parametric PDFs are first fitted for each species, and those individual species PDFs are then combined into a single broader PDF to represent the pollen type as a unit. A climate value for the pollen assemblage is estimated from the likelihood function obtained after the multiplication of the pollen-type PDFs, with each being weighted according to its pollen percentage. To test its performance, we have applied the method to southern Africa as a regional case study and reconstructed a suite of climatic variables (e.g. winter and summer temperature and precipitation, mean annual aridity, rainfall seasonality). The reconstructions are shown to be accurate for both temperature and precipitation. Predictable exceptions were areas that experience conditions at the extremes of the regional climatic spectra. Importantly, the accuracy of the reconstructed values is independent of the vegetation type where the method is applied or the number of species used. The method used in this study is publicly available in a software package entitled CREST (Climate REconstruction SofTware) and will provide the
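
    The two-step PDF logic can be sketched directly: build each pollen-type PDF from its member-species PDFs, then multiply the type PDFs, weighted by pollen percentages, and take the climate value that maximizes the product. The species parameters, weights, and temperature grid below are invented, and CREST's actual fitting choices are more elaborate than this plain Gaussian treatment.

```python
import numpy as np
from scipy.stats import norm

temps = np.linspace(0, 30, 601)           # candidate temperatures, deg C

# pollen type -> (member-species (mean, sd) pairs, pollen fraction in sample)
pollen_types = {
    "TypeA": ([(12, 2.0), (16, 3.0)], 0.6),
    "TypeB": ([(20, 2.5)], 0.4),
}

log_lik = np.zeros_like(temps)
for species, weight in pollen_types.values():
    type_pdf = np.mean([norm.pdf(temps, m, s) for m, s in species], axis=0)
    log_lik += weight * np.log(type_pdf + 1e-300)   # weighted product of PDFs

print(temps[np.argmax(log_lik)])          # reconstructed temperature estimate
```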

  3. Constructing an Intelligent Patent Network Analysis Method

    Directory of Open Access Journals (Sweden)

    Chao-Chan Wu

    2012-11-01

    Full Text Available Patent network analysis, an advanced method of patent analysis, is a useful tool for technology management. This method visually displays all the relationships among the patents and enables the analysts to intuitively comprehend the overview of a set of patents in the field of the technology being studied. Although patent network analysis possesses relative advantages different from traditional methods of patent analysis, it is subject to several crucial limitations. To overcome the drawbacks of the current method, this study proposes a novel patent analysis method, called the intelligent patent network analysis method, to make a visual network with great precision. Based on artificial intelligence techniques, the proposed method provides an automated procedure for searching patent documents, extracting patent keywords, and determining the weight of each patent keyword in order to generate a sophisticated visualization of the patent network. This study proposes a detailed procedure for generating an intelligent patent network that is helpful for improving the efficiency and quality of patent analysis. Furthermore, patents in the field of Carbon Nanotube Backlight Unit (CNT-BLU) were analyzed to verify the utility of the proposed method.

  4. Vertex Reconstructing Neural Network at the ZEUS Central Tracking Detector

    CERN Document Server

    Dror, Gideon; Etzion, Erez

    2001-01-01

    An unconventional solution for finding the location of event creation is presented. It is based on two feed-forward neural networks with fixed architecture, whose parameters are chosen so as to reach a high accuracy. The interaction point location is a parameter that can be used to select events of interest from the very high rate of events created at the current experiments in High Energy Physics. The system suggested here is tested on simulated data sets of the ZEUS Central Tracking Detector, and is shown to perform better than conventional algorithms.

  5. Harnessing diversity towards the reconstructing of large scale gene regulatory networks.

    Directory of Open Access Journals (Sweden)

    Takeshi Hase

    Full Text Available Elucidating gene regulatory network (GRN) from large scale experimental data remains a central challenge in systems biology. Recently, numerous techniques, particularly consensus driven approaches combining different algorithms, have become a potentially promising strategy to infer accurate GRNs. Here, we develop a novel consensus inference algorithm, TopkNet that can integrate multiple algorithms to infer GRNs. Comprehensive performance benchmarking on a cloud computing framework demonstrated that (i) a simple strategy to combine many algorithms does not always lead to performance improvement compared to the cost of consensus and (ii) TopkNet integrating only high-performance algorithms provides significant performance improvement compared to the best individual algorithms and community prediction. These results suggest that a priori determination of high-performance algorithms is a key to reconstruct an unknown regulatory network. Similarity among gene-expression datasets can be useful to determine potential optimal algorithms for reconstruction of unknown regulatory networks, i.e., if expression data associated with a known regulatory network are similar to those associated with an unknown regulatory network, optimal algorithms determined for the known regulatory network can be repurposed to infer the unknown regulatory network. Based on this observation, we developed a quantitative measure of similarity among gene-expression datasets and demonstrated that, if similarity between the two expression datasets is high, TopkNet integrating algorithms that are optimal for the known dataset performs well on the unknown dataset. The consensus framework, TopkNet, together with the similarity measure proposed in this study, provides a powerful strategy towards harnessing the wisdom of the crowds in reconstruction of unknown regulatory networks.

  6. Harnessing diversity towards the reconstructing of large scale gene regulatory networks.

    Science.gov (United States)

    Hase, Takeshi; Ghosh, Samik; Yamanaka, Ryota; Kitano, Hiroaki

    2013-01-01

    Elucidating gene regulatory network (GRN) from large scale experimental data remains a central challenge in systems biology. Recently, numerous techniques, particularly consensus driven approaches combining different algorithms, have become a potentially promising strategy to infer accurate GRNs. Here, we develop a novel consensus inference algorithm, TopkNet that can integrate multiple algorithms to infer GRNs. Comprehensive performance benchmarking on a cloud computing framework demonstrated that (i) a simple strategy to combine many algorithms does not always lead to performance improvement compared to the cost of consensus and (ii) TopkNet integrating only high-performance algorithms provides significant performance improvement compared to the best individual algorithms and community prediction. These results suggest that a priori determination of high-performance algorithms is a key to reconstruct an unknown regulatory network. Similarity among gene-expression datasets can be useful to determine potential optimal algorithms for reconstruction of unknown regulatory networks, i.e., if expression data associated with a known regulatory network are similar to those associated with an unknown regulatory network, optimal algorithms determined for the known regulatory network can be repurposed to infer the unknown regulatory network. Based on this observation, we developed a quantitative measure of similarity among gene-expression datasets and demonstrated that, if similarity between the two expression datasets is high, TopkNet integrating algorithms that are optimal for the known dataset performs well on the unknown dataset. The consensus framework, TopkNet, together with the similarity measure proposed in this study, provides a powerful strategy towards harnessing the wisdom of the crowds in reconstruction of unknown regulatory networks.
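
    The consensus step can be pictured with a simple rank-averaging sketch (illustrative only; TopkNet's actual aggregation and its top-k selection of algorithms are more elaborate): each algorithm scores every candidate edge, the per-algorithm ranks are averaged, and the best-supported edges are kept.

        import numpy as np

        rng = np.random.default_rng(1)
        n_edges, n_algorithms = 10, 4
        scores = rng.random((n_algorithms, n_edges))    # confidence per algorithm/edge

        ranks = scores.argsort(axis=1).argsort(axis=1)  # 0 = weakest, n-1 = strongest
        consensus = ranks.mean(axis=0)                  # average rank per edge
        top_k = np.argsort(consensus)[::-1][:3]         # keep the k best-supported edges
        print("consensus edges:", top_k)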

  7. Reconstructing coherent networks from electroencephalography and magnetoencephalography with reduced contamination from volume conduction or magnetic field spread.

    Directory of Open Access Journals (Sweden)

    Mark Drakesmith

    Full Text Available Volume conduction (VC) and magnetic field spread (MFS) induce spurious correlations between EEG/MEG sensors, such that the estimation of functional networks from scalp recordings is inaccurate. Imaginary coherency [1] reduces VC/MFS artefacts between sensors by assuming that instantaneous interactions are caused predominantly by VC/MFS and do not contribute to the imaginary part of the cross-spectral densities (CSDs). We propose an adaptation of the dynamic imaging of coherent sources (DICS) [2] - a method for reconstructing the CSDs between sources, and subsequently inferring functional connectivity based on coherences between those sources. Firstly, we reformulate the principle of imaginary coherency by performing an eigenvector decomposition of the imaginary part of the CSD to estimate the power that only contributes to the non-zero phase-lagged (NZPL) interactions. Secondly, we construct an NZPL-optimised spatial filter with two a priori assumptions: (1) that only NZPL interactions exist at the source level and (2) that the NZPL CSD at the sensor level is a good approximation of the projected source NZPL CSDs. We compare the performance of the NZPL method to the standard method by reconstructing a coherent network from simulated EEG/MEG recordings. We demonstrate that, as long as there are phase differences between the sources, the NZPL method reliably detects the underlying networks from EEG and MEG. We show that the method is also robust to very small phase lags, noise from phase jitter, and is less sensitive to regularisation parameters. The method is applied to a human dataset to infer parts of a coherent network underpinning face recognition.
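
    The key quantity, the imaginary part of coherency, can be computed directly from trial-averaged cross-spectral densities. The sketch below (simulated data with an assumed 90-degree phase lag; not the NZPL filter itself) shows why zero-phase-lag, volume-conduction-like coupling drops out: it contributes only to the real part of the cross-spectrum.

        import numpy as np

        rng = np.random.default_rng(2)
        n_trials, n, fs = 200, 256, 256.0
        t = np.arange(n) / fs
        Sxx = Syy = Sxy = 0
        for _ in range(n_trials):
            phi = rng.uniform(0, 2 * np.pi)
            x = np.sin(2 * np.pi * 10 * t + phi) + 0.5 * rng.standard_normal(n)
            y = np.sin(2 * np.pi * 10 * t + phi + np.pi / 2) + 0.5 * rng.standard_normal(n)
            X, Y = np.fft.rfft(x), np.fft.rfft(y)
            # accumulate auto- and cross-spectral densities over trials
            Sxx, Syy, Sxy = Sxx + X * X.conj(), Syy + Y * Y.conj(), Sxy + X * Y.conj()

        coh = Sxy / np.sqrt(Sxx.real * Syy.real)
        f = np.fft.rfftfreq(n, 1 / fs)
        print("imaginary coherency at 10 Hz:", abs(coh.imag[f == 10][0]))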

  8. Complex networks principles, methods and applications

    CERN Document Server

    Latora, Vito; Russo, Giovanni

    2017-01-01

    Networks constitute the backbone of complex systems, from the human brain to computer communications, transport infrastructures to online social systems and metabolic reactions to financial markets. Characterising their structure improves our understanding of the physical, biological, economic and social phenomena that shape our world. Rigorous and thorough, this textbook presents a detailed overview of the new theory and methods of network science. Covering algorithms for graph exploration, node ranking and network generation, among others, the book allows students to experiment with network models and real-world data sets, providing them with a deep understanding of the basics of network theory and its practical applications. Systems of growing complexity are examined in detail, challenging students to increase their level of skill. An engaging presentation of the important principles of network science makes this the perfect reference for researchers and undergraduate and graduate students in physics, ...

  9. Granger Causality Network Reconstruction of Conductance-Based Integrate-and-Fire Neuronal Systems

    Science.gov (United States)

    Zhou, Douglas; Xiao, Yanyang; Zhang, Yaoyu; Xu, Zhiqin; Cai, David

    2014-01-01

    Reconstruction of anatomical connectivity from measured dynamical activities of coupled neurons is one of the fundamental issues in the understanding of the structure-function relationship of neuronal circuitry. Many approaches have been developed to address this issue based on either electrical or metabolic data observed in experiments. The Granger causality (GC) analysis remains one of the major approaches to explore the dynamical causal connectivity among individual neurons or neuronal populations. However, it is yet to be clarified how such causal connectivity, i.e., the GC connectivity, can be mapped to the underlying anatomical connectivity in neuronal networks. We perform the GC analysis on the conductance-based integrate-and-fire (IF) neuronal networks to obtain their causal connectivity. Through numerical experiments, we find that the underlying synaptic connectivity amongst individual neurons or subnetworks can be successfully reconstructed by the GC connectivity constructed from voltage time series. Furthermore, this reconstruction is insensitive to dynamical regimes and can be achieved without perturbing systems and without prior knowledge of neuronal model parameters. Surprisingly, the synaptic connectivity can even be reconstructed by merely knowing the raster of the systems, i.e., the spike timing of neurons. Using spike-triggered correlation techniques, we establish a direct mapping between the causal connectivity and the synaptic connectivity for the conductance-based IF neuronal networks, and show that the GC is quadratically related to the coupling strength. The theoretical approach we develop here may provide a framework for examining the validity of the GC analysis in other settings. PMID:24586285
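
    For orientation, a minimal bivariate Granger-causality computation (ordinary least squares with order-1 history on synthetic data; the paper's conditional, network-wide analysis is more involved) compares the residual variance of the target's autoregression with and without the putative driver's past.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 2000
        x = rng.standard_normal(n)
        y = np.zeros(n)
        for t in range(1, n):
            y[t] = 0.4 * y[t - 1] + 0.5 * x[t - 1] + 0.1 * rng.standard_normal()

        def resid_var(target, regressors):
            beta, *_ = np.linalg.lstsq(regressors, target, rcond=None)
            return np.var(target - regressors @ beta)

        full = np.column_stack([y[:-1], x[:-1]])       # y's own past + x's past
        restricted = y[:-1, None]                      # y's own past only
        gc = np.log(resid_var(y[1:], restricted) / resid_var(y[1:], full))
        print("GC value x -> y:", gc)                  # > 0 indicates causal influence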

  10. Binary Classification Method of Social Network Users

    Directory of Open Access Journals (Sweden)

    I. A. Poryadin

    2017-01-01

    Full Text Available The subject of this research is a binary classification method for social network users based on analysis of the data they have posted. The relevance of the task of gaining information about a person by examining the content of his or her pages in social networks is illustrated. The most common approach to this task is visual browsing. An order issued by a regional authority in our country shows that such monitoring is needed in school education. The article identifies the limitations of visually browsing pupils' pages in social networks as a tool for the teacher and the school psychologist, and argues that the analysis of social network users' data should be automated. Publications describing methods for acquiring, processing, and analyzing such data are reviewed, and their advantages and disadvantages are considered. The article also argues for studying the classification method of social network users. One such method is credit scoring, which is used in banks and credit institutions to assess the solvency of clients. Given the high efficiency of this method, its use could be significantly extended to other areas of society. The possibility of using logistic regression as the mathematical apparatus of the proposed binary classification method is justified; such an approach makes it possible to take into account the different types of data extracted from social networks, among them personal user data, information about hobbies, friends, graphic and text information, and behaviour characteristics. The article describes a number of existing data transformation methods that can be applied to solve the problem. An experiment on binary gender-based classification of social network users is described. The logistic model obtained for this example includes multiple logical variables derived by transforming the user surnames. This experiment confirms the feasibility of the proposed method. Further work is to define a system
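
    A toy version of such a classifier, with hypothetical profile features standing in for the paper's personal data, hobbies, friends and content features (scikit-learn's LogisticRegression supplies the logistic model; all data here are invented):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(4)
        n = 500
        X = np.column_stack([
            rng.integers(0, 2, n),          # e.g. hobby keyword present in posts
            rng.poisson(150, n),            # e.g. number of friends
            rng.random(n),                  # e.g. share of graphic content
        ])
        # synthetic class labels loosely driven by the first two features
        y = (0.8 * X[:, 0] + 0.004 * X[:, 1] + 0.3 * rng.standard_normal(n) > 0.9).astype(int)

        clf = LogisticRegression(max_iter=1000).fit(X, y)
        print("class probabilities:", clf.predict_proba(X[:3])[:, 1])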

  11. Accelerated gradient methods for total-variation-based CT image reconstruction

    DEFF Research Database (Denmark)

    Jørgensen, Jakob Heide; Jensen, Tobias Lindstrøm; Hansen, Per Christian

    2011-01-01

    ... reconstruction can in principle be found by any optimization method, but in practice the large scale of the systems arising in CT image reconstruction precludes the use of memory-demanding methods such as Newton’s method. The simple gradient method has much lower memory requirements, but exhibits slow convergence...

  12. Revisiting a model-independent dark energy reconstruction method

    Energy Technology Data Exchange (ETDEWEB)

    Lazkoz, Ruth; Salzano, Vincenzo; Sendra, Irene [Euskal Herriko Unibertsitatea, Fisika Teorikoaren eta Zientziaren Historia Saila, Zientzia eta Teknologia Fakultatea, Bilbao (Spain)

    2012-09-15

    In this work we offer new insights into the model-independent dark energy reconstruction method developed by Daly and Djorgovski (Astrophys. J. 597:9, 2003; Astrophys. J. 612:652, 2004; Astrophys. J. 677:1, 2008). Our results, using updated SNeIa and GRBs, allow us to highlight some of the intrinsic weaknesses of the method. Conclusions on the main dark energy features as drawn from this method are intimately related to the features of the samples themselves, particularly for GRBs, which are poor performers in this context and cannot be used for cosmological purposes, that is, the state of the art does not allow one to regard them on the same quality basis as SNeIa. We find there is a considerable sensitivity to some parameters (window width, overlap, selection criteria) affecting the results. Then, we try to establish what the current redshift range is for which one can make solid predictions on dark energy evolution. Finally, we strengthen the former view that this model is modest in the sense that it provides only a picture of the global trend and has to be managed very carefully. But, on the other hand, we believe it offers an interesting complement to other approaches, given that it works on minimal assumptions. (orig.)

  13. Sensor data validation and reconstruction in water networks : a methodology and software implementation

    OpenAIRE

    García Valverde, Diego; Quevedo Casín, Joseba Jokin; Puig Cayuela, Vicenç; Cugueró Escofet, Miquel Àngel

    2014-01-01

    In this paper, a data validation and reconstruction methodology that can be applied to the sensors used for real-time monitoring in water networks is presented. On the one hand, a validation approach based on quality levels is described to detect potential invalid and missing data. On the other hand, the reconstruction strategy is based on a set of temporal and spatial models used to estimate missing/invalid data with the model estimation providing the best fit. A software tool implementing t...

  14. Reconstruction of two-dimensional fracture network geometry by transdimensional inversion

    Science.gov (United States)

    Somogyvári, Márk; Jalali, Mohammadreza; Jimenez Parras, Santos; Bayer, Peter

    2017-04-01

    Transport processes in a fractured aquifer are mainly controlled by the geometry of the fracture network. Such a network is ideally modelled as a discrete fracture network (DFN), which is composed of a skeleton of hydraulically conductive fractures that intersect the impermeable rock matrix. The orientation and connectivity of the fractures are highly case-specific, and mapping especially the hydraulically active parts of a fracture network requires insight from hydraulic or transport-related experiments, such as tracer tests. Single tracer tests, however, offer only an integral picture of an aquifer's transport properties. Here, multiple tracer tests are proposed and evaluated together in a tracer tomography framework to obtain spatially distributed data. The interpretation of the data obtained from these experiments is challenging, since there exists no common recipe for reconstructing the fracture network in a DFN model. A crucial point is that the number of fractures (and thus the number of model parameters) is unknown. We propose the use of a transdimensional inversion method, which can be applied to calibrate fracture properties and number. In this study, the reversible jump Markov Chain Monte Carlo algorithm is selected and conservative tracer tomography experiments are interpreted with two-dimensional DFN models. In our approach, a randomly generated initial DFN solution is evolved through a Markov sequence. In each iteration the DFN model is updated by a random manipulation of the geometry (fracture addition, fracture deletion or fracture shift). The tracer tomography experiment is simulated with the updated model, and the simulated tracer breakthrough curves are compared to the original observations. Each updated DFN realization is evaluated using the Metropolis-Hastings-Green acceptance criteria. This evaluation is based on probabilistic properties of the updates and the improvement of the fit of the breakthrough curves. The transdimensional algorithm
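
    The transdimensional loop can be caricatured in a few lines. In the sketch below, fracture positions on a unit interval stand in for the 2D DFN geometry, and a simple distance-based misfit replaces the tracer-tomography forward model; only the add/delete/shift move structure and the Metropolis-style acceptance mirror the described algorithm.

        import numpy as np

        rng = np.random.default_rng(5)
        true_positions = np.array([0.2, 0.55, 0.8])        # synthetic "network"

        def misfit(fracs):
            if len(fracs) == 0:
                return 10.0
            d = np.abs(np.subtract.outer(true_positions, np.sort(fracs))).min(axis=1)
            return d.sum() + 0.1 * len(fracs)              # data fit + complexity penalty

        model = [rng.random()]                             # random initial DFN
        for it in range(5000):
            proposal = list(model)
            move = rng.choice(["add", "delete", "shift"])  # transdimensional moves
            if move == "add":
                proposal.append(rng.random())
            elif move == "delete" and proposal:
                proposal.pop(rng.integers(len(proposal)))
            elif proposal:
                i = rng.integers(len(proposal))
                proposal[i] = np.clip(proposal[i] + 0.05 * rng.standard_normal(), 0, 1)
            # Metropolis-style acceptance on the misfit improvement
            if rng.random() < np.exp(min(0.0, (misfit(model) - misfit(proposal)) / 0.01)):
                model = proposal

        print(sorted(round(p, 2) for p in model))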

  15. Advanced fault diagnosis methods in molecular networks.

    Science.gov (United States)

    Habibi, Iman; Emamian, Effat S; Abdi, Ali

    2014-01-01

    Analysis of the failure of cell signaling networks is an important topic in systems biology and has applications in target discovery and drug development. In this paper, some advanced methods for fault diagnosis in signaling networks are developed and then applied to a caspase network and an SHP2 network. The goal is to understand how, and to what extent, the dysfunction of molecules in a network contributes to the failure of the entire network. Network dysfunction (failure) is defined as failure to produce the expected outputs in response to the input signals. Vulnerability level of a molecule is defined as the probability of the network failure, when the molecule is dysfunctional. In this study, a method to calculate the vulnerability level of single molecules for different combinations of input signals is developed. Furthermore, a more complex yet biologically meaningful method for calculating the multi-fault vulnerability levels is suggested, in which two or more molecules are simultaneously dysfunctional. Finally, a method is developed for fault diagnosis of networks based on a ternary logic model, which considers three activity levels for a molecule instead of the previously published binary logic model, and provides equations for the vulnerabilities of molecules in a ternary framework. Multi-fault analysis shows that the pairs of molecules with high vulnerability typically include a highly vulnerable molecule identified by the single fault analysis. The ternary fault analysis for the caspase network shows that predictions obtained using the more complex ternary model are about the same as the predictions of the simpler binary approach. This study suggests that by increasing the number of activity levels the complexity of the model grows; however, the predictive power of the ternary model does not appear to be increased proportionally.
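
    The single-fault vulnerability definition can be illustrated on a toy Boolean cascade (the network, the stuck-at-0 fault model and the uniform input distribution are assumptions made for this example; the paper's networks and its ternary extension are richer):

        # Vulnerability of a molecule = fraction of input patterns for which forcing
        # that molecule to a dysfunctional (stuck-at-0) state changes the output.
        from itertools import product

        def network(a, b, c, fault=None):
            """output = (a AND b) OR c, with optional stuck-at-0 faults."""
            g1 = 0 if fault == "g1" else (a and b)
            out = 0 if fault == "out" else (g1 or c)
            return out

        def vulnerability(molecule):
            flips = [network(*ins) != network(*ins, fault=molecule)
                     for ins in product([0, 1], repeat=3)]
            return sum(flips) / len(flips)

        for m in ["g1", "out"]:
            print(m, vulnerability(m))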

  16. Reconstruction of tt̄H (H → bb) events using deep neural networks with the CMS detector

    Energy Technology Data Exchange (ETDEWEB)

    Rieger, Marcel; Erdmann, Martin; Fischer, Benjamin; Fischer, Robert; Heidemann, Fabian; Quast, Thorben; Rath, Yannik [III. Physikalisches Institut A, RWTH Aachen University (Germany)

    2016-07-01

    The measurement of Higgs boson production in association with top-quark pairs (tt̄H) is an important goal of Run 2 of the LHC as it allows for a direct measurement of the underlying Yukawa coupling. Due to the complex final state, however, the analysis of semi-leptonic tt̄H events with the Higgs boson decaying into a pair of bottom-quarks is challenging. A promising method for tackling jet-parton association is the use of Deep Neural Networks (DNNs). While widespread as a machine learning technique in modern industry, DNNs are on the way to becoming established in high energy physics. We present a study on the reconstruction of the final state using DNNs, comparing to Boosted Decision Trees (BDT) as a benchmark scenario. This is accomplished by generating permutations of simulated events and comparing them with truth information to extract reconstruction efficiencies.

  17. Tissue expansion for breast reconstruction: Methods and techniques

    Directory of Open Access Journals (Sweden)

    Nicolò Bertozzi

    2017-09-01

    Conclusions: TE/implant-based reconstruction has proved to be a safe, cost-effective, and reliable technique that can be performed in women with various comorbidities. Short operative time, fast recovery, and absence of donor site morbidity are other advantages over autologous breast reconstruction.

  18. Metabolism and evolution: A comparative study of reconstructed genome-level metabolic networks

    Science.gov (United States)

    Almaas, Eivind

    2008-03-01

    The availability of high-quality annotations of sequenced genomes has made it possible to generate organism-specific comprehensive maps of cellular metabolism. Currently, more than twenty such metabolic reconstructions are publicly available, with the majority focused on bacteria. A typical metabolic reconstruction for a bacterium results in a complex network containing hundreds of metabolites (nodes) and reactions (links), while some even contain more than a thousand. The constraint-based optimization approach of flux-balance analysis (FBA) is used to investigate the functional characteristics of such large-scale metabolic networks, making it possible to estimate an organism's growth behavior in a wide variety of nutrient environments, as well as its robustness to gene loss. We have recently completed the genome-level metabolic reconstruction of Yersinia pseudotuberculosis, as well as the three Yersinia pestis biovars Antiqua, Mediaevalis, and Orientalis. While Y. pseudotuberculosis typically only causes fever and abdominal pain that can mimic appendicitis, the evolutionarily closely related Y. pestis strains are the aetiological agents of the bubonic plague. In this presentation, I will discuss our results and conclusions from a comparative study on the evolution of metabolic function in the four Yersiniae networks using FBA and related techniques, and I will give particular focus to the interplay between metabolic network topology and evolutionary flexibility.
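
    A miniature flux-balance analysis conveys the constraint-based optimization used here: maximize a biomass flux subject to steady-state mass balance S v = 0 and flux bounds. The three-reaction network below is a schematic stand-in for the genome-scale Yersinia reconstructions.

        import numpy as np
        from scipy.optimize import linprog

        # Metabolite rows, reaction columns: uptake -> A, A -> B, B -> biomass
        S = np.array([[ 1, -1,  0],     # metabolite A
                      [ 0,  1, -1]])    # metabolite B
        bounds = [(0, 10), (0, 8), (0, None)]       # uptake/conversion capacities
        c = np.array([0, 0, -1])                    # maximize v3 (biomass) => minimize -v3

        res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
        print("growth rate:", res.x[2])             # limited by the A -> B capacity (8)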

  19. Genome-scale reconstruction and analysis of the metabolic network in the hyperthermophilic archaeon Sulfolobus solfataricus.

    Directory of Open Access Journals (Sweden)

    Thomas Ulas

    Full Text Available We describe the reconstruction of a genome-scale metabolic model of the crenarchaeon Sulfolobus solfataricus, a hyperthermoacidophilic microorganism. It grows in terrestrial volcanic hot springs with growth occurring at pH 2-4 (optimum 3.5) and a temperature of 75-80°C (optimum 80°C). The genome of Sulfolobus solfataricus P2 contains 2,992,245 bp on a single circular chromosome and encodes 2,977 proteins and a number of RNAs. The network comprises 718 metabolic and 58 transport/exchange reactions and 705 unique metabolites, based on the annotated genome and available biochemical data. Using the model in conjunction with constraint-based methods, we simulated the metabolic fluxes induced by different environmental and genetic conditions. The predictions were compared to experimental measurements and phenotypes of S. solfataricus. Furthermore, the performance of the network for 35 different carbon sources known for S. solfataricus from the literature was simulated. Comparing the growth on different carbon sources revealed that glycerol is the carbon source with the highest biomass flux per imported carbon atom (75% higher than glucose). Experimental data was also used to fit the model to phenotypic observations. In addition to the commonly known heterotrophic growth of S. solfataricus, the crenarchaeon is also able to grow autotrophically using the hydroxypropionate-hydroxybutyrate cycle for bicarbonate fixation. We integrated this pathway into our model and compared bicarbonate fixation with growth on glucose as sole carbon source. Finally, we tested the robustness of the metabolism with respect to gene deletions using the method of Minimization of Metabolic Adjustment (MOMA), which predicted that 18% of all possible single gene deletions would be lethal for the organism.

  20. Simulation of signal flow in 3D reconstructions of an anatomically realistic neural network in rat vibrissal cortex.

    Science.gov (United States)

    Lang, Stefan; Dercksen, Vincent J; Sakmann, Bert; Oberlaender, Marcel

    2011-11-01

    The three-dimensional (3D) structure of neural circuits represents an essential constraint for information flow in the brain. Methods to directly monitor streams of excitation, at subcellular and millisecond resolution, are at present lacking. Here, we describe a pipeline of tools that allow investigating information flow by simulating electrical signals that propagate through anatomically realistic models of average neural networks. The pipeline comprises three blocks. First, we review tools that allow fast and automated acquisition of 3D anatomical data, such as neuron soma distributions or reconstructions of dendrites and axons from in vivo labeled cells. Second, we introduce NeuroNet, a tool for assembling the 3D structure and wiring of average neural networks. Finally, we introduce a simulation framework, NeuroDUNE, to investigate structure-function relationships within networks of full-compartmental neuron models at subcellular, cellular and network levels. We illustrate the pipeline by simulations of a reconstructed excitatory network formed between the thalamus and spiny stellate neurons in layer 4 (L4ss) of a cortical barrel column in rat vibrissal cortex. Exciting the ensemble of L4ss neurons with realistic input from an ensemble of thalamic neurons revealed that the location-specific thalamocortical connectivity may result in location-specific spiking of cortical cells. Specifically, a radial decay in spiking probability toward the column borders could be a general feature of signal flow in a barrel column. Our simulations provide insights of how anatomical parameters, such as the subcellular organization of synapses, may constrain spiking responses at the cellular and network levels. Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. Summer drought reconstruction in northeastern Spain inferred from a tree ring latewood network since 1734

    Science.gov (United States)

    Tejedor, E.; Saz, M. A.; Esper, J.; Cuadrat, J. M.; de Luis, M.

    2017-08-01

    Drought recurrence in the Mediterranean is regarded as a fundamental factor for socioeconomic development and the resilience of natural systems in the context of global change. However, knowledge of past droughts has been hampered by the absence of high-resolution proxies. We present a drought reconstruction for the northeast of the Iberian Peninsula based on a new dendrochronology network considering the Standardized Evapotranspiration Precipitation Index (SPEI). A total of 774 latewood width series from 387 trees of P. sylvestris and P. uncinata were combined into an interregional chronology. The new chronology, calibrated against gridded climate data, reveals a robust relationship with the SPEI representing drought conditions of July and August. We developed a summer drought reconstruction for the period 1734-2013 representative of the northeastern and central Iberian Peninsula. We identified 16 extremely dry and 17 extremely wet summers and four decadal-scale dry and wet periods, including 2003-2013 as the driest episode of the reconstruction.

  2. A two-step filtering-based iterative image reconstruction method for interior tomography.

    Science.gov (United States)

    Zhang, Hanming; Li, Lei; Yan, Bin; Wang, Linyuan; Cai, Ailong; Hu, Guoen

    2016-10-06

    The optimization-based method that utilizes the additional sparse prior of the region-of-interest (ROI) image, such as total variation, has been the subject of considerable research in problems of interior tomography reconstruction. One challenge for optimization-based iterative ROI image reconstruction is to build the relationship between the ROI image and the truncated projection data. When the reconstruction support region is smaller than the original object, an unsuitable representation of data fidelity may lead to bright truncation artifacts in the boundary region of the field of view. In this work, we aim to develop an iterative reconstruction method to suppress the truncation artifacts and improve the image quality for direct ROI image reconstruction. A novel reconstruction approach is proposed based on an optimization problem involving a two-step filtering-based data fidelity. Data filtering is achieved in two steps: the first takes the derivative of the projection data; in the second step, Hilbert filtering is applied to the differentiated data. Numerical simulations and real data reconstructions have been conducted to validate the new reconstruction method. Both qualitative and quantitative results indicate that, as theoretically expected, the proposed method brings reasonable performance in suppressing truncation artifacts and preserving detailed features. The presented local reconstruction method based on the two-step filtering strategy provides a simple and efficient approach for iterative reconstruction from truncated projections.
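
    The two-step data filtering itself is simple to sketch in 1D (scipy's analytic-signal routine supplies the Hilbert transform; the actual method applies these steps inside an iterative optimization with a sparsity prior, which is omitted here):

        import numpy as np
        from scipy.signal import hilbert

        proj = np.exp(-np.linspace(-3, 3, 256) ** 2)   # one row of projection data
        step1 = np.gradient(proj)                      # step 1: derivative of the projections
        step2 = np.imag(hilbert(step1))                # step 2: Hilbert filtering of the derivative
        print(step2[:4])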

  3. Artificial neural network intelligent method for prediction

    Science.gov (United States)

    Trifonov, Roumen; Yoshinov, Radoslav; Pavlova, Galya; Tsochev, Georgi

    2017-09-01

    Accounting and financial classification and prediction problems are highly challenging, and researchers use different methods to solve them. Methods and instruments for short-term prediction of financial operations using artificial neural networks are considered. The methods used for prediction of financial data, as well as the developed forecasting system with a neural network, are described in the paper. The architecture of a neural network that uses four different technical indicators, based on the raw data, together with the current day of the week, is presented. The network developed is used for forecasting the movement of stock prices one day ahead and consists of an input layer, one hidden layer and an output layer. The training method is an algorithm with back propagation of the error. The main advantage of the developed system is self-determination of the optimal topology of the neural network, due to which it becomes flexible and more precise. The proposed system with a neural network is universal and can be applied to various financial instruments using only basic technical indicators as input data.
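
    A compact stand-in for the described setup, with four synthetic "technical indicators" plus the day of the week feeding a one-hidden-layer network trained by backpropagation (scikit-learn's MLPClassifier; the indicator definitions and data are invented for illustration):

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(12)
        n = 400
        X = np.column_stack([rng.standard_normal((n, 4)),    # 4 technical indicators
                             rng.integers(0, 5, n)])         # day of the week
        # synthetic "up/down next day" labels driven by two of the indicators
        y = (X[:, 0] + 0.5 * X[:, 1] + 0.2 * rng.standard_normal(n) > 0).astype(int)

        net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000).fit(X[:-50], y[:-50])
        print("held-out accuracy:", net.score(X[-50:], y[-50:]))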

  4. NETWORK ECONOMY INNOVATIVE POTENTIAL EVALUATION METHOD

    Directory of Open Access Journals (Sweden)

    E. V. Loguinova

    2011-01-01

    Full Text Available Having analyzed existing methodological approaches to the assessment of innovation potential, a method for identifying and characterizing the innovative potential of a network system is proposed that makes it possible to assess the potential's qualitative and quantitative components and to determine their consistency with the objectives of national innovative system formation and development. A four-stage procedure for assessing the innovative potential of the network economy is recommended. The main structural elements of the network economy's innovative potential are the resource, institutional, infrastructural and resulting sets of factors.

  5. Engine cylinder pressure reconstruction using crank kinematics and recurrently-trained neural networks

    Science.gov (United States)

    Bennett, C.; Dunne, J. F.; Trimby, S.; Richardson, D.

    2017-02-01

    A recurrent non-linear autoregressive with exogenous input (NARX) neural network is proposed, and a suitable fully-recurrent training methodology is adapted and tuned, for reconstructing cylinder pressure in multi-cylinder IC engines using measured crank kinematics. This type of indirect sensing is important for cost effective closed-loop combustion control and for On-Board Diagnostics. The challenge addressed is to accurately predict cylinder pressure traces within the cycle under generalisation conditions: i.e. using data not previously seen by the network during training. This involves direct construction and calibration of a suitable inverse crank dynamic model, which owing to singular behaviour at top-dead-centre (TDC), has proved difficult via physical model construction, calibration, and inversion. The NARX architecture is specialised and adapted to cylinder pressure reconstruction, using a fully-recurrent training methodology which is needed because the alternatives are too slow and unreliable for practical network training on production engines. The fully-recurrent Robust Adaptive Gradient Descent (RAGD) algorithm, is tuned initially using synthesised crank kinematics, and then tested on real engine data to assess the reconstruction capability. Real data is obtained from a 1.125 l, 3-cylinder, in-line, direct injection spark ignition (DISI) engine involving synchronised measurements of crank kinematics and cylinder pressure across a range of steady-state speed and load conditions. The paper shows that a RAGD-trained NARX network using both crank velocity and crank acceleration as input information, provides fast and robust training. By using the optimum epoch identified during RAGD training, acceptably accurate cylinder pressures, and especially accurate location-of-peak-pressure, can be reconstructed robustly under generalisation conditions, making it the most practical NARX configuration and recurrent training methodology for use on production engines.

  6. Homotopy methods for counting reaction network equilibria

    OpenAIRE

    Craciun, Gheorghe; Helton, J. William; Williams, Ruth J

    2007-01-01

    Dynamical system models of complex biochemical reaction networks are usually high-dimensional, nonlinear, and contain many unknown parameters. In some cases the reaction network structure dictates that positive equilibria must be unique for all values of the parameters in the model. In other cases multiple equilibria exist if and only if special relationships between these parameters are satisfied. We describe methods based on homotopy invariance of degree which allow us to determine the numb...

  7. Reconstruction of fluorescence molecular tomography with a cosinoidal level set method

    National Research Council Canada - National Science Library

    Xuanxuan Zhang; Xu Cao; Shouping Zhu

    2017-01-01

    .... The Heaviside function in the classical implicit shape method is replaced with a cosine function, and then the reconstruction can be accomplished with the Levenberg-Marquardt method rather than gradient-based methods...

  8. Iterative Reconstruction Methods for Hybrid Inverse Problems in Impedance Tomography

    DEFF Research Database (Denmark)

    Hoffmann, Kristoffer; Knudsen, Kim

    2014-01-01

    For a general formulation of hybrid inverse problems in impedance tomography the Picard and Newton iterative schemes are adapted and four iterative reconstruction algorithms are developed. The general problem formulation includes several existing hybrid imaging modalities such as current density impedance imaging, magnetic resonance electrical impedance tomography, and ultrasound modulated electrical impedance tomography, and the unified approach to the reconstruction problem encompasses several algorithms suggested in the literature. The four proposed algorithms are implemented numerically in two...

  9. Genome-scale reconstruction of the Streptococcus pyogenes M49 metabolic network reveals growth requirements and indicates potential drug targets

    NARCIS (Netherlands)

    Levering, J.; Fiedler, T.; Sieg, A.; van Grinsven, K.W.A.; Hering, S.; Veith, N.; Olivier, B.G.; Klett, L.; Hugenholtz, J.; Teusink, B.; Kreikemeyer, B.; Kummer, U.

    2016-01-01

    Genome-scale metabolic models comprise stoichiometric relations between metabolites, as well as associations between genes and metabolic reactions and facilitate the analysis of metabolism. We computationally reconstructed the metabolic network of the lactic acid bacterium Streptococcus pyogenes

  10. Noniterative convex optimization methods for network component analysis.

    Science.gov (United States)

    Jacklin, Neil; Ding, Zhi; Chen, Wei; Chang, Chunqi

    2012-01-01

    This work studies the reconstruction of gene regulatory networks by means of network component analysis (NCA). We expound a family of convex optimization-based methods for estimating the transcription factor control strengths and the transcription factor activities (TFAs). The approach taken in this work is to decompose the problem into a network connectivity strength estimation phase and a transcription factor activity estimation phase. In the control strength estimation phase, we formulate a new subspace-based method incorporating a choice of multiple error metrics. For the source estimation phase we propose a total least squares (TLS) formulation that generalizes many existing methods. Both estimation procedures are noniterative and yield the optimal estimates according to the various proposed error metrics. We test the performance of the proposed algorithms on simulated data and experimental gene expression data for the yeast Saccharomyces cerevisiae and demonstrate that the proposed algorithms have superior effectiveness in comparison with both Bayesian Decomposition (BD) and our previous FastNCA approach, while their computational complexity is still orders of magnitude less than that of BD.
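
    The decomposition at the heart of NCA can be shown in one schematic step: with the connectivity (control-strength) matrix fixed, the transcription factor activities are the least-squares solution of X = A S. The paper's estimators generalize this with subspace methods, multiple error metrics and total least squares; the sketch below assumes the connectivity is already known.

        import numpy as np

        rng = np.random.default_rng(6)
        genes, tfs, samples = 30, 3, 12
        # sparse connectivity (control strengths) and hidden TF activities
        A_true = rng.random((genes, tfs)) * (rng.random((genes, tfs)) < 0.4)
        S_true = rng.standard_normal((tfs, samples))
        X = A_true @ S_true + 0.01 * rng.standard_normal((genes, samples))

        S_hat, *_ = np.linalg.lstsq(A_true, X, rcond=None)   # TFA estimation phase
        print("TFA recovery error:",
              np.linalg.norm(S_hat - S_true) / np.linalg.norm(S_true))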

  11. Comparative genomic reconstruction of transcriptional networks controlling central metabolism in the Shewanella genus

    Energy Technology Data Exchange (ETDEWEB)

    Rodionov, Dmitry A.; Novichkov, Pavel; Stavrovskaya, Elena D.; Rodionova, Irina A.; Li, Xiaoqing; Kazanov, Marat D.; Ravcheev, Dmitry A.; Gerasimova, Anna V.; Kazakov, Alexey E.; Kovaleva, Galina Y.; Permina, Elizabeth A.; Laikova, Olga N.; Overbeek, Ross; Romine, Margaret F.; Fredrickson, Jim K.; Arkin, Adam P.; Dubchak, Inna; Osterman, Andrei L.; Gelfand, Mikhail S.

    2011-06-15

    Genome-scale prediction of gene regulation and reconstruction of transcriptional regulatory networks in bacteria is one of the critical tasks of modern genomics. Despite the growing number of genome-scale gene expression studies, our abilities to convert the results of these studies into accurate regulatory annotations and to project them from model to other organisms are extremely limited. The comparative genomics approaches and computational identification of regulatory sites are useful for the in silico reconstruction of transcriptional regulatory networks in bacteria. The Shewanella genus comprises metabolically versatile gamma-proteobacteria, whose lifestyles and natural environments are substantially different from Escherichia coli and other model bacterial species. To explore conservation and variations in the Shewanella transcriptional networks we analyzed the repertoire of transcription factors and performed genomics-based reconstruction and comparative analysis of regulons in 16 Shewanella genomes. The inferred regulatory network includes 82 transcription factors and their DNA binding sites, 8 riboswitches and 6 translational attenuators. Forty-five regulons were newly inferred from the genome context analysis, whereas others were propagated from previously characterized regulons in the Enterobacteria and Pseudomonas spp. However, even orthologous regulators with conserved DNA-binding motifs may control substantially different gene sets, revealing striking differences in regulatory strategies between the Shewanella spp. and E. coli. Multiple examples of regulatory network rewiring include regulon contraction and expansion (as in the case of PdhR, HexR, FadR), and numerous cases of recruiting non-orthologous regulators to control equivalent pathways (e.g. NagR for N-acetylglucosamine catabolism and PsrA for fatty acid degradation) and, conversely, orthologous regulators to control distinct pathways (e.g. TyrR, ArgR, Crp).

  12. Blockwise conjugate gradient methods for image reconstruction in volumetric CT.

    Science.gov (United States)

    Qiu, W; Titley-Peloquin, D; Soleimani, M

    2012-11-01

    Cone beam computed tomography (CBCT) enables volumetric image reconstruction from 2D projection data and plays an important role in image guided radiation therapy (IGRT). Filtered back projection is still the most frequently used algorithm in applications. The algorithm discretizes the scanning process (forward projection) into a system of linear equations, which must then be solved to recover images from measured projection data. The conjugate gradients (CG) algorithm and its variants can be used to solve (possibly regularized) linear systems of equations Ax = b and linear least squares problems min_x ∥b - Ax∥_2, especially when the matrix A is very large and sparse. Their applications can be found in a general CT context, but in tomography problems (e.g. CBCT reconstruction) they have not been widely used. Hence, CBCT reconstruction using the CG-type algorithm LSQR was implemented and studied in this paper. In CBCT reconstruction, the main computational challenge is that the matrix A usually is very large, and storing it in full requires an amount of memory well beyond the reach of commodity computers. Because of these memory capacity constraints, only a small fraction of the weighting matrix A is typically used, leading to a poor reconstruction. In this paper, to overcome this difficulty, the matrix A is partitioned and stored blockwise, and blockwise matrix-vector multiplications are implemented within LSQR. This implementation allows us to use the full weighting matrix A for CBCT reconstruction without exceeding the capabilities of standard computers. Tikhonov regularization can also be implemented in this fashion, and can produce significant improvement in the reconstructed images. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
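
    The blockwise idea reduces to supplying matrix-vector products one block at a time. A minimal sketch (random dense blocks standing in for the CBCT weighting matrix) of the two callbacks a CG-type solver such as LSQR needs:

        import numpy as np

        def blocks(n_blocks, rows_per_block, n_cols, seed=7):
            rng = np.random.default_rng(seed)
            return [rng.random((rows_per_block, n_cols)) for _ in range(n_blocks)]

        A_blocks = blocks(4, 50, 30)        # row blocks; in practice loaded on demand

        def matvec(x):                      # A @ x, one block at a time
            return np.concatenate([B @ x for B in A_blocks])

        def rmatvec(y):                     # A.T @ y, accumulated blockwise
            out = np.zeros(A_blocks[0].shape[1])
            start = 0
            for B in A_blocks:
                out += B.T @ y[start:start + B.shape[0]]
                start += B.shape[0]
            return out

        # These two callbacks are exactly what an LSQR-style solver needs, e.g.
        # scipy.sparse.linalg.LinearOperator((200, 30), matvec=matvec, rmatvec=rmatvec)
        print(matvec(np.ones(30)).shape, rmatvec(np.ones(200)).shape)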

  13. Noise robustness of a combined phase retrieval and reconstruction method for phase-contrast tomography

    DEFF Research Database (Denmark)

    Kongskov, Rasmus Dalgas; Jørgensen, Jakob Sauer; Poulsen, Henning Friis

    2016-01-01

    Classical reconstruction methods for phase-contrast tomography consist of two stages: phase retrieval and tomographic reconstruction. A novel algebraic method combining the two was suggested by Kostenko et al. [Opt. Express 21, 12185 (2013)], and preliminary results demonstrated improve...

  14. On the 3D reconstruction of diatom frustules : a novel method, applications, and limitations

    NARCIS (Netherlands)

    Mansilla, Catalina; Novais, Maria Helena; Faber, Enne; Martinez-Martinez, Diego; De Hosson, J. Th.

    Because of the importance of diatoms and the lack of information about their third dimension, a new method for 3D reconstruction is explored, based on digital image correlation of several scanning electron microscope images. The accuracy of the method in reconstructing both centric and pennate

  15. Natural Cubic Spline Regression Modeling Followed by Dynamic Network Reconstruction for the Identification of Radiation-Sensitivity Gene Association Networks from Time-Course Transcriptome Data.

    Science.gov (United States)

    Michna, Agata; Braselmann, Herbert; Selmansberger, Martin; Dietz, Anne; Hess, Julia; Gomolka, Maria; Hornhardt, Sabine; Blüthgen, Nils; Zitzelsberger, Horst; Unger, Kristian

    2016-01-01

    Gene expression time-course experiments allow one to study the dynamics of transcriptomic changes in cells exposed to different stimuli. However, most approaches for the reconstruction of gene association networks (GANs) do not propose prior-selection approaches tailored to time-course transcriptome data. Here, we present a workflow for the identification of GANs from time-course data using prior selection of genes differentially expressed over time identified by natural cubic spline regression modeling (NCSRM). The workflow comprises three major steps: 1) the identification of differentially expressed genes from time-course expression data by employing NCSRM, 2) the use of regularized dynamic partial correlation as implemented in GeneNet to infer GANs from differentially expressed genes and 3) the identification and functional characterization of the key nodes in the reconstructed networks. The approach was applied to a time-resolved transcriptome data set of radiation-perturbed cell culture models of non-tumor cells with normal and increased radiation sensitivity. NCSRM detected significantly more genes than another commonly used method for time-course transcriptome analysis (BETR). While most genes detected with BETR were also detected with NCSRM, the false-detection rate of NCSRM was low (3%). The GANs reconstructed from genes detected with NCSRM showed a better overlap with the interactome network Reactome compared to GANs derived from BETR-detected genes. After exposure to 1 Gy the normal sensitive cells showed only a sparse response compared to cells with increased sensitivity, which exhibited a strong response mainly of genes related to the senescence pathway. After exposure to 10 Gy the response of the normal sensitive cells was mainly associated with senescence and that of cells with increased sensitivity with apoptosis. We discuss these results in a clinical context and underline the impact of senescence-associated pathways in acute radiation response of normal
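
    The prior-selection step can be approximated with a per-gene F-test of a flexible time trend against a flat model. In the sketch below a cubic polynomial basis stands in for the natural cubic spline basis of NCSRM (the data and parameters are invented):

        import numpy as np
        from scipy.stats import f as f_dist

        def time_course_p(t, expr, df_model=3):
            X = np.vander(t, df_model + 1)                # cubic regression basis
            beta, *_ = np.linalg.lstsq(X, expr, rcond=None)
            rss1 = np.sum((expr - X @ beta) ** 2)         # flexible time-trend model
            rss0 = np.sum((expr - expr.mean()) ** 2)      # null: constant over time
            dfn, dfd = df_model, len(t) - df_model - 1
            F = ((rss0 - rss1) / dfn) / (rss1 / dfd)
            return 1 - f_dist.cdf(F, dfn, dfd)

        t = np.linspace(0, 24, 10)                        # hours after irradiation
        expr = np.sin(t / 8) + 0.1 * np.random.default_rng(8).standard_normal(10)
        print("p-value for a time effect:", time_course_p(t, expr))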

  16. Porous media microstructure reconstruction using pixel-based and object-based simulated annealing: comparison with other reconstruction methods

    Energy Technology Data Exchange (ETDEWEB)

    Diogenes, Alysson N.; Santos, Luis O.E. dos; Fernandes, Celso P. [Universidade Federal de Santa Catarina (UFSC), Florianopolis, SC (Brazil); Appoloni, Carlos R. [Universidade Estadual de Londrina (UEL), PR (Brazil)

    2008-07-01

    The physical properties of reservoir rocks are usually obtained in the laboratory through standard experiments. These experiments are often very expensive and time-consuming. Hence, digital image analysis techniques are a fast and low-cost methodology for predicting physical properties, requiring only geometrical parameters measured from thin sections of the rock microstructure. This research analyzes two methods for porous media reconstruction using the simulated annealing relaxation method. Using geometrical parameters measured from rock thin sections, it is possible to construct a three-dimensional (3D) model of the microstructure. We assume statistical homogeneity and isotropy; the 3D model maintains the porosity spatial correlation, chord size distribution and d3-4 distance transform distribution for a pixel-based reconstruction, and the spatial correlation for an object-based reconstruction. The 2D and 3D preliminary results are compared with microstructures reconstructed by truncated Gaussian methods. Since this research is at an early stage, only the 2D results are presented. (author)
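
    Pixel-based simulated annealing can be shown in miniature: swap unequal pixels (so porosity is conserved), keep swaps that bring a summary statistic closer to the reference, and occasionally accept bad swaps while the temperature decays. The single neighbor-correlation statistic below is a stand-in for the chord-size and distance-transform statistics used in the paper.

        import numpy as np

        rng = np.random.default_rng(10)
        ref = rng.random((32, 32)) < 0.3                   # reference microstructure

        def stat(img):                                     # porosity + 1-pixel correlation
            return np.array([img.mean(), (img[:, :-1] & img[:, 1:]).mean()])

        target = stat(ref)
        img = rng.permutation(ref.ravel()).reshape(ref.shape)   # same porosity, shuffled
        T = 1e-3
        for step in range(20000):
            a, b = rng.integers(32, size=2), rng.integers(32, size=2)
            if img[a[0], a[1]] == img[b[0], b[1]]:
                continue                                   # only porosity-preserving swaps
            cost0 = np.abs(stat(img) - target).sum()
            img[a[0], a[1]], img[b[0], b[1]] = img[b[0], b[1]], img[a[0], a[1]]
            cost1 = np.abs(stat(img) - target).sum()
            if cost1 > cost0 and rng.random() > np.exp((cost0 - cost1) / T):
                img[a[0], a[1]], img[b[0], b[1]] = img[b[0], b[1]], img[a[0], a[1]]  # undo
            T *= 0.9997                                    # annealing schedule
        print("final statistic mismatch:", np.abs(stat(img) - target).sum())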

  17. Stepwise method based on Wiener estimation for spectral reconstruction in spectroscopic Raman imaging.

    Science.gov (United States)

    Chen, Shuo; Wang, Gang; Cui, Xiaoyu; Liu, Quan

    2017-01-23

    Raman spectroscopy has demonstrated great potential in biomedical applications. However, spectroscopic Raman imaging is limited in the investigation of fast changing phenomena because of slow data acquisition. Our previous studies have indicated that spectroscopic Raman imaging can be significantly sped up using the approach of narrow-band imaging followed by spectral reconstruction. A multi-channel system was built to demonstrate the feasibility of fast wide-field spectroscopic Raman imaging using the approach of simultaneous narrow-band image acquisition followed by spectral reconstruction based on Wiener estimation in phantoms. To further improve the accuracy of reconstructed Raman spectra, we propose a stepwise spectral reconstruction method in this study, which can be combined with the earlier developed sequential weighted Wiener estimation to improve spectral reconstruction accuracy. The stepwise spectral reconstruction method first reconstructs the fluorescence background spectrum from narrow-band measurements and then the pure Raman narrow-band measurements can be estimated by subtracting the estimated fluorescence background from the overall narrow-band measurements. Thereafter, the pure Raman spectrum can be reconstructed from the estimated pure Raman narrow-band measurements. The result indicates that the stepwise spectral reconstruction method can improve spectral reconstruction accuracy significantly when combined with sequential weighted Wiener estimation, compared with the traditional Wiener estimation. In addition, qualitatively accurate cell Raman spectra were successfully reconstructed using the stepwise spectral reconstruction method from the narrow-band measurements acquired by a four-channel wide-field Raman spectroscopic imaging system. This method can potentially facilitate the adoption of spectroscopic Raman imaging to the investigation of fast changing phenomena.
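
    For context, Wiener estimation in its textbook form reconstructs a spectrum s from narrow-band measurements m = H s via a matrix W learned from training spectra; the paper's stepwise and sequential weighted variants build on this base estimator. A sketch with invented filter responses and smooth random training spectra:

        import numpy as np

        rng = np.random.default_rng(9)
        n_bands, n_channels, n_train = 100, 4, 500
        H = rng.random((n_channels, n_bands))             # narrow-band filter responses

        base = rng.random((n_train, 10))                  # smooth random training spectra
        S_train = np.repeat(base, 10, axis=1)
        M_train = S_train @ H.T                           # corresponding measurements

        # W = (sample cross-correlation of s and m) x (inverse auto-correlation of m)
        W = (S_train.T @ M_train) @ np.linalg.inv(M_train.T @ M_train)

        s_true = np.repeat(rng.random(10), 10)
        s_hat = W @ (H @ s_true)                          # reconstruct from 4 channels
        print("relative error:", np.linalg.norm(s_hat - s_true) / np.linalg.norm(s_true))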

  18. On multigrid methods for image reconstruction from projections

    Energy Technology Data Exchange (ETDEWEB)

    Henson, V.E.; Robinson, B.T. [Naval Postgraduate School, Monterey, CA (United States); Limber, M. [Simon Fraser Univ., Burnaby, British Columbia (Canada)

    1994-12-31

    The sampled Radon transform of a 2D function can be represented as a continuous linear map R : L^1 → R^N. The image reconstruction problem is: given a vector b ∈ R^N, find an image (or density function) u(x, y) such that Ru = b. Since in general there are infinitely many solutions, the authors pick the solution with minimal 2-norm. Numerous proposals have been made regarding how best to discretize this problem. One can, for example, select a set of functions φ_j that span a particular subspace Ω ⊂ L^1, and model R : Ω → R^N. The subspace Ω may be chosen as a member of a sequence of subspaces whose limit is dense in L^1. One approach to the choice of Ω gives rise to a natural pixel discretization of the image space. Two possible choices of the set φ_j are the set of characteristic functions of finite-width 'strips' representing energy transmission paths and the set of intersections of such strips. The authors have studied the eigenstructure of the matrices B resulting from these choices and the effect of applying a Gauss-Seidel iteration to the problem Bw = b. There exists a near null space into which the error vectors migrate with iteration, after which Gauss-Seidel iteration stalls. The authors attempt to accelerate convergence via a multilevel scheme, based on the principles of McCormick's Multilevel Projection Method (PML). Coarsening is achieved by thickening the rays, which results in a much smaller discretization of an optimal grid and a halving of the number of variables. This approach satisfies all the requirements of the PML scheme. They have observed that a multilevel approach based on this idea accelerates convergence at least to the point where noise in the data dominates.

  19. Digital reconstructed radiography quality control with software methods

    Science.gov (United States)

    Denis, Eloise; Beaumont, Stephane; Guedon, JeanPierre

    2005-04-01

    Nowadays, most treatments for external radiotherapy are prepared with Treatment Planning Systems (TPS) which use a virtual patient generated by a set of transverse slices acquired with a CT scanner, with the patient in the treatment position [1-3]. In the first step of virtual simulation, the TPS is used to define a ballistic plan allowing good target coverage and the lowest irradiation of normal tissues. This optimisation of the treatment parameters with the TPS is realised with particular graphic tools allowing one to: contour the target; expand the limits of the target in order to take into account contouring uncertainties, patient set-up errors, movements of the target during the treatment (internal movement of the target and external movement of the patient), and the beam's penumbra; determine beam orientations and define the dimensions and shapes of the beams; visualize beams on the patient's skin and calculate some characteristic points which will be tattooed on the patient to assist patient set-up before treatment; and calculate for each beam a Digital Reconstructed Radiography (DRR), consisting in projecting the 3D CT virtual patient and beam limits with a cone-beam geometry onto a plane. These DRRs make it possible to verify the patient positioning during the treatment, essentially by bone structure alignment, by comparison with real radiographs realised with the treatment X-ray source in the same geometric conditions (portal imaging). DRRs are therefore preponderant in ensuring the geometric accuracy of the treatment. For this reason quality control of their computation is mandatory [4]. Until now, this control has been realised with real test objects including some special inclusions [4, 5]. This paper proposes to use numerical test objects to control the quality of DRR calculation in terms of computation time, beam angle, divergence and magnification precision, and spatial and contrast resolutions. The main advantage of this proposed method is to avoid a real test object CT acquisition

  20. Pore REconstruction and Segmentation (PORES) method for improved porosity quantification of nanoporous materials

    Energy Technology Data Exchange (ETDEWEB)

    Van Eyndhoven, G., E-mail: geert.vaneyndhoven@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Kurttepeli, M. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Van Oers, C.J.; Cool, P. [Laboratory of Adsorption and Catalysis, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Bals, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Batenburg, K.J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, NL-1090 GB Amsterdam (Netherlands); Mathematical Institute, Universiteit Leiden, Niels Bohrweg 1, NL-2333 CA Leiden (Netherlands); Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2015-01-15

    Electron tomography is currently a versatile tool to investigate the connection between the structure and properties of nanomaterials. However, a quantitative interpretation of electron tomography results is still far from straightforward. In particular, accurate quantification of pore space is hampered by artifacts introduced in all steps of the processing chain, i.e., acquisition, reconstruction, segmentation and quantification. Furthermore, most common approaches require subjective manual user input. In this paper, the PORES algorithm “POre REconstruction and Segmentation” is introduced; it is a tailor-made, integral approach for the reconstruction, segmentation, and quantification of porous nanomaterials. The PORES processing chain starts by calculating a reconstruction with a nanoporous-specific reconstruction algorithm: the Simultaneous Update of Pore Pixels by iterative REconstruction and Simple Segmentation algorithm (SUPPRESS). It classifies the interior region as pores during reconstruction, while reconstructing the remaining region by reducing the error with respect to the acquired electron microscopy data. The SUPPRESS reconstruction can be directly plugged into the remaining processing chain of the PORES algorithm, resulting in accurate individual pore quantification and full sample pore statistics. The proposed approach was extensively validated on both simulated and experimental data, indicating its ability to generate accurate statistics of nanoporous materials. - Highlights: • An electron tomography reconstruction/segmentation method for nanoporous materials. • The method exploits the porous nature of the scanned material. • Validated extensively on both simulation and real data experiments. • Results in increased image resolution and improved porosity quantification.

  1. Reconstruction of on-axis lensless Fourier transform digital hologram with the screen division method

    Science.gov (United States)

    Jiang, Hongzhen; Liu, Xu; Liu, Yong; Li, Dong; Chen, Zhu; Zheng, Fanglan; Yu, Deqiang

    2017-10-01

    An effective approach for reconstructing an on-axis lensless Fourier transform digital hologram by using the screen division method is proposed. Firstly, the on-axis Fourier transform digital hologram is divided into sub-holograms. Then the reconstruction result of each sub-hologram is obtained with a Fourier transform operation and placed according to the position of the corresponding sub-hologram in the hologram reconstruction plane. Finally, the reconstruction image of the on-axis Fourier transform digital hologram is acquired by superposition of the reconstruction results of the sub-holograms. Compared with the traditional reconstruction method based on phase-shifting technology, in which multiple digital holograms must be recorded to obtain the reconstruction image, this method obtains the reconstruction image with only one digital hologram and therefore greatly simplifies the recording and reconstruction process of on-axis lensless Fourier transform digital holography. The effectiveness of the proposed method is demonstrated by the experimental results, and it has potential applications in the holographic measurement and display fields.
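
    The superposition step relies on the linearity of the discrete Fourier transform, which the following NumPy sketch illustrates (a schematic of the idea only, not the authors' code; the block size is assumed to divide the hologram evenly, and phase-compensation details of the actual method are omitted).

        import numpy as np

        def screen_division_reconstruction(hologram, n_blocks):
            """Reconstruct a lensless Fourier-transform hologram by dividing
            the screen into sub-holograms, transforming each zero-padded
            sub-hologram, and superposing the results."""
            H, W = hologram.shape
            h, w = H // n_blocks, W // n_blocks
            recon = np.zeros((H, W), dtype=complex)
            for i in range(n_blocks):
                for j in range(n_blocks):
                    sub = np.zeros_like(hologram, dtype=complex)
                    sub[i*h:(i+1)*h, j*w:(j+1)*w] = hologram[i*h:(i+1)*h, j*w:(j+1)*w]
                    # By linearity, each zero-padded sub-hologram contributes
                    # one additive term of the final reconstructed image.
                    recon += np.fft.fftshift(np.fft.fft2(sub))
            return recon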

  2. Dictionary-Learning-Based Reconstruction Method for Electron Tomography

    Science.gov (United States)

    LIU, BAODONG; YU, HENGYONG; VERBRIDGE, SCOTT S.; SUN, LIZHI; WANG, GE

    2014-01-01

    Summary Electron tomography usually suffers from so-called “missing wedge” artifacts caused by the limited tilt angle range. An equally sloped tomography (EST) acquisition scheme (which should be called the linogram sampling scheme) was recently applied to achieve 2.4-angstrom resolution. On the other hand, a compressive sensing inspired reconstruction algorithm, known as adaptive dictionary based statistical iterative reconstruction (ADSIR), has been reported for X-ray computed tomography. In this paper, we evaluate EST, ADSIR, and an ordered-subset simultaneous algebraic reconstruction technique (OS-SART), and compare the equally sloped (ES) and equally angled (EA) data acquisition modes. Our results show that OS-SART is comparable to EST, and that ADSIR outperforms both EST and OS-SART. Furthermore, the equally sloped projection data acquisition mode has no advantage over the conventional equally angled mode in this context. PMID:25104167

  3. Reconstruction of enhancer-target networks in 935 samples of human primary cells, tissues and cell lines.

    Science.gov (United States)

    Cao, Qin; Anyansi, Christine; Hu, Xihao; Xu, Liangliang; Xiong, Lei; Tang, Wenshu; Mok, Myth T S; Cheng, Chao; Fan, Xiaodan; Gerstein, Mark; Cheng, Alfred S L; Yip, Kevin Y

    2017-10-01

    We propose a new method for determining the target genes of transcriptional enhancers in specific cells and tissues. It combines global trends across many samples and sample-specific information, and considers the joint effect of multiple enhancers. Our method outperforms existing methods when predicting the target genes of enhancers in unseen samples, as evaluated by independent experimental data. Requiring few types of input data, we are able to apply our method to reconstruct the enhancer-target networks in 935 samples of human primary cells, tissues and cell lines, which constitute by far the largest set of enhancer-target networks. The similarity of these networks from different samples closely follows their cell and tissue lineages. We discover three major co-regulation modes of enhancers and find defense-related genes often simultaneously regulated by multiple enhancers bound by different transcription factors. We also identify differentially methylated enhancers in hepatocellular carcinoma (HCC) and experimentally confirm their altered regulation of HCC-related genes.

  4. Reconstruction methods - P-bar ANDA Focussing-Lightguide Disc DIRC

    Energy Technology Data Exchange (ETDEWEB)

    Cowie, E; Hill, G; Hoek, M; Kaiser, R; Keri, T; Murray, M; Rosner, G; Seitz, B [Department of Physics and Astronomy, Kelvin Building, University of Glasgow, Glasgow G12 8QQ, Scotland (United Kingdom); Foehl, K; Glazier, D [School of Physics, University of Edinburgh, Edinburgh EH9 3JZ, Scotland (United Kingdom)], E-mail: g.hill@physics.gla.ac.uk

    2009-09-15

    The Focussing-Lightguide Disc DIRC will provide crucial Particle Identification (PID) information for the P-bar ANDA experiment at FAIR, GSI. This detector presents a challenging environment for reconstruction due to the complexity of the expected hit patterns and the operating conditions of the P-bar ANDA experiment. A discussion of possible methods to reconstruct PID from this detector is given here. Reconstruction software is currently under development.

  5. 3D reconstruction and standardization of the rat facial nucleus for precise mapping of vibrissal motor networks.

    Science.gov (United States)

    Guest, Jason M; Seetharama, Mythreya M; Wendel, Elizabeth S; Strick, Peter L; Oberlaender, Marcel

    2017-09-27

    The rodent facial nucleus (FN) comprises motoneurons (MNs) that control the facial musculature. In the lateral part of the FN, populations of vibrissal motoneurons (vMNs) innervate two groups of muscles that generate movements of the whiskers. Vibrissal MNs thus represent the terminal point of the neuronal networks that generate rhythmic whisking during exploratory behaviors and that modify whisker movements based on sensory-motor feedback during tactile-based perception. Here, we combined retrograde tracer injections into whisker-specific muscles with large-scale immunohistochemistry and digital reconstructions to generate an average model of the rat FN. The model incorporates measurements of the FN geometry, its cellular organization and a whisker row-specific map formed by vMNs. Furthermore, the model provides a digital 3D reference frame that allows registering structural data - obtained across scales and animals - into a common coordinate system with a precision of ∼60 µm. We illustrate the registration method by injecting replication competent rabies virus into the muscle of a single whisker. Retrograde transport of the virus to vMNs enabled reconstruction of their dendrites. Subsequent trans-synaptic transport enabled mapping the presynaptic neurons of the reconstructed vMNs. Registration of these data to the FN reference frame provides a first account of the morphological and synaptic input variability within a population of vMNs that innervate the same muscle. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  6. Polysaccharides utilization in human gut bacterium Bacteroides thetaiotaomicron: comparative genomics reconstruction of metabolic and regulatory networks.

    Science.gov (United States)

    Ravcheev, Dmitry A; Godzik, Adam; Osterman, Andrei L; Rodionov, Dmitry A

    2013-12-12

    Bacteroides thetaiotaomicron, a predominant member of the human gut microbiota, is characterized by its ability to utilize a wide variety of polysaccharides using the extensive saccharolytic machinery that is controlled by an expanded repertoire of transcription factors (TFs). The availability of genomic sequences for multiple Bacteroides species opens an opportunity for their comparative analysis to enable characterization of their metabolic and regulatory networks. A comparative genomics approach was applied for the reconstruction and functional annotation of the carbohydrate utilization regulatory networks in 11 Bacteroides genomes. Bioinformatics analysis of promoter regions revealed putative DNA-binding motifs and regulons for 31 orthologous TFs in the Bacteroides. Among the analyzed TFs there are 4 SusR-like regulators, 16 AraC-like hybrid two-component systems (HTCSs), and 11 regulators from other families. Novel DNA motifs of HTCSs and SusR-like regulators in the Bacteroides have the common structure of direct repeats with a long spacer between two conserved sites. The inferred regulatory network in B. thetaiotaomicron contains 308 genes encoding polysaccharide and sugar catabolic enzymes, carbohydrate-binding and transport systems, and TFs. The analyzed TFs control pathways for utilization of host and dietary glycans to monosaccharides and their further interconversions to intermediates of the central metabolism. The reconstructed regulatory network allowed us to suggest and refine specific functional assignments for sugar catabolic enzymes and transporters, providing a substantial improvement to the existing metabolic models for B. thetaiotaomicron. The obtained collection of reconstructed TF regulons is available in the RegPrecise database (http://regprecise.lbl.gov).

  7. High resolution depth reconstruction from monocular images and sparse point clouds using deep convolutional neural network

    Science.gov (United States)

    Dimitrievski, Martin; Goossens, Bart; Veelaert, Peter; Philips, Wilfried

    2017-09-01

    Understanding the 3D structure of the environment is advantageous for many tasks in the field of robotics and autonomous vehicles. From the robot's point of view, 3D perception is often formulated as a depth image reconstruction problem. In the literature, dense depth images are often recovered deterministically from stereo image disparities. Other systems use an expensive LiDAR sensor to produce accurate, but semi-sparse depth images. With the advent of deep learning there have also been attempts to estimate depth by only using monocular images. In this paper we combine the best of the two worlds, focusing on a combination of monocular images and low cost LiDAR point clouds. We explore the idea that very sparse depth information accurately captures the global scene structure while variations in image patches can be used to reconstruct local depth to a high resolution. The main contribution of this paper is a supervised learning depth reconstruction system based on a deep convolutional neural network. The network is trained on RGB image patches reinforced with sparse depth information and the output is a depth estimate for each pixel. Using image and point cloud data from the KITTI vision dataset we are able to learn a correspondence between local RGB information and local depth, while at the same time preserving the global scene structure. Our results are evaluated on sequences from the KITTI dataset and our own recordings using a low cost camera and LiDAR setup.
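
    A minimal PyTorch sketch of the input/output structure described here - an RGB patch concatenated with a sparse depth channel going in, a per-pixel depth estimate coming out - using a toy architecture rather than the network of the paper:

        import torch
        import torch.nn as nn

        class DepthNet(nn.Module):
            """Toy depth-completion network: 3 RGB channels + 1 sparse
            depth channel in, 1 dense depth channel out."""
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 1, 3, padding=1),
                )

            def forward(self, rgb, sparse_depth):
                x = torch.cat([rgb, sparse_depth], dim=1)  # 4-channel input
                return self.net(x)

        net = DepthNet()
        pred = net(torch.rand(1, 3, 64, 64), torch.rand(1, 1, 64, 64))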

  8. Reducing the effects of acoustic heterogeneity with an iterative reconstruction method from experimental data in microwave induced thermoacoustic tomography.

    Science.gov (United States)

    Wang, Jinguo; Zhao, Zhiqin; Song, Jian; Chen, Guoping; Nie, Zaiping; Liu, Qing-Huo

    2015-05-01

    An iterative reconstruction method has been previously reported by the authors of this paper. However, the iterative reconstruction method was demonstrated using only numerical simulations, and it is essential to apply it to practical conditions. The objective of this work is to validate the capability of the iterative reconstruction method for reducing the effects of acoustic heterogeneity with experimental data in microwave induced thermoacoustic tomography. Most existing reconstruction methods need to incorporate ultrasonic measurement technology to quantitatively measure the velocity distribution of the heterogeneity, which increases the system complexity. Unlike existing reconstruction methods, the iterative reconstruction method combines the time reversal mirror technique, the fast marching method, and the simultaneous algebraic reconstruction technique to iteratively estimate the velocity distribution of heterogeneous tissue by using only the measured data. The estimated velocity distribution is then used to reconstruct a highly accurate image of the microwave absorption distribution. Experiments in which a target is placed in an acoustically heterogeneous environment are performed to validate the iterative reconstruction method. By using the estimated velocity distribution, the target in an acoustically heterogeneous environment can be reconstructed with better shape and higher image contrast than targets reconstructed with a homogeneous velocity distribution. The distortions caused by the acoustic heterogeneity can be efficiently corrected by utilizing the velocity distribution estimated by the iterative reconstruction method. The advantage of the iterative reconstruction method over existing correction methods is that it improves the quality of the image of the microwave absorption distribution without increasing the system complexity.
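
    The simultaneous algebraic reconstruction technique used within the iteration can be sketched for a generic linear system; in the tomographic setting, A would be the (nonnegative) ray-path matrix and b the measured projection data. This is an illustrative NumPy sketch, not the authors' implementation.

        import numpy as np

        def sart(A, b, n_iter=50, relax=0.25):
            """SART iterations for A x ≈ b with a nonnegative system matrix
            (rows = rays, columns = image pixels)."""
            x = np.zeros(A.shape[1])
            row_sums = A.sum(axis=1)
            row_sums[row_sums == 0] = 1.0          # guard against empty rays
            col_sums = A.sum(axis=0)
            col_sums[col_sums == 0] = 1.0          # guard against unseen pixels
            for _ in range(n_iter):
                residual = (b - A @ x) / row_sums  # normalized ray residuals
                x = x + relax * (A.T @ residual) / col_sums
            return x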

  9. Exploration Knowledge Sharing Networks Using Social Network Analysis Methods

    Directory of Open Access Journals (Sweden)

    Győző Attila Szilágyi

    2017-10-01

    Full Text Available Knowledge sharing within an organization is one of the key factors for success. An organization where knowledge sharing takes place faster and more efficiently is able to adapt to changes in the market environment more successfully, and as a result it may obtain a competitive advantage. Knowledge sharing in an organization is carried out through formal and informal human communication contacts during work. This forms a multi-level complex network whose quantitative and topological characteristics largely determine how quickly and to what extent knowledge travels within the organization. Through a case study, this paper presents how the different knowledge sharing networks in an organization can be explored by means of network analysis methods, and what role the properties of these networks play in the fast and sufficient spread of knowledge in organizations. The study also demonstrates the practical applications of our research results. Namely, educational strategies can be developed in an organization on the basis of knowledge sharing, and, further, the competitiveness of an organization may increase owing to the application of those strategies.

  10. The equivalent source method as a sparse signal reconstruction

    DEFF Research Database (Denmark)

    Fernandez Grande, Efren; Xenaki, Angeliki

    2015-01-01

    sources in the case of this model. Sparse solutions can be achieved by l-1 norm minimization, providing accurate reconstruction and robustness to noise, because favouring sparsity suppresses noisy components. The study addresses the influence of the ill-conditioning of the propagation matrix, which can...
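
    The l-1 norm minimization can be sketched with iterative soft thresholding (ISTA); here A stands for the (possibly ill-conditioned) propagation matrix and p for the measured pressures. A minimal real-valued sketch under those assumptions, not the paper's solver:

        import numpy as np

        def ista(A, p, lam=0.1, n_iter=200):
            """Solve min_q 0.5*||A q - p||_2^2 + lam*||q||_1 by iterative
            soft thresholding; favouring sparsity suppresses noisy components."""
            L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
            q = np.zeros(A.shape[1])
            for _ in range(n_iter):
                z = q - A.T @ (A @ q - p) / L  # gradient step
                q = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
            return q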

  11. A generalized Prony method for reconstruction of sparse sums of eigenfunctions of linear operators

    Science.gov (United States)

    Peter, Thomas; Plonka, Gerlind

    2013-02-01

    We derive a new generalization of Prony’s method to reconstruct M-sparse expansions of (generalized) eigenfunctions of linear operators from only O(M) suitable values in a deterministic way. The proposed method covers the well-known reconstruction methods for M-sparse sums of exponentials as well as for the interpolation of M-sparse polynomials by using special linear operators in C(ℝ). Further, we can derive new reconstruction formulas for M-sparse expansions of orthogonal polynomials using the Sturm-Liouville operator. The method is also applied to the recovery of M-sparse vectors in finite-dimensional vector spaces.
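
    For the classical special case - an M-sparse sum of exponentials sampled at 2M equidistant points - Prony's method reduces to a linear-prediction system plus a root-finding step, as in this NumPy sketch (illustrative only; it assumes the Hankel system is nonsingular, and the paper's operator-based generalization goes well beyond this case):

        import numpy as np

        def prony(samples, M):
            """Recover nodes z_j and coefficients c_j of
            f(k) = sum_j c_j * z_j**k from samples f(0), ..., f(2M-1)."""
            samples = np.asarray(samples)
            # Linear prediction: f(i+M) = -(q_0 f(i) + ... + q_{M-1} f(i+M-1))
            H = np.array([samples[i:i + M] for i in range(M)])
            q = np.linalg.solve(H, -samples[M:2 * M])
            z = np.roots(np.concatenate(([1.0], q[::-1])))  # prediction polynomial roots
            V = np.vander(z, N=2 * M, increasing=True).T    # Vandermonde system
            c, *_ = np.linalg.lstsq(V, samples[:2 * M], rcond=None)
            return z, c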

  12. Multi-grid finite element method used for enhancing the reconstruction accuracy in Cerenkov luminescence tomography

    Science.gov (United States)

    Guo, Hongbo; He, Xiaowei; Liu, Muhan; Zhang, Zeyu; Hu, Zhenhua; Tian, Jie

    2017-03-01

    Cerenkov luminescence tomography (CLT), as a promising optical molecular imaging modality, can be applied to cancer diagnosis and therapy. Most research on CLT reconstruction is based on the finite element method (FEM) framework. However, the quality of the FEM mesh grid is still a vital factor restricting the accuracy of the CLT reconstruction result. In this paper, we propose a multi-grid finite element method framework that is able to improve the accuracy of the reconstruction. Meanwhile, the multilevel scheme adaptive algebraic reconstruction technique (MLS-AART), based on a modified iterative algorithm, is applied to improve the reconstruction accuracy. In numerical simulation experiments, the feasibility of our proposed method was evaluated. Results showed that the multi-grid strategy could obtain 3D spatial information of the Cerenkov source more accurately compared with the traditional single-grid FEM.

  13. An Improved Method for Power-Line Reconstruction from Point Cloud Data

    Directory of Open Access Journals (Sweden)

    Bo Guo

    2016-01-01

    Full Text Available This paper presents a robust algorithm to reconstruct power-lines using ALS technology. Point cloud data are automatically classified into five target classes before reconstruction. To overcome the shortcomings of traditional methods, which use only the local shape properties of a single power-line span, the distribution properties of the power-line group between two neighboring pylons and contextual information from related pylon objects are used to improve the reconstruction results. First, the distribution properties of power-line sets are detected using a similarity detection method. Then, based on the probability of neighboring points belonging to the same span, a RANSAC-based algorithm is introduced to reconstruct power-lines through two important advancements: reliable initial parameter fitting and efficient candidate sample detection. Our experiments indicate that the proposed method is effective for the reconstruction of power-lines in complex scenarios.
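
    The RANSAC-based span fitting can be illustrated on a single span; a parabola is a common approximation of the hanging-cable catenary in the vertical plane. A hypothetical minimal sketch, not the authors' algorithm:

        import numpy as np

        def ransac_parabola(points, n_iter=500, tol=0.1, seed=0):
            """RANSAC fit of z = a*x**2 + b*x + c to candidate power-line
            points given as an (N, 2) array of (x, z) coordinates."""
            rng = np.random.default_rng(seed)
            x, z = points[:, 0], points[:, 1]
            best = np.zeros(len(points), dtype=bool)
            for _ in range(n_iter):
                idx = rng.choice(len(points), size=3, replace=False)
                coeffs = np.polyfit(x[idx], z[idx], 2)        # minimal-sample fit
                inliers = np.abs(np.polyval(coeffs, x) - z) < tol
                if inliers.sum() > best.sum():
                    best = inliers
            return np.polyfit(x[best], z[best], 2), best      # refit on inliers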

  14. A Novel Parallel Method for Speckle Masking Reconstruction Using the OpenMP

    Science.gov (United States)

    Li, Xuebao; Zheng, Yanfang

    2016-08-01

    High resolution reconstruction technology, such as speckle masking, has been developed to help enhance the spatial resolution of observational images for ground-based solar telescopes. Near real-time reconstruction performance is achieved on a high performance cluster using the Message Passing Interface (MPI). However, much time is spent reconstructing solar subimages in such a speckle reconstruction. We design and implement a novel parallel method for the speckle masking reconstruction of solar subimages on a shared-memory machine using OpenMP. Real tests are performed to verify the correctness of our code. We present the details of several parallel reconstruction steps. The parallel implementation across the various modules shows a great speed increase compared to the single-threaded serial implementation, and a speedup of about 2.5 is achieved in the reconstruction of one subimage. The timing results for reconstructing one subimage with 256×256 pixels show a clear advantage with a greater number of threads. This novel parallel method can be valuable for real-time reconstruction of solar images, especially after porting to a high performance cluster.
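
    The paper's implementation uses OpenMP threads; as a language-neutral illustration of the subimage-level parallelism, here is a Python process-pool analogue in which the per-tile function is only a stand-in for the actual speckle-masking reconstruction:

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def reconstruct_subimage(tile):
            # Stand-in for the per-subimage speckle-masking reconstruction.
            return np.abs(np.fft.ifft2(np.fft.fft2(tile)))

        if __name__ == "__main__":
            tiles = [np.random.rand(256, 256) for _ in range(16)]
            with ProcessPoolExecutor(max_workers=8) as pool:
                subimages = list(pool.map(reconstruct_subimage, tiles))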

  15. Reconstruction of floodplain sedimentation rates: a combination of methods to optimize estimates

    NARCIS (Netherlands)

    Hobo, N.; Makaske, B.; Middelkoop, H.; Wallinga, J.

    2010-01-01

    Reconstruction of overbank sedimentation rates over the past decades gives insight into floodplain dynamics, and thereby provides a basis for efficient and sustainable floodplain management. We compared the results of four independent reconstruction methods – optically stimulated luminescence

  16. Analysis of reconstructive methods in surgical treatment of nasal skin defects

    Directory of Open Access Journals (Sweden)

    Jović Marko S.

    2016-01-01

    Full Text Available Background/Aim. Surgeons often face a problem when selecting a reconstructive method for nasal skin defects. The aim of this study was to determine the functional and aesthetic characteristics of different reconstructive methods used for skin defects in different regions of the nose. Methods. The study involved 44 patients with basocellular carcinoma in the nasal area. The nasal skin was divided into four subunits: the tip, the alar lobules, the side-walls and the dorsum. The average skin defect size was 10 mm in diameter. Local flaps and full thickness skin grafts were used in the study. We analyzed the functional and esthetic results of the different reconstructive methods used for nasal defects in different regions of the nose 12 months after surgery. Results. The study shows that different reconstructive methods produce different functional and esthetic results in the same nasal subunits, and that the same reconstructive method produces different results in different nasal subunits. Conclusions. Estimating the postoperative functional and esthetic characteristics of different reconstructive methods is one of the basic preconditions of successful reconstruction.

  17. Skeletal camera network embedded structure-from-motion for 3D scene reconstruction from UAV images.

    NARCIS (Netherlands)

    Xu, Z.; Wu, L.; Gerke, M.; Wang, R.; Yang, H.

    2016-01-01

    Structure-from-Motion (SfM) techniques have been widely used for 3D scene reconstruction from multi-view images. However, due to the large computational costs of SfM methods there is a major challenge in processing highly overlapping images, e.g. images from unmanned aerial vehicles (UAV). This

  18. Suwannee River flow variability 1550-2005 CE reconstructed from a multispecies tree-ring network

    Science.gov (United States)

    Harley, Grant L.; Maxwell, Justin T.; Larson, Evan; Grissino-Mayer, Henri D.; Henderson, Joseph; Huffman, Jean

    2017-01-01

    Understanding the long-term natural flow regime of rivers enables resource managers to more accurately model water level variability. Models for managing water resources are important in Florida where population increase is escalating demand on water resources and infrastructure. The Suwannee River is the second largest river system in Florida and the least impacted by anthropogenic disturbance. We used new and existing tree-ring chronologies from multiple species to reconstruct mean March-October discharge for the Suwannee River during the period 1550-2005 CE and place the short period of instrumental flows (since 1927 CE) into historical context. We used a nested principal components regression method to maximize the use of chronologies with varying time coverage in the network. Modeled streamflow estimates indicated that instrumental period flow conditions do not adequately capture the full range of Suwannee River flow variability beyond the observational period. Although extreme dry and wet events occurred in the gage record, pluvials and droughts that eclipse the intensity and duration of instrumental events occurred during the 16-19th centuries. The most prolonged and severe dry conditions during the past 450 years occurred during the 1560s CE. In this prolonged drought period mean flow was estimated at 17% of the mean instrumental period flow. Significant peaks in spectral density at 2-7, 10, 45, and 85-year periodicities indicated the important influence of coupled oceanic-atmospheric processes on Suwannee River streamflow over the past four centuries, though the strength of these periodicities varied over time. Future water planning based on current flow expectations could prove devastating to natural and human systems if a prolonged and severe drought mirroring the 16th and 18th century events occurred. Future work in the region will focus on updating existing tree-ring chronologies and developing new collections from moisture-sensitive sites to improve

  19. Investigating meta-approaches for reconstructing gene networks in a mammalian cellular context.

    Directory of Open Access Journals (Sweden)

    Azree Nazri

    Full Text Available The output of state-of-the-art reverse-engineering methods for biological networks is often based on the fitting of a mathematical model to the data. Typically, different datasets do not give a single consistent network prediction but rather an ensemble of inconsistent networks, inferred under the same reverse-engineering method, that are consistent only with the specific experimentally measured data. Here, we focus on an alternative approach for combining the information contained within such an ensemble of inconsistent gene networks, called meta-analysis, to make more accurate predictions and to estimate the reliability of these predictions. We review two existing meta-analysis approaches, the Fisher transformation combined coefficient test (FTCCT) and Fisher's inverse combined probability test (FICPT), and compare their performance with five well-known methods: ARACNe, Context Likelihood of Relatedness (CLR), Maximum Relevance Minimum Redundancy (MRNET), Relevance Network (RN) and Bayesian Network (BN). We conducted in-depth numerical ensemble simulations and demonstrated for biological expression data that the meta-analysis approaches consistently outperformed the best gene regulatory network inference (GRNI) methods in the literature. Furthermore, the meta-analysis approaches have a low computational complexity. We conclude that the meta-analysis approaches are a powerful tool for integrating different datasets to give more accurate and reliable predictions for biological networks.
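
    Fisher's inverse combined probability test combines independent per-dataset p-values (for example, for the same candidate edge) into a single significance value; a minimal SciPy sketch, with names chosen for illustration:

        import numpy as np
        from scipy import stats

        def fisher_combined_pvalue(pvals):
            """Combine independent p-values: -2*sum(log p_i) follows a
            chi-squared distribution with 2k degrees of freedom under H0."""
            pvals = np.asarray(pvals, dtype=float)
            statistic = -2.0 * np.log(pvals).sum()
            return stats.chi2.sf(statistic, df=2 * len(pvals))

        print(fisher_combined_pvalue([0.04, 0.10, 0.03]))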

  20. Skin sparing mastectomy: Technique and suggested methods of reconstruction

    Directory of Open Access Journals (Sweden)

    Ahmed M. Farahat

    2014-09-01

    Conclusions: Skin sparing mastectomy through a circum-areolar incision has proven to be a safe and feasible option for the management of breast cancer in Egyptian women, offering them adequate oncologic control and optimum cosmetic outcome through preservation of the skin envelope of the breast whenever indicated. Our patients can benefit from safe surgery and a good cosmetic outcome by applying different reconstructive techniques.

  1. Comparison of dating methods for paleoglacial reconstruction in Central Asia

    OpenAIRE

    Gribenski, Natacha

    2016-01-01

    Reconstruction of former Central Asian glaciers extents can provide valuable information about past atmospheric circulation variations. These extents, often marked by terminal moraines, need to be chronologically constrained. Cosmogenic nuclide exposure (CNE) dating is widely used to directly date moraines. In addition, there is increasing interest on using optically stimulated luminescence (OSL) techniques for dating glacial landforms. This thesis focuses on the methodological aspects of dir...

  2. Multi-core parallel reconstruction method for cone-beam computed tomography

    Science.gov (United States)

    Li, Mingjun; Zhang, Dinghua; Huang, Kuidong; Yu, Qingchao; Zhang, Shunli

    2009-10-01

    In the application of nondestructive testing and evaluation, this paper mainly deals with the problem of improving the image reconstruction speed in cone beam computed tomography (CBCT). The FDK algorithm is a time-consuming method for CBCT image reconstruction due to the voluminous data and long processing chain. With the help of data organization and task distribution, we improved the SIMD implementation of the Z-line-first reconstruction algorithm, itself an improved method based on the FDK algorithm. We then parallelized it with multi-core technology and a divide-and-conquer strategy to obtain a fast reconstruction speed in CBCT. Finally, we evaluated the effectiveness of our method with a numerical test of a blade model on an 8-core computer with four-channel memory. Our method achieves a considerable speedup ratio of 217.22 over the FDK algorithm, and performs the back-projection step of reconstructing the inscribed cylinder of a 512³ reconstruction space in about 30 seconds. It yields the same image quality as the Z-line-first method, which retains the computational precision of the FDK algorithm. Basically, our method has met the requirement of real-time reconstruction.

  3. METANNOGEN: compiling features of biochemical reactions needed for the reconstruction of metabolic networks

    Directory of Open Access Journals (Sweden)

    Holzhütter Hermann-Georg

    2007-01-01

    Full Text Available Abstract Background One central goal of computational systems biology is the mathematical modelling of complex metabolic reaction networks. The first and most time-consuming step in the development of such models consists in the stoichiometric reconstruction of the network, i.e. compilation of all metabolites, reactions and transport processes relevant to the considered network and their assignment to the various cellular compartments. Therefore an information system is required to collect and manage data from different databases and the scientific literature in order to generate a metabolic network of biochemical reactions that can be subjected to further computational analyses. Results The computer program METANNOGEN facilitates the reconstruction of metabolic networks. It uses the well-known KEGG database of biochemical reactions as its primary information source, from which biochemical reactions relevant to the considered network can be selected, edited and stored in a separate, user-defined database. Reactions not contained in KEGG can be entered manually into the system. To aid the decision of whether or not a reaction selected from KEGG belongs to the considered network, METANNOGEN contains information from SWISSPROT and ENSEMBL and provides Web links to a number of important information sources like METACYC, BRENDA, NIST, and REACTOME. If a reaction is reported to occur in more than one cellular compartment, a corresponding number of reactions is generated, each referring to one specific compartment. Transport processes of metabolites are entered like chemical reactions whose reactants and products have different compartment attributes. The list of compartmentalized biochemical reactions and membrane transport processes compiled by means of METANNOGEN can be exported as an SBML file for further computational analysis. METANNOGEN is highly customizable with respect to the content of the SBML output file, additional data

  4. Image reconstruction method IRBis for optical/infrared long-baseline interferometry

    Science.gov (United States)

    Hofmann, Karl-Heinz; Heininger, Matthias; Schertl, Dieter; Weigelt, Gerd; Millour, Florentin; Berio, Philippe

    2016-07-01

    IRBis is an image reconstruction method for optical/infrared long-baseline interferometry. IRBis can reconstruct images from (a) measured visibilities and closure phases, or from (b) measured complex visibilities (i.e. the Fourier phases and visibilities). The applied optimization routine ASA CG is based on conjugate gradients. The method allows the user to implement different regularizers, for example maximum entropy, smoothness, total variation, etc., and to apply residual ratios as an additional metric for goodness of fit. In addition, IRBis allows the user to change the following reconstruction parameters: (a) the FOV of the area to be reconstructed, (b) the size of the pixel grid used, and (c) the size of a binary mask in image space restricting where intensities may be reconstructed. We describe applications to simulated data and real astronomical objects: (a) We have investigated image reconstruction experiments of MATISSE target candidates by computer simulations. We have modeled gaps in a disk of a young stellar object and have simulated interferometric data (squared visibilities and closure phases) with a signal-to-noise ratio as expected for MATISSE observations. We have performed image reconstruction experiments with this model for different flux levels of the target and different amounts of observing time, that is, with different uv coverages. As expected, the quality of the reconstructions clearly depends on the flux of the source and the completeness of the uv coverage. (b) We also discuss reconstructions of the Luminous Blue Variable η Carinae obtained from AMBER observations in the high spectral resolution mode in the K band. The images were reconstructed (1) using the closure phases and (2) using the absolute phases derived from the measured wavelength-differential phases and the closure phase reconstruction in the continuum.

  5. Reconstruction of metabolic networks from high-throughput metabolite profiling data: in silico analysis of red blood cell metabolism

    OpenAIRE

    Nemenman, Ilya; Escola, G. Sean; Hlavacek, William S.; Unkefer, Pat J.; Unkefer, Clifford J.; Wall, Michael E.

    2007-01-01

    We investigate the ability of algorithms developed for reverse engineering of transcriptional regulatory networks to reconstruct metabolic networks from high-throughput metabolite profiling data. For this, we generate synthetic metabolic profiles for benchmarking purposes based on a well-established model for red blood cell metabolism. A variety of data sets is generated, accounting for different properties of real metabolic networks, such as experimental noise, metabolite correlations, and t...

  2. A Parallel Reconstructed Discontinuous Galerkin Method for Compressible Flows on Arbitrary Grids

    Energy Technology Data Exchange (ETDEWEB)

    Hong Luo; Amjad Ali; Robert Nourgaliev; Vincent A. Mousseau

    2010-01-01

    A reconstruction-based discontinuous Galerkin (RDG) method is presented for the solution of the compressible Navier-Stokes equations on arbitrary grids. In this method, an in-cell reconstruction is used to obtain a higher-order polynomial representation of the underlying discontinuous Galerkin polynomial solution, and an inter-cell reconstruction is used to obtain a continuous polynomial solution on the union of two neighboring, interface-sharing cells. The in-cell reconstruction is designed to enhance the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. The inter-cell reconstruction is devised to remove an interface discontinuity of the solution and its derivatives and thus to provide a simple, accurate, consistent, and robust approximation to the viscous and heat fluxes in the Navier-Stokes equations. A parallel strategy is also devised for the resulting RDG method; it is based on domain partitioning and the Single Program Multiple Data (SPMD) parallel programming model. The RDG method is used to compute a variety of compressible flow problems on arbitrary meshes to demonstrate its accuracy, efficiency, robustness, and versatility. The numerical results demonstrate that this RDG method is third-order accurate at a cost slightly higher than its underlying second-order DG method, at the same time providing better performance than the third-order DG method in terms of both computing costs and storage requirements.

  7. Application of Symmetry Adapted Function Method for Three-Dimensional Reconstruction of Octahedral Biological Macromolecules

    Directory of Open Access Journals (Sweden)

    Songjun Zeng

    2010-01-01

    Full Text Available A method for the three-dimensional (3D) reconstruction of macromolecule assemblies, the octahedral symmetry adapted function (OSAF) method, is introduced in this paper, and a series of formulations for reconstruction by the OSAF method are derived. To verify the feasibility and advantages of the method, two octahedrally symmetric macromolecules, the heat shock protein Degp24 and the red-cell L ferritin, were used as examples to implement reconstruction by the OSAF method. The simulation schedule was designed as follows: 2000 randomly oriented projections of single particles with predefined Euler angles and centers of origin were generated, and then noise at different levels (signal-to-noise ratio S/N = 0.1, 0.5, and 0.8) was added. The structures reconstructed by the OSAF method were in good agreement with the standard models, and the relative errors of the reconstructed structures with respect to the standard structures were very small even at high noise levels. These results show that the OSAF method is a feasible and efficient approach to reconstructing the structures of macromolecules and is able to suppress the influence of noise.

  8. Computer methods in electric network analysis

    Energy Technology Data Exchange (ETDEWEB)

    Saver, P.; Hajj, I.; Pai, M.; Trick, T.

    1983-06-01

    The computational algorithms utilized in power system analysis have more than just a minor overlap with those used in electronic circuit computer aided design. This paper describes the computer methods that are common to both areas and highlights the differences in application through brief examples. Recognizing this commonality has stimulated the exchange of useful techniques in both areas and has the potential of fostering new approaches to electric network analysis through the interchange of ideas.
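
    A concrete point of overlap is the sparse linear solve at the heart of both circuit simulation and power-network analysis; nodal analysis reduces either problem to a conductance-matrix equation, as in this toy NumPy example (illustrative values only):

        import numpy as np

        # Nodal analysis: solve G @ v = i for the node voltages of a small
        # resistive network (G is the grounded conductance matrix, i the
        # vector of injected currents; values are made up for illustration).
        G = np.array([[ 1.5, -0.5, -1.0],
                      [-0.5,  1.0, -0.5],
                      [-1.0, -0.5,  2.5]])
        i = np.array([1.0, 0.0, -1.0])
        v = np.linalg.solve(G, i)
        print(v)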

  9. Network reconstruction of platelet metabolism identifies metabolic signature for aspirin resistance

    Science.gov (United States)

    Thomas, Alex; Rahmanian, Sorena; Bordbar, Aarash; Palsson, Bernhard Ø.; Jamshidi, Neema

    2014-01-01

    To date, there has been no systematic, objective assessment of the metabolic capabilities of the human platelet. A manually curated, functionally tested, and validated biochemical reaction network of platelet metabolism, iAT-PLT-636, was reconstructed using 33 proteomic datasets and 354 literature references. The network contains enzymes mapping to 403 diseases and 231 FDA approved drugs, alluding to an expansive scope of biochemical transformations that may affect or be affected by disease processes in multiple organ systems. The effect of aspirin (ASA) resistance on platelet metabolism was evaluated using constraint-based modeling, which revealed a redirection of glycolytic, fatty acid, and nucleotide metabolism reaction fluxes in order to accommodate eicosanoid synthesis and reactive oxygen species stress. These results were confirmed with independent proteomic data. The construction and availability of iAT-PLT-636 should stimulate further data-driven, systems analysis of platelet metabolism towards the understanding of pathophysiological conditions including, but not strictly limited to, coagulopathies.

  10. Reconstruction of an infrared band of meteorological satellite imagery with abductive networks

    Science.gov (United States)

    Singer, Harvey A.; Cockayne, John E.; Versteegen, Peter L.

    1995-01-01

    As the current fleet of meteorological satellites age, the accuracy of the imagery sensed on a spectral channel of the image scanning system is continually and progressively degraded by noise. In time, that data may even become unusable. We describe a novel approach to the reconstruction of the noisy satellite imagery according to empirical functional relationships that tie the spectral channels together. Abductive networks are applied to automatically learn the empirical functional relationships between the data sensed on the other spectral channels to calculate the data that should have been sensed on the corrupted channel. Using imagery unaffected by noise, it is demonstrated that abductive networks correctly predict the noise-free observed data.

  11. Spectral Analysis Methods of Social Networks

    Directory of Open Access Journals (Sweden)

    P. G. Klyucharev

    2017-01-01

    Full Text Available Online social networks (such as Facebook, Twitter, VKontakte, etc.), being an important channel for disseminating information, are often used to influence social consciousness for various purposes - from advertising products or services to full-scale information war - thereby making them a very relevant object of research. The paper reviews methods for the analysis of social networks (primarily online ones) based on the spectral theory of graphs. Such methods use the spectrum of the social graph, i.e. the set of eigenvalues of its adjacency matrix, as well as the eigenvectors of the adjacency matrix. It describes measures of centrality (in particular, eigenvector centrality and PageRank) that reflect the degree of influence a given user of the social network has. The popular PageRank measure uses, as the centrality of graph vertices, the final probabilities of a Markov chain whose matrix of transition probabilities is calculated from the adjacency matrix of the social graph; the vector of final probabilities is an eigenvector of the matrix of transition probabilities. The paper also presents a method of dividing graph vertices into two groups, based on maximizing the network modularity by computing the eigenvector of the modularity matrix, and considers a method for detecting bots based on a non-randomness measure of a graph, computed using the spectral coordinates of vertices - sets of eigenvector components of the adjacency matrix of the social graph. In general, there are a number of algorithms for analysing social networks based on the spectral theory of graphs. These algorithms show very good results, but their disadvantage is the relatively high (albeit polynomial) computational complexity for large graphs. At the same time, it is obvious that the practical potential of spectral graph theory methods is still underestimated, and they may be used as a basis for developing new methods. The work
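
    The PageRank computation described above amounts to power iteration on a damped transition matrix; a compact NumPy sketch with simplified handling of dangling nodes (illustrative, not tied to any particular social network):

        import numpy as np

        def pagerank(adj, damping=0.85, n_iter=100):
            """Power iteration for PageRank on an adjacency matrix."""
            n = adj.shape[0]
            out = adj.sum(axis=1).astype(float)
            out[out == 0] = 1.0                 # crude dangling-node guard
            P = adj / out[:, None]              # row-stochastic transitions
            r = np.full(n, 1.0 / n)
            for _ in range(n_iter):
                r = damping * (P.T @ r) + (1.0 - damping) / n
            return r / r.sum()                  # final probabilities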

  12. Reconstruction of the metabolic network of Pseudomonas aeruginosa to interrogate virulence factor synthesis

    Science.gov (United States)

    Bartell, Jennifer A.; Blazier, Anna S.; Yen, Phillip; Thøgersen, Juliane C.; Jelsbak, Lars; Goldberg, Joanna B.; Papin, Jason A.

    2017-03-01

    Virulence-linked pathways in opportunistic pathogens are putative therapeutic targets that may be associated with less potential for resistance than targets in growth-essential pathways. However, efficacy of virulence-linked targets may be affected by the contribution of virulence-related genes to metabolism. We evaluate the complex interrelationships between growth and virulence-linked pathways using a genome-scale metabolic network reconstruction of Pseudomonas aeruginosa strain PA14 and an updated, expanded reconstruction of P. aeruginosa strain PAO1. The PA14 reconstruction accounts for the activity of 112 virulence-linked genes and virulence factor synthesis pathways that produce 17 unique compounds. We integrate eight published genome-scale mutant screens to validate gene essentiality predictions in rich media, contextualize intra-screen discrepancies and evaluate virulence-linked gene distribution across essentiality datasets. Computational screening further elucidates interconnectivity between inhibition of virulence factor synthesis and growth. Successful validation of selected gene perturbations using PA14 transposon mutants demonstrates the utility of model-driven screening of therapeutic targets.

  13. Genome-scale reconstruction of metabolic network for a halophilic extremophile, Chromohalobacter salexigens DSM 3043

    Directory of Open Access Journals (Sweden)

    Oner Ebru

    2011-01-01

    Full Text Available Abstract Background Chromohalobacter salexigens (formerly Halomonas elongata DSM 3043 is a halophilic extremophile with a very broad salinity range and is used as a model organism to elucidate prokaryotic osmoadaptation due to its strong euryhaline phenotype. Results C. salexigens DSM 3043's metabolism was reconstructed based on genomic, biochemical and physiological information via a non-automated but iterative process. This manually-curated reconstruction accounts for 584 genes, 1386 reactions, and 1411 metabolites. By using flux balance analysis, the model was extensively validated against literature data on the C. salexigens phenotypic features, the transport and use of different substrates for growth as well as against experimental observations on the uptake and accumulation of industrially important organic osmolytes, ectoine, betaine, and its precursor choline, which play important roles in the adaptive response to osmotic stress. Conclusions This work presents the first comprehensive genome-scale metabolic model of a halophilic bacterium. Being a useful guide for identification and filling of knowledge gaps, the reconstructed metabolic network iOA584 will accelerate the research on halophilic bacteria towards application of systems biology approaches and design of metabolic engineering strategies.
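
    Flux balance analysis, used here to validate the model, is a linear program: choose fluxes v maximizing an objective subject to steady-state mass balance S v = 0 and capacity bounds. A toy SciPy sketch on a three-reaction chain (illustrative stoichiometry, not the iOA584 reconstruction):

        import numpy as np
        from scipy.optimize import linprog

        S = np.array([[1, -1,  0],    # metabolite A: made by v0, used by v1
                      [0,  1, -1]])   # metabolite B: made by v1, used by v2
        c = np.array([0, 0, -1])      # linprog minimizes, so negate to maximize v2
        res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=[(0, 10)] * 3)
        print(res.x)                  # optimal steady-state flux distribution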

  14. Snapshot of iron response in Shewanella oneidensis by gene network reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yunfeng; Harris, Daniel P.; Luo, Feng; Xiong, Wenlu; Joachimiak, Marcin; Wu, Liyou; Dehal, Paramvir; Jacobsen, Janet; Yang, Zamin; Palumbo, Anthony V.; Arkin, Adam P.; Zhou, Jizhong

    2008-10-09

    Background: Iron homeostasis of Shewanella oneidensis, a gamma-proteobacterium possessing high iron content, is regulated by a global transcription factor Fur. However, knowledge is incomplete about other biological pathways that respond to changes in iron concentration, as well as details of the responses. In this work, we integrate physiological, transcriptomics and genetic approaches to delineate the iron response of S. oneidensis. Results: We show that the iron response in S. oneidensis is a rapid process. Temporal gene expression profiles were examined for iron depletion and repletion, and a gene co-expression network was reconstructed. Modules of iron acquisition systems, anaerobic energy metabolism and protein degradation were the most noteworthy in the gene network. Bioinformatics analyses suggested that genes in each of the modules might be regulated by the DNA-binding proteins Fur, CRP and RpoH, respectively. Closer inspection of these modules revealed a transcriptional regulator (SO2426) involved in iron acquisition and ten transcriptional factors involved in anaerobic energy metabolism. Selected genes in the network were analyzed by genetic studies. Disruption of genes encoding a putative alcaligin biosynthesis protein (SO3032) and a gene previously implicated in protein degradation (SO2017) led to severe growth deficiency under iron depletion conditions. Disruption of a novel transcriptional factor (SO1415) caused deficiency in both anaerobic iron reduction and growth with thiosulfate or TMAO as an electron acceptor, suggesting that SO1415 is required for specific branches of anaerobic energy metabolism pathways. Conclusions: Using a reconstructed gene network, we identified major biological pathways that were differentially expressed during iron depletion and repletion. Genetic studies not only demonstrated the importance of iron acquisition and protein degradation for iron depletion, but also characterized a novel transcriptional factor (SO1415) with a

  15. A new herbarium-based method for reconstructing the phenology of plant species across large areas

    National Research Council Canada - National Science Library

    Lavoie, Claude; Lachance, Daniel

    2006-01-01

    ... associated with sampling locations. In this study, we propose a new herbarium-based method for reconstructing the flowering dates of plant species that have been collected across large areas. Coltsfoot...

  16. Improved Reconstruction Quality of Bioluminescent Images by Combining SP3 Equations and Bregman Iteration Method

    Directory of Open Access Journals (Sweden)

    Qiang Wu

    2013-01-01

    Full Text Available Bioluminescence tomography (BLT) has great potential to provide a powerful tool for tumor detection, monitoring tumor therapy progress, and drug development; developing new reconstruction algorithms will advance the technique to practical applications. In this paper, we propose a BLT reconstruction algorithm that combines SP3 equations and the Bregman iteration method to improve the quality of reconstructed sources. The numerical results for homogeneous and heterogeneous phantoms are very encouraging and show significant improvement over algorithms that do not use the SP3 equations and the Bregman iteration method.

  17. Reconstruction of Sound Source Pressures in an Enclosure Using the Phased Beam Tracing Method

    DEFF Research Database (Denmark)

    Jeong, Cheol-Ho; Ih, Jeong-Guon

    2009-01-01

    Source identification in an enclosure is not an easy task due to complicated wave interference and wall reflections, in particular at mid-high frequencies. In this study, a phased beam tracing method was applied to the reconstruction of source pressures inside an enclosure at medium frequencies... An omni-directional sphere and a cubic source in a rectangular enclosure were taken as examples in the simulation tests. A reconstruction error was investigated by Monte Carlo simulation in terms of field point locations. When the source information was reconstructed by the present method, it was shown that the sound power

  18. Methods and applications for detecting structure in complex networks

    Science.gov (United States)

    Leicht, Elizabeth A.

    The use of networks to represent systems of interacting components is now common in many fields including the biological, physical, and social sciences. Network models are widely applicable due to their relatively simple framework of vertices and edges. Network structure, patterns of connection between vertices, impacts both the functioning of networks and processes occurring on networks. However, many aspects of network structure are still poorly understood. This dissertation presents a set of network analysis methods and applications to real-world as well as simulated networks. The methods are divided into two main types: linear algebra formulations and probabilistic mixture model techniques. Network models lend themselves to compact mathematical representation as matrices, making linear algebra techniques useful probes of network structure. We present methods for the detection of two distinct, but related, network structural forms. First, we derive a measure of vertex similarity based upon network structure. The method builds on existing ideas concerning calculation of vertex similarity, but generalizes and extends the scope to large networks. Second, we address the detection of communities or modules in a specific class of networks, directed networks. We propose a method for detecting community structure in directed networks, which is an extension of a community detection method previously only known for undirected networks. Moving away from linear algebra formulations, we propose two methods for network structure detection based on probabilistic techniques. In the first method, we use the machinery of the expectation-maximization (EM) algorithm to probe patterns of connection among vertices in static networks. The technique allows for the detection of a broad range of types of structure in networks. The second method focuses on time evolving networks. We propose an application of the EM algorithm to evolving networks that can reveal significant structural

  19. Efficient reconstruction of biological networks via transitive reduction on general purpose graphics processors.

    Science.gov (United States)

    Bošnački, Dragan; Odenbrett, Maximilian R; Wijs, Anton; Ligtenberg, Willem; Hilbers, Peter

    2012-10-30

    Techniques for the reconstruction of biological networks based on perturbation experiments often predict direct interactions between nodes that do not exist. Transitive reduction removes such relations if they can be explained by an indirect path of influences. The existing algorithms for transitive reduction are sequential and may suffer from long run times for large networks. They also exhibit the anomaly that some existing direct interactions are erroneously removed. We develop efficient scalable parallel algorithms for transitive reduction on general purpose graphics processing units, for both standard (unweighted) and weighted graphs. Edge weights are regarded as uncertainties of interactions. A direct interaction is removed only if there exists an indirect interaction path between the same nodes which is strictly more certain than the direct one. This is a refinement of the removal condition for unweighted graphs and avoids to a great extent the erroneous elimination of direct edges. Parallel implementations of these algorithms can achieve speed-ups of two orders of magnitude compared to their sequential counterparts. Our experiments show that: i) taking into account the edge weights improves the reconstruction quality compared to the unweighted case; ii) it is advantageous not to distinguish between positive and negative interactions, since this lowers the complexity of the algorithms from NP-complete to polynomial without loss of quality.
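
    The weighted removal condition can be sketched sequentially (the contribution of the paper is the GPU parallelization, which this sketch does not attempt): first compute the max-min path closure, then drop a direct edge only when a strictly stronger indirect route exists.

        import numpy as np

        def transitive_reduction(W):
            """Weighted transitive reduction: remove edge i->j if an indirect
            path is strictly more certain, where the certainty of a path is
            the minimum edge weight along it."""
            n = W.shape[0]
            S = W.copy()
            for k in range(n):                  # Floyd-Warshall in max-min algebra
                S = np.maximum(S, np.minimum(S[:, k][:, None], S[k, :][None, :]))
            R = W.copy()
            for i in range(n):
                for j in range(n):
                    if i == j or W[i, j] == 0:
                        continue
                    indirect = max((min(S[i, k], S[k, j])
                                    for k in range(n) if k not in (i, j)), default=0)
                    if indirect > W[i, j]:
                        R[i, j] = 0.0           # explained by an indirect path
            return R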

  20. A deep convolutional neural network using directional wavelets for low-dose X-ray CT reconstruction.

    Science.gov (United States)

    Kang, Eunhee; Min, Junhong; Ye, Jong Chul

    2017-10-01

    Due to the potential risk of inducing cancer, radiation exposure by X-ray CT devices should be reduced for routine patient scanning. However, in low-dose X-ray CT, severe artifacts typically occur due to photon starvation, beam hardening, and other causes, all of which decrease the reliability of the diagnosis. Thus, a high-quality reconstruction method from low-dose X-ray CT data has become a major research topic in the CT community. Conventional model-based de-noising approaches are, however, computationally very expensive, and image-domain de-noising approaches cannot readily remove CT-specific noise patterns. To tackle these problems, we want to develop a new low-dose X-ray CT algorithm based on a deep-learning approach. We propose an algorithm which uses a deep convolutional neural network (CNN) which is applied to the wavelet transform coefficients of low-dose CT images. More specifically, using a directional wavelet transform to extract the directional component of artifacts and exploit the intra- and inter- band correlations, our deep network can effectively suppress CT-specific noise. In addition, our CNN is designed with a residual learning architecture for faster network training and better performance. Experimental results confirm that the proposed algorithm effectively removes complex noise patterns from CT images derived from a reduced X-ray dose. In addition, we show that the wavelet-domain CNN is efficient when used to remove noise from low-dose CT compared to existing approaches. Our results were rigorously evaluated by several radiologists at the Mayo Clinic and won second place at the 2016 "Low-Dose CT Grand Challenge." To the best of our knowledge, this work is the first deep-learning architecture for low-dose CT reconstruction which has been rigorously evaluated and proven to be effective. In addition, the proposed algorithm, in contrast to existing model-based iterative reconstruction (MBIR) methods, has considerable potential to benefit from

  1. Research on the configuration design method of heterogeneous constellation reconstruction under the multiple objective and multiple constraint

    Science.gov (United States)

    Zhao, Shuang; Xu, Yanli; Dai, Huayu

    2017-05-01

    Aiming at the problem of configuration design for heterogeneous constellation reconstruction, a design method based on multiple objectives and multiple constraints is proposed. First, the concept of a heterogeneous constellation is defined. Second, heterogeneous constellation reconstruction methods are analyzed, and two typical existing configuration design methods for reconstruction, the phase position uniformity method and the reconstruction configuration design method based on optimization algorithms, are summarized. The advantages and shortcomings of the different reconstruction configuration design methods are then compared. Finally, the problems currently facing heterogeneous constellation reconstruction configuration design are analyzed, and ideas are put forward concerning the reconstruction index system of heterogeneous constellations, the selection of optimization variables, and the establishment of constraints in the optimal design of the configuration.

  2. Performance Evaluation of Super-Resolution Reconstruction Methods on Real-World Data

    Directory of Open Access Journals (Sweden)

    L. J. van Vliet

    2007-01-01

    Full Text Available The performance of a super-resolution (SR) reconstruction method on real-world data is not easy to measure, especially as a ground truth (GT) is often not available. In this paper, a quantitative performance measure is used, based on triangle orientation discrimination (TOD). The TOD measure, simulating a real-observer task, is capable of determining the performance of a specific SR reconstruction method under varying conditions of the input data. It is shown that the performance of an SR reconstruction method on real-world data can be predicted accurately by measuring its performance on simulated data. This prediction of the performance on real-world data enables the optimization of the complete chain of a vision system; from camera setup and SR reconstruction up to image detection/recognition/identification. Furthermore, different SR reconstruction methods are compared to show that the TOD method is a useful tool to select a specific SR reconstruction method according to the imaging conditions (camera fill-factor, optical point-spread function (PSF), signal-to-noise ratio (SNR)).

  3. Tumor Diagnosis Using Backpropagation Neural Network Method

    Science.gov (United States)

    Ma, Lixing; Looney, Carl; Sukuta, Sydney; Bruch, Reinhard; Afanasyeva, Natalia

    1998-05-01

    For the characterization of skin cancer, an artificial neural network (ANN) method has been developed to diagnose normal tissue, benign tumor, and melanoma. The pattern recognition is based on a three-layer neural network fuzzy learning system. In this study, the input data set is the Fourier transform infrared (FT-IR) spectrum obtained by a new fiberoptic evanescent wave Fourier transform infrared (FEW-FTIR) spectroscopy method in the range of 1480 to 1850 cm-1. Ten input features are extracted from the absorbance values in this region. A single hidden layer of neural nodes with sigmoid activation functions clusters the feature space into small subclasses, and the output nodes are separated into different nonconvex classes to permit nonlinear discrimination of disease states. The output is classified into three classes: normal tissue, benign tumor, and melanoma. The results obtained from the neural network pattern recognition are shown to be consistent with traditional medical diagnosis. Input features have also been extracted from the absorbance spectra using chemical factor analysis. These abstract features, or factors, are also used in the classification.
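
    As a rough illustration of the classifier topology described (ten spectral features in, one sigmoid hidden layer, three tissue classes out), here is a hedged sketch using scikit-learn; the random features stand in for real FEW-FTIR absorbance features, and the hidden-layer size is an assumption:

```python
# Hedged sketch of a 10-input, 3-class, single-hidden-layer classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 10))     # stand-in for 10 FT-IR features/spectrum
y = rng.integers(0, 3, size=150)   # 0=normal, 1=benign tumor, 2=melanoma

clf = MLPClassifier(hidden_layer_sizes=(20,), activation="logistic",
                    max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.predict(X[:5]))          # predicted tissue classes
```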

  4. Reconstruction for 3D PET Based on Total Variation Constrained Direct Fourier Method.

    Directory of Open Access Journals (Sweden)

    Haiqing Yu

    Full Text Available This paper presents a total variation (TV) regularized reconstruction algorithm for 3D positron emission tomography (PET). The proposed method first employs the Fourier rebinning algorithm (FORE) to rebin the 3D data into a stack of ordinary 2D sinograms. The resulting 2D sinograms can then be reconstructed by conventional 2D reconstruction algorithms. Given the locally piecewise-constant nature of PET images, we introduce a TV-based reconstruction scheme. More specifically, we formulate the 2D PET reconstruction problem as an optimization problem whose objective function consists of the TV norm of the reconstructed image and a data fidelity term measuring the consistency between the reconstructed image and the sinogram. To solve the resulting minimization problem, we apply an efficient method called the Bregman operator splitting algorithm with variable step size (BOSVS). Experiments based on Monte Carlo simulated data and real data are conducted as validation. The results show that the proposed method produces higher accuracy than the conventional direct Fourier (DF) method (the bias of BOSVS is 70% of that of DF, and its variance is 80% of DF's).
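
    The TV-regularized formulation can be sketched generically as a least-squares data fidelity plus a TV prior. The snippet below uses plain proximal-gradient steps with scikit-image's Chambolle TV prox rather than the paper's BOSVS solver, and the system matrix A is a stand-in for the 2D PET forward projector:

```python
# Hedged sketch: min_x ||A x - b||^2 + lam * TV(x) via proximal gradient.
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def tv_reconstruct(A, b, shape, lam=0.05, step=1e-3, iters=200):
    x = np.zeros(shape)
    for _ in range(iters):
        grad = (A.T @ (A @ x.ravel() - b)).reshape(shape)  # data-fidelity grad
        x = x - step * grad                                # gradient step
        x = denoise_tv_chambolle(x, weight=lam * step)     # TV proximal step
        x = np.clip(x, 0, None)                            # activity >= 0
    return x

# Toy usage with a random "projector" and a piecewise-constant phantom:
rng = np.random.default_rng(0)
phantom = np.zeros((16, 16)); phantom[4:12, 4:12] = 1.0
A = rng.normal(size=(400, 256))
b = A @ phantom.ravel()
print(np.abs(tv_reconstruct(A, b, (16, 16)) - phantom).mean())
```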

  5. An audit of mandibular defect reconstruction methods in a Nigerian Tertiary Hospital.

    Science.gov (United States)

    Arotiba, J T; Obimakinde, O S; Ogunlade, S O; Fasola, A O; Okoje, V N; Akinmoladun, V I; Sotunmbi, P T; Obiechina, E A

    2011-09-01

    To audit the methods of mandibular defect reconstruction used in our institution. A retrospective study of mandibular bone reconstruction at the University College Hospital Ibadan between January 2001 and December 2007. Relevant records were retrieved from patients' case notes and the operation register. Comparative analysis of the various methods of reconstruction was done by assessing treatment outcomes such as restoration of continuity and stability, graft infection, extrusion, and fracture. Only 65 of the 82 patients who had a mandibular continuity defect during the study period had reconstruction. Ameloblastoma accounted for 67% [n=55] of the pathologies that required mandibular resection. Methods of reconstruction included non-vascularised iliac bone anchored with either stainless steel wire (NVIBw) [n=38] or titanium plate (NVIBp) [n=9], titanium reconstruction plate [n=4], Steinman pin [n=12], rib graft [n=1], and acrylic plate temporisation [n=1]. The findings showed that titanium plate and NVIBp had the fewest complications in terms of infection, graft extrusion, fracture, and wound dehiscence. NVIBw and Steinman pin had the highest infection rates. We recommend the use of NVIBp and the titanium reconstruction plate as they have the lowest complication rates. We also advocate a future prospective study.

  6. Image reconstruction of the location of macro-inhomogeneity in random turbid medium by using artificial neural networks

    Science.gov (United States)

    Veksler, Boris A.; Maksimova, Irina L.; Meglinski, Igor V.

    2007-07-01

    Nowadays the artificial neural network (ANN), a powerful technique capable of representing complex input-output relationships, is widely used in different biomedical applications. In the present study, the application of an ANN to determining the characteristics of a random, highly scattering medium (such as biological tissue) is considered. The spatial distribution of backscattered light, calculated by the Monte Carlo method, is used to train the ANN in multiple-scattering regimes. The potential of the ANN for reconstructing images of an absorbing macro-inhomogeneity located in the upper layers of a random scattering medium is presented. This is of particular interest for newly developing diagnostics and treatments based on the use of gold nanoparticles for labeling cancer cells.

  7. Reconstruction of vibroacoustic responses of a highly nonspherical structure using Helmholtz equation least-squares method.

    Science.gov (United States)

    Lu, Huancai; Wu, Sean F

    2009-03-01

    The vibroacoustic responses of a highly nonspherical vibrating object are reconstructed using the Helmholtz equation least-squares (HELS) method. The objectives of this study are to examine the accuracy of reconstruction and the impacts of the various parameters involved in reconstruction using HELS. The test object is a simply supported and baffled thin plate. This object was selected because it represents a class of structures that cannot be exactly described by the spherical Hankel functions and spherical harmonics taken as the basis functions in the HELS formulation, yet the analytic solutions for the vibroacoustic responses of a baffled plate are readily available, so the accuracy of reconstruction can be rigorously checked. The input field acoustic pressures for reconstruction are generated by the Rayleigh integral. The reconstructed normal surface velocities are validated against benchmark values, and the out-of-plane vibration patterns at several natural frequencies are compared with the natural modes of a simply supported plate. The impacts of various parameters, such as the number of measurement points, measurement distance, location of the origin of the coordinate system, microphone spacing, and the ratio of measurement aperture size to the area of the source surface, on the resultant accuracy of reconstruction are examined.
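
    The core of HELS is a least-squares fit of expansion coefficients over spherical wave basis functions to measured pressures. A hedged, radially simplified sketch (m = 0 terms only; full HELS uses spherical harmonics Y_n^m) using SciPy:

```python
# Hedged HELS-style sketch: fit coefficients of outgoing spherical waves
# h_n^(1)(kr) P_n(cos theta) to measured pressures by least squares.
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def spherical_hankel1(n, x):
    return spherical_jn(n, x) + 1j * spherical_yn(n, x)

def hels_fit(points, pressures, k, n_max=8):
    # points: (M, 3) measurement locations; pressures: (M,) complex values.
    r = np.linalg.norm(points, axis=1)
    cos_theta = points[:, 2] / r
    cols = [spherical_hankel1(n, k * r) *
            np.polynomial.legendre.Legendre.basis(n)(cos_theta)
            for n in range(n_max + 1)]
    A = np.column_stack(cols)                       # basis matrix
    coeffs, *_ = np.linalg.lstsq(A, pressures, rcond=None)
    return coeffs  # evaluate the expansion on the source surface to reconstruct
```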

  8. Practical aspects of complex permittivity reconstruction with neural-network-controlled FDTD modeling of a two-port fixture.

    Science.gov (United States)

    Eves, E Eugene; Murphy, Ethan K; Yakovlev, Vadim V

    2007-01-01

    The paper discusses the characteristics of a new modeling-based technique for determining the dielectric properties of materials. Complex permittivity is found with an optimization algorithm designed to match complex S-parameters obtained from measurements and from 3D FDTD simulation. The method is developed on a two-port (waveguide-type) fixture and deals with complex reflection and transmission characteristics at the frequency of interest. The computational part is constructed as an inverse-RBF-network-based procedure that reconstructs the dielectric constant and loss factor of the sample from the FDTD modeling data sets and the measured reflection and transmission coefficients. As such, it is applicable to samples and cavities of arbitrary configuration, provided that the geometry of the experimental setup is adequately represented by the FDTD model. The practical implementation considered in this paper is a section of a WR975 waveguide containing a sample of a liquid in a cylindrical cutout of a rectangular Teflon cup. The method is run in two stages and employs two databases: the first, built on a sparse grid in the complex permittivity plane, locates a domain containing the anticipated solution; the second, a denser grid covering that domain, pinpoints the exact location of the complex permittivity point. Numerical tests demonstrate that the computational part of the method is highly accurate even when the modeling data are represented by relatively small data sets. When working with reflection and transmission coefficients measured in an actual experimental fixture and reconstructing a low dielectric constant and loss factor, the technique may be less accurate. It is shown that the employed neural network is capable of finding the complex permittivity of the sample when experimental data on the reflection and transmission coefficients are numerically dispersive (noise-contaminated). A special modeling test is proposed for validating the
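
    The two-stage inverse mapping can be imitated with SciPy's RBFInterpolator: build a database of simulated S-parameters over a permittivity grid, then interpolate the inverse map. The toy forward model below replaces the FDTD solver and is purely hypothetical:

```python
# Hedged sketch of the inverse-RBF idea under simplified assumptions.
import numpy as np
from scipy.interpolate import RBFInterpolator

def toy_forward(eps_r, loss):               # stand-in for the FDTD solver
    return np.column_stack([eps_r + 0.1 * loss, loss - 0.05 * eps_r])

# Stage 1: sparse grid over the complex permittivity plane
eps_r, loss = np.meshgrid(np.linspace(1, 10, 10), np.linspace(0, 2, 10))
grid = np.column_stack([eps_r.ravel(), loss.ravel()])
s_params = toy_forward(grid[:, 0], grid[:, 1])

inverse_map = RBFInterpolator(s_params, grid)    # S-parameters -> permittivity
measured = toy_forward(np.array([4.2]), np.array([0.7]))
print(inverse_map(measured))     # stage 2 would refine on a denser local grid
```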

  9. Within-host bacterial diversity hinders accurate reconstruction of transmission networks from genomic distance data.

    Science.gov (United States)

    Worby, Colin J; Lipsitch, Marc; Hanage, William P

    2014-03-01

    The prospect of using whole genome sequence data to investigate bacterial disease outbreaks has been keenly anticipated in many quarters, and the large-scale collection and sequencing of isolates from cases is becoming increasingly feasible. While sequence data can provide many important insights into disease spread and pathogen adaptation, it remains unclear how successfully they may be used to estimate individual routes of transmission. Several studies have attempted to reconstruct transmission routes using genomic data; however, these have typically relied upon restrictive assumptions, such as a shared topology of the phylogenetic tree and a lack of within-host diversity. In this study, we investigated the potential for bacterial genomic data to inform transmission network reconstruction. We used simulation models to investigate the origins, persistence and onward transmission of genetic diversity, and examined the impact of such diversity on our estimation of the epidemiological relationship between carriers. We used a flexible distance-based metric to provide a weighted transmission network, and used receiver-operating characteristic (ROC) curves and network entropy to assess the accuracy and uncertainty of the inferred structure. Our results suggest that sequencing a single isolate from each case is inadequate in the presence of within-host diversity, and is likely to result in misleading interpretations of transmission dynamics--under many plausible conditions, this may be little better than selecting transmission links at random. Sampling more frequently improves accuracy, but much uncertainty remains, even if all genotypes are observed. While it is possible to discriminate between clusters of carriers, individual transmission routes cannot be resolved by sequence data alone. Our study demonstrates that bacterial genomic distance data alone provide only limited information on person-to-person transmission dynamics.
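
    A toy version of a distance-based weighted transmission network: edge weights decay with pairwise genetic distance, and each row is normalized into a distribution over candidate sources. The exponential kernel and its scale are assumptions for illustration, not the paper's metric:

```python
# Hedged sketch: weighted transmission network from pairwise SNP distances.
import numpy as np

def transmission_weights(dist, scale=1.0):
    w = np.exp(-dist / scale)             # closer genomes -> heavier edges
    np.fill_diagonal(w, 0.0)              # no self-infection
    return w / w.sum(axis=1, keepdims=True)

snp_dist = np.array([[0, 2, 9], [2, 0, 7], [9, 7, 0]], dtype=float)
print(transmission_weights(snp_dist))     # row i: source probabilities for case i
```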

  10. Systematized methods of surface reconstruction from the serial sectioned images of a cadaver head.

    Science.gov (United States)

    Shin, Dong Sun; Chung, Min Suk; Park, Jin Seo

    2012-01-01

    Three-dimensional models have played important roles in medical simulation and education. Surface models can be manipulated in real time, even online, which makes them well suited to interactive simulation systems. Surface models are obtained by accumulating each structure's outlines and then reconstructing the surface. The aim of this research was to suggest systematized methods of surface reconstruction that can be applied to building surface models from serial images, such as computed tomographic scans and magnetic resonance images. We used recent state-of-the-art sectioned images of a cadaver head in which several structures were delineated. Four reconstruction methods were arranged according to the structure's morphology: all outlines of a structure are overlapped and singular (method 1), overlapped and not singular (method 2), not overlapped but singular (method 3), and neither overlapped nor singular (method 4). Based on trials with various head structures, we strongly recommend methods 1 and 2, in which performing volume reconstruction before surface reconstruction accelerated processing in 3D-DOCTOR. To enable methods 1 and 2, we discuss how to make neighboring outlines overlap in advance. The surface models of detailed head structures prepared in this investigation will hopefully contribute to various simulations for clinical practice. The value of the surface models is enhanced when they are overlaid on the original sectioned images, outlined images, and magnetic resonance images of the same cadaver.

  11. A Survey on Methods for Reconstructing Surfaces from Unorganized Point Sets

    Directory of Open Access Journals (Sweden)

    Vilius Matiukas

    2011-08-01

    Full Text Available This paper addresses the issue of reconstructing and visualizing surfaces from unorganized point sets. These can be acquired using different techniques, such as 3D laser scanning, computerized tomography, magnetic resonance imaging, and multi-camera imaging. The problem of reconstructing surfaces from unorganized point sets is common to many diverse areas, including computer graphics, computer vision, computational geometry, and reverse engineering. The paper presents, evaluates, and contrasts three alternative methods that all use variations in complementary cones to triangulate and reconstruct the tested 3D surfaces.

  12. A Robust Method for Inferring Network Structures.

    Science.gov (United States)

    Yang, Yang; Luo, Tingjin; Li, Zhoujun; Zhang, Xiaoming; Yu, Philip S

    2017-07-12

    Inferring the network structure from limited observable data is significant in molecular biology, communication, and many other areas. It is challenging, primarily because the observable data are sparse, finite, and noisy. Developments in machine learning and network structure studies provide a great opportunity to solve the problem. In this paper, we propose an iterative smoothing algorithm with structure sparsity (ISSS). The elastic penalty in the model is introduced to obtain a sparse solution, identify group features, and avoid over-fitting, while the total variation (TV) penalty in the model can effectively utilize structure information to identify the neighborhood of the vertices. Due to the non-smoothness of the elastic and structural TV penalties, an efficient algorithm based on Nesterov's smoothing optimization technique is proposed to solve the non-smooth problem. Experimental results on both synthetic and real-world networks show that the proposed model is robust against insufficient data and high noise. In addition, we investigate many factors that play important roles in the performance of ISSS.
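
    A rough sketch of the kind of objective ISSS addresses: a least-squares loss plus an elastic penalty and a TV penalty over graph edges, made differentiable with a Nesterov/Huber-style smoothing and minimized by plain gradient descent. All parameter values are placeholders, and this is not the authors' implementation:

```python
# Hedged sketch: smoothed elastic-net + graph-TV penalized least squares.
import numpy as np

def huber_grad(t, mu):
    # Gradient of the Nesterov/Huber smoothing of |t| with parameter mu.
    return np.clip(t / mu, -1.0, 1.0)

def objective_grad(x, A, b, edges, a1, a2, beta, mu=1e-2):
    g = A.T @ (A @ x - b) + a1 * huber_grad(x, mu) + 2 * a2 * x
    for i, j in edges:                        # smoothed TV over neighbors
        d = huber_grad(x[i] - x[j], mu)
        g[i] += beta * d
        g[j] -= beta * d
    return g

def isss_like(A, b, edges, a1=0.1, a2=0.01, beta=0.05, step=1e-3, iters=500):
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x -= step * objective_grad(x, A, b, edges, a1, a2, beta)
    return x

A = np.eye(6); b = np.array([1.0, 1, 1, 0, 0, 0])
print(np.round(isss_like(A, b, [(i, i + 1) for i in range(5)]), 2))
```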

  13. Shortening method for optical reconstruction distance in digital holographic display with phase hologram

    Science.gov (United States)

    Mori, Yutaka; Nomura, Takanori

    2013-12-01

    We present a method to solve a reconstruction-distance issue. In digital holography, the reconstruction distance differs from the original one because of the difference in pixel size between the imaging device and the display device. In general, the distance becomes larger because the pixel size of the display is larger than that of the imaging device. This makes it hard to perceive the stereoscopic effect when holographic reconstructed images are used in a stereopsis system. Typically, a numerical propagation method and a spherical phase addition are used to shorten the distance. However, neither method alone shortens the distance sufficiently. To clarify this criterion, the limitation of each method is verified. By combining the two methods, the reconstruction distance is shortened from 4440 to 547 mm. In addition, visual perception evaluation shows that the proposed combined method is useful for stereopsis.

  14. Computationally rapid method of estimating signal-to-noise ratio for phased array image reconstructions.

    Science.gov (United States)

    Wiens, Curtis N; Kisch, Shawn J; Willig-Onwuachi, Jacob D; McKenzie, Charles A

    2011-10-01

    Measuring signal-to-noise ratio (SNR) for parallel MRI reconstructions is difficult due to spatially dependent noise amplification. Existing approaches for measuring parallel MRI SNR are limited because they are not applicable to all reconstructions, require significant computation time, or rely on repeated image acquisitions. A new SNR estimation approach is proposed, a hybrid of the repeated image acquisitions method detailed in the National Electrical Manufacturers Association (NEMA) standard and the Monte Carlo based pseudo-multiple replica method, in which the difference between images reconstructed from the unaltered acquired data and that same data reconstructed after the addition of calibrated pseudo-noise is used to estimate the noise in the parallel MRI image reconstruction. This new noise estimation method can be used to rapidly compute the pixel-wise SNR of the image generated from any parallel MRI reconstruction of a single acquisition. SNR maps calculated with the new method are validated against existing SNR calculation techniques. Copyright © 2011 Wiley-Liss, Inc.
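
    A conceptual sketch of the hybrid pseudo-noise estimate: reconstruct the unaltered data, reconstruct again after adding calibrated pseudo-noise, and use the spread of the difference images as a pixel-wise noise estimate. `reconstruct` stands in for any parallel MRI reconstruction, and the statistics here are schematic rather than the paper's exact estimator:

```python
# Hedged sketch of a pseudo-noise SNR map for a single acquisition.
import numpy as np

def pseudo_replica_snr(kspace, reconstruct, noise_std, n_replicas=32):
    rng = np.random.default_rng(0)
    baseline = reconstruct(kspace)
    diffs = []
    for _ in range(n_replicas):
        noise = noise_std * (rng.standard_normal(kspace.shape)
                             + 1j * rng.standard_normal(kspace.shape))
        diffs.append(np.abs(reconstruct(kspace + noise)) - np.abs(baseline))
    noise_map = np.std(np.stack(diffs), axis=0)      # pixel-wise noise estimate
    return np.abs(baseline) / np.maximum(noise_map, 1e-12)

rng = np.random.default_rng(1)
kspace = rng.standard_normal((32, 32)) + 1j * rng.standard_normal((32, 32))
print(pseudo_replica_snr(kspace, np.fft.ifft2, noise_std=0.1).shape)
```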

  15. Analysis on the reconstruction accuracy of the Fitch method for inferring ancestral states

    Directory of Open Access Journals (Sweden)

    Grünewald Stefan

    2011-01-01

    Full Text Available Background: As one of the most widely used parsimony methods for ancestral reconstruction, the Fitch method minimizes the total number of hypothetical substitutions along all branches of a tree to explain the evolution of a character. Due to the extensive usage of this method, studying its reconstruction accuracy has become a scientific endeavor in recent years. However, most studies are restricted to 2-state evolutionary models, and a study for higher-state models is needed, since DNA sequences take the form of 4-state series and protein sequences even have 20 states. Results: In this paper, the ambiguous and unambiguous reconstruction accuracies of the Fitch method are studied for N-state evolutionary models. Given an arbitrary phylogenetic tree, a recurrence system is first presented to calculate the two accuracies iteratively. As the complete binary tree and the comb-shaped tree are the two extremal evolutionary tree topologies with respect to balance, we focus on the reconstruction accuracies for these two topologies and analyze their asymptotic properties. Then, 1000 Yule trees with 1024 leaves are generated and analyzed to simulate real evolutionary scenarios. It is known that more taxa do not necessarily increase the reconstruction accuracies under 2-state models. The result under N-state models is also tested. Conclusions: In a large tree with many leaves, the reconstruction accuracies of using all taxa are sometimes less than those of using a leaf subset under N-state models. For complete binary trees, there always exists an equilibrium interval [a, b] of conservation probability, in which the limiting ambiguous reconstruction accuracy equals the probability of randomly picking a state. The value b decreases as the number of states increases, and it seems to converge. When the conservation probability is greater than b, the reconstruction accuracies of the Fitch method increase rapidly. The reconstruction
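
    For reference, the Fitch procedure whose accuracy is analyzed above can be stated in a few lines; this sketch works for any number of states (leaves hold observed states, internal nodes are (left, right) pairs):

```python
# Compact Fitch parsimony on a binary tree: returns the root state set and
# the minimum number of substitutions needed to explain the leaf states.
def fitch(node):
    if not isinstance(node, tuple):          # leaf: observed state
        return {node}, 0
    (ls, lc), (rs, rc) = fitch(node[0]), fitch(node[1])
    if ls & rs:                              # intersection: no substitution
        return ls & rs, lc + rc
    return ls | rs, lc + rc + 1              # union: one substitution

# Example with 4-state (DNA) leaves:
tree = ((("A", "C"), "A"), ("G", "A"))
states, cost = fitch(tree)
print(states, cost)                          # {'A'} 2
```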

  16. A New Method for Superresolution Image Reconstruction Based on Surveying Adjustment

    Directory of Open Access Journals (Sweden)

    Jianjun Zhu

    2014-01-01

    Full Text Available A new method for superresolution image reconstruction based on surveying adjustment is described in this paper. The main idea of the new method is that a sequence of low-resolution images is first taken as observations, and observation equations are then established for the superresolution image reconstruction. The gray function of the object surface can be found from the observation equations by using the surveying adjustment method, and the high-resolution pixel values of the corresponding area can be calculated from the gray function. The results show that the proposed algorithm converges much faster than conventional superresolution image reconstruction methods. With the new method, the visual quality of the reconstructed image is greatly improved compared to the iterative back projection algorithm, and its peak signal-to-noise ratio is nearly 1 dB higher than that of the projection onto convex sets algorithm. Furthermore, this method successfully avoids the ill-posed problems of the reconstruction process.
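
    A schematic rendering of the surveying-adjustment idea: every low-resolution pixel contributes an observation equation averaging high-resolution unknowns, and the stacked system is solved by least squares. Integer cyclic shifts and the absence of blur are simplifications for illustration, not the paper's full observation model:

```python
# Hedged sketch: stack LR observation equations and solve by least squares.
import numpy as np

def observation_matrix(hr, factor, sr, sc):
    # Each LR pixel averages a factor x factor block of shifted HR pixels.
    lr = hr // factor
    A = np.zeros((lr * lr, hr * hr))
    for i in range(lr):
        for j in range(lr):
            for di in range(factor):
                for dj in range(factor):
                    r = (i * factor + di + sr) % hr
                    c = (j * factor + dj + sc) % hr
                    A[i * lr + j, r * hr + c] = 1.0 / factor**2
    return A

shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]          # four shifted LR frames
A = np.vstack([observation_matrix(8, 2, *s) for s in shifts])
truth = np.kron(np.eye(4), np.ones((2, 2)))        # toy 8x8 scene
y = A @ truth.ravel()                              # stacked LR observations
x, *_ = np.linalg.lstsq(A, y, rcond=None)          # adjustment solve
print(float(np.abs(x.reshape(8, 8) - truth).max()))
```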

  17. A Reconstructed Discontinuous Galerkin Method for the Compressible Euler Equations on Arbitrary Grids

    Energy Technology Data Exchange (ETDEWEB)

Hong Luo; Luqing Luo; Robert Nourgaliev; Vincent Mousseau

    2009-06-01

    A reconstruction-based discontinuous Galerkin (DG) method is presented for the solution of the compressible Euler equations on arbitrary grids. By taking advantage of handily available and yet invaluable information, namely the derivatives, in the context of discontinuous Galerkin methods, a solution polynomial of one degree higher is reconstructed using a least-squares method. The stencils used in the reconstruction involve only the von Neumann neighborhood (face-neighboring cells) and are compact and consistent with the underlying DG method. The resulting DG method can be regarded as an improvement of a recovery-based DG method in the sense that it shares the same nice features, such as high accuracy and efficiency, yet overcomes some of its shortcomings, such as a lack of flexibility, compactness, and robustness. The developed DG method is used to compute a variety of flow problems on arbitrary meshes to demonstrate its accuracy and efficiency. The numerical results indicate that this reconstructed DG method is able to obtain a third-order accurate solution at a slightly higher cost than the underlying second-order DG method, and it outperforms the third-order DG method in terms of computing time and storage requirements.

  18. The transformation of trust in China's alternative food networks: disruption, reconstruction, and development

    Directory of Open Access Journals (Sweden)

    Raymond Yu. Wang

    2015-06-01

    Full Text Available Food safety issues in China have received much scholarly attention, yet few studies have systematically examined this matter through the lens of trust. More importantly, little is known about the transformation of different types of trust in the dynamic process of food production, provision, and consumption. We consider trust as an evolving interdependent relationship between different actors. We used the Beijing County Fair, a prominent ecological farmers' market in China, as an example to examine the transformation of trust in China's alternative food networks. We argue that although there has been a disruption of institutional trust among the general public since 2008, when the melamine-tainted milk scandal broke out, reconstruction of individual trust and development of organizational trust have been observed, along with the emergence and increasing popularity of alternative food networks. Based on more than six months of fieldwork on the emerging ecological agriculture sector in 13 provinces across China, as well as monitoring of online discussions and posts, we analyze how various social factors - including but not limited to direct and indirect reciprocity, information, endogenous institutions, and altruism - have simultaneously contributed to the transformation of trust in China's alternative food networks. The findings not only complement current social theories of trust, but also highlight an important yet understudied phenomenon whereby informal social mechanisms have been partially substituting for formal institutions and gradually have been building trust against the backdrop of the food safety crisis in China.

  19. A three-step reconstruction method for fluorescence molecular tomography based on compressive sensing

    Science.gov (United States)

    Zhu, Yansong; Jha, Abhinav K.; Dreyer, Jakob K.; Le, Hanh N. D.; Kang, Jin U.; Roland, Per E.; Wong, Dean F.; Rahmim, Arman

    2017-02-01

    Fluorescence molecular tomography (FMT) is a promising tool for real-time in vivo quantification of neurotransmission (NT), as we pursue in our BRAIN initiative effort. However, the acquired image data are noisy and the reconstruction problem is ill-posed. Further, while the spatial sparsity of NT effects could be exploited, traditional compressive-sensing methods cannot be directly applied because the system matrix in FMT is highly coherent. To overcome these issues, we propose and assess a three-step reconstruction method. First, truncated singular value decomposition is applied to the data to reduce matrix coherence. The resultant image data are input to a homotopy-based reconstruction strategy that exploits sparsity via l1 regularization. The reconstructed image is then input to a maximum-likelihood expectation maximization (MLEM) algorithm that retains the sparseness of the input estimate and improves the quantitation through accurate Poisson noise modeling. The proposed reconstruction method was evaluated in a three-dimensional simulated setup with fluorescent sources in a cuboidal scattering medium with optical properties simulating human brain cortex (reduced scattering coefficient: 9.2 cm-1, absorption coefficient: 0.1 cm-1) and tomographic measurements made using pixelated detectors. In different experiments, fluorescent sources of varying size and intensity were simulated. The proposed reconstruction method provided accurate estimates of the fluorescent source intensity, with a 20% lower root mean square error on average compared to the pure-homotopy method for all considered source intensities and sizes. Further, compared with a conventional l2-regularized algorithm, the proposed method overall reconstructed a substantially more accurate fluorescence distribution. The proposed method shows considerable promise and will be tested using more realistic simulations and experimental setups.
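
    The three steps can be sketched with generic stand-ins: truncated SVD to reduce coherence, an l1 solve (scikit-learn's Lasso here, in place of the paper's homotopy solver), and a few MLEM iterations for Poisson-aware refinement:

```python
# Hedged pipeline sketch: TSVD -> l1 -> MLEM, with a toy system matrix.
import numpy as np
from sklearn.linear_model import Lasso

def three_step_fmt(A, y, rank=20, alpha=1e-3, mlem_iters=20):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)     # step 1: TSVD
    A_t, y_t = np.diag(s[:rank]) @ Vt[:rank], U[:, :rank].T @ y

    lasso = Lasso(alpha=alpha, positive=True, fit_intercept=False,
                  max_iter=5000)                         # step 2: sparse l1 fit
    x = lasso.fit(A_t, y_t).coef_ + 1e-12

    sens = np.maximum(A.sum(axis=0), 1e-12)              # step 3: MLEM updates
    for _ in range(mlem_iters):
        x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens
    return x

rng = np.random.default_rng(0)
A = rng.random((60, 100))
x_true = np.zeros(100); x_true[[10, 50]] = 2.0
print(three_step_fmt(A, A @ x_true).argmax())            # strongest recovered voxel
```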

  20. The reconstruction of sound speed in the Marmousi model by the boundary control method

    CERN Document Server

    Ivanov, I B; Semenov, V S

    2016-01-01

    We present the results of numerical testing of the boundary control method for sound speed determination in the acoustic equation on a semiplane. This method for solving multidimensional inverse problems requires no a priori information about the parameters under reconstruction. The application to the realistic Marmousi model demonstrates that the boundary control method is workable in the case of a complicated and irregular field of acoustic rays. Using the chosen boundary controls, an 'averaged' profile of the sound speed is recovered (the relative error is about 10-15%). Such a profile can be further utilized as a starting approximation for high-resolution iterative reconstruction methods.

  1. Reconstructed imaging of acoustic cloak using time-lapse reversal method

    Science.gov (United States)

    Zhou, Chen; Cheng, Ying; Xu, Jian-yi; Li, Bo; Liu, Xiao-jun

    2014-08-01

    We proposed and investigated a solution to the inverse acoustic cloak problem, an anti-stealth technology that makes cloaks visible, using the time-lapse reversal (TLR) method. The TLR method reconstructs the image of an unknown acoustic cloak by utilizing scattered acoustic waves. Compared to previous anti-stealth methods, the TLR method can determine not only the existence of a cloak but also its exact geometric information, such as shape, size, and position. Here, we present the process for TLR reconstruction based on time reversal invariance. This technology may have potential applications in detecting various types of cloaks with different geometric parameters.

  2. Maximum entropy reconstructions of dynamic signaling networks from quantitative proteomics data.

    Directory of Open Access Journals (Sweden)

    Jason W Locasale

    2009-08-01

    Full Text Available Advances in mass spectrometry among other technologies have allowed for quantitative, reproducible, proteome-wide measurements of levels of phosphorylation as signals propagate through complex networks in response to external stimuli under different conditions. However, computational approaches to infer elements of the signaling network strictly from the quantitative aspects of proteomics data are not well established. We considered a method using the principle of maximum entropy to infer a network of interacting phosphotyrosine sites from pairwise correlations in a mass spectrometry data set and derive a phosphorylation-dependent interaction network solely from quantitative proteomics data. We first investigated the applicability of this approach by using a simulation of a model biochemical signaling network whose dynamics are governed by a large set of coupled differential equations. We found that in a simulated signaling system, the method detects interactions with significant accuracy. We then analyzed a growth factor mediated signaling network in a human mammary epithelial cell line that we inferred from mass spectrometry data and observe a biologically interpretable, small-world structure of signaling nodes, as well as a catalog of predictions regarding the interactions among previously uncharacterized phosphotyrosine sites. For example, the calculation places a recently identified tumor suppressor pathway through ARHGEF7 and Scribble, in the context of growth factor signaling. Our findings suggest that maximum entropy derived network models are an important tool for interpreting quantitative proteomics data.
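
    For Gaussian data, the maximum entropy model consistent with pairwise correlations is the multivariate normal, so inferred interactions can be read off the precision (inverse covariance) matrix. The sketch below shows that reduction, which is one common way to realize the max-ent idea, not necessarily the authors' exact estimator:

```python
# Hedged sketch: max-ent pairwise interaction network from correlations.
import numpy as np

def maxent_interactions(data, threshold=0.1):
    # data: (samples, sites) matrix of phosphorylation levels
    cov = np.cov(data, rowvar=False)
    precision = np.linalg.pinv(cov)
    d = np.sqrt(np.diag(precision))
    partial_corr = -precision / np.outer(d, d)   # partial correlations
    np.fill_diagonal(partial_corr, 0.0)
    return np.abs(partial_corr) > threshold      # adjacency of inferred network

rng = np.random.default_rng(1)
data = rng.normal(size=(200, 8))                 # stand-in for MS measurements
print(maxent_interactions(data).sum(), "edges inferred")
```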

  3. 3-D ultrasound volume reconstruction using the direct frame interpolation method.

    Science.gov (United States)

    Scheipers, Ulrich; Koptenko, Sergei; Remlinger, Rachel; Falco, Tony; Lachaine, Martin

    2010-11-01

    A new method for 3-D ultrasound volume reconstruction using tracked freehand 3-D ultrasound is proposed. The method is based on solving the forward volume reconstruction problem using direct interpolation of high-resolution ultrasound B-mode image frames. A series of ultrasound B-mode image frames (an image series) is acquired using the freehand scanning technique and position sensing via optical tracking equipment. The proposed algorithm creates additional intermediate image frames by directly interpolating between two or more adjacent image frames of the original image series. The target volume is filled using the original frames in combination with the additionally constructed frames. Compared with conventional volume reconstruction methods, no additional filling of empty voxels or holes within the volume is required, because the whole extent of the volume is defined by the arrangement of the original and the additionally constructed B-mode image frames. The proposed direct frame interpolation (DFI) method was tested on two different data sets acquired while scanning the head and neck region of different patients. The first data set consisted of eight B-mode 2-D frame sets acquired under optimal laboratory conditions. The second data set consisted of 73 image series acquired during a clinical study. Sample volumes were reconstructed for all 81 image series using the proposed DFI method with four different interpolation orders, as well as with the pixel nearest-neighbor method using three different interpolation neighborhoods. In addition, volumes based on a reduced number of image frames were reconstructed for comparison of the different methods' accuracy and robustness in reconstructing image data that lies between the original image frames. The DFI method is based on a forward approach making use of a priori information about the position and shape of the B-mode image frames (e.g., masking information) to optimize the reconstruction procedure and to reduce
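
    The DFI step itself is simple to sketch: synthesize an intermediate frame (pixels plus tracked pose) between two adjacent B-mode frames. The linear pose blending below is a simplification; real tracked orientations need proper interpolation (e.g., of rotations):

```python
# Hedged sketch of direct frame interpolation between adjacent B-mode frames.
import numpy as np

def interpolate_frames(frame_a, frame_b, pose_a, pose_b, t):
    frame = (1 - t) * frame_a + t * frame_b     # pixel-wise blend
    pose = (1 - t) * pose_a + t * pose_b        # simplified pose blend
    return frame, pose

frame_a, frame_b = np.zeros((64, 64)), np.ones((64, 64))
pose_a, pose_b = np.array([0.0, 0, 0]), np.array([2.0, 0, 0])
mid_frame, mid_pose = interpolate_frames(frame_a, frame_b, pose_a, pose_b, 0.5)
print(mid_frame.mean(), mid_pose)               # 0.5 [1. 0. 0.]
```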

  4. An adaptive total variation image reconstruction method for speckles through disordered media

    Science.gov (United States)

    Gong, Changmei; Shao, Xiaopeng; Wu, Tengfei

    2013-09-01

    Multiple scattering of light in a highly disordered medium, combined with an image reconstruction method, can break the diffraction limit of a conventional optical system. Once the transmission matrix of the imaging system is obtained, the target image can be reconstructed from its speckle pattern by an image reconstruction algorithm. Nevertheless, images restored by common reconstruction algorithms such as Tikhonov regularization have a relatively low signal-to-noise ratio (SNR) due to experimental noise and reconstruction noise, greatly reducing the quality of the result. In this paper, the speckle pattern of the test image is simulated by combining light propagation theory and statistical optics. Subsequently, an adaptive total variation (ATV) algorithm, TV minimization by augmented Lagrangian and alternating direction algorithms (TVAL3), is utilized to reconstruct the target image. Numerical simulation results show that the TVAL3 algorithm can effectively suppress the noise of the restored image and preserve more image details, thus greatly boosting the SNR of the restored image. The results also indicate that, compared with the image formed directly by a 'clean' system, the reconstructed results can overcome the diffraction limit of the 'clean' system, which is conducive to the observation of cells, protein molecules, and other micro/nanoscale structures in biological tissues.

  5. Real-Time Compressive Sensing MRI Reconstruction Using GPU Computing and Split Bregman Methods

    Directory of Open Access Journals (Sweden)

    David S. Smith

    2012-01-01

    Full Text Available Compressive sensing (CS) has been shown to enable dramatic acceleration of MRI acquisition in some applications. Being an iterative reconstruction technique, CS MRI reconstructions can be more time-consuming than traditional inverse Fourier reconstruction. We have accelerated our CS MRI reconstruction by factors of up to 27 by using a split Bregman solver combined with a graphics processing unit (GPU) computing platform. The increases in speed we find are similar to those we measure for matrix multiplication on this platform, suggesting that the split Bregman methods parallelize efficiently. We demonstrate that the combination of the rapid convergence of the split Bregman algorithm and the massively parallel strategy of GPU computing can enable real-time CS reconstruction of even acquisition data matrices of dimension 4096 x 4096 or more, depending on available GPU VRAM. Reconstruction of two-dimensional data matrices of dimension 1024 x 1024 and smaller took ~0.3 s or less, showing that this platform also provides very fast iterative reconstruction for small-to-moderate size images.

  6. A Reconstructed Discontinuous Galerkin Method for the Compressible Navier-Stokes Equations on Arbitrary Grids

    Energy Technology Data Exchange (ETDEWEB)

    Hong Luo; Luqing Luo; Robert Nourgaliev; Vincent A. Mousseau

    2010-01-01

    A reconstruction-based discontinuous Galerkin (RDG) method is presented for the solution of the compressible Navier-Stokes equations on arbitrary grids. The RDG method, originally developed for the compressible Euler equations, is extended to discretize the viscous and heat fluxes in the Navier-Stokes equations using a so-called inter-cell reconstruction, where a smooth solution is locally reconstructed using a least-squares method from the underlying discontinuous DG solution. Similar to the recovery-based DG (rDG) methods, this reconstructed DG method eliminates the introduction of ad hoc penalty or coupling terms commonly found in traditional DG methods. Unlike rDG methods, this RDG method does not need to judiciously choose a proper form of a recovered polynomial; it is thus simple, flexible, and robust, and can be used on arbitrary grids. The developed RDG method is used to compute a variety of flow problems on arbitrary meshes to demonstrate its accuracy, efficiency, robustness, and versatility. The numerical results indicate that this RDG method is able to deliver the same accuracy as the well-known Bassi-Rebay II scheme at half of its computing cost for the discretization of the viscous fluxes in the Navier-Stokes equations, clearly demonstrating its superior performance over the existing DG methods for solving the compressible Navier-Stokes equations.

  7. Driver Injury Risk Variability in Finite Element Reconstructions of Crash Injury Research and Engineering Network (CIREN) Frontal Motor Vehicle Crashes.

    Science.gov (United States)

    Gaewsky, James P; Weaver, Ashley A; Koya, Bharath; Stitzel, Joel D

    2015-01-01

    A 3-phase real-world motor vehicle crash (MVC) reconstruction method was developed to analyze injury variability as a function of precrash occupant position for 2 full-frontal Crash Injury Research and Engineering Network (CIREN) cases. Phase I: A finite element (FE) simplified vehicle model (SVM) was developed and tuned to mimic the frontal crash characteristics of the CIREN case vehicle (Camry or Cobalt) using frontal New Car Assessment Program (NCAP) crash test data. Phase II: The Toyota HUman Model for Safety (THUMS) v4.01 was positioned in 120 precrash configurations per case within the SVM. Five occupant positioning variables were varied using a Latin hypercube design of experiments: seat track position, seat back angle, D-ring height, steering column angle, and steering column telescoping position. An additional baseline simulation was performed that aimed to match the precrash occupant position documented in CIREN for each case. Phase III: FE simulations were then performed using kinematic boundary conditions from each vehicle's event data recorder (EDR). HIC15, combined thoracic index (CTI), femur forces, and strain-based injury metrics in the lung and lumbar vertebrae were evaluated to predict injury. Tuning the SVM to specific vehicle models resulted in close matches between simulated and test injury metric data, allowing the tuned SVM to be used in each case reconstruction with EDR-derived boundary conditions. Simulations with the most rearward seats and reclined seat backs had the greatest HIC15, head injury risk, CTI, and chest injury risk. Calculated injury risks for the head, chest, and femur closely correlated to the CIREN occupant injury patterns. CTI in the Camry case yielded a 54% probability of Abbreviated Injury Scale (AIS) 2+ chest injury in the baseline case simulation and ranged from 34 to 88% (mean = 61%) risk in the least and most dangerous occupant positions. The greater than 50% probability was consistent with the case occupant's AIS 2
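
    Phase II's design of experiments can be reproduced in outline with SciPy's quasi-Monte Carlo module; the variable bounds below are hypothetical placeholders, not the study's actual ranges:

```python
# Hedged sketch: Latin hypercube design over five occupant-position variables.
from scipy.stats import qmc

variables = ["seat_track", "seat_back_angle", "d_ring_height",
             "steering_column_angle", "steering_column_telescope"]
lower = [0.0, 20.0, 0.0, 30.0, 0.0]      # hypothetical lower bounds
upper = [0.2, 35.0, 0.1, 45.0, 0.05]     # hypothetical upper bounds

sampler = qmc.LatinHypercube(d=len(variables), seed=0)
design = qmc.scale(sampler.random(n=120), lower, upper)  # 120 configurations
print(design.shape)     # (120, 5): one row per precrash FE simulation
```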

  8. Methods and Simulations of Muon Tomography and Reconstruction

    Science.gov (United States)

    Schreiner, Henry Fredrick, III

    This dissertation investigates imaging with cosmic ray muons using scintillator-based portable particle detectors, and covers a variety of the elements required for the detectors to operate and take data, from the detector's internal communications and software algorithms to a measurement that allows accurate predictions of the attenuation of physical targets. A discussion of the tracking process for the three-layer helical design developed at UT Austin is presented, with details of the data acquisition system and the highly efficient data format. Upgrades to this system provide a stable platform for taking images in harsh or inaccessible environments, such as a remote jungle in Belize. A Geant4 Monte Carlo simulation was used to develop our understanding of the efficiency of the system, as well as to make predictions for a variety of different targets. The projection process is discussed, with a high-speed algorithm for sweeping a plane through data in near real time, to be used in applications requiring a search through space for target discovery. Several other projections and a foundation for high-fidelity 3D reconstructions are covered. A variable binning scheme for rapidly varying statistics over portions of an image plane is also presented and used. A discrepancy between our predictions and the observed attenuation through smaller targets is shown, and it is resolved with a new measurement of the low-energy spectrum, using a specially designed enclosure to make a series of measurements underwater. This provides a better basis for understanding images of small amounts of material, such as thin cover materials.

  9. Experimental and computational tools useful for (re)construction of dynamic kinase-substrate networks

    DEFF Research Database (Denmark)

    Tan, Chris Soon Heng; Linding, Rune

    2009-01-01

    The explosion of site- and context-specific in vivo phosphorylation events presents a potentially rich source of biological knowledge and calls for novel data analysis and modeling paradigms. Perhaps the most immediate challenge is delineating detected phosphorylation sites to their effector kinases. This is important for (re)constructing transient kinase-substrate interaction networks that are essential for mechanistic understanding of cellular behaviors and therapeutic intervention, but has largely eluded high-throughput protein-interaction studies due to their transient nature and strong dependencies on cellular context. Here, we surveyed some of the computational approaches developed to dissect phosphorylation data detected in systematic proteomic experiments and reviewed some experimental and computational approaches used to map phosphorylation sites to their effector kinases in efforts

  10. Reconstruction and in silico analysis of metabolic network for an oleaginous yeast, Yarrowia lipolytica.

    Directory of Open Access Journals (Sweden)

    Pengcheng Pan

    Full Text Available With the emergence of energy scarcity, the use of renewable energy sources such as biodiesel is becoming increasingly necessary. Recently, many researchers have focused on Yarrowia lipolytica, a model oleaginous yeast, which can be employed to accumulate large amounts of lipids that can be further converted to biodiesel. In order to understand the metabolic characteristics of Y. lipolytica at a systems level and to examine the potential for enhanced lipid production, a genome-scale compartmentalized metabolic network was reconstructed based on a combination of genome annotation and detailed biochemical knowledge from multiple databases such as KEGG, ENZYME, and BiGG. The information on protein and reaction associations for all the organisms in the KEGG and Expasy-ENZYME databases was arranged into an EXCEL file that can then be regarded as a new, useful database for generating other reconstructions. The generated model iYL619_PCP accounts for 619 genes, 843 metabolites, and 1,142 reactions, including 236 transport reactions, 125 exchange reactions, and 13 spontaneous reactions. The in silico model successfully predicted the minimal media and the growth capabilities on different substrates. With flux balance analysis, single-gene knockouts were also simulated to predict the essential genes and partially essential genes. In addition, flux variability analysis was applied to design new mutant strains that will redirect fluxes through the network and may enhance the production of lipid. This genome-scale metabolic model of Y. lipolytica can facilitate system-level metabolic analysis as well as strain development for improving the production of biodiesel and other valuable products by Y. lipolytica and other closely related oleaginous yeasts.
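
    The flux balance analysis and knockout simulations mentioned here reduce to linear programs; a toy three-reaction sketch with SciPy (the network is illustrative, not part of iYL619_PCP):

```python
# Hedged FBA sketch: maximize growth flux v1 subject to S v = 0 and bounds.
import numpy as np
from scipy.optimize import linprog

S = np.array([[1.0, -1.0, -1.0]])         # metabolite: +uptake -growth -byproduct
bounds = [(0, 10), (0, None), (0, None)]  # uptake capped at 10
c = np.array([0.0, -1.0, 0.0])            # linprog minimizes, so -v1

res = linprog(c, A_eq=S, b_eq=np.zeros(1), bounds=bounds)
print(res.x)                              # growth flux reaches the uptake cap

# Single-gene knockout of the uptake reaction: clamp its flux to zero.
ko = linprog(c, A_eq=S, b_eq=np.zeros(1),
             bounds=[(0, 0), (0, None), (0, None)])
print(ko.x)                               # growth collapses: an essential "gene"
```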

  12. Comparative analysis of quantitative efficiency evaluation methods for transportation networks.

    Science.gov (United States)

    He, Yuxin; Qin, Jin; Hong, Jian

    2017-01-01

    An effective evaluation of transportation network efficiency could offer guidance for the optimal control of urban traffic. Based on the introduction and mathematical analysis of three quantitative evaluation methods for transportation network efficiency, this paper compares the information they measure, including network structure, traffic demand, travel choice behavior, and other factors that affect network efficiency. Accordingly, the applicability of the various evaluation methods is discussed. Analysis of different example networks shows that the Q-H method reflects well the influence of network structure, traffic demand, and user route choice behavior on transportation network efficiency. In addition, the network efficiency measured by this method and Braess's paradox can be explained in terms of each other, indicating a better evaluation of the real operating condition of a transportation network. The analysis of the network efficiency calculated by the Q-H method also shows that a specific appropriate demand exists for a given transportation network. Meanwhile, under fixed demand, one can identify both the critical network structure that guarantees the stability and basic operation of the network and the specific network structure that yields the largest transportation network efficiency.

  13. Assessing respiratory mechanics using pressure reconstruction method in mechanically ventilated spontaneous breathing patient.

    Science.gov (United States)

    Damanhuri, Nor Salwa; Chiew, Yeong Shiong; Othman, Nor Azlan; Docherty, Paul D; Pretty, Christopher G; Shaw, Geoffrey M; Desaive, Thomas; Chase, J Geoffrey

    2016-07-01

    Respiratory system modelling can aid clinical decision making during mechanical ventilation (MV) in intensive care. However, spontaneous breathing (SB) efforts can produce entrained "M-wave" airway pressure waveforms that inhibit identification of accurate values for respiratory system elastance and airway resistance. A pressure wave reconstruction method is proposed to accurately identify respiratory mechanics, assess the level of SB effort, and quantify the incidence of SB effort without uncommon measuring devices or interruption to care. Data from 275 breaths aggregated from mechanically ventilated patients at Christchurch Hospital were used in this study. The breath-specific respiratory elastance is calculated using a time-varying elastance model. A pressure reconstruction method is proposed to reconstruct pressure waves identified as affected by SB effort. The areas under the curve of the time-varying respiratory elastance (AUC Edrs) are calculated and compared; unreconstructed waves yield lower AUC Edrs. The difference between the reconstructed and unreconstructed pressure is taken as a surrogate measure of SB effort. The pressure reconstruction method yielded a median AUC Edrs of 19.21 [IQR: 16.30-22.47] cmH2Os/l. In contrast, the median AUC Edrs for unreconstructed M-wave data was 20.41 [IQR: 16.68-22.81] cmH2Os/l. The pressure reconstruction method had the least variability in AUC Edrs as assessed by the robust coefficient of variation (RCV), 0.04 versus 0.05 for unreconstructed data. Each patient exhibited different levels of SB effort, independent of the MV setting, indicating the need for non-invasive, real-time assessment of SB effort. A simple reconstruction method enables more consistent real-time estimation of the true underlying respiratory system mechanics of an SB patient and provides a surrogate of SB effort, which may be clinically useful for clinicians in determining optimal ventilator settings to improve patient care.
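
    For orientation, the single-compartment model underlying this kind of elastance identification, P(t) = E V(t) + R Q(t) + P0, can be fitted by linear least squares; this sketch omits the time-varying elastance and the M-wave reconstruction step themselves:

```python
# Hedged sketch: identify elastance E, resistance R, and offset P0 from
# airway pressure, flow, and volume via linear least squares.
import numpy as np

def identify_mechanics(pressure, flow, volume):
    A = np.column_stack([volume, flow, np.ones_like(volume)])
    (E, R, P0), *_ = np.linalg.lstsq(A, pressure, rcond=None)
    return E, R, P0

t = np.linspace(0, 1, 100)
flow = np.sin(np.pi * t)                    # toy inspiratory flow profile
volume = np.cumsum(flow) * (t[1] - t[0])    # integrate flow to volume
pressure = 20 * volume + 10 * flow + 5      # synthetic: E=20, R=10, PEEP=5
print(identify_mechanics(pressure, flow, volume))   # ~ (20.0, 10.0, 5.0)
```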

  14. Nonlinear PET parametric image reconstruction with MRI information using kernel method

    Science.gov (United States)

    Gong, Kuang; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2017-03-01

    Positron emission tomography (PET) is a functional imaging modality widely used in oncology, cardiology, and neurology. It is highly sensitive but suffers from relatively poor spatial resolution compared with anatomical imaging modalities such as magnetic resonance imaging (MRI). With the recent development of combined PET/MR systems, we can improve PET image quality by incorporating MR information. Previously, we used kernel learning to embed MR information in static PET reconstruction and direct Patlak reconstruction. Here we extend this method to the direct reconstruction of nonlinear parameters in a compartment model by using the alternating direction method of multipliers (ADMM) algorithm. Simulation studies show that the proposed method can produce superior parametric images compared with existing methods.

  15. The approximate inversion as a reconstruction method in X-ray computerized tomography

    CERN Document Server

    Dietz, R L

    1999-01-01

    The mathematical model of X-ray computerized tomography will be developed in the first chapter, the approximate inversion will be introduced, and the Radon transform will be used as an example to demonstrate the calculation of a reconstruction kernel. In the second chapter, a reconstruction method for the parallel geometry is discussed, leading to a derivation of the method for fan-beam geometry. The approximate inversion calculated for the limited-angle case is presented as an example of incomplete data problems. As with complete data problems, numerical examples are given and the method is compared with other existing methods. 3D reconstruction is the topic of the third chapter. Although of no relevance in practice, a parallel geometry will be examined first. No problems are encountered in transferring the reconstruction kernel to the cone-beam geometry, but only for a scanning curve which likewise is of no relevance in practice. A further reconstruction method is presented for curves fulfilling the so-called Tuy conditi...

  16. Multicore Performance of Block Algebraic Iterative Reconstruction Methods

    DEFF Research Database (Denmark)

    Sørensen, Hans Henrik B.; Hansen, Per Christian

    2014-01-01

    These methods work on blocks of the projection data, either treating the blocks sequentially or computing a result for each block in parallel and then combining these results before the next iteration. The goal of this work is to demonstrate which block methods are best suited for implementation on modern multicore computers. To compare the performance of the different block methods, we use

  17. THREE-DIMENSIONAL RECONSTRUCTION BY SART METHOD WITH MINIMIZATION OF THE TOTAL VARIATION

    Directory of Open Access Journals (Sweden)

    S. A. Zolotarev

    2015-01-01

    Full Text Available Computed tomography is still being intensively studied and is widely used for a number of industrial and medical applications. The simultaneous algebraic reconstruction technique (SART), considered in this work, is one of the most promising iterative methods for tomographic problems. A graphics processor is used to accelerate the reconstruction. Total variation (TV) minimization is used as a priori support to regularize the iterative process and to overcome the incompleteness of the information.
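
    A compact sketch of a SART sweep with a TV step between iterations, in the spirit of the combination described; the toy system matrix stands in for one built from the real scan geometry, and the GPU acceleration is not reproduced:

```python
# Hedged sketch: SART updates interleaved with TV denoising (scikit-image).
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def sart_tv(A, b, img_shape, iters=20, relax=0.5, tv_weight=0.01):
    row_sums = np.maximum(np.abs(A).sum(axis=1), 1e-12)
    col_sums = np.maximum(np.abs(A).sum(axis=0), 1e-12)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        residual = (b - A @ x) / row_sums            # simultaneous SART update
        x += relax * (A.T @ residual) / col_sums
        x = denoise_tv_chambolle(x.reshape(img_shape),   # TV regularization
                                 weight=tv_weight).ravel()
    return x

rng = np.random.default_rng(0)
x_true = np.zeros((8, 8)); x_true[2:6, 2:6] = 1.0
A = rng.random((100, 64))
b = A @ x_true.ravel()
print(np.abs(sart_tv(A, b, (8, 8)) - x_true.ravel()).mean())
```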

  18. A Penalized Linear and Nonlinear Combined Conjugate Gradient Method for the Reconstruction of Fluorescence Molecular Tomography

    OpenAIRE

    Shang Shang; Jing Bai; Xiaolei Song; Hongkai Wang; Jaclyn Lau

    2007-01-01

    The conjugate gradient method is known to be efficient for nonlinear optimization problems with large-dimensional data. In this paper, a penalized linear and nonlinear combined conjugate gradient method for the reconstruction of fluorescence molecular tomography (FMT) is presented. The algorithm combines the linear conjugate gradient method and the nonlinear conjugate gradient method based on a restart strategy, in order to take advantage of the two kinds of conjugate gradient methods and

  19. A Reconstructed Discontinuous Galerkin Method for the Compressible Flows on Unstructured Tetrahedral Grids

    Energy Technology Data Exchange (ETDEWEB)

    Hong Luo; Yidong Xia; Robert Nourgaliev; Chunpei Cai

    2011-06-01

    A reconstruction-based discontinuous Galerkin (RDG) method is presented for the solution of the compressible Navier-Stokes equations on unstructured tetrahedral grids. The RDG method, originally developed for the compressible Euler equations, is extended to discretize viscous and heat fluxes in the Navier-Stokes equations using a so-called inter-cell reconstruction, where a smooth solution is locally reconstructed using a least-squares method from the underlying discontinuous DG solution. Similar to the recovery-based DG (rDG) methods, this reconstructed DG method eliminates the introduction of ad hoc penalty or coupling terms commonly found in traditional DG methods. Unlike rDG methods, this RDG method does not need to judiciously choose a proper form of a recovered polynomial, thus is simple, flexible, and robust, and can be used on unstructured grids. The preliminary results indicate that this RDG method is stable on unstructured tetrahedral grids, and provides a viable and attractive alternative for the discretization of the viscous and heat fluxes in the Navier-Stokes equations.

  20. Artificial Neural Networks for reconstruction of energy losses in dead materials between barrel LAr and Tile calorimeters exploration and results

    CERN Document Server

    Budagov, Yu A; Kulchitskii, Yu A; Rusakovitch, N A; Shigaev, V N; Tsiareshka, P V

    2008-01-01

    In the course of computational experiments with Monte Carlo events for the ATLAS Combined Test Beam 2004 setup, the artificial neural network (ANN) technique was applied to the reconstruction of energy losses in the dead material between the barrel LAr and Tile calorimeters (Edm). The constructed ANN procedures use as input vectors the information content of different sets of variables (parameters) describing particular features of an event's hadronic shower in the ATLAS calorimeters. It was shown that applying ANN procedures allows one to reach a 40% reduction in the Edm reconstruction error compared to the conventional procedure used by the ATLAS collaboration. The impact of various features of a shower on the precision of Edm reconstruction is presented in detail. It was found that longitudinal shower profile information brings a greater improvement in Edm reconstruction accuracy than cell energy information in the LAr3 and Tile1 samplings.

  1. A new method to reconstruct intra-fractional prostate motion in volumetric modulated arc therapy

    Science.gov (United States)

    Chi, Y.; Rezaeian, N. H.; Shen, C.; Zhou, Y.; Lu, W.; Yang, M.; Hannan, R.; Jia, X.

    2017-07-01

    Intra-fractional motion is a concern during prostate radiation therapy, as it may cause deviations between planned and delivered radiation doses. Because accurate motion information during treatment delivery is critical to address dose deviation, we developed the projection marker matching method (PM3), a novel method for prostate motion reconstruction in volumetric modulated arc therapy. The purpose of this method is to reconstruct the in-treatment prostate motion trajectory using projected positions of implanted fiducial markers measured in kV x-ray projection images acquired during treatment delivery. We formulated this task as a quadratic optimization problem. The objective function penalized the distance from the reconstructed 3D position of each fiducial marker to the corresponding straight line, defined by the x-ray projection of the marker. Rigid translational motion of the prostate and motion smoothness along the temporal dimension were assumed and incorporated into the optimization model. We tested the motion reconstruction method in both simulation and phantom experimental studies. We quantified the accuracy using the 3D normalized root-mean-square (RMS) error, defined as the norm of a vector containing ratios between the absolute RMS errors and corresponding motion ranges in three dimensions. In the simulation study with realistic prostate motion trajectories, the 3D normalized RMS error was on average ~0.164 (range from 0.097 to 0.333). In an experimental study, a prostate phantom was driven to move along a realistic prostate motion trajectory. The 3D normalized RMS error was ~0.172. We also examined the impact of the model parameters on reconstruction accuracy, and found that a single set of parameters can be used for all the tested cases to accurately reconstruct the motion trajectories. The motion trajectory derived by PM3 may be incorporated into novel strategies, including 4D dose reconstruction and adaptive treatment replanning, to address motion.
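
    As a rough illustration of the quadratic model described above (not the authors' code), each projection can be represented by a 3D line, and the positions recovered by solving the normal equations of a point-to-line fitting term plus a temporal smoothness term; all names below are ours.

    ```python
    import numpy as np

    def reconstruct_trajectory(lines, lam=1.0):
        """Minimize sum_t ||(I - d_t d_t^T)(p_t - a_t)||^2 + lam * sum_t ||p_{t+1} - p_t||^2,
        where each kV projection at time t defines a 3D line through point a_t with
        unit direction d_t. The problem is quadratic, so we assemble and solve the
        normal equations; varying gantry angles keep the system well-posed."""
        T = len(lines)
        H = np.zeros((3 * T, 3 * T)); g = np.zeros(3 * T)
        I3 = np.eye(3)
        for t, (a, d) in enumerate(lines):
            P = I3 - np.outer(d, d)            # projector orthogonal to the ray
            H[3*t:3*t+3, 3*t:3*t+3] += P       # P^T P = P for a projector
            g[3*t:3*t+3] += P @ a
        for t in range(T - 1):                 # temporal smoothness coupling
            s0, s1 = slice(3*t, 3*t+3), slice(3*t+3, 3*t+6)
            H[s0, s0] += lam * I3; H[s1, s1] += lam * I3
            H[s0, s1] -= lam * I3; H[s1, s0] -= lam * I3
        return np.linalg.solve(H, g).reshape(T, 3)
    ```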

  2. L1/2 regularization based numerical method for effective reconstruction of bioluminescence tomography

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xueli, E-mail: xlchen@xidian.edu.cn, E-mail: jimleung@mail.xidian.edu.cn; Yang, Defu; Zhang, Qitan; Liang, Jimin, E-mail: xlchen@xidian.edu.cn, E-mail: jimleung@mail.xidian.edu.cn [School of Life Science and Technology, Xidian University, Xi'an 710071 (China); Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education (China)

    2014-05-14

    Even though bioluminescence tomography (BLT) exhibits significant potential and wide applications in macroscopic imaging of small animals in vivo, the inverse reconstruction is still a tough problem that has plagued researchers in a related area. The ill-posedness of inverse reconstruction arises from insufficient measurements and modeling errors, so that the inverse reconstruction cannot be solved directly. In this study, an l1/2 regularization based numerical method was developed for effective reconstruction of BLT. In the method, the inverse reconstruction of BLT was constrained into an l1/2 regularization problem, and then the weighted interior-point algorithm (WIPA) was applied to solve the problem through transforming it into obtaining the solution of a series of l1 regularizers. The feasibility and effectiveness of the proposed method were demonstrated with numerical simulations on a digital mouse. Stability verification experiments further illustrated the robustness of the proposed method for different levels of Gaussian noise.
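
    The WIPA details are not given in the record; a common way to realize "a series of l1 regularizers" for an l1/2 penalty is iterative reweighting, sketched here with a plain ISTA inner solver (all parameter values are illustrative).

    ```python
    import numpy as np

    def weighted_ista(A, b, w, lam, n_iter=200):
        """Inner solver: min 0.5||Ax - b||^2 + lam * sum_i w_i |x_i|
        by iterative soft thresholding with a Lipschitz step size."""
        L = np.linalg.norm(A, 2) ** 2          # spectral norm squared
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            z = x - (A.T @ (A @ x - b)) / L
            thr = lam * w / L
            x = np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)
        return x

    def l_half_reweighted(A, b, lam=0.01, outer=5, eps=1e-3):
        """Approximate the l1/2 regularizer by a sequence of weighted l1
        problems, in the spirit of the 'series of l1 regularizers' above."""
        w = np.ones(A.shape[1])
        x = np.zeros(A.shape[1])
        for _ in range(outer):
            x = weighted_ista(A, b, w, lam)
            w = 1.0 / (np.sqrt(np.abs(x)) + eps)   # reweight toward |x|^(1/2) penalty
        return x
    ```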

  3. A physics-based intravascular ultrasound image reconstruction method for lumen segmentation.

    Science.gov (United States)

    Mendizabal-Ruiz, Gerardo; Kakadiaris, Ioannis A

    2016-08-01

    Intravascular ultrasound (IVUS) refers to the medical imaging technique consisting of a miniaturized ultrasound transducer located at the tip of a catheter that can be introduced into the blood vessels, providing high-resolution, cross-sectional images of their interior. Current methods for the generation of an IVUS image reconstruction from radio frequency (RF) data do not account for the physics involved in the interaction between the IVUS ultrasound signal and the tissues of the vessel. In this paper, we present a novel method to generate an IVUS image reconstruction based on the use of a scattering model that considers the tissues of the vessel as a distribution of three-dimensional point scatterers. We evaluated the impact of employing the proposed IVUS image reconstruction method on the segmentation of the lumen/wall interface in 40 MHz IVUS data using an existing automatic lumen segmentation method. We compared the results with those obtained using the B-mode reconstruction on 600 randomly selected frames from twelve pullback sequences acquired from rabbit aortas and different arteries of swine. Our results indicate the feasibility of employing the proposed IVUS image reconstruction for the segmentation of the lumen. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Towards a d-bar reconstruction method for three-dimensional EIT

    DEFF Research Database (Denmark)

    Cornean, Horia Decebal; Knudsen, Kim

    It is shown that exponentially growing solutions exist for low complex frequencies without imposing any regularity assumption on the conductivity. Further, a reconstruction method for conductivities close to a constant is given. In this method the complex frequency is taken to zero instead...

  5. Vascular blood flow reconstruction from tomographic projections with the adjoint method and receding optimal control strategy

    Science.gov (United States)

    Sixou, B.; Boissel, L.; Sigovan, M.

    2017-10-01

    In this work, we study the measurement of blood velocity with contrast enhanced computed tomography. The inverse problem is formulated as an optimal control problem with the transport equation as constraint. The velocity field is reconstructed with a receding optimal control strategy and the adjoint method. The convergence of the method is fast.

  6. Phase microscopy using light-field reconstruction method for cell observation.

    Science.gov (United States)

    Xiu, Peng; Zhou, Xin; Kuang, Cuifang; Xu, Yingke; Liu, Xu

    2015-08-01

    The refractive index (RI) distribution can serve as a natural label for imaging undyed cells. However, the majority of images obtained through quantitative phase microscopy are integrated along the illumination direction and cannot reflect additional information about the refractive map on a given plane. Herein, a light-field reconstruction method to image the RI map within a depth of 0.2 μm is proposed. It records quantitative phase-delay images using a four-step phase shifting method in different directions and then reconstructs a similar scattered light field for the refractive sample on the focal plane. It can image the RI of samples, transparent cell samples in particular, in a manner similar to the observation of scattering characteristics. The light-field reconstruction method is therefore a powerful tool for use in cytobiology studies. Copyright © 2015 Elsevier Ltd. All rights reserved.
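
    For reference, the classical four-step phase-shifting recovery (assuming shifts of 0, π/2, π, and 3π/2, which the record does not specify) reduces to a single arctangent:

    ```python
    import numpy as np

    def four_step_phase(I0, I1, I2, I3):
        """Standard four-step phase-shifting formula for intensity frames
        I(phi + delta) with delta = 0, pi/2, pi, 3pi/2:
        I0 - I2 = 2b*cos(phi) and I3 - I1 = 2b*sin(phi), so
        phi = atan2(I3 - I1, I0 - I2), wrapped into (-pi, pi]."""
        return np.arctan2(I3 - I1, I0 - I2)
    ```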

  7. Method and tool for network vulnerability analysis

    Science.gov (United States)

    Swiler, Laura Painton [Albuquerque, NM; Phillips, Cynthia A [Albuquerque, NM

    2006-03-14

    A computer system analysis tool and method that will allow for qualitative and quantitative assessment of security attributes and vulnerabilities in systems including computer networks. The invention is based on generation of attack graphs wherein each node represents a possible attack state and each edge represents a change in state caused by a single action taken by an attacker or unwitting assistant. Edges are weighted using metrics such as attacker effort, likelihood of attack success, or time to succeed. Generation of an attack graph is accomplished by matching information about attack requirements (specified in "attack templates") to information about computer system configuration (contained in a configuration file that can be updated to reflect system changes occurring during the course of an attack) and assumed attacker capabilities (reflected in "attacker profiles"). High risk attack paths, which correspond to those considered suited to application of attack countermeasures given limited resources for applying countermeasures, are identified by finding "epsilon optimal paths."
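
    A toy sketch of the idea, using networkx rather than the patented tool: edges carry attacker-effort weights, and "epsilon optimal" paths are those within a chosen margin of the cheapest attack path. All node names, weights, and the margin are invented for illustration.

    ```python
    import networkx as nx

    # Toy attack graph: nodes are attack states, edge weights approximate
    # attacker effort (names and weights are illustrative, not from the patent).
    G = nx.DiGraph()
    G.add_weighted_edges_from([
        ("start", "scan", 1.0), ("scan", "exploit-web", 3.0),
        ("scan", "phish-user", 2.0), ("exploit-web", "root", 4.0),
        ("phish-user", "root", 5.0),
    ])

    best = nx.shortest_path(G, "start", "root", weight="weight")
    best_cost = nx.shortest_path_length(G, "start", "root", weight="weight")

    # "Epsilon optimal" paths: all attack paths within eps of the best cost;
    # these are the high-risk paths worth defending first.
    eps = 1.5
    risky = [p for p in nx.all_simple_paths(G, "start", "root")
             if nx.path_weight(G, p, weight="weight") <= best_cost + eps]
    print(best, best_cost, risky)
    ```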

  8. Reconstruction of transcription control networks in Mollicutes by high-throughput identification of promoters

    Directory of Open Access Journals (Sweden)

    Gleb Y Fisunov

    2016-12-01

    Full Text Available Bacteria of the class Mollicutes have significantly reduced genomes and gene expression control systems. They are also efficient pathogens that can colonize a broad range of hosts, including plants and animals. Despite their simplicity, Mollicutes demonstrate complex transcriptional responses to various conditions, which seems to contradict the reduction of their gene expression regulation mechanisms. We analyzed the conservation and distribution of transcription regulators across 50 Mollicutes species. The majority of the transcription factors regulate transport and metabolism, and four transcription factors demonstrate significant conservation across the analyzed bacteria: the repressor of chaperones HrcA, the cell cycle regulator MraZ, and two regulators with unclear function from the WhiA and YebC/PmpR families. We then used three representative species of the major clades of Mollicutes (Acholeplasma laidlawii, Spiroplasma melliferum and Mycoplasma gallisepticum) to perform promoter mapping and activity quantitation. We revealed that Mollicutes evolved towards a simplification of promoter architecture that correlates with a diminishing role of transcription regulation and an increase in transcriptional noise. Using the identified operon structures and a comparative genomics approach, we reconstructed the transcription control networks for these three species. The organization of the networks reflects the adaptation of the bacteria to specific conditions and hosts.

  9. Deep Convolutional Networks for Event Reconstruction and Particle Tagging on NOvA and DUNE

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Deep Convolutional Neural Networks (CNNs) have been widely applied in computer vision to solve complex problems in image recognition and analysis. In recent years many efforts have emerged to extend the use of this technology to HEP applications, including the Convolutional Visual Network (CVN), our implementation for identification of neutrino events. In this presentation I will describe the core concepts of CNNs, the details of our particular implementation in the Caffe framework and our application to identify NOvA events. NOvA is a long baseline neutrino experiment whose main goal is the measurement of neutrino oscillations. This relies on the accurate identification and reconstruction of the neutrino flavor in the interactions we observe. In 2016 the NOvA experiment released results for the observation of oscillations in the νμ → νe channel, the first HEP result employing CNNs. I will also discuss our approach at event identification on NOvA as well as recent developments in the application of CNN...

  10. What can genome-scale metabolic network reconstructions do for prokaryotic systematics?

    Science.gov (United States)

    Barona-Gómez, Francisco; Cruz-Morales, Pablo; Noda-García, Lianet

    2012-01-01

    It has recently been proposed that in addition to Nomenclature, Classification and Identification, Comprehending Microbial Diversity may be considered as the fourth tenet of microbial systematics [Staley JT (2010) The Bulletin of BISMiS, 1(1): 1-5]. As this fourth goal implies a fundamental understanding of microbial speciation, this perspective article argues that translation of bacterial genome sequences into metabolic features may contribute to the development of modern polyphasic taxonomic approaches. Genome-scale metabolic network reconstructions (GSMRs), which are the result of computationally predicted and experimentally confirmed stoichiometric matrices incorporating all enzyme and metabolite components encoded by a genome sequence, provide a platform that can illustrate bacterial speciation. As the topology and the composition of GSMRs are expected to be the result of adaptive evolution, the features of these networks may provide the prokaryotic taxonomist with novel tools for reaching the fourth tenet of microbial systematics. Through selected examples from the Actinobacteria, which have been inferred from GSMRs and experimentally confirmed after phenotypic characterisation, it will be shown that this level of information can be incorporated into modern polyphasic taxonomic approaches. In conclusion, three specific examples are illustrated to show how GSMRs will revolutionize prokaryotic systematics, as has previously occurred in many other fields of microbiology.

  11. Comprehensive reconstruction and visualization of non-coding regulatory networks in human.

    Science.gov (United States)

    Bonnici, Vincenzo; Russo, Francesco; Bombieri, Nicola; Pulvirenti, Alfredo; Giugno, Rosalba

    2014-01-01

    Research attention has increasingly been focused on understanding the functional roles of non-coding RNAs (ncRNAs). Many studies have demonstrated their deregulation in cancer and other human disorders. ncRNAs are also present in extracellular human body fluids such as serum and plasma, giving them great potential as non-invasive biomarkers. However, non-coding RNAs were discovered relatively recently, and a comprehensive database including all of them is still missing. Reconstructing and visualizing the network of ncRNA interactions are important steps toward understanding their regulatory mechanism in complex systems. This work presents ncRNA-DB, a NoSQL database that integrates ncRNA data interactions from a large number of well-established on-line repositories. The interactions involve RNA, DNA, proteins, and diseases. ncRNA-DB is available at http://ncrnadb.scienze.univr.it/ncrnadb/. It is equipped with three interfaces: web based, command-line, and a Cytoscape app called ncINetView. By accessing only one resource, users can search for ncRNAs and their interactions, build a network annotated with all known ncRNAs and associated diseases, and use all visual and mining features available in Cytoscape.

  12. Reconstruction and Analysis of Human Kidney-Specific Metabolic Network Based on Omics Data

    Directory of Open Access Journals (Sweden)

    Ai-Di Zhang

    2013-01-01

    Full Text Available With the advent of the high-throughput data production, recent studies of tissue-specific metabolic networks have largely advanced our understanding of the metabolic basis of various physiological and pathological processes. However, for kidney, which plays an essential role in the body, the available kidney-specific model remains incomplete. This paper reports the reconstruction and characterization of the human kidney metabolic network based on transcriptome and proteome data. In silico simulations revealed that house-keeping genes were more essential than kidney-specific genes in maintaining kidney metabolism. Importantly, a total of 267 potential metabolic biomarkers for kidney-related diseases were successfully explored using this model. Furthermore, we found that the discrepancies in metabolic processes of different tissues are directly corresponding to tissue's functions. Finally, the phenotypes of the differentially expressed genes in diabetic kidney disease were characterized, suggesting that these genes may affect disease development through altering kidney metabolism. Thus, the human kidney-specific model constructed in this study may provide valuable information for the metabolism of kidney and offer excellent insights into complex kidney diseases.

  14. Potential benefit of the CT adaptive statistical iterative reconstruction method for pediatric cardiac diagnosis

    Science.gov (United States)

    Miéville, Frédéric A.; Ayestaran, Paul; Argaud, Christophe; Rizzo, Elena; Ou, Phalla; Brunelle, Francis; Gudinchet, François; Bochud, François; Verdun, Francis R.

    2010-04-01

    Adaptive Statistical Iterative Reconstruction (ASIR) is a new image reconstruction technique recently introduced by General Electric (GE). This technique, when combined with a conventional filtered back-projection (FBP) approach, is able to improve image noise reduction. To quantify the benefits provided by the ASIR method for image quality and dose reduction with respect to pure FBP, the standard deviation (SD), the modulation transfer function (MTF), the noise power spectrum (NPS), the image uniformity and the noise homogeneity were examined. Measurements were performed on a quality control phantom while varying the CT dose index (CTDIvol) and the reconstruction kernels. A 64-MDCT was employed and raw data were reconstructed with different percentages of ASIR on a CT console dedicated to ASIR reconstruction. Three radiologists also assessed a pediatric cardiac exam reconstructed with different ASIR percentages using the visual grading analysis (VGA) method. For the standard, soft and bone reconstruction kernels, the SD is reduced as the ASIR percentage increases up to 100%, with a greater benefit at low CTDIvol. MTF medium frequencies were slightly enhanced and modifications of the NPS curve shape were observed. However, for the pediatric cardiac CT exam, VGA scores indicate an upper limit to the ASIR benefit: 40% ASIR was observed to be the best trade-off between noise reduction and the clinical realism of organ images. Using the phantom results, 40% ASIR corresponded to an estimated dose reduction of 30% under pediatric cardiac protocol conditions. In spite of this discrepancy between phantom and clinical results, the ASIR method is an important option when considering the reduction of radiation dose, especially for pediatric patients.

  15. Control and estimation methods over communication networks

    CERN Document Server

    Mahmoud, Magdi S

    2014-01-01

    This book provides a rigorous framework in which to study problems in the analysis, stability and design of networked control systems. Four dominant sources of difficulty are considered: packet dropouts, communication bandwidth constraints, parametric uncertainty, and time delays. Past methods and results are reviewed from a contemporary perspective, present trends are examined, and future possibilities proposed. Emphasis is placed on robust and reliable design methods. New control strategies for improving the efficiency of sensor data processing and reducing associated time delay are presented. The coverage provided features: an overall assessment of recent and current fault-tolerant control algorithms; treatment of several issues arising at the junction of control and communications; key concepts followed by their proofs and efficient computational methods for their implementation; and simulation examples (including TrueTime simulations) to...

  16. Anatomical image-guided fluorescence molecular tomography reconstruction using kernel method

    Science.gov (United States)

    Baikejiang, Reheman; Zhao, Yue; Fite, Brett Z.; Ferrara, Katherine W.; Li, Changqing

    2017-05-01

    Fluorescence molecular tomography (FMT) is an important in vivo imaging modality to visualize physiological and pathological processes in small animals. However, FMT reconstruction is ill-posed and ill-conditioned due to strong optical scattering in deep tissues, which results in poor spatial resolution. It is well known that FMT image quality can be improved substantially by applying the structural guidance in the FMT reconstruction. An approach to introducing anatomical information into the FMT reconstruction is presented using the kernel method. In contrast to conventional methods that incorporate anatomical information with a Laplacian-type regularization matrix, the proposed method introduces the anatomical guidance into the projection model of FMT. The primary advantage of the proposed method is that it does not require segmentation of targets in the anatomical images. Numerical simulations and phantom experiments have been performed to demonstrate the proposed approach's feasibility. Numerical simulation results indicate that the proposed kernel method can separate two FMT targets with an edge-to-edge distance of 1 mm and is robust to false-positive guidance and inhomogeneity in the anatomical image. For the phantom experiments with two FMT targets, the kernel method has reconstructed both targets successfully, which further validates the proposed kernel method.
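
    One simple way to build such a kernel from an anatomical image, loosely following the description above (the paper's exact kernel and sparsification are not given here), is a Gaussian similarity over per-node anatomical intensities truncated to nearest neighbors:

    ```python
    import numpy as np

    def anatomical_kernel(features, sigma=1.0, k=10):
        """Gaussian kernel matrix built from anatomical-image features (here a
        1D array of CT/MRI intensities, one per FMT node), sparsified to the k
        most similar neighbors. The unknown image is then written x = K @ alpha,
        so the anatomy enters the projection model A @ K rather than a
        Laplacian-type regularizer."""
        d2 = (features[:, None] - features[None, :]) ** 2   # squared feature distance
        K = np.exp(-d2 / (2.0 * sigma**2))
        idx = np.argsort(-K, axis=1)[:, :k]                 # keep k most similar
        mask = np.zeros_like(K, dtype=bool)
        np.put_along_axis(mask, idx, True, axis=1)
        return np.where(mask, K, 0.0)

    # A solver would then fit alpha in min ||(A @ K) alpha - y||^2 and set x = K @ alpha.
    ```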

  17. Methods of bronchial tree reconstruction and camera distortion corrections for virtual endoscopic environments.

    Science.gov (United States)

    Socha, Mirosław; Duplaga, Mariusz; Turcza, Paweł

    2004-01-01

    The use of three-dimensional visualization of anatomical structures in diagnostics and medical training is growing. The main components of virtual respiratory tract environments include reconstruction and simulation algorithms as well as methods for correcting endoscope camera distortions in the case of virtually-enhanced navigation systems. Reconstruction methods usually rely on initial computed tomography (CT) image segmentation to trace the contours of the tracheobronchial tree, which in turn are used in the visualization process. The main segmentation methods, including relatively simple approaches such as adaptive region-growing algorithms and more complex methods, e.g. hybrid algorithms based on region growing and mathematical morphology, are described in this paper. The errors and difficulties in the process of tracheobronchial tree reconstruction depend on the distortions arising during CT image acquisition. They are usually related to the inability to exactly fulfil the conditions of the sampling theorem. Other forms of distortion and noise, such as additive white Gaussian noise, may also appear. The impact of these distortions on segmentation and reconstruction may be diminished through appropriately selected image prefiltering, which is also demonstrated in this paper. Methods of surface rendering (ray-casting, ray-tracing techniques) and volume rendering are shown, with special focus on aspects of hardware and software implementations. Finally, methods for the correction and simulation of camera distortions are presented. The mathematical camera models, the scope of their applications and the types of distortions have also been indicated.
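
    As an illustration of the simplest segmentation approach mentioned above, here is a minimal adaptive region-growing sketch for a CT volume; the tolerance and 6-connectivity are illustrative choices, not the paper's settings.

    ```python
    import numpy as np
    from collections import deque

    def region_grow(volume, seed, tol=100):
        """Adaptive region growing: accept 6-connected voxels whose intensity
        stays within +/- tol (e.g. HU) of the running mean of the region."""
        mask = np.zeros(volume.shape, dtype=bool)
        q = deque([seed]); mask[seed] = True
        total, count = float(volume[seed]), 1
        while q:
            z, y, x = q.popleft()
            for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
                n = (z + dz, y + dy, x + dx)
                if all(0 <= n[i] < volume.shape[i] for i in range(3)) and not mask[n]:
                    if abs(float(volume[n]) - total / count) <= tol:
                        mask[n] = True
                        total += float(volume[n]); count += 1
                        q.append(n)
        return mask
    ```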

  18. Convergence analysis for column-action methods in image reconstruction

    DEFF Research Database (Denmark)

    Elfving, Tommy; Hansen, Per Christian; Nikazad, Touraj

    2016-01-01

    Column-oriented versions of algebraic iterative methods are interesting alternatives to their row-version counterparts: they converge to a least squares solution, and they provide a basis for saving computational work by skipping small updates. In this paper we consider the case of noise-free data...

  19. Cosmic web reconstruction through density ridges: method and algorithm

    Science.gov (United States)

    Chen, Yen-Chi; Ho, Shirley; Freeman, Peter E.; Genovese, Christopher R.; Wasserman, Larry

    2015-11-01

    The detection and characterization of filamentary structures in the cosmic web allows cosmologists to constrain parameters that dictate the evolution of the Universe. While many filament estimators have been proposed, they generally lack estimates of uncertainty, reducing their inferential power. In this paper, we demonstrate how one may apply the subspace constrained mean shift (SCMS) algorithm (Ozertem & Erdogmus 2011; Genovese et al. 2014) to uncover filamentary structure in galaxy data. The SCMS algorithm is a gradient ascent method that models filaments as density ridges, one-dimensional smooth curves that trace high-density regions within the point cloud. We also demonstrate how augmenting the SCMS algorithm with bootstrap-based methods of uncertainty estimation allows one to place uncertainty bands around putative filaments. We apply the SCMS first to a data set generated from the Voronoi model; the density ridges show strong agreement with the filaments from the Voronoi method. We then apply the SCMS method to data sets sampled from a P3M N-body simulation, with galaxy number densities consistent with SDSS and WFIRST-AFTA, and to LOWZ and CMASS data from the Baryon Oscillation Spectroscopic Survey (BOSS). To further assess the efficacy of SCMS, we compare the relative locations of BOSS filaments with galaxy clusters in the redMaPPer catalogue, and find that redMaPPer clusters are significantly closer to filaments than to randomly selected galaxies.
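
    A compact sketch of one SCMS update, following the published algorithm in spirit (Gaussian kernel; the bandwidth and convergence handling are illustrative): the mean-shift vector is projected onto the local Hessian eigen-directions with the smallest eigenvalues, so that iterated points converge onto density ridges.

    ```python
    import numpy as np

    def scms_step(x, data, h=1.0, ridge_dim=1):
        """One subspace constrained mean shift update for a point x (shape (D,)),
        given the sample `data` (shape (n, D)) and kernel bandwidth h."""
        diff = data - x                                   # (n, D)
        w = np.exp(-0.5 * np.sum(diff**2, axis=1) / h**2)
        ms = (w[:, None] * data).sum(0) / w.sum() - x     # mean-shift vector
        # Hessian of the Gaussian KDE at x (up to a positive constant factor)
        D = x.size
        H = (w[:, None, None] * (np.einsum('ni,nj->nij', diff, diff) / h**2
                                 - np.eye(D))).sum(0)
        vals, vecs = np.linalg.eigh(H)                    # ascending eigenvalues
        V = vecs[:, :D - ridge_dim]                       # smallest-eigenvalue directions
        return x + V @ (V.T @ ms)                         # constrained mean-shift step

    # Iterate scms_step from each starting point until the projected step is tiny.
    ```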

  20. A new method to reconstruct the structure from crystal images

    NARCIS (Netherlands)

    Li, Y

    2017-01-01

    Biological molecules, especially proteins, have special and important functions. We study their structure to understand their functions, and further to make applications, such as in medical research. The routine method is diffraction, but it does not work for molecules which cannot be grown into crystals and...

  1. Irrigation network design and reconstruction and its analysis by simulation model

    Directory of Open Access Journals (Sweden)

    Čistý Milan

    2014-06-01

    Full Text Available There are many problems related to pipe network rehabilitation, the main one being how to increase the hydraulic capacity of a system. Because of its complexity, conventional optimization techniques are poorly suited to this task. In recent years some successful attempts to apply modern heuristic methods to this problem have been published. The main part of the paper deals with applying such a technique, namely the harmony search methodology, to network rehabilitation optimization considering both the technical and economic aspects of the problem. A case study of a sprinkler irrigation system is presented in detail. Two alternatives of the rehabilitation design are compared. A modified linear programming method is used first, proposing new diameters in the existing network so that it can satisfy the increased demand with unchanged topology. This solution is contrasted with a looped one obtained using a harmony search algorithm.
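
    For readers unfamiliar with the metaheuristic, a generic harmony search loop over discrete pipe diameters looks roughly like the sketch below; the cost function, which would wrap a hydraulic solver plus constraint penalties, is assumed given, and all parameter values are illustrative.

    ```python
    import numpy as np

    def harmony_search(cost, diameters, n_pipes, hms=20, hmcr=0.9, par=0.3, iters=2000):
        """Harmony search over discrete pipe diameters (a sorted 1D array).
        `cost` maps a diameter vector to a penalized cost to be minimized."""
        rng = np.random.default_rng(0)
        memory = [rng.choice(diameters, n_pipes) for _ in range(hms)]
        scores = [cost(h) for h in memory]
        for _ in range(iters):
            new = np.empty(n_pipes)
            for j in range(n_pipes):
                if rng.random() < hmcr:                      # memory consideration
                    new[j] = memory[rng.integers(hms)][j]
                    if rng.random() < par:                   # pitch adjustment
                        k = np.searchsorted(diameters, new[j])
                        k = int(np.clip(k + rng.choice([-1, 1]), 0, len(diameters) - 1))
                        new[j] = diameters[k]
                else:                                        # random selection
                    new[j] = rng.choice(diameters)
            s = cost(new)
            worst = int(np.argmax(scores))
            if s < scores[worst]:                            # replace worst harmony
                memory[worst], scores[worst] = new, s
        return memory[int(np.argmin(scores))]
    ```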

  2. Development of a method for reconstruction of crowded NMR spectra from undersampled time-domain data

    Energy Technology Data Exchange (ETDEWEB)

    Ueda, Takumi; Yoshiura, Chie; Matsumoto, Masahiko; Kofuku, Yutaka; Okude, Junya; Kondo, Keita; Shiraishi, Yutaro [The University of Tokyo, Graduate School of Pharmaceutical Sciences (Japan); Takeuchi, Koh [Japan Science and Technology Agency, Precursory Research for Embryonic Science and Technology (Japan); Shimada, Ichio, E-mail: shimada@iw-nmr.f.u-tokyo.ac.jp [The University of Tokyo, Graduate School of Pharmaceutical Sciences (Japan)

    2015-05-15

    NMR is a unique methodology for obtaining information about the conformational dynamics of proteins in heterogeneous biomolecular systems. In various NMR methods, such as transferred cross-saturation, relaxation dispersion, and paramagnetic relaxation enhancement experiments, fast determination of the signal intensity ratios in the NMR spectra with high accuracy is required for analyses of targets with low yields and stabilities. However, conventional methods for the reconstruction of spectra from undersampled time-domain data, such as linear prediction, spectroscopy with integration of frequency and time domain, analysis of Fourier, and compressed sensing, were not effective for the accurate determination of the signal intensity ratios of the crowded two-dimensional spectra of proteins. Here, we developed an NMR spectra reconstruction method, "conservation of experimental data in analysis of Fourier" (Co-ANAFOR), to reconstruct the crowded spectra from the undersampled time-domain data. The number of sampling points required for the transferred cross-saturation experiments between membrane proteins, photosystem I and cytochrome b6f, and their ligand, plastocyanin, with Co-ANAFOR was half of that needed for linear prediction, and the peak height reduction ratios of the spectra reconstructed from truncated time-domain data by Co-ANAFOR were more accurate than those reconstructed from non-uniformly sampled data by compressed sensing.

  3. MBVCNN: Joint convolutional neural networks method for image recognition

    Science.gov (United States)

    Tong, Tong; Mu, Xiaodong; Zhang, Li; Yi, Zhaoxiang; Hu, Pei

    2017-05-01

    Objects to be recognized in images are generally rectangular, while the inputs to convolutional neural networks are square. To address this mismatch, an object recognition model was put forward that uses the BING method for objectness estimation and a vectorization of convolutional neural networks to feed square sub-images into the networks, thereby building joint convolutional neural networks that accept images of multiple sizes. Experiments verified that the accuracy of multi-object image recognition was improved by 6.70% compared with a single vectorized convolutional neural network. The joint convolutional neural network method can therefore enhance image recognition accuracy, especially for targets of rectangular shape.

  4. Reconstruction of the yeast protein-protein interaction network involved in nutrient sensing and global metabolic regulation

    DEFF Research Database (Denmark)

    Nandy, Subir Kumar; Jouhten, Paula; Nielsen, Jens

    2010-01-01

    BACKGROUND: Several protein-protein interaction studies have been performed for the yeast Saccharomyces cerevisiae using different high-throughput experimental techniques. All these results are collected in the BioGRID database, and the SGD database provides detailed annotation of the different proteins. ... Here we reconstructed the nutrient-sensing and metabolic regulatory signal transduction pathways (STP) operating in Saccharomyces cerevisiae. The reconstructed STP network includes a full protein-protein interaction network including the key nodes Snf1, Tor1, Hog1 and Pka1. The network includes a total of 623 structural open reading frames (ORFs...

  5. [Molecular mechanism of brain regeneration and reconstruction of dopaminergic neural network in planarians].

    Science.gov (United States)

    Nishimura, Kaneyasu; Kitamura, Yoshihisa; Agata, Kiyokazu

    2008-04-01

    Recently, planarians have received much attention because of their contributions to research on the basic science of stem cell systems, neural regeneration, and regenerative medicine. Planarians can regenerate complete organs, including a well-organized central nervous system (CNS), within about 7 days. This high regenerative capacity is supported by pluripotent stem cells present in the mesenchymal space throughout the body. Interestingly, planarians can regenerate their brain via a molecular mechanism similar to that of mammalian brain development. The regeneration process of the planarian brain can be divided into five steps: (1) anterior blastema formation, (2) brain rudiment formation, (3) brain pattern formation, (4) neural network formation, and (5) functional recovery, with several kinds of genes and molecular cascades acting at each step. Recently, we identified a planarian tyrosine hydroxylase (TH) gene, encoding the rate-limiting enzyme for dopamine (DA) biosynthesis, and produced TH-knockdown planarians by the RNA interference technique. Studies of TH-knockdown planarians showed that DA plays an important role in modulating behavioral movement in planarians. Using a monoclonal anti-planarian TH antibody, we also found that dopaminergic neurons are mainly localized in the planarian brain. When the planarian body was amputated, newly generated TH-immunopositive neurons were detected in the anterior region at day 3 of regeneration (i.e., the period of neural network formation), and the TH-immunopositive axonal and dendritic neural network in the CNS was reconstructed during days 5-7 of regeneration. In this article, recent advances in elucidating the molecular mechanism of planarian brain regeneration and dopaminergic neuron formation are reviewed, and future prospects for the contribution of this system to basic and medical science research are described.

  6. An improved method for network congestion control

    Science.gov (United States)

    Qiao, Xiaolin

    2013-03-01

    The rapid progress of wireless network technology has brought great convenience to people's life and work. However, because of its openness, the mobility of terminals and the changing topology, a wireless network is more susceptible to security attacks. Authentication and key agreement are the basis of network security. An authentication and key agreement mechanism can prevent unauthorized users from accessing the network, prevent a malicious network from deceiving lawful users, encrypt session data using the exchanged key, and provide identification of the data origin. Based on the characteristics of wireless networks, this paper proposes a key agreement protocol for wireless networks. The authentication of the protocol is based on elliptic curve cryptosystems and Diffie-Hellman.
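
    The protocol itself is not reproduced in the record; as a minimal sketch of the elliptic-curve Diffie-Hellman exchange on which such schemes rest, using the Python cryptography package:

    ```python
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Each party generates an ephemeral key pair and exchanges public keys.
    alice = ec.generate_private_key(ec.SECP256R1())
    bob = ec.generate_private_key(ec.SECP256R1())

    # Both sides compute the same shared secret from their own private key and
    # the peer's public key, then derive a session key with HKDF.
    secret_a = alice.exchange(ec.ECDH(), bob.public_key())
    secret_b = bob.exchange(ec.ECDH(), alice.public_key())
    assert secret_a == secret_b

    session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                       info=b"wireless session").derive(secret_a)
    ```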

  7. One step linear reconstruction method for continuous wave diffuse optical tomography

    Science.gov (United States)

    Ukhrowiyah, N.; Yasin, M.

    2017-09-01

    A one-step linear reconstruction method for continuous wave diffuse optical tomography is proposed and demonstrated on a polyvinyl chloride based material and a breast phantom. The approximation used in this method consists of selecting a regularization coefficient and evaluating the difference between two states, corresponding to the data acquired without and with a change in optical properties. The method is used to recover optical parameters from measured boundary data of light propagation in the object. The approach is demonstrated with simulation and experimental data: a numerical object is used to produce the simulation data, while the polyvinyl chloride based material and the breast phantom sample are used to produce the experimental data. Comparisons between the experimental and simulation results are conducted to validate the proposed method. The results show that the reconstructed image produced by the one-step linear reconstruction method is almost the same as the original object. This approach provides a means of imaging that is sensitive to changes in optical properties, which may be particularly useful for functional imaging with continuous wave diffuse optical tomography in the early diagnosis of breast cancer.
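
    A difference reconstruction of this kind is often realized as a single regularized linear solve; the sketch below assumes a known Jacobian J of the forward model and is our illustration, not the paper's code.

    ```python
    import numpy as np

    def one_step_linear(J, dy, alpha=1e-2):
        """Single Tikhonov-regularized solve for the change dx in optical
        properties, given the change dy in boundary data (with minus without
        the perturbation): (J^T J + alpha * I) dx = J^T dy."""
        n = J.shape[1]
        return np.linalg.solve(J.T @ J + alpha * np.eye(n), J.T @ dy)
    ```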

  8. A Data-Model Comparison over Europe using a new 2000-yr Summer Temperature Reconstruction from the PAGES 2k Regional Network and Last-Millennium GCM Simulations

    Science.gov (United States)

    Smerdon, Jason; Werner, Johannes; Fernandez-Donado, Laura; Buntgen, Ulf; Charpentier Ljungqvist, Fredrik; Esper, Jan; Fidel Gonzalez-Rouco, J.; Luterbacher, Juerg; McCarroll, Danny; Wagner, Sebastian; Wahl, Eugene; Wanner, Heinz; Zorita, Eduardo

    2013-04-01

    A new reconstruction of European summer (JJA) land temperatures is presented and compared to 37 forced transient simulations of the last millennium from coupled General Circulation Models (CGCMs). The reconstructions are derived from eleven annually resolved tree-ring and documentary records from ten European countries/regions, compiled as part of the Euro_Med working group contribution to the PAGES 2k Regional Network. Records were selected based upon their summer temperature signal, annual resolution, and time-continuous sampling. All tree-ring data were detrended using the Regional Curve Standardization (RCS) method to retain low-frequency variance in the resulting mean chronologies. A nested Composite-Plus-Scale (CPS) mean temperature reconstruction extending from 138 B.C.E. to 2003 C.E. was derived using nine nests reflecting the availability of predictors back in time. Each nest was calculated using a weighted composite based on the correlation of each proxy with the CRUTEM4v mean European JJA land temperature (35°-70°N, 10°W-40°E). The CPS methodology was implemented using a sliding calibration period, initially extending from 1850-1953 C.E. and incrementing by one year until reaching the final period of 1900-2003 C.E. Within each calibration step, the 50 years excluded from calibration were used for validation. Validation statistics across all reconstruction ensemble members within each nest indicate skillful reconstructions (RE: 0.42-0.64; CE: 0.26-0.54) and are all above the maximum validation statistics achieved in an ensemble of red noise benchmarking experiments. A gridded (5°x5°) European summer (JJA) temperature reconstruction back to 750 C.E. was derived using Bayesian inference together with a localized stochastic description of the underlying processes. Instrumental data are JJA means from the 5° European land grid cells in the CRUTEM4v dataset. Predictive experiments using the full proxy data were made, resulting in a multivariate
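
    A stripped-down illustration of the CPS idea described above (correlation-weighted compositing followed by rescaling to the instrumental calibration period; all names below are ours):

    ```python
    import numpy as np

    def cps_reconstruction(proxies, target, calib):
        """Composite-plus-scale sketch. proxies: (n_years, n_records) array of
        standardized proxy series; target: instrumental JJA series (n_years,),
        valid on the boolean calibration mask `calib`. Each proxy is weighted
        by its calibration-period correlation, and the composite is rescaled
        to the instrumental mean and standard deviation."""
        w = np.array([np.corrcoef(p[calib], target[calib])[0, 1] for p in proxies.T])
        comp = proxies @ w / np.abs(w).sum()
        c = comp[calib]
        return (comp - c.mean()) / c.std() * target[calib].std() + target[calib].mean()
    ```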

  9. Reconstruction from Uniformly Attenuated SPECT Projection Data Using the DBH Method

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Qiu; You, Jiangsheng; Zeng, Gengsheng L.; Gullberg, Grant T.

    2008-03-20

    An algorithm was developed for the two-dimensional (2D) reconstruction of truncated and non-truncated uniformly attenuated data acquired from single photon emission computed tomography (SPECT). The algorithm is able to reconstruct data from half-scan (180°) and short-scan (180° + fan angle) acquisitions for parallel- and fan-beam geometries, respectively, as well as data from full-scan (360°) acquisitions. The algorithm is a derivative, backprojection, and Hilbert transform (DBH) method, which involves the backprojection of differentiated projection data followed by an inversion of the finite weighted Hilbert transform. The kernel of the inverse weighted Hilbert transform is solved numerically using matrix inversion. Numerical simulations confirm that the DBH method provides accurate reconstructions from half-scan and short-scan data, even when there is truncation. However, as the attenuation increases, finer data sampling is required.

  10. Research on assessment and improvement method of remote sensing image reconstruction

    Science.gov (United States)

    Sun, Li; Hua, Nian; Yu, Yanbo; Zhao, Zhanping

    2018-01-01

    Remote sensing image quality assessment and improvement is an important part of image processing. Generally, the use of compressive sampling theory in a remote sensing imaging system can compress images while sampling, which improves efficiency. In this paper, a two-dimensional principal component analysis (2DPCA) method is proposed to reconstruct the remote sensing image and improve the quality of the compressed image; it retains the useful information of the image while restraining noise. The factors influencing remote sensing image quality are then analyzed, and evaluation parameters for quantitative assessment are introduced. On this basis, the quality of the reconstructed images is evaluated and the influence of the different factors on the reconstruction is analyzed, providing meaningful reference data for enhancing the quality of remote sensing images. The experimental results show that the evaluation results fit human visual perception, and that the proposed method has good application value in the field of remote sensing image processing.
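
    The essence of 2DPCA, as opposed to classical PCA on vectorized images, is an image-sized covariance matrix built directly from the 2D images; a minimal sketch (the component count is an illustrative choice):

    ```python
    import numpy as np

    def twodpca_reconstruct(images, n_comp=10):
        """2DPCA sketch: form the image covariance matrix from mean-centered
        images (array of shape (n, h, w)), keep the leading eigenvectors, and
        project/back-project each image. Truncating the basis suppresses noise
        while keeping the dominant structure."""
        mean = images.mean(axis=0)
        G = sum((a - mean).T @ (a - mean) for a in images) / len(images)
        vals, vecs = np.linalg.eigh(G)            # eigenvalues in ascending order
        X = vecs[:, -n_comp:]                     # leading eigenvectors
        return np.array([a @ X @ X.T for a in images])
    ```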

  11. Application of information theory methods to food web reconstruction

    Science.gov (United States)

    Moniz, L.J.; Cooch, E.G.; Ellner, S.P.; Nichols, J.D.; Nichols, J.M.

    2007-01-01

    In this paper we use information theory techniques on time series of abundances to determine the topology of a food web. At the outset, the food web participants (two consumers, two resources) are known; in addition we know that each consumer prefers one of the resources over the other. However, we do not know which consumer prefers which resource, or whether this preference is absolute (i.e., whether or not the consumer will consume the non-preferred resource). Although the consumers and resources are identified at the beginning of the experiment, we also provide evidence that the consumers are not resources for each other, and that the resources do not consume each other. We do show that there is significant mutual information between resources; the model is seasonally forced and some shared information between resources is expected. Similarly, because the model is seasonally forced, we expect shared information between consumers as they respond to the forcing of the resources. The model that we consider does include noise, and in an effort to demonstrate that these methods may be of some use beyond model data, we show the efficacy of our methods with decreasing time series size; in this particular case we obtain reasonably clear results with a time series length of 400 points, which approaches the lengths of ecological time series from real systems.
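
    The core quantity is easily estimated; a minimal histogram-based mutual information estimator for two abundance series might look like this (the bin count is illustrative):

    ```python
    import numpy as np

    def mutual_information(x, y, bins=16):
        """Histogram estimate of the mutual information (in nats) between two
        time series; MI well above zero suggests a dependency such as a
        consumer-resource link."""
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy /= pxy.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        nz = pxy > 0                                   # avoid log(0) terms
        return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())
    ```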

  12. An improved schlieren method for measurement and automatic reconstruction of the far-field focal spot.

    Directory of Open Access Journals (Sweden)

    Zhengzhou Wang

    Full Text Available The schlieren method of measuring far-field focal spots offers many advantages at the Shenguang III laser facility, such as low cost and automatic laser-path collimation. However, current methods of far-field focal spot measurement often suffer from low precision and efficiency because the final focal spot is merged manually, reducing the accuracy of reconstruction. In this paper, we introduce an improved schlieren method to construct high dynamic-range images of far-field focal spots and to improve reconstruction accuracy and efficiency. First, a detection method based on weak light beam sampling and magnification imaging was designed; images of the main and side lobes of the focused laser irradiance in the far field were obtained using two scientific CCD cameras. Second, using a self-correlation template matching algorithm, a circle the same size as the schlieren ball was cut from the main-lobe image and shifted over a 100×100 pixel region; the position giving the largest correlation coefficient between the side-lobe image and the cut main-lobe image was identified as the best matching point. Finally, the least squares method was used to fit the center of the side-lobe schlieren ball, with an error of less than 1 pixel. The experimental results show that this method enables accurate, high dynamic-range measurement of a far-field focal spot and automatic image reconstruction. Because the best matching point is obtained through image processing rather than manual splicing, this method improves the efficiency of focal-spot reconstruction and thus offers better experimental precision.
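
    The matching step can be illustrated with OpenCV's normalized cross-correlation template matching; the file names and the exact correlation variant below are assumptions, not the paper's specification.

    ```python
    import cv2

    # Hypothetical inputs: a grayscale patch cut from the main-lobe image (the
    # "schlieren ball" region) and the image in which its best match is sought.
    image = cv2.imread("focal_spot_image.png", cv2.IMREAD_GRAYSCALE)
    patch = cv2.imread("schlieren_ball_patch.png", cv2.IMREAD_GRAYSCALE)

    scores = cv2.matchTemplate(image, patch, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_xy = cv2.minMaxLoc(scores)  # best_xy = (x, y) of top-left corner
    print(best_score, best_xy)
    ```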

  13. Comparative analysis of module-based versus direct methods for reverse-engineering transcriptional regulatory networks

    Directory of Open Access Journals (Sweden)

    Joshi Anagha

    2009-05-01

    Full Text Available Abstract Background: A myriad of methods to reverse-engineer transcriptional regulatory networks have been developed in recent years. Direct methods directly reconstruct a network of pairwise regulatory interactions, while module-based methods predict a set of regulators for modules of coexpressed genes treated as a single unit. To date, there has been no systematic comparison of the relative strengths and weaknesses of both types of methods. Results: We have compared a recently developed module-based algorithm, LeMoNe (Learning Module Networks), to a mutual information based direct algorithm, CLR (Context Likelihood of Relatedness), using benchmark expression data and databases of known transcriptional regulatory interactions for Escherichia coli and Saccharomyces cerevisiae. A global comparison using recall versus precision curves hides the topologically distinct nature of the inferred networks and is not informative about the specific subtasks for which each method is most suited. Analysis of the degree distributions and a regulator-specific comparison show that CLR is 'regulator-centric', making true predictions for a higher number of regulators, while LeMoNe is 'target-centric', recovering a higher number of known targets for fewer regulators, with limited overlap in the predicted interactions between both methods. Detailed biological examples in E. coli and S. cerevisiae are used to illustrate these differences and to show that each method is able to infer parts of the network where the other fails. Biological validation of the inferred networks cautions against over-interpreting recall and precision values computed using incomplete reference networks. Conclusion: Our results indicate that module-based and direct methods retrieve largely distinct parts of the underlying transcriptional regulatory networks. The choice of algorithm should therefore be based on the particular biological problem of interest and not on global metrics which cannot be...

  14. A continuous surface reconstruction method on point cloud captured from a 3D surface photogrammetry system

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Wenyang [Department of Bioengineering, University of California, Los Angeles, California 90095 (United States); Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J.; Sawant, Amit [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas 75390 (United States); Ruan, Dan, E-mail: druan@mednet.ucla.edu [Department of Bioengineering, University of California, Los Angeles, California 90095 and Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)

    2015-11-15

    Purpose: To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). Methods: The authors have developed a level-set based surface reconstruction method on point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. In contrast to explicit/discrete meshing-type schemes, their continuous representation is particularly advantageous for subsequent surface registration and motion tracking by eliminating the need for maintaining explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiments, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched area had different degrees of noise and missing data, since VisionRT has difficulties in detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated the reconstructed surfaces by comparing them against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiments, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground truth, the authors qualitatively validated the reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. Results: On phantom point clouds, their method achieved submillimeter...

  15. Spectrum reconstruction method based on the detector response model calibrated by x-ray fluorescence.

    Science.gov (United States)

    Li, Ruizhe; Li, Liang; Chen, Zhiqiang

    2017-02-07

    Accurate estimation of distortion-free spectra is important but difficult in various applications, especially for spectral computed tomography. Two key problems must be solved to reconstruct the incident spectrum. One is the acquisition of the detector energy response. It can be calculated by Monte Carlo simulation, which requires detailed modeling of the detector system and a high computational power. It can also be acquired by establishing a parametric response model and be calibrated using monochromatic x-ray sources, such as synchrotron sources or radioactive isotopes. However, these monochromatic sources are difficult to obtain. Inspired by x-ray fluorescence (XRF) spectrum modeling, we propose a feasible method to obtain the detector energy response based on an optimized parametric model for CdZnTe or CdTe detectors. The other key problem is the reconstruction of the incident spectrum with the detector response. Directly obtaining an accurate solution from noisy data is difficult because the reconstruction problem is severely ill-posed. Different from the existing spectrum stripping method, a maximum likelihood-expectation maximization iterative algorithm is developed based on the Poisson noise model of the system. Simulation and experiment results show that our method is effective for spectrum reconstruction and markedly increases the accuracy of XRF spectra compared with the spectrum stripping method. The applicability of the proposed method is discussed, and promising results are presented.
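
    A minimal sketch of the ML-EM iteration for this Poisson deconvolution problem; the response matrix R and iteration count are assumed inputs, and this is the generic multiplicative update, not necessarily the authors' exact implementation.

    ```python
    import numpy as np

    def mlem_spectrum(R, counts, n_iter=500):
        """ML-EM deconvolution for Poisson data. R[i, j] is the detector
        response (probability that a photon in true energy bin j is recorded
        in channel i); counts is the measured channel histogram. The
        multiplicative update keeps the incident-spectrum estimate non-negative."""
        s = np.full(R.shape[1], counts.sum() / R.shape[1])   # flat initial guess
        sens = R.sum(axis=0)                                 # sensitivity per energy bin
        for _ in range(n_iter):
            expected = R @ s
            expected[expected == 0] = 1e-12                  # guard against division by zero
            s *= (R.T @ (counts / expected)) / sens
        return s
    ```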

  16. Image reconstruction for ultrasound computed tomography by use of the regularized dual averaging method

    Science.gov (United States)

    Matthews, Thomas P.; Wang, Kun; Li, Cuiping; Duric, Neb; Anastasio, Mark A.

    2017-03-01

    Waveform inversion methods can produce high-resolution reconstructed sound speed images for ultrasound computed tomography; however, they are very computationally expensive. Source encoding methods can reduce this computational cost by formulating the image reconstruction problem as a stochastic optimization problem. Here, we solve this optimization problem by the regularized dual averaging method instead of the more commonly used stochastic gradient descent. This new optimization method allows the use of non-smooth regularization functions and treats the stochastic data fidelity term in the objective function separately from the deterministic regularization function. This allows noise to be mitigated more effectively. The method further exhibits lower variance in the estimated sound speed distributions across iterations when line search methods are employed.

  17. Phase derivative method for reconstruction of slightly off-axis digital holograms.

    Science.gov (United States)

    Guo, Cheng-Shan; Wang, Ben-Yi; Sha, Bei; Lu, Yu-Jie; Xu, Ming-Yuan

    2014-12-15

    A phase derivative (PD) method is proposed for the reconstruction of off-axis holograms. In this method, a phase distribution of the tested object wave constrained within 0 to pi radians is first worked out by a simple analytical formula; it is then corrected to its proper range from -pi to pi according to the sign characteristics of its first-order derivative. A theoretical analysis indicates that this PD method is particularly suitable for the reconstruction of slightly off-axis holograms, because in principle it only requires the spatial frequency of the reference beam to be larger than the spatial frequency of the tested object wave. In addition, because the PD method is a purely local method with no need for any integral operation or phase shifting algorithm in the process of phase retrieval, it can reduce the computational load and memory requirements of the image processing system. Some experimental results are given to demonstrate the feasibility of the method.

  18. Two non-probabilistic methods for uncertainty analysis in accident reconstruction.

    Science.gov (United States)

    Zou, Tiefang; Yu, Zhi; Cai, Ming; Liu, Jike

    2010-05-20

    There are many uncertain factors in traffic accidents, and it is necessary to study their influence in order to improve the accuracy and confidence of accident reconstruction results. It is difficult to evaluate the uncertainty of calculation results if the expression of the reconstruction model is implicit and/or the distributions of the independent variables are unknown. Based on interval mathematics, convex models and design of experiments, two non-probabilistic methods were proposed. These two methods are efficient under conditions where existing uncertainty analysis methods can hardly work, because the accident reconstruction model is implicit and/or the distributions of independent variables are unknown; parameter sensitivities can be obtained from them as well. An accident case is investigated with the methods proposed in the paper. Results show that the convex models method is the most conservative, and the solution of the interval analysis method is very close to those of the other methods. These two methods are a beneficial supplement to the existing uncertainty analysis methods.
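
    A crude version of the interval idea: when the model is available only as a black box, bounds can be estimated by evaluating it over a full-factorial design on the parameter box (exact for monotone models; non-monotone models need a denser grid). The braking model below is a textbook illustration, not the case from the paper.

    ```python
    import numpy as np
    from itertools import product

    def interval_bounds(model, intervals):
        """Evaluate a black-box model at every vertex of the parameter box
        (a full-factorial design of experiment) and report the extreme outputs."""
        outs = [model(*v) for v in product(*intervals)]
        return min(outs), max(outs)

    # Illustrative model: pre-braking speed from skid length s and friction mu,
    # v = sqrt(2 * mu * g * s), with interval-valued mu and s.
    v_lo, v_hi = interval_bounds(lambda mu, s: np.sqrt(2 * mu * 9.81 * s),
                                 [(0.6, 0.8), (18.0, 22.0)])
    print(v_lo, v_hi)   # bounds on the reconstructed speed (m/s)
    ```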

  20. Sensor Network Information Analytical Methods: Analysis of Similarities and Differences

    Directory of Open Access Journals (Sweden)

    Chen Jian

    2014-04-01

    Full Text Available In the Sensor Network information engineering literature, few references focus on the definition and design of Sensor Network information analytical methods. Among those that do are Munson et al. and the ISO standards on functional size analysis. To avoid inconsistent vocabulary and potentially incorrect interpretation of data, Sensor Network information analytical methods must be better designed, including definitions, analysis principles, analysis rules, and base units. This paper analyzes the similarities and differences across three different views of analytical methods, and uses a process proposed for the design of Sensor Network information analytical methods to analyze two examples of such methods selected from the literature.

  1. The Analog-Method: Reconstruction of highly resolved Atmospheric Forcing Fields

    Science.gov (United States)

    Schenk, Frederik; Zorita, Eduardo

    2010-05-01

    In this study we test and apply a new method to reconstruct highly resolved atmospheric forcing fields for Northern Europe since 1850 AD. As a simple statistical upscaling method, the analog method is used to find the best-fitting atmospheric fields (predictands) for a given variable from long historical station measurements (predictors). The atmospheric fields are taken from a regional climate simulation and serve as a pool of analogs to match the local climate information of station measurements. An important advantage of this non-linear approach is that it conserves the full variability of the reconstruction, which linear regression methods generally underestimate by more than a factor of two. In a first step, the analog method is tested within the model domain, which serves as a surrogate climate. Atmospheric fields (0.25° x 0.25°) of SLP, wind components, temperature, relative humidity, total cloud cover and precipitation are reconstructed by the analog method using different numbers of grid points (5-25) as synthetic stations. Within the regional model, reconstructions of the atmospheric forcing fields for SLP and the wind components show excellent skill (r ~ 0.8) when reconstructed with daily SLP as the predictor. For temperature, relative humidity, total cloud cover and precipitation, SLP is a weak physical predictor, leading to lower skill that is nonetheless significantly better than the climatological mean. Using daily air temperature as the predictor instead of SLP, temperature fields are reconstructed with very good skill (r > 0.5 in summer and > 0.7 in winter). In a second step, the same approach is repeated with real daily SLP data from 23 stations as predictors, spanning the period from 1850 to 2009. Limited by the length of the model simulations, 25-year periods (1958-1982, 1983-2007) are used for calibration and validation. The forcing fields for SLP and wind (u,v) show correlations >0.7 for the validation periods. Although the physical link between SLP and relative
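
    A minimal sketch of the analog selection step as we read it (the array shapes and the squared-error match criterion are assumptions):

        import numpy as np

        # For each day of station observations, pick the model day whose simulated
        # values at the station locations are closest, and use that day's full field.
        def reconstruct_fields(obs, model_at_stations, model_fields):
            """obs: (n_days, n_stations); model_at_stations: (n_pool, n_stations);
            model_fields: (n_pool, ny, nx). Returns (n_days, ny, nx)."""
            recon = np.empty((obs.shape[0],) + model_fields.shape[1:])
            for t, o in enumerate(obs):
                d = np.sum((model_at_stations - o) ** 2, axis=1)  # distance to analogs
                recon[t] = model_fields[np.argmin(d)]             # best-fitting analog
            return recon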

  2. Dynamic analysis of biochemical network using complex network method

    Directory of Open Access Journals (Sweden)

    Wang Shuqiang

    2015-01-01

    Full Text Available In this study, a stochastic biochemical reaction model is proposed based on the law of mass action and complex network theory. The dynamics of the biochemical reaction system are presented as a set of non-linear differential equations and analyzed at the molecular scale. Given the initial state and the evolution rules of the biochemical reaction system, the system can achieve homeostasis. Compared with a random graph, the biochemical reaction network has a larger information capacity and is more efficient in information transmission. This is consistent with the theory of evolution.
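
    For intuition, a toy mass-action system can be written as coupled ODEs and integrated until it settles; the reactions and rate constants below are illustrative assumptions, not the paper's model:

        import numpy as np
        from scipy.integrate import odeint

        # Toy reversible reaction A + B <-> C under the law of mass action.
        k1, k2 = 1.0, 0.5

        def rhs(y, t):
            a, b, c = y
            v1 = k1 * a * b      # forward flux
            v2 = k2 * c          # reverse flux
            return [-v1 + v2, -v1 + v2, v1 - v2]

        t = np.linspace(0, 10, 200)
        y = odeint(rhs, [1.0, 0.8, 0.0], t)  # settles to a steady state (homeostasis)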

  3. Anatomic and histological characteristics of vagina reconstructed by McIndoe method

    Directory of Open Access Journals (Sweden)

    Kozarski Jefta

    2009-01-01

    Full Text Available Background/Aim. Congenital absence of the vagina has been known since ancient Greek times. According to the literature, its incidence is 1/4,000 to 1/20,000. Treatment of this anomaly includes non-operative and operative procedures. The McIndoe procedure uses a split-thickness skin graft by Thiersch. The aim of this study was to establish the anatomic and histological characteristics of vaginas reconstructed by the McIndoe method in Mayer-Küster-Rokitansky-Hauser (MKRH) syndrome and to compare them with normal vaginas. Methods. The study included 21 patients aged 18 and over with the congenital anomaly known as aplasia vaginae within the Mayer-Küster-Rokitansky-Hauser syndrome. The patients were operated on by a plastic surgeon using the McIndoe method. The study was a retrospective review of data from the case histories, objective and gynecological examinations, and cytological analysis of native preparations of vaginal smears (Papanicolaou). Comparatively, 21 females aged 18 and over with normal vaginas were also studied. All the subjects were divided into groups R (reconstructed) and C (control) and into subgroups according to age: up to 30 years (1R, 1C), from 30 to 50 (2R, 2C), and over 50 (3R, 3C). Statistical processing was performed using Student's t-test and the Mann-Whitney U-test. A value of p < 0.05 was considered statistically significant. Results. The results show that there are differences in the depth and width of the reconstructed vagina, but the obtained values are still in the normal range. Cytological differences between a reconstructed and a normal vagina were found. Conclusion. A reconstructed vagina is smaller than a normal one in depth and width, but within the range of normal values. A split-thickness skin graft used in the reconstruction keeps its own cytological, i.e. histological and, thus, biological characteristics.

  4. Reconstruction of cellular signal transduction networks using perturbation assays and linear programming.

    Science.gov (United States)

    Knapp, Bettina; Kaderali, Lars

    2013-01-01

    Perturbation experiments, for example using RNA interference (RNAi), offer an attractive way to elucidate gene function in a high-throughput fashion. The placement of hit genes in their functional context and the inference of underlying networks from such data, however, are challenging tasks. One of the problems in network inference is the exponential number of possible network topologies for a given number of genes. Here, we introduce a novel mathematical approach to address this question. We formulate network inference as a linear optimization problem, which can be solved efficiently even for large-scale systems. We use simulated data to evaluate our approach, and show improved performance over state-of-the-art methods, in particular on larger networks. We achieve increased sensitivity and specificity, as well as a significant reduction in computing time. Furthermore, we show superior performance on noisy data. We then apply our approach to study the intracellular signaling of human primary naïve CD4(+) T-cells, as well as ErbB signaling in trastuzumab-resistant breast cancer cells. In both cases, our approach recovers known interactions and points to additional relevant processes. In ErbB signaling, our results predict an important role of negative and positive feedback in controlling cell cycle progression.
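
    A schematic of a linear-programming formulation of network inference (the variables, tolerance and sparsity objective below are our assumptions, not the authors' exact model):

        import numpy as np
        from scipy.optimize import linprog

        # Unknowns: nonnegative influence weights w_ij. For each perturbation
        # experiment k, the summed incoming influence should reproduce the
        # observed effect within a tolerance eps; minimize sum(w) for sparsity.
        n, eps = 3, 0.1
        X = np.array([[1.0, 0.2, 0.1],      # perturbation patterns, shape (k, n)
                      [0.3, 1.0, 0.4]])
        E = np.array([[0.4, 0.9, 0.5],      # observed effects, shape (k, n)
                      [0.8, 0.6, 1.0]])
        rows, rhs = [], []
        for k in range(X.shape[0]):
            for i in range(n):
                row = np.zeros(n * n)
                row[i * n:(i + 1) * n] = X[k]          # coefficients of w[i, :]
                rows.append(row)
                rhs.append(E[k, i] + eps)              #  (Xw)_i <= e + eps
                rows.append(-row)
                rhs.append(eps - E[k, i])              # -(Xw)_i <= eps - e
        res = linprog(c=np.ones(n * n), A_ub=np.array(rows), b_ub=np.array(rhs))
        W = res.x.reshape(n, n)   # recovered sparse, nonnegative influence matrix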

  5. Combining inferred regulatory and reconstructed metabolic networks enhances phenotype prediction in yeast.

    Science.gov (United States)

    Wang, Zhuo; Danziger, Samuel A; Heavner, Benjamin D; Ma, Shuyi; Smith, Jennifer J; Li, Song; Herricks, Thurston; Simeonidis, Evangelos; Baliga, Nitin S; Aitchison, John D; Price, Nathan D

    2017-05-01

    Gene regulatory and metabolic network models have been used successfully in many organisms, but inherent differences between them make networks difficult to integrate. Probabilistic Regulation Of Metabolism (PROM) provides a partial solution, but it does not incorporate network inference and underperforms in eukaryotes. We present an Integrated Deduced REgulation And Metabolism (IDREAM) method that combines statistically inferred Environment and Gene Regulatory Influence Network (EGRIN) models with the PROM framework to create enhanced metabolic-regulatory network models. We used IDREAM to predict phenotypes and genetic interactions between transcription factors and genes encoding metabolic activities in the eukaryote Saccharomyces cerevisiae. IDREAM models contain many fewer interactions than PROM and yet produce significantly more accurate growth predictions. IDREAM consistently outperformed PROM using any of three popular yeast metabolic models and across three experimental growth conditions. Importantly, IDREAM's enhanced accuracy makes it possible to identify subtle synthetic growth defects. With experimental validation, these novel genetic interactions involving the pyruvate dehydrogenase complex suggested a new role for the fatty acid-responsive factor Oaf1 in regulating acetyl-CoA production in glucose-grown cells.

  6. Reconstruction of cellular signal transduction networks using perturbation assays and linear programming.

    Directory of Open Access Journals (Sweden)

    Bettina Knapp

    Full Text Available Perturbation experiments, for example using RNA interference (RNAi), offer an attractive way to elucidate gene function in a high-throughput fashion. The placement of hit genes in their functional context and the inference of underlying networks from such data, however, are challenging tasks. One of the problems in network inference is the exponential number of possible network topologies for a given number of genes. Here, we introduce a novel mathematical approach to address this question. We formulate network inference as a linear optimization problem, which can be solved efficiently even for large-scale systems. We use simulated data to evaluate our approach, and show improved performance over state-of-the-art methods, in particular on larger networks. We achieve increased sensitivity and specificity, as well as a significant reduction in computing time. Furthermore, we show superior performance on noisy data. We then apply our approach to study the intracellular signaling of human primary naïve CD4(+) T-cells, as well as ErbB signaling in trastuzumab-resistant breast cancer cells. In both cases, our approach recovers known interactions and points to additional relevant processes. In ErbB signaling, our results predict an important role of negative and positive feedback in controlling cell cycle progression.

  7. Reconstruction of nonstationary sound fields based on the time domain plane wave superposition method.

    Science.gov (United States)

    Zhang, Xiao-Zheng; Thomas, Jean-Hugh; Bi, Chuan-Xing; Pascal, Jean-Claude

    2012-10-01

    A time-domain plane wave superposition method is proposed to reconstruct nonstationary sound fields. In this method, the sound field is expressed as a superposition of time convolutions between the estimated time-wavenumber spectrum of the sound pressure on a virtual source plane and the time-domain propagation kernel at each wavenumber. By discretizing the time convolutions directly, the reconstruction can be carried out iteratively in the time domain, providing the advantage of continuously reconstructing time-dependent pressure signals. In the reconstruction process, Tikhonov regularization is introduced at each time step to obtain a relevant estimate of the time-wavenumber spectrum on the virtual source plane. Because the double infinite integral of the two-dimensional spatial Fourier transform is discretized directly in the wavenumber domain, the proposed method does not need the two-dimensional spatial fast Fourier transform generally used in time-domain holography and real-time near-field acoustic holography; it therefore avoids, in theory, some errors associated with that transform and makes it possible to use an irregular microphone array. The feasibility of the proposed method is demonstrated by numerical simulations and by an experiment with two speakers.
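
    The per-time-step regularization is, generically, a Tikhonov-regularized least-squares solve; a minimal sketch (A and b are placeholders for the discretized propagation kernel and the measured data, not the paper's operators):

        import numpy as np

        def tikhonov_solve(A, b, lam):
            # Solve min ||Ax - b||^2 + lam ||x||^2 via the normal equations.
            n = A.shape[1]
            return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

        A = np.random.rand(8, 5)                  # toy ill-conditioned operator
        b = A @ np.ones(5) + 0.01 * np.random.rand(8)
        x = tikhonov_solve(A, b, lam=1e-3)        # stable estimate despite noise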

  8. 3D Ultrasound Reconstruction of Spinal Images using an Improved Olympic Hole-Filling Method

    NARCIS (Netherlands)

    Dewi, D.E.O.; Wilkinson, M.H.F.; Mengko, T.L.R.; Purnama, I.K.E.; Ooijen, P.M.A. van; Veldhuizen, A.G.; Maurits, N.M.; Verkerke, G.J.

    2009-01-01

    We propose a new hole-filling algorithm that improves the Olympic operator, and we apply it to generate the volume in our freehand 3D ultrasound reconstruction of the spine. First, the ultrasound frames and position information are compounded into a 3D volume using the Bin-filling method. Then,
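
    Our reading of an Olympic-style filter for hole filling (the paper's improved variant may differ): replace an empty voxel by the mean of its known neighbours after discarding the extreme values.

        import numpy as np

        def olympic_fill(volume, mask, trim=1):
            """volume: 3D array; mask: True where voxels are empty (holes)."""
            filled = volume.copy()
            for z, y, x in zip(*np.where(mask)):
                sl = (slice(max(z - 1, 0), z + 2),
                      slice(max(y - 1, 0), y + 2),
                      slice(max(x - 1, 0), x + 2))
                vals = np.sort(volume[sl][~mask[sl]])          # known neighbours
                if vals.size > 2 * trim:
                    filled[z, y, x] = vals[trim:-trim].mean()  # drop extremes
            return filled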

  9. Algorithms and software for total variation image reconstruction via first-order methods

    DEFF Research Database (Denmark)

    Dahl, Joachim; Hansen, Per Christian; Jensen, Søren Holdt

    2010-01-01

    This paper describes new algorithms and related software for total variation (TV) image reconstruction, more specifically: denoising, inpainting, and deblurring. The algorithms are based on one of Nesterov's first-order methods, tailored to the image processing applications in such a way that...
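
    For intuition, a minimal smoothed-TV denoiser by plain gradient descent (a schematic baseline only; the paper's first-order methods add Nesterov acceleration and a principled smoothing of the TV term):

        import numpy as np

        def tv_denoise(y, lam=0.1, eps=1e-3, step=0.2, iters=200):
            # Minimize 0.5||x - y||^2 + lam * sum(sqrt(|grad x|^2 + eps)).
            x = y.astype(float).copy()
            for _ in range(iters):
                dx = np.diff(x, axis=1, append=x[:, -1:])   # forward differences
                dy = np.diff(x, axis=0, append=x[-1:, :])
                mag = np.sqrt(dx**2 + dy**2 + eps)          # smoothed gradient norm
                px, py = dx / mag, dy / mag
                div = (np.diff(px, axis=1, prepend=px[:, :1])     # discrete
                       + np.diff(py, axis=0, prepend=py[:1, :]))  # divergence
                x -= step * ((x - y) - lam * div)           # gradient of the energy
            return x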

  10. Modern Community Detection Methods in Social Networks

    Directory of Open Access Journals (Sweden)

    V. O. Chesnokov

    2017-01-01

    Full Text Available Social network structure is not homogeneous. Groups of vertices with many links between them are called communities. A survey of algorithms for discovering such groups is presented in this article. A popular approach to community detection is to use a graph clustering algorithm; methods based on inner metric optimization are common. Five groups of algorithms are listed: those based on optimization, those joining vertices into clusters by some closeness measure, special-subgraph discovery, partitioning a graph by deleting edges, and those based on a dynamic process or generative model. Overlapping community detection algorithms are usually modified graph clustering algorithms. Other approaches do exist, e.g. ones based on clustering edges or on constructing communities around randomly chosen vertices. Methods based on nonnegative matrix factorization are also used, but they have high computational complexity; algorithms based on label propagation lack this disadvantage. Methods based on the affiliation model, which holds that communities define the structure of a graph, are promising. Algorithms that use node attributes are also considered: ones based on latent Dirichlet allocation, initially used for text clustering, and CODICIL, where edges of node content relevance are added to the original edge set. Six classes of algorithms for graphs with node attributes are listed: changing edge weights, changing the vertex distance function, building an augmented graph with nodes and attributes, algorithms based on stochastic models, partitioning the attribute space, and others. Overlapping community detection algorithms that effectively use node attributes have only started to appear. Methods based on partitioning the attribute space, latent Dirichlet allocation, stochastic models and nonnegative matrix factorization are considered. The most effective algorithm on real datasets is CESNA, which is based on the affiliation model. However, it gives results which are far from ground truth
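
    A minimal sketch of label propagation, one of the algorithm families surveyed (the adjacency-dict graph representation is an assumption):

        import random
        from collections import Counter

        def label_propagation(graph, iters=20, seed=0):
            """graph: dict {node: [neighbours]}. Returns {node: community label}."""
            rng = random.Random(seed)
            labels = {v: v for v in graph}        # start: every node its own label
            nodes = list(graph)
            for _ in range(iters):
                rng.shuffle(nodes)
                for v in nodes:
                    if graph[v]:
                        counts = Counter(labels[u] for u in graph[v])
                        labels[v] = counts.most_common(1)[0][0]  # majority label
            return labels

        graph = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4], 4: [3]}
        print(label_propagation(graph))   # two communities: {0,1,2} and {3,4}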

  11. A FIB-nanotomography method for accurate 3D reconstruction of open nanoporous structures

    Energy Technology Data Exchange (ETDEWEB)

    Mangipudi, K.R., E-mail: mangipudi@ump.gwdg.de [Institut für Materialphysik, Georg-August-Universität Göttingen, Friedrich-Hund-Platz 1, D-37077 Göttingen (Germany); Radisch, V., E-mail: vradisch@ump.gwdg.de [Institut für Materialphysik, Georg-August-Universität Göttingen, Friedrich-Hund-Platz 1, D-37077 Göttingen (Germany); Holzer, L., E-mail: holz@zhaw.ch [Züricher Hochschule für Angewandte Wissenschaften, Institute of Computational Physics, Wildbachstrasse 21, CH-8400 Winterthur (Switzerland); Volkert, C.A., E-mail: volkert@ump.gwdg.de [Institut für Materialphysik, Georg-August-Universität Göttingen, Friedrich-Hund-Platz 1, D-37077 Göttingen (Germany)

    2016-04-15

    We present an automated focused ion beam nanotomography method for nanoporous microstructures with open porosity, and apply it to reconstruct nanoporous gold (np-Au) structures with ligament sizes on the order of a few tens of nanometers. This method uses serial sectioning of a well-defined wedge-shaped geometry to determine the thickness of individual slices from the changes in the sample width in successive cross-sectional images. The pore space of a selected region of the np-Au is infiltrated with ion-beam-deposited Pt composite before serial sectioning. The cross-sectional images are binarized and stacked according to the individual slice thicknesses, and then processed using standard reconstruction methods. For the image conditions and sample geometry used here, we are able to determine the thickness of individual slices with an accuracy much smaller than a pixel. The accuracy of the new method, based on actual slice thicknesses, is assessed by comparing it with (i) a reconstruction using the same cross-sectional images but assuming a constant slice thickness, and (ii) a reconstruction using the traditional FIB-tomography method employing a constant slice thickness. The morphology and topology of the structures are characterized using ligament and pore size distributions, interface shape distribution functions, interface normal distributions, and genus. The results suggest that the morphology and topology of the final reconstructions are significantly influenced when a constant slice thickness is assumed. The study reveals grain-to-grain variations in the morphology and topology of np-Au. - Highlights: • FIB nanotomography of nanoporous structures with feature sizes of ∼40 nm or less. • Accurate determination of individual slice thickness with subpixel precision. • The method preserves surface topography. • Quantitative 3D microstructural analysis of materials with open porosity.
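
    One plausible geometric reading of the wedge trick (our reconstruction; the face angle $\alpha$ between the wedge face and the sectioning direction is an assumed parameter): because the cross-section width $w$ of a wedge grows linearly with milled depth, the thickness of slice $i$ follows from the measured width change between successive images,

        $$t_i = \frac{\Delta w_i}{\tan \alpha},$$

    so a width change that is measured over many pixels of edge can pin down the slice thickness to well below one pixel.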

  12. PRIMITIVE-BASED 3D BUILDING RECONSTRUCTION METHOD TESTED BY REFERENCE AIRBORNE DATA

    Directory of Open Access Journals (Sweden)

    W. Zhang

    2012-07-01

    Full Text Available Airborne LiDAR data and optical imagery are two datasets used for 3D building reconstruction. By studying the complementarity of these two datasets, we propose a primitive-based 3D building reconstruction method which can use LiDAR data and optical imagery at the same time. The proposed method comprises the following steps: (1) recognize primitives from the LiDAR point cloud and roughly measure the primitives' parameters as initial values; (2) select the primitives' features on the imagery; (3) optimize the primitives' parameters under the constraints of the LiDAR point cloud and the imagery; and (4) represent the 3D building model by these optimized primitives. Compared with other model-based or CSG-based methods, the proposed method has some advantages. It is simpler, because it only uses the most straightforward features, i.e. planes from the LiDAR point cloud and points from the optical imagery. It can also tightly integrate the LiDAR point cloud and optical imagery; that is, all primitives' parameters are optimized with all constraints in one step. Recently, an ISPRS Test Project on Urban Classification and 3D Building Reconstruction was launched, and two datasets, each with airborne LiDAR data and images, were provided. The proposed method was applied to Area 3 of Dataset 1 (Vaihingen), which contains buildings with flat roofs or gable roofs. The organizer of the test project evaluated the submitted reconstructed 3D model using reference data. The result shows the feasibility of the proposed 3D building reconstruction method.
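
    A generic way to realize step (1), estimating a roof-plane primitive's initial parameters from the LiDAR points assigned to it, is a least-squares plane fit via SVD (the paper's exact estimator may differ):

        import numpy as np

        def fit_plane(points):
            """points: (n, 3) array of LiDAR returns; returns (centroid, unit normal)."""
            c = points.mean(axis=0)
            _, _, vt = np.linalg.svd(points - c)
            return c, vt[-1]              # normal = direction of least variance

        pts = np.random.rand(100, 3)
        pts[:, 2] = 0.5 + 0.01 * np.random.randn(100)   # near-horizontal "roof"
        c, n = fit_plane(pts)                           # n is close to (0, 0, ±1)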

  13. New Image Reconstruction Methods for Accelerated Quantitative Parameter Mapping and Magnetic Resonance Angiography

    Science.gov (United States)

    Velikina, J. V.; Samsonov, A. A.

    2016-02-01

    Advanced MRI techniques often require sampling in additional (non-spatial) dimensions such as time or parametric dimensions, which significantly elongates scan time. Our purpose was to develop novel iterative image reconstruction methods to reduce the amount of acquired data in such applications using prior knowledge about the signal in the extra dimensions. Efforts were made to accelerate two applications, namely time-resolved contrast-enhanced MR angiography and T1 mapping. Our results demonstrate that significant acceleration (up to 27×) may be achieved using our proposed iterative reconstruction techniques.

  14. Three-dimensional Reconstruction Method Study Based on Interferometric Circular SAR

    Directory of Open Access Journals (Sweden)

    Hou Liying

    2016-10-01

    Full Text Available Circular Synthetic Aperture Radar (CSAR) can acquire targets' scattering information in all directions through a 360° observation, but a single-track CSAR cannot efficiently obtain height scattering information for a strongly directive scatterer. In this study, we examine three-dimensional circular SAR interferometry theory for a typical target and validate the theory in a darkroom experiment. We present a 3D reconstruction of an actual metal tank model by interferometric CSAR for the first time, verify the validity of the method, and demonstrate the important potential applications of combining 3D reconstruction with omnidirectional observation.

  15. Improved security monitoring method for network boundary

    Science.gov (United States)

    Gao, Liting; Wang, Lixia; Wang, Zhenyan; Qi, Aihua

    2013-03-01

    This paper proposes a network boundary security monitoring system based on PKI. The design uses multiple security technologies and deeply analyzes the association between network data flow and system logs, so that it can detect intrusion activities and accurately locate the intrusion source in time. The experimental results show that it can effectively reduce the rate of false or missed alarms for security incidents.

  16. Three-dimensional tomographic reconstruction through two-dimensional multiresolution backprojection steps according to Marr's method

    Science.gov (United States)

    Stephanakis, Ioannis M.; Anastassopoulos, George C.

    2009-03-01

    A novel algorithm for 3-D tomographic reconstruction is proposed. The proposed algorithm is based on multiresolution techniques for local inversion of the 3-D Radon transform in confined subvolumes within the entire object space. Directional wavelet functions of the form $\psi_{m,n}^{j}(x) = 2^{j/2}\,\psi(2^{j} w_{m,n} x)$ are employed in a sequence of double filtering and 2-D backprojection operations performed on vertical and horizontal reconstruction planes using the method suggested by Marr and others. The densities of the 3-D object are found initially as backprojections of coarse wavelet functions of this form at directions on vertical and horizontal planes that intersect the object. As the algorithm evolves, finer planar wavelets intersecting a subvolume of medical interest within the original object may be used to reconstruct its details by double backprojection steps on vertical and horizontal planes in a similar fashion. The complexity of the reconstruction algorithm is reduced thanks to the good localization properties of planar wavelets, which render the details of the projections with small errors. Experimental results that illustrate multiresolution reconstruction at four successive levels of resolution are given for wavelets belonging to the Daubechies family.

  17. Performance study of Lagrangian methods: reconstruction of large scale peculiar velocities and baryonic acoustic oscillations

    Science.gov (United States)

    Keselman, J. A.; Nusser, A.

    2017-05-01

    No Action Method (NoAM) is a framework for reconstructing the past orbits of observed tracers of the large-scale mass density field. It seeks exact solutions of the equations of motion (EoM), satisfying initial homogeneity and the final observed particle (tracer) positions. The solutions are found iteratively, reaching a specified tolerance defined as the RMS of the distance between reconstructed and observed positions. Starting from a guess for the initial conditions, NoAM advances particles using standard N-body techniques for solving the EoM. Alternatively, the EoM can be replaced by any approximation such as Zel'dovich and second-order perturbation theory (2LPT). NoAM is suitable for billions of particles and can easily handle non-regular volumes, redshift space and other constraints. We implement NoAM to systematically compare Zel'dovich, 2LPT, and N-body dynamics over diverse configurations ranging from an idealized high-resolution periodic simulation box to realistic galaxy mocks. Our findings are: (i) non-linear reconstructions with Zel'dovich, 2LPT, and full dynamics perform better than linear theory only for idealized catalogues in real space. For realistic catalogues, linear theory is the optimal choice for reconstructing velocity fields smoothed on scales ≳5 h⁻¹ Mpc; (ii) all non-linear back-in-time reconstructions tested here produce comparable enhancement of the baryonic oscillation signal in the correlation function.

  18. Rehanging Reynolds at the British Institution: Methods for Reconstructing Ephemeral Displays

    Directory of Open Access Journals (Sweden)

    Catherine Roach

    2016-11-01

    Full Text Available Reconstructions of historic exhibitions made with current technologies can present beguiling illusions, but they also put us in danger of recreating the past in our own image. This article and the accompanying reconstruction explore methods for representing lost displays, with an emphasis on visualizing uncertainty, illuminating process, and understanding the mediated nature of period images. These issues are highlighted in a partial recreation of a loan show held at the British Institution, London, in 1823, which featured the works of Sir Joshua Reynolds alongside continental old masters. This recreation demonstrates how speculative reconstructions can nonetheless shed light on ephemeral displays, revealing powerful visual and conceptual dialogues that took place on the crowded walls of nineteenth-century exhibitions.

  19. An effective method for network module extraction from microarray data

    Directory of Open Access Journals (Sweden)

    Mahanta Priyakshi

    2012-08-01

    Full Text Available Abstract Background: The development of high-throughput microarray technologies has provided various opportunities to systematically characterize diverse types of computational biological networks. Co-expression networks have become popular in the analysis of microarray data, for example for detecting functional gene modules. Results: This paper presents a method to build a co-expression network (CEN) and to detect network modules from the built network. We use an effective gene expression similarity measure called NMRS (normalized mean residue similarity) to construct the CEN. We have tested our method on five publicly available benchmark microarray datasets. The network modules extracted by our algorithm have been biologically validated in terms of Q value and p value. Conclusions: Our results show that the technique is capable of detecting biologically significant network modules from the co-expression network. Biologists can use this technique to find groups of genes with similar functionality based on their expression information.
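
    A minimal sketch of building a co-expression network by thresholding a gene-gene similarity matrix; for simplicity, Pearson correlation stands in here for the paper's NMRS measure:

        import numpy as np

        def coexpression_network(expr, threshold=0.8):
            """expr: (genes, samples). Returns a boolean adjacency matrix."""
            sim = np.corrcoef(expr)            # gene-by-gene similarity
            adj = np.abs(sim) >= threshold     # keep strongly co-expressed pairs
            np.fill_diagonal(adj, False)       # no self-loops
            return adj

        expr = np.random.rand(50, 20)          # 50 genes x 20 samples (toy data)
        adj = coexpression_network(expr, threshold=0.8)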

  20. Historical reconstruction of floodplain inundation in the Pantanal (Brazil) using neural networks

    Science.gov (United States)

    Fantin-Cruz, Ibraim; Pedrollo, Olavo; Castro, Nilza M. R.; Girard, Pierre; Zeilhofer, Peter; Hamilton, Stephen K.

    2011-03-01

    The relations between hydrology, geomorphology and vegetation provide the basis for understanding the ecological processes in the floodplains of rivers. In this paper, long-term (1969-2009) local flood characteristics (magnitude, duration, frequency, return period) are quantified for the floodplain of the Cuiabá River (Pantanal wetland in Brazil), using a predictive model based on artificial neural networks (ANNs) with input given by the historic river-level record. Morphological features were then described and their relation to vegetation analyzed and used to reconstruct floods. In addition to the river-level data used to train and validate a three-layer ANN, data were also used from 11 floodplain gauges across a 12 km transect lateral to the river, which were monitored during the annual floods of 2004-2007. The resulting neural network gave satisfactory estimates of water depth over the floodplain, with an absolute error of 0.09 m not exceeded in 95% of occurrences. The ANN showed that in almost all years, the level reached by the river was high enough to flood the profile completely: the exceptions were 1971 and 2001 when only 50% and 58% was flooded, to mean depths of 0.34 and 0.48 m, respectively. The highest flood was that of 1995 when the floodplain was flooded to a mean depth of 2.56 m. The median flood event (return period 2 years) produced a mean flood depth of 1.80 m and lasted 119 days. Among the most important morphological features is the presence of palaeo-channels which provide hydrological connectivity between the river and the floodplain. The distribution of phytophysiognomic units is significantly influenced by local geomorphology, which determines spatial variation in the magnitude, duration, and frequency of flooding.
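
    A sketch of the kind of three-layer feed-forward regression involved (the architecture, hyperparameters and synthetic data here are our assumptions for illustration):

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        X = np.random.rand(200, 1) * 5.0            # synthetic river levels (m)
        y = np.clip(X[:, 0] - 1.5, 0.0, None)       # synthetic flood depths (m)
        model = MLPRegressor(hidden_layer_sizes=(10,), activation='logistic',
                             max_iter=5000, random_state=0)
        model.fit(X, y)
        print(model.predict([[4.0]]))               # depth estimate for a 4 m level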

  1. Low Dose PET Image Reconstruction with Total Variation Using Alternating Direction Method.

    Directory of Open Access Journals (Sweden)

    Xingjian Yu

    Full Text Available In this paper, a total variation (TV) minimization strategy is proposed to overcome the problems of sparse spatial resolution and large amounts of noise in low-dose positron emission tomography (PET) image reconstruction. Two types of objective function were established based on two statistical models of the measured PET data: least-squares (LS) TV for the Gaussian distribution and Poisson-TV for the Poisson distribution. To efficiently obtain high-quality reconstructed images, the alternating direction method (ADM) is used to solve these objective functions. Compared with iterative shrinkage/thresholding (IST) based algorithms, the proposed ADM can make full use of the TV constraint and its convergence rate is faster. The performance of the proposed approach is validated through comparisons with the expectation-maximization (EM) method using synthetic and experimental biological data. In the comparisons, the results of both LS-TV and Poisson-TV are taken into consideration to find which models are more suitable for PET imaging, in particular low-dose PET. To evaluate the results quantitatively, we computed the bias, the variance, and the contrast recovery coefficient (CRC), and drew profiles of the reconstructed images produced by the different methods. The results show that both Poisson-TV and LS-TV can provide high visual quality at a low dose level. The bias and variance of the proposed LS-TV and Poisson-TV methods are 20% to 74% less at all counting levels than those of the EM method. Poisson-TV gives the best performance in terms of high-accuracy reconstruction, with the lowest bias and variance as compared to the ground truth (14.3% less bias and 21.9% less variance). In contrast, LS-TV gives the best performance in terms of the high contrast of the reconstruction, with the highest CRC.

  2. A Method for Upper Bounding on Network Access Speed

    DEFF Research Database (Denmark)

    Knudsen, Thomas Phillip; Patel, A.; Pedersen, Jens Myrup

    2004-01-01

    This paper presents a method for calculating an upper bound on network access speed growth and gives guidelines for further research experiments and simulations. The method is aimed at providing a basis for simulation of long term network development and resource management.

  3. A Stochastic Geometry Method for Pylon Reconstruction from Airborne LiDAR Data

    Directory of Open Access Journals (Sweden)

    Bo Guo

    2016-03-01

    Full Text Available Object detection and reconstruction from remotely sensed data are active research topics in the photogrammetry and remote sensing communities. Monitoring power-engineering devices by detecting key objects is important for power safety. In this paper, we introduce a novel method for reconstructing the self-supporting pylons widely used in high-voltage power-line systems from airborne LiDAR data. Our work constructs pylons from a library of 3D parametric models, which are represented as polyhedrons based on stochastic geometry. Firstly, the laser points of pylons are extracted from the dataset using an automatic classification method. An energy function made up of two terms is then defined: the first term measures the adequacy of the objects with respect to the data, and the second term favors or penalizes certain configurations based on prior knowledge. Finally, estimation is undertaken by minimizing the energy using simulated annealing with a Markov Chain Monte Carlo sampler, leading to an optimal configuration of objects. The two main contributions of this paper are: (1) building a framework for automatic pylon reconstruction; and (2) efficient global optimization. The pylons can be precisely reconstructed through energy optimization. Experiments producing convincing results validated the proposed method on a dataset of complex structure.
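
    A generic simulated-annealing loop of the kind described (the energy function and the proposal move below are placeholders, not the paper's pylon model):

        import math
        import random

        def anneal(energy, propose, x0, t0=1.0, cooling=0.999, steps=10000, seed=0):
            rng = random.Random(seed)
            x, e, t = x0, energy(x0), t0
            for _ in range(steps):
                y = propose(x, rng)
                de = energy(y) - e
                if de < 0 or rng.random() < math.exp(-de / t):  # Metropolis rule
                    x, e = y, e + de                            # accept the move
                t *= cooling                                    # cool the chain
            return x

        # Toy usage: minimize a 1D double-well energy.
        best = anneal(lambda x: (x * x - 1.0) ** 2 + 0.3 * x,
                      lambda x, rng: x + rng.gauss(0.0, 0.1), x0=2.0)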

  4. Numerical reconstruction of unknown Robin inclusions inside a heat conductor by a non-iterative method

    Science.gov (United States)

    Nakamura, Gen; Wang, Haibing

    2017-05-01

    Consider the problem of reconstructing unknown Robin inclusions inside a heat conductor from boundary measurements. This problem arises from active thermography and is formulated as an inverse boundary value problem for the heat equation. In our previous works, we proposed a sampling-type method for reconstructing the boundary of the Robin inclusion and gave its rigorous mathematical justification. This method is non-iterative and is based on the characterization of the solution to the so-called Neumann-to-Dirichlet map gap equation. In this paper, we give a further investigation of the reconstruction method from both the theoretical and numerical points of view. First, we clarify the solvability of the Neumann-to-Dirichlet map gap equation and establish a relation of its solution to the Green function associated with an initial-boundary value problem for the heat equation inside the Robin inclusion. This naturally provides a way of computing the Green function from the Neumann-to-Dirichlet map and explains what the input for the linear sampling method is. Assuming that the Neumann-to-Dirichlet map gap equation has a unique solution, we also show the convergence of our method for noisy measurements. Second, we give the numerical implementation of the reconstruction method for two-dimensional spatial domains. The measurements for our inverse problem are simulated by solving the forward problem via the boundary integral equation method. Numerical results are presented to illustrate the efficiency and stability of the proposed method. Using a finite sequence of transient inputs over a time interval, we also propose a new sampling method over the time interval based on a single measurement, which is most likely to be practical.

  5. Compartmentalized metabolic network reconstruction of microbial communities to determine the effect of agricultural intervention on soils.

    Directory of Open Access Journals (Sweden)

    María Camila Alvarez-Silva

    Full Text Available Soil microbial communities are responsible for a wide range of ecological processes and have an important economic impact in agriculture. Determining the metabolic processes performed by microbial communities is crucial for understanding and managing ecosystem properties. Metagenomic approaches allow the elucidation of the main metabolic processes that determine the performance of microbial communities under different environmental conditions and perturbations. Here we present the first compartmentalized metabolic reconstruction at a metagenomic scale of a microbial ecosystem. This systematic approach conceives a meta-organism without boundaries between individual organisms and allows the in silico evaluation of the effect of agricultural intervention on soils at a metagenomic level. To characterize the microbial ecosystems, topological properties, taxonomic and metabolic profiles, as well as a Flux Balance Analysis (FBA) were considered. Furthermore, topological and optimization algorithms were implemented to carry out the curation of the models, to ensure the continuity of the fluxes between the metabolic pathways, and to confirm the metabolite exchange between subcellular compartments. The proposed models provide specific information about ecosystems that is generally overlooked in non-compartmentalized or non-curated networks, such as the influence of transport reactions on metabolic processes, especially the important effect on mitochondrial processes, and provide more accurate estimates of the fluxes used to optimize the metabolic processes within the microbial community.

  6. Compartmentalized metabolic network reconstruction of microbial communities to determine the effect of agricultural intervention on soils.

    Science.gov (United States)

    Alvarez-Silva, María Camila; Álvarez-Yela, Astrid Catalina; Gómez-Cano, Fabio; Zambrano, María Mercedes; Husserl, Johana; Danies, Giovanna; Restrepo, Silvia; González-Barrios, Andrés Fernando

    2017-01-01

    Soil microbial communities are responsible for a wide range of ecological processes and have an important economic impact in agriculture. Determining the metabolic processes performed by microbial communities is crucial for understanding and managing ecosystem properties. Metagenomic approaches allow the elucidation of the main metabolic processes that determine the performance of microbial communities under different environmental conditions and perturbations. Here we present the first compartmentalized metabolic reconstruction at a metagenomic scale of a microbial ecosystem. This systematic approach conceives a meta-organism without boundaries between individual organisms and allows the in silico evaluation of the effect of agricultural intervention on soils at a metagenomic level. To characterize the microbial ecosystems, topological properties, taxonomic and metabolic profiles, as well as a Flux Balance Analysis (FBA) were considered. Furthermore, topological and optimization algorithms were implemented to carry out the curation of the models, to ensure the continuity of the fluxes between the metabolic pathways, and to confirm the metabolite exchange between subcellular compartments. The proposed models provide specific information about ecosystems that is generally overlooked in non-compartmentalized or non-curated networks, such as the influence of transport reactions on metabolic processes, especially the important effect on mitochondrial processes, and provide more accurate estimates of the fluxes used to optimize the metabolic processes within the microbial community.
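
    A toy flux balance analysis, the optimization at the heart of such reconstructions; the three-reaction network and bounds below are illustrative assumptions, not the community model:

        import numpy as np
        from scipy.optimize import linprog

        # Maximize the "biomass" flux v3 subject to steady state S v = 0 and bounds.
        S = np.array([[1, -1,  0],     # metabolite A: produced by v1, consumed by v2
                      [0,  1, -1]])    # metabolite B: produced by v2, consumed by v3
        bounds = [(0, 10), (0, 10), (0, 10)]
        res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
        fluxes = res.x                 # optimal steady-state flux distribution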

  7. A novel community detection method in bipartite networks

    Science.gov (United States)

    Zhou, Cangqi; Feng, Liang; Zhao, Qianchuan

    2018-02-01

    Community structure is a common and important feature in many complex networks, including bipartite networks, which are used as a standard model for many empirical networks comprised of two types of nodes. In this paper, we propose a two-stage method for detecting community structure in bipartite networks. Firstly, we extend the widely used Louvain algorithm to bipartite networks. The effectiveness and efficiency of the Louvain algorithm have been proved by many applications; however, a Louvain-like algorithm specially modified for bipartite networks has been lacking. Based on bipartite modularity, a measure that extends unipartite modularity and quantifies the strength of partitions in bipartite networks, we fill the gap by developing the Bi-Louvain algorithm, which iteratively groups the nodes in each part by turns. In bipartite networks this algorithm often produces a balanced network structure with equal numbers of the two types of nodes. Secondly, for the balanced network yielded by the first algorithm, we use an agglomerative clustering method to further cluster the network. We demonstrate that the calculation of the gain of modularity of each aggregation, and the operation of joining two communities, can be compactly carried out by matrix operations for all pairs of communities simultaneously. Finally, a complete hierarchical community structure is unfolded. We apply our method to two benchmark data sets and a large-scale data set from an e-commerce company, showing that it effectively identifies community structure in bipartite networks.
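
    For reference, a direct (unoptimized) computation of Barber's bipartite modularity, the kind of quantity a Bi-Louvain stage optimizes (that the paper uses exactly Barber's definition is our assumption):

        import numpy as np

        def bipartite_modularity(B, red_labels, blue_labels):
            """B: biadjacency matrix (red x blue); labels give each node's community."""
            m = B.sum()
            k = B.sum(axis=1)               # degrees of "red" nodes
            d = B.sum(axis=0)               # degrees of "blue" nodes
            q = 0.0
            for i in range(B.shape[0]):
                for j in range(B.shape[1]):
                    if red_labels[i] == blue_labels[j]:
                        q += B[i, j] - k[i] * d[j] / m
            return q / m

        B = np.array([[1, 1, 0, 0],
                      [1, 1, 0, 0],
                      [0, 0, 1, 1]])
        print(bipartite_modularity(B, [0, 0, 1], [0, 0, 1, 1]))  # ~0.44: strong split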

  8. A fast method based on NESTA to accurately reconstruct CT image from highly undersampled projection measurements.

    Science.gov (United States)

    He, Zhijie; Qiao, Quanbang; Li, Jun; Huang, Meiping; Zhu, Shouping; Huang, Liyu

    2016-11-22

    CT image reconstruction based on compressed sensing (CS) can be formulated as an optimization problem that minimizes a total-variation (TV) term constrained by data fidelity and image nonnegativity. There are many solutions to this problem, but their computational efficiency and reconstructed image quality still need to be improved. The aim was to investigate a faster and more accurate mathematical algorithm for the TV minimization problem of CT image reconstruction. Nesterov's algorithm (NESTA) is a fast and accurate algorithm for solving the TV minimization problem, owing most notably to Nesterov's smoothing technique and a subtle averaging of sequences of iterates, which has been shown to improve the convergence properties of standard gradient-descent algorithms. In order to demonstrate the superior performance of NESTA in computational efficiency and image quality, a comparison with the Simultaneous Algebraic Reconstruction Technique-TV (SART-TV) and the Split-Bregman (SpBr) algorithm was made using a digital phantom study and two physical phantom studies with highly undersampled projection measurements. With only 25% of the conventional full-scan dose, the NESTA method reduces the average CT number error from 51.76 HU to 9.98 HU on the Shepp-Logan phantom and from 50.13 HU to 0.32 HU on the Catphan 600 phantom. On an anthropomorphic head phantom, the average CT number error is reduced from 84.21 HU to 1.01 HU in the central uniform area. To the best of our knowledge, this is the first work to apply the NESTA method to CS-based CT reconstruction. The research shows that this method has great potential; further studies and optimization are necessary.

  9. Institutional Problems in Urban Planning and Modern Methods of Reconstruction for Siberian Cities

    Science.gov (United States)

    Dayneko, A. I.; Dayneko, D. V.

    2017-11-01

    The work presents institutional problems in Russian urban planning. The institutional structure of the current system for territorial development is discussed. The necessity of conducting research is substantiated, and methods and tools for evaluating the effectiveness of institutional changes are suggested. The article proposes a program and tested methods of reconstruction to be adopted for Siberia, considering the climatic, seismic and ecological peculiarities of the regions.

  10. System and method for image reconstruction, analysis, and/or de-noising

    KAUST Repository

    Laleg-Kirati, Taous-Meriem

    2015-11-12

    A method and system can analyze, reconstruct, and/or denoise an image. The method and system can include interpreting a signal as a potential of a Schrödinger operator, decomposing the signal into squared eigenfunctions, reducing a design parameter of the Schrödinger operator, analyzing the discrete spectrum of the Schrödinger operator, and combining the analysis of the discrete spectrum to construct the image.

  11. AID/APOBEC-network reconstruction identifies pathways associated with survival in ovarian cancer.

    Science.gov (United States)

    Svoboda, Martin; Meshcheryakova, Anastasia; Heinze, Georg; Jaritz, Markus; Pils, Dietmar; Castillo-Tong, Dan Cacsire; Hager, Gudrun; Thalhammer, Theresia; Jensen-Jarolim, Erika; Birner, Peter; Braicu, Ioana; Sehouli, Jalid; Lambrechts, Sandrina; Vergote, Ignace; Mahner, Sven; Zimmermann, Philip; Zeillinger, Robert; Mechtcheriakova, Diana

    2016-08-16

    Building up pathway-/disease-relevant signatures provides a persuasive tool for understanding the functional relevance of gene alterations and gene network associations in multifactorial human diseases. Ovarian cancer is a highly complex, heterogeneous malignancy with respect to tumor anatomy and the tumor microenvironment, including pro-/antitumor immunity and inflammation; still, it is generally treated as a single disease. Thus, further approaches to investigate novel aspects of ovarian cancer pathogenesis, aiming to provide a personalized strategy for clinical decision making, are of high priority. Herein we assessed the contribution of the AID/APOBEC family and their associated genes, given the remarkable ability of AID and APOBECs to edit DNA/RNA and, as such, to provide tools for genetic and epigenetic alterations potentially leading to reprogramming of tumor cells, stroma and immune cells. We structured the study in three consecutive analytical modules: multigene-based expression profiling in a cohort of patients with primary serous ovarian cancer using a self-created AID/APOBEC-associated gene signature; building of multivariable survival models with high predictive accuracy and nomination of top-ranked candidate/target genes according to their prognostic impact; and systems biology-based reconstruction of the AID/APOBEC-driven disease-relevant mechanisms using transcriptomics data from ovarian cancer samples. We demonstrated that inclusion of the AID/APOBEC signature-based variables significantly improves clinicopathological variables-based survival prognostication, allowing significant patient stratification. Furthermore, several of the profiling-derived variables such as ID3, PTPRC/CD45, AID, APOBEC3G, and ID2 exceed the prognostic impact of some clinicopathological variables. We next extended the signature-/modeling-based knowledge by extracting top genes co-regulated with target molecules in ovarian cancer tissues and dissected potential

  12. Restoration of the analytically reconstructed OpenPET images by the method of convex projections

    Energy Technology Data Exchange (ETDEWEB)

    Tashima, Hideaki; Murayama, Hideo; Yamaya, Taiga [National Institute of Radiological Sciences, Chiba (Japan); Katsunuma, Takayuki; Suga, Mikio [Chiba Univ. (Japan). Graduate School of Engineering; Kinouchi, Shoko [National Institute of Radiological Sciences, Chiba (Japan); Chiba Univ. (Japan). Graduate School of Engineering; Obi, Takashi [Tokyo Institute of Technology (Japan). Interdisciplinary Graduate School of Science and Engineering; Kudo, Hiroyuki [Tsukuba Univ. (Japan). Graduate School of Systems and Information Engineering

    2011-07-01

    We have proposed the OpenPET geometry, which has gaps between detector rings and a physically open field-of-view. Image reconstruction for OpenPET is classified as an incomplete problem because it does not satisfy Orlov's condition. Even so, simulation and experimental studies have shown that applying iterative methods such as the maximum likelihood expectation maximization (ML-EM) algorithm successfully reconstructs images in the gap area. However, the imaging process of the iterative methods in OpenPET imaging is not clear. Therefore, the aim of this study is to analytically analyze OpenPET imaging and estimate the implicit constraints involved in the iterative methods. To apply explicit constraints in OpenPET imaging, we used the method of convex projections to restore images reconstructed analytically, in which low-frequency components are lost. Numerical simulations showed that similar restoration effects are involved both in ML-EM and in the method of convex projections. Therefore, the iterative methods have the advantageous effect of restoring lost frequency components in OpenPET imaging. (orig.)
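
    A generic projections-onto-convex-sets (POCS) restoration loop (the data-consistency projection here is a placeholder; the actual OpenPET constraints, e.g. known frequency components and activity nonnegativity, would replace it):

        import numpy as np

        def pocs(x0, project_data, iters=50):
            """Alternate projections onto constraint sets until the iterate settles."""
            x = x0.copy()
            for _ in range(iters):
                x = project_data(x)         # enforce measured/known components
                x = np.maximum(x, 0.0)      # project onto the nonnegative orthant
            return x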

  13. A Reconstructed Discontinuous Galerkin Method for the Compressible Navier-Stokes Equations on Hybrid Grids

    Energy Technology Data Exchange (ETDEWEB)

    Xiaodong Liu; Lijun Xuan; Hong Luo; Yidong Xia

    2001-01-01

    A reconstructed discontinuous Galerkin (rDG(P1P2)) method, originally introduced for the compressible Euler equations, is developed for the solution of the compressible Navier-Stokes equations on 3D hybrid grids. In this method, a piecewise quadratic polynomial solution is obtained from the underlying piecewise linear DG solution using a hierarchical Weighted Essentially Non-Oscillatory (WENO) reconstruction. The reconstructed quadratic polynomial solution is then used for the computation of the inviscid fluxes and the viscous fluxes using the second formulation of Bassi and Rebay (Bassi-Rebay II). The developed rDG(P1P2) method is used to compute a variety of flow problems to assess its accuracy, efficiency, and robustness. The numerical results demonstrate that the rDG(P1P2) method is able to achieve the designed third order of accuracy at a cost only slightly higher than that of its underlying second-order DG method, to outperform the third-order DG method in terms of both computing costs and storage requirements, and to obtain reliable and accurate solutions for the large eddy simulation (LES) and direct numerical simulation (DNS) of compressible turbulent flows.

  14. Clinical correlative evaluation of an iterative method for reconstruction of brain SPECT images

    Energy Technology Data Exchange (ETDEWEB)

    Nobili, Flavio E-mail: fnobili@smartino.ge.it; Vitali, Paolo; Calvini, Piero; Bollati, Francesca; Girtler, Nicola; Delmonte, Marta; Mariani, Giuliano; Rodriguez, Guido

    2001-08-01

    Background: Brain SPECT and PET investigations have shown discrepancies in Alzheimer's disease (AD) when considering data deriving from deeply located structures, such as the mesial temporal lobe. These discrepancies could be due to a variety of factors, including substantial differences in gamma-cameras and underlying technology. Mesial temporal structures are deeply located within the brain, and the commonly used Filtered Back-Projection (FBP) technique does not fully take into account either the physical parameters of gamma-cameras or the geometry of collimators. In order to overcome these limitations, alternative reconstruction methods have been proposed, such as the iterative method of the Conjugate Gradients with modified matrix (CG). However, the clinical applications of these methods have so far been only anecdotal. The present study was planned to compare perfusional SPECT data as derived from the conventional FBP method and from the iterative CG method, which takes into account the geometrical and physical characteristics of the gamma-camera, by a correlative approach with neuropsychology. Methods: Correlations were compared between perfusion of the hippocampal region, as achieved by both the FBP and the CG reconstruction methods, and a short-memory test (Selective Reminding Test, SRT), specifically addressing one of its functions. A brain-dedicated camera (CERASPECT) was used for SPECT studies with 99mTc-hexamethylpropylene-amine-oxime in 23 consecutive patients (mean age: 74.2±6.5) with mild (Mini-Mental Status Examination score ≥15, mean 20.3±3), probable AD. Counts from a hippocampal region in each hemisphere were referred to the average thalamic counts. Results: Hippocampal perfusion significantly correlated with the MMSE score with similar statistical significance (p<0.01) between the two reconstruction methods. Correlation between hippocampal perfusion and the SRT score was better with the CG method (r=0.50 for both hemispheres, p<0

  15. Landscapes of human evolution: models and methods of tectonic geomorphology and the reconstruction of hominin landscapes.

    Science.gov (United States)

    Bailey, Geoffrey N; Reynolds, Sally C; King, Geoffrey C P

    2011-03-01

    This paper examines the relationship between complex and tectonically active landscapes and patterns of human evolution. We show how active tectonics can produce dynamic landscapes with geomorphological and topographic features that may be critical to long-term patterns of hominin land use, but which are not typically addressed in landscape reconstructions based on existing geological and paleoenvironmental principles. We describe methods of representing topography at a range of scales using measures of roughness based on digital elevation data, and combine the resulting maps with satellite imagery and ground observations to reconstruct features of the wider landscape as they existed at the time of hominin occupation and activity. We apply these methods to sites in South Africa, where relatively stable topography facilitates reconstruction. We demonstrate the presence of previously unrecognized tectonic effects and their implications for the interpretation of hominin habitats and land use. In parts of the East African Rift, reconstruction is more difficult because of dramatic changes since the time of hominin occupation, while fossils are often found in places where activity has now almost ceased. However, we show that original, dynamic landscape features can be assessed by analogy with parts of the Rift that are currently active and indicate how this approach can complement other sources of information to add new insights and pose new questions for future investigation of hominin land use and habitats. Copyright © 2010 Elsevier Ltd. All rights reserved.

  16. Comparison of mammographic image quality in various methods of reconstructive breast surgery

    Energy Technology Data Exchange (ETDEWEB)

    Lindbichler, F. [University Hospital, Graz (Austria). Dept. of Radiology; Hoflehner, H. [University Hospital, Graz (Austria). Dept. of Plastic and Reconstructive Surgery; Schmidt, F. [University Hospital, Graz (Austria). Dept. of Radiology; Pierer, G.R. [University Hospital, Graz (Austria). Dept. of Plastic and Reconstructive Surgery; Raith, J. [University Hospital, Graz (Austria). Dept. of Radiology; Umschaden, J. [University Hospital, Graz (Austria). Dept. of Plastic and Reconstructive Surgery; Preidler, K.W. [University Hospital, Graz (Austria). Dept. of Radiology

    1996-12-01

    The purpose of our study was to evaluate the mammographic image quality of various methods of reconstructive breast surgery, with specific reference to the possibility of diagnosing recurrent tumors. A total of 39 patients who underwent breast reconstruction following modified radical mastectomy were subjected to clinical and mammographic examination. Three groups were formed: (a) autologous tissue reconstruction (TRAM flap; n=9), (b) submuscular silicone gel prostheses (n=21), and (c) supramuscular silicone gel prostheses (n=9). The mammographic image quality of the groups was compared by two radiologists working together, using a point system in which five specific criteria were evaluated and scored. The result was tabulated into three quality levels: good, acceptable, and limited. Mammograms were assessed as good, acceptable, or limited, respectively, as follows: group I: 7 (77.8%), 1 (11.1%), 1 (11.1%); group II: 4 (19%), 11 (52.4%), 6 (28.6%); group III: 3 (33.3%), 4 (44.5%), 2 (22.2%). The TRAM-flap method of reconstruction displays a high degree of mammographic image quality and is therefore preferable with respect to the early diagnosis of recurrent tumors. (orig.)

  18. Weak-lensing Power Spectrum Reconstruction by Counting Galaxies. I. The ABS Method

    Science.gov (United States)

    Yang, Xinjuan; Zhang, Jun; Yu, Yu; Zhang, Pengjie

    2017-08-01

    We propose an analytical method of blind separation (ABS) of cosmic magnification from the intrinsic fluctuations of galaxy number density in the observed galaxy number density distribution. The ABS method utilizes the different dependences of the signal (cosmic magnification) and the contamination (galaxy intrinsic clustering) on galaxy flux to separate the two. It works directly on the measured cross-galaxy angular power spectra between different flux bins. It determines/reconstructs the lensing power spectrum analytically, without assumptions about galaxy intrinsic clustering and cosmology. It is unbiased in the limit of an infinite number of galaxies. In reality, the lensing reconstruction accuracy depends on survey configurations, galaxy biases, and other complexities due to the finite number of galaxies and the resulting shot-noise fluctuations in the cross-galaxy power spectra. We estimate its performance (systematic and statistical errors) in various cases. We find that stage IV dark energy surveys such as the Square Kilometre Array and the Large Synoptic Survey Telescope are capable of accurately reconstructing the lensing power spectrum at z ≃ 1 and ℓ ≲ 5000. This lensing reconstruction only requires counting galaxies and is therefore highly complementary to cosmic shear measurements by the same surveys.

  19. A new algorithm for $H\\rightarrow\\tau\\bar{\\tau}$ invariant mass reconstruction using Deep Neural Networks

    CERN Document Server

    Dietrich, Felix

    2017-01-01

    Reconstructing the invariant mass in a Higgs boson decay event containing tau leptons turns out to be a challenging endeavour. The aim of this summer student project is to implement a new algorithm for this task, using deep neural networks and machine learning. The results are compared to SVFit, an existing algorithm that uses dynamical likelihood techniques. A neural network is found that reaches the accuracy of SVFit at low masses and even surpasses it at higher masses, while at the same time providing results a thousand times faster.

  20. Reconstruction of metabolic networks from high-throughput metabolite profiling data: in silico analysis of red blood cell metabolism.

    Science.gov (United States)

    Nemenman, Ilya; Escola, G Sean; Hlavacek, William S; Unkefer, Pat J; Unkefer, Clifford J; Wall, Michael E

    2007-12-01

    We investigate the ability of algorithms developed for reverse engineering of transcriptional regulatory networks to reconstruct metabolic networks from high-throughput metabolite profiling data. For benchmarking purposes, we generate synthetic metabolic profiles based on a well-established model for red blood cell metabolism. A variety of data sets are generated, accounting for different properties of real metabolic networks, such as experimental noise, metabolite correlations, and temporal dynamics. These data sets are made available online. We use ARACNE, a mainstream algorithm for reverse engineering of transcriptional regulatory networks from gene expression data, to predict metabolic interactions from these data sets. We find that the performance of ARACNE on metabolic data is comparable to that on gene expression data.
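
    To make the approach concrete, the following is a minimal sketch of the mutual-information-plus-pruning idea underlying ARACNE, applied here to a metabolite profile matrix; the histogram MI estimator, the bin count and the DPI tolerance eps are illustrative assumptions, not the published implementation.

        import numpy as np

        def mutual_info(x, y, bins=10):
            # Histogram-based estimate of the mutual information between two profiles.
            pxy, _, _ = np.histogram2d(x, y, bins=bins)
            pxy = pxy / pxy.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0
            return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

        def aracne_like(data, bins=10, eps=0.0):
            # data: (n_samples, n_metabolites). Build the MI matrix, then apply the
            # data processing inequality: drop the weakest edge of every triangle.
            n = data.shape[1]
            mi = np.zeros((n, n))
            for i in range(n):
                for j in range(i + 1, n):
                    mi[i, j] = mi[j, i] = mutual_info(data[:, i], data[:, j], bins)
            keep = mi > 0
            for i in range(n):
                for j in range(i + 1, n):
                    for k in range(n):
                        if k != i and k != j and mi[i, j] < min(mi[i, k], mi[j, k]) - eps:
                            keep[i, j] = keep[j, i] = False
                            break
            return mi * keep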

  1. Accident or homicide--virtual crime scene reconstruction using 3D methods.

    Science.gov (United States)

    Buck, Ursula; Naether, Silvio; Räss, Beat; Jackowski, Christian; Thali, Michael J

    2013-02-10

    The analysis and reconstruction of forensically relevant events, such as traffic accidents, criminal assaults and homicides, are based on the external and internal morphological findings of the injured or deceased person. For this approach, high-tech methods are gaining increasing importance in forensic investigations. The non-contact optical 3D digitising system GOM ATOS is applied as a suitable tool for whole-body surface and wound documentation and analysis in order to identify injury-causing instruments and to reconstruct the course of events. In addition to the surface documentation, cross-sectional imaging methods deliver medical internal findings of the body. These 3D data are fused into a whole-body model of the deceased. In addition to the findings on the bodies, the injury-inflicting instruments and the incident scene are documented in 3D. The 3D data of the incident scene, generated by 3D laser scanning and photogrammetry, are also included in the reconstruction. Two cases illustrate the methods. In the first case a man was shot in his bedroom, and the main question was whether the offender had shot the man intentionally or accidentally, as he claimed. In the second case a woman was hit by a car that was driving backwards into a garage. It was unclear whether the driver drove backwards once or twice, the latter of which would indicate that he willingly injured and killed the woman. With this work, we demonstrate how 3D documentation, data merging and animation make it possible to answer reconstructive questions regarding the dynamic development of patterned injuries, and how this leads to a reconstruction of the course of events based on real data. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  2. Pathway-Consensus Approach to Metabolic Network Reconstruction for Pseudomonas putida KT2440 by Systematic Comparison of Published Models.

    Directory of Open Access Journals (Sweden)

    Qianqian Yuan

    Over 100 genome-scale metabolic networks (GSMNs) have been published in recent years and are widely used for phenotype prediction and pathway design. However, GSMNs for a specific organism reconstructed by different research groups usually produce inconsistent simulation results, which makes it difficult to use them for precise optimal pathway design. Therefore, it is necessary to compare and identify the discrepancies among networks and build a consensus metabolic network for an organism. Here we propose a process for the systematic comparison of metabolic networks at the pathway level. We compared four published GSMNs of Pseudomonas putida KT2440 and identified the discrepancies leading to inconsistent pathway calculation results. The mistakes in the models were corrected based on information from the literature so that all the calculated synthesis and uptake pathways were the same. Subsequently we built a pathway-consensus model and then further updated it with the latest genome annotation information to obtain model PpuQY1140 for P. putida KT2440, which includes 1140 genes, 1171 reactions and 1104 metabolites. We found that even small errors in a GSMN can have great impacts on the calculated optimal pathways and thus may lead to incorrect pathway design strategies. Careful investigation of the calculated pathways during the metabolic network reconstruction process is therefore essential for building proper GSMNs for pathway design.
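
    Such a systematic comparison lends itself to scripting with a constraint-based modeling toolkit. The sketch below uses the third-party cobrapy package (not necessarily the authors' tool); the file names and the exchange-reaction identifier are hypothetical placeholders, since published models often name the same reaction differently.

        import cobra

        def compare_models(sbml_paths, exchange_id="EX_lys_e"):
            # Compute the optimal flux through one common exchange reaction in each
            # published GSMN; discrepant optima point to pathway-level errors.
            results = {}
            for path in sbml_paths:
                model = cobra.io.read_sbml_model(path)
                model.objective = exchange_id  # hypothetical identifier
                results[path] = model.optimize().objective_value
            return results

        # e.g. compare_models(["model_a.xml", "model_b.xml", "model_c.xml", "model_d.xml"])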

  3. Efficient Optimization Methods for Communication Network Planning and Assessment

    OpenAIRE

    Kiese, Moritz

    2010-01-01

    In this work, we develop efficient mathematical planning methods to design communication networks. First, we examine future technologies for optical backbone networks. As new, more intelligent nodes cause higher dynamics in the transport networks, fast planning methods are required. To this end, we develop a heuristic planning algorithm. The evaluation of the cost-efficiency of new, adaptive transmission techniques comprises the second topic of this section. In the second part of this work, ...

  4. Methods for the reconstruction of large scale anisotropies of the cosmic ray flux

    Energy Technology Data Exchange (ETDEWEB)

    Over, Sven

    2010-01-15

    In cosmic ray experiments the arrival directions, among other properties, of cosmic ray particles from detected air shower events are reconstructed. The question of uniformity in the distribution of arrival directions is of great importance for models that try to explain cosmic radiation. In this thesis, methods for the reconstruction of the parameters of a dipole-like flux distribution of cosmic rays from a set of recorded air shower events are studied. Different methods are presented and examined by means of detailed Monte Carlo simulations. Particular focus is put on the implications of spurious experimental effects. Modifications of existing methods and new methods are proposed. The main goal of this thesis is the development of the horizontal Rayleigh analysis method. Unlike other methods, this method is based on the analysis of local viewing directions instead of global sidereal directions. As a result, the symmetries of the experimental setup can be better utilised. The calculation of the sky coverage (exposure function) is not necessary in this analysis. The performance of the method is tested by means of further Monte Carlo simulations. The new method performs similarly well as, or only marginally worse than, established methods under ideal measurement conditions. However, the simulation of certain experimental effects can cause substantial misestimations of the dipole parameters by the established methods, whereas the new method produces no systematic deviations. The invulnerability to certain effects offers additional advantages, as certain data selection cuts become dispensable. (orig.)

  5. A multi-thread scheduling method for 3D CT image reconstruction using multi-GPU.

    Science.gov (United States)

    Zhu, Yining; Zhao, Yunsong; Zhao, Xing

    2012-01-01

    We present the concept that the complete reconstruction of a CT image should be treated as a whole process that includes both the computation part on GPUs and the data storage part on hard disks. From this point of view, we propose a Multi-Thread Scheduling (MTS) method to implement 3D CT image reconstruction, for example with the FDK algorithm, and to trade off computing time against storage time. In this method we use multiple threads to control the GPUs and a separate thread to accomplish data storage, so that calculation and data storage proceed simultaneously. In addition, we use 4-channel textures to maintain symmetrical projection data in the CUDA framework, which reduces the calculation time significantly. Numerical experiments show that the time for the whole process with our method is almost the same as the data storage time alone.
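
    The scheduling idea can be sketched in a few lines: computation pushes finished sub-volumes into a bounded queue while a dedicated thread drains it to disk, so that calculation and storage overlap. In this sketch a NumPy stand-in replaces the CUDA kernels, and the slab sizes, file names and thread counts are illustrative assumptions.

        import queue
        import threading
        import numpy as np

        def reconstruct_slab(slab_id):
            # Stand-in for a per-GPU FDK backprojection kernel.
            return slab_id, np.zeros((16, 512, 512), dtype=np.float32)

        def storage_worker(q):
            # Dedicated thread: overlaps disk I/O with ongoing computation.
            while True:
                item = q.get()
                if item is None:
                    break
                slab_id, volume = item
                np.save("slab_%03d.npy" % slab_id, volume)

        out_q = queue.Queue(maxsize=4)     # bounded so computation cannot outrun the disk
        writer = threading.Thread(target=storage_worker, args=(out_q,))
        writer.start()
        for sid in range(8):               # one computing thread per GPU in the real setup
            out_q.put(reconstruct_slab(sid))
        out_q.put(None)                    # sentinel: no more slabs
        writer.join()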

  6. Reconstruction methods for sound visualization based on acousto-optic tomography

    DEFF Research Database (Denmark)

    Torras Rosell, Antoni; Lylloff, Oliver; Barrera Figueroa, Salvador

    2013-01-01

    The visualization of acoustic fields using acousto-optic tomography has recently proved to yield satisfactory results in the audible frequency range. The current implementation of this visualization technique uses a laser Doppler vibrometer (LDV) to measure the acousto-optic effect, that is, the interaction between sound and light, over an aperture where the acoustic field is to be investigated. By identifying the relationship between the apparent velocity of the LDV and the Radon transform of the acoustic field, it is possible to reconstruct the sound pressure distribution of the scanned area using tomographic techniques. The filtered back projection (FBP) method is the most popular reconstruction algorithm used for tomography in many fields of science. The present study takes the performance of the FBP method in sound visualization as a reference and investigates the use of alternative methods commonly...

  7. Response surface methodology and improved interval analysis method--for analyzing uncertainty in accident reconstruction.

    Science.gov (United States)

    Zou, Tiefang; Cai, Ming; Shu, Xiong

    2012-10-10

    Methods for calculating intervals of accident reconstruction results are a research hotspot worldwide. The response surface methodology-interval analysis method (RSM-IAM) is a useful method for analyzing the uncertainty of simulation results in this field, but it suffers from two problems: the interval extension problem and inaccurate response surface models. In order to tackle these two problems, an improved interval analysis method (RSM-IIAM) is proposed, based on subinterval analysis and response surface methodology. In RSM-IIAM, the stepwise regression technique is used to obtain a reasonable response surface model of the simulation model; the intervals of uncertain parameters are then divided into several subintervals, after which the intervals of simulation results in accident reconstruction are calculated from these subintervals. Finally, four numerical cases are given. Results show that RSM-IIAM is simple and highly accurate, which will be useful in analyzing the uncertainty of simulation results in accident reconstruction. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
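
    The subinterval idea can be illustrated as follows: given a fitted response surface, split each uncertain parameter's interval into subintervals and accumulate the surface's range over the resulting grid, which curbs the overestimation of naive one-shot interval extension. The quadratic surface and the parameter ranges below are hypothetical, and sampling the subinterval grid only approximates the exact bounds.

        import itertools
        import numpy as np

        def interval_by_subintervals(surface, lo, hi, n_sub=4):
            # Evaluate the response surface on the grid spanned by the subinterval
            # endpoints of every parameter and keep the running min/max.
            edges = [np.linspace(l, h, n_sub + 1) for l, h in zip(lo, hi)]
            fmin, fmax = np.inf, -np.inf
            for corner in itertools.product(*edges):
                v = surface(np.asarray(corner))
                fmin, fmax = min(fmin, v), max(fmax, v)
            return fmin, fmax

        # Hypothetical second-order response surface of an impact-speed simulation:
        surface = lambda x: 30.0 + 2.5 * x[0] - 1.2 * x[1] + 0.8 * x[0] * x[1]
        print(interval_by_subintervals(surface, lo=[0.6, 0.3], hi=[0.9, 0.7]))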

  8. Evaluation of two methods for using MR information in PET reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Caldeira, L., E-mail: llcaldeira@fc.ul.pt [University of Lisbon, Faculty of Sciences, Institute of Biophysics and Biomedical Engineering (IBEB), Campo Grande 1749-016 Lisboa (Portugal); Siemens Healthcare Portugal (Portugal); Scheins, J. [Institute of Neuroscience and Medicine, Forschungszentrum Juelich GmbH, D-52425 Juelich (Germany); Almeida, P. [University of Lisbon, Faculty of Sciences, Institute of Biophysics and Biomedical Engineering (IBEB), Campo Grande 1749-016 Lisboa (Portugal); Herzog, H. [Institute of Neuroscience and Medicine, Forschungszentrum Juelich GmbH, D-52425 Juelich (Germany)

    2013-02-21

    Using magnetic resonance (MR) information in maximum a posteriori (MAP) algorithms for positron emission tomography (PET) image reconstruction has been investigated in recent years. Recently, three methods of introducing this information were evaluated and the Bowsher prior was considered the best. Its main advantage is that it does not require image segmentation. Another method that has been widely used for incorporating MR information is the use of boundaries obtained by segmentation, which has also shown improvements in image quality. In this paper, these two methods for incorporating MR information in PET reconstruction are compared. After a Bayes parameter optimization, the reconstructed images were compared using the mean squared error (MSE) and the coefficient of variation (CV). MSE values are 3% lower with the Bowsher prior than with boundaries, and CV values are 10% lower with the Bowsher prior than with boundaries. Both methods performed better in terms of MSE and CV than using no prior, that is, maximum likelihood expectation maximization (MLEM) or MAP without anatomic information. In conclusion, incorporating MR information using the Bowsher prior gives better results in terms of MSE and CV than using boundaries. MAP algorithms again proved effective in noise reduction and convergence, especially when MR information is incorporated. The robustness of the priors with respect to noise and inhomogeneities in the MR image, however, still has to be evaluated.

  9. A singular-value method for reconstruction of nonradial and lossy objects.

    Science.gov (United States)

    Jiang, Wei; Astheimer, Jeffrey; Waag, Robert

    2012-03-01

    Efficient inverse scattering algorithms for nonradial lossy objects are presented using singular-value decomposition to form reduced-rank representations of the scattering operator. These algorithms extend eigenfunction methods that are not applicable to nonradial lossy scattering objects because the scattering operators for these objects do not have orthonormal eigenfunction decompositions. A method of local reconstruction by segregation of scattering contributions from different local regions is also presented. Scattering from each region is isolated by forming a reduced-rank representation of the scattering operator that has domain and range spaces comprised of far-field patterns with retransmitted fields that focus on the local region. Methods for the estimation of the boundary, average sound speed, and average attenuation slope of the scattering object are also given. These methods yielded approximations of scattering objects that were sufficiently accurate to allow residual variations to be reconstructed in a single iteration. Calculated scattering from a lossy elliptical object with a random background, internal features, and white noise is used to evaluate the proposed methods. Local reconstruction yielded images with spatial resolution that is finer than a half wavelength of the center frequency and reproduces sound speed and attenuation slope with relative root-mean-square errors of 1.09% and 11.45%, respectively.

  10. Wavefront reconstruction method based on wavelet fractal interpolation for coherent free space optical communication

    Science.gov (United States)

    Zhang, Dai; Hao, Shiqi; Zhao, Qingsong; Zhao, Qi; Wang, Lei; Wan, Xiongfeng

    2018-03-01

    Existing wavefront reconstruction methods are usually low in resolution, restricted by the structural characteristics of the Shack-Hartmann wavefront sensor (SH WFS) and the deformable mirror (DM) in the adaptive optics (AO) system, resulting in weak homodyne detection efficiency for free space optical (FSO) communication. In order to solve this problem, we first validate the feasibility of using a liquid crystal spatial light modulator (LC SLM) in an AO system. A wavefront reconstruction method based on wavelet fractal interpolation is then proposed, following a self-similarity analysis of the wavefront distortion caused by atmospheric turbulence. Fast wavelet decomposition is applied for multiresolution analysis of the wavefront phase spectrum, during which soft-threshold denoising is carried out. The resolution of the estimated wavefront phase is then improved by fractal interpolation. Finally, fast wavelet reconstruction is used to recover the wavefront phase. Simulation results reflect the superiority of our method in homodyne detection: compared with the minimum variance estimation (MVE) method based on interpolation techniques, the proposed method obtains superior homodyne detection efficiency with lower computational complexity. Our research findings have theoretical significance for the design of coherent FSO communication systems.
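
    The decomposition-and-denoising step can be sketched with the third-party PyWavelets package; the wavelet family, decomposition level and threshold below are illustrative, and the fractal interpolation stage that upsamples the phase estimate is omitted.

        import pywt

        def denoise_phase(phase, wavelet="db4", level=3, thr=0.1):
            # Multiresolution analysis of the wavefront phase with soft-threshold
            # denoising of the detail coefficients, then fast reconstruction.
            coeffs = pywt.wavedec2(phase, wavelet, level=level)
            denoised = [coeffs[0]] + [
                tuple(pywt.threshold(d, thr, mode="soft") for d in detail)
                for detail in coeffs[1:]
            ]
            return pywt.waverec2(denoised, wavelet)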

  11. Reconstruction and analysis of the genetic and metabolic regulatory networks of the central metabolism of Bacillus subtilis

    Directory of Open Access Journals (Sweden)

    Aymerich Stéphane

    2008-02-01

    Background: Few genome-scale models of organisms focus on the regulatory networks and none of them integrates all known levels of regulation. In particular, the regulations involving metabolite pools are often neglected. However, metabolite pools link the metabolic to the genetic network through genetic regulations, including those involving effectors of transcription factors or riboswitches. Consequently, they play pivotal roles in the global organization of the genetic and metabolic regulatory networks. Results: We report the manually curated reconstruction of the genetic and metabolic regulatory networks of the central metabolism of Bacillus subtilis (transcriptional, translational and post-translational regulations and modulation of enzymatic activities). We provide a systematic graphic representation of the regulations of each metabolic pathway based on the central role of metabolites in regulation. We show that the complex regulatory network of B. subtilis can be decomposed into sets of locally regulated modules, which are coordinated by global regulators. Conclusion: This work reveals the strong involvement of metabolite pools in the general regulation of the metabolic network. Breaking the metabolic network down into modules based on the control of metabolite pools reveals the functional organization of the genetic and metabolic regulatory networks of B. subtilis.

  12. A Method for Automated Planning of FTTH Access Network Infrastructures

    DEFF Research Database (Denmark)

    Riaz, Muhammad Tahir; Pedersen, Jens Myrup; Madsen, Ole Brun

    2005-01-01

    In this paper a method for automated planning of Fiber to the Home (FTTH) access networks is proposed. We introduce a systematic approach for planning access network infrastructure. GIS data and a set of algorithms are employed to make the planning process more automatic. The method explains...

  13. DETECTING NETWORK ATTACKS IN COMPUTER NETWORKS BY USING DATA MINING METHODS

    OpenAIRE

    Platonov, V. V.; Semenov, P. O.

    2016-01-01

    The article describes an approach to the development of an intrusion detection system for computer networks. It is shown that the use of several data mining methods and tools can improve the efficiency of protecting computer networks against network attacks, owing to the combination of the benefits of signature-based detection and anomaly detection and the possibility of adapting the system to the hardware and software structure of the computer network.

  14. Anomaly-based Network Intrusion Detection Methods

    Directory of Open Access Journals (Sweden)

    Pavel Nevlud

    2013-01-01

    The article deals with the detection of network anomalies. Network anomalies include everything that deviates markedly from normal operation. Machine learning systems were used for the detection of anomalies. Machine learning can be considered a supporting or limited type of artificial intelligence. A machine learning system usually starts with some knowledge and a corresponding knowledge organization so that it can interpret, analyse, and test the knowledge acquired. Several machine learning techniques are available; we tested decision tree learning and Bayesian networks. The open-source data-mining framework WEKA, a collection of machine learning algorithms for data mining tasks, was the tool we used for testing the classification, clustering, and association algorithms and for visualizing our results.

  15. Automatic Calibration of Hydrological Models in the Newly Reconstructed Catchments: Issues, Methods and Uncertainties

    Science.gov (United States)

    Nazemi, Alireza; Elshorbagy, Amin

    2010-05-01

    The use of optimisation methods has a long tradition in the calibration of conceptual hydrological models; nevertheless, most previous investigations have been made in catchments with a long period of data collection and only with respect to runoff information. The present study focuses on the automatic calibration of hydrological models using the states (i.e., soil moisture) as well as the fluxes (i.e., AET) in a prototype catchment, in which an intensive gauging network collects a variety of catchment variables, yet only a short period of data is available. First, the characteristics of such a calibration attempt are highlighted and discussed and a number of research questions are proposed. Then, four different optimisation methods, i.e., Latin Hypercube Sampling, Shuffled Complex Evolution Metropolis, Multi-Objective Shuffled Complex Evolution Metropolis and Non-dominated Sort Genetic Algorithm II, are considered and applied for the automatic calibration of the GSDW model in a newly reconstructed oil-sands catchment in northern Alberta, Canada. It is worth mentioning that the original GSDW model had to be translated into MATLAB to enable automatic calibration. Different conceptualisation scenarios are generated and calibrated. The calibration results are analysed and compared in terms of the optimality and the quality of solutions. The concepts of multi-objectivity and lack of identifiability are addressed in the calibration solutions and the best calibration algorithm is selected based on the error in representing the soil moisture content in different layers. The current study also considers uncertainties that might occur in the formulation of the calibration process by considering different calibration scenarios using the same model and dataset. The interactions among accuracy, identifiability, and model parsimony are addressed and discussed. The present investigation concludes that the calibration of...

  16. [Anatomic and histological characteristics of vagina reconstructed by McIndoe method].

    Science.gov (United States)

    Kozarski, Jefta; Vesanović, Svetlana; Bogdanović, Zoran

    2009-02-01

    Congenital absence of the vagina has been known since ancient Greek times. According to the literature, the incidence is 1/4,000 to 1/20,000. Treatment of this anomaly includes non-operative and operative procedures. The McIndoe procedure uses a split skin graft by Thiersch. The aim of this study was to establish the anatomic and histological characteristics of the vagina reconstructed by the McIndoe method in Mayer-Rokitansky-Küster-Hauser (MRKH) syndrome and compare them with those of the normal vagina. The study included 21 patients aged 18 years and over with the congenital anomaly known as aplasio vaginae within MRKH syndrome. The patients were operated on by a plastic surgeon using the McIndoe method. The study was a retrospective review of data from the disease histories, objective and gynecological examinations, and cytological analysis of native preparations of vaginal smears (Papanicolaou stain). For comparison, 21 females aged 18 years and over with normal vaginas were also studied. All the subjects were divided into groups R (reconstructed) and C (control) and into subgroups according to age: up to 30 years (1R, 1C), from 30 to 50 (2R, 2C), and over 50 (3R, 3C). Statistical data processing was performed using Student's t-test and the Mann-Whitney U-test. A value of p < 0.05 was considered statistically significant. The reconstructed vagina was smaller than the normal vagina, but the obtained values are still in the range of normal ones. Cytological differences between a reconstructed and the normal vagina were found. A reconstructed vagina is smaller than the normal one regarding depth and width, but within the range of normal values. The split skin graft used in the reconstruction keeps its own cytological, i.e., histological and thus biological, characteristics.

  17. A new method for reconstruction of the larynx after vertical partial resections.

    Science.gov (United States)

    Eló, J; Horváth, E; Késmárszky, R

    2000-01-01

    The indications and problems of organ-preserving vertical partial laryngectomy (VPL) in cases of T1b glottic or T2 glottic and subglottic cancers are well known. The first and imperative requirement for the surgeon is adequate resection of the tumor, while the second prerequisite is safe and successful correction of the excised portion of the anterolateral wall of the larynx. Since reconstruction of the defect can pose significant challenges for surgeons, the main requirements are an adequate lumen for breathing, a smooth surface for epithelialization, voice restoration and good deglutition. Krajina's method for reconstruction of the larynx utilizes pedicled sternohyoid fascia, which is thin, elastic, well adaptable to defects, and resistant to infection and saliva. By providing a large surface for covering defects, granulations and synechiae can be prevented. We now use the superficial fascia colli as a new method for the reconstruction of laryngeal defects after frontolateral partial resections. The technique was first refined experimentally in dogs: a Leroux-Robert partial laryngectomy was carried out on five animals and laterally pedicled fascia was sutured to the edge of the defect created. At 2-week intervals through 8 weeks after the operation, fixation, vascularization and epithelialization were examined histologically. To date, clinical reconstruction with the fascial flap has been used in 29 cases. Because the flap has a very low metabolism, no necrosis was seen. Functional results for respiration, phonation and swallowing have been good. These findings show that laterally pedicled fascia with the bipedicled sternohyoid muscles can play an important role in laryngeal reconstruction.

  18. Reconstructing uniformly attenuated rotating slant-hole SPECT projection data using the DBH method

    Science.gov (United States)

    Huang, Qiu; Xu, Jingyan; Tsui, Benjamin M. W.; Gullberg, Grant T.

    2009-07-01

    This work applies a previously developed analytical algorithm to the reconstruction problem in a rotating multi-segment slant-hole (RMSSH) SPECT system. The RMSSH collimator has greater detection efficiency than the parallel-hole collimator with comparable spatial resolution, at the expense of a limited common volume-of-view (CVOV), and is therefore suitable for detecting low-contrast lesions in breast, cardiac and brain imaging. The absorption of gamma photons in both the human breast and brain can be assumed to follow an exponential rule with a constant attenuation coefficient. In this work, the RMSSH SPECT data of a digital NCAT phantom with breast attachment are modeled as the uniformly attenuated Radon transform of the activity distribution. These data are reconstructed using an analytical algorithm called the DBH method, an acronym for the procedure of differentiated backprojection followed by a finite weighted inverse Hilbert transform. The projection data are first differentiated along a specific direction in the projection space and then backprojected into the image space. The result of this first step equals a one-dimensional finite weighted Hilbert transform of the object; this transform is then numerically inverted to obtain the reconstructed image. With the limited CVOV of the RMSSH collimator, the detector captures gamma photon emissions from the breast and from parts of the torso. The simulation results show that the DBH method is capable of exactly reconstructing the activity within a well-defined region-of-interest (ROI) within the breast if the activity is confined to the breast or if the activity outside the CVOV is uniformly attenuated for each measured projection, whereas a conventional filtered backprojection algorithm only reconstructs the high-frequency components of the activity function in the same geometry.

  19. Mean field methods for cortical network dynamics

    DEFF Research Database (Denmark)

    Hertz, J.; Lerchner, Alexander; Ahmadi, M.

    2004-01-01

    We review the use of mean field theory for describing the dynamics of dense, randomly connected cortical circuits. For a simple network of excitatory and inhibitory leaky integrate-and-fire neurons, we can show how the firing irregularity, as measured by the Fano factor, increases with the strength of the synapses in the network and with the value to which the membrane potential is reset after a spike. Generalizing the model to include conductance-based synapses gives insight into the connection between the firing statistics and the high-conductance state observed experimentally in visual cortex.

  20. Evaluating image reconstruction methods for tumor detection performance in whole-body PET oncology imaging

    Science.gov (United States)

    Lartizien, Carole; Kinahan, Paul E.; Comtat, Claude; Lin, Michael; Swensson, Richard G.; Trebossen, Regine; Bendriem, Bernard

    2000-04-01

    This work presents initial results from observer detection performance studies using the same volume visualization software tools that are used in clinical PET oncology imaging. Research into the FORE+OSEM and FORE+AWOSEM statistical image reconstruction methods tailored to whole-body 3D PET oncology imaging has indicated potential improvements in image SNR compared with the currently used analytic reconstruction methods (FBP). To assess the resulting impact of these reconstruction methods on the performance of human observers in detecting and localizing tumors, we use a non-Monte Carlo technique to generate multiple statistically accurate realizations of 3D whole-body PET data, based on an extended MCAT phantom and with clinically realistic levels of statistical noise. For each realization, we add a fixed number of randomly located 1 cm diameter lesions whose contrast is varied among pre-calibrated values so that the range of true positive fractions is well sampled. The observer is told the number of tumors and, similar to the AFROC method, asked to localize all of them. The true positive fraction for the three algorithms (FBP, FORE+OSEM, FORE+AWOSEM) as a function of lesion contrast is then calculated, although other protocols could be compared. A confidence level for each tumor is also recorded for incorporation into later AFROC analysis.

  1. A Reconstruction Method of Blood Flow Velocity in Left Ventricle Using Color Flow Ultrasound

    Directory of Open Access Journals (Sweden)

    Jaeseong Jang

    2015-01-01

    Vortex flow imaging is a relatively new medical imaging method for the dynamic visualization of intracardiac blood flow, a potentially useful index of cardiac dysfunction. A reconstruction method is proposed here to quantify the distribution of blood flow velocity fields inside the left ventricle from color flow images compiled from ultrasound measurements. In this paper, a 2D incompressible Navier-Stokes equation with a mass source term is proposed to utilize the measurable color flow ultrasound data in a plane along with the moving boundary condition. The proposed model reflects out-of-plane blood flows on the imaging plane through the mass source term. The boundary conditions to solve the system of equations are derived from the dimensions of the ventricle extracted from 2D echocardiography data. The performance of the proposed method is evaluated numerically using synthetic flow data acquired from simulating left ventricle flows. The numerical simulations show the feasibility and potential usefulness of the proposed method of reconstructing the intracardiac flow fields. Of particular note is the finding that the mass source term in the proposed model improves the reconstruction performance.

  2. Energy reconstruction methods for large coplanar quad-grid CdZnTe detectors

    Energy Technology Data Exchange (ETDEWEB)

    Arling, Jan-Hendrik; Goessling, Claus; Kroeninger, Kevin [TU Dortmund, Experimentelle Physik IV, Dortmund (Germany)

    2016-07-01

    The COBRA experiment will search for neutrinoless double beta-decay (0νββ) using CdZnTe semiconductor detectors. Currently a demonstrator setup consisting of 64 coplanar-grid (CPG) CdZnTe detectors with a volume of (1 x 1 x 1) cm³ each is in operation at the Gran Sasso Underground Laboratory (LNGS). The next step for the experiment will be the installation of an array of nine CdZnTe detectors, each with a volume of (2 x 2 x 1.5) cm³ and four CPG sectors with parallel readout. Advantages of these larger detectors are a higher full-energy detection efficiency and a better surface-to-volume ratio. Up to now, the reconstruction schemes developed for the 1 cm³ detectors have also been used for the 6 cm³ detectors. Consequently, the potential for improvements in the energy reconstruction is investigated. An important topic in this context is the reconstruction of the interaction depth, which is possible due to the coplanar-grid design. In this talk the newest results of the investigation of the reconstruction methods for the 6 cm³ detectors are presented and discussed.

  3. A comparison of reconstruction methods for undersampled atomic force microscopy images

    Science.gov (United States)

    Luo, Yufan; Andersson, Sean B.

    2015-12-01

    Non-raster scanning and undersampling of atomic force microscopy (AFM) images is a technique for improving imaging rate and reducing the amount of tip-sample interaction needed to produce an image. Generation of the final image can be done using a variety of image processing techniques based on interpolation or optimization. The choice of reconstruction method has a large impact on the quality of the recovered image and the proper choice depends on the sample under study. In this work we compare interpolation through the use of inpainting algorithms with reconstruction based on optimization through the use of the basis pursuit algorithm commonly used for signal recovery in compressive sensing. Using four different sampling patterns found in non-raster AFM, namely row subsampling, spiral scanning, Lissajous scanning, and random scanning, we subsample data from existing images and compare reconstruction performance against the original image. The results illustrate that inpainting generally produces superior results when the image contains primarily low frequency content while basis pursuit is better when the images have mixed, but sparse, frequency content. Using support vector machines, we then classify images based on their frequency content and sparsity and, from this classification, develop a fast decision strategy to select a reconstruction algorithm to be used on subsampled data. The performance of the classification and decision test are demonstrated on test AFM images.
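
    For the optimization route, the following sketch recovers an undersampled image by seeking coefficients that are sparse in a 2D DCT basis and agree with the measured pixels. It substitutes iterative soft thresholding (ISTA) for an LP-based basis pursuit solver, so it illustrates the idea rather than the exact algorithm compared in the paper; the mask convention, penalty and iteration count are assumptions.

        import numpy as np
        from scipy.fftpack import dct, idct

        def ista_recover(samples, mask, lam=0.1, n_iter=300):
            # samples: image with unmeasured pixels set to zero; mask: 1 where measured.
            dct2 = lambda a: dct(dct(a, axis=0, norm="ortho"), axis=1, norm="ortho")
            idct2 = lambda a: idct(idct(a, axis=0, norm="ortho"), axis=1, norm="ortho")
            c = np.zeros_like(samples, dtype=float)
            for _ in range(n_iter):
                resid = mask * (idct2(c) - samples)                # misfit on measured pixels
                c = c - dct2(resid)                                # gradient step; step size 1 is
                                                                   # safe since the operator norm <= 1
                c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)  # soft threshold
            return idct2(c)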

  4. Streaming video-based 3D reconstruction method compatible with existing monoscopic and stereoscopic endoscopy systems

    Science.gov (United States)

    Bouma, Henri; van der Mark, Wannes; Eendebak, Pieter T.; Landsmeer, Sander H.; van Eekeren, Adam W. M.; ter Haar, Frank B.; Wieringa, F. Pieter; van Basten, Jean-Paul

    2012-06-01

    Compared to open surgery, minimally invasive surgery offers reduced trauma and faster recovery. However, the lack of a direct view limits space perception. Stereo-endoscopy improves depth perception, but is still restricted to the direct endoscopic field-of-view. We describe a novel technology that reconstructs 3D panoramas from endoscopic video streams, providing a much wider cumulative overview. The method is compatible with any endoscope. We demonstrate that it is possible to generate photorealistic 3D environments from mono- and stereoscopic endoscopy. The resulting 3D reconstructions can be directly applied in simulators and e-learning. Extended to real-time processing, the method looks promising for telesurgery or other remote vision-guided tasks.

  5. Missing texture reconstruction method based on error reduction algorithm using Fourier transform magnitude estimation scheme.

    Science.gov (United States)

    Ogawa, Takahiro; Haseyama, Miki

    2013-03-01

    A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme for Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring the errors to which the ER algorithm converges, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. The Fourier transform magnitude of the target patch is then estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.
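
    The core phase-retrieval loop is compact enough to sketch: alternate between enforcing the estimated magnitude in the Fourier domain and restoring the known intensities in the image domain. The sketch assumes the magnitude of the complete patch has already been estimated from similar known patches, as described above; the names and iteration count are illustrative.

        import numpy as np

        def er_inpaint(patch, known_mask, magnitude, n_iter=200):
            # patch: observed patch with zeros in the missing areas;
            # magnitude: estimated Fourier transform magnitude of the complete patch.
            x = patch.copy()
            for _ in range(n_iter):
                F = np.fft.fft2(x)
                # Fourier-domain constraint: keep the phase, impose the magnitude.
                F = magnitude * np.exp(1j * np.angle(F))
                x = np.real(np.fft.ifft2(F))
                # Image-domain constraint: restore the known intensities.
                x[known_mask] = patch[known_mask]
            return x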

  6. A 3D terrain reconstruction method of stereo vision based quadruped robot navigation system

    Science.gov (United States)

    Ge, Zhuo; Zhu, Ying; Liang, Guanhao

    2017-01-01

    To provide 3D environment information for the autonomous navigation system of a quadruped robot walking through rough terrain, a novel stereo-vision-based 3D terrain reconstruction method is presented. To address the problems that images collected by stereo sensors contain large regions of similar grayscale and that image matching has poor real-time performance, the watershed algorithm and the fuzzy c-means clustering algorithm are combined for contour extraction. To reduce mismatches, a dual constraint combining region matching and pixel matching is established for matching optimization. From the stereo-matched edge pixel pairs, 3D coordinates are estimated according to the binocular stereo vision imaging model. Experimental results show that the proposed method yields a high stereo matching ratio and reconstructs 3D scenes quickly and efficiently.

  7. An image segmentation method based on network clustering model

    Science.gov (United States)

    Jiao, Yang; Wu, Jianshe; Jiao, Licheng

    2018-01-01

    Network clustering phenomena are ubiquitous in nature and human society. In this paper, a method involving a network clustering model is proposed for mass segmentation in mammograms. First, the watershed transform is used to divide an image into regions, and features of the image are computed. Then a graph is constructed from the obtained regions and features. The network clustering model is applied to realize clustering of nodes in the graph. Compared with two classic methods, the algorithm based on the network clustering model performs more effectively in experiments.
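
    A minimal sketch of such a pipeline, assuming a grayscale image as a 2D array: watershed oversegmentation, a region graph weighted by mean-intensity similarity, and community detection standing in for the paper's network clustering model (which this sketch does not reproduce exactly). It relies on the third-party scikit-image and NetworkX packages.

        import numpy as np
        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities
        from skimage.filters import sobel
        from skimage.segmentation import watershed

        def segment_by_clustering(image, n_markers=200):
            # Step 1: oversegment the gradient image with the watershed transform.
            labels = watershed(sobel(image), markers=n_markers, compactness=0.001)
            # Step 2: one node per region; edges weighted by feature similarity.
            means = {r: image[labels == r].mean() for r in np.unique(labels)}
            G = nx.Graph()
            h, w = labels.shape
            for y in range(h - 1):
                for x in range(w - 1):
                    for a, b in ((labels[y, x], labels[y + 1, x]),
                                 (labels[y, x], labels[y, x + 1])):
                        if a != b:
                            G.add_edge(a, b, weight=1.0 / (1e-3 + abs(means[a] - means[b])))
            # Step 3: cluster the nodes; each community is one segment.
            return list(greedy_modularity_communities(G, weight="weight"))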

  8. Mixed Methods Analysis of Enterprise Social Networks

    DEFF Research Database (Denmark)

    Behrendt, Sebastian; Richter, Alexander; Trier, Matthias

    2014-01-01

    The increasing use of enterprise social networks (ESN) generates vast amounts of data, giving researchers and managerial decision makers unprecedented opportunities for analysis. However, more transparency about the available data dimensions and how these can be combined is needed to yield accurate...

  9. Paediatric cardiac CT examinations: impact of the iterative reconstruction method ASIR on image quality - preliminary findings

    Energy Technology Data Exchange (ETDEWEB)

    Mieville, Frederic A. [University Hospital Center and University of Lausanne, Institute of Radiation Physics, Lausanne (Switzerland); University Hospital Center and University of Lausanne, Institute of Radiation Physics - Medical Radiology, Lausanne (Switzerland); Gudinchet, Francois; Rizzo, Elena [University Hospital Center and University of Lausanne, Department of Radiology, Lausanne (Switzerland); Ou, Phalla; Brunelle, Francis [Necker Children's Hospital, Department of Radiology, Paris (France); Bochud, Francois O.; Verdun, Francis R. [University Hospital Center and University of Lausanne, Institute of Radiation Physics, Lausanne (Switzerland)

    2011-09-15

    Radiation dose exposure is of particular concern in children due to the possible harmful effects of ionizing radiation. The adaptive statistical iterative reconstruction (ASIR) method is a promising new technique that reduces image noise and produces better overall image quality compared with routine-dose contrast-enhanced methods. To assess the benefits of ASIR on diagnostic image quality in paediatric cardiac CT examinations, four paediatric radiologists based at two major hospitals evaluated ten low-dose paediatric cardiac examinations (80 kVp, CTDIvol 4.8-7.9 mGy, DLP 37.1-178.9 mGy·cm). The average age of the cohort studied was 2.6 years (range 1 day to 7 years). Acquisitions were performed on a 64-MDCT scanner. All images were reconstructed at various ASIR percentages (0-100%). For each examination, radiologists scored 19 anatomical structures using the relative visual grading analysis method. To estimate the potential for dose reduction, acquisitions were also performed on a Catphan phantom and a paediatric phantom. The best image quality for all clinical images was obtained with 20% and 40% ASIR (p < 0.001), whereas with ASIR above 50%, image quality significantly decreased (p < 0.001). With 100% ASIR, a strong noise-free appearance of the structures reduced image conspicuity. A potential for dose reduction of about 36% is predicted for a 2- to 3-year-old child when using 40% ASIR rather than the standard filtered back-projection method. Reconstruction including 20% to 40% ASIR slightly improved the conspicuity of various paediatric cardiac structures in newborns and children with respect to conventional reconstruction (filtered back-projection) alone. (orig.)

  10. A simple method for reconstruction of severely damaged primary anterior teeth

    OpenAIRE

    Eshghi, Alireza; Esfahan, Raha Kowsari; Khoroushi, Maryam

    2011-01-01

    Restoration of severely decayed primary anterior teeth is often considered a special challenge by pedodontists. This case report presents a 5-year-old boy with a severely damaged maxillary right canine. Subsequent to root canal treatment, a reversed (upside-down) metal post was placed into the canal and a composite build-up was performed. This new method offers a simple, practical and effective procedure for the reconstruction of severely decayed primary anterior teeth, which re-establishes function ...

  11. Quantitative comparison of reconstruction methods for intra-voxel fiber recovery from diffusion MRI

    OpenAIRE

    Daducci, Alessandro; Canales-Rodríguez, Erick Jorge; Descoteaux, Maxime; Garyfallidis, Eleftherios; Gur, Yaniv; Lin, Ying-Chia; Mani, Merry; Merlet, Sylvain; Paquette, Michael; Ramirez-Manzanares, Alonso; Reisert, Marco; Rodrigues, Paulo Reis; Sepehrband, Farshid; Jacob, Mathews; Caruyer, Emmanuel

    2014-01-01

    Validation is arguably the bottleneck in the diffusion MRI community. This paper evaluates and compares 20 algorithms for recovering the local intra-voxel fiber structure from diffusion MRI data and is based on the results of the "HARDI reconstruction challenge" organized in the context of the "ISBI 2012" conference. Evaluated methods encompass a mixture of classical techniques well-known in the literature such as Diffusion Tensor, Q-Ball and Diffusion Spectrum imaging, algorithms inspired by...

  13. Dynamic baseline detection method for power data network service

    Science.gov (United States)

    Chen, Wei

    2017-08-01

    This paper proposes a dynamic baseline traffic detection method based on historical traffic data for the power data network. The method uses Cisco's NetFlow acquisition tool to collect the original historical traffic data from network elements at fixed intervals, and it works with three dimensions of information: communication port, time, and traffic volume (number of bytes or number of packets). By filtering the data, removing deviating values, calculating the dynamic baseline value, and comparing the actual value with the baseline value, the method can detect whether the current network traffic is abnormal.
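
    The baseline computation for a single port and time-of-day slot can be sketched directly from this description: filter out deviating historical samples, compute the baseline, and flag traffic falling outside a tolerance band. The robust filter and the band width k are illustrative assumptions.

        import numpy as np

        def dynamic_baseline(history, k=2.0):
            # history: byte counts for one port at one time-of-day slot.
            x = np.asarray(history, dtype=float)
            med = np.median(x)
            mad = np.median(np.abs(x - med)) or 1.0
            x = x[np.abs(x - med) <= 3.0 * 1.4826 * mad]   # remove deviating values
            base = x.mean()
            return base, base - k * x.std(), base + k * x.std()

        base, lo, hi = dynamic_baseline([910, 880, 950, 4200, 905, 930])
        print(base, lo <= 2500 <= hi)   # False -> a current value of 2500 is abnormal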

  14. Four-dimensional cone beam CT reconstruction and enhancement using a temporal nonlocal means method

    Energy Technology Data Exchange (ETDEWEB)

    Jia Xun; Tian Zhen; Lou Yifei; Sonke, Jan-Jakob; Jiang, Steve B. [Center for Advanced Radiotherapy Technologies and Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, California 92037 (United States); School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia 30318 (United States); Department of Radiation Oncology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX Amsterdam (Netherlands); Center for Advanced Radiotherapy Technologies and Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, California 92037 (United States)

    2012-09-15

    Purpose: Four-dimensional cone beam computed tomography (4D-CBCT) has been developed to provide respiratory phase-resolved volumetric imaging in image guided radiation therapy. Conventionally, it is reconstructed by first sorting the x-ray projections into multiple respiratory phase bins according to a breathing signal extracted either from the projection images or from external surrogates, and then reconstructing a 3D CBCT image in each phase bin independently using the FDK algorithm. This method requires an adequate number of projections for each phase, which can be achieved using a slow gantry rotation or multiple gantry rotations. An inadequate number of projections in each phase bin results in low-quality 4D-CBCT images with obvious streaking artifacts. 4D-CBCT images at different breathing phases share a lot of redundant information, because they represent the same anatomy captured at slightly different temporal points. Taking this redundancy along the temporal dimension into account can in principle facilitate the reconstruction when the number of projection images is inadequate. In this work, the authors propose two novel 4D-CBCT algorithms utilizing a temporal nonlocal means (TNLM) method: an iterative reconstruction algorithm and an enhancement algorithm. Methods: The authors define a TNLM energy term for a given set of 4D-CBCT images. Minimization of this term favors those 4D-CBCT images such that any anatomical feature at one spatial point at one phase can be found at a nearby spatial point at neighboring phases. 4D-CBCT reconstruction is achieved by minimizing a total energy containing a data fidelity term and the TNLM energy term. As for image enhancement, 4D-CBCT images generated by the FDK algorithm are enhanced by minimizing the TNLM function while keeping the enhanced images close to the FDK results. A forward-backward splitting algorithm and a Gauss-Jacobi iteration method are employed to solve the problems. The algorithms' implementation on...

  15. A new method for constructing networks from binary data

    Science.gov (United States)

    van Borkulo, Claudia D.; Borsboom, Denny; Epskamp, Sacha; Blanken, Tessa F.; Boschloo, Lynn; Schoevers, Robert A.; Waldorp, Lourens J.

    2014-08-01

    Network analysis is entering fields where network structures are unknown, such as psychology and the educational sciences. A crucial step in the application of network models lies in the assessment of network structure. Current methods either have serious drawbacks or are only suitable for Gaussian data. In the present paper, we present a method for assessing network structures from binary data. Although models for binary data are infamous for their computational intractability, we present a computationally efficient model for estimating network structures. The approach, which is based on Ising models as used in physics, combines logistic regression with model selection based on a Goodness-of-Fit measure to identify relevant relationships between variables that define connections in a network. A validation study shows that this method succeeds in revealing the most relevant features of a network for realistic sample sizes. We apply our proposed method to estimate the network of depression and anxiety symptoms from symptom scores of 1108 subjects. Possible extensions of the model are discussed.
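
    The nodewise-regression core of the approach can be sketched with scikit-learn: regress each binary variable on all others with an L1 penalty and let the nonzero coefficients define edges. The published method selects the penalty per node with a goodness-of-fit (extended BIC) criterion, whereas this sketch fixes the penalty strength for brevity.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def binary_network(X, c=0.3):
            # X: (n_subjects, n_symptoms) binary matrix.
            n = X.shape[1]
            W = np.zeros((n, n))
            for j in range(n):
                others = np.delete(np.arange(n), j)
                clf = LogisticRegression(penalty="l1", solver="liblinear", C=c)
                clf.fit(X[:, others], X[:, j])
                W[j, others] = clf.coef_.ravel()
            # "And" rule: keep an edge only if both nodewise regressions select it.
            present = (W != 0) & (W.T != 0)
            return np.where(present, (W + W.T) / 2.0, 0.0)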

  16. Progress toward the development and testing of source reconstruction methods for NIF neutron imaging.

    Science.gov (United States)

    Loomis, E N; Grim, G P; Wilde, C; Wilson, D C; Morgan, G; Wilke, M; Tregillis, I; Merrill, F; Clark, D; Finch, J; Fittinghoff, D; Bower, D

    2010-10-01

    Development of analysis techniques for neutron imaging at the National Ignition Facility is an important and difficult task for the detailed understanding of high-neutron-yield inertial confinement fusion implosions. Once developed, these methods must provide accurate images of the hot and cold fuels so that information about the implosion, such as symmetry and areal density, can be extracted. One method under development involves the numerical inversion of the pinhole image using knowledge of neutron transport through the pinhole aperture from Monte Carlo simulations. In this article we present results of source reconstructions based on simulated images that test the method's effectiveness with regard to pinhole misalignment.

  17. In Silico Genome-Scale Reconstruction and Validation of the Corynebacterium glutamicum Metabolic Network

    DEFF Research Database (Denmark)

    Kjeldsen, Kjeld Raunkjær; Nielsen, J.

    2009-01-01

    A genome-scale metabolic model of the Gram-positive bacterium Corynebacterium glutamicum ATCC 13032 was constructed comprising 446 reactions and 411 metabolites, based on the annotated genome and available biochemical information. The network was analyzed using constraint-based methods. The model was extensively validated against published flux data, and flux distribution values were found to correlate well between simulations and experiments. The split pathway of the lysine synthesis pathway of C. glutamicum was investigated, and it was found that the direct dehydrogenase variant gave a higher lysine...

  18. A DATA DRIVEN METHOD FOR BUILDING RECONSTRUCTION FROM LiDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    M. Sajadian

    2014-10-01

    Airborne laser scanning, commonly referred to as LiDAR, is a superior technology for three-dimensional data acquisition from the Earth's surface with high speed and density. Building reconstruction, one of the main applications of the LiDAR system, is considered in this study. For a 3D reconstruction of the buildings, the building points must first be separated from other points such as ground and vegetation. In this paper, a multi-agent strategy is proposed for the simultaneous extraction and segmentation of buildings from LiDAR point clouds. Height values, number of returned pulses, length of triangles, direction of normal vectors, and area are the five criteria utilized in this step. Next, the building edge points are detected using a new method named "Grid Erosion". A RANSAC-based technique is employed for edge line extraction. Regularization constraints are applied to obtain the final lines. Finally, by modelling the roofs and walls, a 3D building model is reconstructed. The results indicate that the proposed method can successfully extract buildings from LiDAR data and generate building models automatically. A qualitative and quantitative assessment of the proposed method is also provided.

  20. Iterative reconstruction method for light emitting sources based on the diffusion equation.

    Science.gov (United States)

    Slavine, Nikolai V; Lewis, Matthew A; Richer, Edmond; Antich, Peter P

    2006-01-01

    Bioluminescent imaging (BLI) of luciferase-expressing cells in live small animals is a powerful technique for investigating tumor growth, metastasis, and specific biological molecular events. Three-dimensional imaging would greatly enhance applications in biomedicine, since light-emitting cell populations could be unambiguously associated with specific organs or tissues. Any imaging approach must account for the main optical properties of biological tissue, because light emission from a distribution of sources at depth is strongly attenuated by optical absorption and scattering in tissue. Our image reconstruction method for interior sources is based on the deblurring expectation maximization method and takes both of these effects into account. To determine the boundary of the object we use the standard iterative maximum-likelihood reconstruction algorithm with an external source of diffuse light. Depth-dependent corrections were included in the reconstruction procedure to obtain a quantitative measure of light intensity, using the diffusion equation for light transport in semi-infinite turbid media with extrapolated boundary conditions.
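
    The deblurring expectation-maximization update at the heart of such methods has the multiplicative Richardson-Lucy form. A minimal 1D sketch follows; the convolution kernel stands in for the tissue blur model and is an illustrative assumption (kernel assumed normalized, measured data nonnegative).

        import numpy as np

        def em_deblur(measured, kernel, n_iter=50):
            """Multiplicative EM (Richardson-Lucy) deconvolution, 1D sketch."""
            estimate = np.full_like(measured, measured.mean())
            kernel_flipped = kernel[::-1]
            for _ in range(n_iter):
                predicted = np.convolve(estimate, kernel, mode="same")
                ratio = measured / np.maximum(predicted, 1e-12)
                estimate *= np.convolve(ratio, kernel_flipped, mode="same")
            return estimate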

  1. Space-Varying Iterative Restoration of Diffuse Optical Tomograms Reconstructed by the Photon Average Trajectories Method

    Directory of Open Access Journals (Sweden)

    Vladimir V. Lyubimov

    2007-01-01

    Full Text Available The possibility of improving the spatial resolution of diffuse optical tomograms reconstructed by the photon average trajectories (PAT) method is substantiated. The PAT method, recently presented by us, is based on the concept of an average statistical trajectory for the transfer of light energy, the photon average trajectory (PAT). The inverse problem of diffuse optical tomography is reduced to the solution of an integral equation with integration along a conditional PAT. As a result, the conventional algorithms of projection computed tomography can be used for fast reconstruction of diffuse optical images. The shortcoming of the PAT method is that it reconstructs images blurred by averaging over the spatial distributions of the photons that form the signal measured by the receiver. To improve the resolution, we apply a spatially variant blur model based on an interpolation of the spatially invariant point spread functions simulated for different small subregions of the image domain. Two iterative algorithms for solving a system of linear algebraic equations, the conjugate gradient algorithm for least-squares problems and the modified residual norm steepest descent algorithm, are used for deblurring. It is shown that a 27% gain in spatial resolution can be obtained.
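
    The first of the two deblurring solvers, conjugate gradients applied to the least-squares problem (CGLS), can be sketched as follows for a generic blur matrix A. The dense matrix here is a simplifying assumption; in practice A would be applied as an operator built from the interpolated point spread functions.

        import numpy as np

        def cgls(A, b, n_iter=100, tol=1e-8):
            """Conjugate gradient for the least-squares problem min ||A x - b||."""
            x = np.zeros(A.shape[1])
            r = b.copy()              # residual b - A x (x starts at zero)
            s = A.T @ r
            p = s.copy()
            gamma = s @ s
            for _ in range(n_iter):
                q = A @ p
                alpha = gamma / (q @ q)
                x += alpha * p
                r -= alpha * q
                s = A.T @ r
                gamma_new = s @ s
                if np.sqrt(gamma_new) < tol:
                    break
                p = s + (gamma_new / gamma) * p
                gamma = gamma_new
            return x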

  2. Reconstruction method with data from a multiple-site continuous-wave source for three-dimensional optical tomography.

    Science.gov (United States)

    Su, Jianzhong; Shan, Hua; Liu, Hanli; Klibanov, Michael V

    2006-10-01

    A method is presented for reconstruction of the optical absorption coefficient from transmission near-infrared data with a cw source. Unlike other available schemes, such as optimization or Newton's iterative method, this method resolves the inverse problem by solving a boundary value problem for a Volterra-type integro-differential equation. Numerical studies demonstrate that this technique has better-than-average stability with respect to the discrepancy between the initial guess and the actual unknown absorption coefficient. The method is particularly useful for reconstruction from the large data sets obtained from a CCD camera. Several numerical reconstruction examples are presented.

  3. A new multi-planar reconstruction method using voxel based beamforming for 3D ultrasound imaging

    Science.gov (United States)

    Ju, Hyunseok; Kang, Jinbum; Song, Ilseob; Yoo, Yangmo

    2015-03-01

    For multi-planar reconstruction in 3D ultrasound imaging, direct and separable 3D scan conversion (SC) have been used to transform ultrasound data acquired in the 3D polar coordinate system to the 3D Cartesian coordinate system. These 3D SC methods can visualize an arbitrary plane of 3D ultrasound volume data. However, they suffer from blurring and blocking artifacts due to resampling during SC. In this paper, a new multi-planar reconstruction method based on voxel-based beamforming (VBF) is proposed for reducing blurring and blocking artifacts. In VBF, unlike direct and separable 3D SC, each voxel on an arbitrary imaging plane is directly reconstructed by applying the focusing delay to radio-frequency (RF) data, so that the blurring and blocking artifacts can be removed. In the phantom study, the proposed VBF method showed higher contrast and less blurring compared to the separable and direct 3D SC methods. This result is consistent with the measured information entropy contrast (IEC) values, i.e., 98.9 vs. 42.0 vs. 47.9, respectively. In addition, the VBF and 3D SC methods were implemented on a high-end GPU using CUDA programming. The execution times for the VBF, separable and direct 3D SC methods are 1656.1 ms, 1633.3 ms and 1631.4 ms, respectively, all of which are I/O bound. These results indicate that the proposed VBF method can improve the image quality of 3D ultrasound B-mode imaging by removing the blurring and blocking artifacts associated with 3D scan conversion, and show the feasibility of pseudo-real-time operation.
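
    The core of voxel-based beamforming is computing, for each voxel on the requested plane, the round-trip time of flight to each transducer element and summing the correspondingly delayed RF samples. A minimal delay-and-sum sketch follows; the geometry, sampling rate, and sound speed are illustrative assumptions, not the paper's implementation.

        import numpy as np

        def vbf_voxel(rf, elem_pos, tx_pos, voxel, fs=40e6, c=1540.0):
            """Delay-and-sum value for one voxel. rf: (n_elem, n_samples)."""
            value = 0.0
            t_tx = np.linalg.norm(voxel - tx_pos) / c      # transmit time of flight
            for e, pos in enumerate(elem_pos):
                t_rx = np.linalg.norm(voxel - pos) / c     # receive time of flight
                idx = int(round((t_tx + t_rx) * fs))       # nearest RF sample
                if 0 <= idx < rf.shape[1]:
                    value += rf[e, idx]
            return value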

  4. The research on user behavior evaluation method for network state

    Science.gov (United States)

    Zhang, Chengyuan; Xu, Haishui

    2017-08-01

    Starting from the correlation between user behavior and the network's running state, this paper proposes a method for evaluating user behavior based on network state. Drawing on analysis and evaluation methods from other fields of study, we introduce the theory and tools of data mining. Using the network status information provided by the trusted network view, the user behavior data and the network state data are analysed. Finally, we construct user behavior evaluation indices and weights, and on this basis we can accurately quantify the degree to which the specific behaviors of different users influence changes in the network's running state, so as to provide a basis for user behavior control decisions.

  5. Evolutionary method for finding communities in bipartite networks

    Science.gov (United States)

    Zhan, Weihua; Zhang, Zhongzhi; Guan, Jihong; Zhou, Shuigeng

    2011-06-01

    An important step in unveiling the relation between network structure and dynamics defined on networks is to detect communities, and numerous methods have been developed separately to identify community structure in different classes of networks, such as unipartite networks, bipartite networks, and directed networks. Here, we show that the finding of communities in such networks can be unified in a general framework: detection of community structure in bipartite networks. Moreover, we propose an evolutionary method for efficiently identifying communities in bipartite networks. To this end, we show that both unipartite and directed networks can be represented as bipartite networks and that their modularity is completely consistent with that of bipartite networks, so the detection of modular structure on them can be reformulated as modularity maximization. To optimize the bipartite modularity, we develop a modified adaptive genetic algorithm (MAGA), which is shown to be especially efficient for community structure detection. The high efficiency of the MAGA is based on the following three improvements. First, we introduce a different measure for the informativeness of a locus instead of the standard deviation, which can exactly determine which loci mutate. This measure is the bias between the distribution of a locus over the current population and the uniform distribution of that locus, i.e., the Kullback-Leibler divergence between them (a sketch of this measure follows below). Second, we develop a reassignment technique for differentiating the informative state a locus has attained from the random state of the initial phase. Third, we present a modified mutation rule which, by incorporating related operations, can guarantee the convergence of the MAGA to the global optimum and can speed up the convergence process. Experimental results show that the MAGA outperforms existing methods in terms of modularity for both bipartite and unipartite networks.
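
    The locus informativeness measure, the Kullback-Leibler divergence between a locus's label distribution over the current population and the uniform distribution, can be sketched as follows; the population encoding (one column per locus, integer community labels) is an illustrative assumption.

        import numpy as np

        def locus_informativeness(population, locus, n_labels):
            """KL divergence between a locus's label distribution and uniform."""
            labels = population[:, locus]                  # one column = one locus
            counts = np.bincount(labels, minlength=n_labels)
            p = counts / counts.sum()
            u = 1.0 / n_labels
            nonzero = p > 0
            return np.sum(p[nonzero] * np.log(p[nonzero] / u))

        # Example: 6 individuals, labels at one locus drawn from 3 possible labels.
        pop = np.array([[0], [0], [1], [1], [1], [2]])
        print(locus_informativeness(pop, locus=0, n_labels=3))  # > 0: non-uniform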

  6. Reduction Method for Active Distribution Networks

    DEFF Research Database (Denmark)

    Raboni, Pietro; Chen, Zhe

    2013-01-01

    On-line security assessment is traditionally performed by Transmission System Operators at the transmission level, ignoring the effective response of distributed generators and small loads. On the other hand, the required computation time and amount of real-time data for including Distribution Net...... by comparing the results obtained in PSCAD® with the detailed network model and with the reduced one. Moreover, the control schemes of a wind turbine and a photovoltaic plant included in the detailed network model are described.

  7. Classification Method in Integrated Information Network Using Vector Image Comparison

    Directory of Open Access Journals (Sweden)

    Zhou Yuan

    2014-05-01

    Full Text Available A Wireless Integrated Information Network (WMN) consists of integrated information nodes that can gather data from their surroundings, such as images and voice. Transmitting this information requires substantial resources, which decreases the service time of the network. In this paper we present a Classification Approach based on Vector Image Comparison (VIC) for WMN that improves the service time of the network. Methods for sub-region selection and conversion are also proposed.

  8. Spectral Methods for Immunization of Large Networks

    Directory of Open Access Journals (Sweden)

    Muhammad Ahmad

    2017-11-01

    Full Text Available Given a network of nodes, minimizing the spread of a contagion using a limited budget is a well-studied problem with applications in network security, viral marketing, social networks, and public health. In real graphs, a virus may infect a node which in turn infects its neighbouring nodes, and this may trigger an epidemic in the whole graph. The goal thus is to select the best k nodes (budget constraint) to be immunized (vaccinated, screened, filtered) so that the remaining graph is less prone to the epidemic. It is known that the problem is, in all practical models, computationally intractable even for moderately sized graphs. In this paper we employ ideas from spectral graph theory to define the relevance and importance of nodes. Using novel graph-theoretic techniques, we then design an efficient approximation algorithm to immunize the graph. Theoretical guarantees on the running time of our algorithm show that it is more efficient than any other known solution in the literature. We test the performance of our algorithm on several real-world graphs. Experiments show that our algorithm scales well for large graphs and outperforms state-of-the-art algorithms both in quality (containment of the epidemic) and efficiency (runtime and space complexity).
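
    A common spectral heuristic in this family (a simplified stand-in, not the paper's exact algorithm) scores nodes by the leading eigenvector of the adjacency matrix, since removing high-scoring nodes most reduces the spectral radius that governs the epidemic threshold:

        import numpy as np

        def immunize_top_k(adj, k):
            """Pick k nodes via the leading eigenvector of a symmetric adjacency."""
            vals, vecs = np.linalg.eigh(adj)
            score = np.abs(vecs[:, -1])          # eigenvector of largest eigenvalue
            return np.argsort(score)[-k:][::-1]  # k highest-scoring nodes

        # Example: a star graph; the hub (node 0) should be chosen first.
        A = np.zeros((5, 5))
        A[0, 1:] = A[1:, 0] = 1
        print(immunize_top_k(A, k=1))  # -> [0]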

  9. Semigroup methods for evolution equations on networks

    CERN Document Server

    Mugnolo, Delio

    2014-01-01

    This concise text is based on a series of lectures held only a few years ago and originally intended as an introduction to known results on linear hyperbolic and parabolic equations. Yet the topic of differential equations on graphs, ramified spaces, and more general network-like objects has recently gained significant momentum and, well beyond the confines of mathematics, there is a lively interdisciplinary discourse on all aspects of so-called complex networks. Such network-like structures can be found in virtually all branches of science, engineering and the humanities, and future research thus calls for solid theoretical foundations. This book is specifically devoted to the study of evolution equations – i.e., of time-dependent differential equations such as the heat equation, the wave equation, or the Schrödinger equation (quantum graphs) – bearing in mind that the majority of the literature in the last ten years on the subject of differential equations on graphs has been devoted to ellip...

  10. Reconstruction of an integrated genome-scale co-expression network reveals key modules involved in lung adenocarcinoma.

    Science.gov (United States)

    Bidkhori, Gholamreza; Narimani, Zahra; Hosseini Ashtiani, Saman; Moeini, Ali; Nowzari-Dalini, Abbas; Masoudi-Nejad, Ali

    2013-01-01

    The goal of this study was to reconstruct a "genome-scale co-expression network" and find important modules in lung adenocarcinoma so that we could identify the genes involved in lung adenocarcinoma. We integrated gene mutation, GWAS, CGH, array-CGH and SNP array data in order to identify important genes and loci at genome scale. Afterwards, on the basis of the identified genes, a co-expression network was reconstructed from the co-expression data. The reconstructed network was named the "genome-scale co-expression network". As the next step, 23 key modules were disclosed through clustering. In this study a number of genes have been identified for the first time to be implicated in lung adenocarcinoma by analyzing the modules. The genes EGFR, PIK3CA, TAF15, XIAP, VAPB, Appl1, Rab5a, ARF4, CLPTM1L, SP4, ZNF124, LPP, FOXP1, SOX18, MSX2, NFE2L2, SMARCC1, TRA2B, CBX3, PRPF6, ATP6V1C1, MYBBP1A, MACF1, GRM2, TBXA2R, PRKAR2A, PTK2, PGF and MYO10 are among the genes that belong to modules 1 and 22. All these genes, being implicated in at least one of the phenomena of cell survival, proliferation and metastasis, have an over-expression pattern similar to that of EGFR. In a few modules, genes such as CCNA2 (Cyclin A2), CCNB2 (Cyclin B2), CDK1, CDK5, CDC27, CDCA5, CDCA8, ASPM, BUB1, KIF15, KIF2C, NEK2, NUSAP1, PRC1, SMC4, SYCE2, TFDP1, CDC42 and ARHGEF9 are present that play a crucial role in cell cycle progression. In addition to the mentioned genes, there are some other genes (e.g. DLGAP5, BIRC5, PSMD2, Src, TTK, SENP2, DOK2, FUS) in the modules.
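
    Reconstructing a co-expression network of this kind typically amounts to thresholding pairwise expression correlations. A minimal sketch follows; the random expression matrix and the cutoff are illustrative assumptions, not the study's data or parameters.

        import numpy as np

        def coexpression_network(expr, cutoff=0.8):
            """Adjacency from gene expression. expr: (n_genes, n_samples)."""
            corr = np.corrcoef(expr)           # gene-by-gene Pearson correlations
            adj = np.abs(corr) >= cutoff
            np.fill_diagonal(adj, False)       # no self-loops
            return adj

        rng = np.random.default_rng(1)
        expr = rng.standard_normal((50, 20))   # 50 genes, 20 samples
        print(coexpression_network(expr).sum() // 2, "co-expression edges")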

  11. Genome-scale reconstruction of the Streptococcus pyogenes M49 metabolic network reveals growth requirements and indicates potential drug targets.

    Science.gov (United States)

    Levering, Jennifer; Fiedler, Tomas; Sieg, Antje; van Grinsven, Koen W A; Hering, Silvio; Veith, Nadine; Olivier, Brett G; Klett, Lara; Hugenholtz, Jeroen; Teusink, Bas; Kreikemeyer, Bernd; Kummer, Ursula

    2016-08-20

    Genome-scale metabolic models comprise stoichiometric relations between metabolites, as well as associations between genes and metabolic reactions, and facilitate the analysis of metabolism. We computationally reconstructed the metabolic network of the lactic acid bacterium Streptococcus pyogenes M49. Initially, we based the reconstruction on genome annotations and existing, curated metabolic networks of Bacillus subtilis, Escherichia coli, Lactobacillus plantarum and Lactococcus lactis. This initial draft was manually curated, with the final reconstruction accounting for 480 genes associated with 576 reactions and 558 metabolites. In order to constrain the model further, we performed growth experiments of wild-type and arcA deletion strains of S. pyogenes M49 in a chemically defined medium and calculated nutrient uptake and production fluxes. We additionally performed amino acid auxotrophy experiments to test the consistency of the model. The established genome-scale model can be used not only to understand the growth requirements of the human pathogen S. pyogenes and to define optimal and suboptimal conditions, but also to describe differences and similarities between S. pyogenes and related lactic acid bacteria such as L. lactis, in order to find strategies to reduce the growth of the pathogen and propose drug targets. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. The Overcomplete Dictionary-Based Directional Estimation Model and Nonconvex Reconstruction Methods.

    Science.gov (United States)

    Lin, Leping; Liu, Fang; Jiao, Licheng; Yang, Shuyuan; Hao, Hongxia

    2018-03-01

    In this paper, a directional estimation model on an overcomplete dictionary is proposed, which bridges the compressed measurements of image blocks and the directional structures of the dictionary. Within the model, an analytical method is established to estimate the structure type of a block as smooth, single-oriented, or multi-oriented, and the structures of each type of block are described by structured subdictionaries. Based on the obtained estimates and the constraints on the sparse dictionaries, the original image is then estimated. To verify the model, nonconvex methods are designed for compressed sensing. Specifically, greedy pursuit-based methods are established to search the subdictionaries obtained by the model, which achieve better local structural estimation than methods without directional estimation. More importantly, a nonconvex image reconstruction method with direction-guided dictionaries and evolutionary searching strategies (NR_DG) is proposed, where the evolutionary searching strategies are carefully designed for each type of block based on the directional estimation. Experimental results show that the NR_DG method performs better than the available two-stage evolutionary reconstruction method.
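
    The greedy pursuit step can be illustrated with a minimal orthogonal matching pursuit (OMP) sketch over a generic dictionary; this is a generic stand-in, whereas the paper restricts the search to the direction-guided subdictionaries. Columns of D are assumed unit-norm.

        import numpy as np

        def omp(D, y, k):
            """Approximate y with k atoms of dictionary D (columns unit-norm)."""
            residual, support = y.copy(), []
            for _ in range(k):
                j = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
                support.append(j)
                coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
                residual = y - D[:, support] @ coef         # orthogonal update
            x = np.zeros(D.shape[1])
            x[support] = coef
            return x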

  13. a Method for the Reconstruction and Temporal Extension of Climatological Time Series

    Science.gov (United States)

    Valero, F.; Gonzalez, J. F.; Doblas, F. J.; García-Miguel, J. A.

    1996-02-01

    A method for the reconstruction and temporal extension of climatological time series is provided. The method combines several techniques, including harmonic analysis, seasonal weights, and the Durbin-Watson (DW) regression method. The DW method has been modified in this paper and is described in detail, because it represents a novel use of the original DW method. The method is applied to monthly means of daily wind-run data sets recorded at two historical observatories (the M series and the A series) within the Parque del Retiro in Madrid (Spain), covering different time periods with an overlapping period (1901-1919). The aim of the present study is to fill gaps and to construct a historical time series ranging from 1867 to 1992. The proposed model is developed on the 1906-1919 calibration period and validated over the 1901-1905 verification period, under the hypothesis of a constant ratio of variances. The verification results are almost as good as those for the calibration period. Hence, the M series was extended back to 1867, which results in the longest climatological wind-run data set in Spain. The reconstruction is also shown to be reliable.
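
    The Durbin-Watson statistic underlying the DW regression step tests for first-order autocorrelation in regression residuals. A minimal sketch follows; the plain least-squares fit and synthetic overlap data are illustrative simplifications of the paper's modified procedure.

        import numpy as np

        def durbin_watson(residuals):
            """DW statistic: ~2 means no first-order autocorrelation, <2 positive."""
            diff = np.diff(residuals)
            return (diff @ diff) / (residuals @ residuals)

        # Fit overlap-period A-series values to M-series values, test residuals.
        rng = np.random.default_rng(0)
        a = rng.standard_normal(100)
        m = 1.5 * a + 0.1 * rng.standard_normal(100)
        slope, intercept = np.polyfit(a, m, 1)
        print(durbin_watson(m - (slope * a + intercept)))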

  14. Improvement of coda phase detectability and reconstruction of global seismic data using frequency-wavenumber methods

    Science.gov (United States)

    Schneider, Simon; Thomas, Christine; Dokht, Ramin M. H.; Gu, Yu Jeffrey; Chen, Yunfeng

    2018-02-01

    Due to uneven earthquake source and receiver distributions, the ability to isolate weak signals from interfering phases and to reconstruct missing data is fundamental to improving the resolution of seismic imaging techniques. In this study, we introduce a modified frequency-wavenumber (fk) domain approach using a `Projection Onto Convex Sets' (POCS) algorithm. POCS takes advantage of the sparsity of the dominant energies of phase arrivals in the fk domain, which enables effective detection and reconstruction of weak seismic signals. Moreover, our algorithm utilizes the 2-D Fourier transform to perform noise removal, interpolation and weak-phase extraction. To improve the directional resolution of the reconstructed data, we introduce a band-stop 2-D Fourier filter to remove the energy of unwanted, interfering phases in the fk domain, which significantly increases the robustness of the signal of interest. The effectiveness and benefits of this method are clearly demonstrated using both simulated and actual broadband recordings of PP precursors from an array located in Tanzania. When used properly, this method could significantly enhance the resolution of weak crust and mantle seismic phases.
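
    A minimal POCS interpolation sketch for a 2D seismic section with missing traces follows; the linearly decaying threshold schedule and the mask are illustrative assumptions. Each iteration thresholds the fk spectrum to enforce sparsity, then re-inserts the observed traces.

        import numpy as np

        def pocs_interpolate(data, mask, n_iter=50):
            """POCS reconstruction. data: 2D section (t, x); mask: 1 if observed."""
            model = data.copy()
            spec0 = np.abs(np.fft.fft2(data)).max()
            for i in range(n_iter):
                spec = np.fft.fft2(model)
                thresh = spec0 * (1.0 - i / n_iter) * 0.9   # decaying threshold
                spec[np.abs(spec) < thresh] = 0             # keep dominant energy
                model = np.real(np.fft.ifft2(spec))
                model = mask * data + (1 - mask) * model    # honor observed traces
            return model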

  15. A novel reconstruction method for giant incisional hernia: Hybrid laparoscopic technique

    Directory of Open Access Journals (Sweden)

    G Ozturk

    2015-01-01

    Full Text Available Background and Objectives: Laparoscopic reconstruction of ventral hernia is a popular technique today. Patients with large defects present various difficulties for the laparoscopic approach. In this study, we aimed to present a new reconstruction technique that combines laparoscopic and open approaches in giant incisional hernias. Materials and Methods: Between January 2006 and August 2012, 28 patients operated on consecutively for incisional hernia with a defect size over 10 cm were included in this study and separated into two groups. Group 1 (n = 12) comprises patients operated with the standard laparoscopic approach, whereas group 2 (n = 16) denotes the laparoscopic technique combined with the open approach. Patients were evaluated in terms of age, gender, body mass index (BMI), mean operation time, length of hospital stay, surgical site infection (SSI) and recurrence rate. Results: There were 12 patients in group 1 and 16 patients in group 2. Mean length of hospital stay and SSI rates were similar in both groups. Postoperative seroma formation was observed in six patients in group 1 and in only one patient in group 2. Group 1 had one patient who suffered from recurrence, whereas group 2 had no recurrence. Discussion: The laparoscopic technique combined with the open approach may safely be used as an alternative method for reconstruction of giant incisional hernias.

  16. High-order noise analysis for low dose iterative image reconstruction methods: ASIR, IRIS, and MBAI

    Science.gov (United States)

    Do, Synho; Singh, Sarabjeet; Kalra, Mannudeep K.; Karl, W. Clem; Brady, Thomas J.; Pien, Homer

    2011-03-01

    Iterative reconstruction techniques (IRTs) have been shown to suppress noise significantly in low-dose CT imaging. However, medical doctors hesitate to accept this new technology because the visual impression of IRT images differs from that of full-dose filtered back-projection (FBP) images. The most common noise measurements, such as the mean and standard deviation of a homogeneous region in the image, do not provide sufficient characterization of noise statistics when the probability density function becomes non-Gaussian. In this study, we measure L-moments of the intensity values of images acquired at 10% of normal dose and reconstructed by the IRT methods of two state-of-the-art clinical scanners (i.e., GE HDCT and Siemens DSCT flash), keeping the dosage level identical for both. The high- and low-dose scans (i.e., 10% of the high dose) were acquired from each scanner, and L-moments of noise patches were calculated for the comparison.
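
    The first two sample L-moments of a noise patch can be computed from probability-weighted moments of the order statistics. A minimal sketch follows; the synthetic Gaussian patch is an illustrative assumption.

        import numpy as np

        def l_moments(x):
            """First two sample L-moments (L-location, L-scale) of a sample."""
            x = np.sort(np.asarray(x, dtype=float).ravel())
            n = len(x)
            i = np.arange(1, n + 1)
            b0 = x.mean()
            b1 = np.sum((i - 1) / (n - 1) * x) / n  # probability-weighted moment
            return b0, 2 * b1 - b0                  # l1, l2

        patch = np.random.default_rng(0).standard_normal(10000)
        print(l_moments(patch))  # l2 of a unit Gaussian is 1/sqrt(pi) ~ 0.5642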

  17. Diagrammatic perturbation methods in networks and sports ranking combinatorics

    Science.gov (United States)

    Park, Juyong

    2010-04-01

    Analytic and computational tools developed in statistical physics are being increasingly applied to the study of complex networks. Here we present recent developments in the diagrammatic perturbation methods for the exponential random graph models, and apply them to the combinatoric problem of determining the ranking of nodes in directed networks that represent pairwise competitions.

  18. Impact of reconstruction methods and pathological factors on survival after pancreaticoduodenectomy

    Directory of Open Access Journals (Sweden)

    Salah Binziad

    2013-01-01

    Full Text Available Background: Surgery remains the mainstay of therapy for pancreatic head (PH) and periampullary carcinoma (PC) and provides the only chance of cure. Improvements in surgical technique, increased surgical experience and advances in anesthesia, intensive care and parenteral nutrition have substantially decreased surgical complications and increased survival. We evaluate the effects of reconstruction type, complications and pathological factors on survival and quality of life. Materials and Methods: This is a prospective study to evaluate the impact of various reconstruction methods of the pancreatic remnant after pancreaticoduodenectomy and the pathological characteristics of PC patients over 3.5 years. Patient characteristics and descriptive analyses of the three reconstruction methods, either with or without a stent, were compared with the Chi-square test. Multivariate analysis was performed with the logistic regression analysis test and the multinomial logistic regression analysis test. Survival rate was analyzed using the Kaplan-Meier test. Results: Forty-one consecutive patients with PC were enrolled. There were 23 men (56.1%) and 18 women (43.9%), with a median age of 56 years (16 to 70 years). There were 24 cases of PH cancer, eight cases of PC, four cases of distal CBD cancer and five cases of duodenal carcinoma. Nine patients underwent duct-to-mucosa pancreaticojejunostomy (PJ), 17 patients underwent telescoping pancreaticojejunostomy (PJ) and 15 patients pancreaticogastrostomy (PG). The pancreatic duct was stented in 30 patients, while in 11 patients the duct was not stented. The duct-to-mucosa PJ caused significantly less leakage, but longer operative and reconstruction times. Telescoping PJ was associated with the shortest hospital stay. There were 5 postoperative mortalities, while postoperative morbidities included pancreatic fistula in 6 patients, delayed gastric emptying in 11, GI fistula in 3, wound infection in 12, burst abdomen in 6 and pulmonary infection in 2. Factors

  19. Research on EPR measurement methods of sucrose used in radiation accident dose reconstruction.

    Science.gov (United States)

    Ding, Yanqiu; Jiao, Ling; Zhang, Wenyi; Zhou, Li; Zhang, Xiaodong; Zhang, Liang'an

    2010-03-01

    Sucrose is a convenient, common, tissue-equivalent material suitable for electron paramagnetic resonance (EPR) dosimetry of ionising radiation. A number of publications have reported on the dosimetric properties of sucrose and its use in radiation accident dose reconstruction. However, previous studies did not specifically describe the measurement methods for sucrose by EPR. The aim of this work is to present the EPR measurement methods for sucrose in detail. In this regard, practical considerations of sample size, microwave power, modulation amplitude, EPR spectrum and signal stability are discussed.

  20. Flight Path Reconstruction and Parameter Estimation Using Output-Error Method

    Directory of Open Access Journals (Sweden)

    Benedito Carlos de Oliveira Maciel

    2006-01-01

    Full Text Available This work describes the application of the output-error method, using the Levenberg-Marquardt optimization algorithm, to the Flight Path Reconstruction (FPR) problem, which constitutes an important preliminary step towards aircraft parameter identification. The method is also applied to obtain the aerodynamic and control derivatives of a regional jet aircraft from flight test data with measurement noise and bias. Experimental results are reported, employing a real jet aircraft, with flight test data acquired by smart probes, inertial sensors (gyrometers and accelerometers) and Global Positioning System (GPS) receivers.
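
    In output-error estimation, model parameters are adjusted until simulated outputs match the measurements. The Levenberg-Marquardt step can be sketched with scipy; the first-order response model below is an illustrative stand-in for the aircraft dynamics, not the paper's model.

        import numpy as np
        from scipy.optimize import least_squares

        t = np.linspace(0, 10, 200)

        def simulate(theta, t):
            """Illustrative output model: y = a * (1 - exp(-t / tau))."""
            a, tau = theta
            return a * (1.0 - np.exp(-t / tau))

        rng = np.random.default_rng(0)
        measured = simulate([2.0, 1.5], t) + 0.05 * rng.standard_normal(t.size)

        # Output error = simulated output minus measurement; LM minimizes its norm.
        res = least_squares(lambda th: simulate(th, t) - measured,
                            x0=[1.0, 1.0], method="lm")
        print(res.x)  # estimates close to [2.0, 1.5]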

  1. Reconstruction of road defects and road roughness classification using vehicle responses with artificial neural networks simulation

    CSIR Research Space (South Africa)

    Ngwangwa, HM

    2010-04-01

    Full Text Available This paper proposes a procedure for utilizing measured responses on a vehicle to reconstruct road profiles and their attendant defects. The study seeks to capitalize on the popularization of vehicle information systems, where sensors...

  2. Quantitative methods for ecological network analysis.

    Science.gov (United States)

    Ulanowicz, Robert E

    2004-12-01

    The analysis of networks of ecological trophic transfers is a useful complement to simulation modeling in the quest for understanding whole-ecosystem dynamics. Trophic networks can be studied in quantitative and systematic fashion at several levels. Indirect relationships between any two individual taxa in an ecosystem, which often differ in either nature or magnitude from their direct influences, can be assayed using techniques from linear algebra. The same mathematics can also be employed to ascertain where along the trophic continuum any individual taxon is operating, or to map the web of connections into a virtual linear chain that summarizes trophodynamic performance by the system. Backtracking algorithms with pruning have been written which identify pathways for the recycle of materials and energy within the system. The pattern of such cycling often reveals modes of control or types of functions exhibited by various groups of taxa. The performance of the system as a whole at processing material and energy can be quantified using information theory. In particular, the complexity of process interactions can be parsed into separate terms that distinguish organized, efficient performance from the capacity for further development and recovery from disturbance. Finally, the sensitivities of the information-theoretic system indices appear to identify the dynamical bottlenecks in ecosystem functioning.
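
    The linear-algebra step mentioned here, assessing indirect relationships between taxa, is often done by summing powers of the matrix of direct flow fractions. A minimal sketch under that common convention follows; the toy flow matrix is an illustrative assumption.

        import numpy as np

        # Toy matrix g[i, j]: fraction of j's intake received directly from i.
        g = np.array([[0.0, 0.6, 0.1],
                      [0.0, 0.0, 0.7],
                      [0.0, 0.0, 0.0]])

        # Integral matrix sums direct plus all indirect trophic contributions:
        # N = I + G + G^2 + ... = (I - G)^(-1)
        n = np.linalg.inv(np.eye(3) - g)
        print(n[0, 2])  # total contribution of taxon 0 to taxon 2:
                        # 0.1 direct + 0.6 * 0.7 indirect = 0.52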

  3. Decision support systems and methods for complex networks

    Science.gov (United States)

    Huang, Zhenyu [Richland, WA; Wong, Pak Chung [Richland, WA; Ma, Jian [Richland, WA; Mackey, Patrick S [Richland, WA; Chen, Yousu [Richland, WA; Schneider, Kevin P [Seattle, WA

    2012-02-28

    Methods and systems for automated decision support in analyzing operation data from a complex network. Embodiments of the present invention utilize these algorithms and techniques not only to characterize the past and present condition of a complex network, but also to predict future conditions to help operators anticipate deteriorating and/or problem situations. In particular, embodiments of the present invention characterize network conditions from operation data using a state estimator. Contingency scenarios can then be generated based on those network conditions. For at least a portion of all of the contingency scenarios, risk indices are determined that describe the potential impact of each of those scenarios. Contingency scenarios with risk indices are presented visually as graphical representations in the context of a visual representation of the complex network. Analysis of the historical risk indices based on the graphical representations can then provide trends that allow for prediction of future network conditions.

  4. Network reconstruction based on proteomic data and prior knowledge of protein connectivity using graph theory.

    Science.gov (United States)

    Stavrakas, Vassilis; Melas, Ioannis N; Sakellaropoulos, Theodore; Alexopoulos, Leonidas G

    2015-01-01

    Modeling of signal transduction pathways is instrumental for understanding cells' function. Such modeling has long been pursued in order to represent the signaling events inside the cell's biochemical microenvironment accurately and in a way meaningful to scientists in the biological field. In this article, we propose a method to interrogate such pathways in order to produce cell-specific signaling models. We integrate available prior knowledge of protein connectivity, in the form of a Prior Knowledge Network (PKN), with phosphoproteomic data to construct predictive models of the protein connectivity of the interrogated cell type. Several computational methodologies focusing on pathways' logic modeling using optimization formulations or machine learning algorithms have been published on this front over the past few years. Here, we introduce a light and fast approach that uses a breadth-first traversal of the graph to identify the shortest pathways and score proteins in the PKN, fitting the dependencies extracted from the experimental design. The pathways are then combined through a heuristic formulation to produce a final topology, handling inconsistencies between the PKN and the experimental scenarios. Our results show that the algorithm we developed is efficient and accurate for the construction of medium- and large-scale signaling networks. We demonstrate the applicability of the proposed approach by interrogating a manually curated interaction graph model of EGF/TNFA stimulation against synthetic experimental data. To avoid the possibility of erroneous predictions, we performed a cross-validation analysis. Finally, we validate that the introduced approach generates predictive topologies comparable to the ILP formulation. Overall, an efficient approach based on graph theory is presented herein to interrogate protein-protein interaction networks and to provide meaningful biological insights.
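
    The core traversal, a breadth-first search that finds a shortest path from a stimulus to a measured protein in the prior knowledge network, can be sketched with a plain adjacency-list graph; the toy PKN below is illustrative, not a curated network.

        from collections import deque

        def bfs_shortest_path(pkn, source, target):
            """Shortest interaction path in a prior knowledge network."""
            queue = deque([[source]])
            visited = {source}
            while queue:
                path = queue.popleft()
                if path[-1] == target:
                    return path
                for nxt in pkn.get(path[-1], []):
                    if nxt not in visited:
                        visited.add(nxt)
                        queue.append(path + [nxt])
            return None

        pkn = {"EGF": ["EGFR"], "EGFR": ["RAS", "PI3K"],
               "RAS": ["ERK"], "PI3K": ["AKT"], "AKT": ["ERK"]}
        print(bfs_shortest_path(pkn, "EGF", "ERK"))  # ['EGF', 'EGFR', 'RAS', 'ERK']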

  5. Semantic Security Methods for Software-Defined Networks

    Directory of Open Access Journals (Sweden)

    Ekaterina Ju. Antoshina

    2017-01-01

    Full Text Available Software-defined networking is a promising technology for constructing communication networks in which network management is performed by software that configures the network devices. This contrasts with the traditional point of view, where network behaviour is updated by manually uploading configurations to the devices under control. The software controller allows dynamic routing configuration inside the network depending on the quality of service. However, there must be a proof that every network flow is secure; for example, we can define a security policy as follows: confidential nodes cannot send data to the public segment of the network. This paper shows how the problem can be solved by using a semantic security model. We propose a method that allows us to construct semantics capturing the security properties the network must satisfy. This involves a specification that states the allowed and forbidden network flows. The specification is then modeled as a decision tree, which may be reduced. We use the decision tree to construct semantics that capture the security requirements. The semantics can be implemented as a module of the controller software, so that the correctness of the control plane of the network can be ensured on-the-fly.

  6. A photoacoustic imaging reconstruction method based on directional total variation with adaptive directivity.

    Science.gov (United States)

    Wang, Jin; Zhang, Chen; Wang, Yuanyuan

    2017-05-30

    In photoacoustic tomography (PAT), total variation (TV) based iteration algorithms are reported to perform well in PAT image reconstruction. However, the classical TV-based algorithm fails to preserve the edges and texture details of the image because it is not sensitive to the direction of the image. Therefore, it is of great significance to develop a new PAT reconstruction algorithm that effectively overcomes this drawback of TV. In this paper, a PAT image reconstruction algorithm based on a directional total variation model with adaptive directivity (DDTV), which forms a weighted sum of the image gradients according to the spatially varying directivity pattern of the image, is proposed to overcome the shortcomings of TV. The orientation field of the image is adaptively estimated through a gradient-based approach. The image gradients are weighted at every pixel based on both its anisotropic direction and another parameter that evaluates the reliability of the estimated orientation field. An efficient algorithm is derived to solve the iteration problem associated with DDTV, with the directivity of the image adaptively updated at each iteration step. Several texture images with various directivity patterns are chosen as the phantoms for the numerical simulations. The 180-, 90- and 30-view circular scans are conducted. The results obtained show that the DDTV-based PAT reconstruction algorithm outperforms the filtered back-projection (FBP) and TV algorithms in the quality of the reconstructed images, with peak signal-to-noise ratios (PSNR) exceeding those of TV and FBP by about 10 and 18 dB, respectively, for all cases. The Shepp-Logan phantom is studied with further discussion of multimode scanning, convergence speed, robustness and universality aspects. In-vitro experiments are performed for both sparse-view circular scanning and linear scanning. The results further prove the effectiveness of the DDTV, which shows better results than the TV with sharper image edges and

  7. A path reconstruction method integrating dead-reckoning and position fixes applied to humpback whales.

    Science.gov (United States)

    Wensveen, Paul J; Thomas, Len; Miller, Patrick J O

    2015-01-01

    Detailed information about animal location and movement is often crucial in studies of natural behaviour and how animals respond to anthropogenic activities. Dead-reckoning can be used to infer such detailed information, but without additional positional data this method results in uncertainty that grows with time. Combining dead-reckoning with new Fastloc-GPS technology should provide good opportunities for reconstructing georeferenced fine-scale tracks, and should be particularly useful for marine animals that spend most of their time under water. We developed a computationally efficient, Bayesian state-space modelling technique to estimate humpback whale locations through time, integrating dead-reckoning using on-animal sensors with measurements of whale locations using on-animal Fastloc-GPS and visual observations. Positional observation models were based upon error measurements made during calibrations. High-resolution 3-dimensional movement tracks were produced for 13 whales using a simple process model in which errors caused by water current movements, non-location sensor errors, and other dead-reckoning errors were accumulated into a combined error term. Positional uncertainty quantified by the track reconstruction model was much greater for tracks with visual positions and few or no GPS positions, indicating a strong benefit to using Fastloc-GPS for track reconstruction. Compared to tracks derived only from position fixes, the inclusion of dead-reckoning data greatly improved the level of detail in the reconstructed tracks of humpback whales. Using cross-validation, a clear improvement in the predictability of out-of-set Fastloc-GPS data was observed compared to more conventional track reconstruction methods. Fastloc-GPS observation errors during calibrations were found to vary by number of GPS satellites received and by orthogonal dimension analysed; visual observation errors varied most by distance to the whale. By systematically accounting for the
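
    Dead-reckoning integrates heading and speed into a relative track whose accumulated drift is then corrected with the sparse position fixes. A minimal 2D sketch follows; the constant-speed samples and the linear distribution of the closure error between fixes are illustrative simplifications of the paper's Bayesian state-space model.

        import numpy as np

        def dead_reckon(heading_rad, speed, dt):
            """Integrate heading (rad) and speed (m/s) into a relative 2D track."""
            steps = np.column_stack([np.sin(heading_rad),
                                     np.cos(heading_rad)]) * (speed * dt)[:, None]
            return np.vstack([[0.0, 0.0], np.cumsum(steps, axis=0)])

        def anchor_to_fixes(track, i0, i1, fix0, fix1):
            """Linearly spread the closure error between two position fixes."""
            seg = track[i0:i1 + 1]
            drift = (fix1 - fix0) - (seg[-1] - seg[0])
            w = np.linspace(0.0, 1.0, len(seg))[:, None]
            return fix0 + (seg - seg[0]) + w * drift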

  8. A Temporoparietal Fascia Pocket Method in Elevation of Reconstructed Auricle for Microtia.

    Science.gov (United States)

    Kurabayashi, Takashi; Asato, Hirotaka; Suzuki, Yasutoshi; Kaji, Nobuyuki; Mitoma, Yoko

    2017-04-01

    In two-stage procedures for reconstruction of microtia, an axial flap of temporoparietal fascia is widely used to cover the costal cartilage blocks placed behind the framework. Although a temporoparietal fascia flap is undoubtedly reliable, use of the flap is associated with some morbidity and comes at the expense of the option for salvage surgery. The authors devised a simplified procedure for covering the cartilage blocks by creating a pocket in the postauricular temporoparietal fascia. In this procedure, the constructed auricle is elevated from the head superficially to the temporoparietal fascia, and a pocket is created under the temporoparietal fascia and the capsule of the auricle framework. Then, cartilage blocks are inserted into the pocket and fixed. A total of 38 reconstructed ears in 38 patients with microtia, ranging in age from 9 to 19 years, were elevated using the authors' method from 2002 to 2014 and followed for at least 5 months. To evaluate the long-term stability of the method, two-way analysis of variance (p < 0.05) was used to compare the two techniques (a temporoparietal fascia flap method versus a temporoparietal fascia pocket method) over long-term follow-up. Good projection of the auricles and creation of well-defined temporoauricular sulci were achieved. Furthermore, the sulci had a tendency to hold their steep profile over a long period. The temporoparietal fascia pocket method is simple but produces superior results. Moreover, pocket creation is less invasive and has the benefit of sparing temporoparietal fascia flap elevation. Therapeutic, IV.

  9. A model reduction method for biochemical reaction networks

    National Research Council Canada - National Science Library

    Rao, Shodhan; van der Schaft, Arjan; van Eunen, Karen; Bakker, Barbara; Jayawardhana, Bayu

    2014-01-01

    Background: In this paper we propose a model reduction method for biochemical reaction networks governed by a variety of reversible and irreversible enzyme kinetic rate laws, including reversible Michaelis-Menten and Hill kinetics...

  10. A method to reconstruct and apply 3D primary fluence for treatment delivery verification.

    Science.gov (United States)

    Liu, Shi; Mazur, Thomas R; Li, Harold; Curcuru, Austen; Green, Olga L; Sun, Baozhou; Mutic, Sasa; Yang, Deshan

    2017-01-01

    In this study, a method is reported to perform IMRT and VMAT treatment delivery verification using 3D volumetric primary beam fluences reconstructed directly from planned beam parameters and treatment delivery records. The goals of this paper are to demonstrate that 1) 3D beam fluences can be reconstructed efficiently, 2) quality assurance (QA) based on the reconstructed 3D fluences is capable of detecting additional treatment delivery errors, particularly for VMAT plans, beyond those identifiable by other existing treatment delivery verification methods, and 3) QA results based on 3D fluence calculation (3DFC) are correlated with QA results based on physical phantom measurements and radiation dose recalculations. Using beam parameters extracted from DICOM plan files and treatment delivery log files, 3D volumetric primary fluences are reconstructed by forward-projecting the beam apertures, defined by the MLC leaf positions and modulated by beam MU values, at all gantry angles using first-order ray tracing. Treatment delivery verifications are performed by comparing 3D fluences reconstructed using beam parameters in delivery log files against those reconstructed from treatment plans. Passing rates are then determined using both voxel intensity differences and a 3D gamma analysis. QA sensitivity to various sources of errors is defined as the observed differences in passing rates. Correlations between passing rates obtained from QA derived from both 3D fluence calculations and physical measurements are investigated prospectively using 20 clinical treatment plans with artificially introduced machine delivery errors. Studies with artificially introduced errors show that common treatment delivery problems including gantry angle errors, MU errors, jaw position errors, collimator rotation errors, and MLC leaf position errors were detectable at less than normal machine tolerances. The reported 3DFC QA method has greater sensitivity than measurement-based QA methods
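
    The passing-rate comparison between planned and log-file-reconstructed fluence volumes can be sketched as a voxelwise intensity check; the tolerance and the synthetic volumes below are illustrative assumptions, and the paper additionally applies a full 3D gamma analysis.

        import numpy as np

        def fluence_passing_rate(planned, delivered, tol=0.03):
            """Percent of voxels whose relative intensity difference <= tol."""
            scale = planned.max()
            diff = np.abs(delivered - planned) / scale   # global normalization
            return 100.0 * np.mean(diff <= tol)

        rng = np.random.default_rng(0)
        plan = rng.random((64, 64, 64))                   # planned fluence volume
        log = plan + 0.01 * rng.standard_normal(plan.shape)  # delivery deviation
        print(f"{fluence_passing_rate(plan, log):.1f}% voxels pass")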

  11. CdSe/ZnS quantum dot fluorescence spectra shape-based thermometry via neural network reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Munro, Troy [Multiscale Thermal-Physics Lab, Department of Mechanical and Aerospace Engineering, Utah Stat