WorldWideScience

Sample records for reconstruction network modeling

  1. Ekofisk chalk: core measurements, stochastic reconstruction, network modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Talukdar, Saifullah

    2002-07-01

    This dissertation deals with (1) experimental measurements of petrophysical, reservoir engineering and morphological properties of Ekofisk chalk, (2) numerical simulation of core flood experiments to analyze and improve relative permeability data, (3) stochastic reconstruction of chalk samples from limited morphological information, (4) extraction of pore space parameters from the reconstructed samples, development of a network model using the pore space information, and computation of petrophysical and reservoir engineering properties from the network model, and (5) development of 2D and 3D idealized fractured reservoir models and verification of the applicability of several widely used conventional upscaling techniques in fractured reservoir simulation. Experiments have been conducted on eight Ekofisk chalk samples, and porosity, absolute permeability, formation factor, oil-water relative permeability, capillary pressure and resistivity index were measured at laboratory conditions. Mercury porosimetry data and backscatter scanning electron microscope images have also been acquired for the samples. A numerical simulation technique involving history matching of the production profiles is employed to improve the relative permeability curves and to analyze hysteresis of the Ekofisk chalk samples. The technique proved to be a powerful tool for compensating for the uncertainties in experimental measurements. Porosity and correlation statistics obtained from backscatter scanning electron microscope images are used to reconstruct microstructures of chalk and particulate media. The reconstruction technique involves a simulated annealing algorithm, which can be constrained by an arbitrary number of morphological parameters. This flexibility of the algorithm is exploited to successfully reconstruct particulate media and chalk samples using more than one correlation function. A technique based on conditional simulated annealing has been introduced for exact reproduction of vuggy …
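
    As a rough illustration of the stochastic reconstruction step, the sketch below anneals a random binary 2D image toward a target two-point correlation function using porosity-preserving pixel swaps. It is a minimal stand-in for the dissertation's method, which constrains several morphological parameters on 3D samples; all parameter values here are illustrative.

```python
import numpy as np

def two_point_correlation(img, max_r):
    """Directional two-point probability S2(r): probability that two pixels
    a distance r apart along the x-axis are both solid."""
    return np.array([(img * np.roll(img, r, axis=1)).mean() for r in range(max_r)])

def anneal_reconstruction(target_s2, shape, porosity, steps=20000,
                          t0=1e-3, cooling=0.9995, rng=np.random.default_rng(0)):
    """Evolve a random binary image toward a target correlation function by
    swapping one solid and one void pixel per step (preserves porosity)."""
    img = (rng.random(shape) < porosity).astype(float)
    max_r = len(target_s2)
    energy = ((two_point_correlation(img, max_r) - target_s2) ** 2).sum()
    t = t0
    for _ in range(steps):
        solid = np.argwhere(img == 1)
        void = np.argwhere(img == 0)
        i = tuple(solid[rng.integers(len(solid))])
        j = tuple(void[rng.integers(len(void))])
        img[i], img[j] = 0.0, 1.0                      # trial swap
        new_energy = ((two_point_correlation(img, max_r) - target_s2) ** 2).sum()
        if new_energy < energy or rng.random() < np.exp((energy - new_energy) / t):
            energy = new_energy                        # accept the swap
        else:
            img[i], img[j] = 1.0, 0.0                  # revert
        t *= cooling                                   # geometric cooling schedule
    return img, energy
```

    This naive version recomputes the correlation function at every step; practical implementations update it incrementally after each swap.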

  2. Enhanced capital-asset pricing model for the reconstruction of bipartite financial networks

    Science.gov (United States)

    Squartini, Tiziano; Almog, Assaf; Caldarelli, Guido; van Lelyveld, Iman; Garlaschelli, Diego; Cimini, Giulio

    2017-09-01

    Reconstructing patterns of interconnections from partial information is one of the most important issues in the statistical physics of complex networks. A paramount example is provided by financial networks. In fact, the spreading and amplification of financial distress in capital markets are strongly affected by the interconnections among financial institutions. Yet, while the aggregate balance sheets of institutions are publicly disclosed, information on single positions is mostly confidential and, as such, unavailable. Standard approaches to reconstruct the network of financial interconnection produce unrealistically dense topologies, leading to a biased estimation of systemic risk. Moreover, reconstruction techniques are generally designed for monopartite networks of bilateral exposures between financial institutions, thus failing in reproducing bipartite networks of security holdings (e.g., investment portfolios). Here we propose a reconstruction method based on constrained entropy maximization, tailored for bipartite financial networks. Such a procedure enhances the traditional capital-asset pricing model (CAPM) and allows us to reproduce the correct topology of the network. We test this enhanced CAPM (ECAPM) method on a dataset, collected by the European Central Bank, of detailed security holdings of European institutional sectors over a period of six years (2009-2015). Our approach outperforms the traditional CAPM and the recently proposed maximum-entropy CAPM both in reproducing the network topology and in estimating systemic risk due to fire sales spillovers. In general, ECAPM can be applied to the whole class of weighted bipartite networks described by the fitness model.
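
    The core of this family of methods is a maximum-entropy link probability driven by node "fitnesses" (here, aggregate balance-sheet quantities), with a single parameter calibrated to the known overall number of links. A minimal sketch under those assumptions; the variable names are mine, not the paper's:

```python
import numpy as np

def calibrate_z(x, y, n_links, lo=1e-12, hi=1e12, iters=200):
    """Bisection (on a log scale) for z such that the expected number of
    links sum_ij p_ij, with p_ij = z*x_i*y_j / (1 + z*x_i*y_j), equals n_links."""
    for _ in range(iters):
        z = np.sqrt(lo * hi)
        p = z * np.outer(x, y)
        p /= 1.0 + p
        if p.sum() > n_links:
            hi = z
        else:
            lo = z
    return np.sqrt(lo * hi)

def sample_bipartite_network(x, y, n_links, rng=np.random.default_rng(1)):
    """Draw one bipartite adjacency matrix from the fitness-induced ensemble.
    x: aggregate exposures of institutions; y: aggregate sizes of securities."""
    z = calibrate_z(x, y, n_links)
    p = z * np.outer(x, y)
    p /= 1.0 + p
    return (rng.random(p.shape) < p).astype(int), p
```

    Weights can then be distributed over the sampled links CAPM-style, proportionally to products of the aggregate positions, which is roughly the enhancement the abstract refers to.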

  3. Craniofacial Reconstruction Evaluation by Geodesic Network

    Directory of Open Access Journals (Sweden)

    Junli Zhao

    2014-01-01

    Craniofacial reconstruction estimates an individual’s face model from its skull. It has widespread applications in forensic medicine, archeology, medical cosmetic surgery, and so forth. However, little attention has been paid to the evaluation of craniofacial reconstruction. This paper proposes an objective method to evaluate globally and locally the reconstructed craniofacial faces based on the geodesic network. Firstly, the geodesic networks of the reconstructed craniofacial face and the original face are built, respectively, by geodesics and isogeodesics, whose intersections are network vertices. Then, the absolute value of the correlation coefficient of the features of all corresponding geodesic network vertices between two models is taken as the holistic similarity, where the weighted average of the shape index values in a neighborhood is defined as the feature of each network vertex. Moreover, the geodesic network vertices of each model are divided into six subareas, that is, forehead, eyes, nose, mouth, cheeks, and chin, and the local similarity is measured for each subarea. Experiments using 100 pairs of reconstructed craniofacial faces and their corresponding original faces show that the evaluation by our method is roughly consistent with the subjective evaluation derived from thirty-five persons in five groups.
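
    The similarity computation itself reduces to a few lines once the geodesic-network features are available; a sketch assuming each face is summarized by one feature value (the neighborhood-averaged shape index) per network vertex, with vertices of the two models already in correspondence:

```python
import numpy as np

def holistic_similarity(feat_a, feat_b):
    """Absolute correlation of per-vertex features of two geodesic networks."""
    return abs(np.corrcoef(feat_a, feat_b)[0, 1])

def local_similarities(feat_a, feat_b, labels):
    """Per-region similarity; `labels` assigns each vertex to a subarea
    (forehead, eyes, nose, mouth, cheeks, chin)."""
    return {region: abs(np.corrcoef(feat_a[labels == region],
                                    feat_b[labels == region])[0, 1])
            for region in np.unique(labels)}
```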

  4. Craniofacial Reconstruction Evaluation by Geodesic Network

    OpenAIRE

    Zhao, Junli; Liu, Cuiting; Wu, Zhongke; Duan, Fuqing; Wang, Kang; Jia, Taorui; Liu, Quansheng

    2014-01-01

    Craniofacial reconstruction estimates an individual’s face model from its skull. It has widespread applications in forensic medicine, archeology, medical cosmetic surgery, and so forth. However, little attention has been paid to the evaluation of craniofacial reconstruction. This paper proposes an objective method to evaluate globally and locally the reconstructed craniofacial faces based on the geodesic network. Firstly, the geodesic networks of the reconstructed craniofacial face and the or...

  5. Reconstruction and validation of RefRec: a global model for the yeast molecular interaction network.

    Directory of Open Access Journals (Sweden)

    Tommi Aho

    2010-05-01

    Molecular interaction networks establish all cell biological processes. The networks are under intensive research, facilitated by new high-throughput measurement techniques for the detection, quantification, and characterization of molecules and their physical interactions. For the common model organism yeast Saccharomyces cerevisiae, public databases store a significant part of the accumulated information and, on the way to a better understanding of the cellular processes, there is a need to integrate this information into a consistent reconstruction of the molecular interaction network. This work presents and validates RefRec, the most comprehensive molecular interaction network reconstruction currently available for yeast. The reconstruction integrates protein synthesis pathways, a metabolic network, and a protein-protein interaction network from major biological databases. The core of the reconstruction is based on a reference object approach in which genes, transcripts, and proteins are identified using their primary sequences. This enables their unambiguous identification and non-redundant integration. The obtained total number of different molecular species and their connecting interactions is approximately 67,000. In order to demonstrate the capacity of RefRec for functional predictions, it was used for simulating gene knockout damage propagation in the molecular interaction network in approximately 590,000 experimentally validated mutant strains. Based on the simulation results, a statistical classifier was subsequently able to correctly predict the viability of most of the strains. The results also showed that the usage of different types of molecular species in the reconstruction is important for accurate phenotype prediction. In general, the findings demonstrate the benefits of global reconstructions of molecular interaction networks. With all the molecular species and their physical interactions explicitly modeled, our …

  6. Data-Driven Neural Network Model for Robust Reconstruction of Automobile Casting

    Science.gov (United States)

    Lin, Jinhua; Wang, Yanjie; Li, Xin; Wang, Lu

    2017-09-01

    In computer vision systems, it is a challenging task to robustly reconstruct the complex 3D geometries of automobile castings. 3D scanning data are usually corrupted by noise and limited by low scanning resolution; these effects normally lead to incomplete matching and drift. In order to solve these problems, a data-driven local geometric learning model is proposed to achieve robust reconstruction of automobile castings. To relieve the interference of sensor noise and to be compatible with incomplete scanning data, a 3D convolutional neural network is established to match the local geometric features of automobile castings. The proposed neural network combines the geometric feature representation with a correlation metric function to robustly match local correspondences. We use the truncated distance field (TDF) around a key point to represent the 3D surface of the casting geometry, so that the model can be directly embedded into 3D space to learn the geometric feature representation. Finally, training labels are automatically generated for deep learning based on an existing RGB-D reconstruction algorithm, which yields the same global key matching descriptor. The experimental results show that the matching accuracy of our network is 92.2% for automobile castings, and the closed-loop rate is about 74.0% when the matching tolerance threshold τ is 0.2. The matching descriptors performed well, retaining 81.6% matching accuracy at a 95% closed-loop rate. For sparse geometric castings with initial matching failure, the 3D matching object can be reconstructed robustly by training the key descriptors. Our method performs 3D reconstruction robustly for complex automobile castings.

  7. Recurrent neural network based hybrid model for reconstructing gene regulatory network.

    Science.gov (United States)

    Raza, Khalid; Alam, Mansaf

    2016-10-01

    One of the exciting problems in systems biology research is to decipher how the genome controls the development of complex biological systems. Gene regulatory networks (GRNs) help in the identification of regulatory interactions between genes and offer fruitful information about the functional role of individual genes in a cellular system. Discovering GRNs leads to a wide range of applications, including identification of disease-related pathways, provision of novel tentative drug targets, prediction of disease response, and assistance in diagnosing various diseases including cancer. Reconstruction of GRNs from available biological data is still an open problem. This paper proposes a recurrent neural network (RNN) based model of GRNs, hybridized with a generalized extended Kalman filter for weight update in the backpropagation-through-time training algorithm. The RNN is a complex neural network that offers a good trade-off between biological closeness and mathematical flexibility for modeling GRNs; it is also able to capture complex, non-linear and dynamic relationships among variables. Gene expression data are inherently noisy, and the Kalman filter performs well for estimation problems even on noisy data. Hence, we applied a non-linear version of the Kalman filter, known as the generalized extended Kalman filter, for weight updates during RNN training. The developed model has been tested on four benchmark networks: the DNA SOS repair network, the IRMA network, and two synthetic networks from the DREAM Challenge. A comparison of our results with other state-of-the-art techniques shows the superiority of our proposed model. Further, 5% Gaussian noise was added to the dataset, and the results of the proposed model show a negligible effect of the noise, demonstrating the noise tolerance capability of the model.
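
    The sketch below conveys the core idea of Kalman-filter weight updates for an RNN-style GRN model: the network weights are treated as the state of an extended Kalman filter and corrected from each observed expression vector. It is a simplified one-step discrete-time variant, not the paper's exact generalized-EKF-within-BPTT formulation; all hyperparameters are illustrative.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def ekf_train_grn(X, q=1e-5, r=1e-2, p0=1.0, epochs=20):
    """Fit x(t+1) ~ sigmoid(W x(t) + b) to expression data X of shape (T, n),
    treating theta = (W, b) as the EKF state (one filter pass per epoch)."""
    T, n = X.shape
    d = n * n + n                              # number of parameters
    theta = np.zeros(d)
    P = p0 * np.eye(d)                         # parameter covariance
    Q, R = q * np.eye(d), r * np.eye(n)        # process / observation noise
    for _ in range(epochs):
        for t in range(T - 1):
            W, b = theta[:n * n].reshape(n, n), theta[n * n:]
            h = sigmoid(W @ X[t] + b)          # predicted next expression
            # Jacobian of h w.r.t. theta: row i depends only on W[i, :] and b[i]
            H = np.zeros((n, d))
            g = h * (1.0 - h)
            for i in range(n):
                H[i, i * n:(i + 1) * n] = g[i] * X[t]
                H[i, n * n + i] = g[i]
            P = P + Q                          # predict
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.solve(S, np.eye(n))
            theta = theta + K @ (X[t + 1] - h) # correct from the observation
            P = (np.eye(d) - K @ H) @ P
    return theta[:n * n].reshape(n, n), theta[n * n:]
```

    The recovered weight matrix W is then thresholded to read off putative regulatory interactions.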

  8. Reconstruction of networks from one-step data by matching positions

    Science.gov (United States)

    Wu, Jianshe; Dang, Ni; Jiao, Yang

    2018-05-01

    Estimating the topology of a network from short time series data is a challenge. In this paper, a matching-positions method is developed to reconstruct the topology of a network from only one-step data. We consider a general network model of coupled agents, in which the phase transformation of each node is determined by its neighbors. From the phase transformation information from one step to the next, the connections of the tail vertices are reconstructed first by matching positions. Removing the already reconstructed vertices and repeatedly reconstructing the connections of tail vertices, the topology of the entire network is reconstructed. For sparse scale-free networks with more than ten thousand nodes, we almost obtain the actual topology using only the one-step data in simulations.

  9. Reconstructing Genetic Regulatory Networks Using Two-Step Algorithms with the Differential Equation Models of Neural Networks.

    Science.gov (United States)

    Chen, Chi-Kan

    2017-07-26

    The identification of genetic regulatory networks (GRNs) provides insights into complex cellular processes. A class of recurrent neural networks (RNNs) captures the dynamics of GRNs. Algorithms combining the RNN and machine learning schemes have been proposed to reconstruct small-scale GRNs using gene expression time series. We present new GRN reconstruction methods with neural networks. The RNN is extended to a class of recurrent multilayer perceptrons (RMLPs) with latent nodes. Our methods contain two steps: an edge rank assignment step and a network construction step. The former assigns ranks to all possible edges by a recursive procedure based on the estimated weights of wires of the RNN/RMLP (RE_RNN/RE_RMLP), and the latter constructs a network consisting of top-ranked edges under which the optimized RNN simulates the gene expression time series. Particle swarm optimization (PSO) is applied to optimize the parameters of the RNNs and RMLPs in a two-step algorithm. The proposed RE_RNN-RNN and RE_RMLP-RNN algorithms are tested on synthetic and experimental gene expression time series of small GRNs of about 10 genes. The experimental time series are from studies of yeast cell cycle regulated genes and E. coli DNA repair genes. The unstable estimation of an RNN using experimental time series with limited data points can lead to fairly arbitrary predicted GRNs. Our methods incorporate the RNN and RMLP into a two-step structure learning procedure. Results show that RE_RMLP, using an RMLP with a suitable number of latent nodes to reduce the parameter dimension, often yields more accurate edge ranks than RE_RNN using the regularized RNN on short simulated time series. Combining, by a weighted majority voting rule, the networks derived by RE_RMLP-RNN using different numbers of latent nodes in step one to infer the GRN, the method performs consistently and outperforms published algorithms for GRN reconstruction on most benchmark time series. The framework of two …

  10. Strategy on energy saving reconstruction of distribution networks based on life cycle cost

    Science.gov (United States)

    Chen, Xiaofei; Qiu, Zejing; Xu, Zhaoyang; Xiao, Chupeng

    2017-08-01

    Because the funds for actual distribution network reconstruction projects are often limited, a cost-benefit model and a decision-making method are crucial for distribution network energy-saving reconstruction projects. From the perspective of life cycle cost (LCC), the research life cycle is first determined for the energy-saving reconstruction of distribution networks with multiple devices. Then, a new life cycle cost-benefit model for energy-saving reconstruction of distribution networks is developed, in which the modification schemes include distribution transformer replacement, line replacement and reactive power compensation. For the operation loss cost and maintenance cost, an operation cost model considering the influence of seasonal load characteristics and a segmented maintenance cost model for transformers are proposed. Finally, aiming at the highest energy-saving profit per unit of LCC, a decision-making method is developed that also considers financial and technical constraints. The model and method are applied to a real distribution network reconstruction, and the results prove that they are effective.
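
    In outline, the decision metric is discounted loss-cost savings per unit of life cycle cost. A toy version of that arithmetic; the discount rate, horizon and dictionary keys are placeholders of mine, not values from the paper:

```python
def lcc(investment, annual_loss_cost, annual_maintenance, years, rate=0.08):
    """Present-value life cycle cost of one scheme: up-front investment plus
    discounted operation-loss and maintenance costs over the study horizon."""
    pv = sum((annual_loss_cost + annual_maintenance) / (1 + rate) ** y
             for y in range(1, years + 1))
    return investment + pv

def saving_profit_per_lcc(base, scheme):
    """Rank schemes by discounted loss-cost saving per unit of LCC, mirroring
    the 'energy saving profit per LCC' objective described above."""
    saving = (lcc(0, base["loss"], base["maint"], base["years"])
              - lcc(0, scheme["loss"], scheme["maint"], scheme["years"]))
    return saving / lcc(scheme["capex"], scheme["loss"],
                        scheme["maint"], scheme["years"])

# Illustrative comparison of a transformer-replacement scheme with the status quo:
base = {"loss": 120e3, "maint": 10e3, "years": 20}
scheme = {"capex": 300e3, "loss": 70e3, "maint": 8e3, "years": 20}
print(saving_profit_per_lcc(base, scheme))
```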

  11. Exploring Normalization and Network Reconstruction Methods using In Silico and In Vivo Models

    Science.gov (United States)

    Abstract: Lessons learned from the recent DREAM competitions include: The search for the best network reconstruction method continues, and we need more complete datasets with ground truth from more complex organisms. It has become obvious that the network reconstruction methods t...

  12. Stability indicators in network reconstruction.

    Directory of Open Access Journals (Sweden)

    Michele Filosi

    The number of available algorithms to infer a biological network from a dataset of high-throughput measurements is overwhelming and keeps growing. However, evaluating their performance is unfeasible unless a 'gold standard' is available to measure how close the reconstructed network is to the ground truth. One measure of this is the stability of these predictions to data resampling approaches. We introduce NetSI, a family of Network Stability Indicators, to assess quantitatively the stability of a reconstructed network in terms of inference variability due to data subsampling. In order to evaluate network stability, the main NetSI methods use a global/local network metric in combination with a resampling (bootstrap or cross-validation) procedure. In addition, we provide two normalized variability scores over data resampling to measure edge weight stability and node degree stability, and then introduce a stability ranking for edges and nodes. A complete implementation of the NetSI indicators, including the Hamming-Ipsen-Mikhailov (HIM) network distance adopted in this paper, is available with the R package nettools. We demonstrate the use of the NetSI family by measuring network stability on four datasets against alternative network reconstruction methods. First, the effect of sample size on the stability of inferred networks is studied in a gold-standard framework on yeast-like data from the GeneNetWeaver simulator. We also consider the impact of varying modularity on a set of structurally different networks (50 nodes, from 2 to 10 modules), and then of complex feature covariance structure, showing the different behaviours of standard reconstruction methods based on Pearson correlation, the Maximum Information Coefficient (MIC) and a False Discovery Rate (FDR) strategy. Finally, we demonstrate a strong combined effect of different reconstruction methods and phenotype subgroups on a hepatocellular carcinoma miRNA microarray dataset (240 subjects), and we …
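
    A stripped-down version of the resampling idea: re-infer the network on bootstrap resamples and track how often each edge reappears. NetSI itself works with global/local network distances such as HIM rather than this simple edge frequency, so treat the sketch as illustrative only; the correlation threshold is arbitrary.

```python
import numpy as np

def infer_network(data, thr=0.3):
    """Toy inference: thresholded absolute Pearson correlation
    (samples in rows, variables/genes in columns)."""
    A = np.abs(np.corrcoef(data, rowvar=False))
    np.fill_diagonal(A, 0.0)
    return (A > thr).astype(int)

def edge_stability(data, n_boot=100, rng=np.random.default_rng(2)):
    """Frequency with which each edge reappears across bootstrap resamples;
    the spread of these frequencies is a crude variability indicator."""
    n = data.shape[0]
    freq = np.zeros((data.shape[1], data.shape[1]))
    for _ in range(n_boot):
        resample = data[rng.integers(n, size=n)]   # rows drawn with replacement
        freq += infer_network(resample)
    return freq / n_boot
```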

  13. Enhanced reconstruction of weighted networks from strengths and degrees

    International Nuclear Information System (INIS)

    Mastrandrea, Rossana; Fagiolo, Giorgio; Squartini, Tiziano; Garlaschelli, Diego

    2014-01-01

    Network topology plays a key role in many phenomena, from the spreading of diseases to that of financial crises. Whenever the whole structure of a network is unknown, one must resort to reconstruction methods that identify the least biased ensemble of networks consistent with the partial information available. A challenging case, frequently encountered due to privacy issues in the analysis of interbank flows and Big Data, is when there is only local (node-specific) aggregate information available. For binary networks, the relevant ensemble is one where the degree (number of links) of each node is constrained to its observed value. However, for weighted networks the problem is much more complicated. While the naïve approach prescribes to constrain the strengths (total link weights) of all nodes, recent counter-intuitive results suggest that in weighted networks the degrees are often more informative than the strengths. This implies that the reconstruction of weighted networks would be significantly enhanced by the specification of both strengths and degrees, a computationally hard and bias-prone procedure. Here we solve this problem by introducing an analytical and unbiased maximum-entropy method that works in the shortest possible time and does not require the explicit generation of reconstructed samples. We consider several real-world examples and show that, while the strengths alone give poor results, the additional knowledge of the degrees yields accurately reconstructed networks. Information-theoretic criteria rigorously confirm that the degree sequence, as soon as it is non-trivial, is irreducible to the strength sequence. Our results have strong implications for the analysis of motifs and communities and whenever the reconstructed ensemble is required as a null model to detect higher-order patterns.
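
    For reference, the ensemble equations usually quoted for this "enhanced configuration model", with a parameter x_i fixing node i's degree and y_i its strength; the notation is mine and may differ from the paper's:

```latex
% Connection probability and expected weight in the enhanced configuration
% model; (x_i, y_i) are solved from <k_i> = k_i^* and <s_i> = s_i^*.
p_{ij} = \frac{x_i x_j \, y_i y_j}{1 - y_i y_j + x_i x_j \, y_i y_j},
\qquad
\langle w_{ij} \rangle = \frac{p_{ij}}{1 - y_i y_j}.
```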

  14. Tomographic image reconstruction using Artificial Neural Networks

    International Nuclear Information System (INIS)

    Paschalis, P.; Giokaris, N.D.; Karabarbounis, A.; Loudos, G.K.; Maintas, D.; Papanicolas, C.N.; Spanoudaki, V.; Tsoumpas, Ch.; Stiliaris, E.

    2004-01-01

    A new image reconstruction technique based on the use of an Artificial Neural Network (ANN) is presented. The most crucial factors in designing such a reconstruction system are the network architecture and the number of input projections needed to reconstruct the image. Although the training phase requires a large number of input samples and considerable CPU time, the trained network is characterized by simplicity and quick response. The performance of this ANN is tested using several image patterns. It is intended to be used together with a phantom rotating table and the γ-camera of IASA for SPECT image reconstruction.
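
    A minimal sketch of the kind of network involved, mapping a flattened set of projections to a flattened image. The layer sizes and training data are placeholders of mine; the abstract does not specify the architecture used with the IASA γ-camera.

```python
import torch
from torch import nn

# Toy regressor: flattened sinogram (n_angles x n_bins) -> flattened n x n image.
n_angles, n_bins, n = 16, 32, 32
net = nn.Sequential(
    nn.Linear(n_angles * n_bins, 1024), nn.ReLU(),
    nn.Linear(1024, n * n), nn.Sigmoid(),   # pixel intensities in [0, 1]
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(sinograms, images):
    """One gradient step on a batch of (projection, phantom) pairs."""
    opt.zero_grad()
    loss = loss_fn(net(sinograms), images)
    loss.backward()
    opt.step()
    return loss.item()
```

    After training on many phantom pairs, a single forward pass reconstructs an image, which is the "quick response" property the abstract emphasizes.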

  15. Yeast 5 – an expanded reconstruction of the Saccharomyces cerevisiae metabolic network

    Directory of Open Access Journals (Sweden)

    Heavner Benjamin D

    2012-06-01

    Background: Efforts to improve the computational reconstruction of the Saccharomyces cerevisiae biochemical reaction network and to refine the stoichiometrically constrained metabolic models that can be derived from such a reconstruction have continued since the first stoichiometrically constrained yeast genome-scale metabolic model was published in 2003. Continuing this ongoing process, we have constructed an update to the Yeast Consensus Reconstruction, Yeast 5. The Yeast Consensus Reconstruction is a product of efforts to forge a community-based reconstruction emphasizing standards compliance and biochemical accuracy via evidence-based selection of reactions. It draws upon models published by a variety of independent research groups as well as information obtained from biochemical databases and primary literature. Results: Yeast 5 refines the biochemical reactions included in the reconstruction, particularly reactions involved in sphingolipid metabolism; updates gene-reaction annotations; and emphasizes the distinction between reconstruction and stoichiometrically constrained model. Although it was not a primary goal, this update also improves the accuracy of model predictions of viability and auxotrophy phenotypes and increases the number of epistatic interactions. This update maintains an emphasis on standards compliance, unambiguous metabolite naming, and computer-readable annotations available through a structured document format. Additionally, we have developed MATLAB scripts to evaluate the model’s predictive accuracy and to demonstrate basic model applications such as simulating aerobic and anaerobic growth. These scripts, which provide an independent tool for evaluating the performance of various stoichiometrically constrained yeast metabolic models using flux balance analysis, are included as Additional files 1, 2 and 3 (Additional file 1: function testYeastModel.m; Additional file 2: function modelToReconstruction …)

  16. Enhanced capital-asset pricing model for the reconstruction of bipartite financial networks

    NARCIS (Netherlands)

    Squartini, Tiziano; Almog, Assaf; Caldarelli, Guido; Van Lelyveld, Iman; Garlaschelli, Diego; Cimini, Giulio

    2017-01-01

    Reconstructing patterns of interconnections from partial information is one of the most important issues in the statistical physics of complex networks. A paramount example is provided by financial networks. In fact, the spreading and amplification of financial distress in capital markets are …

  17. Reconstruction of coupling architecture of neural field networks from vector time series

    Science.gov (United States)

    Sysoev, Ilya V.; Ponomarenko, Vladimir I.; Pikovsky, Arkady

    2018-04-01

    We propose a method of reconstruction of the network coupling matrix for a basic voltage-model of the neural field dynamics. Assuming that the multivariate time series of observations from all nodes are available, we describe a technique to find coupling constants which is unbiased in the limit of long observations. Furthermore, the method is generalized for reconstruction of networks with time-delayed coupling, including the reconstruction of unknown time delays. The approach is compared with other recently proposed techniques.
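
    As a linearized caricature of the task, the coupling matrix can be estimated by least squares from the observed multivariate time series; the paper's estimator for the full voltage model, and its extension to time-delayed coupling, is more elaborate. Model form and names below are my own.

```python
import numpy as np

def reconstruct_coupling(X, dt=1.0):
    """Least-squares estimate of K in dx_i/dt ≈ -x_i + sum_j K_ij x_j,
    a linear stand-in for the voltage-model dynamics.
    X has shape (T, n): observations from all n nodes at T time points."""
    dXdt = (X[1:] - X[:-1]) / dt           # finite-difference derivatives
    target = dXdt + X[:-1]                  # isolate the coupling term
    K, *_ = np.linalg.lstsq(X[:-1], target, rcond=None)
    return K.T                              # K[i, j]: influence of node j on node i
```

    Consistent with the abstract's claim, this kind of estimator is unbiased in the limit of long observations provided the assumed model matches the dynamics.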

  18. Reconstruction of network topology using status-time-series data

    Science.gov (United States)

    Pandey, Pradumn Kumar; Badarla, Venkataramana

    2018-01-01

    Uncovering the heterogeneous connection pattern of a networked system from available status-time-series (STS) data of a dynamical process on the network is of great interest in network science and is known as a reverse engineering problem. Dynamical processes on a network are affected by the structure of the network. The dependency between the diffusion dynamics and the structure of the network can be utilized to retrieve the connection pattern from the diffusion data, and information about the network structure can help to devise control of the dynamics on the network. In this paper, we consider the problem of network reconstruction from available STS data using matrix analysis. The proposed method of network reconstruction is tested successfully under susceptible-infected-susceptible (SIS) diffusion dynamics on real-world and computer-generated benchmark networks. The high accuracy and efficiency of the proposed reconstruction procedure define the novelty of the method. Our proposed method outperforms the compressed sensing theory (CST) based method of network reconstruction using STS data. Further, the same procedure is applied to weighted networks, where the ordering of the edges is identified with high accuracy.

  19. Reconstruction of Micropattern Detector Signals using Convolutional Neural Networks

    Science.gov (United States)

    Flekova, L.; Schott, M.

    2017-10-01

    Micropattern gaseous detector (MPGD) technologies, such as GEMs or MicroMegas, are particularly suitable for precision tracking and triggering in high-rate environments. Given their relatively low production costs, MPGDs are an exemplary candidate for the next generation of particle detectors. Having acknowledged these advantages, both the ATLAS and CMS collaborations at the LHC are exploiting these new technologies for their detector upgrade programs in the coming years. When MPGDs are utilized for triggering purposes, the measured signals need to be precisely reconstructed within less than 200 ns, which can be achieved by the use of FPGAs. In this work, we present a novel approach to identify reconstructed signals, their timing and the corresponding spatial position on the detector. In particular, we study the effect of noise and dead readout strips on the reconstruction performance. Our approach leverages the potential of convolutional neural networks (CNNs), which have recently demonstrated outstanding performance in a range of modeling tasks. The proposed CNN architecture is designed simply enough that it can be implemented directly on an FPGA and thus provide precise information on reconstructed signals already at the trigger level.

  20. Integration of expression data in genome-scale metabolic network reconstructions

    Directory of Open Access Journals (Sweden)

    Anna S. Blazier

    2012-08-01

    With the advent of high-throughput technologies, the field of systems biology has amassed an abundance of omics data, quantifying thousands of cellular components across a variety of scales, ranging from mRNA transcript levels to metabolite quantities. Methods are needed not only to integrate these omics data but also to use them to heighten the predictive capabilities of computational models. Several recent studies have successfully demonstrated how flux balance analysis (FBA), a constraint-based modeling approach, can be used to integrate transcriptomic data into genome-scale metabolic network reconstructions to generate predictive computational models. In this review, we summarize such FBA-based methods for integrating expression data into genome-scale metabolic network reconstructions, highlighting their advantages as well as their limitations.
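
    The constraint-based core that such methods share is plain FBA, sketched below with scipy. The second helper shows, very loosely, how expression data can tighten reaction bounds; published methods such as GIMME or iMAT are considerably more principled, and the threshold and shrink factor here are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def fba(S, lb, ub, objective_index):
    """Maximize one flux (e.g. biomass) subject to steady state S v = 0
    and flux bounds. S: stoichiometric matrix (metabolites x reactions)."""
    c = np.zeros(S.shape[1])
    c[objective_index] = -1.0                       # linprog minimizes
    res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]),
                  bounds=list(zip(lb, ub)), method="highs")
    return res.x, -res.fun

def constrain_by_expression(lb, ub, expr, thr, frac=0.1):
    """Shrink the flux bounds of reactions whose associated expression falls
    below thr - a crude stand-in for expression-integration methods."""
    scale = np.where(expr < thr, frac, 1.0)
    return lb * scale, ub * scale
```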

  1. Quartet-net: a quartet-based method to reconstruct phylogenetic networks.

    Science.gov (United States)

    Yang, Jialiang; Grünewald, Stefan; Wan, Xiu-Feng

    2013-05-01

    Phylogenetic networks can model reticulate evolutionary events such as hybridization, recombination, and horizontal gene transfer. However, reconstructing such networks is not trivial. Popular character-based methods are computationally inefficient, whereas distance-based methods cannot guarantee reconstruction accuracy because pairwise genetic distances only reflect partial information about a reticulate phylogeny. To balance accuracy and computational efficiency, here we introduce a quartet-based method to construct a phylogenetic network from a multiple sequence alignment. Unlike distances that only reflect the relationship between a pair of taxa, quartets contain information on the relationships among four taxa; these quartets provide adequate capacity to infer a more accurate phylogenetic network. In applications to simulated and biological data sets, we demonstrate that this novel method is robust and effective in reconstructing reticulate evolutionary events and it has the potential to infer more accurate phylogenetic distances than other conventional phylogenetic network construction methods such as Neighbor-Joining, Neighbor-Net, and Split Decomposition. This method can be used in constructing phylogenetic networks from simple evolutionary events involving a few reticulate events to complex evolutionary histories involving a large number of reticulate events. A software called "Quartet-Net" is implemented and available at http://sysbio.cvm.msstate.edu/QuartetNet/.

  2. Reconstructing the Hopfield network as an inverse Ising problem

    International Nuclear Information System (INIS)

    Huang Haiping

    2010-01-01

    We test four fast mean-field-type algorithms on Hopfield networks as an inverse Ising problem. The equilibrium behavior of Hopfield networks is simulated through Glauber dynamics. In the low-temperature regime, the simulated annealing technique is adopted. Although the performances of these network reconstruction algorithms on simulated networks of spiking neurons have been extensively studied recently, a corresponding analysis of Hopfield networks has been lacking so far. For the Hopfield network, we find that in the retrieval phase, favored when the network recalls one of the stored patterns, all the reconstruction algorithms fail to extract the interactions within the desired accuracy; the same failure occurs in the spin-glass phase, where spurious minima show up. In the paramagnetic phase, albeit unfavored for the retrieval dynamics, the algorithms work well to reconstruct the network itself. This implies that, as an inverse problem, the paramagnetic phase is conversely useful for reconstructing the network, while the retrieval phase loses all the information about interactions in the network except for the case where only one pattern is stored. The performances of the algorithms are studied with respect to the system size, memory load, and temperature; sample-to-sample fluctuations are also considered.
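
    The simplest of the mean-field reconstruction algorithms tested in such studies is the naive mean-field approximation, which inverts the measured correlation matrix. Consistent with the abstract, it behaves well on paramagnetic-phase samples and degrades in the retrieval and spin-glass phases. A sketch:

```python
import numpy as np

def nmf_couplings(samples):
    """Naive mean-field inverse Ising: J ≈ -(C^{-1}) off-diagonal, where C is
    the connected correlation matrix of sampled +/-1 spin configurations
    (one configuration per row)."""
    m = samples.mean(axis=0)                        # magnetizations
    C = samples.T @ samples / len(samples) - np.outer(m, m)
    J = -np.linalg.inv(C)                           # mean-field coupling estimate
    np.fill_diagonal(J, 0.0)                        # no self-couplings
    return J
```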

  3. Computing autocatalytic sets to unravel inconsistencies in metabolic network reconstructions

    DEFF Research Database (Denmark)

    Schmidt, R.; Waschina, S.; Boettger-Schmidt, D.

    2015-01-01

    … by inherent inconsistencies and gaps. RESULTS: Here we present a novel method to validate metabolic network reconstructions based on the concept of autocatalytic sets. Autocatalytic sets correspond to collections of metabolites that, besides enzymes and a growth medium, are required to produce all biomass components in a metabolic model. These autocatalytic sets are well-conserved across all domains of life, and their identification in specific genome-scale reconstructions allows us to draw conclusions about potential inconsistencies in these models. The method is capable of detecting inconsistencies, which … Thus, the method we report represents a powerful tool to identify inconsistencies in large-scale metabolic networks. AVAILABILITY AND IMPLEMENTATION: The method is available as source code at http://users.minet.uni-jena.de/~m3kach/ASBIG/ASBIG.zip. CONTACT: christoph.kaleta@uni-jena.de. SUPPLEMENTARY …

  4. Reconstructible phylogenetic networks: do not distinguish the indistinguishable.

    Science.gov (United States)

    Pardi, Fabio; Scornavacca, Celine

    2015-04-01

    Phylogenetic networks represent the evolution of organisms that have undergone reticulate events, such as recombination, hybrid speciation or lateral gene transfer. An important way to interpret a phylogenetic network is in terms of the trees it displays, which represent all the possible histories of the characters carried by the organisms in the network. Interestingly, however, different networks may display exactly the same set of trees, an observation that poses a problem for network reconstruction: from the perspective of many inference methods such networks are "indistinguishable". This is true for all methods that evaluate a phylogenetic network solely on the basis of how well the displayed trees fit the available data, including all methods based on input data consisting of clades, triples, quartets, or trees with any number of taxa, and also sequence-based approaches such as popular formalisations of maximum parsimony and maximum likelihood for networks. This identifiability problem is partially solved by accounting for branch lengths, although this merely reduces the frequency of the problem. Here we propose that network inference methods should only attempt to reconstruct what they can uniquely identify. To this end, we introduce a novel definition of what constitutes a uniquely reconstructible network. For any given set of indistinguishable networks, we define a canonical network that, under mild assumptions, is unique and thus representative of the entire set. Given data that underwent reticulate evolution, only the canonical form of the underlying phylogenetic network can be uniquely reconstructed. While on the methodological side this will imply a drastic reduction of the solution space in network inference, for the study of reticulate evolution this is a fundamental limitation that will require an important change of perspective when interpreting phylogenetic networks.

  5. Reconstructible phylogenetic networks: do not distinguish the indistinguishable.

    Directory of Open Access Journals (Sweden)

    Fabio Pardi

    2015-04-01

    Phylogenetic networks represent the evolution of organisms that have undergone reticulate events, such as recombination, hybrid speciation or lateral gene transfer. An important way to interpret a phylogenetic network is in terms of the trees it displays, which represent all the possible histories of the characters carried by the organisms in the network. Interestingly, however, different networks may display exactly the same set of trees, an observation that poses a problem for network reconstruction: from the perspective of many inference methods such networks are "indistinguishable". This is true for all methods that evaluate a phylogenetic network solely on the basis of how well the displayed trees fit the available data, including all methods based on input data consisting of clades, triples, quartets, or trees with any number of taxa, and also sequence-based approaches such as popular formalisations of maximum parsimony and maximum likelihood for networks. This identifiability problem is partially solved by accounting for branch lengths, although this merely reduces the frequency of the problem. Here we propose that network inference methods should only attempt to reconstruct what they can uniquely identify. To this end, we introduce a novel definition of what constitutes a uniquely reconstructible network. For any given set of indistinguishable networks, we define a canonical network that, under mild assumptions, is unique and thus representative of the entire set. Given data that underwent reticulate evolution, only the canonical form of the underlying phylogenetic network can be uniquely reconstructed. While on the methodological side this will imply a drastic reduction of the solution space in network inference, for the study of reticulate evolution this is a fundamental limitation that will require an important change of perspective when interpreting phylogenetic networks.

  6. Bayesian Models for Streamflow and River Network Reconstruction using Tree Rings

    Science.gov (United States)

    Ravindranath, A.; Devineni, N.

    2016-12-01

    Water systems face non-stationary, dynamically shifting risks due to shifting societal conditions and systematic long-term variations in climate manifesting as quasi-periodic behavior on multi-decadal time scales. Water systems are thus vulnerable to long periods of wet or dry hydroclimatic conditions. Streamflow is a major component of water systems and a primary means by which water is transported to serve ecosystems' and human needs. Thus, our concern is in understanding streamflow variability. Climate variability and impacts on water resources are crucial factors affecting streamflow, and multi-scale variability increases risk to water sustainability and systems. Dam operations are necessary for collecting water brought by streamflow while maintaining downstream ecological health. Rules governing dam operations are based on streamflow records that are woefully short compared to periods of systematic variation present in the climatic factors driving streamflow variability and non-stationarity. We use hierarchical Bayesian regression methods in order to reconstruct paleo-streamflow records for dams within a basin using paleoclimate proxies (e.g. tree rings) to guide the reconstructions. The riverine flow network for the entire basin is subsequently modeled hierarchically using feeder stream and tributary flows. This is a starting point in analyzing streamflow variability and risks to water systems, and developing a scientifically-informed dynamic risk management framework for formulating dam operations and water policies to best hedge such risks. We will apply this work to the Missouri and Delaware River Basins (DRB). Preliminary results of streamflow reconstructions for eight dams in the upper DRB using standard Gaussian regression with regional tree ring chronologies give streamflow records that now span two to two and a half centuries, and modestly smoothed versions of these reconstructed flows indicate physically-justifiable trends in the time series.

  7. Reconstruction of periodic signals using neural networks

    Directory of Open Access Journals (Sweden)

    José Danilo Rairán Antolines

    2014-01-01

    In this paper, we reconstruct a periodic signal by using two neural networks. The first network is trained to approximate the period of the signal, and the second network estimates the corresponding coefficients of the signal's Fourier expansion. The reconstruction strategy consists in minimizing the mean-square error via backpropagation algorithms over a single neuron with a sine transfer function. Additionally, this paper presents a mathematical proof of the quality of the approximation as well as a first modification of the algorithm, which requires less data to reach the same estimation, thus making the algorithm suitable for real-time implementations.
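
    A non-neural sketch of the same two-stage idea, assuming noisy samples y at times t with more samples than fitted parameters: scan candidate frequencies with a least-squares sine fit (standing in for the trained sine-transfer neuron), then recover Fourier coefficients for the estimated period.

```python
import numpy as np

def estimate_period(t, y, omegas):
    """For each candidate frequency w, fit a*sin(wt) + b*cos(wt) + c by least
    squares and keep the frequency with the smallest residual."""
    def residual(w):
        design = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
        return np.linalg.lstsq(design, y, rcond=None)[1].sum()
    best = min(omegas, key=residual)
    return 2 * np.pi / best

def fourier_coefficients(t, y, period, n_harmonics=5):
    """Least-squares Fourier expansion up to n_harmonics for a known period."""
    w = 2 * np.pi / period
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * w * t), np.sin(k * w * t)]
    coeffs, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
    return coeffs
```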

  8. Improved Maximum Parsimony Models for Phylogenetic Networks.

    Science.gov (United States)

    Van Iersel, Leo; Jones, Mark; Scornavacca, Celine

    2018-05-01

    Phylogenetic networks are well suited to represent evolutionary histories comprising reticulate evolution. Several methods aiming at reconstructing explicit phylogenetic networks have been developed in the last two decades. In this article, we propose a new definition of maximum parsimony for phylogenetic networks that permits to model biological scenarios that cannot be modeled by the definitions currently present in the literature (namely, the "hardwired" and "softwired" parsimony). Building on this new definition, we provide several algorithmic results that lay the foundations for new parsimony-based methods for phylogenetic network reconstruction.

  9. Comparison of Generated Parallel Capillary Arrays to Three-Dimensional Reconstructed Capillary Networks in Modeling Oxygen Transport in Discrete Microvascular Volumes

    Science.gov (United States)

    Fraser, Graham M.; Goldman, Daniel; Ellis, Christopher G.

    2013-01-01

    Objective: We compare Reconstructed Microvascular Networks (RMN) to Parallel Capillary Arrays (PCA) under several simulated physiological conditions to determine how the use of different vascular geometry affects oxygen transport solutions. Methods: Three discrete networks were reconstructed from intravital video microscopy of rat skeletal muscle (84×168×342 μm, 70×157×268 μm and 65×240×571 μm) and hemodynamic measurements were made in individual capillaries. PCAs were created based on statistical measurements from RMNs. Blood flow and O2 transport models were applied and the resulting solutions for RMN and PCA models were compared under 4 conditions (rest, exercise, ischemia and hypoxia). Results: Predicted tissue PO2 was consistently lower in all RMN simulations compared to the paired PCA. PO2 for 3D reconstructions at rest was 28.2±4.8, 28.1±3.5, and 33.0±4.5 mmHg for networks I, II, and III, compared to the PCA mean values of 31.2±4.5, 30.6±3.4, and 33.8±4.6 mmHg. Simulated exercise yielded mean tissue PO2 in the RMN of 10.1±5.4, 12.6±5.7, and 19.7±5.7 mmHg, compared to 15.3±7.3, 18.8±5.3, and 21.7±6.0 in the PCA. Conclusions: These findings suggest that volume-matched PCA yield different results compared to reconstructed microvascular geometries when applied to O2 transport modeling, the predominant characteristic of this difference being an overestimate of mean tissue PO2. Despite this limitation, PCA models remain important for theoretical studies as they produce PO2 distributions with similar shape and parameter dependence as RMN. PMID:23841679

  10. Maximum-entropy networks pattern detection, network reconstruction and graph combinatorics

    CERN Document Server

    Squartini, Tiziano

    2017-01-01

    This book is an introduction to maximum-entropy models of random graphs with given topological properties and their applications. Its original contribution is the reformulation of many seemingly different problems in the study of both real networks and graph theory within the unified framework of maximum entropy. Particular emphasis is put on the detection of structural patterns in real networks, on the reconstruction of the properties of networks from partial information, and on the enumeration and sampling of graphs with given properties.  After a first introductory chapter explaining the motivation, focus, aim and message of the book, chapter 2 introduces the formal construction of maximum-entropy ensembles of graphs with local topological constraints. Chapter 3 focuses on the problem of pattern detection in real networks and provides a powerful way to disentangle nontrivial higher-order structural features from those that can be traced back to simpler local constraints. Chapter 4 focuses on the problem o...

  11. An Integrative Bioinformatics Framework for Genome-scale Multiple Level Network Reconstruction of Rice

    Directory of Open Access Journals (Sweden)

    Liu Lili

    2013-06-01

    Understanding how metabolic reactions translate the genome of an organism into its phenotype is a grand challenge in biology. Genome-wide association studies (GWAS) statistically connect genotypes to phenotypes, without any recourse to known molecular interactions, whereas a molecular mechanistic description ties gene function to phenotype through gene regulatory networks (GRNs), protein-protein interactions (PPIs) and molecular pathways. Integration of the different levels of regulatory information of an organism is expected to provide a good way of mapping genotypes to phenotypes. However, the lack of a curated metabolic model of rice has blocked the exploration of genome-scale multi-level network reconstruction. Here, we have merged GRN, PPI and genome-scale metabolic network (GSMN) approaches into a single framework for rice via reconstruction and integration of omics regulatory information. Firstly, we reconstructed a genome-scale metabolic model containing 4,462 functional genes and 2,986 metabolites involved in 3,316 reactions, compartmentalized into ten subcellular locations. Furthermore, 90,358 pairs of protein-protein interactions, 662,936 pairs of gene regulations and 1,763 microRNA-target interactions were integrated into the metabolic model. Eventually, a database was developed for systematically storing and retrieving the genome-scale multi-level network of rice. This provides a reference for understanding the genotype-phenotype relationship of rice and for analysis of its molecular regulatory network.

  12. Depth Reconstruction from Single Images Using a Convolutional Neural Network and a Condition Random Field Model

    Directory of Open Access Journals (Sweden)

    Dan Liu

    2018-04-01

    This paper presents an effective approach for depth reconstruction from a single image through the incorporation of semantic information and local details from the image. A unified framework for depth acquisition is constructed by joining a deep Convolutional Neural Network (CNN) and a continuous pairwise Conditional Random Field (CRF) model. Semantic information and relative depth trends of local regions inside the image are integrated into the framework. A deep CNN network is firstly used to automatically learn a hierarchical feature representation of the image. To get more local details in the image, the relative depth trends of local regions are incorporated into the network. Combined with semantic information of the image, a continuous pairwise CRF is then established and is used as the loss function of the unified model. Experiments on real scenes demonstrate that the proposed approach is effective and that the approach obtains satisfactory results.

  13. Depth Reconstruction from Single Images Using a Convolutional Neural Network and a Condition Random Field Model.

    Science.gov (United States)

    Liu, Dan; Liu, Xuejun; Wu, Yiguang

    2018-04-24

    This paper presents an effective approach for depth reconstruction from a single image through the incorporation of semantic information and local details from the image. A unified framework for depth acquisition is constructed by joining a deep Convolutional Neural Network (CNN) and a continuous pairwise Conditional Random Field (CRF) model. Semantic information and relative depth trends of local regions inside the image are integrated into the framework. A deep CNN network is firstly used to automatically learn a hierarchical feature representation of the image. To get more local details in the image, the relative depth trends of local regions are incorporated into the network. Combined with semantic information of the image, a continuous pairwise CRF is then established and is used as the loss function of the unified model. Experiments on real scenes demonstrate that the proposed approach is effective and that the approach obtains satisfactory results.

  14. Neural network modeling for near wall turbulent flow

    International Nuclear Information System (INIS)

    Milano, Michele; Koumoutsakos, Petros

    2002-01-01

    A neural network methodology is developed in order to reconstruct the near wall field in a turbulent flow by exploiting flow fields provided by direct numerical simulations. The results obtained from the neural network methodology are compared with the results obtained from prediction and reconstruction using proper orthogonal decomposition (POD). Using the property that the POD is equivalent to a specific linear neural network, a nonlinear neural network extension is presented. It is shown that for a relatively small additional computational cost nonlinear neural networks provide us with improved reconstruction and prediction capabilities for the near wall velocity fields. Based on these results advantages and drawbacks of both approaches are discussed with an outlook toward the development of near wall models for turbulence modeling and control.
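
    The POD baseline mentioned above is equivalent to a linear autoencoder and amounts to a few lines of SVD; the paper's contribution is the nonlinear network that improves on it. A sketch with flow snapshots stored one per row:

```python
import numpy as np

def pod_reconstruct(snapshots, n_modes):
    """POD by SVD: project mean-subtracted snapshots onto the leading modes
    and reconstruct - the linear baseline the nonlinear networks are
    compared against."""
    mean = snapshots.mean(axis=0)
    U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
    modes = Vt[:n_modes]                         # dominant spatial modes
    coeffs = (snapshots - mean) @ modes.T        # temporal coefficients
    return mean + coeffs @ modes, modes
```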

  15. An acoustical model based monitoring network

    NARCIS (Netherlands)

    Wessels, P.W.; Basten, T.G.H.; Eerden, F.J.M. van der

    2010-01-01

    In this paper the approach for an acoustical model based monitoring network is demonstrated. This network is capable of reconstructing a noise map, based on the combination of measured sound levels and an acoustic model of the area. By pre-calculating the sound attenuation within the network, the …

  16. Reconstructing phylogenetic networks using maximum parsimony.

    Science.gov (United States)

    Nakhleh, Luay; Jin, Guohua; Zhao, Fengmei; Mellor-Crummey, John

    2005-01-01

    Phylogenies - the evolutionary histories of groups of organisms - are one of the most widely used tools throughout the life sciences, as well as objects of research within systematics, evolutionary biology, epidemiology, etc. Almost every tool devised to date to reconstruct phylogenies produces trees; yet it is widely understood and accepted that trees oversimplify the evolutionary histories of many groups of organisms, most prominently bacteria (because of horizontal gene transfer) and plants (because of hybrid speciation). Various methods and criteria have been introduced for phylogenetic tree reconstruction. Parsimony is one of the most widely used and studied criteria, and various accurate and efficient heuristics for reconstructing trees based on parsimony have been devised. Jotun Hein suggested a straightforward extension of the parsimony criterion to phylogenetic networks. In this paper we formalize this concept, and provide the first experimental study of the quality of parsimony as a criterion for constructing and evaluating phylogenetic networks. Our results show that, when extended to phylogenetic networks, the parsimony criterion produces promising results. In a great majority of the cases in our experiments, the parsimony criterion accurately predicts the numbers and placements of non-tree events.

  17. Reconstruction of neutron spectra through neural networks

    International Nuclear Information System (INIS)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.

    2003-01-01

    A neural network has been used to reconstruct neutron spectra starting from the counting rates of the detectors of a Bonner sphere spectrometric system. A group of 56 neutron spectra was selected to calculate the counting rates that they would produce in a Bonner sphere system; with these data and the spectra, the neural network was trained. To test the performance of the net, 12 spectra were used: 6 were taken from the group used for the training, 3 were obtained from mathematical functions, and the other 3 correspond to real spectra. Comparing the original spectra with those reconstructed by the net, we find that our net performs poorly when reconstructing monoenergetic spectra; we attribute this to the characteristics of the spectra used to train the neural network. For the other groups of spectra, the results of the net agree with the expected ones. (Author)

  18. On the Complexity of Reconstructing Chemical Reaction Networks

    DEFF Research Database (Denmark)

    Fagerberg, Rolf; Flamm, Christoph; Merkle, Daniel

    2013-01-01

    The analysis of the structure of chemical reaction networks is crucial for a better understanding of chemical processes. Such networks are well described as hypergraphs. However, due to the available methods, analyses regarding network properties are typically made on standard graphs derived from the full hypergraph description, e.g. on the so-called species and reaction graphs. However, a reconstruction of the underlying hypergraph from these graphs is not necessarily unique. In this paper, we address the problem of reconstructing a hypergraph from its species and reaction graph and show NP…

  19. Gap-filling analysis of the iJO1366 Escherichia coli metabolic network reconstruction for discovery of metabolic functions

    Directory of Open Access Journals (Sweden)

    Orth Jeffrey D

    2012-05-01

    Background: The iJO1366 reconstruction of the metabolic network of Escherichia coli is one of the most complete and accurate metabolic reconstructions available for any organism. Still, because our knowledge of even well-studied model organisms such as this one is incomplete, this network reconstruction contains gaps and possible errors. There are a total of 208 blocked metabolites in iJO1366, representing gaps in the network. Results: A new model improvement workflow was developed to compare model-based phenotypic predictions to experimental data to fill gaps and correct errors. A Keio Collection based dataset of E. coli gene essentiality was obtained from literature data and compared to model predictions. The SMILEY algorithm was then used to predict the most likely missing reactions in the reconstructed network, adding reactions from a KEGG-based universal set of metabolic reactions. The feasibility of these putative reactions was determined by comparing updated versions of the model to the experimental dataset, and genes were predicted for the most feasible reactions. Conclusions: Numerous improvements to the iJO1366 metabolic reconstruction were suggested by these analyses. Experiments were performed to verify several computational predictions, including a new mechanism for growth on myo-inositol. The other predictions made in this study should be experimentally verifiable by similar means. Validating all of the predictions made here represents a substantial but important undertaking.

  20. Comparative genomic reconstruction of transcriptional networks controlling central metabolism in the Shewanella genus

    Directory of Open Access Journals (Sweden)

    Kovaleva Galina

    2011-06-01

    Background: Genome-scale prediction of gene regulation and reconstruction of transcriptional regulatory networks in bacteria is one of the critical tasks of modern genomics. The Shewanella genus is comprised of metabolically versatile gamma-proteobacteria, whose lifestyles and natural environments are substantially different from Escherichia coli and other model bacterial species. Comparative genomics approaches and computational identification of regulatory sites are useful for the in silico reconstruction of transcriptional regulatory networks in bacteria. Results: To explore conservation and variation in the Shewanella transcriptional networks, we analyzed the repertoire of transcription factors and performed genomics-based reconstruction and comparative analysis of regulons in 16 Shewanella genomes. The inferred regulatory network includes 82 transcription factors and their DNA binding sites, 8 riboswitches and 6 translational attenuators. Forty-five regulons were newly inferred from the genome context analysis, whereas others were propagated from previously characterized regulons in the Enterobacteria and Pseudomonas spp. Multiple variations in regulatory strategies between the Shewanella spp. and E. coli include regulon contraction and expansion (as in the case of PdhR, HexR, FadR), numerous cases of recruiting non-orthologous regulators to control equivalent pathways (e.g. PsrA for fatty acid degradation) and, conversely, of orthologous regulators controlling distinct pathways (e.g. TyrR, ArgR, Crp). Conclusions: We tentatively defined the first reference collection of ~100 transcriptional regulons in 16 Shewanella genomes. The resulting regulatory network contains ~600 regulated genes per genome that are mostly involved in metabolism of carbohydrates, amino acids, fatty acids, vitamins, metals, and stress responses. Several reconstructed regulons, including NagR for N-acetylglucosamine catabolism, were experimentally validated in S. …

  1. Neural Network for Sparse Reconstruction

    Directory of Open Access Journals (Sweden)

    Qingfa Li

    2014-01-01

    Full Text Available We construct a neural network based on smoothing approximation techniques and the projected gradient method to solve a class of sparse reconstruction problems. Neural networks can be implemented in circuits and are an important method for solving optimization problems, especially large-scale ones. Smoothing approximation is an efficient technique for solving nonsmooth optimization problems. We combine these two techniques to overcome the difficulties of choosing the step size in discrete algorithms and of selecting an element from the set-valued map in the differential-inclusion formulation. In theory, the proposed network can converge to the optimal solution set of the given problem. Furthermore, numerical experiments show the effectiveness of the proposed network.
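
    The two ingredients combined here, a smoothed l1 objective and projection onto the constraint set, can be sketched in NumPy; the discrete gradient iteration below is only a toy stand-in for the continuous-time network dynamics.

        # Sketch: smoothed-l1 minimization over {x : Ax = b} by projected gradient,
        # a discrete-time analogue of the neural-network dynamics described above.
        import numpy as np

        rng = np.random.default_rng(0)
        m, n, k = 40, 100, 5
        A = rng.standard_normal((m, n))
        idx = rng.choice(n, k, replace=False)
        x_true = np.zeros(n); x_true[idx] = rng.standard_normal(k)
        b = A @ x_true

        P = A.T @ np.linalg.inv(A @ A.T)        # helper for projecting onto {x : Ax = b}

        def project(x):
            return x - P @ (A @ x - b)          # Euclidean projection onto the constraints

        eps, step = 1e-3, 1e-2
        x = project(np.zeros(n))                # feasible starting point
        for _ in range(5000):
            grad = x / np.sqrt(x**2 + eps**2)   # gradient of the smoothed |x| objective
            x = project(x - step * grad)

        print("true support:", np.sort(idx))
        print("largest recovered entries:", np.sort(np.argsort(np.abs(x))[-k:]))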

  2. Neural network CT image reconstruction method for small amount of projection data

    International Nuclear Information System (INIS)

    Ma, X.F.; Fukuhara, M.; Takeda, T.

    2000-01-01

    This paper presents a new method for two-dimensional image reconstruction using a multi-layer neural network. Whereas the conventionally used objective function for such a neural network is a sum of squared errors of the output data, we define an objective function composed of a sum of squared residuals of an integral equation. By employing an appropriate numerical line integral for this integral equation, we can construct a neural network that can be used for CT image reconstruction in cases with a small amount of projection data. We applied this method to some model problems and obtained satisfactory results. This method is especially useful for analyses of laboratory experiments or field observations where, in contrast to well-developed medical applications, only a small amount of projection data is available

  3. Neural network CT image reconstruction method for small amount of projection data

    CERN Document Server

    Ma, X F; Takeda, T

    2000-01-01

    This paper presents a new method for two-dimensional image reconstruction using a multi-layer neural network. Whereas the conventionally used objective function for such a neural network is a sum of squared errors of the output data, we define an objective function composed of a sum of squared residuals of an integral equation. By employing an appropriate numerical line integral for this integral equation, we can construct a neural network that can be used for CT image reconstruction in cases with a small amount of projection data. We applied this method to some model problems and obtained satisfactory results. This method is especially useful for analyses of laboratory experiments or field observations where, in contrast to well-developed medical applications, only a small amount of projection data is available.
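
    The core idea of records 2 and 3, minimizing the residuals of the line-integral equation when few projections are available, can be illustrated without the neural-network parametrization by a classical ART (Kaczmarz) iteration on a toy system matrix.

        # Sketch: ART (Kaczmarz) iteration minimizing line-integral residuals,
        # a conventional baseline for few-projection CT; toy system matrix only.
        import numpy as np

        rng = np.random.default_rng(1)
        n_pixels, n_rays = 64, 20               # deliberately underdetermined
        W = rng.random((n_rays, n_pixels))      # stand-in for ray/pixel intersection lengths
        image = rng.random(n_pixels)
        p = W @ image                           # measured projections

        x = np.zeros(n_pixels)
        for sweep in range(200):
            for i in range(n_rays):
                w = W[i]
                x += (p[i] - w @ x) / (w @ w) * w   # project onto hyperplane w.x = p_i
        x = np.clip(x, 0, None)                     # enforce nonnegativity

        print("projection residual:", float(np.linalg.norm(W @ x - p)))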

  4. Reconstruction of certain phylogenetic networks from their tree-average distances.

    Science.gov (United States)

    Willson, Stephen J

    2013-10-01

    Trees are commonly utilized to describe the evolutionary history of a collection of biological species, in which case the trees are called phylogenetic trees. Often these are reconstructed from data by making use of distances between extant species corresponding to the leaves of the tree. Because of increased recognition of the possibility of hybridization events, more attention is being given to the use of phylogenetic networks that are not necessarily trees. This paper describes the reconstruction of certain such networks from the tree-average distances between the leaves. For a certain class of phylogenetic networks, a polynomial-time method is presented to reconstruct the network from the tree-average distances. The method is proved to work if there is a single reticulation cycle.

  5. Hopfield neural network in HEP track reconstruction

    International Nuclear Information System (INIS)

    Muresan, R.; Pentia, M.

    1997-01-01

    In experimental particle physics, pattern recognition problems, specifically for neural network methods, occur frequently in track finding or feature extraction. Track finding is a combinatorial optimization problem. Given a set of points in Euclidean space, one attempts to reconstruct particle trajectories, subject to smoothness constraints. The basic ingredients in a neural network are the N binary neurons and the synaptic strengths connecting them. In our case the neurons are the segments connecting all possible point pairs. The dynamics of the neural network is given by a local updating rule which evaluates for each neuron the sign of the 'upstream activity'. An updating rule in the form of a sigmoid function is given. The synaptic strengths are defined in terms of the angle between the segments and the lengths of the segments involved in the track reconstruction. An algorithm based on the Hopfield neural network has been developed and tested on the track coordinates measured by a silicon microstrip tracking system
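
    A minimal sketch of the segment-neuron scheme in the style of Denby and Peterson, with toy hits and hypothetical constants; couplings reward head-to-tail collinear segments and penalize bifurcations.

        # Sketch: Hopfield-style mean-field update for track finding.
        # Neurons are directed segments between hit pairs.
        import numpy as np
        from itertools import permutations

        hits = np.array([[0, 0.0], [1, 0.1], [2, 0.18], [0, 1.0], [1, 1.5]])  # toy hits
        segs = [(i, j) for i, j in permutations(range(len(hits)), 2)]

        def cos_angle(a, b, c):                     # collinearity of (a->b) and (b->c)
            u, v = hits[b] - hits[a], hits[c] - hits[b]
            return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

        n = len(segs)
        W = np.zeros((n, n))
        for p, (a, b) in enumerate(segs):
            for q, (c, d) in enumerate(segs):
                if b == c and a != d:                        # head-to-tail pair
                    W[p, q] = max(0.0, cos_angle(a, b, d)) ** 5
                elif p != q and (a == c or b == d):          # competing segments
                    W[p, q] = -1.0

        s = np.full(n, 0.5)
        for _ in range(100):
            s = 1.0 / (1.0 + np.exp(-W @ s))                 # mean-field sigmoid update

        print("active segments:", [segs[i] for i in range(n) if s[i] > 0.8])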

  6. Reconstructing Late Holocene North Atlantic atmospheric circulation changes using functional paleoclimate networks

    Science.gov (United States)

    Franke, Jasper G.; Werner, Johannes P.; Donner, Reik V.

    2017-11-01

    Obtaining reliable reconstructions of long-term atmospheric circulation changes in the North Atlantic region presents a persistent challenge to contemporary paleoclimate research, which has been addressed by a multitude of recent studies. In order to contribute a novel methodological aspect to this active field, we apply here evolving functional network analysis, a recently developed tool for studying temporal changes of the spatial co-variability structure of the Earth's climate system, to a set of Late Holocene paleoclimate proxy records covering the last two millennia. The emerging patterns obtained by our analysis are related to long-term changes in the dominant mode of atmospheric circulation in the region, the North Atlantic Oscillation (NAO). By comparing the time-dependent inter-regional linkage structures of the obtained functional paleoclimate network representations to a recent multi-centennial NAO reconstruction, we identify co-variability between southern Greenland, Svalbard, and Fennoscandia as being indicative of a positive NAO phase, while connections from Greenland and Fennoscandia to central Europe are more pronounced during negative NAO phases. By drawing upon this correspondence, we use some key parameters of the evolving network structure to obtain a qualitative reconstruction of the NAO long-term variability over the entire Common Era (last 2000 years) using a linear regression model trained upon the existing shorter reconstruction.
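
    A hedged sketch of the generic evolving-functional-network step, with synthetic proxy series: windowed correlations are thresholded into links, and one simple network parameter (link density) is tracked through time, much as such parameters are regressed on an NAO index in the study.

        # Sketch: evolving functional network from proxy records via windowed correlations.
        import numpy as np

        rng = np.random.default_rng(2)
        n_rec, n_yr, win, thresh = 8, 2000, 200, 0.4
        proxies = rng.standard_normal((n_rec, n_yr)).cumsum(axis=1)  # red-noise stand-ins

        density = []
        for t0 in range(0, n_yr - win, 50):              # 50-year sliding step
            C = np.corrcoef(proxies[:, t0:t0 + win])     # windowed co-variability
            links = (np.abs(C) > thresh) & ~np.eye(n_rec, dtype=bool)
            density.append(links.sum() / (n_rec * (n_rec - 1)))

        print("link density, first windows:", np.round(density[:5], 2))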

  7. Engine cylinder pressure reconstruction using crank kinematics and recurrently-trained neural networks

    Science.gov (United States)

    Bennett, C.; Dunne, J. F.; Trimby, S.; Richardson, D.

    2017-02-01

    A recurrent non-linear autoregressive with exogenous input (NARX) neural network is proposed, and a suitable fully-recurrent training methodology is adapted and tuned, for reconstructing cylinder pressure in multi-cylinder IC engines using measured crank kinematics. This type of indirect sensing is important for cost effective closed-loop combustion control and for On-Board Diagnostics. The challenge addressed is to accurately predict cylinder pressure traces within the cycle under generalisation conditions: i.e. using data not previously seen by the network during training. This involves direct construction and calibration of a suitable inverse crank dynamic model, which owing to singular behaviour at top-dead-centre (TDC), has proved difficult via physical model construction, calibration, and inversion. The NARX architecture is specialised and adapted to cylinder pressure reconstruction, using a fully-recurrent training methodology which is needed because the alternatives are too slow and unreliable for practical network training on production engines. The fully-recurrent Robust Adaptive Gradient Descent (RAGD) algorithm, is tuned initially using synthesised crank kinematics, and then tested on real engine data to assess the reconstruction capability. Real data is obtained from a 1.125 l, 3-cylinder, in-line, direct injection spark ignition (DISI) engine involving synchronised measurements of crank kinematics and cylinder pressure across a range of steady-state speed and load conditions. The paper shows that a RAGD-trained NARX network using both crank velocity and crank acceleration as input information, provides fast and robust training. By using the optimum epoch identified during RAGD training, acceptably accurate cylinder pressures, and especially accurate location-of-peak-pressure, can be reconstructed robustly under generalisation conditions, making it the most practical NARX configuration and recurrent training methodology for use on production engines.
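
    A simplified, series-parallel (teacher-forced) NARX stand-in using scikit-learn on synthetic signals; the paper's fully recurrent RAGD training is not reproduced here.

        # Sketch: lagged crank velocity/acceleration -> cylinder pressure with an MLP,
        # a series-parallel approximation of the NARX reconstruction idea.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(3)
        t = np.linspace(0, 4 * np.pi, 2000)
        vel = 100 + np.sin(2 * t) + 0.01 * rng.standard_normal(t.size)  # crank velocity
        acc = np.gradient(vel, t)                                       # crank acceleration
        pressure = 5 + 3 * np.sin(2 * t - 0.3) ** 2                     # toy pressure trace

        lags = 4                                     # lagged (exogenous) regressors
        X = np.column_stack([np.roll(vel, k) for k in range(lags)] +
                            [np.roll(acc, k) for k in range(lags)])[lags:]
        y = pressure[lags:]

        net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
        net.fit(X[:1500], y[:1500])                  # train on part of the cycle data
        print("held-out R^2:", round(net.score(X[1500:], y[1500:]), 3))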

  8. Network Reconstruction of Dynamic Biological Systems

    OpenAIRE

    Asadi, Behrang

    2013-01-01

    Inference of network topology from experimental data is a central endeavor in biology, since knowledge of the underlying signaling mechanisms is a requirement for understanding biological phenomena. As one of the most important tools in the bioinformatics area, the development of methods to reconstruct biological networks has attracted remarkable attention in the current decade. Integration of different data types can lead to remarkable improvements in our ability to identify the connectivity of differe...

  9. Mass reconstruction with a neural network

    International Nuclear Information System (INIS)

    Loennblad, L.; Peterson, C.; Roegnvaldsson, T.

    1992-01-01

    A feed-forward neural network method is developed for reconstructing the invariant mass of hadronic jets appearing in a calorimeter. The approach is illustrated in W → qq̄, where W-bosons are produced in pp̄ reactions at SPS collider energies. The neural network method yields results that are superior to conventional methods. This neural network application differs from the classification ones in the sense that an analog number (the mass) is computed by the network, rather than a binary decision being made. As a by-product our application clearly demonstrates the need for using 'intelligent' variables in instances when the number of training instances is limited. (orig.)

  10. Gene expression network reconstruction by convex feature selection when incorporating genetic perturbations.

    Directory of Open Access Journals (Sweden)

    Benjamin A Logsdon

    Full Text Available Cellular gene expression measurements contain regulatory information that can be used to discover novel network relationships. Here, we present a new algorithm for network reconstruction powered by the adaptive lasso, a theoretically and empirically well-behaved method for selecting the regulatory features of a network. Any algorithms designed for network discovery that make use of directed probabilistic graphs require perturbations, produced by either experiments or naturally occurring genetic variation, to successfully infer unique regulatory relationships from gene expression data. Our approach makes use of appropriately selected cis-expression Quantitative Trait Loci (cis-eQTL), which provide a sufficient set of independent perturbations for maximum network resolution. We compare the performance of our network reconstruction algorithm to four other approaches: the PC-algorithm, QTLnet, the QDG algorithm, and the NEO algorithm, all of which have been used to reconstruct directed networks among phenotypes leveraging QTL. We show that the adaptive lasso can outperform these algorithms for networks of ten genes and ten cis-eQTL, and is competitive with the QDG algorithm for networks with thirty genes and thirty cis-eQTL, with rich topologies and hundreds of samples. Using this novel approach, we identify unique sets of directed relationships in Saccharomyces cerevisiae when analyzing genome-wide gene expression data for an intercross between a wild strain and a lab strain. We recover novel putative network relationships between a tyrosine biosynthesis gene (TYR1), and genes involved in endocytosis (RCY1), the spindle checkpoint (BUB2), sulfonate catabolism (JLP1), and cell-cell communication (PRM7). Our algorithm provides a synthesis of feature selection methods and graphical model theory that has the potential to reveal new directed regulatory relationships from the analysis of population level genetic and gene expression data.
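
    The adaptive lasso itself can be sketched with scikit-learn via the standard two-step column-reweighting trick (Zou, 2006); the data below are synthetic stand-ins for expression and eQTL predictors.

        # Sketch: adaptive lasso by column re-weighting; regress one gene's
        # expression on candidate regulators, then read off selected features.
        import numpy as np
        from sklearn.linear_model import Lasso, Ridge

        rng = np.random.default_rng(4)
        n, p = 200, 20
        X = rng.standard_normal((n, p))                  # other genes / eQTL genotypes
        beta = np.zeros(p); beta[[2, 7]] = [1.5, -2.0]   # true regulators
        y = X @ beta + 0.3 * rng.standard_normal(n)

        w = np.maximum(np.abs(Ridge(alpha=1.0).fit(X, y).coef_), 1e-6)  # pilot weights
        lasso = Lasso(alpha=0.05).fit(X * w, y)          # rescale columns by the weights
        coef = lasso.coef_ * w                           # back to the original scale
        print("selected predictors:", np.flatnonzero(np.abs(coef) > 1e-3))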

  11. A practical algorithm for reconstructing level-1 phylogenetic networks

    NARCIS (Netherlands)

    Huber, K.T.; Iersel, van L.J.J.; Kelk, S.M.; Suchecki, R.

    2011-01-01

    Recently, much attention has been devoted to the construction of phylogenetic networks which generalize phylogenetic trees in order to accommodate complex evolutionary processes. Here, we present an efficient, practical algorithm for reconstructing level-1 phylogenetic networks, a type of network

  12. Quartet-based methods to reconstruct phylogenetic networks.

    Science.gov (United States)

    Yang, Jialiang; Grünewald, Stefan; Xu, Yifei; Wan, Xiu-Feng

    2014-02-20

    Phylogenetic networks are employed to visualize evolutionary relationships among a group of nucleotide sequences, genes or species when reticulate events like hybridization, recombination, reassortment and horizontal gene transfer are believed to be involved. In comparison to traditional distance-based methods, quartet-based methods consider more information in the reconstruction process and thus have the potential to be more accurate. We introduce QuartetSuite, which includes a set of new quartet-based methods, namely QuartetS, QuartetA, and QuartetM, to reconstruct phylogenetic networks from nucleotide sequences. We tested their performance and compared them with other popular methods on two simulated nucleotide sequence data sets: one generated from a tree topology and the other from a complicated evolutionary history containing three reticulate events. We further applied these methods to two real data sets: a bacterial data set consisting of seven concatenated genes of 36 bacterial species and an influenza data set related to recently emerging H7N9 low pathogenic avian influenza viruses in China. QuartetS, QuartetA, and QuartetM have the potential to accurately reconstruct evolutionary scenarios from simple branching trees to complicated networks containing many reticulate events. These methods could provide insights into the understanding of complicated biological evolutionary processes such as bacterial taxonomy and reassortment of influenza viruses.

  13. A swarm intelligence framework for reconstructing gene networks: searching for biologically plausible architectures.

    Science.gov (United States)

    Kentzoglanakis, Kyriakos; Poole, Matthew

    2012-01-01

    In this paper, we investigate the problem of reverse engineering the topology of gene regulatory networks from temporal gene expression data. We adopt a computational intelligence approach comprising swarm intelligence techniques, namely particle swarm optimization (PSO) and ant colony optimization (ACO). In addition, the recurrent neural network (RNN) formalism is employed for modeling the dynamical behavior of gene regulatory systems. More specifically, ACO is used for searching the discrete space of network architectures and PSO for searching the corresponding continuous space of RNN model parameters. We propose a novel solution construction process in the context of ACO for generating biologically plausible candidate architectures. The objective is to concentrate the search effort into areas of the structure space that contain architectures which are feasible in terms of their topological resemblance to real-world networks. The proposed framework is initially applied to the reconstruction of a small artificial network that has previously been studied in the context of gene network reverse engineering. Subsequently, we consider an artificial data set with added noise for reconstructing a subnetwork of the genetic interaction network of S. cerevisiae (yeast). Finally, the framework is applied to a real-world data set for reverse engineering the SOS response system of the bacterium Escherichia coli. Results demonstrate the relative advantage of utilizing problem-specific knowledge regarding biologically plausible structural properties of gene networks over conducting a problem-agnostic search in the vast space of network architectures.

  14. Efficient parsimony-based methods for phylogenetic network reconstruction.

    Science.gov (United States)

    Jin, Guohua; Nakhleh, Luay; Snir, Sagi; Tuller, Tamir

    2007-01-15

    Phylogenies, the evolutionary histories of groups of organisms, play a major role in representing relationships among biological entities. Although many biological processes can be effectively modeled as tree-like relationships, others, such as hybrid speciation and horizontal gene transfer (HGT), result in networks, rather than trees, of relationships. Hybrid speciation is a significant evolutionary mechanism in plants, fish and other groups of species. HGT plays a major role in bacterial genome diversification and is a significant mechanism by which bacteria develop resistance to antibiotics. Maximum parsimony is one of the most commonly used criteria for phylogenetic tree inference. Roughly speaking, inference based on this criterion seeks the tree that minimizes the amount of evolution. In 1990, Jotun Hein proposed using this criterion for inferring the evolution of sequences subject to recombination. Preliminary results on small synthetic datasets by Nakhleh et al. (2005) demonstrated the criterion's applicability to phylogenetic network reconstruction in general and HGT detection in particular. However, the naive algorithms used by the authors are inapplicable to large datasets due to their demanding computational requirements. Further, no rigorous theoretical analysis of computing the criterion was given, nor was it tested on biological data. In the present work we prove that the problem of scoring the parsimony of a phylogenetic network is NP-hard and provide an improved fixed parameter tractable algorithm for it. Further, we devise efficient heuristics for parsimony-based reconstruction of phylogenetic networks. We test our methods on both synthetic and biological data (rbcL gene in bacteria) and obtain very promising results.

  15. Neural network algorithm for image reconstruction using the grid friendly projections

    International Nuclear Information System (INIS)

    Cierniak, R.

    2011-01-01

    Full text: This paper describes the development of an original approach to the reconstruction problem using a recurrent neural network. In particular, the 'grid-friendly' angles of the performed projections are selected according to the discrete Radon transform (DRT) concept to decrease the number of projections required. The methodology of our approach is consistent with analytical reconstruction algorithms. In our approach, the reconstruction problem is reformulated as an optimization problem, which is solved using a method based on the maximum-likelihood methodology. The reconstruction algorithm proposed in this work is subsequently adapted to the more practical case of discrete fan-beam projections. Computer simulation results show that the neural network reconstruction algorithm designed in this way improves the obtained results and outperforms conventional methods in reconstructed image quality. (author)

  16. Reconstructing missing daily precipitation data using regression trees and artificial neural networks

    Science.gov (United States)

    Incomplete meteorological data have been a problem in environmental modeling studies. The objective of this work was to develop a technique to reconstruct missing daily precipitation data in the central part of the Chesapeake Bay Watershed using regression trees (RT) and artificial neural networks (ANN)....
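
    A minimal sketch of the regression-tree half of the comparison, with synthetic station data: a tree trained on neighboring gauges fills the gaps at a target gauge.

        # Sketch: filling missing daily precipitation at one gauge from its neighbors
        # with a regression tree; all station records here are synthetic.
        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(5)
        days = 3000
        base = rng.gamma(0.3, 5.0, size=days)                 # shared regional signal
        neighbors = base[:, None] * rng.uniform(0.6, 1.4, (days, 3))
        target = base * rng.uniform(0.7, 1.3, days)

        missing = rng.random(days) < 0.1                      # 10% gaps to reconstruct
        tree = DecisionTreeRegressor(max_depth=8, random_state=0)
        tree.fit(neighbors[~missing], target[~missing])
        filled = target.copy()
        filled[missing] = tree.predict(neighbors[missing])

        rmse = float(np.sqrt(np.mean((filled[missing] - target[missing]) ** 2)))
        print("gap RMSE:", round(rmse, 2))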

  17. Iterative reconstruction of transcriptional regulatory networks: an algorithmic approach.

    Directory of Open Access Journals (Sweden)

    Christian L Barrett

    2006-05-01

    Full Text Available The number of complete, publicly available genome sequences is now greater than 200, and this number is expected to rapidly grow in the near future as metagenomic and environmental sequencing efforts escalate and the cost of sequencing drops. In order to make use of this data for understanding particular organisms and for discerning general principles about how organisms function, it will be necessary to reconstruct their various biochemical reaction networks. Principal among these will be transcriptional regulatory networks. Given the physical and logical complexity of these networks, the various sources of (often noisy) data that can be utilized for their elucidation, the monetary costs involved, and the huge number of potential experiments (approximately 10^12) that can be performed, experiment design algorithms will be necessary for synthesizing the various computational and experimental data to maximize the efficiency of regulatory network reconstruction. This paper presents an algorithm for experimental design to systematically and efficiently reconstruct transcriptional regulatory networks. It is meant to be applied iteratively in conjunction with an experimental laboratory component. The algorithm is presented here in the context of reconstructing transcriptional regulation for metabolism in Escherichia coli, and, through a retrospective analysis with previously performed experiments, we show that the produced experiment designs conform to how a human would design experiments. The algorithm is able to utilize probability estimates based on a wide range of computational and experimental sources to suggest experiments with the highest potential of discovering the greatest amount of new regulatory knowledge.

  18. Speech reconstruction using a deep partially supervised neural network.

    Science.gov (United States)

    McLoughlin, Ian; Li, Jingjie; Song, Yan; Sharifzadeh, Hamid R

    2017-08-01

    Statistical speech reconstruction for larynx-related dysphonia has achieved good performance using Gaussian mixture models and, more recently, restricted Boltzmann machine arrays; however, deep neural network (DNN)-based systems have been hampered by the limited amount of training data available from individual voice-loss patients. The authors propose a novel DNN structure that allows a partially supervised training approach on spectral features from smaller data sets, yielding very good results compared with the current state-of-the-art.

  19. A neural network image reconstruction technique for electrical impedance tomography

    International Nuclear Information System (INIS)

    Adler, A.; Guardo, R.

    1994-01-01

    Reconstruction of Images in Electrical Impedance Tomography requires the solution of a nonlinear inverse problem on noisy data. This problem is typically ill-conditioned and requires either simplifying assumptions or regularization based on a priori knowledge. This paper presents a reconstruction algorithm using neural network techniques which calculates a linear approximation of the inverse problem directly from finite element simulations of the forward problem. This inverse is adapted to the geometry of the medium and the signal-to-noise ratio (SNR) used during network training. Results show good conductivity reconstruction where measurement SNR is similar to the training conditions. The advantages of this method are its conceptual simplicity and ease of implementation, and the ability to control the compromise between the noise performance and resolution of the image reconstruction
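
    The essence of the method, learning a linear approximate inverse from forward simulations at a matched SNR, can be sketched with a random matrix standing in for the finite element forward model.

        # Sketch: learn a linear approximate inverse from noisy forward simulations;
        # a toy random matrix replaces the FEM forward model of the actual method.
        import numpy as np

        rng = np.random.default_rng(6)
        n_meas, n_elem, n_train = 104, 256, 5000     # 104 ~ a 16-electrode EIT frame
        F = rng.standard_normal((n_meas, n_elem)) / np.sqrt(n_elem)  # toy forward map

        sigma = rng.standard_normal((n_train, n_elem))   # conductivity perturbations
        v = sigma @ F.T
        v += 0.05 * v.std() * rng.standard_normal(v.shape)  # noise sets the training SNR

        lam = 1e-2                                        # ridge-regularized linear inverse
        B = np.linalg.solve(v.T @ v + lam * np.eye(n_meas), v.T @ sigma)

        test = rng.standard_normal(n_elem)
        recon = (test @ F.T) @ B                          # reconstruct from measurements
        print("correlation:", round(float(np.corrcoef(test, recon)[0, 1]), 3))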

  20. Northern emporia and maritime networks. Modelling past communication using archaeological network analysis

    DEFF Research Database (Denmark)

    Sindbæk, Søren Michael

    2015-01-01

    preserve patterns of this interaction. Formal network analysis and modelling holds the potential to identify and demonstrate such patterns, where traditional methods often prove inadequate. The archaeological study of communication networks in the past, however, calls for radically different analytical... this is not a problem of network analysis, but network synthesis: the classic problem of cracking codes or reconstructing black-box circuits. It is proposed that archaeological approaches to network synthesis must involve a contextual reading of network data: observations arising from individual contexts, morphologies...

  1. A Practical Algorithm for Reconstructing Level-1 Phylogenetic Networks

    NARCIS (Netherlands)

    K.T. Huber; L.J.J. van Iersel (Leo); S.M. Kelk (Steven); R. Suchecki

    2010-01-01

    Recently, much attention has been devoted to the construction of phylogenetic networks which generalize phylogenetic trees in order to accommodate complex evolutionary processes. Here we present an efficient, practical algorithm for reconstructing level-1 phylogenetic networks, a type of

  2. Multilevel method for modeling large-scale networks.

    Energy Technology Data Exchange (ETDEWEB)

    Safro, I. M. (Mathematics and Computer Science)

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of real networks, generating artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to understanding network modeling, investigating network structure and properties, and generating artificial networks. Most modern network generation methods are based either on various random graph models (reinforced by a set of properties such as the power-law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of the network hierarchy but with the same finest elements of the network. However, in many cases methods that randomize and replicate the finest relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability of algorithms on real (artificial) data that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, randomizing while simultaneously satisfying some attribute can abolish those topological attributes that have been undefined or hidden from

  3. Empirical Modeling of the Plasmasphere Dynamics Using Neural Networks

    Science.gov (United States)

    Zhelavskaya, I. S.; Shprits, Y.; Spasojevic, M.

    2017-12-01

    We present a new empirical model for reconstructing the global dynamics of the cold plasma density distribution based only on solar wind data and geomagnetic indices. Utilizing the density database obtained using the NURD (Neural-network-based Upper hybrid Resonance Determination) algorithm for the period of October 1, 2012 - July 1, 2016, in conjunction with solar wind data and geomagnetic indices, we develop a neural network model that is capable of globally reconstructing the dynamics of the cold plasma density distribution for 2 ≤ L ≤ 6 and all local times. We validate and test the model by measuring its performance on independent datasets withheld from the training set and by comparing the model predicted global evolution with global images of He+ distribution in the Earth's plasmasphere from the IMAGE Extreme UltraViolet (EUV) instrument. We identify the parameters that best quantify the plasmasphere dynamics by training and comparing multiple neural networks with different combinations of input parameters (geomagnetic indices, solar wind data, and different durations of their time history). We demonstrate results of both local and global plasma density reconstruction. This study illustrates how global dynamics can be reconstructed from local in-situ observations by using machine learning techniques.

  4. Image reconstruction using Monte Carlo simulation and artificial neural networks

    International Nuclear Information System (INIS)

    Emert, F.; Missimner, J.; Blass, W.; Rodriguez, A.

    1997-01-01

    PET data sets are subject to two types of distortions during acquisition: the imperfect response of the scanner and attenuation and scattering in the active distribution. In addition, the reconstruction of voxel images from the line projections composing a data set can introduce artifacts. Monte Carlo simulation provides a means for modeling the distortions and artificial neural networks a method for correcting for them as well as minimizing artifacts. (author) figs., tab., refs

  5. Gaussian mixture models and semantic gating improve reconstructions from human brain activity

    Directory of Open Access Journals (Sweden)

    Sanne eSchoenmakers

    2015-01-01

    Full Text Available Better acquisition protocols and analysis techniques are making it possible to use fMRI to obtain highly detailed visualizations of brain processes. In particular we focus on the reconstruction of natural images from BOLD responses in visual cortex. We expand our linear Gaussian framework for percept decoding with Gaussian mixture models to better represent the prior distribution of natural images. Reconstruction of such images then boils down to probabilistic inference in a hybrid Bayesian network. In our set-up, different mixture components correspond to different character categories. Our framework can automatically infer higher-order semantic categories from lower-level brain areas. Furthermore the framework can gate semantic information from higher-order brain areas to enforce the correct category during reconstruction. When categorical information is not available, we show that automatically learned clusters in the data give a similar improvement in reconstruction. The hybrid Bayesian network leads to highly accurate reconstructions in both supervised and unsupervised settings.

  6. SCNS: a graphical tool for reconstructing executable regulatory networks from single-cell genomic data.

    Science.gov (United States)

    Woodhouse, Steven; Piterman, Nir; Wintersteiger, Christoph M; Göttgens, Berthold; Fisher, Jasmin

    2018-05-25

    Reconstruction of executable mechanistic models from single-cell gene expression data represents a powerful approach to understanding developmental and disease processes. New ambitious efforts like the Human Cell Atlas will soon lead to an explosion of data with potential for uncovering and understanding the regulatory networks which underlie the behaviour of all human cells. In order to take advantage of this data, however, there is a need for general-purpose, user-friendly and efficient computational tools that can be readily used by biologists who do not have specialist computer science knowledge. The Single Cell Network Synthesis toolkit (SCNS) is a general-purpose computational tool for the reconstruction and analysis of executable models from single-cell gene expression data. Through a graphical user interface, SCNS takes single-cell qPCR or RNA-sequencing data taken across a time course, and searches for logical rules that drive transitions from early cell states towards late cell states. Because the resulting reconstructed models are executable, they can be used to make predictions about the effect of specific gene perturbations on the generation of specific lineages. SCNS should be of broad interest to the growing number of researchers working in single-cell genomics and will help further facilitate the generation of valuable mechanistic insights into developmental, homeostatic and disease processes.

  7. A Self-Reconstructing Algorithm for Single and Multiple-Sensor Fault Isolation Based on Auto-Associative Neural Networks

    Directory of Open Access Journals (Sweden)

    Hamidreza Mousavi

    2017-01-01

    Full Text Available Recently, different approaches have been developed in the field of sensor fault diagnostics based on Auto-Associative Neural Networks (AANN). In this paper we present a novel algorithm called the Self-reconstructing Auto-Associative Neural Network (S-AANN), which is able to detect and isolate a single faulty sensor via reconstruction. We have also extended the algorithm to be applicable under multiple-fault conditions. The algorithm uses a calibration model based on an AANN. The AANN can reconstruct the faulty sensor from the non-faulty sensors owing to the correlation between the process variables, and the mean of the difference between reconstructed and original data determines which sensors are faulty. The algorithms are tested on a dimerization process. The simulation results show that the S-AANN can isolate multiple faulty sensors with low computational time, making the algorithm an appropriate candidate for online applications.
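
    A hedged sketch of the AANN idea using a bottlenecked MLP as the auto-associative network; the per-channel mean residual flags the biased sensor, and all process data are synthetic.

        # Sketch: AANN-style sensor fault isolation. An autoencoder learned on healthy
        # data reconstructs each channel; a drifted sensor shows a large mean residual.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(7)
        n, n_sensors = 4000, 6
        latent = rng.standard_normal((n, 2))                   # two hidden process factors
        healthy = latent @ rng.standard_normal((2, n_sensors))
        healthy += 0.05 * rng.standard_normal((n, n_sensors))  # measurement noise

        aann = MLPRegressor(hidden_layer_sizes=(8, 2, 8), max_iter=3000, random_state=0)
        aann.fit(healthy, healthy)                  # identity map through a bottleneck

        faulty = healthy[:500].copy()
        faulty[:, 3] += 1.5                         # inject a bias fault on sensor 3
        residual = np.abs(faulty - aann.predict(faulty)).mean(axis=0)
        print("suspected faulty sensor:", int(np.argmax(residual)))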

  8. MR fingerprinting Deep RecOnstruction NEtwork (DRONE).

    Science.gov (United States)

    Cohen, Ouri; Zhu, Bo; Rosen, Matthew S

    2018-09-01

    Demonstrate a novel fast method for reconstruction of multi-dimensional MR fingerprinting (MRF) data using deep learning methods. A neural network (NN) is defined using the TensorFlow framework and trained on simulated MRF data computed with the extended phase graph formalism. The NN reconstruction accuracy for noiseless and noisy data is compared to conventional MRF template matching as a function of training data size and is quantified in simulated numerical brain phantom data and International Society for Magnetic Resonance in Medicine/National Institute of Standards and Technology phantom data measured on 1.5T and 3T scanners with an optimized MRF EPI and MRF fast imaging with steady state precession (FISP) sequences with spiral readout. The utility of the method is demonstrated in a healthy subject in vivo at 1.5T. Network training required 10 to 74 minutes; once trained, data reconstruction required approximately 10 ms for the MRF EPI and 76 ms for the MRF FISP sequence. Reconstruction of simulated, noiseless brain data using the NN resulted in an RMS error (RMSE) of 2.6 ms for T1 and 1.9 ms for T2. The reconstruction error in the presence of noise was less than 10% for both T1 and T2 for SNR greater than 25 dB. Phantom measurements yielded good agreement (R^2 = 0.99/0.99 for MRF EPI T1/T2 and 0.94/0.98 for MRF FISP T1/T2) between the T1 and T2 estimated by the NN and reference values from the International Society for Magnetic Resonance in Medicine/National Institute of Standards and Technology phantom. Reconstruction of MRF data with a NN is accurate, 300- to 5000-fold faster, and more robust to noise and dictionary undersampling than conventional MRF dictionary-matching. © 2018 International Society for Magnetic Resonance in Medicine.

  9. Automatic reconstruction of fault networks from seismicity catalogs including location uncertainty

    International Nuclear Information System (INIS)

    Wang, Y.

    2013-01-01

    Within the framework of plate tectonics, the deformation that arises from the relative movement of two plates occurs across discontinuities in the earth's crust, known as fault zones. Active fault zones are the causal locations of most earthquakes, which suddenly release tectonic stresses within a very short time. In return, fault zones slowly grow by accumulating slip due to such earthquakes, by cumulated damage at their tips, and by branching or linking between pre-existing faults of various sizes. Over the last decades, a large amount of knowledge has been acquired concerning the overall phenomenology and mechanics of individual faults and earthquakes; a deep physical and mechanical understanding of the links and interactions between and among them is still missing, however. One of the main issues lies in our failure to always succeed in assigning an earthquake to its causative fault. Using approaches based on pattern-recognition theory, more insight into the relationship between earthquakes and fault structure can be gained by developing an automatic fault network reconstruction approach using high-resolution earthquake data sets at largely different scales and by considering individual event uncertainties. This thesis introduces the Anisotropic Clustering of Location Uncertainty Distributions (ACLUD) method to reconstruct active fault networks on the basis of both earthquake locations and their estimated individual uncertainties. The method consists in fitting a given set of hypocenters with an increasing number of finite planes until the residuals of the fit become comparable with the location uncertainties. After a massive search through the large solution space of possible reconstructed fault networks, six different validation procedures are applied in order to select the corresponding best fault network. Two of the validation steps (cross-validation and the Bayesian Information Criterion (BIC)) process the fit residuals, while the four others look for solutions that

  10. Automatic reconstruction of fault networks from seismicity catalogs including location uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Y.

    2013-07-01

    Within the framework of plate tectonics, the deformation that arises from the relative movement of two plates occurs across discontinuities in the earth's crust, known as fault zones. Active fault zones are the causal locations of most earthquakes, which suddenly release tectonic stresses within a very short time. In return, fault zones slowly grow by accumulating slip due to such earthquakes, by cumulated damage at their tips, and by branching or linking between pre-existing faults of various sizes. Over the last decades, a large amount of knowledge has been acquired concerning the overall phenomenology and mechanics of individual faults and earthquakes; a deep physical and mechanical understanding of the links and interactions between and among them is still missing, however. One of the main issues lies in our failure to always succeed in assigning an earthquake to its causative fault. Using approaches based on pattern-recognition theory, more insight into the relationship between earthquakes and fault structure can be gained by developing an automatic fault network reconstruction approach using high-resolution earthquake data sets at largely different scales and by considering individual event uncertainties. This thesis introduces the Anisotropic Clustering of Location Uncertainty Distributions (ACLUD) method to reconstruct active fault networks on the basis of both earthquake locations and their estimated individual uncertainties. The method consists in fitting a given set of hypocenters with an increasing number of finite planes until the residuals of the fit become comparable with the location uncertainties. After a massive search through the large solution space of possible reconstructed fault networks, six different validation procedures are applied in order to select the corresponding best fault network. Two of the validation steps (cross-validation and the Bayesian Information Criterion (BIC)) process the fit residuals, while the four others look for solutions that

  11. Reconstruction of financial networks for robust estimation of systemic risk

    Science.gov (United States)

    Mastromatteo, Iacopo; Zarinelli, Elia; Marsili, Matteo

    2012-03-01

    In this paper we estimate the propagation of liquidity shocks through interbank markets when the information about the underlying credit network is incomplete. We show that techniques such as maximum entropy currently used to reconstruct credit networks severely underestimate the risk of contagion by assuming a trivial (fully connected) topology, a type of network structure which can be very different from the one empirically observed. We propose an efficient message-passing algorithm to explore the space of possible network structures and show that a correct estimation of the network degree of connectedness leads to more reliable estimations for systemic risk. Such an algorithm is also able to produce maximally fragile structures, providing a practical upper bound for the risk of contagion when the actual network structure is unknown. We test our algorithm on ensembles of synthetic data encoding some features of real financial networks (sparsity and heterogeneity), finding that more accurate estimations of risk can be achieved. Finally we find that this algorithm can be used to control the amount of information that regulators need to require from banks in order to sufficiently constrain the reconstruction of financial networks.
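
    The dense maximum-entropy baseline that the paper criticizes can be written in a few lines as iterative proportional fitting (RAS) of an exposure matrix to the banks' observed totals; the paper's message-passing algorithm itself is beyond this sketch.

        # Sketch: dense maximum-entropy reconstruction via iterative proportional
        # fitting; it matches the totals but yields a fully connected network,
        # which is exactly the bias discussed above. Toy balance-sheet data.
        import numpy as np

        rng = np.random.default_rng(8)
        n_banks = 6
        assets = rng.uniform(1.0, 10.0, n_banks)             # observed row totals
        liabilities = rng.uniform(1.0, 10.0, n_banks)
        liabilities *= assets.sum() / liabilities.sum()      # totals must balance

        X = np.outer(assets, liabilities) / assets.sum()     # max-entropy seed
        np.fill_diagonal(X, 0.0)                             # no self-exposure

        for _ in range(500):                                 # RAS / IPF iterations
            X *= (assets / X.sum(axis=1))[:, None]           # match row sums
            X *= (liabilities / X.sum(axis=0))[None, :]      # match column sums

        print("max row-total error:", float(np.abs(X.sum(axis=1) - assets).max()))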

  12. Reconstruction of financial networks for robust estimation of systemic risk

    International Nuclear Information System (INIS)

    Mastromatteo, Iacopo; Zarinelli, Elia; Marsili, Matteo

    2012-01-01

    In this paper we estimate the propagation of liquidity shocks through interbank markets when the information about the underlying credit network is incomplete. We show that techniques such as maximum entropy currently used to reconstruct credit networks severely underestimate the risk of contagion by assuming a trivial (fully connected) topology, a type of network structure which can be very different from the one empirically observed. We propose an efficient message-passing algorithm to explore the space of possible network structures and show that a correct estimation of the network degree of connectedness leads to more reliable estimations for systemic risk. Such an algorithm is also able to produce maximally fragile structures, providing a practical upper bound for the risk of contagion when the actual network structure is unknown. We test our algorithm on ensembles of synthetic data encoding some features of real financial networks (sparsity and heterogeneity), finding that more accurate estimations of risk can be achieved. Finally we find that this algorithm can be used to control the amount of information that regulators need to require from banks in order to sufficiently constrain the reconstruction of financial networks

  13. A consensus yeast metabolic network reconstruction obtained from a community approach to systems biology

    NARCIS (Netherlands)

    Herrgård, Markus J.; Swainston, Neil; Dobson, Paul; Dunn, Warwick B.; Arga, K. Yalçin; Arvas, Mikko; Blüthgen, Nils; Borger, Simon; Costenoble, Roeland; Heinemann, Matthias; Hucka, Michael; Novère, Nicolas Le; Li, Peter; Liebermeister, Wolfram; Mo, Monica L.; Oliveira, Ana Paula; Petranovic, Dina; Pettifer, Stephen; Simeonidis, Evangelos; Smallbone, Kieran; Spasić, Irena; Weichart, Dieter; Brent, Roger; Broomhead, David S.; Westerhoff, Hans V.; Kırdar, Betül; Penttilä, Merja; Klipp, Edda; Palsson, Bernhard Ø.; Sauer, Uwe; Oliver, Stephen G.; Mendes, Pedro; Nielsen, Jens; Kell, Douglas B.

    2008-01-01

    Genomic data allow the large-scale manual or semi-automated assembly of metabolic network reconstructions, which provide highly curated organism-specific knowledge bases. Although several genome-scale network reconstructions describe Saccharomyces cerevisiae metabolism, they differ in scope and

  14. An artificial neural network approach to reconstruct the source term of a nuclear accident

    International Nuclear Information System (INIS)

    Giles, J.; Palma, C. R.; Weller, P.

    1997-01-01

    This work makes use of one of the main features of artificial neural networks, which is their ability to 'learn' from sets of known input and output data. Indeed, a trained artificial neural network can be used to make predictions on the input data when the output is known, and this feedback process enables one to reconstruct the source term from field observations. With this aim, an artificial neural network has been trained using the projections of a segmented plume atmospheric dispersion model at fixed points, simulating a set of gamma detectors located outside the perimeter of a nuclear facility. The resulting set of artificial neural networks was used to determine the release fraction and rate for each of the noble gases, iodines and particulate fission products that could originate from a nuclear accident. Model projections were made using a large data set consisting of effective release height, release fraction of noble gases, iodines and particulate fission products, atmospheric stability, wind speed and wind direction. The model computed nuclide-specific gamma dose rates. The locations of the detectors were chosen taking into account both building shine and wake effects, and varied in distance between 800 and 1200 m from the reactor. The inputs to the artificial neural networks consisted of the measurements from the detector array, atmospheric stability, wind speed and wind direction; the outputs comprised a set of release fractions and heights. Once trained, the artificial neural networks were used to reconstruct the source term from the detector responses for data sets not used in training. The preliminary results are encouraging and show that the noble gas and particulate fission product release fractions are well determined

  15. Virtual resistive network and conductivity reconstruction with Faraday's law

    International Nuclear Information System (INIS)

    Lee, Min Gi; Ko, Min-Su; Kim, Yong-Jung

    2014-01-01

    A network-based conductivity reconstruction method is introduced using the third Maxwell equation, or Faraday's law, for a static case. The usual choice in electrical impedance tomography is the divergence-free equation for the electrical current density. However, if the electrical current density is given, the curl-free equation for the electrical field gives a direct relation between the current and the conductivity and this relation is used in this paper. Mimetic discretization is applied to the equation, which gives the virtual resistive network system. Properties of the numerical schemes introduced are investigated and their advantages over other conductivity reconstruction methods are discussed. Numerically simulated results, with an analysis of noise propagation, are presented. (paper)

  16. Differential reconstructed gene interaction networks for deriving toxicity threshold in chemical risk assessment.

    Science.gov (United States)

    Yang, Yi; Maxwell, Andrew; Zhang, Xiaowei; Wang, Nan; Perkins, Edward J; Zhang, Chaoyang; Gong, Ping

    2013-01-01

    Pathway alterations reflected as changes in gene expression regulation and gene interaction can result from cellular exposure to toxicants. Such information is often used to elucidate toxicological modes of action. From a risk assessment perspective, alterations in biological pathways are a rich resource for setting toxicant thresholds, which may be more sensitive and mechanism-informed than traditional toxicity endpoints. Here we developed a novel differential networks (DNs) approach to connect pathway perturbation with toxicity threshold setting. Our DNs approach consists of 6 steps: time-series gene expression data collection, identification of altered genes, gene interaction network reconstruction, differential edge inference, mapping of genes with differential edges to pathways, and establishment of causal relationships between chemical concentration and perturbed pathways. A one-sample Gaussian process model and a linear regression model were used to identify genes that exhibited significant profile changes across an entire time course and between treatments, respectively. Interaction networks of differentially expressed (DE) genes were reconstructed for different treatments using a state space model and then compared to infer differential edges/interactions. DE genes possessing differential edges were mapped to biological pathways in databases such as KEGG pathways. Using the DNs approach, we analyzed a time-series Escherichia coli live cell gene expression dataset consisting of 4 treatments (control, 10, 100, 1000 mg/L naphthenic acids, NAs) and 18 time points. Through comparison of reconstructed networks and construction of differential networks, 80 genes were identified as DE genes with a significant number of differential edges, and 22 KEGG pathways were altered in a concentration-dependent manner. Some of these pathways were perturbed to a degree as high as 70% even at the lowest exposure concentration, implying a high sensitivity of our DNs approach

  17. Statistical inference approach to structural reconstruction of complex networks from binary time series

    Science.gov (United States)

    Ma, Chuang; Chen, Han-Shuang; Lai, Ying-Cheng; Zhang, Hai-Feng

    2018-02-01

    Complex networks hosting binary-state dynamics arise in a variety of contexts. In spite of previous works, to fully reconstruct the network structure from observed binary data remains challenging. We articulate a statistical inference based approach to this problem. In particular, exploiting the expectation-maximization (EM) algorithm, we develop a method to ascertain the neighbors of any node in the network based solely on binary data, thereby recovering the full topology of the network. A key ingredient of our method is the maximum-likelihood estimation of the probabilities associated with actual or nonexistent links, and we show that the EM algorithm can distinguish the two kinds of probability values without any ambiguity, insofar as the length of the available binary time series is reasonably long. Our method does not require any a priori knowledge of the detailed dynamical processes, is parameter-free, and is capable of accurate reconstruction even in the presence of noise. We demonstrate the method using combinations of distinct types of binary dynamical processes and network topologies, and provide a physical understanding of the underlying reconstruction mechanism. Our statistical inference based reconstruction method contributes an additional piece to the rapidly expanding "toolbox" of data based reverse engineering of complex networked systems.
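
    As a simplified stand-in for the EM scheme (not the paper's algorithm), neighbors of a node in threshold-type binary dynamics can often be read off the weights of a logistic model of its next state.

        # Sketch: neighbor inference from binary time series with a logistic model;
        # a simplified substitute for the EM-based maximum-likelihood approach above.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(9)
        N, T = 12, 4000
        A = (rng.random((N, N)) < 0.2).astype(float)
        A = np.triu(A, 1); A = A + A.T                   # random undirected network

        s = np.zeros((T, N)); s[0] = rng.random(N) < 0.5
        for t in range(T - 1):                           # noisy threshold dynamics
            s[t + 1] = (s[t] @ A + 0.5 * rng.standard_normal(N)) > 1.0

        node = int(np.argmax(A.sum(axis=1)))             # pick a well-connected node
        clf = LogisticRegression(C=0.5, max_iter=1000).fit(s[:-1], s[1:, node])
        k = int(A[node].sum())
        guess = np.sort(np.argsort(clf.coef_[0])[-k:])   # k largest positive weights
        print("true neighbors:", np.flatnonzero(A[node]), "inferred:", guess)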

  18. Network reconstruction via graph blending

    Science.gov (United States)

    Estrada, Rolando

    2016-05-01

    Graphs estimated from empirical data are often noisy and incomplete due to the difficulty of faithfully observing all the components (nodes and edges) of the true graph. This problem is particularly acute for large networks where the number of components may far exceed available surveillance capabilities. Errors in the observed graph can render subsequent analyses invalid, so it is vital to develop robust methods that can minimize these observational errors. Errors in the observed graph may include missing and spurious components, as well as fused (multiple nodes are merged into one) and split (a single node is misinterpreted as many) nodes. Traditional graph reconstruction methods are only able to identify missing or spurious components (primarily edges, and to a lesser degree nodes), so we developed a novel graph blending framework that allows us to cast the full estimation problem as a simple edge addition/deletion problem. Armed with this framework, we systematically investigate the viability of various topological graph features, such as the degree distribution or the clustering coefficients, and existing graph reconstruction methods for tackling the full estimation problem. Our experimental results suggest that incorporating any topological feature as a source of information actually hinders reconstruction accuracy. We provide a theoretical analysis of this phenomenon and suggest several avenues for improving this estimation problem.

  19. Harnessing diversity towards the reconstructing of large scale gene regulatory networks.

    Directory of Open Access Journals (Sweden)

    Takeshi Hase

    Full Text Available Elucidating gene regulatory networks (GRNs) from large-scale experimental data remains a central challenge in systems biology. Recently, numerous techniques, particularly consensus-driven approaches combining different algorithms, have become a potentially promising strategy to infer accurate GRNs. Here, we develop a novel consensus inference algorithm, TopkNet, that can integrate multiple algorithms to infer GRNs. Comprehensive performance benchmarking on a cloud computing framework demonstrated that (i) a simple strategy of combining many algorithms does not always lead to performance improvement compared to the cost of consensus and (ii) TopkNet, integrating only high-performance algorithms, provides significant performance improvement compared to the best individual algorithms and community prediction. These results suggest that a priori determination of high-performance algorithms is key to reconstructing an unknown regulatory network. Similarity among gene-expression datasets can be useful for determining potential optimal algorithms for reconstruction of unknown regulatory networks, i.e., if the expression data associated with a known regulatory network are similar to those associated with an unknown regulatory network, the optimal algorithms determined for the known regulatory network can be repurposed to infer the unknown regulatory network. Based on this observation, we developed a quantitative measure of similarity among gene-expression datasets and demonstrated that, if similarity between the two expression datasets is high, TopkNet integrating algorithms that are optimal for the known dataset performs well on the unknown dataset. The consensus framework, TopkNet, together with the similarity measure proposed in this study, provides a powerful strategy towards harnessing the wisdom of the crowds in the reconstruction of unknown regulatory networks.

  20. Orthotropic conductivity reconstruction with virtual-resistive network and Faraday's law

    KAUST Repository

    Lee, Min-Gi

    2015-06-01

    We obtain the existence and the uniqueness at the same time in the reconstruction of orthotropic conductivity in two space dimensions by using two sets of internal current densities and boundary conductivity. The curl-free equation of Faraday's law is taken instead of the elliptic equation in divergence form that is typically used in electrical impedance tomography. A reconstruction method based on a layered bricks-type virtual-resistive network is developed to reconstruct orthotropic conductivity with up to 40% multiplicative noise.

  1. Cyanobacterial Biofuels: Strategies and Developments on Network and Modeling.

    Science.gov (United States)

    Klanchui, Amornpan; Raethong, Nachon; Prommeenate, Peerada; Vongsangnak, Wanwipa; Meechai, Asawin

    Cyanobacteria, phototrophic microorganisms, have attracted much attention recently as a promising source for environmentally sustainable biofuel production. However, the main barrier to commercial markets for cyanobacteria-based biofuels is economic feasibility. Miscellaneous strategies for improving the production performance of cyanobacteria have thus been developed, but simple ad hoc strategies often fail to fully optimize cell growth coupled with the desired product yield. With the advancement of genomics and systems biology, a new paradigm toward systems metabolic engineering has been recognized. In particular, genome-scale metabolic network reconstruction and modeling is a crucial systems-based tool for whole-cell-wide investigation and prediction. In this review, the cyanobacterial genome-scale metabolic models, which offer a system-level understanding of cyanobacterial metabolism, are described. The main processes of metabolic network reconstruction and modeling of cyanobacteria are summarized. Strategies and developments in genome-scale network reconstruction and modeling through the systems metabolic engineering approach are described and employed for efficient cyanobacterial-based biofuel production.

  2. Reconstructing see-saw models

    International Nuclear Information System (INIS)

    Ibarra, Alejandro

    2007-01-01

    In this talk we discuss the prospects of reconstructing the high-energy see-saw Lagrangian from low-energy experiments in supersymmetric scenarios. We show that the model with three right-handed neutrinos could be reconstructed in theory, but not in practice. Then, we discuss the prospects of reconstructing the model with two right-handed neutrinos, which is the minimal see-saw model able to accommodate neutrino observations. We identify the relevant processes to achieve this goal and comment on the sensitivity of future experiments to them. We find the prospects much more promising, and we emphasize in particular the importance of the observation of rare leptonic decays for the reconstruction of the right-handed neutrino masses.
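    For orientation, the standard type-I see-saw relation underlying such reconstructions links the light-neutrino mass matrix to the Dirac masses and the heavy right-handed Majorana masses (textbook form, quoted here for context):

```latex
% Type-I see-saw in the limit M_R >> m_D: the light-neutrino mass matrix
% m_nu follows from the Dirac mass matrix m_D and the right-handed
% Majorana mass matrix M_R.
m_\nu \simeq - \, m_D \, M_R^{-1} \, m_D^{T}
```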

  3. Signal reconstruction in wireless sensor networks based on a cubature Kalman particle filter

    International Nuclear Information System (INIS)

    Huang Jin-Wang; Feng Jiu-Chao

    2014-01-01

    To solve the problem of reconstructing nonlinear, non-Gaussian signals in wireless sensor networks (WSNs), a new signal reconstruction algorithm based on a cubature Kalman particle filter (CKPF) is proposed in this paper. We model the reconstruction signal first and then use the CKPF to estimate the signal. The CKPF uses a cubature Kalman filter (CKF) to generate the importance proposal distribution of the particle filter and integrates the latest observation, which approximates the true posterior distribution better and improves the estimation accuracy. The CKPF uses fewer cubature points than the unscented Kalman particle filter (UKPF) and has a lower computational overhead. Meanwhile, the CKPF iterates on the square root of the error covariance and is therefore more stable and accurate than its UKPF counterpart. Simulation results show that the algorithm can reconstruct the observed signals quickly and effectively, while consuming less computation time and achieving higher accuracy than the UKPF-based method. (general)
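    The cubature rule at the heart of the CKF propagates 2n equally weighted points through the dynamics; a minimal sketch of point generation for a Gaussian with mean `mu` and covariance `P` (the standard third-degree rule, not the paper's full filter):

```python
import numpy as np

def cubature_points(mu: np.ndarray, P: np.ndarray):
    """Generate the 2n equally weighted points of the 3rd-degree
    spherical-radial cubature rule: x_i = mu +/- sqrt(n) * L e_i, P = L L^T."""
    n = mu.size
    L = np.linalg.cholesky(P)          # square root of the error covariance
    offsets = np.sqrt(n) * L           # columns are the scaled directions
    points = np.hstack([mu[:, None] + offsets, mu[:, None] - offsets])
    weights = np.full(2 * n, 1.0 / (2 * n))
    return points, weights

pts, w = cubature_points(np.zeros(2), np.eye(2))
assert np.allclose(pts @ w, 0)         # weighted mean recovers mu
```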

  4. Optimizing Markovian modeling of chaotic systems with recurrent neural networks

    International Nuclear Information System (INIS)

    Cechin, Adelmo L.; Pechmann, Denise R.; Oliveira, Luiz P.L. de

    2008-01-01

    In this paper, we propose a methodology for optimizing the modeling of a one-dimensional chaotic time series with a Markov chain. The model is extracted from a recurrent neural network trained on the attractor reconstructed from the data set. Each state of the obtained Markov chain is a region of the reconstructed state space where the dynamics is approximated by a specific piecewise linear map obtained from the network. The Markov chain represents the dynamics of the time series in its statistical essence. An application to a time series generated by the Lorenz system is included.
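    The pipeline from a partitioned state space to a transition matrix can be sketched as follows. Here the partition comes from simple quantiles of a toy series, whereas the paper derives the regions and local linear maps from the trained recurrent network; the logistic-map series is a stand-in.

```python
import numpy as np

def markov_from_series(x, n_states=8):
    """Estimate a Markov transition matrix from a scalar time series by
    partitioning the state space into equal-count bins and counting
    one-step transitions between bins."""
    edges = np.quantile(x, np.linspace(0, 1, n_states + 1)[1:-1])
    states = np.digitize(x, edges)            # region index per sample
    T = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        T[a, b] += 1
    T /= np.maximum(T.sum(axis=1, keepdims=True), 1)  # row-normalize
    return T

# logistic map as a stand-in chaotic series
x = np.empty(5000); x[0] = 0.3
for t in range(4999):
    x[t + 1] = 4 * x[t] * (1 - x[t])
T = markov_from_series(x)
```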

  5. Genome-scale reconstruction of the Saccharomyces cerevisiae metabolic network

    DEFF Research Database (Denmark)

    Förster, Jochen; Famili, I.; Fu, P.

    2003-01-01

    The metabolic network in the yeast Saccharomyces cerevisiae was reconstructed using currently available genomic, biochemical, and physiological information. The metabolic reactions were compartmentalized between the cytosol and the mitochondria, and transport steps between the compartments...

  6. EnzDP: improved enzyme annotation for metabolic network reconstruction based on domain composition profiles.

    Science.gov (United States)

    Nguyen, Nam-Ninh; Srihari, Sriganesh; Leong, Hon Wai; Chong, Ket-Fah

    2015-10-01

    Determining the entire complement of enzymes and their enzymatic functions is a fundamental step for reconstructing the metabolic network of cells. High-quality enzyme annotation helps in enhancing metabolic networks reconstructed from the genome, especially by reducing gaps and increasing the enzyme coverage. Currently, structure-based and network-based approaches can only cover a limited number of enzyme families, and the accuracy of homology-based approaches can be further improved. The bottom-up homology-based approach improves coverage by rebuilding Hidden Markov Model (HMM) profiles for all known enzymes. However, its clustering procedure relies heavily on BLAST similarity scores, ignoring protein domains/patterns, and is sensitive to changes in cut-off thresholds. Here, we use functional domain architecture to score the association between domain families and enzyme families (Domain-Enzyme Association Scoring, DEAS). The DEAS score is used to calculate the similarity between proteins, which is then used in the clustering procedure instead of the sequence similarity score. We improve the enzyme annotation protocol using a stringent classification procedure, by choosing optimal threshold settings and by checking for active sites. Our analysis shows that our stringent protocol EnzDP can cover up to 90% of enzyme families available in Swiss-Prot. It achieves a high accuracy of 94.5% based on five-fold cross-validation. EnzDP outperforms existing methods across several testing scenarios. Thus, EnzDP serves as a reliable automated tool for enzyme annotation and metabolic network reconstruction. Available at: www.comp.nus.edu.sg/~nguyennn/EnzDP .

  7. Reconstruction of sparse connectivity in neural networks from spike train covariances

    International Nuclear Information System (INIS)

    Pernice, Volker; Rotter, Stefan

    2013-01-01

    The inference of causation from correlation is in general highly problematic. Correspondingly, it is difficult to infer the existence of physical synaptic connections between neurons from correlations in their activity. Covariances in neural spike trains and their relation to network structure have been the subject of intense research, both experimentally and theoretically. The influence of recurrent connections on covariances can be characterized directly in linear models, where connectivity in the network is described by a matrix of linear coupling kernels. However, as indirect connections also give rise to covariances, the inverse problem of inferring network structure from covariances can generally not be solved unambiguously. Here we study to what degree this ambiguity can be resolved if the sparseness of neural networks is taken into account. To reconstruct a sparse network, we determine the minimal set of linear couplings consistent with the measured covariances by minimizing the L1 norm of the coupling matrix under appropriate constraints. Contrary to intuition, after stochastic optimization of the coupling matrix, the resulting estimate of the underlying network is directed, despite the fact that a symmetric matrix of count covariances is used for inference. The performance of the new method is best if connections are neither exceedingly sparse, nor too dense, and it is easily applicable for networks of a few hundred nodes. Full coupling kernels can be obtained from the matrix of full covariance functions. We apply our method to networks of leaky integrate-and-fire neurons in an asynchronous–irregular state, where spike train covariances are well described by a linear model. (paper)
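    The L1 step can be illustrated generically: for a linearized forward map A and data B, iterative soft-thresholding (ISTA) minimizes 0.5*||AW - B||^2 + lam*||W||_1. This is a textbook sparse-recovery sketch under assumed notation, not the authors' exact covariance constraints.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, B, lam=0.1, step=None, iters=500):
    """Minimize 0.5*||A W - B||_F^2 + lam*||W||_1 by iterative
    soft-thresholding; a generic sparse-recovery sketch."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    W = np.zeros((A.shape[1], B.shape[1]))
    for _ in range(iters):
        grad = A.T @ (A @ W - B)                 # gradient of the smooth part
        W = soft_threshold(W - step * grad, step * lam)
    return W

# toy: recover a sparse 10x10 coupling matrix from noisy linear measurements
rng = np.random.default_rng(1)
W_true = rng.random((10, 10)) * (rng.random((10, 10)) < 0.1)
A = rng.standard_normal((40, 10))
B = A @ W_true + 0.01 * rng.standard_normal((40, 10))
W_hat = ista(A, B, lam=0.05)
```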

  8. The Convolutional Visual Network for Identification and Reconstruction of NOvA Events

    Energy Technology Data Exchange (ETDEWEB)

    Psihas, Fernanda [Indiana U.

    2017-11-22

    In 2016 the NOvA experiment released results for the observation of oscillations in the νμ and νe channels, as well as νe cross-section measurements using neutrinos from Fermilab's NuMI beam. These and other measurements in progress rely on the accurate identification and reconstruction of the neutrino flavor and energy recorded by our detectors. This presentation describes the first application of convolutional neural network technology for event identification and reconstruction in particle detectors like NOvA. The Convolutional Visual Network (CVN) algorithm was developed for identification, categorization, and reconstruction of NOvA events. It increased the selection efficiency of the νe appearance signal by 40%, and studies show potential impact on the νμ disappearance analysis.

  9. Integrated Approach to Reconstruction of Microbial Regulatory Networks

    Energy Technology Data Exchange (ETDEWEB)

    Rodionov, Dmitry A [Sanford-Burnham Medical Research Institute; Novichkov, Pavel S [Lawrence Berkeley National Laboratory

    2013-11-04

    This project had the goal of developing an integrated bioinformatics platform for genome-scale inference and visualization of transcriptional regulatory networks (TRNs) in bacterial genomes. The work was done at the Sanford-Burnham Medical Research Institute (SBMRI, P.I. D.A. Rodionov) and Lawrence Berkeley National Laboratory (LBNL, co-P.I. P.S. Novichkov). The developed computational resources include: (1) the RegPredict web platform for TRN inference and regulon reconstruction in microbial genomes, and (2) the RegPrecise database for collection, visualization and comparative analysis of transcriptional regulons reconstructed by comparative genomics. These analytical resources were selected as key components in the DOE Systems Biology KnowledgeBase (SBKB). The high-quality data accumulated in RegPrecise will provide essential datasets of reference regulons in diverse microbes to enable automatic reconstruction of draft TRNs in newly sequenced genomes. We outline our progress toward the three aims of this grant proposal, which were: develop an integrated platform for genome-scale regulon reconstruction; infer regulatory annotations in several groups of bacteria and build reference collections of microbial regulons; and develop a KnowledgeBase on microbial transcriptional regulation.

  10. Efficient network reconstruction from dynamical cascades identifies small-world topology of neuronal avalanches.

    Directory of Open Access Journals (Sweden)

    Sinisa Pajevic

    2009-01-01

    Full Text Available Cascading activity is commonly found in complex systems with directed interactions, such as metabolic networks, neuronal networks, or disease spreading in social networks. Substantial insight into a system's organization can be obtained by reconstructing the underlying functional network architecture from the observed activity cascades. Here we focus on Bayesian approaches and reduce their computational demands by introducing the Iterative Bayesian (IB) and Posterior Weighted Averaging (PWA) methods. We introduce a special case of PWA, cast in nonparametric form, which we call the normalized count (NC) algorithm. NC efficiently reconstructs random and small-world functional network topologies and architectures from subcritical, critical, and supercritical cascading dynamics and yields significant improvements over commonly used correlation methods. With experimental data, NC identified a functional and structural small-world topology and its corresponding traffic in cortical networks with neuronal avalanche dynamics.
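    One plausible reading of the count-based idea is sketched below: score a directed edge by how often the target is active one step after the source, normalized by the source's activity. This is an illustration only; the published NC algorithm is derived as a nonparametric special case of PWA and differs in its normalization.

```python
import numpy as np

def normalized_count(cascades, n_nodes):
    """Illustrative count-based reconstruction: weight of edge i->j is the
    number of times j is active one step after i, divided by i's activity."""
    counts = np.zeros((n_nodes, n_nodes))
    active = np.zeros(n_nodes)
    for cascade in cascades:                 # each cascade: active sets per step
        for prev, nxt in zip(cascade[:-1], cascade[1:]):
            for i in prev:
                active[i] += 1
                for j in nxt:
                    counts[i, j] += 1
    return counts / np.maximum(active[:, None], 1)

# toy cascade data: node 0 tends to trigger node 1
cascades = [[{0}, {1}], [{0}, {1, 2}], [{2}, {0}]]
W = normalized_count(cascades, n_nodes=3)
```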

  11. Application of historical, topographic maps and remote sensing data for reconstruction of gully network development as source of information for gully erosion modeling

    Science.gov (United States)

    Belyaev, Vladimir; Kuznetsova, Yulia

    2017-04-01

    Central parts of European Russia are characterized by a relatively shorter history of intensive agriculture in comparison to Western Europe. As a result, a significant part of the period of large-scale cultivation is covered by different types of historical documents. For the last ~150 years, reasonably good-quality maps are available. The gully erosion history of European Russia is more or less well established, with known peaks of activity associated with initial cultivation (400-200 years ago for the territory of the Central Russian Upland) and with the change of land ownership in 1861, which split large landlord-owned fields into numerous small parcels owned by individual peasant families. The latter was the most important trigger for the dramatic growth of gully erosion intensity, as most such parcels were oriented downslope. We believe that detailed reconstructions of gully network development, drawing on all available information sources, can provide data suitable for testing gully erosion models. Such models can later be applied to predict the further development of existing gully networks under several different land use and climate change scenarios. Reconstructions for two case study areas located in different geographic and historical settings will be presented.

  12. Stochastic model and method of zoning water networks

    OpenAIRE

    Тевяшев, Андрей Дмитриевич; Матвиенко, Ольга Ивановна

    2014-01-01

    Water consumption at different times of the day is uneven. The model of steady flow distribution in water-supply networks is calculated for maximum consumption and effectively used in network design and reconstruction. Quasi-stationary modes, in which the parameters are random variables and vary around their mean values, are more suitable for operational management and planning of rational network operation modes. Leaks, which sometimes exceed 50 % of the volume of water supplied, are o...

  13. Reconstruction of three-dimensional porous media using generative adversarial neural networks

    Science.gov (United States)

    Mosser, Lukas; Dubrule, Olivier; Blunt, Martin J.

    2017-10-01

    To evaluate the variability of multiphase flow properties of porous media at the pore scale, it is necessary to acquire a number of representative samples of the void-solid structure. While modern X-ray computed tomography has made it possible to extract three-dimensional images of the pore space, assessment of the variability in the inherent material properties is often experimentally not feasible. We present a method to reconstruct the solid-void structure of porous media by applying a generative neural network that allows an implicit description of the probability distribution represented by three-dimensional image data sets. We show, by using an adversarial learning approach for neural networks, that this method of unsupervised learning is able to generate representative samples of porous media that honor their statistics. We successfully compare measures of pore morphology, such as the Euler characteristic, two-point statistics, and directional single-phase permeability of synthetic realizations with the calculated properties of a bead pack, Berea sandstone, and Ketton limestone. Results show that generative adversarial networks can be used to reconstruct high-resolution three-dimensional images of porous media at different scales that are representative of the morphology of the images used to train the neural network. The fully convolutional nature of the trained neural network allows the generation of large samples while maintaining computational efficiency. Compared to classical stochastic methods of image reconstruction, the implicit representation of the learned data distribution can be stored and reused to generate multiple realizations of the pore structure very rapidly.
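    A minimal sketch of the adversarial setup in 3-D follows; the layer sizes, the 16^3 output volume and PyTorch as the framework are assumptions for illustration, whereas the published generator is a deeper volumetric DCGAN trained on segmented CT images.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Map a latent vector to a 16x16x16 synthetic pore volume."""
    def __init__(self, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose3d(z_dim, 128, 4, 1, 0), nn.BatchNorm3d(128), nn.ReLU(),
            nn.ConvTranspose3d(128, 64, 4, 2, 1), nn.BatchNorm3d(64), nn.ReLU(),
            nn.ConvTranspose3d(64, 1, 4, 2, 1), nn.Tanh(),
        )
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Score a volume as real (training image patch) or synthetic."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv3d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv3d(128, 1, 4, 1, 0),              # real/fake logit
        )
    def forward(self, x):
        return self.net(x).view(-1)

G, D = Generator(), Discriminator()
z = torch.randn(2, 64, 1, 1, 1)
fake = G(z)                      # (2, 1, 16, 16, 16) synthetic volumes
logits = D(fake)                 # adversarial training would alternate G/D updates
```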

  14. The Reconstruction and Analysis of Gene Regulatory Networks.

    Science.gov (United States)

    Zheng, Guangyong; Huang, Tao

    2018-01-01

    In the post-genomic era, an important task is to explore the functions of individual biological molecules (i.e., genes, noncoding RNAs, proteins, metabolites) and their organization in living cells. To this end, gene regulatory networks (GRNs) are constructed to show the relationships between biological molecules, in which the vertices of the network denote biological molecules and the edges represent connections between nodes (Strogatz, Nature 410:268-276, 2001; Bray, Science 301:1864-1865, 2003). Biologists can understand not only the function of biological molecules but also the organization of the components of living cells by interpreting GRNs, since a gene regulatory network is a comprehensive physiological map of living cells and reflects the influence of genetic and epigenetic factors (Strogatz, Nature 410:268-276, 2001; Bray, Science 301:1864-1865, 2003). In this paper, we review inference methods for GRN reconstruction and approaches for analyzing network structure. As a powerful tool for studying complex diseases and biological processes, the applications of the network method in pathway analysis and disease gene identification are introduced.

  15. SCENERY: a web application for (causal) network reconstruction from cytometry data

    KAUST Repository

    Papoutsoglou, Georgios

    2017-05-08

    Flow and mass cytometry technologies can probe proteins as biological markers in thousands of individual cells simultaneously, providing unprecedented opportunities for reconstructing networks of protein interactions through machine learning algorithms. The network reconstruction (NR) problem has been well-studied by the machine learning community. However, the potentials of available methods remain largely unknown to the cytometry community, mainly due to their intrinsic complexity and the lack of comprehensive, powerful and easy-to-use NR software implementations specific for cytometry data. To bridge this gap, we present Single CEll NEtwork Reconstruction sYstem (SCENERY), a web server featuring several standard and advanced cytometry data analysis methods coupled with NR algorithms in a user-friendly, on-line environment. In SCENERY, users may upload their data and set their own study design. The server offers several data analysis options categorized into three classes of methods: data (pre)processing, statistical analysis and NR. The server also provides interactive visualization and download of results as ready-to-publish images or multimedia reports. Its core is modular and based on the widely-used and robust R platform allowing power users to extend its functionalities by submitting their own NR methods. SCENERY is available at scenery.csd.uoc.gr or http://mensxmachina.org/en/software/.

  16. Genome-scale reconstruction of the Streptococcus pyogenes M49 metabolic network reveals growth requirements and indicates potential drug targets.

    Science.gov (United States)

    Levering, Jennifer; Fiedler, Tomas; Sieg, Antje; van Grinsven, Koen W A; Hering, Silvio; Veith, Nadine; Olivier, Brett G; Klett, Lara; Hugenholtz, Jeroen; Teusink, Bas; Kreikemeyer, Bernd; Kummer, Ursula

    2016-08-20

    Genome-scale metabolic models comprise stoichiometric relations between metabolites, as well as associations between genes and metabolic reactions and facilitate the analysis of metabolism. We computationally reconstructed the metabolic network of the lactic acid bacterium Streptococcus pyogenes M49. Initially, we based the reconstruction on genome annotations and already existing and curated metabolic networks of Bacillus subtilis, Escherichia coli, Lactobacillus plantarum and Lactococcus lactis. This initial draft was manually curated with the final reconstruction accounting for 480 genes associated with 576 reactions and 558 metabolites. In order to constrain the model further, we performed growth experiments of wild type and arcA deletion strains of S. pyogenes M49 in a chemically defined medium and calculated nutrient uptake and production fluxes. We additionally performed amino acid auxotrophy experiments to test the consistency of the model. The established genome-scale model can be used to understand the growth requirements of the human pathogen S. pyogenes and define optimal and suboptimal conditions, but also to describe differences and similarities between S. pyogenes and related lactic acid bacteria such as L. lactis in order to find strategies to reduce the growth of the pathogen and propose drug targets. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Genome-scale reconstruction of the Streptococcus pyogenes M49 metabolic network reveals growth requirements and indicates potential drug targets

    NARCIS (Netherlands)

    Levering, J.; Fiedler, T.; Sieg, A.; van Grinsven, K.W.A.; Hering, S.; Veith, N.; Olivier, B.G.; Klett, L.; Hugenholtz, J.; Teusink, B.; Kreikemeyer, B.; Kummer, U.

    2016-01-01

    Genome-scale metabolic models comprise stoichiometric relations between metabolites, as well as associations between genes and metabolic reactions and facilitate the analysis of metabolism. We computationally reconstructed the metabolic network of the lactic acid bacterium Streptococcus pyogenes

  18. Reconstruction and analysis of nutrient-induced phosphorylation networks in Arabidopsis thaliana.

    Directory of Open Access Journals (Sweden)

    Guangyou eDuan

    2013-12-01

    Full Text Available Elucidating the dynamics of molecular processes in living organisms in response to external perturbations is a central goal in modern systems biology. We investigated the dynamics of protein phosphorylation events in Arabidopsis thaliana exposed to changing nutrient conditions. Phosphopeptide expression levels were detected at five consecutive time points over a time interval of 30 minutes after nutrient resupply following prior starvation. The three tested inorganic, ionic nutrients (NH4+, NO3-, PO43-) elicited similar phosphosignaling responses that were distinguishable from those invoked by the sugars (mannitol, sucrose). When embedded in the protein-protein interaction network of Arabidopsis thaliana, phosphoproteins were found to exhibit a higher degree than average proteins. Based on the time-series data, we reconstructed a network of regulatory interactions mediated by phosphorylation. The performance of different network inference methods was evaluated by the observed likelihood of physical interactions within and across different subcellular compartments and based on gene ontology semantic similarity. The dynamic phosphorylation network was then reconstructed using a Pearson correlation method with added directionality based on partial variance differences. The topology of the inferred integrated network corresponds to an information dissemination architecture, in which the phosphorylation signal is passed on to an increasing number of phosphoproteins stratified into an initiation, processing, and effector layer. Specific phosphorylation peptide motifs associated with the distinct layers were identified, indicating the action of layer-specific kinases. Despite the limited temporal resolution, combined with information on subcellular location, the available time-series data proved useful for reconstructing the dynamics of the molecular signaling cascade in response to nutrient stress conditions in the plant Arabidopsis thaliana.

  19. Hopfield neural network in HEP track reconstruction

    International Nuclear Information System (INIS)

    Muresan, Raluca; Pentia, Mircea

    1996-01-01

    This work uses a neural network technique (the Hopfield method) to reconstruct particle tracks starting from a data set obtained with a coordinate detector system placed around a high-energy particle interaction region. A learning algorithm for finding the optimal connection of the signal points has been elaborated and tested. We used a single-layer neural network with constraints in order to obtain the particle tracks drawn through the detected signal points. The dynamics of the system is given by the MFT equations, which determine the system's evolution towards a minimum of the energy function. We developed a computer program that has been tested on a large set of Monte Carlo simulated data. With this program we obtained good results even for a noise/signal ratio of 200. (authors)
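    The mean-field (MFT) iteration can be sketched as follows for candidate track segments with a symmetric coupling matrix that rewards smoothly aligned pairs; the couplings and the toy example are assumptions in the style of Denby-Peterson track-finding networks, not the authors' exact energy function.

```python
import numpy as np

def mft_track_update(T, s, temperature=1.0, iters=100):
    """Mean-field iteration for a Hopfield network: s_i in (0,1) is the
    activation of candidate segment i; T couples compatible segments
    positively and conflicting ones negatively."""
    for _ in range(iters):
        field = T @ s                              # local field on each segment
        s = 0.5 * (1.0 + np.tanh(field / temperature))
    return s

# toy: segments 0 and 1 align into a track, segment 2 conflicts with both
T = np.array([[ 0.0,  1.0, -1.0],
              [ 1.0,  0.0, -1.0],
              [-1.0, -1.0,  0.0]])
s = mft_track_update(T, s=np.full(3, 0.5))   # converges: 0,1 on; 2 off
```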

  20. Pseudo-proxy evaluation of climate field reconstruction methods of North Atlantic climate based on an annually resolved marine proxy network

    Directory of Open Access Journals (Sweden)

    M. Pyrina

    2017-10-01

    Full Text Available Two statistical methods are tested to reconstruct the interannual variations in past sea surface temperatures (SSTs) of the North Atlantic (NA) Ocean over the past millennium based on annually resolved and absolutely dated marine proxy records of the bivalve mollusk Arctica islandica. The methods are tested in a pseudo-proxy experiment (PPE) setup using state-of-the-art climate models (CMIP5 Earth system models) and reanalysis data from the COBE2 SST data set. The methods were applied in the virtual reality provided by global climate simulations and reanalysis data to reconstruct the past NA SSTs using pseudo-proxy records that mimic the statistical characteristics and network of Arctica islandica. The multivariate linear regression methods evaluated here are principal component regression and canonical correlation analysis. Differences in the skill of the climate field reconstruction (CFR) are assessed according to different calibration periods and different proxy locations within the NA basin. The choice of the climate model used as a surrogate reality in the PPE has a more profound effect on the CFR skill than the calibration period and the statistical reconstruction method. The differences between the two methods are clearer for the MPI-ESM model due to its higher spatial resolution in the NA basin. The pseudo-proxy results of the CCSM4 model are closer to the pseudo-proxy results based on the reanalysis data set COBE2. Conducting PPEs using noise-contaminated pseudo-proxies instead of noise-free pseudo-proxies is important for the evaluation of the methods, as more spatial differences in the reconstruction skill are revealed. Both methods are appropriate for the reconstruction of the temporal evolution of the NA SSTs, even though they lead to a great loss of variance away from the proxy sites. Under reasonable assumptions about the characteristics of the non-climate noise in the proxy records, our results show that the marine network of Arctica
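    Principal component regression, one of the two methods tested, can be sketched generically: project the proxy matrix onto its leading PCs over the calibration period, regress the field onto them, then apply the fit to new proxy values. The scaling and PC-selection details below are generic assumptions, not the paper's exact configuration.

```python
import numpy as np

def pcr_reconstruct(proxies_cal, field_cal, proxies_new, n_pc=3):
    """Principal component regression of a gridded field on proxy PCs."""
    mu, sd = proxies_cal.mean(0), proxies_cal.std(0)
    Z = (proxies_cal - mu) / sd                      # standardized proxies
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    pcs = Z @ Vt[:n_pc].T                            # calibration-period PCs
    X = np.column_stack([np.ones(len(pcs)), pcs])
    beta, *_ = np.linalg.lstsq(X, field_cal, rcond=None)
    pcs_new = ((proxies_new - mu) / sd) @ Vt[:n_pc].T
    return np.column_stack([np.ones(len(pcs_new)), pcs_new]) @ beta

# toy: 50 calibration years, 8 proxy sites, 20 field grid points
rng = np.random.default_rng(2)
P, F = rng.standard_normal((50, 8)), rng.standard_normal((50, 20))
sst_hat = pcr_reconstruct(P, F, rng.standard_normal((200, 8)))
```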

  1. Sequence-based model of gap gene regulatory network.

    Science.gov (United States)

    Kozlov, Konstantin; Gursky, Vitaly; Kulakovskiy, Ivan; Samsonova, Maria

    2014-01-01

    The detailed analysis of transcriptional regulation is crucially important for understanding biological processes. The gap gene network in Drosophila attracts large interest among researchers studying mechanisms of transcriptional regulation. It implements the most upstream regulatory layer of the segmentation gene network. The knowledge of the molecular mechanisms involved in gap gene regulation is far less complete than that of the genetics of the system. Mathematical modeling goes beyond the insights gained by genetic and molecular approaches: it allows us to reconstruct wild-type gene expression patterns in silico, infer the underlying regulatory mechanism and prove its sufficiency. We developed a new model that provides a dynamical description of gap gene regulatory systems, using detailed DNA-based information, as well as spatial transcription factor concentration data at varying time points. We showed that this model correctly reproduces gap gene expression patterns in wild-type embryos and is able to predict gap expression patterns in Kr mutants and four reporter constructs. We used a four-fold cross-validation test and fitting to a random dataset to validate the model and prove its sufficiency in data description. The identifiability analysis showed that most model parameters are well identifiable. We reconstructed the gap gene network topology and studied the impact of individual transcription factor binding sites on the model output. We measured this impact by calculating the site regulatory weight as a normalized difference between the residual sum of squares error for the set of all annotated sites and for the set with the site of interest excluded. The reconstructed topology of the gap gene network is in agreement with previous modeling results and data from the literature. We showed that 1) the regulatory weights of transcription factor binding sites show very weak correlation with their PWM score; 2) sites with low regulatory weight are important for the model output; 3
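    In symbols, the regulatory weight described above can be written as follows (notation assumed for illustration):

```latex
% Regulatory weight of binding site s: normalized difference between the
% residual sum of squares with site s excluded and with all annotated sites.
w_s = \frac{\mathrm{RSS}_{\setminus s} - \mathrm{RSS}_{\mathrm{all}}}{\mathrm{RSS}_{\mathrm{all}}}
```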

  2. Reconstruction of source location in a network of gravitational wave interferometric detectors

    International Nuclear Information System (INIS)

    Cavalier, Fabien; Barsuglia, Matteo; Bizouard, Marie-Anne; Brisson, Violette; Clapson, Andre-Claude; Davier, Michel; Hello, Patrice; Kreckelbergh, Stephane; Leroy, Nicolas; Varvella, Monica

    2006-01-01

    This paper deals with the reconstruction of the direction of a gravitational wave source using the detections made by a network of interferometric detectors, mainly the LIGO and Virgo detectors. We suppose that an event has been seen in coincidence using a filter applied to the three detector data streams. Using the arrival time (and its associated error) of the gravitational signal in each detector, the direction of the source in the sky is computed using a χ2 minimization technique. For reasonably large signals (SNR>4.5 in all detectors), the mean angular error between the real location and the reconstructed one is about 1 deg. We also investigate the effect of the network geometry, assuming the same angular response for all interferometric detectors. It appears that the reconstruction quality is not uniform over the sky and is degraded when the source approaches the plane defined by the three detectors. Adding at least one other detector to the LIGO-Virgo network reduces the blind regions, and in the case of 6 detectors a precision of less than 1 deg. on the source direction can be reached for 99% of the sky
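    The triangulation idea can be sketched generically: for a plane wave from sky direction (θ, φ), predicted arrival-time differences between detectors follow from the baseline geometry, and the direction is found by minimizing a χ2 over the sky. Detector positions, measured delays and the grid search below are toy assumptions, not the authors' estimator.

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

def chi2_direction(theta, phi, det_pos, dt_meas, sigma):
    """Chi-square between measured arrival-time differences (t_i - t_0)
    and those predicted for a plane wave from direction (theta, phi)."""
    n = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])                  # propagation direction
    t = det_pos @ n / C                            # arrival time per detector
    dt_pred = t[1:] - t[0]
    return np.sum(((dt_meas - dt_pred) / sigma) ** 2)

# coarse sky-grid minimization with toy detector positions
det = np.array([[0, 0, 0], [3e6, 0, 0], [0, 3e6, 0]], float)
grid = [(th, ph) for th in np.linspace(0, np.pi, 90)
                 for ph in np.linspace(0, 2 * np.pi, 180)]
best = min(grid, key=lambda a: chi2_direction(*a, det,
                                              dt_meas=np.array([5e-3, 2e-3]),
                                              sigma=1e-4))
```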

  3. The parallel implementation of a backpropagation neural network and its applicability to SPECT image reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Kerr, John Patrick [Iowa State Univ., Ames, IA (United States)

    1992-01-01

    The objective of this study was to determine the feasibility of using an artificial neural network (ANN), in particular a backpropagation ANN, to improve the speed and quality of the reconstruction of three-dimensional SPECT (single photon emission computed tomography) images. In addition, since the processing elements (PEs) in each layer of an ANN are independent of each other, the speed and efficiency of the neural network architecture could be better optimized by implementing the ANN on a massively parallel computer. The specific goals of this research were: to implement a fully interconnected backpropagation neural network on a serial computer and a SIMD parallel computer; to identify any reduction in the time required to train these networks on the parallel machine versus the serial machine; to determine if these neural networks can learn to recognize SPECT data by training them on a section of an actual SPECT image; and to determine from the knowledge obtained in this research whether full SPECT image reconstruction by an ANN implemented on a parallel computer is feasible, both in the time required to train the network and in the quality of the images reconstructed.

  4. Connections model for tomographic images reconstruction

    International Nuclear Information System (INIS)

    Rodrigues, R.G.S.; Pela, C.A.; Roque, S.F. A.C.

    1998-01-01

    This paper presents an artificial neural network with a topology adequate for tomographic image reconstruction. The associated error function is derived and the learning algorithm is constructed. Simulation results are presented and demonstrate the existence of a generalized solution for nets with a linear activation function. (Author)

  5. Photoacoustic image reconstruction via deep learning

    Science.gov (United States)

    Antholzer, Stephan; Haltmeier, Markus; Nuster, Robert; Schwab, Johannes

    2018-02-01

    Applying standard algorithms to sparse data problems in photoacoustic tomography (PAT) yields low-quality images containing severe under-sampling artifacts. To some extent, these artifacts can be reduced by iterative image reconstruction algorithms, which allow the inclusion of prior knowledge such as smoothness, total variation (TV) or sparsity constraints. These algorithms tend to be time consuming, as the forward and adjoint problems have to be solved repeatedly. Further, iterative algorithms have additional drawbacks. For example, the reconstruction quality strongly depends on a priori model assumptions about the objects to be recovered, which are often not strictly satisfied in practical applications. To overcome these issues, in this paper we develop direct and efficient reconstruction algorithms based on deep learning. As opposed to iterative algorithms, we apply a convolutional neural network whose parameters are trained before the reconstruction process on a set of training data. For actual image reconstruction, a single evaluation of the trained network yields the desired result. Our numerical results (using two different network architectures) demonstrate that the proposed deep learning approach reconstructs images with a quality comparable to state-of-the-art iterative reconstruction methods.

  6. A method of reconstructing the spatial measurement network by mobile measurement transmitter for shipbuilding

    International Nuclear Information System (INIS)

    Guo, Siyang; Lin, Jiarui; Yang, Linghui; Ren, Yongjie; Guo, Yin

    2017-01-01

    The workshop Measurement Position System (wMPS) is a distributed measurement system which is suitable for large-scale metrology. However, there are some inevitable measurement problems in the shipbuilding industry, such as restriction by obstacles and limited measurement range. To deal with these factors, this paper presents a method of reconstructing the spatial measurement network by a mobile transmitter. A high-precision coordinate control network with more than six target points is established. The mobile measuring transmitter can be added into the measurement network using this coordinate control network with the spatial resection method. This method reconstructs the measurement network and broadens the measurement scope efficiently. To verify this method, two comparison experiments are designed with a laser tracker as the reference. The results demonstrate that the accuracy of point-to-point length is better than 0.4 mm and the accuracy of coordinate measurement is better than 0.6 mm. (paper)

  7. Reconstruction and Analysis of Human Kidney-Specific Metabolic Network Based on Omics Data

    Directory of Open Access Journals (Sweden)

    Ai-Di Zhang

    2013-01-01

    Full Text Available With the advent of high-throughput data production, recent studies of tissue-specific metabolic networks have largely advanced our understanding of the metabolic basis of various physiological and pathological processes. However, for the kidney, which plays an essential role in the body, the available kidney-specific model remains incomplete. This paper reports the reconstruction and characterization of the human kidney metabolic network based on transcriptome and proteome data. In silico simulations revealed that house-keeping genes were more essential than kidney-specific genes in maintaining kidney metabolism. Importantly, a total of 267 potential metabolic biomarkers for kidney-related diseases were successfully explored using this model. Furthermore, we found that the discrepancies in the metabolic processes of different tissues correspond directly to the tissues' functions. Finally, the phenotypes of the differentially expressed genes in diabetic kidney disease were characterized, suggesting that these genes may affect disease development through altering kidney metabolism. Thus, the human kidney-specific model constructed in this study may provide valuable information for the metabolism of the kidney and offer excellent insights into complex kidney diseases.

  8. In Vitro Reconstruction of Neuronal Networks Derived from Human iPS Cells Using Microfabricated Devices.

    Directory of Open Access Journals (Sweden)

    Yuzo Takayama

    Full Text Available The morphology and function of the nervous system are maintained via well-coordinated processes in both central and peripheral nervous tissues, which govern the homeostasis of organs/tissues. Impairments of the nervous system induce neuronal disorders such as peripheral neuropathy or cardiac arrhythmia. Although further investigation is warranted to reveal the molecular mechanisms of progression in such diseases, appropriate model systems mimicking the patient-specific communication between neurons and organs are not established yet. In this study, we reconstructed the neuronal network in vitro either between neurons of the human induced pluripotent stem (iPS) cell-derived peripheral nervous system (PNS) and central nervous system (CNS), or between PNS neurons and cardiac cells, in a morphologically and functionally compartmentalized manner. Networks were constructed in photolithographically microfabricated devices with two culture compartments connected by 20 microtunnels. We confirmed that PNS and CNS neurons connected via synapses and formed a network. Additionally, calcium-imaging experiments showed that the bundles originating from the PNS neurons were functionally active and responded reproducibly to external stimuli. Next, we confirmed that CNS neurons showed an increase in calcium activity during electrical stimulation of networked bundles from PNS neurons, demonstrating the formation of functional cell-cell interactions. We also confirmed the formation of synapses between PNS neurons and mature cardiac cells. These results indicate that compartmentalized culture devices are promising tools for reconstructing network-wide connections between PNS neurons and various organs, and might help to understand patient-specific molecular and functional mechanisms under normal and pathological conditions.

  9. Genetic Network Programming with Reconstructed Individuals

    Science.gov (United States)

    Ye, Fengming; Mabu, Shingo; Wang, Lutao; Eto, Shinji; Hirasawa, Kotaro

    A lot of research on evolutionary computation has been done, and significant classical methods such as Genetic Algorithm (GA), Genetic Programming (GP), Evolutionary Programming (EP), and Evolution Strategies (ES) have been studied. Recently, a new approach named Genetic Network Programming (GNP) has been proposed. GNP can evolve itself and find the optimal solution. It is based on the idea of Genetic Algorithms and uses the data structure of directed graphs. Many papers have demonstrated that GNP can deal with complex problems in dynamic environments very efficiently and effectively. As a result, GNP is attracting more and more attention and is used in many different areas, such as data mining, extracting trading rules of stock markets, and elevator supervised control systems, and it has obtained some outstanding results. Focusing on GNP's distinguished expression ability of the graph structure, this paper proposes a method named Genetic Network Programming with Reconstructed Individuals (GNP-RI). The aim of GNP-RI is to balance the exploitation and exploration of GNP, that is, to strengthen the exploitation ability by using the exploited information extensively during the evolution process of GNP, and finally to obtain better performance than GNP. In the proposed method, the worse individuals are reconstructed and enhanced by the elite information before undergoing genetic operations (mutation and crossover). The enhancement of worse individuals mimics the maturing phenomenon in nature, where bad individuals can become smarter after receiving a good education. In this paper, GNP-RI is applied to the tile-world problem, which is an excellent benchmark for evaluating the proposed architecture. The performance of GNP-RI is compared with that of conventional GNP. The simulation results show some advantages of GNP-RI, demonstrating its superiority over conventional GNP.

  10. Reconstruction of gastric slow wave from finger photoplethysmographic signal using radial basis function neural network.

    Science.gov (United States)

    Mohamed Yacin, S; Srinivasa Chakravarthy, V; Manivannan, M

    2011-11-01

    Extraction of extra-cardiac information from the photoplethysmography (PPG) signal is a challenging research problem with significant clinical applications. In this study, a radial basis function neural network (RBFNN) is used to reconstruct the gastric myoelectric activity (GMA) slow wave from the finger PPG signal. Finger PPG and GMA (measured using the electrogastrogram, EGG) signals were acquired simultaneously at a sampling rate of 100 Hz from ten healthy subjects. The discrete wavelet transform (DWT) was used to extract the slow wave (0-0.1953 Hz) component from the finger PPG signal; this slow-wave PPG was used to reconstruct the EGG. An RBFNN was trained on signals obtained from six subjects in both fasting and postprandial conditions. The trained network was tested on data obtained from the remaining four subjects. In an earlier study, we showed the presence of GMA information in the finger PPG signal using DWT and a cross-correlation method. In this study, we explicitly reconstruct the gastric slow wave from the finger PPG signal by the proposed RBFNN-based method. It was found that the network-reconstructed slow wave showed a significantly higher correlation with the EGG slow wave than the correlation (≈0.7) obtained between the PPG slow wave from DWT and the EGG slow wave. Our results show that a simple finger PPG signal can be used to reconstruct the gastric slow wave using the RBFNN method.
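    An RBF network of the kind used here consists of Gaussian hidden units at fixed centers with a linear readout; a generic sketch follows (centers, widths and the surrogate signals are assumptions, not the paper's training data):

```python
import numpy as np

def fit_rbf(X, y, centers, width, ridge=1e-6):
    """Train an RBF network: Gaussian hidden units at fixed centers,
    linear readout solved by ridge regression."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2 * width ** 2))                  # hidden activations
    return np.linalg.solve(Phi.T @ Phi + ridge * np.eye(len(centers)), Phi.T @ y)

def predict_rbf(X, centers, width, w):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2)) @ w

# toy: map a 5-sample 'PPG slow wave' window to a surrogate 'EGG' sample
rng = np.random.default_rng(3)
X = rng.standard_normal((200, 5))
y = np.sin(X.sum(1))                                      # surrogate target
centers = X[rng.choice(200, 20, replace=False)]
w = fit_rbf(X, y, centers, width=1.5)
y_hat = predict_rbf(X, centers, width=1.5, w=w)
```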

  11. Reconstruction of neutron spectra through neural networks; Reconstruccion de espectros de neutrones mediante redes neuronales

    Energy Technology Data Exchange (ETDEWEB)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E. [Cuerpo Academico de Radiobiologia, Estudios Nucleares, Universidad Autonoma de Zacatecas, A.P. 336, 98000 Zacatecas (Mexico)] e-mail: rvega@cantera.reduaz.mx [and others]

    2003-07-01

    A neural network has been used to reconstruct neutron spectra from the count rates of the detectors of a Bonner sphere spectrometer system. A set of 56 neutron spectra was selected to calculate the count rates they would produce in a Bonner sphere system; with these data and the spectra, the neural network was trained. To test the performance of the net, 12 spectra were used: 6 taken from the training set, 3 obtained from mathematical functions, and 3 corresponding to real spectra. Comparing the original spectra with those reconstructed by the net, we find that the net performs poorly when reconstructing monoenergetic spectra, which we attribute to the characteristics of the spectra used for training the neural network; for the other groups of spectra, the results of the net agree with expectations. (Author)

  12. Aboveground Biomass Estimation Using Reconstructed Feature of Airborne Discrete-Return LIDAR by Auto-Encoder Neural Network

    Science.gov (United States)

    Li, T.; Wang, Z.; Peng, J.

    2018-04-01

    Aboveground biomass (AGB) estimation is critical for quantifying carbon stocks and essential for evaluating the carbon cycle. In recent years, airborne LiDAR has shown great ability for high-precision AGB estimation. Most studies estimate AGB from feature metrics extracted from the canopy height distribution of the point cloud, which is calculated from a precise digital terrain model (DTM). However, if forest canopy density is high, the probability of the LiDAR signal penetrating the canopy is low, so there are not enough ground points to establish the DTM. The distribution of forest canopy height is then imprecise, and critical feature metrics that correlate strongly with biomass, such as percentiles, maxima, means and standard deviations of the canopy point cloud, can hardly be extracted correctly. To address this issue, we propose a strategy of first reconstructing the LiDAR feature metrics through an auto-encoder neural network and then using the reconstructed feature metrics to estimate AGB. To assess the predictive ability of the reconstructed feature metrics, both original and reconstructed feature metrics were regressed against field-observed AGB using multiple stepwise regression (MS) and partial least squares regression (PLS), respectively. The results showed that the estimation models using reconstructed feature metrics improved R2 by 5.44 % and 18.09 %, decreased RMSE by 10.06 % and 22.13 %, and reduced RMSEcv by 10.00 % and 21.70 % for AGB, respectively. Therefore, reconstructing LiDAR point feature metrics has potential for addressing the AGB estimation challenge in dense canopy areas.
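    A minimal auto-encoder for reconstructing plot-level feature metrics can be sketched as follows; the 20-metric input, the layer sizes and PyTorch as the framework are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class FeatureAE(nn.Module):
    """Auto-encoder: compress feature metrics to a bottleneck, reconstruct."""
    def __init__(self, n_metrics=20, n_hidden=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_metrics, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_metrics)
    def forward(self, x):
        return self.decoder(self.encoder(x))

model = FeatureAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(128, 20)                 # toy plot-level feature metrics
for _ in range(200):                     # train to reproduce inputs
    opt.zero_grad()
    loss = loss_fn(model(x), x)
    loss.backward()
    opt.step()
features_rec = model(x).detach()         # reconstructed metrics fed to MS/PLS
```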

  13. Reconstruction of the yeast protein-protein interaction network involved in nutrient sensing and global metabolic regulation

    DEFF Research Database (Denmark)

    Nandy, Subir Kumar; Jouhten, Paula; Nielsen, Jens

    2010-01-01

    proteins. Despite the value of BioGRID for studying protein-protein interactions, there is a need for manual curation of these interactions in order to remove false positives. RESULTS: Here we describe an annotated reconstruction of the protein-protein interactions around four key nutrient......) and for all the interactions between them (edges). The annotated information is readily available utilizing the functionalities of network modelling tools such as Cytoscape and CellDesigner. CONCLUSIONS: The reported fully annotated interaction model serves as a platform for integrated systems biology studies...

  14. ENSURING SAFETY DURING WORKS ON THE RECONSTRUCTION OF WATER-SUPPLY NETWORKS UNDER CONSTRAINED CONDITIONS

    Directory of Open Access Journals (Sweden)

    DIDENKO L. M.

    2016-07-01

    Full Text Available Summary. Problem statement. In all regions of our country, water-supply networks show considerable physical and moral wear, since most of them were laid in the middle of the last century. It is known that more than 50 % of the pipelines in service are made of steel, while the average service life of metallic pipes in water-supply networks is 30 years [1]. Statistical data testify that more than 34 % of water-supply and sewage networks are in an emergency state. Thus, a large share of the building industry of Ukraine falls on works on the reconstruction of this type of engineering network. Since complete replacement of all pipes requires heavy material expenditure, mainly reconstruction and major repairs of separate emergency sections are carried out. It is logical to assert that ensuring safe production of this type of work is complicated by the presence of harmful and dangerous production factors arising from the complex factor of constrained conditions. This factor is caused by the fact that water-supply networks are laid within the limits of existing municipal development and on the territory of operating industrial enterprises. The high level of injuries during reconstruction works testifies to their danger. According to the law of Ukraine "On labour protection" (item 13), an employer is obliged to create workplace conditions that comply with normative legal acts and with the requirements of legislation on the rights of workers in the area of labour protection [2]. Safety during works on the reconstruction of water-supply networks can be ensured only by a complex approach to the study of this problem, which includes: research on the influence of the factors of constrained conditions; identification of the features of the technology of the construction, assembly, demolition, earth and other types of works performed on a site during reconstruction; improvement of existing

  15. Numerical Analysis of Modeling Based on Improved Elman Neural Network

    Directory of Open Access Journals (Sweden)

    Shao Jie

    2014-01-01

    Full Text Available A model based on the improved Elman neural network (IENN) is proposed to analyze nonlinear circuits with the memory effect. The hidden-layer neurons are activated by a group of Chebyshev orthogonal basis functions instead of sigmoid functions in this model. The error curves of the sum of squared errors (SSE), varying with the number of hidden neurons and the iteration step, are studied to determine the number of hidden-layer neurons. Simulation results for the half-bridge class-D power amplifier (CDPA) with a two-tone signal and broadband signals as input have shown that the proposed behavioral model can reconstruct the system of CDPAs accurately and depict the memory effect of CDPAs well. Compared with the Volterra-Laguerre (VL) model, the Chebyshev neural network (CNN) model, and the basic Elman neural network (BENN) model, the proposed model has better performance.

  16. P-Finder: Reconstruction of Signaling Networks from Protein-Protein Interactions and GO Annotations.

    Science.gov (United States)

    Young-Rae Cho; Yanan Xin; Speegle, Greg

    2015-01-01

    Because most complex genetic diseases are caused by defects of cell signaling, illuminating a signaling cascade is essential for understanding their mechanisms. We present three novel computational algorithms to reconstruct signaling networks between a starting protein and an ending protein using genome-wide protein-protein interaction (PPI) networks and gene ontology (GO) annotation data. A signaling network is represented as a directed acyclic graph in a merged form of multiple linear pathways. An advanced semantic similarity metric is applied for weighting PPIs as the preprocessing of all three methods. The first algorithm repeatedly extends the list of nodes based on path frequency towards an ending protein. The second algorithm repeatedly appends edges based on the occurrence of network motifs which indicate the link patterns more frequently appearing in a PPI network than in a random graph. The last algorithm uses the information propagation technique which iteratively updates edge orientations based on the path strength and merges the selected directed edges. Our experimental results demonstrate that the proposed algorithms achieve higher accuracy than previous methods when they are tested on well-studied pathways of S. cerevisiae. Furthermore, we introduce an interactive web application tool, called P-Finder, to visualize reconstructed signaling networks.

  17. Reconstruction and in silico analysis of metabolic network for an oleaginous yeast, Yarrowia lipolytica.

    Directory of Open Access Journals (Sweden)

    Pengcheng Pan

    Full Text Available With the emergence of energy scarcity, the use of renewable energy sources such as biodiesel is becoming increasingly necessary. Recently, many researchers have focused on Yarrowia lipolytica, a model oleaginous yeast, which can be employed to accumulate large amounts of lipids that can be further converted to biodiesel. In order to understand the metabolic characteristics of Y. lipolytica at a systems level and to examine the potential for enhanced lipid production, a genome-scale compartmentalized metabolic network was reconstructed based on a combination of genome annotation and detailed biochemical knowledge from multiple databases such as KEGG, ENZYME and BIGG. The information about protein and reaction associations of all the organisms in the KEGG and Expasy-ENZYME databases was arranged into an EXCEL file that can be regarded as a new, useful database for generating other reconstructions. The generated model iYL619_PCP accounts for 619 genes, 843 metabolites and 1,142 reactions, including 236 transport reactions, 125 exchange reactions and 13 spontaneous reactions. The in silico model successfully predicted the minimal media and the growth abilities on different substrates. With flux balance analysis, single-gene knockouts were also simulated to predict the essential and partially essential genes. In addition, flux variability analysis was applied to design new mutant strains that will redirect fluxes through the network and may enhance lipid production. This genome-scale metabolic model of Y. lipolytica can facilitate system-level metabolic analysis as well as strain development for improving the production of biodiesel and other valuable products by Y. lipolytica and other closely related oleaginous yeasts.

  18. A fast and efficient gene-network reconstruction method from multiple over-expression experiments

    Directory of Open Access Journals (Sweden)

    Thurner Stefan

    2009-08-01

    Full Text Available Abstract Background Reverse engineering of gene regulatory networks presents one of the big challenges in systems biology. Gene regulatory networks are usually inferred from a set of single-gene over-expression and/or knockout experiments. Functional relationships between genes are retrieved either from the steady-state gene expressions or from the respective time series. Results We present a novel algorithm for gene network reconstruction on the basis of steady-state gene-chip data from over-expression experiments. The algorithm is based on a straightforward solution of a linear gene-dynamics equation, where experimental data are fed in as a first predictor for the solution. We compare the algorithm's performance with the NIR algorithm, both on the well-known E. coli experimental data and on in-silico experiments. Conclusion We show the superiority of the proposed algorithm in the number of correctly reconstructed links and discuss computational time and robustness. The proposed algorithm is not limited by combinatorial explosion problems and can in principle be used for large networks.
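    The generic linear scheme behind such methods can be sketched as follows: at steady state the model dx/dt = A x + p gives A x_j + p_j = 0 per experiment j, so collecting steady states column-wise in X and perturbations in P yields A = -P X^{-1}. This is the textbook construction; the paper's algorithm adds the data-driven predictor and further refinements.

```python
import numpy as np

def reconstruct_A(X, P):
    """Recover the interaction matrix of dx/dt = A x + p from steady states:
    A X + P = 0  =>  A = -P X^{-1} (pseudo-inverse for robustness)."""
    return -P @ np.linalg.pinv(X)

# toy example: 4 genes, 4 single-gene over-expression experiments
rng = np.random.default_rng(4)
A_true = -np.eye(4) + 0.2 * rng.standard_normal((4, 4))
P = np.eye(4)                            # unit perturbation of gene j in expt j
X = np.column_stack([np.linalg.solve(A_true, -P[:, j]) for j in range(4)])
A_hat = reconstruct_A(X, P)              # recovers A_true exactly in this toy
```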

  19. Variable disparity estimation based intermediate view reconstruction in dynamic flow allocation over EPON-based access networks

    Science.gov (United States)

    Bae, Kyung-Hoon; Lee, Jungjoon; Kim, Eun-Soo

    2008-06-01

    In this paper, a variable disparity estimation (VDE)-based intermediate view reconstruction (IVR) in dynamic flow allocation (DFA) over an Ethernet passive optical network (EPON)-based access network is proposed. In the proposed system, the stereoscopic images are estimated by a variable block-matching algorithm (VBMA) and are transmitted to the receiver through DFA over EPON. This scheme improves a priority-based access network by converting it to a flow-based access network with a new access mechanism and scheduling algorithm, and 16-view images are then synthesized by the IVR using VDE. Experimental results indicate that the proposed system improves the peak signal-to-noise ratio (PSNR) by as much as 4.86 dB and reduces the processing time to 3.52 s. Additionally, the network service provider can guarantee upper limits on transmission delays per flow. The modeling and simulation results, including mathematical analyses, of this scheme are also provided.

  20. Short-Term Load Forecasting Model Based on Quantum Elman Neural Networks

    Directory of Open Access Journals (Sweden)

    Zhisheng Zhang

    2016-01-01

    Full Text Available A short-term load forecasting model based on quantum Elman neural networks is constructed in this paper. Quantum computation and the Elman feedback mechanism are integrated into the quantum Elman neural network. Quantum computation can effectively improve the approximation capability and the information-processing ability of the neural network. The quantum Elman neural network has not only feedforward connections but also a feedback connection. The feedback connection between the hidden nodes and the context nodes belongs to the state feedback of the internal system, which gives the network a specific dynamic memory performance. Phase-space reconstruction theory is the theoretical basis for constructing the forecasting model. The training samples are formed by means of a K-nearest-neighbor approach. Simulation results on a test example show that the model based on quantum Elman neural networks outperforms models based on the quantum feedforward neural network, the conventional Elman neural network, and the conventional feedforward neural network. The proposed model can therefore effectively improve the prediction accuracy. The research in this paper lays a theoretical foundation for the practical engineering application of short-term load forecasting models based on quantum Elman neural networks.
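    The sample-formation step (phase-space reconstruction plus K-nearest neighbors) can be sketched generically; the embedding dimension, delay and the sinusoidal toy load series below are assumptions, not the paper's settings:

```python
import numpy as np

def delay_embed(x, dim=3, tau=1):
    """Phase-space reconstruction by delay embedding:
    y_t = (x_t, x_{t-tau}, ..., x_{t-(dim-1)*tau})."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[(dim - 1 - i) * tau : (dim - 1 - i) * tau + n]
                            for i in range(dim)])

def knn_training_set(loads, query, k=5, dim=3, tau=1):
    """Form a training set from the k nearest neighbors of the current
    embedded load state; the target is the next load value."""
    Y = delay_embed(loads, dim, tau)
    X, t = Y[:-1], loads[(dim - 1) * tau + 1:]     # states and next values
    d = np.linalg.norm(X - query, axis=1)
    idx = np.argsort(d)[:k]
    return X[idx], t[idx]

rng = np.random.default_rng(5)
loads = np.sin(np.linspace(0, 20, 500)) + 0.05 * rng.standard_normal(500)
Xk, yk = knn_training_set(loads, query=delay_embed(loads)[-1], k=5)
```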

  1. The SF3M approach to 3-D photo-reconstruction for non-expert users: application to a gully network

    Science.gov (United States)

    Castillo, C.; James, M. R.; Redel-Macías, M. D.; Pérez, R.; Gómez, J. A.

    2015-04-01

    3-D photo-reconstruction (PR) techniques have been successfully used to produce high-resolution elevation models for different applications and over different spatial scales. However, innovative approaches are required to overcome some limitations that this technique may present in challenging scenarios. Here, we evaluate SF3M, a new graphical user interface for implementing a complete PR workflow based on freely available software (including external calls to VisualSFM and CloudCompare), in combination with a low-cost survey design for the reconstruction of a several-hundred-meter-long gully network. SF3M provided a semi-automated workflow for 3-D reconstruction requiring ~49 h (of which only 17% required operator assistance) to obtain a final gully network model of >17 million points over a gully plan area of 4230 m2. We show that a walking itinerary along the gully perimeter using two lightweight automatic cameras (1 s time-lapse mode) and a 6 m-long pole is an efficient method for 3-D monitoring of gullies, at low cost (about EUR 1000 for the field equipment) and with modest time requirements (~90 min for image collection). A mean error of 6.9 cm at the ground control points was found, mainly due to model deformations arising from the linear geometry of the gully and residual errors in camera calibration. The straightforward image collection and processing approach can be of great benefit for non-expert users working on gully erosion assessment.

  2. Applying Bayesian neural networks to event reconstruction in reactor neutrino experiments

    International Nuclear Information System (INIS)

    Xu Ye; Xu Weiwei; Meng Yixiong; Zhu Kaien; Xu Wei

    2008-01-01

    In this paper, a toy detector is designed to simulate the central detectors used in reactor neutrino experiments. Electron samples from a Monte-Carlo simulation of the toy detector are reconstructed both with Bayesian neural networks (BNNs) and with the standard algorithm, a maximum likelihood method (MLD), and the results of the two event reconstructions are compared. Compared to MLD, the uncertainties of the electron vertex are not improved, but the energy resolutions are significantly improved using the BNN, and the improvement is more pronounced for high-energy electrons than for low-energy ones.
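
    For orientation, the maximum-likelihood baseline (MLD) against which the BNN is compared can be sketched as follows: assume an expected-charge model per PMT, observe Poisson-distributed hits, and scan candidate vertices for the minimum negative log-likelihood. The spherical toy geometry, light-yield constant and coarse grid search below are illustrative stand-ins, not the paper's detector or fitter.

        import numpy as np
        from scipy.stats import poisson

        rng = np.random.default_rng(0)
        # Toy detector: PMTs placed randomly on a sphere of radius 1.
        n_pmt = 200
        pmts = rng.normal(size=(n_pmt, 3))
        pmts /= np.linalg.norm(pmts, axis=1, keepdims=True)

        def expected_charge(vertex, energy=5.0):
            # Expected PMT charge: isotropic light with 1/r^2 fall-off (toy model).
            r2 = np.sum((pmts - vertex) ** 2, axis=1)
            return energy * 10.0 / np.maximum(r2, 1e-3)

        true_vertex = np.array([0.2, -0.1, 0.3])
        hits = rng.poisson(expected_charge(true_vertex))

        def neg_log_like(vertex):
            return -poisson.logpmf(hits, expected_charge(vertex)).sum()

        # Coarse grid search over candidate vertices inside the sphere.
        grid = np.linspace(-0.6, 0.6, 13)
        cands = np.array([(x, y, z) for x in grid for y in grid for z in grid])
        best = cands[np.argmin([neg_log_like(v) for v in cands])]
        print("true:", true_vertex, "reconstructed:", best)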

  3. Microalgal Metabolic Network Model Refinement through High-Throughput Functional Metabolic Profiling

    International Nuclear Information System (INIS)

    Chaiboonchoe, Amphun; Dohai, Bushra Saeed; Cai, Hong; Nelson, David R.; Jijakli, Kenan; Salehi-Ashtiani, Kourosh

    2014-01-01

    Metabolic modeling provides the means to define metabolic processes at a systems level; however, genome-scale metabolic models often remain incomplete in their description of metabolic networks and may include reactions that are experimentally unverified. This shortcoming is exacerbated in reconstructed models of newly isolated algal species, as there may be little to no biochemical evidence available for the metabolism of such isolates. The phenotype microarray (PM) technology (Biolog, Hayward, CA, USA) provides an efficient, high-throughput method to functionally define cellular metabolic activities in response to a large array of entry metabolites. The platform can experimentally verify many of the unverified reactions in a network model as well as identify missing or new reactions in the reconstructed metabolic model. The PM technology has been used for metabolic phenotyping of non-photosynthetic bacteria and fungi, but it has not been reported for the phenotyping of microalgae. Here, we introduce the use of PM assays in a systematic way to the study of microalgae, applying it specifically to the green microalgal model species Chlamydomonas reinhardtii. The results obtained in this study validate a number of existing annotated metabolic reactions and identify a number of novel and unexpected metabolites. The obtained information was used to expand and refine the existing COBRA-based C. reinhardtii metabolic network model iRC1080. Over 254 reactions were added to the network, and the effects of these additions on flux distribution within the network are described. The novel reactions include the support of metabolism by a number of D-amino acids, L-dipeptides, and L-tripeptides as nitrogen sources, as well as support of cellular respiration by cysteamine-S-phosphate as a phosphorus source. The protocol developed here can be used as a foundation to functionally profile other microalgae such as known microalgae mutants and novel isolates.
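
    In COBRA-style tools such as COBRApy, folding a PM-derived reaction into a model and re-checking growth predictions takes only a few calls. A minimal sketch follows, in which the SBML path and all reaction/metabolite identifiers are hypothetical placeholders rather than the study's actual additions.

        import cobra
        from cobra import Reaction, Metabolite

        # Assumes a local copy of the iRC1080 reconstruction in SBML format.
        model = cobra.io.read_sbml_model("iRC1080.xml")

        # Hypothetical source reaction for a D-amino acid suggested by PM assays;
        # a realistic addition would also need transport and assimilation reactions.
        d_ala = Metabolite("dala_c", name="D-alanine", compartment="c")
        rxn = Reaction("DALA_SOURCE")
        rxn.add_metabolites({d_ala: 1.0})
        model.add_reactions([rxn])

        solution = model.optimize()      # FBA against the model's biomass objective
        print(solution.objective_value)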

  4. Microalgal Metabolic Network Model Refinement through High-Throughput Functional Metabolic Profiling

    Energy Technology Data Exchange (ETDEWEB)

    Chaiboonchoe, Amphun; Dohai, Bushra Saeed; Cai, Hong; Nelson, David R. [Division of Science and Math, New York University Abu Dhabi, Abu Dhabi (United Arab Emirates); Center for Genomics and Systems Biology (CGSB), New York University Abu Dhabi Institute, Abu Dhabi (United Arab Emirates); Jijakli, Kenan [Division of Science and Math, New York University Abu Dhabi, Abu Dhabi (United Arab Emirates); Center for Genomics and Systems Biology (CGSB), New York University Abu Dhabi Institute, Abu Dhabi (United Arab Emirates); Engineering Division, Biofinery, Manhattan, KS (United States); Salehi-Ashtiani, Kourosh, E-mail: ksa3@nyu.edu [Division of Science and Math, New York University Abu Dhabi, Abu Dhabi (United Arab Emirates); Center for Genomics and Systems Biology (CGSB), New York University Abu Dhabi Institute, Abu Dhabi (United Arab Emirates)

    2014-12-10

    Metabolic modeling provides the means to define metabolic processes at a systems level; however, genome-scale metabolic models often remain incomplete in their description of metabolic networks and may include reactions that are experimentally unverified. This shortcoming is exacerbated in reconstructed models of newly isolated algal species, as there may be little to no biochemical evidence available for the metabolism of such isolates. The phenotype microarray (PM) technology (Biolog, Hayward, CA, USA) provides an efficient, high-throughput method to functionally define cellular metabolic activities in response to a large array of entry metabolites. The platform can experimentally verify many of the unverified reactions in a network model as well as identify missing or new reactions in the reconstructed metabolic model. The PM technology has been used for metabolic phenotyping of non-photosynthetic bacteria and fungi, but it has not been reported for the phenotyping of microalgae. Here, we introduce the use of PM assays in a systematic way to the study of microalgae, applying it specifically to the green microalgal model species Chlamydomonas reinhardtii. The results obtained in this study validate a number of existing annotated metabolic reactions and identify a number of novel and unexpected metabolites. The obtained information was used to expand and refine the existing COBRA-based C. reinhardtii metabolic network model iRC1080. Over 254 reactions were added to the network, and the effects of these additions on flux distribution within the network are described. The novel reactions include the support of metabolism by a number of D-amino acids, L-dipeptides, and L-tripeptides as nitrogen sources, as well as support of cellular respiration by cysteamine-S-phosphate as a phosphorus source. The protocol developed here can be used as a foundation to functionally profile other microalgae such as known microalgae mutants and novel isolates.

  5. Model-Based Reconstructive Elasticity Imaging Using Ultrasound

    Directory of Open Access Journals (Sweden)

    Salavat R. Aglyamov

    2007-01-01

    Full Text Available Elasticity imaging is a reconstructive imaging technique where tissue motion in response to mechanical excitation is measured using modern imaging systems, and the estimated displacements are then used to reconstruct the spatial distribution of Young's modulus. Here we present an ultrasound elasticity imaging method that utilizes a model-based technique for Young's modulus reconstruction. Based on the geometry of the imaged object, only one axial component of the strain tensor is used. The numerical implementation of the method is highly efficient because the reconstruction is based on an analytic solution of the forward elastic problem. The model-based approach is illustrated using two potential clinical applications: differentiation of liver hemangioma and staging of deep venous thrombosis. Overall, these studies demonstrate that model-based reconstructive elasticity imaging can be used in applications where the geometry of the object and the surrounding tissue is approximately known and certain assumptions about the pathology can be made.
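
    The intuition behind modulus reconstruction from a single axial strain component can be conveyed with a far cruder approximation than the paper's analytic inversion: if the axial stress were uniform, relative Young's modulus would simply be inversely proportional to the measured axial strain. A minimal sketch under that assumption, with a synthetic strain field:

        import numpy as np

        def relative_modulus(axial_strain, eps=1e-6):
            # Map an axial strain field to a relative stiffness map (uniform-stress assumption).
            strain = np.maximum(np.abs(axial_strain), eps)
            E_rel = 1.0 / strain
            return E_rel / E_rel.mean()     # normalize to a unitless relative modulus

        # Synthetic strain field: a stiff circular inclusion strains less than background.
        ny, nx = 128, 128
        yy, xx = np.mgrid[0:ny, 0:nx]
        strain = np.full((ny, nx), 0.02)                     # 2% background strain
        inclusion = (yy - 64) ** 2 + (xx - 64) ** 2 < 20 ** 2
        strain[inclusion] = 0.005                            # stiff lesion strains less

        E = relative_modulus(strain)
        print("inclusion/background stiffness ratio:", E[64, 64] / E[0, 0])  # 4.0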

  6. The future of genome-scale modeling of yeast through integration of a transcriptional regulatory network

    DEFF Research Database (Denmark)

    Liu, Guodong; Marras, Antonio; Nielsen, Jens

    2014-01-01

    Metabolism is regulated at multiple levels in response to the changes of internal or external conditions. Transcriptional regulation plays an important role in regulating many metabolic reactions by altering the concentrations of metabolic enzymes. Thus, integration of the transcriptional regulatory information is necessary to improve the accuracy and predictive ability of metabolic models. Here we review the strategies for the reconstruction of a transcriptional regulatory network (TRN) for yeast and the integration of such a reconstruction into a flux balance analysis-based metabolic model. While many large-scale TRN reconstructions have been reported for yeast, these reconstructions still need to be improved regarding the functionality and dynamic property of the regulatory interactions. In addition, mathematical modeling approaches need to be further developed to efficiently integrate ...

  7. Robustness and Optimization of Complex Networks : Reconstructability, Algorithms and Modeling

    NARCIS (Netherlands)

    Liu, D.

    2013-01-01

    The infrastructure networks, including the Internet, telecommunication networks, electrical power grids, transportation networks (road, railway, waterway, and airway networks), gas networks and water networks, are becoming more and more complex. The complex infrastructure networks are crucial to our

  8. Genome-scale reconstruction of the sigma factor network in Escherichia coli: topology and functional states

    DEFF Research Database (Denmark)

    Cho, Byung-Kwan; Kim, Donghyuk; Knight, Eric M.

    2014-01-01

    Background: At the beginning of the transcription process, the RNA polymerase (RNAP) core enzyme requires a sigma-factor to recognize the genomic location at which the process initiates. Although the crucial role of sigma-factors has long been appreciated and characterized for many individual ... to transcription units (TUs), representing an increase of more than 300% over what has been previously reported. The reconstructed network was used to investigate competition between alternative sigma-factors (the σ70 and σ38 regulons), confirming the competition model of sigma substitution ...

  9. A homologous mapping method for three-dimensional reconstruction of protein networks reveals disease-associated mutations.

    Science.gov (United States)

    Huang, Sing-Han; Lo, Yu-Shu; Luo, Yong-Chun; Tseng, Yu-Yao; Yang, Jinn-Moon

    2018-03-19

    One of the crucial steps toward understanding the associations among molecular interactions, pathways, and diseases in a cell is to investigate detailed atomic protein-protein interactions (PPIs) in the structural interactome. Despite the availability of large-scale methods for analyzing PPI networks, these methods have often focused on PPI networks built from genome-scale data and/or known experimental PPIs, and they are unable to provide structurally resolved interaction residues and their conservation in PPI networks. Here, we reconstructed a human three-dimensional (3D) structural PPI network (hDiSNet) with detailed atomic binding models and disease-associated mutations by enhancing our PPI families and 3D-domain interologs from 60,618 structural complexes and a complete genome database with 6,352,363 protein sequences across 2274 species. hDiSNet is a scale-free network (γ = 2.05) consisting of 5177 proteins and 19,239 PPIs with 5843 mutations. These 19,239 structurally resolved PPIs not only expand the number of PPIs compared with the present structural PPI network, but also achieve higher agreement with gene ontology similarities and higher co-expression correlation than the 181,868 experimental PPIs recorded in public databases. Among the 5843 mutations, the 1653 and 790 mutations involved in interacting domains and contacting residues, respectively, are highly related to diseases. hDiSNet can thus provide detailed atomic interactions of human diseases and their associated proteins with mutations. Our results show that disease-related mutations are often located at contacting residues that form hydrogen bonds or are conserved in the PPI family. In addition, hDiSNet provides insights into the FGFR (EGFR)-MAPK pathway for interpreting the mechanisms of breast cancer and the ErbB signaling pathway in brain cancer. Our results demonstrate that hDiSNet can offer structure-based interaction insights for understanding the mechanisms of disease
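
    A scale-free claim such as γ = 2.05 can be sanity-checked from a degree sequence with the standard maximum-likelihood exponent estimate (the Clauset-Shalizi-Newman approximation). In the sketch below a Barabási-Albert graph stands in for hDiSNet, whose edge list is not reproduced here, and `kmin` is an assumed lower cutoff.

        import networkx as nx
        import numpy as np

        G = nx.barabasi_albert_graph(5177, 2, seed=42)   # placeholder for the PPI network

        def powerlaw_gamma(degrees, kmin=2):
            # MLE exponent: gamma ~= 1 + n / sum(ln(k / (kmin - 1/2))) for k >= kmin.
            k = np.array([d for d in degrees if d >= kmin], dtype=float)
            return 1.0 + len(k) / np.sum(np.log(k / (kmin - 0.5)))

        degrees = [d for _, d in G.degree()]
        print("estimated gamma:", round(powerlaw_gamma(degrees), 2))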

  10. A deep convolutional neural network using directional wavelets for low-dose X-ray CT reconstruction.

    Science.gov (United States)

    Kang, Eunhee; Min, Junhong; Ye, Jong Chul

    2017-10-01

    Due to the potential risk of inducing cancer, radiation exposure from X-ray CT devices should be reduced for routine patient scanning. However, in low-dose X-ray CT, severe artifacts typically occur due to photon starvation, beam hardening, and other causes, all of which decrease the reliability of the diagnosis. Thus, a high-quality reconstruction method for low-dose X-ray CT data has become a major research topic in the CT community. Conventional model-based de-noising approaches are, however, computationally very expensive, and image-domain de-noising approaches cannot readily remove CT-specific noise patterns. To tackle these problems, we develop a new low-dose X-ray CT algorithm based on a deep-learning approach. We propose an algorithm which uses a deep convolutional neural network (CNN) applied to the wavelet transform coefficients of low-dose CT images. More specifically, by using a directional wavelet transform to extract the directional components of the artifacts and exploit the intra- and inter-band correlations, our deep network can effectively suppress CT-specific noise. In addition, our CNN is designed with a residual learning architecture for faster network training and better performance. Experimental results confirm that the proposed algorithm effectively removes complex noise patterns from CT images derived from a reduced X-ray dose. In addition, we show that the wavelet-domain CNN is efficient when used to remove noise from low-dose CT compared to existing approaches. Our results were rigorously evaluated by several radiologists at the Mayo Clinic and won second place in the 2016 "Low-Dose CT Grand Challenge." To the best of our knowledge, this work is the first deep-learning architecture for low-dose CT reconstruction which has been rigorously evaluated and proven to be effective. In addition, the proposed algorithm, in contrast to existing model-based iterative reconstruction (MBIR) methods, has considerable potential to benefit from
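
    Structurally, the method amounts to: transform to a (directional) wavelet domain, denoise the detail subbands with a residual CNN, and invert the transform. The sketch below substitutes an ordinary separable 2-D wavelet (via PyWavelets) and a small untrained network for the paper's directional transform and deep architecture, so it shows only the plumbing, not the performance.

        import numpy as np
        import pywt
        import torch
        import torch.nn as nn

        class ResidualDenoiser(nn.Module):
            def __init__(self, ch=3):                   # cH, cV, cD stacked as channels
                super().__init__()
                self.body = nn.Sequential(
                    nn.Conv2d(ch, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, ch, 3, padding=1),
                )

            def forward(self, x):
                return x - self.body(x)                 # residual learning: predict the noise

        def denoise_slice(img, net):
            cA, (cH, cV, cD) = pywt.dwt2(img, "db4")    # keep approximation band untouched
            bands = torch.from_numpy(np.stack([cH, cV, cD])[None]).float()
            with torch.no_grad():
                cH, cV, cD = net(bands)[0].numpy()
            return pywt.idwt2((cA, (cH, cV, cD)), "db4")

        net = ResidualDenoiser()                        # untrained: structure only
        slice_ld = np.random.rand(256, 256).astype(np.float32)  # placeholder low-dose slice
        out = denoise_slice(slice_ld, net)
        print(out.shape)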

  11. SF3M software: 3-D photo-reconstruction for non-expert users and its application to a gully network

    Science.gov (United States)

    Castillo, C.; James, M. R.; Redel-Macías, M. D.; Pérez, R.; Gómez, J. A.

    2015-08-01

    Three-dimensional photo-reconstruction (PR) techniques have been successfully used to produce high-resolution surface models for different applications and over different spatial scales. However, innovative approaches are required to overcome some limitations that this technique may present for field image acquisition in challenging scene geometries. Here, we evaluate SF3M, a new graphical user interface for implementing a complete PR workflow based on freely available software (including external calls to VisualSFM and CloudCompare), in combination with a low-cost survey design for the reconstruction of a several-hundred-metres-long gully network. SF3M provided a semi-automated workflow for 3-D reconstruction requiring ~ 49 h (of which only 17 % required operator assistance) to obtain a final gully network model of > 17 million points over a gully plan area of 4230 m2. We show that a walking itinerary along the gully perimeter using two lightweight automatic cameras (1 s time-lapse mode) and a 6 m long pole is an efficient method for 3-D monitoring of gullies, at low cost (~ EUR 1000 for the field equipment) and with modest time requirements (~ 90 min for image collection). A mean error of 6.9 cm at the ground control points was found, mainly due to model deformations derived from the linear geometry of the gully and residual errors in camera calibration. The straightforward image collection and processing approach can be of great benefit for non-expert users working on gully erosion assessment.

  12. The North American Drought Atlas: Tree-Ring Reconstructions of Drought Variability for Climate Modeling and Assessment

    Science.gov (United States)

    Cook, E. R.

    2007-05-01

    The North American Drought Atlas describes a detailed reconstruction of drought variability from tree rings over most of North America for the past 500-1000 years. The first version of it, produced over three years ago, was based on a network of 835 tree-ring chronologies and a 286-point grid of instrumental Palmer Drought Severity Indices (PDSI). These gridded PDSI reconstructions have now been used in numerous published studies, ranging from modeling fire in the American West, to the impact of drought on palaeo-Indian societies, to the determination of the primary causes of drought over North America through climate modeling experiments. Some examples of these applications will be described to illustrate the scientific value of these large-scale reconstructions of drought. Since the development and free public release of Version 1 of the North American Drought Atlas (see http://iridl.ldeo.columbia.edu/SOURCES/.LDEO/.TRL/.NADA2004/.pdsi-atlas.html), great improvements have been made in the critical tree-ring network used to reconstruct PDSI at each grid point. This network has now been enlarged to 1743 annual tree-ring chronologies, which greatly improves the density of tree-ring records in certain parts of the grid, especially in Canada and Mexico. In addition, the number of tree-ring records that extend back before AD 1400 has been substantially increased. These developments justify the creation of Version 2 of the North American Drought Atlas. In this talk I will describe this new version of the drought atlas and some of its properties that make it a significant improvement over the previous version. The new product provides enhanced resolution of the spatial and temporal variability of prolonged drought, such as the late 16th century event that impacted regions of both Mexico and the United States. I will also argue for the North American Drought Atlas being used as a template for the development of large-scale drought reconstructions in other land areas of

  13. Vascular dynamics aid a coupled neurovascular network learn sparse independent features: A computational model

    Directory of Open Access Journals (Sweden)

    Ryan Thomas Philips

    2016-02-01

    Full Text Available Cerebral vascular dynamics are generally thought to be controlled by neural activity in a unidirectional fashion. However, both computational modeling and experimental evidence point to feedback effects of vascular dynamics on neural activity. Vascular feedback in the form of glucose and oxygen controls neuronal ATP, either directly or via the agency of astrocytes, which in turn modulates neural firing. Recently, a detailed model of the neuron-astrocyte-vessel system has shown how vasomotion can modulate neural firing. Similarly, arguing from known cerebrovascular physiology, an approach known as the 'hemoneural hypothesis' postulates functional modulation of neural activity by vascular feedback. To instantiate this perspective, we present a computational model in which a network of 'vascular units' supplies energy to a neural network. The complex dynamics of the vascular network, modeled by a network of oscillators, turns neurons ON and OFF randomly. The informational consequence of such dynamics is explored in the context of an auto-encoder network. In the proposed model, each vascular unit supplies energy to a subset of hidden neurons of the auto-encoder network, which constitutes its 'projective field'. Neurons that receive adequate energy in a given trial have a reduced threshold, and thus are prone to fire. Dynamics of the vascular network are governed by changes in the reconstruction error of the auto-encoder network, interpreted as the neuronal demand. Vascular feedback causes random inactivation of a subset of hidden neurons in every trial. We observe that, under conditions of desynchronized vascular dynamics, the output reconstruction error is low and the feature vectors learnt are sparse and independent. Our earlier modeling study highlighted the link between desynchronized vascular dynamics and efficient energy delivery in skeletal muscle. We now show that desynchronized vascular dynamics leads to efficient training in an auto-encoder network.
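
    The model's feedback loop is easy to render in a few lines: oscillator phases gate hidden units ON/OFF each trial through fixed 'projective fields', and the reconstruction error feeds back into the oscillator dynamics. Every constant and the oscillator form in the following numpy toy are illustrative choices, not the paper's equations.

        import numpy as np

        rng = np.random.default_rng(0)
        n_in, n_hid, n_vasc = 16, 24, 6
        W = rng.normal(0, 0.1, (n_hid, n_in))            # tied weights: decoder is W.T
        phase = rng.uniform(0, 2 * np.pi, n_vasc)        # one phase per vascular unit
        omega = rng.uniform(0.2, 0.5, n_vasc)            # desynchronized frequencies
        proj = rng.integers(0, n_vasc, n_hid)            # projective-field assignment
        lr = 0.05

        for trial in range(3000):
            x = rng.normal(size=n_in)
            supply = np.sin(phase)[proj]                 # energy delivered per hidden unit
            gate = (supply > 0).astype(float)            # adequately fed units may fire
            a = W @ x
            h = np.tanh(a) * gate                        # gated hidden activity
            err = W.T @ h - x                            # reconstruction error
            g = (1 - np.tanh(a) ** 2) * gate * (W @ err)
            W -= lr * (np.outer(g, x) + np.outer(h, err))
            phase += omega + 0.1 * np.mean(err ** 2)     # demand modulates the vasculature

        print("final reconstruction error:", np.mean(err ** 2))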

  14. Sub-component modeling for face image reconstruction in video communications

    Science.gov (United States)

    Shiell, Derek J.; Xiao, Jing; Katsaggelos, Aggelos K.

    2008-08-01

    Emerging communications trends point to streaming video as a new form of content delivery. These systems are implemented over wired networks, such as cable or Ethernet, and over wireless networks, cell phones, and portable game systems. Such communications systems require sophisticated methods of compression and error-resilience encoding to enable communications across band-limited and noisy delivery channels. Additionally, the transmitted video data must be of high enough quality to ensure a satisfactory end-user experience. Traditionally, video compression makes use of temporal and spatial coherence to reduce the information required to represent an image. In many communications systems, the communications channel is characterized by a probabilistic model which describes the capacity or fidelity of the channel. The implication is that information is lost or distorted in the channel and requires concealment on the receiving end. We demonstrate a generative-model-based transmission scheme to compress human face images in video, which has the advantage of a potentially higher compression ratio while maintaining robustness to errors and data corruption. This is accomplished by training an offline face model and using the model to reconstruct face images on the receiving end. We propose a sub-component AAM that models the appearance of sub-facial components individually, and show face reconstruction results under different types of video degradation using weighted and non-weighted versions of the sub-component AAM.

  15. Dense Matching Comparison Between Census and a Convolutional Neural Network Algorithm for Plant Reconstruction

    Science.gov (United States)

    Xia, Y.; Tian, J.; d'Angelo, P.; Reinartz, P.

    2018-05-01

    3D reconstruction of plants is hard to implement, as the complex leaf distribution greatly increases the difficulty of dense matching. Semi-Global Matching has been successfully applied to recover the depth information of a scene, but may perform variably when different matching cost algorithms are used. In this paper two matching cost computation algorithms, the Census transform and an algorithm using a convolutional neural network, are tested for plant reconstruction based on Semi-Global Matching. High-resolution close-range photogrammetric images from a handheld camera are used for the experiment. The disparity maps generated with the two selected matching cost methods are comparable and of acceptable quality, which shows the good performance of Census and the potential of neural networks to improve dense matching.
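
    The Census side of the comparison is compact enough to sketch in full: each pixel is encoded by the bit pattern of neighbors darker than it, and the matching cost at disparity d is the Hamming distance between left and right codes. The window size and disparity range below are arbitrary choices; the CNN-based cost and the SGM aggregation step are not reproduced.

        import numpy as np

        def census(img, r=2):
            # Census transform: bit pattern of neighbors darker than the center pixel.
            out = np.zeros(img.shape, dtype=np.uint32)
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    if dy == 0 and dx == 0:
                        continue
                    shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
                    out = (out << 1) | (shifted < img).astype(np.uint32)
            return out

        def hamming(a, b):
            # Popcount of XOR'd Census codes, i.e. the per-pixel matching cost.
            v = a ^ b
            count = np.zeros(v.shape, dtype=np.uint8)
            for _ in range(24):                           # 24 neighbor bits for r=2
                count += (v & 1).astype(np.uint8)
                v = v >> 1
            return count

        def census_cost_volume(left, right, max_disp=16):
            cl, cr = census(left), census(right)
            vol = np.zeros((max_disp,) + left.shape, dtype=np.uint8)
            for d in range(max_disp):
                vol[d] = hamming(cl, np.roll(cr, d, axis=1))  # compare with right[x - d]
            return vol                                    # SGM would aggregate this volume

        left = np.random.default_rng(0).random((64, 64))
        right = np.roll(left, -3, axis=1)                 # synthetic 3-px disparity
        disp = census_cost_volume(left, right).argmin(axis=0)
        print("fraction at true disparity:", (disp == 3).mean())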

  16. Mouse obesity network reconstruction with a variational Bayes algorithm to employ aggressive false positive control

    Directory of Open Access Journals (Sweden)

    Logsdon Benjamin A

    2012-04-01

    Full Text Available Abstract Background We propose a novel variational Bayes network reconstruction algorithm to extract the most relevant disease factors from high-throughput genomic data-sets. Our algorithm is the only scalable method for regularized network recovery that employs Bayesian model averaging and that can internally estimate an appropriate level of sparsity, ensuring that few false positives enter the model without the need for cross-validation or a model selection criterion. We use our algorithm to characterize the effect of genetic markers and liver gene expression traits on mouse obesity-related phenotypes, including weight, cholesterol, glucose, and free fatty acid levels, in an experiment previously used for discovery and validation of network connections: an F2 intercross between the C57BL/6J and C3H/HeJ mouse strains on an apolipoprotein E-null background. Results We identified eleven genes, Gch1, Zfp69, Dlgap1, Gna14, Yy1, Gabarapl1, Folr2, Fdft1, Cnr2, Slc24a3, and Ccl19, and a quantitative trait locus directly connected to weight, glucose, cholesterol, or free fatty acid levels in our network. None of these genes were identified by other network analyses of this mouse intercross data-set, but all have been previously associated with obesity or related pathologies in independent studies. In addition, through both simulations and data analysis we demonstrate that our algorithm achieves superior performance in terms of power and type I error control compared with other network recovery algorithms that use the lasso and have bounds on type I error control. Conclusions Our final network contains 118 previously associated and novel genes affecting weight, cholesterol, glucose, and free fatty acid levels that are excellent obesity risk candidates.
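
    As a point of reference for the comparison reported above, the lasso-style baseline (per-node sparse regression, keeping nonzero coefficients as edges) can be sketched with scikit-learn on synthetic data with two planted edges. The variational Bayes algorithm itself, with its internal sparsity estimation and model averaging, is not shown; all variable names here are generic.

        import numpy as np
        from sklearn.linear_model import LassoCV

        rng = np.random.default_rng(0)
        n, p = 300, 20
        X = rng.normal(size=(n, p))
        X[:, 0] = 0.8 * X[:, 5] - 0.6 * X[:, 9] + 0.3 * rng.normal(size=n)  # planted edges

        edges = []
        for j in range(p):
            others = np.delete(np.arange(p), j)
            fit = LassoCV(cv=5).fit(X[:, others], X[:, j])   # per-node sparse regression
            for k, coef in zip(others, fit.coef_):
                if abs(coef) > 1e-3:
                    edges.append((k, j, round(coef, 2)))     # candidate edge k -> j
        print(edges[:10])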

  17. A Multiobjective Sparse Feature Learning Model for Deep Neural Networks.

    Science.gov (United States)

    Gong, Maoguo; Liu, Jia; Li, Hao; Cai, Qing; Su, Linzhi

    2015-12-01

    Hierarchical deep neural networks are currently popular learning models that imitate the hierarchical architecture of the human brain, and single-layer feature extractors are the bricks used to build such deep networks. Sparse feature learning models can learn useful representations, but most of them require a user-defined constant to control the sparsity of the representations. In this paper, we propose a multiobjective sparse feature learning model based on the autoencoder. The parameters of the model are learnt by optimizing two objectives, the reconstruction error and the sparsity of the hidden units, simultaneously, finding a reasonable compromise between them automatically. We design a multiobjective induced learning procedure for this model based on a multiobjective evolutionary algorithm. In the experiments, we demonstrate that the learning procedure is effective, and the proposed multiobjective model can learn useful sparse features.

  18. Dendroclimatic transfer functions revisited: Little Ice Age and Medieval Warm Period summer temperatures reconstructed using artificial neural networks and linear algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Helama, S.; Holopainen, J.; Eronen, M. [Department of Geology, University of Helsinki, (Finland); Makarenko, N.G. [Russian Academy of Sciences, St. Petersburg (Russian Federation). Pulkovo Astronomical Observatory; Karimova, L.M.; Kruglun, O.A. [Institute of Mathematics, Almaty (Kazakhstan); Timonen, M. [Finnish Forest Research Institute, Rovaniemi Research Unit (Finland); Merilaeinen, J. [SAIMA Unit of the Savonlinna Department of Teacher Education, University of Joensuu (Finland)

    2009-07-01

    Tree-rings tell of past climates. To do so, tree-ring chronologies comprising numerous climate-sensitive living-tree and subfossil time series need to be 'transferred' into palaeoclimate estimates using transfer functions. The purpose of this study is to compare different types of transfer functions, especially linear and nonlinear algorithms. Accordingly, multiple linear regression (MLR), linear scaling (LSC) and artificial neural networks (ANN, a nonlinear algorithm) were compared. Transfer functions were built using a regional tree-ring chronology and instrumental temperature observations from Lapland (northern Finland and Sweden). In addition, conventional MLR was compared with a hybrid model whereby climate was reconstructed separately for short- and long-period timescales prior to combining the bands of timescales into a single hybrid model. The fidelity of the different reconstructions was validated against instrumental climate data. The reconstructions by MLR and ANN showed reliable reconstruction capabilities over the instrumental period (AD 1802-1998). LSC failed to reach reasonable verification statistics and did not qualify as a reliable reconstruction: this was due mainly to exaggeration of the low-frequency climatic variance. Over this instrumental period, the reconstructed low-frequency amplitudes of climate variability were rather similar for MLR and ANN. Notably greater differences between the models were found over the actual reconstruction period (AD 802-1801). A marked temperature decline, as reconstructed by MLR, from the Medieval Warm Period (AD 931-1180) to the Little Ice Age (AD 1601-1850), was evident in all the models. This decline was approx. 0.5 °C as reconstructed by MLR. Different ANN-based palaeotemperatures showed simultaneous cooling of 0.2 to 0.5 °C, depending on the algorithm. The hybrid MLR did not seem to provide further benefit over conventional MLR in our sample. The robustness of the conventional MLR over the calibration
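
    The linear branch of such a comparison follows a standard calibration/verification pattern: fit temperature on ring-width predictors over one period, then score the withheld period with the reduction-of-error statistic RE = 1 - SSE_model / SSE_climatology, where RE > 0 indicates skill over the calibration-period mean. A synthetic-data sketch (not the Lapland chronology):

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        years = 200
        rings = rng.normal(size=(years, 3))                 # ring-width chronologies
        temp = rings @ np.array([0.6, 0.3, -0.2]) + 0.5 * rng.normal(size=years)

        cal, ver = slice(0, 120), slice(120, 200)           # calibration / verification
        model = LinearRegression().fit(rings[cal], temp[cal])
        pred = model.predict(rings[ver])

        sse_model = np.sum((temp[ver] - pred) ** 2)
        sse_clim = np.sum((temp[ver] - temp[cal].mean()) ** 2)  # calibration-mean baseline
        print("RE =", 1 - sse_model / sse_clim)             # RE > 0: skill over climatology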

  19. Reconstructing Generalized Logical Networks of Transcriptional Regulation in Mouse Brain from Temporal Gene Expression Data

    Energy Technology Data Exchange (ETDEWEB)

    Song, Mingzhou (Joe) [New Mexico State University, Las Cruces; Lewis, Chris K. [New Mexico State University, Las Cruces; Lance, Eric [New Mexico State University, Las Cruces; Chesler, Elissa J [ORNL; Kirova, Roumyana [Bristol-Myers Squibb Pharmaceutical Research & Development, NJ; Langston, Michael A [University of Tennessee, Knoxville (UTK); Bergeson, Susan [Texas Tech University, Lubbock

    2009-01-01

    The problem of reconstructing generalized logical networks to account for temporal dependencies among genes and environmental stimuli from high-throughput transcriptomic data is addressed. A network reconstruction algorithm was developed that uses statistical significance as a criterion for network selection, to avoid false-positive interactions arising from pure chance. Using temporal gene expression data collected from the brains of alcohol-treated mice in an analysis of the molecular response to alcohol, this algorithm identified genes from a major neuronal pathway as putative components of the alcohol response mechanism. Three of these genes have known associations with alcohol in the literature, and several other potentially relevant genes, highlighted by the analysis and in agreement with independent literature-mining results, may play a role in the response to alcohol. Additional, previously unknown gene interactions were discovered that, subject to biological verification, may offer new clues in the search for the elusive molecular mechanisms of alcoholism.

  20. DENSE MATCHING COMPARISON BETWEEN CENSUS AND A CONVOLUTIONAL NEURAL NETWORK ALGORITHM FOR PLANT RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    Y. Xia

    2018-05-01

    Full Text Available 3D reconstruction of plants is hard to implement, as the complex leaf distribution greatly increases the difficulty of dense matching. Semi-Global Matching has been successfully applied to recover the depth information of a scene, but may perform variably when different matching cost algorithms are used. In this paper two matching cost computation algorithms, the Census transform and an algorithm using a convolutional neural network, are tested for plant reconstruction based on Semi-Global Matching. High-resolution close-range photogrammetric images from a handheld camera are used for the experiment. The disparity maps generated with the two selected matching cost methods are comparable and of acceptable quality, which shows the good performance of Census and the potential of neural networks to improve dense matching.

  1. A collaborative computing framework of cloud network and WBSN applied to fall detection and 3-D motion reconstruction.

    Science.gov (United States)

    Lai, Chin-Feng; Chen, Min; Pan, Jeng-Shyang; Youn, Chan-Hyun; Chao, Han-Chieh

    2014-03-01

    As cloud computing and wireless body sensor network technologies mature, ubiquitous healthcare services can prevent accidents promptly and effectively, and can provide relevant information to reduce related processing time and cost. This study proposes a co-processing intermediary framework integrating cloud and wireless body sensor networks, which is mainly applied to fall detection and 3-D motion reconstruction. The main focuses of this study include distributed computing and resource allocation for processing sensing data over the computing architecture, network conditions and performance evaluation. Through this framework, the transmission and computing times for sensing data are reduced, enhancing overall performance for the services of fall-event detection and 3-D motion reconstruction.

  2. Reconstruction and analysis of a genome-scale metabolic model for Scheffersomyces stipitis

    Directory of Open Access Journals (Sweden)

    Balagurunathan Balaji

    2012-02-01

    Full Text Available Abstract Background Fermentation of xylose, the major component in hemicellulose, is essential for economic conversion of lignocellulosic biomass to fuels and chemicals. The yeast Scheffersomyces stipitis (formerly known as Pichia stipitis) has the highest known native capacity for xylose fermentation and possesses several genes for lignocellulose bioconversion in its genome. Understanding the metabolism of this yeast at a global scale, by reconstructing the genome-scale metabolic model, is essential for manipulating its metabolic capabilities and for successful transfer of its capabilities to other industrial microbes. Results We present a genome-scale metabolic model for Scheffersomyces stipitis, a native xylose-utilizing yeast. The model was reconstructed based on genome sequence annotation, detailed experimental investigation and known yeast physiology. The macromolecular composition of Scheffersomyces stipitis biomass was estimated experimentally, and its ability to grow on different carbon, nitrogen, sulphur and phosphorus sources was determined by phenotype microarrays. The compartmentalized model, developed through an iterative procedure, accounted for 814 genes, 1371 reactions, and 971 metabolites. In silico computed growth rates were compared with high-throughput phenotyping data, and the model could predict the qualitative outcomes for 74% of the substrates investigated. Model simulations were used to identify the biosynthetic requirements for anaerobic growth of Scheffersomyces stipitis on glucose, and the results were validated against published literature. The bottlenecks in the Scheffersomyces stipitis metabolic network for xylose uptake and nucleotide cofactor recycling were identified by in silico flux variability analysis. The scope of the model in enhancing the mechanistic understanding of microbial metabolism is demonstrated by identifying a mechanism for mitochondrial respiration and oxidative phosphorylation. Conclusion The genome
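
    The flux variability analysis used to locate such bottlenecks asks, reaction by reaction, for the minimum and maximum flux compatible with near-optimal growth; in COBRApy this is a single call. The SBML filename and the 0.95 optimality fraction below are assumptions for illustration, not the study's settings.

        import cobra
        from cobra.flux_analysis import flux_variability_analysis

        # Hypothetical path to the S. stipitis reconstruction in SBML format.
        model = cobra.io.read_sbml_model("scheffersomyces_stipitis.xml")
        fva = flux_variability_analysis(model, fraction_of_optimum=0.95)

        # Reactions forced to carry flux even at 95% of optimal growth are
        # candidate bottlenecks (e.g., xylose uptake, cofactor recycling).
        essential = fva[fva["minimum"] > 1e-6]
        print(essential.head())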

  3. Snapshot of iron response in Shewanella oneidensis by gene network reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yunfeng; Harris, Daniel P.; Luo, Feng; Xiong, Wenlu; Joachimiak, Marcin; Wu, Liyou; Dehal, Paramvir; Jacobsen, Janet; Yang, Zamin; Palumbo, Anthony V.; Arkin, Adam P.; Zhou, Jizhong

    2008-10-09

    Background: Iron homeostasis of Shewanella oneidensis, a gamma-proteobacterium possessing high iron content, is regulated by the global transcription factor Fur. However, knowledge about other biological pathways that respond to changes in iron concentration, and about the details of those responses, remains incomplete. In this work, we integrate physiological, transcriptomics and genetic approaches to delineate the iron response of S. oneidensis. Results: We show that the iron response in S. oneidensis is a rapid process. Temporal gene expression profiles were examined for iron depletion and repletion, and a gene co-expression network was reconstructed. Modules of iron acquisition systems, anaerobic energy metabolism and protein degradation were the most noteworthy in the gene network. Bioinformatics analyses suggested that genes in each of the modules might be regulated by the DNA-binding proteins Fur, CRP and RpoH, respectively. Closer inspection of these modules revealed a transcriptional regulator (SO2426) involved in iron acquisition and ten transcription factors involved in anaerobic energy metabolism. Selected genes in the network were analyzed by genetic studies. Disruption of genes encoding a putative alcaligin biosynthesis protein (SO3032) and a gene previously implicated in protein degradation (SO2017) led to severe growth deficiency under iron depletion conditions. Disruption of a novel transcription factor (SO1415) caused deficiency in both anaerobic iron reduction and growth with thiosulfate or TMAO as an electron acceptor, suggesting that SO1415 is required for specific branches of anaerobic energy metabolism pathways. Conclusions: Using the reconstructed gene network, we identified major biological pathways that were differentially expressed during iron depletion and repletion. Genetic studies not only demonstrated the importance of iron acquisition and protein degradation for iron depletion, but also characterized a novel transcription factor (SO1415) with a
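
    A minimal version of the co-expression reconstruction step reads: correlate temporal expression profiles, keep strongly co-varying gene pairs as edges, and treat connected components as candidate modules. The threshold and the planted module in this sketch are illustrative choices, not values from the study.

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(0)
        n_genes, n_times = 50, 30
        expr = rng.normal(size=(n_genes, n_times))
        expr[10:15] += 4 * np.sin(np.linspace(0, 6, n_times))  # a planted response module

        corr = np.corrcoef(expr)
        G = nx.Graph()
        G.add_nodes_from(range(n_genes))
        ii, jj = np.where(np.triu(np.abs(corr), k=1) > 0.8)    # co-expression cutoff
        G.add_edges_from(zip(ii, jj))

        modules = [c for c in nx.connected_components(G) if len(c) > 2]
        print("modules:", modules)                              # expect genes 10-14 together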

  4. A Pseudoproxy-Ensemble Study of Late-Holocene Climate Field Reconstructions Using CCA

    Science.gov (United States)

    Amrhein, D. E.; Smerdon, J. E.

    2009-12-01

    Recent evaluations of late-Holocene multi-proxy reconstruction methods have used pseudoproxy experiments derived from millennial General Circulation Model (GCM) integrations. These experiments assess the performance of a reconstruction technique by comparing pseudoproxy reconstructions, which use restricted subsets of model data, against complete GCM data fields. Most previous studies have tested methodologies using different pseudoproxy noise levels, but only with single realizations for each noise classification. A more robust evaluation of performance is to create an ensemble of pseudoproxy networks with distinct sets of noise realizations and a corresponding reconstruction ensemble that can be evaluated for consistency and sensitivity to random error. This work investigates canonical correlation analysis (CCA) as a late-Holocene climate field reconstruction (CFR) technique using ensembles of pseudoproxy experiments derived from the NCAR CSM 1.4 millennial integration. Three 200-member reconstruction ensembles are computed using pseudoproxies with signal-to-noise ratios (by standard deviation) of 1, 0.5, and 0.25 and locations that approximate the spatial distribution of real-world multiproxy networks. An important component of these ensemble calculations is the independent optimization of the three CCA truncation parameters for each ensemble member. This task is accomplished using an inexpensive discrete optimization algorithm that minimizes both RMS error in the calibration interval and the number of free parameters in the reconstruction model to avoid artificial skill. Within this framework, CCA is investigated for its sensitivity to the level of noise in the pseudoproxy network and the spatial distribution of the network. Warm biases, variance losses, and validation-interval error increase with noise level and vary spatially within the reconstructed fields. Reconstruction skill, measured as grid-point correlations during the validation interval, is lowest in
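
    A pseudoproxy of the kind used here is typically built by adding white noise to a GCM grid-point series, scaled so the signal-to-noise ratio (by standard deviation) hits a target value; repeating this with fresh noise seeds yields the distinct realizations that a 200-member ensemble requires. A minimal sketch with an arbitrary stand-in series:

        import numpy as np

        def make_pseudoproxy(signal, snr, rng):
            # Add white noise scaled to the target SNR (by standard deviation).
            noise = rng.normal(size=signal.shape)
            noise *= signal.std() / (snr * noise.std())
            return signal + noise

        rng = np.random.default_rng(0)
        gcm_temp = np.cumsum(rng.normal(size=1000)) * 0.05   # stand-in for a GCM series
        for snr in (1.0, 0.5, 0.25):
            pp = make_pseudoproxy(gcm_temp, snr, rng)
            r = np.corrcoef(gcm_temp, pp)[0, 1]
            print(f"SNR {snr}: correlation with signal = {r:.2f}")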

  5. lpNet: a linear programming approach to reconstruct signal transduction networks.

    Science.gov (United States)

    Matos, Marta R A; Knapp, Bettina; Kaderali, Lars

    2015-10-01

    With the widespread availability of high-throughput experimental technologies, it has become possible to study hundreds to thousands of cellular factors simultaneously, such as coding or non-coding mRNA or protein concentrations. Still, extracting information about the underlying regulatory or signaling interactions from these data remains a difficult challenge. We present a flexible approach to network inference based on linear programming. Our method reconstructs the interactions of factors from a combination of perturbation/non-perturbation and steady-state/time-series data. We show on both simulated and real data that our methods are able to reconstruct the underlying networks fast and efficiently, thus shedding new light on biological processes and, in particular, on disease mechanisms of action. We have implemented the approach as an R package available through Bioconductor. The package is freely available under the GNU Public License (GPL-3) from bioconductor.org (http://bioconductor.org/packages/release/bioc/html/lpNet.html) and is compatible with most operating systems (Windows, Linux, Mac OS) and hardware architectures. Contact: bettina.knapp@helmholtz-muenchen.de. Supplementary data are available at Bioinformatics online.

  6. A biomechanical modeling-guided simultaneous motion estimation and image reconstruction technique (SMEIR-Bio) for 4D-CBCT reconstruction

    Science.gov (United States)

    Huang, Xiaokun; Zhang, You; Wang, Jing

    2018-02-01

    Reconstructing four-dimensional cone-beam computed tomography (4D-CBCT) images directly from respiratory phase-sorted traditional 3D-CBCT projections can capture the target motion trajectory, reduce motion artifacts, and reduce imaging dose and time. However, the limited number of projections in each phase after phase-sorting degrades CBCT image quality under traditional reconstruction techniques. To address this problem, we developed a simultaneous motion estimation and image reconstruction (SMEIR) algorithm, an iterative method that can reconstruct higher-quality 4D-CBCT images from limited projections using an inter-phase intensity-driven motion model. However, the accuracy of the intensity-driven motion model is limited in regions with fine details whose quality is degraded by the insufficient projection number, which consequently degrades the reconstructed image quality in the corresponding regions. In this study, we developed a new 4D-CBCT reconstruction algorithm by introducing biomechanical modeling into SMEIR (SMEIR-Bio) to boost the accuracy of the motion model in regions with small, fine structures. The biomechanical modeling uses tetrahedral meshes to model the organs of interest and solves internal organ motion using tissue elasticity parameters and mesh boundary conditions. This physics-driven approach enhances the accuracy of the solved motion in regions of fine structure. This study used 11 lung patient cases to evaluate the performance of SMEIR-Bio, making both qualitative and quantitative comparisons between SMEIR-Bio, SMEIR, and the algebraic reconstruction technique with total variation regularization (ART-TV). The reconstruction results suggest that SMEIR-Bio improves the motion model's accuracy in regions containing small, fine details, which consequently enhances the accuracy and quality of the reconstructed 4D-CBCT images.

  7. Living on the edge: a toy model for holographic reconstruction of algebras with centers

    Energy Technology Data Exchange (ETDEWEB)

    Donnelly, William; Marolf, Donald; Michel, Ben; Wien, Jason [Department of Physics, University of California,Santa Barbara, CA 93106 (United States)

    2017-04-18

    We generalize the Pastawski-Yoshida-Harlow-Preskill (HaPPY) holographic quantum error-correcting code to provide a toy model for bulk gauge fields or linearized gravitons. The key new elements are the introduction of degrees of freedom on the links (edges) of the associated tensor network and their connection to further copies of the HaPPY code by an appropriate isometry. The result is a model in which boundary regions allow the reconstruction of bulk algebras with central elements living on the interior edges of the (greedy) entanglement wedge, and where these central elements can also be reconstructed from complementary boundary regions. In addition, the entropy of boundary regions receives both Ryu-Takayanagi-like contributions and further corrections that model the δArea/(4G_N) term of Faulkner, Lewkowycz, and Maldacena. Comparison with Yang-Mills theory then suggests that this δArea/(4G_N) term can be reinterpreted as a part of the bulk entropy of gravitons under an appropriate extension of the physical bulk Hilbert space.

  8. Living on the edge: a toy model for holographic reconstruction of algebras with centers

    International Nuclear Information System (INIS)

    Donnelly, William; Marolf, Donald; Michel, Ben; Wien, Jason

    2017-01-01

    We generalize the Pastawski-Yoshida-Harlow-Preskill (HaPPY) holographic quantum error-correcting code to provide a toy model for bulk gauge fields or linearized gravitons. The key new elements are the introduction of degrees of freedom on the links (edges) of the associated tensor network and their connection to further copies of the HaPPY code by an appropriate isometry. The result is a model in which boundary regions allow the reconstruction of bulk algebras with central elements living on the interior edges of the (greedy) entanglement wedge, and where these central elements can also be reconstructed from complementary boundary regions. In addition, the entropy of boundary regions receives both Ryu-Takayanagi-like contributions and further corrections that model the δArea/(4G_N) term of Faulkner, Lewkowycz, and Maldacena. Comparison with Yang-Mills theory then suggests that this δArea/(4G_N) term can be reinterpreted as a part of the bulk entropy of gravitons under an appropriate extension of the physical bulk Hilbert space.

  9. Evaluating climate field reconstruction techniques using improved emulations of real-world conditions

    Science.gov (United States)

    Wang, J.; Emile-Geay, J.; Guillot, D.; Smerdon, J. E.; Rajaratnam, B.

    2014-01-01

    Pseudoproxy experiments (PPEs) have become an important framework for evaluating paleoclimate reconstruction methods. Most existing PPE studies assume constant proxy availability through time and uniform proxy quality across the pseudoproxy network. Real multiproxy networks are, however, marked by pronounced disparities in proxy quality and a steep decline in proxy availability back in time, either of which may have large effects on reconstruction skill. A suite of PPEs constructed from a millennium-length general circulation model (GCM) simulation is thus designed to mimic these various real-world characteristics. The new pseudoproxy network is used to evaluate four climate field reconstruction (CFR) techniques: truncated total least squares embedded within the regularized EM (expectation-maximization) algorithm (RegEM-TTLS), the Mann et al. (2009) implementation of RegEM-TTLS (M09), canonical correlation analysis (CCA), and Gaussian graphical models embedded within RegEM (GraphEM). Each method's risk properties are also assessed via a 100-member noise ensemble. Contrary to expectation, it is found that reconstruction skill does not vary monotonically with proxy availability, but is also a function of the type and amplitude of climate variability (forced events vs. internal variability). The use of realistic spatiotemporal pseudoproxy characteristics also exposes large inter-method differences. Despite comparable fidelity in reconstructing the global mean temperature, spatial skill varies considerably between CFR techniques. Both GraphEM and CCA efficiently exploit teleconnections and produce consistent reconstructions across the ensemble. RegEM-TTLS and M09 appear advantageous for reconstructions on highly noisy data, but are subject to larger stochastic variations across different realizations of pseudoproxy noise. Results collectively highlight the importance of designing realistic pseudoproxy networks and implementing multiple noise realizations of PPEs

  10. Hierarchical Bayesian Model for Simultaneous EEG Source and Forward Model Reconstruction (SOFOMORE)

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole

    2009-01-01

    In this paper we propose an approach to handle forward model uncertainty for EEG source reconstruction. A stochastic forward model is motivated by the many uncertain contributions that form the forward propagation model, including the tissue conductivity distribution, the cortical surface, and ele...

  11. Dose reconstruction modeling for medical radiation workers

    International Nuclear Information System (INIS)

    Choi, Yeong Chull; Cha, Eun Shil; Lee, Won Jin

    2017-01-01

    Exposure information is a crucial element for the assessment of health risks due to radiation. Radiation doses received by medical radiation workers have been collected and maintained in a public registry since 1996. Since exposure levels in the remote past are of greater concern, it is essential to reconstruct unmeasured past doses using known information. We developed retrodiction models for different groups of medical radiation workers and estimated individual past doses before 1996. Using these estimates, organ doses can be calculated which, in turn, will be used to explore a wide range of health risks of occupational radiation exposure in medicine.

  12. Dose reconstruction modeling for medical radiation workers

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Yeong Chull; Cha, Eun Shil; Lee, Won Jin [Dept. of Preventive Medicine, Korea University, Seoul (Korea, Republic of)

    2017-04-15

    Exposure information is a crucial element for the assessment of health risks due to radiation. Radiation doses received by medical radiation workers have been collected and maintained in a public registry since 1996. Since exposure levels in the remote past are of greater concern, it is essential to reconstruct unmeasured past doses using known information. We developed retrodiction models for different groups of medical radiation workers and estimated individual past doses before 1996. Using these estimates, organ doses can be calculated which, in turn, will be used to explore a wide range of health risks of occupational radiation exposure in medicine.

  13. Automated comparison of Bayesian reconstructions of experimental profiles with physical models

    International Nuclear Information System (INIS)

    Irishkin, Maxim

    2014-01-01

    In this work we developed an expert system that carries out, in an integrated and fully automated way, i) a reconstruction of plasma profiles from the measurements, using Bayesian analysis, ii) a prediction of the reconstructed quantities according to some models, and iii) an intelligent comparison of the first two steps. This system includes systematic checking of the internal consistency of the reconstructed quantities, enables automated model validation and, if a well-validated model is used, can be applied to help detect interesting new physics in an experiment. The work shows three applications of this quite general system. The expert system can successfully detect failures in the automated plasma reconstruction and provide (in successful reconstruction cases) statistics of the agreement of the models with the experimental data, i.e. information on model validity. (author) [fr]

  14. Bayesian model selection of template forward models for EEG source reconstruction.

    Science.gov (United States)

    Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan

    2014-06-01

    Several EEG source reconstruction techniques have been proposed to identify the generating neuronal sources of electrical activity measured on the scalp. The solution of these techniques depends directly on the accuracy of the forward model that is inverted. Recently, a parametric empirical Bayesian (PEB) framework for distributed source reconstruction in EEG/MEG was introduced and implemented in the Statistical Parametric Mapping (SPM) software. The framework allows us to compare different forward modeling approaches, using real data, instead of using more traditional simulated data from an assumed true forward model. In the absence of a subject specific MR image, a 3-layered boundary element method (BEM) template head model is currently used including a scalp, skull and brain compartment. In this study, we introduced volumetric template head models based on the finite difference method (FDM). We constructed a FDM head model equivalent to the BEM model and an extended FDM model including CSF. These models were compared within the context of three different types of source priors related to the type of inversion used in the PEB framework: independent and identically distributed (IID) sources, equivalent to classical minimum norm approaches, coherence (COH) priors similar to methods such as LORETA, and multiple sparse priors (MSP). The resulting models were compared based on ERP data of 20 subjects using Bayesian model selection for group studies. The reconstructed activity was also compared with the findings of previous studies using functional magnetic resonance imaging. We found very strong evidence in favor of the extended FDM head model with CSF and assuming MSP. These results suggest that the use of realistic volumetric forward models can improve PEB EEG source reconstruction.

  15. Reconstruction of biological networks based on life science data integration.

    Science.gov (United States)

    Kormeier, Benjamin; Hippe, Klaus; Arrigo, Patrizio; Töpel, Thoralf; Janowski, Sebastian; Hofestädt, Ralf

    2010-10-27

    For the implementation of the virtual cell, the fundamental question is how to model and simulate complex biological networks. Therefore, based on relevant molecular databases and information systems, biological data integration is an essential step in constructing biological networks. In this paper, we present the applications BioDWH, an integration toolkit for building life science data warehouses; CardioVINEdb, an information system for biological data on cardiovascular disease; and VANESA, a network editor for modeling and simulation of biological networks. Based on this integration process, the system supports the generation of biological network models. A case study of a cardiovascular-disease-related gene-regulated biological network is also presented.

  16. Generation of Complex Karstic Conduit Networks with a Hydro-chemical Model

    Science.gov (United States)

    De Rooij, R.; Graham, W. D.

    2016-12-01

    The discrete-continuum approach is very well suited to simulating flow and solute transport within karst aquifers. Using this approach, discrete one-dimensional conduits are embedded within a three-dimensional continuum representative of the porous limestone matrix. Typically, however, little is known about the geometry of the karstic conduit network, and as such the discrete-continuum approach is rarely used for practical applications. It may be argued, however, that the uncertainty associated with the geometry of the network could be handled by modeling an ensemble of possible karst conduit networks within a stochastic framework. We propose to generate stochastically realistic karst conduit networks by simulating the widening of conduits caused by the dissolution of limestone over geologically relevant timescales. We illustrate that advanced numerical techniques permit the non-linear, coupled hydro-chemical processes to be solved efficiently, such that relatively large and complex networks can be generated in acceptable time frames. Instead of specifying flow boundary conditions on conduit cells to recharge the network, as is typically done in classical speleogenesis models, we specify an effective rainfall rate over the land surface and let the model physics determine the amount of water entering the network. This is advantageous since the amount of water entering the network is extremely difficult to reconstruct, whereas the effective rainfall rate may be quantified using paleoclimatic data. Furthermore, we show that poorly known flow conditions may be constrained by requiring a realistic flow field. Using our speleogenesis model we have investigated factors that influence the geometry of simulated conduit networks. We illustrate that our model generates typical branchwork, network and anastomotic conduit systems. Flow, solute transport and water ages in karst aquifers are simulated using a few illustrative networks.
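
    The positive feedback at the heart of such models (flow enlarges conduits, and enlarged conduits capture more flow) can be caricatured on a four-node network: conductance grows with diameter (Hagen-Poiseuille-like), heads are solved linearly each step, and every conduit widens with its discharge under saturating 'dissolution' kinetics. Every constant below is an illustrative assumption; the real model couples full flow with calcite dissolution chemistry.

        import numpy as np

        # Four nodes in a line plus a shortcut; node 0 is recharge at fixed head 1,
        # node 3 is the spring at head 0; nodes 1 and 2 are interior.
        edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
        d = np.array([1.0, 1.0, 1.0, 0.5])            # initial conduit diameters

        for step in range(300):
            g = d ** 4                                # conductance ~ diameter^4
            G = np.zeros((2, 2)); b = np.zeros(2)     # linear system for interior heads
            for (i, j), gij in zip(edges, g):
                for node, other in ((i, j), (j, i)):
                    if node in (1, 2):
                        r = node - 1
                        G[r, r] += gij
                        if other in (1, 2):
                            G[r, other - 1] -= gij
                        elif other == 0:              # recharge boundary, head = 1
                            b[r] += gij
                        # spring boundary (other == 3) has head 0: no contribution
            h = np.concatenate(([1.0], np.linalg.solve(G, b), [0.0]))
            q = g * np.array([h[i] - h[j] for i, j in edges])
            d += 0.01 * np.tanh(np.abs(q))            # saturating dissolution kinetics

        print("final diameters:", d.round(2))          # the favoured path stays widest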

  17. A program for verification of phylogenetic network models.

    Science.gov (United States)

    Gunawan, Andreas D M; Lu, Bingxin; Zhang, Louxin

    2016-09-01

    Genetic material is transferred in a non-reproductive manner across species more frequently than commonly thought, particularly in the bacterial kingdom. On the one hand, extant genomes are thus more properly considered as a fusion product of both reproductive and non-reproductive genetic transfers, which has motivated researchers to adopt phylogenetic networks to study genome evolution. On the other hand, a gene's evolution is usually tree-like and has been studied for over half a century. Accordingly, the relationships between phylogenetic trees and networks are the basis for the reconstruction and verification of phylogenetic networks. One important problem in verifying a network model is determining whether or not certain existing phylogenetic trees are displayed in a phylogenetic network. This problem, formally called the tree containment problem, is NP-complete even for binary phylogenetic networks. We design an exponential-time but efficient method for determining whether or not a phylogenetic tree is displayed in an arbitrary phylogenetic network. It is developed on the basis of the so-called reticulation-visible property of phylogenetic networks. A C program is available for download at http://www.math.nus.edu.sg/∼matzlx/tcp_package. Contact: matzlx@nus.edu.sg. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  18. Reconstruction of biological networks based on life science data integration

    Directory of Open Access Journals (Sweden)

    Kormeier Benjamin

    2010-06-01

    Full Text Available For the implementation of the virtual cell, the fundamental question is how to model and simulate complex biological networks. Therefore, biological data integration based on relevant molecular databases and information systems is an essential step in constructing biological networks. In this paper, we present the applications BioDWH - an integration toolkit for building life science data warehouses, CardioVINEdb - an information system for biological data in cardiovascular disease and VANESA - a network editor for modeling and simulation of biological networks. Based on this integration process, the system supports the generation of biological network models. A case study of a cardiovascular-disease related gene-regulated biological network is also presented.

  19. Modeling the citation network by network cosmology.

    Science.gov (United States)

    Xie, Zheng; Ouyang, Zhenzheng; Zhang, Pengyuan; Yi, Dongyun; Kong, Dexing

    2015-01-01

    Citation between papers can be treated as a causal relationship. In addition, some citation networks have a number of similarities to the causal networks in network cosmology, e.g., similar in- and out-degree distributions. Hence, it is possible to model the citation network using network cosmology. The causal network models built on homogeneous spacetimes have some restrictions when describing certain phenomena in citation networks, e.g., that hot papers receive more citations than other simultaneously published papers. We propose an inhomogeneous causal network model for the citation network, whose connection mechanism expresses several features of citation well. The node growth trend and degree distributions of the generated networks also fit those of some citation networks well.

  20. Complex Behavior in an Integrate-and-Fire Neuron Model Based on Small World Networks

    International Nuclear Information System (INIS)

    Lin Min; Chen Tianlun

    2005-01-01

    Based on our previously proposed pulse-coupled integrate-and-fire neuron model on small-world networks, we investigate the complex behavior of electroencephalographic (EEG)-like activities produced by such a model. We find that the EEG-like activities have obvious chaotic characteristics. We also analyze the complex behaviors of the EEG-like signals using methods such as spectral analysis, phase-space reconstruction, and estimation of the correlation dimension.
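
    Phase-space reconstruction of a scalar signal is commonly done by time-delay embedding, with the correlation dimension then estimated from the embedded points (the Grassberger-Procaccia approach). The sketch below illustrates this generic procedure in Python; the toy signal, delay, embedding dimension and radii are assumptions for the example, not values from the paper.

        import numpy as np

        def delay_embed(x, dim=3, tau=8):
            """Phase-space reconstruction by time-delay embedding."""
            n = len(x) - (dim - 1) * tau
            return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

        def correlation_sum(points, r):
            """Fraction of point pairs closer than r (Grassberger-Procaccia C(r))."""
            d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
            iu = np.triu_indices(len(points), k=1)
            return np.mean(d[iu] < r)

        # Toy EEG-like signal (illustrative only, not the model's output).
        rng = np.random.default_rng(0)
        t = np.arange(2000)
        x = np.sin(0.07 * t) * np.sin(0.013 * t) + 0.05 * rng.standard_normal(len(t))
        emb = delay_embed(x)

        # The slope of log C(r) vs log r estimates the correlation dimension.
        radii = np.logspace(-1, 0, 8)
        c = [correlation_sum(emb[::4], r) for r in radii]  # subsampled for speed
        slope = np.polyfit(np.log(radii), np.log(c), 1)[0]
        print(f"estimated correlation dimension ~ {slope:.2f}")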

  1. Continental-Scale Temperature Reconstructions from the PAGES 2k Network

    Science.gov (United States)

    Kaufman, D. S.

    2012-12-01

    We present a major new synthesis of seven regional temperature reconstructions to elucidate the global pattern of variations and their association with climate-forcing mechanisms over the past two millennia. To coordinate the integration of new and existing data of all proxy types, the Past Global Changes (PAGES) project developed the 2k Network. It comprises nine working groups representing eight continental-scale regions and the oceans. The PAGES 2k Consortium, authoring this paper, presently includes 79 representatives from 25 countries. For this synthesis, each of the PAGES 2k working groups identified the proxy climate records for reconstructing past temperature and associated uncertainty using the data and methodologies that they deemed most appropriate for their region. The datasets are from 973 sites where tree rings, pollen, corals, lake and marine sediment, glacier ice, speleothems, and historical documents record changes in biologically and physically mediated processes that are sensitive to temperature change, among other climatic factors. The proxy records used for this synthesis are available through the NOAA World Data Center for Paleoclimatology. On long time scales, the temperature reconstructions display similarities among regions, and a large part of this common behavior can be explained by known climate forcings. Reconstructed temperatures in all regions show an overall long-term cooling trend until around 1900 C.E., followed by strong warming during the 20th century. On the multi-decadal time scale, we assessed the variability among the temperature reconstructions using principal component (PC) analysis of the standardized decadal mean temperatures over the period of overlap among the reconstructions (1200 to 1980 C.E.). PC1 explains 35% of the total variability and is strongly correlated with temperature reconstructions from the four Northern Hemisphere regions, and with the sum of external forcings including solar, volcanic, and greenhouse gases.

  2. Reconstruction of road defects and road roughness classification using vehicle responses with artificial neural networks simulation

    CSIR Research Space (South Africa)

    Ngwangwa, HM

    2010-04-01

    Full Text Available Journal of Terramechanics, Volume 47, Issue 2, April 2010, Pages 97-111. Reconstruction of road defects and road roughness classification using vehicle responses with artificial neural networks simulation. H.M. Ngwangwa, P.S. Heyns, F...

  3. Reconstructing bidimensional scalar field theory models

    International Nuclear Information System (INIS)

    Flores, Gabriel H.; Svaiter, N.F.

    2001-07-01

    In this paper we review how to reconstruct scalar field theories in two-dimensional spacetime starting from solvable Schrodinger equations. Three different Schrodinger potentials are analyzed. We obtained two new models starting from the Morse and Scarf II hyperbolic potentials: the U(θ) = θ^2 ln^2(θ^2) model and the U(θ) = θ^2 cos^2(ln(θ^2)) model, respectively. (author)

  4. Modelling computer networks

    International Nuclear Information System (INIS)

    Max, G

    2011-01-01

    Traffic in computer networks can be described as a complicated system. Such systems show non-linear features, and simulating their behaviour is also difficult. Before deploying network equipment, users want to know the capability of their computer network: they do not want the servers to be overloaded during temporary traffic peaks when more requests arrive than the server is designed for. As a starting point for our study, a non-linear system model of network traffic is established to examine the behaviour of the planned network. The paper presents the setup of a non-linear simulation model that helps us to observe dataflow problems in networks. This simple model captures the relationship between the competing traffic and the input and output dataflow. In this paper, we also focus on measuring the bottleneck of the network, which is defined as the difference between the link capacity and the competing traffic volume on the link that limits end-to-end throughput. We validate the model using measurements on a working network. The results show that the initial model estimates the main behaviours and critical parameters of the network well. Based on this study, we propose to develop a new algorithm that experimentally determines and predicts the available parameters of the modelled network.
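
    The bottleneck notion defined above (link capacity minus competing traffic on the limiting link) reduces to a small computation. A minimal illustrative sketch, with an invented 4-link path and invented traffic statistics:

        import numpy as np

        rng = np.random.default_rng(1)

        # Invented 4-link path: capacities (Mbit/s) and fluctuating competing
        # traffic per link, sampled once per second for ten minutes.
        capacity = np.array([100.0, 40.0, 100.0, 60.0])
        competing = np.clip(
            rng.normal([60, 10, 30, 20], [15, 5, 10, 8], size=(600, 4)), 0, None)

        # Bottleneck: capacity minus competing traffic on the limiting link.
        available = np.clip(capacity - competing, 0.0, None)
        throughput = available.min(axis=1)
        limiting = available.argmin(axis=1)

        print("mean end-to-end throughput: %.1f Mbit/s" % throughput.mean())
        print("share of time each link limits the path:",
              np.bincount(limiting, minlength=4) / len(limiting))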

  5. A reliability index for assessment of crack profile reconstructed from ECT signals using a neural-network approach

    International Nuclear Information System (INIS)

    Yusa, Noritaka; Chen, Zhenmao; Miya, Kenzo; Cheng, Weiying

    2002-01-01

    This paper proposes a reliability parameter to enhance an inversion scheme developed by the authors. The scheme is based upon an artificial neural network that simulates the mapping between eddy current signals and crack profiles. One of the biggest advantages of the scheme is that it can deal with conductive cracks, which is necessary for reconstructing natural cracks. However, it has one significant disadvantage: the reliability of the reconstructed profiles was unknown. The proposed parameter provides an index for assessing the crack profile and overcomes this disadvantage. After the parameter is validated by reconstruction of simulated cracks, it is applied to the reconstruction of natural cracks that occurred in steam generator tubes of a pressurized water reactor. It is revealed that the parameter is applicable not only to simulated cracks but also to natural ones. (author)

  6. Linking plate reconstructions with deforming lithosphere to geodynamic models

    Science.gov (United States)

    Müller, R. D.; Gurnis, M.; Flament, N.; Seton, M.; Spasojevic, S.; Williams, S.; Zahirovic, S.

    2011-12-01

    While global computational models are rapidly advancing in terms of their capabilities, there is an increasing need for assimilating observations into these models and/or ground-truthing model outputs. The open-source and platform independent GPlates software fills this gap. It was originally conceived as a tool to interactively visualize and manipulate classical rigid plate reconstructions and represent them as time-dependent topological networks of editable plate boundaries. The user can export time-dependent plate velocity meshes that can be used either to define initial surface boundary conditions for geodynamic models or alternatively impose plate motions throughout a geodynamic model run. However, tectonic plates are not rigid, and neglecting plate deformation, especially that of the edges of overriding plates, can result in significant misplacing of plate boundaries through time. A new, substantially re-engineered version of GPlates is now being developed that allows an embedding of deforming plates into topological plate boundary networks. We use geophysical and geological data to define the limit between rigid and deforming areas, and the deformation history of non-rigid blocks. The velocity field predicted by these reconstructions can then be used as a time-dependent surface boundary condition in regional or global 3-D geodynamic models, or alternatively as an initial boundary condition for a particular plate configuration at a given time. For time-dependent models with imposed plate motions (e.g. using CitcomS) we incorporate the continental lithosphere by embedding compositionally distinct crust and continental lithosphere within the thermal lithosphere. We define three isostatic columns of different thickness and buoyancy based on the tectonothermal age of the continents: Archean, Proterozoic and Phanerozoic. In the fourth isostatic column, the oceans, the thickness of the thermal lithosphere is assimilated using a half-space cooling model. We also

  7. Technical Note: Probabilistically constraining proxy age–depth models within a Bayesian hierarchical reconstruction model

    Directory of Open Access Journals (Sweden)

    J. P. Werner

    2015-03-01

    Full Text Available Reconstructions of the late-Holocene climate rely heavily upon proxies that are assumed to be accurately dated by layer counting, such as measurements of tree rings, ice cores, and varved lake sediments. Considerable advances could be achieved if time-uncertain proxies could be included within these multiproxy reconstructions, and if time uncertainties were recognized and correctly modeled for proxies commonly treated as free of age model errors. Current approaches for accounting for time uncertainty are generally limited to repeating the reconstruction using each one of an ensemble of age models, thereby inflating the final estimated uncertainty – in effect, each possible age model is given equal weighting. Uncertainties can be reduced by exploiting the inferred space–time covariance structure of the climate to re-weight the possible age models. Here, we demonstrate how Bayesian hierarchical climate reconstruction models can be augmented to account for time-uncertain proxies. Critically, although a priori all age models are given equal probability of being correct, the probabilities associated with the age models are formally updated within the Bayesian framework, thereby reducing uncertainties. Numerical experiments show that updating the age model probabilities decreases uncertainty in the resulting reconstructions, as compared with the current de facto standard of sampling over all age models, provided there is sufficient information from other data sources in the spatial region of the time-uncertain proxy. This approach can readily be generalized to non-layer-counted proxies, such as those derived from marine sediments.
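
    The core idea - starting from equal prior probabilities and formally re-weighting the ensemble of age models - can be illustrated with a deliberately simplified Gaussian likelihood. Everything below (the stand-in climate signal, noise level, ensemble size) is an assumption for the sketch, not the paper's hierarchical model:

        import numpy as np

        rng = np.random.default_rng(2)

        # K candidate age models place the same proxy series on a common time
        # axis; an independent estimate with known noise exists on that axis.
        K, T, sigma = 50, 200, 0.3
        reference = np.sin(np.linspace(0, 6, T))          # stand-in climate signal
        proxy = reference + sigma * rng.standard_normal((K, T))

        # A priori equal probabilities, updated with Gaussian log-likelihoods.
        log_lik = -0.5 * np.sum((proxy - reference) ** 2, axis=1) / sigma**2
        w = np.exp(log_lik - log_lik.max())               # stabilised exponent
        w /= w.sum()

        # Posterior-weighted reconstruction instead of equal weighting.
        reconstruction = w @ proxy
        print("effective number of age models:", 1.0 / np.sum(w**2))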

  8. Overview of the neural network based technique for monitoring of road condition via reconstructed road profiles

    CSIR Research Space (South Africa)

    Ngwangwa, HM

    2008-07-01

    Full Text Available on the road and driver to assess the integrity of road and vehicle infrastructure. In this paper, vehicle vibration data are applied to an artificial neural network to reconstruct the corresponding road surface profiles. The results show that the technique...

  9. Reconstruction of Daily Sea Surface Temperature Based on Radial Basis Function Networks

    Directory of Open Access Journals (Sweden)

    Zhihong Liao

    2017-11-01

    Full Text Available A radial basis function network (RBFN) method is proposed to reconstruct daily sea surface temperatures (SSTs) from limited SST samples. For the purpose of evaluating the SSTs using this method, non-biased SST samples in the Pacific Ocean (10°N–30°N, 115°E–135°E) are selected for the arrival of the tropical storm Hagibis in June 2014; these SST samples are obtained from the Reynolds optimum interpolation (OI) v2 daily 0.25° SST (OISST) products according to the distribution of AVHRR L2p SST and in-situ SST data. Furthermore, an improved nearest neighbor cluster (INNC) algorithm is designed to search for the optimal hidden knots for RBFNs from both the SST samples and the background fields. The reconstructed SSTs from the RBFN method are then compared with the results from the OI method. The statistical results show that the RBFN method performs better than the OI method in this study, with an average RMSE of 0.48 °C for the RBFN method, considerably smaller than the 0.69 °C obtained for the OI method. Additionally, RBFN methods with different basis functions and clustering algorithms are tested, and we find that the INNC algorithm with the multiquadric basis function is well suited for reconstructing SSTs when the SST samples are sparsely distributed.
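
    An RBFN of the kind described reduces, in its simplest interpolating form, to solving one linear system for the weights of a multiquadric basis centred on the hidden knots. A minimal sketch with invented sample locations and temperatures (the knot selection by the INNC algorithm is not reproduced here):

        import numpy as np

        def multiquadric(r, eps=0.3):
            """Multiquadric basis phi(r) = sqrt(1 + (eps*r)^2)."""
            return np.sqrt(1.0 + (eps * r) ** 2)

        def rbf_fit(centers, values):
            """Solve the linear system so the RBFN interpolates the samples."""
            r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
            return np.linalg.solve(multiquadric(r), values)

        def rbf_eval(points, centers, weights):
            r = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
            return multiquadric(r) @ weights

        # Invented sparse samples: (lon, lat) positions and SSTs in degC.
        rng = np.random.default_rng(3)
        lonlat = rng.uniform([115, 10], [135, 30], size=(40, 2))
        sst = 28.0 - 0.15 * (lonlat[:, 1] - 10) + 0.2 * rng.standard_normal(40)

        w = rbf_fit(lonlat, sst)
        print(rbf_eval(np.array([[120.0, 15.0], [130.0, 25.0]]), lonlat, w))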

  10. Reconstructing Causal Biological Networks through Active Learning.

    Directory of Open Access Journals (Sweden)

    Hyunghoon Cho

    Full Text Available Reverse-engineering of biological networks is a central problem in systems biology. Intervention data, such as gene knockouts or knockdowns, are typically used for teasing apart causal relationships among genes. Under time or resource constraints, one needs to carefully choose which intervention experiments to carry out. Previous approaches for selecting the most informative interventions have largely focused on discrete Bayesian networks. However, continuous Bayesian networks are of great practical interest, especially in the study of complex biological systems and their quantitative properties. In this work, we present an efficient, information-theoretic active learning algorithm for Gaussian Bayesian networks (GBNs), which serve as important models for gene regulatory networks. In addition to providing linear-algebraic insights unique to GBNs, leading to significant runtime improvements, we demonstrate the effectiveness of our method on data simulated with GBNs and on the DREAM4 network inference challenge data sets. Our method generally leads to faster recovery of the underlying network structure and faster convergence to the final distribution of confidence scores over candidate graph structures using the full data, in comparison to random selection of intervention experiments.

  11. Dynamic network reconstruction from gene expression data applied to immune response during bacterial infection.

    Science.gov (United States)

    Guthke, Reinhard; Möller, Ulrich; Hoffmann, Martin; Thies, Frank; Töpfer, Susanne

    2005-04-15

    The immune response to bacterial infection represents a complex network of dynamic gene and protein interactions. We present an optimized reverse engineering strategy aimed at the reconstruction of this kind of interaction network. The proposed approach is based on both microarray data and available biological knowledge. The main kinetics of the immune response were identified by fuzzy clustering of gene expression profiles (time series). The number of clusters was optimized using various evaluation criteria. For each cluster, a representative gene with a high fuzzy membership was chosen in accordance with available physiological knowledge. Hypothetical network structures were then identified by seeking systems of ordinary differential equations whose simulated kinetics could fit the gene expression profiles of the cluster-representative genes. For the construction of hypothetical network structures, singular value decomposition (SVD)-based methods were compared with a newly introduced heuristic network generation method. It turned out that the proposed novel method could find sparser networks and gave better fits to the experimental data. Contact: Reinhard.Guthke@hki-jena.de.
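
    The step of seeking ordinary differential equations that fit cluster-representative profiles can be illustrated, in its simplest linear form, by regressing finite-difference derivatives on the states. The toy three-gene cascade below is an assumption for the example; the paper's SVD-based and heuristic generation methods are not reproduced:

        import numpy as np

        rng = np.random.default_rng(4)

        # Toy three-gene cascade as "cluster-representative" kinetics.
        A_true = np.array([[-0.5, 0.0, 0.0],
                           [ 0.8, -0.6, 0.0],
                           [ 0.0, 0.7, -0.4]])
        t = np.linspace(0.0, 10.0, 21)
        dt = t[1] - t[0]
        x = np.zeros((3, len(t)))
        x[:, 0] = [1.0, 0.1, 0.1]
        for k in range(len(t) - 1):        # Euler integration of dx/dt = A x
            x[:, k + 1] = x[:, k] + dt * (A_true @ x[:, k]) \
                          + 0.01 * rng.standard_normal(3)

        # Reverse engineering: find A minimising ||dX/dt - A X||^2, one
        # least-squares problem per gene, from finite-difference derivatives.
        dxdt = np.gradient(x, dt, axis=1)
        A_hat, *_ = np.linalg.lstsq(x.T, dxdt.T, rcond=None)
        print(np.round(A_hat.T, 2))        # should resemble A_true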

  12. Model-based image reconstruction in X-ray computed tomography

    NARCIS (Netherlands)

    Zbijewski, Wojciech Bartosz

    2006-01-01

    The thesis investigates the applications of iterative, statistical reconstruction (SR) algorithms in X-ray Computed Tomography. Emphasis is put on various aspects of system modeling in statistical reconstruction. Fundamental issues such as effects of object discretization and algorithm

  13. A reconstruction of Maxwell model for effective thermal conductivity of composite materials

    International Nuclear Information System (INIS)

    Xu, J.Z.; Gao, B.Z.; Kang, F.Y.

    2016-01-01

    Highlights: • Deficiencies were found in the classical Maxwell model for effective thermal conductivity. • The Maxwell model was reconstructed based on a potential mean-field theory. • The reconstructed Maxwell model was extended with particle–particle contact resistance. • Predictions by the reconstructed Maxwell model agree excellently with experimental data. - Abstract: Composite materials consisting of highly thermally conductive fillers and a polymer matrix are often used as thermal interface materials to dissipate heat generated by mechanical and electronic devices. The prediction of the effective thermal conductivity of composites remains a critical issue due to its dependence on numerous factors. Most models for prediction are based on the analogy between electric potential and temperature, both of which satisfy the Laplace equation under steady conditions. Maxwell was the first to derive the effective electric resistivity of composites, by examining the far-field spherical harmonic solution of the Laplace equation perturbed by a sphere of different resistivity, and his model is considered classical. However, a close review of Maxwell's derivation reveals several controversial issues (deficiencies) inherent in his model. In this study, we reconstruct the Maxwell model based on a potential mean-field theory to resolve these issues. For composites made of a continuum matrix and particle fillers, contact resistance among particles was introduced in the reconstruction of the Maxwell model. The newly reconstructed Maxwell model, with contact resistivity as a fitting parameter, is shown to fit excellently to experimental data over wide ranges of particle concentration and mean particle diameter. The scope of applicability of the reconstructed Maxwell model is also discussed using the contact resistivity as a parameter.
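
    For reference, the classical Maxwell prediction that the paper takes as its starting point is a one-line formula; the reconstructed, contact-resistance version is specific to the paper and is not reproduced here. A minimal sketch with illustrative filler and matrix conductivities:

        def maxwell_keff(k_m, k_p, phi):
            """Classical Maxwell model: effective conductivity of a dilute
            suspension of spheres (k_p, volume fraction phi) in a matrix k_m."""
            num = k_p + 2 * k_m + 2 * phi * (k_p - k_m)
            den = k_p + 2 * k_m - phi * (k_p - k_m)
            return k_m * num / den

        # Illustrative values: ceramic-like filler in a polymer-like matrix.
        for phi in (0.1, 0.3, 0.5):
            print(phi, round(maxwell_keff(0.2, 30.0, phi), 3))  # W/(m.K)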

  14. Reconstruction of Novel Viewpoint Image Using GRNN

    Institute of Scientific and Technical Information of China (English)

    李战委; 孙济洲; 张志强

    2003-01-01

    A neural-statistical approach to the reconstruction of novel viewpoint images using general regression neural networks (GRNN) is presented. Different color values are obtained when viewing the same surface point of an object from different viewpoints due to specular reflection, and the difference is related to the position of the viewpoint. The relationship between the position of the viewpoint and the color of the image is non-linear, so a neural network is introduced to perform the curve fitting, where the inputs of the neural network are only a few calibrated images with obvious specular reflection. By training the neural network, a network model is obtained. By inputting an arbitrary virtual viewpoint to the model, the image at that virtual viewpoint can be computed. Using the method presented here, novel viewpoint images with photo-realistic properties can be obtained; in particular, images with obvious specular reflection can be generated accurately. The method is an image-based rendering method, so a geometric model of the scene and the position of the lighting are not needed.
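
    A GRNN is essentially a kernel-weighted average of the training targets (a Nadaraya-Watson estimator), so the viewpoint-to-colour fit can be sketched in a few lines. The 2-D viewpoint parameterisation, the synthetic highlight, and the bandwidth below are assumptions for the illustration:

        import numpy as np

        def grnn_predict(x_train, y_train, x_query, sigma=0.15):
            """GRNN prediction: Gaussian-kernel weighted average of targets."""
            d2 = np.sum((x_query[:, None, :] - x_train[None, :, :]) ** 2, axis=-1)
            w = np.exp(-d2 / (2.0 * sigma**2))
            return (w @ y_train) / w.sum(axis=1, keepdims=True)

        # Invented data: 2-D viewpoint positions -> RGB colour of one surface
        # point, with a synthetic specular highlight near viewpoint (0.5, 0.5).
        rng = np.random.default_rng(5)
        views = rng.uniform(0, 1, size=(30, 2))
        spec = np.exp(-np.sum((views - 0.5) ** 2, axis=1) / 0.02)
        colors = np.clip(np.outer(spec, [1.0, 1.0, 0.9]) + [[0.2, 0.1, 0.1]], 0, 1)

        # Colour predicted for a novel (virtual) viewpoint.
        print(grnn_predict(views, colors, np.array([[0.45, 0.55]])))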

  15. BUMPER: the Bayesian User-friendly Model for Palaeo-Environmental Reconstruction

    Science.gov (United States)

    Holden, Phil; Birks, John; Brooks, Steve; Bush, Mark; Hwang, Grace; Matthews-Bird, Frazer; Valencia, Bryan; van Woesik, Robert

    2017-04-01

    We describe the Bayesian User-friendly Model for Palaeo-Environmental Reconstruction (BUMPER), a Bayesian transfer function for inferring past climate and other environmental variables from microfossil assemblages. The principal motivation for a Bayesian approach is that the palaeoenvironment is treated probabilistically, and can be updated as additional data become available. Bayesian approaches therefore provide a reconstruction-specific quantification of the uncertainty in the data and in the model parameters. BUMPER is fully self-calibrating, straightforward to apply, and computationally fast, requiring 2 seconds to build a 100-taxon model from a 100-site training-set on a standard personal computer. We apply the model's probabilistic framework to generate thousands of artificial training-sets under ideal assumptions. We then use these to demonstrate both the general applicability of the model and the sensitivity of reconstructions to the characteristics of the training-set, considering assemblage richness, taxon tolerances, and the number of training sites. We demonstrate general applicability to real data, considering three different organism types (chironomids, diatoms, pollen) and different reconstructed variables. In all of these applications an identically configured model is used, the only change being the input files that provide the training-set environment and taxon-count data.

  16. Developing Mesoscale Model of Fibrin-Platelet Network Representing Blood Clotting

    Science.gov (United States)

    Sun, Yueyi; Nikolov, Svetoslav; Bowie, Sam; Alexeev, Alexander; Lam, Wilbur; Myers, David

    Blood clotting disorders, which impair the body's natural ability to achieve hemostasis, can lead to a variety of life-threatening conditions such as excessive bleeding, stroke, or heart attack. Treatment of these disorders is highly dependent on understanding the underlying physics of the clotting process. Since clotting is a highly complex multiscale mechanism, developing a fully atomistic model is currently not possible. We develop a mesoscale model based on dissipative particle dynamics (DPD) to gain a fundamental understanding of the underlying principles controlling the clotting process. In our study, we examine experimental data on clot contraction using stacks of confocal microscopy images to estimate the crosslink density in the fibrin networks and the platelet locations. Using these data we reconstruct the platelet-rich fibrin network and study how platelet-fibrin interactions affect clotting. Furthermore, we probe how different system parameters affect clot contraction. NSF CAREER Award DMR-1255288.

  17. Inverse problems in eddy current testing using neural network

    Science.gov (United States)

    Yusa, N.; Cheng, W.; Miya, K.

    2000-05-01

    Reconstruction of cracks in conductive materials is one of the most important issues in the field of eddy current testing. Although many attempts to reconstruct cracks have been made, most of them deal only with artificial cracks machined by electro-discharge. In the case of natural cracks like stress corrosion cracking or inter-granular attack, however, there must be contact regions, and therefore their conductivity is not necessarily zero. In this study, an attempt to reconstruct natural cracks using a neural network is presented. The neural network was trained on numerically simulated data obtained by a fast forward solver that calculated unflawed potential data a priori to save computational time. The solver is based on the A-φ method discretized using FEM-BEM. A natural crack was modeled as an area whose conductivity is less than that of the specimen, and the distribution of conductivity in that area was reconstructed as well. Training the network took considerable time, but once trained, reconstruction was extremely fast. The well-trained network gave good reconstruction results.

  18. Right adrenal vein: comparison between adaptive statistical iterative reconstruction and model-based iterative reconstruction.

    Science.gov (United States)

    Noda, Y; Goshima, S; Nagata, S; Miyoshi, T; Kawada, H; Kawai, N; Tanahashi, Y; Matsuo, M

    2018-06-01

    To compare right adrenal vein (RAV) visualisation and the degree of contrast enhancement on adrenal venous phase images reconstructed using adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR) techniques. This prospective study was approved by the institutional review board, and written informed consent was waived. Fifty-seven consecutive patients who underwent adrenal venous phase imaging were enrolled. The same raw data were reconstructed using ASiR 40% and MBIR. An expert and a beginner independently reviewed the computed tomography (CT) images. RAV visualisation rates, background noise, and CT attenuation of the RAV, right adrenal gland, inferior vena cava (IVC), hepatic vein, and bilateral renal veins were compared between the two reconstruction techniques. RAV visualisation rates were higher with MBIR than with ASiR (95% versus 88%, p=0.13 for the expert and 93% versus 75%, p=0.002 for the beginner, respectively). RAV visualisation confidence ratings with MBIR were significantly greater than with ASiR, background noise was significantly lower with MBIR than with ASiR, and the attenuation measurements also differed significantly between the two techniques (p=0.0013 and 0.02). Reconstruction of adrenal venous phase images using MBIR significantly reduces background noise, leading to an improvement in RAV visualisation compared with ASiR. Copyright © 2018 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  19. Last millennium Northern Hemisphere summer temperatures from tree rings: Part II, spatially resolved reconstructions

    Science.gov (United States)

    Anchukaitis, Kevin J.; Wilson, Rob; Briffa, Keith R.; Büntgen, Ulf; Cook, Edward R.; D'Arrigo, Rosanne; Davi, Nicole; Esper, Jan; Frank, David; Gunnarson, Björn E.; Hegerl, Gabi; Helama, Samuli; Klesse, Stefan; Krusic, Paul J.; Linderholm, Hans W.; Myglan, Vladimir; Osborn, Timothy J.; Zhang, Peng; Rydval, Milos; Schneider, Lea; Schurer, Andrew; Wiles, Greg; Zorita, Eduardo

    2017-05-01

    Climate field reconstructions from networks of tree-ring proxy data can be used to characterize regional-scale climate changes, reveal spatial anomaly patterns associated with atmospheric circulation changes, radiative forcing, and large-scale modes of ocean-atmosphere variability, and provide spatiotemporal targets for climate model comparison and evaluation. Here we use a multiproxy network of tree-ring chronologies to reconstruct spatially resolved warm season (May-August) mean temperatures across the extratropical Northern Hemisphere (40-90°N) using Point-by-Point Regression (PPR). The resulting annual maps of temperature anomalies (750-1988 CE) reveal a consistent imprint of volcanism, with 96% of reconstructed grid points experiencing colder conditions following eruptions. Solar influences are detected at the bicentennial (de Vries) frequency, although at other time scales the influence of insolation variability is weak. Approximately 90% of reconstructed grid points show warmer temperatures during the Medieval Climate Anomaly when compared to the Little Ice Age, although the magnitude varies spatially across the hemisphere. Estimates of field reconstruction skill through time and over space can guide future temporal extension and spatial expansion of the proxy network.

  20. Modelling Spatial Compositional Data: Reconstructions of past land cover and uncertainties

    DEFF Research Database (Denmark)

    Pirzamanbein, Behnaz; Lindström, Johan; Poska, Anneli

    2018-01-01

    In this paper, we construct a hierarchical model for spatial compositional data, which is used to reconstruct past land-cover compositions (in terms of coniferous forest, broadleaved forest, and unforested/open land) for five time periods during the past 6,000 years over Europe. The model...... to a fast MCMC algorithm. Reconstructions are obtained by combining pollen-based estimates of vegetation cover at a limited number of locations with scenarios of past deforestation and output from a dynamic vegetation model. To evaluate uncertainties in the predictions a novel way of constructing joint...... confidence regions for the entire composition at each prediction location is proposed. The hierarchical model's ability to reconstruct past land cover is evaluated through cross validation for all time periods, and by comparing reconstructions for the recent past to a present day European forest map...

  1. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    International Nuclear Information System (INIS)

    Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok

    2016-01-01

    Mathematical models provide a mathematical description of neuron activity, which helps to better understand and quantify the neural computations and corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models describing the input-output system, to achieve the reconstruction of the neuronal input. The reconstruction process is divided into two steps. First, the neuronal spiking events are considered as a Gamma stochastic process; the scale parameter and the shape parameter of the Gamma process are defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulated data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. The results show that the estimated input parameters differ markedly across three different frequencies of acupuncture stimulation, and that the higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.

  2. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    Energy Technology Data Exchange (ETDEWEB)

    Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin, E-mail: dengbin@tju.edu.cn; Chan, Wai-lok [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)

    2016-06-15

    Mathematical models provide a mathematical description of neuron activity, which helps to better understand and quantify the neural computations and corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models describing the input-output system, to achieve the reconstruction of the neuronal input. The reconstruction process is divided into two steps. First, the neuronal spiking events are considered as a Gamma stochastic process; the scale parameter and the shape parameter of the Gamma process are defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulated data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. The results show that the estimated input parameters differ markedly across three different frequencies of acupuncture stimulation, and that the higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.
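
    As a generic illustration of the two ingredients named above - a leaky integrate-and-fire response model and a Gamma description of the spike train - consider the sketch below. The parameter values and the method-of-moments estimates are assumptions for the example; the paper uses a state-space estimator and its own conversion formulas.

        import numpy as np

        def lif_spike_times(mu, sigma, t_max=10.0, dt=1e-3, tau=0.02, v_th=1.0):
            """Leaky integrate-and-fire neuron with noisy drive (Euler-Maruyama);
            mu and sigma play the role of the two temporal input parameters."""
            rng = np.random.default_rng(6)
            v, t, spikes = 0.0, 0.0, []
            while t < t_max:
                v += dt * (-v / tau + mu) + sigma * np.sqrt(dt) * rng.standard_normal()
                if v >= v_th:
                    spikes.append(t)
                    v = 0.0                 # reset after the spike
                t += dt
            return np.asarray(spikes)

        # Characterise the spike train as a Gamma process via the inter-spike
        # intervals (method-of-moments estimates of shape and scale).
        isi = np.diff(lif_spike_times(mu=60.0, sigma=0.5))
        shape = isi.mean() ** 2 / isi.var()
        scale = isi.var() / isi.mean()
        print(f"Gamma shape ~ {shape:.2f}, scale ~ {scale:.4f} s")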

  3. Reconstruction of the central carbon metabolism of Aspergillus niger

    DEFF Research Database (Denmark)

    David, Helga; Åkesson, Mats Fredrik; Nielsen, Jens

    2003-01-01

    The topology of central carbon metabolism of Aspergillus niger was identified and the metabolic network reconstructed, by integrating genomic, biochemical and physiological information available for this microorganism and other related fungi. The reconstructed network may serve as a valuable...... of metabolic fluxes using metabolite balancing. This framework was employed to perform an in silico characterisation of the phenotypic behaviour of A. niger grown on different carbon sources. The effects on growth of single reaction deletions were assessed and essential biochemical reactions were identified...... for different carbon sources. Furthermore, application of the stoichiometric model for assessing the metabolic capabilities of A. niger to produce metabolites was evaluated by using succinate production as a case study....
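
    The "estimation of metabolic fluxes using metabolite balancing" mentioned above is, in its standard form, a linear program constrained by steady state (S v = 0), which also supports in silico deletion studies. A minimal sketch on a toy network (not the A. niger reconstruction):

        import numpy as np
        from scipy.optimize import linprog

        # Toy stoichiometric matrix (metabolites x reactions):
        # R1 uptake->A, R2 A->B, R3 A->C, R4 B+C->biomass, R5 biomass export.
        S = np.array([
            [ 1, -1, -1,  0,  0],   # A
            [ 0,  1,  0, -1,  0],   # B
            [ 0,  0,  1, -1,  0],   # C
            [ 0,  0,  0,  1, -1],   # biomass
        ])
        c = np.array([0, 0, 0, 0, -1.0])     # maximise export (minimise -v5)
        bounds = [(0, 10), (0, None), (0, None), (0, None), (0, None)]

        # Metabolite balancing: steady state requires S v = 0.
        res = linprog(c, A_eq=S, b_eq=np.zeros(4), bounds=bounds)
        print("optimal fluxes:", np.round(res.x, 2))

        # In silico single-reaction deletion: force the flux of R2 to zero.
        ko = list(bounds); ko[1] = (0, 0)
        print("growth with R2 deleted:",
              -linprog(c, A_eq=S, b_eq=np.zeros(4), bounds=ko).fun)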

  4. Brain Network Modelling

    DEFF Research Database (Denmark)

    Andersen, Kasper Winther

    Three main topics are presented in this thesis. The first and largest topic concerns network modelling of functional Magnetic Resonance Imaging (fMRI) and Diffusion Weighted Imaging (DWI). In particular nonparametric Bayesian methods are used to model brain networks derived from resting state f...... for their ability to reproduce node clustering and predict unseen data. Comparing the models on whole brain networks, BCD and IRM showed better reproducibility and predictability than IDM, suggesting that resting state networks exhibit community structure. This also points to the importance of using models, which...... allow for complex interactions between all pairs of clusters. In addition, it is demonstrated how the IRM can be used for segmenting brain structures into functionally coherent clusters. A new nonparametric Bayesian network model is presented. The model builds upon the IRM and can be used to infer...

  5. Active numerical model of human body for reconstruction of falls from height.

    Science.gov (United States)

    Milanowicz, Marcin; Kędzior, Krzysztof

    2017-01-01

    Falls from height constitute the largest group of incidents out of the approximately 90,000 occupational accidents occurring each year in Poland. Reconstruction of the exact course of a fall from height is generally difficult due to the lack of sufficient information from the accident scene. This usually results in several contradictory versions of an incident and impedes, for example, the determination of liability in a judicial process. In similar situations, in many areas of human activity, researchers apply numerical simulation, using it to model physical phenomena and reconstruct their real course over time; e.g., numerical human body models are frequently used for the investigation and reconstruction of road accidents. However, these models are validated in terms of specific road traffic accidents and are considerably limited when applied to the reconstruction of other types of accidents. The objective of the study was to develop an active numerical human body model to be used for the reconstruction of accidents associated with falling from height. Development of the model involved extension and adaptation of the existing Pedestrian human body model (available in the MADYMO package database) for the purposes of reconstruction of falls from height, by taking into account the human reaction to the loss of balance. The model was developed using the results of experimental tests of the initial phase of the fall from height. An active numerical human body model covering 28 sets of initial conditions related to various human reactions to the loss of balance was developed. The application of the model was illustrated by using it to reconstruct a real fall from height: from among the 28 sets of initial conditions, the set whose application made it possible to reconstruct the most probable version of the incident was selected, based on comparison of the reconstruction results with the information contained in the accident report. Results in the form of estimated

  6. Human eyeball model reconstruction and quantitative analysis.

    Science.gov (United States)

    Xing, Qi; Wei, Qi

    2014-01-01

    Determining the shape of the eyeball is important for diagnosing eyeball diseases like myopia. In this paper, we present an automatic approach to precisely reconstruct the three-dimensional geometric shape of the eyeball from MR images. The model development pipeline involves image segmentation, registration, B-spline surface fitting and subdivision surface fitting, none of which requires manual interaction. From the high-resolution resultant models, geometric characteristics of the eyeball can be accurately quantified and analyzed. In addition to the eight metrics commonly used in existing studies, we propose two novel metrics, Gaussian curvature analysis and sphere distance deviation, to quantify the cornea shape and the whole eyeball surface, respectively. The experimental results show that the reconstructed eyeball models accurately represent the complex morphology of the eye. The ten metrics parameterize the eyeball among different subjects, which can potentially be used for eye disease diagnosis.

  7. The performance of diphoton primary vertex reconstruction methods in H → γγ+Met channel of ATLAS experiment

    Science.gov (United States)

    Tomiwa, K. G.

    2017-09-01

    The search for new physics in the H → γγ+Met channel relies on how well the missing transverse energy is reconstructed. The Met algorithm used by the ATLAS experiment in turn uses input variables such as photons and jets, which depend on the reconstruction of the primary vertex. This document presents the performance of two di-photon vertex reconstruction algorithms (the hardest vertex method and the Neural Network method). Comparing the performance of these algorithms on the nominal Standard Model sample and the Beyond Standard Model sample, we see that the Neural Network method of primary vertex selection performs better overall than the hardest vertex method.

  8. Image quality of iterative reconstruction in cranial CT imaging: comparison of model-based iterative reconstruction (MBIR) and adaptive statistical iterative reconstruction (ASiR).

    Science.gov (United States)

    Notohamiprodjo, S; Deak, Z; Meurer, F; Maertz, F; Mueck, F G; Geyer, L L; Wirth, S

    2015-01-01

    The purpose of this study was to compare cranial CT (CCT) image quality (IQ) of the MBIR algorithm with standard iterative reconstruction (ASiR). In this institutional review board (IRB)-approved study, raw data sets of 100 unenhanced CCT examinations (120 kV, 50-260 mAs, 20 mm collimation, 0.984 pitch) were reconstructed with both ASiR and MBIR. Signal-to-noise (SNR) and contrast-to-noise (CNR) ratios were calculated from attenuation values measured in the caudate nucleus, frontal white matter, anterior ventricle horn, fourth ventricle, and pons. Two radiologists, who were blinded to the reconstruction algorithms, evaluated anonymized multiplanar reformations of 2.5 mm with respect to the depiction of different parenchymal structures and the impact of artefacts on IQ with a five-point scale (0: unacceptable, 1: less than average, 2: average, 3: above average, 4: excellent). MBIR decreased artefacts more effectively than ASiR, and MBIR images were rated with significantly higher IQ scores than ASiR images. As CCT is an examination that is frequently required, the use of MBIR may allow for a substantial reduction of the radiation exposure caused by medical diagnostics. • Model-based iterative reconstruction (MBIR) effectively decreased artefacts in cranial CT. • MBIR-reconstructed images were rated with significantly higher scores for image quality. • Model-based iterative reconstruction may allow reduced-dose diagnostic examination protocols.

  9. A Markov model for the temporal dynamics of balanced random networks of finite size

    Science.gov (United States)

    Lagzi, Fereshteh; Rotter, Stefan

    2014-01-01

    The balanced state of recurrent networks of excitatory and inhibitory spiking neurons is characterized by fluctuations of population activity about an attractive fixed point. Numerical simulations show that these dynamics are essentially nonlinear, and that the intrinsic noise (self-generated fluctuations) in networks of finite size is state-dependent. Therefore, stochastic differential equations with additive noise of fixed amplitude cannot provide an adequate description of the stochastic dynamics; the noise model should, rather, result from a self-consistent description of the network dynamics. Here, we consider a two-state Markovian neuron model, where spikes correspond to transitions from the active state to the refractory state. Excitatory and inhibitory input to this neuron affects the transition rates between the two states, and the corresponding nonlinear dependencies can be identified directly from numerical simulations of networks of leaky integrate-and-fire neurons, discretized at a time resolution in the sub-millisecond range. Deterministic mean-field equations, and a noise component that depends on the dynamic state of the network, are obtained from this model. The resulting stochastic model reflects the behavior observed in numerical simulations quite well, irrespective of the size of the network. In particular, the strong temporal correlation between the two populations, a hallmark of the balanced state in random recurrent networks, is well represented by our model. Numerical simulations of such networks show that a log-normal distribution of short-term spike counts is a property of balanced random networks with fixed in-degree that has not been considered before, and our model shares this statistical property. Furthermore, the reconstruction of the flow from simulated time series suggests that the mean-field dynamics of finite-size networks are essentially of Wilson-Cowan type. We expect that this novel nonlinear stochastic model of the interaction between
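
    The basic ingredient - a population of two-state (active/refractory) units whose transition rates depend on the population activity - can be simulated in a few lines. The gain function and rates below are assumptions for the illustration, not the ones identified from the leaky integrate-and-fire simulations in the paper:

        import numpy as np

        def simulate(n=1000, steps=5000, dt=0.1, seed=7):
            """Two-state (active/refractory) Markov units; the activation rate
            depends nonlinearly on the fraction of active units."""
            rng = np.random.default_rng(seed)
            active = np.zeros(n, dtype=bool)
            active[: n // 10] = True
            trace = np.empty(steps)
            for k in range(steps):
                a = active.mean()
                rate_on = 2.0 / (1.0 + np.exp(-8.0 * (a - 0.2)))  # assumed gain
                rate_off = 1.0                                    # spike->refractory
                p_on = 1.0 - np.exp(-rate_on * dt)
                p_off = 1.0 - np.exp(-rate_off * dt)
                u = rng.random(n)
                active = np.where(active, u >= p_off, u < p_on)
                trace[k] = active.mean()
            return trace

        x = simulate()
        print("mean activity:", x.mean(), "finite-size fluctuations:", x.std())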

  10. A dynamic evolutionary clustering perspective: Community detection in signed networks by reconstructing neighbor sets

    Science.gov (United States)

    Chen, Jianrui; Wang, Hua; Wang, Lina; Liu, Weiwei

    2016-04-01

    Community detection in social networks has been intensively studied in recent years. In this paper, a novel similarity measurement is defined according to social balance theory for signed networks. Inter-community positive links are found and deleted due to their low similarity, and the positive neighbor sets are reconstructed by this method. Then, differential equations are proposed to imitate the constantly changing states of nodes: each node updates its state based on the difference between its own state and the average state of its positive neighbors. Nodes in the same community evolve together over time, and nodes in different communities evolve away from each other. Communities are detected ultimately when the states of the nodes become stable. Experiments on real-world and synthetic networks are carried out to verify the detection performance; thorough comparisons demonstrate that the presented method is more efficient than two well-established algorithms.
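
    The state-evolution step can be illustrated with simple consensus dynamics over the reconstructed positive neighbour sets: each node moves toward the average state of its positive neighbours, and nodes sharing a positive component converge to a common value. The toy signed network below is an assumption for the example:

        import numpy as np

        # Toy signed network, two communities: +1 positive, -1 negative link.
        A = np.array([[ 0,  1,  1, -1,  0,  0],
                      [ 1,  0,  1,  0, -1,  0],
                      [ 1,  1,  0,  0,  0, -1],
                      [-1,  0,  0,  0,  1,  1],
                      [ 0, -1,  0,  1,  0,  1],
                      [ 0,  0, -1,  1,  1,  0]])

        # Evolve states over the reconstructed *positive* neighbour sets only:
        # each node moves toward the average state of its positive neighbours.
        pos = (A > 0).astype(float)
        deg = pos.sum(axis=1)
        s = np.random.default_rng(8).standard_normal(6)
        for _ in range(200):
            s = s + 0.2 * (pos @ s / deg - s)

        # Nodes converging to the same value form a community.
        print(np.round(s, 3))   # two distinct plateaus -> two communities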

  11. Summer drought reconstruction in northeastern Spain inferred from a tree ring latewood network since 1734

    Science.gov (United States)

    Tejedor, E.; Saz, M. A.; Esper, J.; Cuadrat, J. M.; de Luis, M.

    2017-08-01

    Drought recurrence in the Mediterranean is regarded as a fundamental factor for socioeconomic development and the resilience of natural systems in the context of global change. However, knowledge of past droughts has been hampered by the absence of high-resolution proxies. We present a drought reconstruction for the northeast of the Iberian Peninsula based on a new dendrochronology network and the Standardized Precipitation-Evapotranspiration Index (SPEI). A total of 774 latewood width series from 387 trees of P. sylvestris and P. uncinata were combined into an interregional chronology. The new chronology, calibrated against gridded climate data, reveals a robust relationship with the SPEI, representing drought conditions of July and August. We developed a summer drought reconstruction for the period 1734-2013 representative of the northeastern and central Iberian Peninsula. We identified 16 extremely dry and 17 extremely wet summers and four decadal-scale dry and wet periods, including 2003-2013 as the driest episode of the reconstruction.

  12. On a multistable competitive network model in the case of an inhomogeneous growth rate spectrum: With an application to priming

    International Nuclear Information System (INIS)

    Frank, T.D.

    2009-01-01

    A stability analysis of a network model proposed by Haken is carried out for the case of an inhomogeneous spectrum of growth rates. The degree of multistability as a function of the coupling strength between network units is determined. An application to priming shows that the network can reproduce the fundamental phenomenon that primed items have shorter recall latencies than non-primed items, when it is assumed that learning affects the inhomogeneity of the growth rate spectrum.

  13. Reconstructing gene regulatory networks from knock-out data using Gaussian Noise Model and Pearson Correlation Coefficient.

    Science.gov (United States)

    Mohamed Salleh, Faridah Hani; Arif, Shereena Mohd; Zainudin, Suhaila; Firdaus-Raih, Mohd

    2015-12-01

    A gene regulatory network (GRN) is a large and complex network consisting of interacting elements that, over time, affect each other's state. The dynamics of complex gene regulatory processes are difficult to understand using intuitive approaches alone. To overcome this problem, we propose an algorithm for inferring regulatory interactions from knock-out data using a Gaussian noise model combined with the Pearson Correlation Coefficient (PCC). Several problems relating to GRN construction are outlined in this paper. We demonstrate the ability of our proposed method to (1) predict the presence of regulatory interactions between genes, (2) their directionality, and (3) their states (activation or suppression). The algorithm was applied to networks of 10 and 50 genes from the DREAM3 datasets and networks of 10 genes from the DREAM4 datasets. The predicted networks were evaluated based on AUROC and AUPR. We found that high false positive rates were generated by our GRN prediction method because indirect regulations were wrongly predicted as true relationships. We achieved satisfactory results, as the majority of sub-networks achieved AUROC values above 0.5. Copyright © 2015 Elsevier Ltd. All rights reserved.
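
    One generic way to exploit knock-out data with a Gaussian model is to flag target genes whose expression under a deletion deviates strongly from wild-type variability, which yields both edge direction and sign. The sketch below illustrates only that component (the PCC scoring used in the paper is not reproduced), on invented data:

        import numpy as np

        def infer_edges(expr_wt, expr_ko, z_thresh=2.0):
            """Flag targets whose expression under knockout of gene j deviates
            from wild-type variability; gives direction (j -> i) and sign."""
            mu = expr_wt.mean(axis=0)
            sd = expr_wt.std(axis=0) + 1e-9
            z = (expr_ko - mu) / sd          # row j: expression after deleting j
            edges = []
            for j in range(expr_ko.shape[0]):
                for i in range(expr_ko.shape[1]):
                    if i != j and abs(z[j, i]) > z_thresh:
                        # deleting an activator lowers its target, and vice versa
                        edges.append((j, i, "activation" if z[j, i] < 0
                                      else "suppression"))
            return edges

        # Invented data: 20 wild-type replicates, one profile per knockout.
        rng = np.random.default_rng(9)
        wt = 1.0 + 0.05 * rng.standard_normal((20, 3))
        ko = np.array([[1.0, 0.2, 1.0],      # deleting gene 0 silences gene 1
                       [1.0, 1.0, 1.8],      # deleting gene 1 de-represses gene 2
                       [1.0, 1.0, 1.0]])
        print(infer_edges(wt, ko))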

  14. A new algorithm for H → ττ invariant mass reconstruction using Deep Neural Networks

    CERN Document Server

    Dietrich, Felix

    2017-01-01

    Reconstructing the invariant mass in a Higgs boson decay event containing tau leptons turns out to be a challenging endeavour. The aim of this summer student project is to implement a new algorithm for this task, using deep neural networks and machine learning. The results are compared to SVFit, an existing algorithm that uses dynamical likelihood techniques. A neural network is found that reaches the accuracy of SVFit at low masses and even surpasses it at higher masses, while at the same time providing results a thousand times faster.
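
    A regression network of this kind can be prototyped quickly on toy events; the sketch below uses scikit-learn rather than a deep-learning stack, and the simplified di-tau kinematics (visible momentum fractions plus a smeared MET) are entirely invented for the illustration:

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(10)

        # Invented di-tau toy events: true mass, visible momentum fractions for
        # the two taus, and a smeared MET standing in for the neutrinos.
        n = 20000
        m_true = rng.uniform(60, 200, n)
        vis1, vis2 = rng.uniform(0.3, 0.9, (2, n))
        met = m_true * (2 - vis1 - vis2) / 2 + 5 * rng.standard_normal(n)
        X = np.column_stack([m_true * vis1 / 2, m_true * vis2 / 2, met])

        X_tr, X_te, y_tr, y_te = train_test_split(X, m_true, test_size=0.2,
                                                  random_state=0)
        net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                           random_state=0)
        net.fit(X_tr, y_tr)
        rms = np.sqrt(np.mean((net.predict(X_te) - y_te) ** 2))
        print("toy mass resolution (RMS): %.1f GeV" % rms)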

  15. Assessing Women's Preferences and Preference Modeling for Breast Reconstruction Decision-Making.

    Science.gov (United States)

    Sun, Clement S; Cantor, Scott B; Reece, Gregory P; Crosby, Melissa A; Fingeret, Michelle C; Markey, Mia K

    2014-03-01

    Women considering breast reconstruction must make challenging trade-offs among issues that often conflict. It may be useful to quantify possible outcomes using a single summary measure to aid a breast cancer patient in choosing a form of breast reconstruction. In this study, we used multiattribute utility theory to combine multiple objectives into a summary value, using nine different preference models. We elicited the preferences of 36 women, aged 32 or older with no history of breast cancer, for the patient-reported outcome measures of breast satisfaction, psychosocial well-being, chest well-being, abdominal well-being, and sexual well-being as measured by the BREAST-Q, in addition to time lost to reconstruction and out-of-pocket cost. Participants ranked hypothetical breast reconstruction outcomes. We examined each multiattribute utility preference model and assessed how often each model agreed with participants' rankings. The median amount of time required to assess preferences was 34 minutes. Agreement of the nine preference models with the participants ranged from 75.9% to 78.9%. None of the preference models performed significantly worse than the best-performing risk-averse multiplicative model. We hypothesize an average theoretical agreement of 94.6% for this model if participant error is included. Model agreement showed a statistically significant positive correlation with a more unequal distribution of weight across the seven attributes. We recommend the risk-averse multiplicative model for modeling the preferences of patients considering different forms of breast reconstruction because it agreed most often with the participants in this study.
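
    The multiplicative model of multiattribute utility theory combines single-attribute utilities u_i and weights k_i through 1 + K·U = Π(1 + K·k_i·u_i), where the scaling constant K solves 1 + K = Π(1 + K·k_i). A minimal sketch with hypothetical weights for the seven attributes:

        import numpy as np
        from scipy.optimize import brentq

        def master_K(k):
            """Solve 1 + K = prod(1 + K*k_i) for the scaling constant K."""
            f = lambda K: np.prod(1.0 + K * np.asarray(k)) - (1.0 + K)
            # sum(k) < 1 implies K > 0; sum(k) > 1 implies -1 < K < 0.
            return brentq(f, 1e-9, 1e6) if sum(k) < 1 else brentq(f, -1 + 1e-9, -1e-9)

        def multiplicative_utility(u, k):
            """1 + K*U = prod(1 + K*k_i*u_i), with u_i in [0, 1]."""
            K = master_K(k)
            return (np.prod(1.0 + K * np.asarray(k) * np.asarray(u)) - 1.0) / K

        # Hypothetical weights for the seven attributes (five BREAST-Q domains,
        # time lost to reconstruction, out-of-pocket cost) and one outcome.
        k = [0.30, 0.20, 0.15, 0.10, 0.10, 0.05, 0.05]
        u = [0.9, 0.7, 0.8, 0.6, 0.7, 0.5, 0.4]
        print(round(multiplicative_utility(u, k), 3))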

  16. Verifying three-dimensional skull model reconstruction using cranial index of symmetry.

    Science.gov (United States)

    Kung, Woon-Man; Chen, Shuo-Tsung; Lin, Chung-Hsiang; Lu, Yu-Mei; Chen, Tzu-Hsuan; Lin, Muh-Shi

    2013-01-01

    Difficulty exists in scalp adaptation for cranioplasty with customized computer-assisted design/manufacturing (CAD/CAM) implants in situations of excessive wound tension and sub-cranioplasty dead space. To solve this clinical problem, the CAD/CAM technique should include algorithms to reconstruct a depressed contour to cover the skull defect. Satisfactory CAM-derived alloplastic implants are based on highly accurate three-dimensional (3-D) CAD modeling. Thus, it is quite important to establish a symmetrically regular CAD/CAM reconstruction prior to depressing the contour. The purpose of this study is to verify the aesthetic outcomes of CAD models with regular contours using the cranial index of symmetry (CIS). From January 2011 to June 2012, decompressive craniectomy (DC) was performed for 15 consecutive patients in our institute. 3-D CAD models of the skull defects were reconstructed using commercial software and checked for symmetry using CIS scores. CIS scores of the CAD reconstructions were 99.24±0.004% (range 98.47-99.84). These CIS scores were statistically significantly greater than 95%, identical to 99.5%, but lower than 99.6% (matched-pairs signed-rank test). These data demonstrate the highly accurate symmetry of the CAD models with regular contours. CIS calculation is beneficial for assessing the aesthetic outcomes of CAD-reconstructed skulls in terms of cranial symmetry. This enables further accurate CAD models and CAM cranial implants with depressed contours, which are essential in patients with difficult scalp adaptation.

  17. Reconstructing plateau icefields: Evaluating empirical and modelled approaches

    Science.gov (United States)

    Pearce, Danni; Rea, Brice; Barr, Iestyn

    2013-04-01

    Glacial landforms are widely utilised to reconstruct former glacier geometries, with a common aim to estimate Equilibrium Line Altitudes (ELAs) and, from these, infer palaeoclimatic conditions. Such inferences may be studied on a regional scale and used to correlate climatic gradients across large distances (e.g., Europe). In Britain, the traditional approach uses geomorphological mapping with hand contouring to derive the palaeo-ice surface. More recently, ice-surface modelling has enabled equilibrium-profile reconstructions tuned using the geomorphology. Both methods permit derivation of palaeo-climate, but no study has compared the two methods for the same ice-mass. This is important because either approach may result in differences in glacier limits, ELAs and palaeo-climate. This research uses both methods to reconstruct a plateau icefield and quantifies the results from a cartographic and geometrical perspective. Detailed geomorphological mapping of the Tweedsmuir Hills in the Southern Uplands, Scotland (c. 320 km2) was conducted to examine the extent of Younger Dryas (YD; 12.9-11.7 cal. ka BP) glaciation. Landform evidence indicates a plateau icefield configuration of two separate ice-masses during the YD, covering areas of c. 45 km2 and 25 km2. The interpreted age is supported by new radiocarbon dating of basal stratigraphies and Terrestrial Cosmogenic Nuclide Analysis (TCNA) of in situ boulders. Both techniques produce similar configurations; however, the model results in a coarser resolution, requiring further processing if a cartographic map is required. When landforms are absent or fragmentary (e.g., trimlines and lateral moraines), as in many accumulation zones on plateau icefields, the geomorphological approach increasingly relies on extrapolation between lines of evidence and on the individual's perception of how the ice-mass ought to look. In some locations this results in an underestimation of the ice surface compared to the modelled surface, most likely due to

  18. CoryneRegNet: an ontology-based data warehouse of corynebacterial transcription factors and regulatory networks.

    Science.gov (United States)

    Baumbach, Jan; Brinkrolf, Karina; Czaja, Lisa F; Rahmann, Sven; Tauch, Andreas

    2006-02-14

    The application of DNA microarray technology in post-genomic analysis of bacterial genome sequences has allowed the generation of huge amounts of data related to regulatory networks. These data, along with literature-derived knowledge on the regulation of gene expression, have opened the way for genome-wide reconstruction of transcriptional regulatory networks. These large-scale reconstructions can be converted into in silico models of bacterial cells that allow a systematic analysis of network behavior in response to changing environmental conditions. CoryneRegNet was designed to facilitate the genome-wide reconstruction of transcriptional regulatory networks of corynebacteria relevant in biotechnology and human medicine. During the import and integration of data derived from experimental studies or literature knowledge, CoryneRegNet generates links to genome annotations, to identified transcription factors and to the corresponding cis-regulatory elements. CoryneRegNet is based on a multi-layered, hierarchical and modular concept of transcriptional regulation and was implemented using the relational database management system MySQL and an ontology-based data structure. Reconstructed regulatory networks can be visualized using the yFiles JAVA graph library. As an application example of CoryneRegNet, we have reconstructed the global transcriptional regulation of a cellular module involved in the SOS and stress response of corynebacteria. CoryneRegNet is an ontology-based data warehouse that allows pertinent data management of regulatory interactions along with the genome-scale reconstruction of transcriptional regulatory networks. These models can further be combined with metabolic networks to build integrated models of cellular function including both metabolism and its transcriptional regulation.

  19. MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction

    International Nuclear Information System (INIS)

    Chen, G; Pan, X; Stayman, J; Samei, E

    2014-01-01

    Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical

  20. Research on compressive sensing reconstruction algorithm based on total variation model

    Science.gov (United States)

    Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin

    2017-12-01

    Compressed sensing, which breaks through the constraints of the Nyquist sampling theorem, provides a strong theoretical basis for carrying out compressive sampling of image signals. In imaging procedures based on compressed sensing theory, not only is the required storage space reduced, but the demand for detector resolution is also greatly reduced. Using the sparsity of the image signal and solving the mathematical model of inverse reconstruction, super-resolution imaging can be realized. The reconstruction algorithm is the most critical part of compressive sensing and to a large extent determines the accuracy of the reconstructed image. A reconstruction algorithm based on the total variation (TV) model is well suited to the compressive reconstruction of two-dimensional images and recovers edge information well. To verify the performance of the algorithm, the reconstruction results of the TV-based algorithm are simulated and analyzed under different coding modes to verify the stability of the algorithm, and typical reconstruction algorithms are compared under the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian function term is added and the optimal value is solved by the alternating direction method. Experimental results show that, compared with traditional classical algorithms, the TV-based reconstruction algorithm has great advantages: at low measurement rates it can quickly and accurately recover the target image.
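
    A minimal sketch may help make the TV objective concrete. The paper solves the model with an augmented Lagrangian term and the alternating direction method; the sketch below instead minimizes the same least-squares-plus-TV objective by plain gradient descent on a smoothed (Charbonnier) TV term, purely for illustration. The image size, measurement matrix, regularization weight and step size are toy assumptions, not the authors' setup.

        import numpy as np

        def grad_tv(x, eps=1e-6):
            # Gradient of a smoothed (Charbonnier) total-variation penalty
            dx = np.diff(x, axis=1, append=x[:, -1:])
            dy = np.diff(x, axis=0, append=x[-1:, :])
            mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
            px, py = dx / mag, dy / mag
            # Negative discrete divergence of the normalised gradient field
            return -((px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0)))

        def tv_reconstruct(A, y, shape, lam=0.02, step=0.5, iters=2000):
            # Minimise 0.5*||Ax - y||^2 + lam*TV(x) by gradient descent
            x = np.zeros(shape)
            for _ in range(iters):
                resid = A @ x.ravel() - y
                g = (A.T @ resid).reshape(shape) + lam * grad_tv(x)
                x -= step * g
            return x

        # Toy example: 16x16 piecewise-constant image, 40% random measurements
        rng = np.random.default_rng(0)
        truth = np.zeros((16, 16))
        truth[4:12, 4:12] = 1.0
        A = rng.standard_normal((102, 256)) / np.sqrt(256.0)
        y = A @ truth.ravel()
        rec = tv_reconstruct(A, y, truth.shape)
        print("relative error:", np.linalg.norm(rec - truth) / np.linalg.norm(truth))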

  1. AUTOMATIC TEXTURE RECONSTRUCTION OF 3D CITY MODEL FROM OBLIQUE IMAGES

    Directory of Open Access Journals (Sweden)

    J. Kang

    2016-06-01

    Full Text Available In recent years, photorealistic 3D city models have become increasingly important in various geospatial applications related to virtual city tourism, 3D GIS, urban planning and real-estate management. Besides the acquisition of high-precision 3D geometric data, texture reconstruction is also a crucial step for generating high-quality and visually realistic 3D models. However, most existing texture reconstruction approaches lead to texture fragmentation and memory inefficiency. In this paper, we introduce an automatic framework of texture reconstruction to generate textures from oblique images for photorealistic visualization. Our approach includes three major steps: mesh parameterization, texture atlas generation and texture blending. Firstly, a mesh parameterization procedure comprising mesh segmentation and mesh unfolding is performed to reduce geometric distortion in the process of mapping 2D texture to the 3D model. Secondly, in the texture atlas generation step, the texture of each segmented region in the texture domain is reconstructed from all visible images with exterior and interior orientation parameters. Thirdly, to avoid color discontinuities at boundaries between texture regions, the final texture map is generated by blending texture maps from several corresponding images. We evaluated our texture reconstruction framework on a dataset of a city. The resulting mesh model can be textured with the created texture atlas without resampling. Experimental results show that our method can effectively mitigate the occurrence of texture fragmentation. It is demonstrated that the proposed framework is effective and useful for automatic texture reconstruction of 3D city models.

  2. A new method of morphological comparison for bony reconstructive surgery: maxillary reconstruction using scapular tip bone

    Science.gov (United States)

    Chan, Harley; Gilbert, Ralph W.; Pagedar, Nitin A.; Daly, Michael J.; Irish, Jonathan C.; Siewerdsen, Jeffrey H.

    2010-02-01

    Esthetic appearance is one of the most important factors for reconstructive surgery. The current practice of maxillary reconstruction chooses radial forearm, fibula or iliac crest osteocutaneous flaps to recreate the three-dimensional complex structures of the palate and maxilla. However, these bone flaps lack shape similarity to the palate and result in a less satisfactory esthetic outcome. Considering similarity factors and vasculature advantages, reconstructive surgeons have recently explored the use of scapular tip myo-osseous free flaps to restore the excised site. We have developed a new method that quantitatively evaluates the morphological similarity of the scapular tip bone and palate based on a diagnostic volumetric computed tomography (CT) image. This quantitative result was further interpreted as a color map rendered on the surface of a three-dimensional computer model. For surgical planning, this color interpretation could potentially assist the surgeon in orienting the bone flaps for the best fit at the reconstruction site. With approval from the Research Ethics Board (REB) of the University Health Network, we conducted a retrospective analysis of CT images obtained from 10 patients. Each patient had CT scans of the maxilla and chest acquired on the same day. Based on this image set, we simulated total, subtotal and hemi-palate reconstruction. The simulation procedure included volume segmentation, converting the segmented volume to a stereolithography (STL) model, manual registration, and computation of minimum geometric distances and curvature between STL models. Across the 10 patients' data, we found the overall root-mean-square (RMS) conformance was 3.71 +/- 0.16 mm
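
    The conformance metric above can be illustrated compactly. The record does not specify the authors' implementation, so the following is a minimal stand-in: two surfaces are represented as point samples and the RMS of nearest-neighbour distances is computed with SciPy's cKDTree; the point counts, noise level and units are illustrative assumptions.

        import numpy as np
        from scipy.spatial import cKDTree

        def rms_conformance(flap_pts, target_pts):
            # RMS of minimum distances from flap surface samples to the target surface
            tree = cKDTree(target_pts)
            d, _ = tree.query(flap_pts)      # nearest-neighbour distance per point
            return np.sqrt(np.mean(d ** 2))

        # Toy example: a target surface sampling and a perturbed 'flap' sampling (mm)
        rng = np.random.default_rng(1)
        palate = rng.uniform(0.0, 50.0, size=(2000, 3))
        flap = palate + rng.normal(scale=2.0, size=palate.shape)
        print(f"RMS conformance: {rms_conformance(flap, palate):.2f} mm")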

  3. Reconstruction of Consistent 3d CAD Models from Point Cloud Data Using a Priori CAD Models

    Science.gov (United States)

    Bey, A.; Chaine, R.; Marc, R.; Thibault, G.; Akkouche, S.

    2011-09-01

    We address the reconstruction of 3D CAD models from point cloud data acquired in industrial environments, using a pre-existing 3D model as an initial estimate of the scene to be processed. Indeed, this prior knowledge can be used to drive the reconstruction so as to generate an accurate 3D model matching the point cloud. We focus in particular on the cylindrical parts of the 3D models. We propose to state the problem in a probabilistic framework: we search for the 3D model which maximizes a probability taking several constraints into account, such as the relevancy with respect to the point cloud and the a priori 3D model, and the consistency of the reconstructed model. The resulting optimization problem can then be handled using a stochastic exploration of the solution space, based on the random insertion of elements in the configuration under construction, coupled with a greedy management of conflicts which efficiently improves the configuration at each step. We show that this approach provides reliable reconstructed 3D models by presenting results on industrial data sets.

  4. Porous media: Analysis, reconstruction and percolation

    DEFF Research Database (Denmark)

    Rogon, Thomas Alexander

    1995-01-01

    functions of Gaussian fields and spatial autocorrelation functions of binary fields. An enhanced approach which embodies semi-analytical solutions for the conversions has been made. The scope and limitations of the method have been analysed in terms of realizability of different model correlation functions...... stereological methods. The measured sample autocorrelations are modeled by analytical correlation functions. A method for simulating porous networks from their porosity and spatial correlation originally developed by Joshi (14) is presented. This method is based on a conversion between spatial autocorrelation...... in binary fields. Percolation threshold of reconstructed porous media has been determined for different discretizations of a selected model correlation function. Also critical exponents such as the correlation length exponent v, the strength of the infinite network and the mean size of finite clusters have...

  5. A Taxonomic Reduced-Space Pollen Model for Paleoclimate Reconstruction

    Science.gov (United States)

    Wahl, E. R.; Schoelzel, C.

    2010-12-01

    Paleoenvironmental reconstruction from fossil pollen often attempts to take advantage of the rich taxonomic diversity in such data. Here, a taxonomically "reduced-space" reconstruction model is explored that is parsimonious in introducing parameters to be estimated within a Bayesian Hierarchical Modeling (BHM) context. This work involves a refinement of the traditional pollen ratio method. This method is useful when one (or a few) dominant pollen type(s) in a region have a strong positive correlation with a climate variable of interest and another (or a few) dominant pollen type(s) have a strong negative correlation. When, e.g., counts of pollen taxa a and b (r > 0) are combined with pollen types c and d (r < 0) to form a ratio, the relationship with climate can be modeled using a logistic generalized linear model (GLM). The GLM can readily model this relationship in the forward form, pollen = g(climate), which is more physically realistic than inverse models often used in paleoclimate reconstruction [climate = f(pollen)]. The specification of the model is: r_num ~ Bin(n, p), where E(r|T) = p = exp(η)/[1+exp(η)] and η = α + βT; r is the pollen ratio formed as above, r_num is the ratio numerator, n is the ratio denominator (i.e., the sum of pollen counts), the denominator-specific count is (n - r_num), and T is the temperature at each site corresponding to a specific value of r. Ecological and empirical screening identified the model (Spruce+Birch) / (Spruce+Birch+Oak+Hickory) for use in temperate eastern N. America. α and β were estimated using both "traditional" and Bayesian GLM algorithms (in R). Although it includes only four pollen types, the ratio model yields more explained variation (~80%) in the pollen-temperature relationship of the study region than a 64-taxon modern analog technique (MAT). Thus, the new pollen ratio method represents an information-rich, reduced-space data model that can be efficiently employed in a BHM framework. The ratio model can directly reconstruct past temperature by solving the GLM equations
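
    The forward GLM above is straightforward to reproduce. The authors estimated it in R; the sketch below mirrors the same specification in Python with statsmodels on synthetic counts (the temperature range, sample sizes and true coefficients are invented for illustration), and then inverts the fitted link to reconstruct temperature from an observed ratio.

        import numpy as np
        import statsmodels.api as sm

        # Synthetic site data: temperature T (deg C) and pollen counts per site
        rng = np.random.default_rng(2)
        T = rng.uniform(5.0, 25.0, size=120)
        n = rng.integers(200, 400, size=120)       # ratio denominator (total counts)
        alpha_true, beta_true = -4.0, 0.3
        p = 1.0 / (1.0 + np.exp(-(alpha_true + beta_true * T)))
        r_num = rng.binomial(n, p)                 # ratio numerator (e.g. Spruce+Birch)

        # Forward model: r_num ~ Bin(n, p), logit(p) = alpha + beta*T
        endog = np.column_stack([r_num, n - r_num])    # (successes, failures)
        fit = sm.GLM(endog, sm.add_constant(T), family=sm.families.Binomial()).fit()
        alpha_hat, beta_hat = fit.params
        print("estimated alpha, beta:", round(alpha_hat, 2), round(beta_hat, 3))

        # Direct reconstruction: invert the link for an observed fossil ratio
        r_obs = 0.55
        T_rec = (np.log(r_obs / (1.0 - r_obs)) - alpha_hat) / beta_hat
        print(f"reconstructed temperature: {T_rec:.1f} deg C")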

  6. Reconstruction of neutron spectra using neural networks starting from the Bonner spheres spectrometric system

    International Nuclear Information System (INIS)

    Ortiz R, J.M.; Martinez B, M.R.; Arteaga A, T.; Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.

    2005-01-01

    Artificial neural networks (ANN) have been used successfully to solve a wide variety of problems. However, determining an appropriate set of structural and learning parameters for them remains a difficult task. In contrast to previous works, here a set of neural networks is designed to reconstruct neutron spectra from the counting rates of the detectors of the Bonner sphere system, using a systematic experimental strategy for the robust design of feed-forward multilayer neural networks trained by backpropagation. The robust design is formulated as a Taguchi parameter design problem. A set of 53 neutron spectra compiled by the International Atomic Energy Agency was selected, the counting rates that would be produced in a Bonner sphere system were calculated, and the set was arranged according to the wave form of the spectra. With these data, and applying the Taguchi methodology to determine the best parameters of the network topology, the network was trained and tested with the spectra. (Author)
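
    As an illustration of the rates-to-spectrum mapping, the sketch below trains a small feed-forward network with scikit-learn on synthetic data. The number of spheres, the number of energy bins, the response matrix and the train/test split are placeholder assumptions; the study's Taguchi-based topology selection is not reproduced here.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(3)
        n_spheres, n_bins, n_spectra = 7, 31, 53   # sizes assumed for illustration

        # Stand-ins for the compiled spectra and a sphere response matrix
        spectra = rng.random((n_spectra, n_bins))
        spectra /= spectra.sum(axis=1, keepdims=True)
        response = rng.random((n_spheres, n_bins))
        rates = spectra @ response.T               # simulated counting rates

        # Feed-forward network trained by backpropagation: rates -> spectrum
        net = MLPRegressor(hidden_layer_sizes=(24,), activation="logistic",
                           max_iter=5000, random_state=0)
        net.fit(rates[:45], spectra[:45])          # train on 45 spectra
        pred = net.predict(rates[45:])             # test on the remaining 8
        print("mean absolute error:", float(np.mean(np.abs(pred - spectra[45:]))))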

  7. Reconstruction of daily erythemal UV radiation values for the last century - The benefit of modelled ozone

    Science.gov (United States)

    Junk, J.; Feister, U.; Rozanov, E.; Krzyścin, J. W.

    2013-05-01

    Solar erythemal UV radiation (UVER) is highly relevant for numerous biological processes that affect plants, animals, and human health. Nevertheless, long-term UVER records are scarce. As significant declines in the column ozone concentration were observed in the past and a recovery of the stratospheric ozone layer is anticipated by the middle of the 21st century, there is a strong interest in the temporal variation of UVER time series. Therefore, we combined ground-based measurements of different meteorological variables with modeled ozone data sets to reconstruct time series of daily totals of UVER at the Meteorological Observatory Potsdam, Germany. Artificial neural networks were trained with measured UVER, sunshine duration, the day of year, measured and modeled total column ozone, as well as the minimum solar zenith angle. This allows for the reconstruction of daily totals of UVER for the period from 1901 to 1999. Additionally, analyses of the long-term variations from 1901 until 1999 of the reconstructed, new UVER data set are presented. The time series of monthly and annual totals of UVER provide a long-term meteorological basis for epidemiological investigations in human health and occupational medicine for the region of Potsdam and Berlin. A strong benefit of our ANN approach is the fact that it can be easily adapted to different geographical locations, as successfully tested in the framework of COST Action 726.

  8. Collaborative networks: Reference modeling

    NARCIS (Netherlands)

    Camarinha-Matos, L.M.; Afsarmanesh, H.

    2008-01-01

    Collaborative Networks: Reference Modeling works to establish a theoretical foundation for Collaborative Networks. Particular emphasis is put on modeling multiple facets of collaborative networks and establishing a comprehensive modeling framework that captures and structures diverse perspectives of

  9. AUTOMATIC EXTRACTION AND TOPOLOGY RECONSTRUCTION OF URBAN VIADUCTS FROM LIDAR DATA

    Directory of Open Access Journals (Sweden)

    Y. Wang

    2015-08-01

    Full Text Available Urban viaducts are important infrastructures for the transportation system of a city. In this paper, an original method is proposed to automatically extract urban viaducts and reconstruct the topology of the viaduct network using only airborne LiDAR point cloud data, greatly simplifying the labor-intensive procedure of viaduct extraction and reconstruction. In our method, the point cloud is first filtered to divide all points into ground and non-ground points. A region-growing algorithm is adopted to find the viaduct points among the non-ground points, using features derived from the general prescriptive design rules for viaducts. The viaduct points are then projected into 2D images to extract the centerline of every viaduct, and cubic functions representing the passages of the viaducts are generated by least-squares fitting, from which the topology of the viaduct network can be rebuilt by combining the height information. Finally, a topological graph of the viaduct network is produced. The fully automatic method can potentially benefit urban navigation applications and city model reconstruction.
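
    The least-squares cubic fit of a passage centerline is easy to sketch. The toy coordinates below stand in for centerline pixels extracted from the projected images, and numpy.polyfit plays the role of the fitting step; the polynomial coefficients and noise level are invented for illustration.

        import numpy as np

        # Projected 2D centerline samples of one viaduct passage (toy data)
        rng = np.random.default_rng(4)
        x = np.linspace(0.0, 100.0, 60)                    # metres along one axis
        y_true = 0.0004 * x ** 3 - 0.05 * x ** 2 + 2.0 * x + 10.0
        y = y_true + rng.normal(scale=0.5, size=x.size)    # noisy extracted pixels

        # Least-squares cubic representing the passage centerline
        coeffs = np.polyfit(x, y, deg=3)
        centerline = np.poly1d(coeffs)
        print("fitted coefficients:", np.round(coeffs, 5))
        print("residual RMS:", float(np.sqrt(np.mean((centerline(x) - y) ** 2))))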

  10. High resolution depth reconstruction from monocular images and sparse point clouds using deep convolutional neural network

    Science.gov (United States)

    Dimitrievski, Martin; Goossens, Bart; Veelaert, Peter; Philips, Wilfried

    2017-09-01

    Understanding the 3D structure of the environment is advantageous for many tasks in the field of robotics and autonomous vehicles. From the robot's point of view, 3D perception is often formulated as a depth image reconstruction problem. In the literature, dense depth images are often recovered deterministically from stereo image disparities. Other systems use an expensive LiDAR sensor to produce accurate, but semi-sparse depth images. With the advent of deep learning there have also been attempts to estimate depth using only monocular images. In this paper we combine the best of both worlds, focusing on a combination of monocular images and low-cost LiDAR point clouds. We explore the idea that very sparse depth information accurately captures the global scene structure while variations in image patches can be used to reconstruct local depth to a high resolution. The main contribution of this paper is a supervised learning depth reconstruction system based on a deep convolutional neural network. The network is trained on RGB image patches reinforced with sparse depth information and the output is a depth estimate for each pixel. Using image and point cloud data from the KITTI vision dataset we are able to learn a correspondence between local RGB information and local depth, while at the same time preserving the global scene structure. Our results are evaluated on sequences from the KITTI dataset and our own recordings using a low-cost camera and LiDAR setup.

  11. Novel Low Cost 3D Surface Model Reconstruction System for Plant Phenotyping

    Directory of Open Access Journals (Sweden)

    Suxing Liu

    2017-09-01

    Full Text Available Accurate high-resolution three-dimensional (3D) models are essential for a non-invasive analysis of phenotypic characteristics of plants. Previous limitations in 3D computer vision algorithms have led to a reliance on volumetric methods or expensive hardware to record plant structure. We present an image-based 3D plant reconstruction system that requires only a single camera and a rotation stand. Our method is based on the structure-from-motion method with SIFT image feature descriptors. In order to improve the quality of the 3D models, we segmented the plant objects based on the PlantCV platform. We also deduced the optimal number of images needed for reconstructing a high-quality model. Experiments showed that an accurate 3D model of the plant could be successfully reconstructed by our approach. This 3D surface model reconstruction system provides a simple and accurate computational platform for non-destructive plant phenotyping.

  12. Food Reconstruction Using Isotopic Transferred Signals (FRUITS): A Bayesian Model for Diet Reconstruction

    Czech Academy of Sciences Publication Activity Database

    Fernandes, R.; Millard, A.R.; Brabec, Marek; Nadeau, M.J.; Grootes, P.

    2014-01-01

    Roč. 9, č. 2 (2014), Art. no. e87436 E-ISSN 1932-6203 Institutional support: RVO:67985807 Keywords: ancient diet reconstruction * stable isotope measurements * mixture model * Bayesian estimation * Dirichlet prior Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 3.234, year: 2014

  13. CoryneRegNet: An ontology-based data warehouse of corynebacterial transcription factors and regulatory networks

    Directory of Open Access Journals (Sweden)

    Czaja Lisa F

    2006-02-01

    Full Text Available Abstract Background The application of DNA microarray technology in post-genomic analysis of bacterial genome sequences has allowed the generation of huge amounts of data related to regulatory networks. This data, along with literature-derived knowledge on the regulation of gene expression, has opened the way for genome-wide reconstruction of transcriptional regulatory networks. These large-scale reconstructions can be converted into in silico models of bacterial cells that allow a systematic analysis of network behavior in response to changing environmental conditions. Description CoryneRegNet was designed to facilitate the genome-wide reconstruction of transcriptional regulatory networks of corynebacteria relevant in biotechnology and human medicine. During the import and integration of data derived from experimental studies or literature knowledge, CoryneRegNet generates links to genome annotations, to identified transcription factors and to the corresponding cis-regulatory elements. CoryneRegNet is based on a multi-layered, hierarchical and modular concept of transcriptional regulation and was implemented using the relational database management system MySQL and an ontology-based data structure. Reconstructed regulatory networks can be visualized using the yFiles Java graph library. As an application example of CoryneRegNet, we have reconstructed the global transcriptional regulation of a cellular module involved in the SOS and stress response of corynebacteria. Conclusion CoryneRegNet is an ontology-based data warehouse that allows pertinent data management of regulatory interactions along with the genome-scale reconstruction of transcriptional regulatory networks. These models can further be combined with metabolic networks to build integrated models of cellular function including both metabolism and its transcriptional regulation.

  14. Reconstruction of hyperspectral image using matting model for classification

    Science.gov (United States)

    Xie, Weiying; Li, Yunsong; Ge, Chiru

    2016-05-01

    Although hyperspectral images (HSIs) captured by satellites provide much information in spectral regions, some bands are redundant or have large amounts of noise, which are not suitable for image analysis. To address this problem, we introduce a method for reconstructing the HSI with noise reduction and contrast enhancement using a matting model for the first time. The matting model refers to each spectral band of an HSI that can be decomposed into three components, i.e., alpha channel, spectral foreground, and spectral background. First, one spectral band of an HSI with more refined information than most other bands is selected, and is referred to as an alpha channel of the HSI to estimate the hyperspectral foreground and hyperspectral background. Finally, a combination operation is applied to reconstruct the HSI. In addition, the support vector machine (SVM) classifier and three sparsity-based classifiers, i.e., orthogonal matching pursuit (OMP), simultaneous OMP, and OMP based on first-order neighborhood system weighted classifiers, are utilized on the reconstructed HSI and the original HSI to verify the effectiveness of the proposed method. Specifically, using the reconstructed HSI, the average accuracy of the SVM classifier can be improved by as much as 19%.
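
    The record does not spell out the composition step, but the conventional matting model composes each band as I = alpha*F + (1 - alpha)*B. Assuming that convention, the combination operation can be sketched as follows; array sizes and values are purely illustrative.

        import numpy as np

        def reconstruct_band(alpha, fg, bg):
            # Compose one spectral band from the matting decomposition
            # I = alpha * F + (1 - alpha) * B
            return alpha * fg + (1.0 - alpha) * bg

        # Toy band: alpha from a refined reference band, plus estimated
        # spectral foreground/background for the band being reconstructed
        rng = np.random.default_rng(5)
        alpha = rng.uniform(0.0, 1.0, size=(4, 4))
        fg = rng.uniform(0.6, 1.0, size=(4, 4))    # spectral foreground estimate
        bg = rng.uniform(0.0, 0.4, size=(4, 4))    # spectral background estimate
        print(reconstruct_band(alpha, fg, bg).round(3))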

  15. Connectionist model for tomographic image reconstruction; Modelo conexionista para reconstrucao de imagens tomograficas

    Energy Technology Data Exchange (ETDEWEB)

    Rodrigues, R.G.S.; Pela, C.A.; Roque, S.F.A.C. [Departamento de Fisica e Matematica (FFCLRP) USP, Av. Bandeirantes, 3900, 14040-901, Ribeirao Preto, Sao Paulo (Brazil)

    1998-12-31

    This paper presents an artificial neural network with a topology suited to tomographic image reconstruction. The associated error function is derived and the learning algorithm is developed. Simulated results are presented and demonstrate the existence of a generalized solution for networks with linear activation functions. (Author)

  16. Identifying time-delayed gene regulatory networks via an evolvable hierarchical recurrent neural network.

    Science.gov (United States)

    Kordmahalleh, Mina Moradi; Sefidmazgi, Mohammad Gorji; Harrison, Scott H; Homaifar, Abdollah

    2017-01-01

    The modeling of genetic interactions within a cell is crucial for a basic understanding of physiology and for applied areas such as drug design. Interactions in gene regulatory networks (GRNs) include effects of transcription factors, repressors, small metabolites, and microRNA species. In addition, the effects of regulatory interactions are not always simultaneous, but can occur after a finite time delay, or as a combined outcome of simultaneous and time delayed interactions. Powerful biotechnologies have been rapidly and successfully measuring levels of genetic expression to illuminate different states of biological systems. This has led to an ensuing challenge to improve the identification of specific regulatory mechanisms through regulatory network reconstructions. Solutions to this challenge will ultimately help to spur forward efforts based on the usage of regulatory network reconstructions in systems biology applications. We have developed a hierarchical recurrent neural network (HRNN) that identifies time-delayed gene interactions using time-course data. A customized genetic algorithm (GA) was used to optimize hierarchical connectivity of regulatory genes and a target gene. The proposed design provides a non-fully connected network with the flexibility of using recurrent connections inside the network. These features and the non-linearity of the HRNN facilitate the process of identifying temporal patterns of a GRN. Our HRNN method was implemented with the Python language. It was first evaluated on simulated data representing linear and nonlinear time-delayed gene-gene interaction models across a range of network sizes and variances of noise. We then further demonstrated the capability of our method in reconstructing GRNs of the Saccharomyces cerevisiae synthetic network for in vivo benchmarking of reverse-engineering and modeling approaches (IRMA). We compared the performance of our method to TD-ARACNE, HCC-CLINDE, TSNI and ebdbNet across different network

  17. Occluded object reconstruction for first responders with augmented reality glasses using conditional generative adversarial networks

    OpenAIRE

    Yun, Kyongsik; Lu, Thomas; Chow, Edward

    2018-01-01

    Firefighters suffer a variety of life-threatening risks, including line-of-duty deaths, injuries, and exposures to hazardous substances. Support for reducing these risks is important. We built a partially occluded object reconstruction method on augmented reality glasses for first responders. We used a deep learning based on conditional generative adversarial networks to train associations between the various images of flammable and hazardous objects and their partially occluded counterparts....

  18. Verifying three-dimensional skull model reconstruction using cranial index of symmetry.

    Directory of Open Access Journals (Sweden)

    Woon-Man Kung

    Full Text Available BACKGROUND: Difficulty exists in scalp adaptation for cranioplasty with customized computer-assisted design/manufacturing (CAD/CAM) implants in situations of excessive wound tension and sub-cranioplasty dead space. To solve this clinical problem, the CAD/CAM technique should include algorithms to reconstruct a depressed contour to cover the skull defect. Satisfactory CAM-derived alloplastic implants are based on highly accurate three-dimensional (3-D) CAD modeling. Thus, it is quite important to establish a symmetrically regular CAD/CAM reconstruction prior to depressing the contour. The purpose of this study is to verify the aesthetic outcomes of CAD models with regular contours using the cranial index of symmetry (CIS). MATERIALS AND METHODS: From January 2011 to June 2012, decompressive craniectomy (DC) was performed for 15 consecutive patients in our institute. 3-D CAD models of skull defects were reconstructed using commercial software. These models were checked in terms of symmetry by CIS scores. RESULTS: CIS scores of CAD reconstructions were 99.24±0.004% (range 98.47-99.84). CIS scores of these CAD models were statistically significantly greater than 95%, not significantly different from 99.5%, and lower than 99.6% (p<0.001, p = 0.064, p = 0.021, respectively; Wilcoxon matched-pairs signed-rank test). These data evidenced the highly accurate symmetry of these CAD models with regular contours. CONCLUSIONS: CIS calculation is beneficial to assess aesthetic outcomes of CAD-reconstructed skulls in terms of cranial symmetry. This enables further accurate CAD models and CAM cranial implants with depressed contours, which are essential in patients with difficult scalp adaptation.

  19. Computed Tomography Image Quality Evaluation of a New Iterative Reconstruction Algorithm in the Abdomen (Adaptive Statistical Iterative Reconstruction-V) a Comparison With Model-Based Iterative Reconstruction, Adaptive Statistical Iterative Reconstruction, and Filtered Back Projection Reconstructions.

    Science.gov (United States)

    Goodenberger, Martin H; Wagner-Bartak, Nicolaus A; Gupta, Shiva; Liu, Xinming; Yap, Ramon Q; Sun, Jia; Tamm, Eric P; Jensen, Corey T

    The purpose of this study was to compare abdominopelvic computed tomography images reconstructed with adaptive statistical iterative reconstruction-V (ASIR-V) with model-based iterative reconstruction (Veo 3.0), ASIR, and filtered back projection (FBP). Abdominopelvic computed tomography scans for 36 patients (26 males and 10 females) were reconstructed using FBP, ASIR (80%), Veo 3.0, and ASIR-V (30%, 60%, 90%). Mean ± SD patient age was 32 ± 10 years with mean ± SD body mass index of 26.9 ± 4.4 kg/m². Images were reviewed by 2 independent readers in a blinded, randomized fashion. Hounsfield unit, noise, and contrast-to-noise ratio (CNR) values were calculated for each reconstruction algorithm for further comparison. Phantom evaluation of low-contrast detectability (LCD) and high-contrast resolution was performed. Adaptive statistical iterative reconstruction-V 30%, ASIR-V 60%, and ASIR 80% were generally superior qualitatively compared with ASIR-V 90%, Veo 3.0, and FBP (P ASIR-V 60% with respective CNR values of 5.54 ± 2.39, 8.78 ± 3.15, and 3.49 ± 1.77 (P ASIR 80% had the best and worst spatial resolution, respectively. Adaptive statistical iterative reconstruction-V 30% and ASIR-V 60% provided the best combination of qualitative and quantitative performance. Adaptive statistical iterative reconstruction 80% was equivalent qualitatively, but demonstrated inferior spatial resolution and LCD.

  20. First Gridded Spatial Field Reconstructions of Snow from Tree Rings

    Science.gov (United States)

    Coulthard, B. L.; Anchukaitis, K. J.; Pederson, G. T.; Alder, J. R.; Hostetler, S. W.; Gray, S. T.

    2017-12-01

    Western North America's mountain snowpacks provide critical water resources for human populations and ecosystems. Warmer temperatures and changing precipitation patterns will increasingly alter the quantity, extent, and persistence of snow in coming decades. A comprehensive understanding of the causes and range of long-term variability in this system is required for forecasting future anomalies, but snowpack observations are limited and sparse. While individual tree ring-based annual snowpack reconstructions have been developed for specific regions and mountain ranges, we present here the first collection of spatially-explicit gridded field reconstructions of seasonal snowpack within the American Rocky Mountains. Capitalizing on a new western North American snow-sensitive network of over 700 tree-ring chronologies, as well as recent advances in PRISM-based snow modeling, our gridded reconstructions offer a full space-time characterization of snow and associated water resource fluctuations over several centuries. The quality of reconstructions is evaluated against existing observations, proxy-records, and an independently-developed first-order monthly snow model.

  1. A protocol for generating a high-quality genome-scale metabolic reconstruction.

    Science.gov (United States)

    Thiele, Ines; Palsson, Bernhard Ø

    2010-01-01

    Network reconstructions are a common denominator in systems biology. Bottom-up metabolic network reconstructions have been developed over the last 10 years. These reconstructions represent structured knowledge bases that abstract pertinent information on the biochemical transformations taking place within specific target organisms. The conversion of a reconstruction into a mathematical format facilitates a myriad of computational biological studies, including evaluation of network content, hypothesis testing and generation, analysis of phenotypic characteristics and metabolic engineering. To date, genome-scale metabolic reconstructions for more than 30 organisms have been published and this number is expected to increase rapidly. However, these reconstructions differ in quality and coverage, which may limit their predictive potential and use as knowledge bases. Here we present a comprehensive protocol describing each step necessary to build a high-quality genome-scale metabolic reconstruction, as well as the common trials and tribulations. Therefore, this protocol provides a helpful manual for all stages of the reconstruction process.

  2. Application of neural network to CT

    International Nuclear Information System (INIS)

    Ma, Xiao-Feng; Takeda, Tatsuoki

    1999-01-01

    This paper presents a new method for two-dimensional image reconstruction using a multilayer neural network. Multilayer neural networks are extensively investigated and practically applied to the solution of various problems such as inverse problems or time-series prediction problems. By learning an input-output mapping from a set of examples, neural networks can be regarded as synthesizing an approximation of a multidimensional function (that is, solving the problem of hypersurface reconstruction, including smoothing and interpolation). From this viewpoint, neural networks are well suited to the solution of CT image reconstruction. Though the conventionally used objective function of a neural network is composed of a sum of squared errors of the output data, we can define an objective function composed of a sum of residues of an integral equation. By employing an appropriate line integral for this integral equation, we can construct a neural network that can be used for CT. We applied this method to some model problems and obtained satisfactory results. As it is not necessary to discretize the integral equation with this reconstruction method, application to problems with complicated geometrical shapes is also feasible. Moreover, in neural networks interpolation is performed quite smoothly, and as a result inverse mapping can be achieved smoothly even in the presence of experimental and numerical errors. However, the use of the conventional backpropagation technique for optimization leads to an expensive computational cost. To overcome this drawback, 2nd-order optimization methods or parallel computing will be applied in the future. (J.P.N.)

  3. Track reconstruction in discrete detectors by neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Glazov, A A; Kisel', I V; Konotopskaya, E V; Neskoromnyj, V N; Ososkov, G A

    1993-12-31

    On the basis of applying neural networks to the track recognition problem, investigations are made taking into account the specific properties of discrete detectors such as multiwire proportional chambers. These investigations result in a modification of the so-called rotor model of a neural network. The energy function of the network in this modification contains only one cost term, which speeds up calculations considerably. The reduction of the energy function is done by neuron selection with the help of simple geometrical and energetical criteria. Besides, cellular automata were applied for a preliminary selection of data, which made it possible to create an initial network configuration with an energy closer to its global minimum. The algorithm was tested on 10^4 real three-prong events obtained from the ARES spectrometer. The results are satisfactory, including noise robustness and good resolution of nearby tracks. 12 refs.; 10 figs.

  4. Track reconstruction in discrete detectors by neural networks

    International Nuclear Information System (INIS)

    Glazov, A.A.; Kisel', I.V.; Konotopskaya, E.V.; Neskoromnyj, V.N.; Ososkov, G.A.

    1992-01-01

    On the basis of applying neural networks to the track recognition problem, investigations are made taking into account the specific properties of discrete detectors such as multiwire proportional chambers. These investigations result in a modification of the so-called rotor model of a neural network. The energy function of the network in this modification contains only one cost term, which speeds up calculations considerably. The reduction of the energy function is done by neuron selection with the help of simple geometrical and energetical criteria. Besides, cellular automata were applied for a preliminary selection of data, which made it possible to create an initial network configuration with an energy closer to its global minimum. The algorithm was tested on 10^4 real three-prong events obtained from the ARES spectrometer. The results are satisfactory, including noise robustness and good resolution of nearby tracks. 12 refs.; 10 figs

  5. Irrigation network design and reconstruction and its analysis by simulation model

    Directory of Open Access Journals (Sweden)

    Čistý Milan

    2014-06-01

    Full Text Available There are many problems related to pipe network rehabilitation, the main one being how to provide an increase in the hydraulic capacity of a system. Because of its complexity, conventional optimization techniques are poorly suited to solving this task. In recent years some successful attempts to apply modern heuristic methods to this problem have been published. The main part of the paper deals with applying such a technique, namely the harmony search methodology, to network rehabilitation optimization considering both technical and economic aspects of the problem. A case study of a sprinkler irrigation system is presented in detail, and two alternatives of the rehabilitation design are compared. The modified linear programming method is used first, with new diameters proposed in the existing network so that it can satisfy the increased demand conditions with unchanged topology. This solution is contrasted with the looped one obtained using a harmony search algorithm.
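
    To make the optimization side concrete, here is a minimal harmony search sketch for choosing pipe diameters. The hydraulics are deliberately reduced to a toy penalty on a capacity proxy rather than a real network solver, and the memory size, HMCR, PAR and iteration count are generic defaults, not the values of the case study.

        import numpy as np

        rng = np.random.default_rng(6)

        # Candidate diameters (mm), unit costs per metre, and pipe lengths (m)
        diameters = np.array([80, 100, 150, 200, 250, 300])
        unit_cost = np.array([12, 18, 30, 48, 70, 95])
        lengths = rng.uniform(50.0, 200.0, size=8)

        def cost(idx):
            # Pipe cost plus a penalty when a crude capacity proxy is too low
            capacity = np.sum(diameters[idx] ** 2.63)
            penalty = max(0.0, 4e6 - capacity) * 0.1
            return float(np.sum(unit_cost[idx] * lengths) + penalty)

        # Harmony search: a memory of solutions improvised with rates HMCR/PAR
        HM_SIZE, HMCR, PAR, ITERS = 20, 0.9, 0.3, 2000
        memory = [rng.integers(0, len(diameters), size=8) for _ in range(HM_SIZE)]
        scores = [cost(h) for h in memory]

        for _ in range(ITERS):
            new = np.empty(8, dtype=int)
            for j in range(8):
                if rng.random() < HMCR:            # draw the value from memory...
                    new[j] = memory[rng.integers(HM_SIZE)][j]
                    if rng.random() < PAR:         # ...with optional pitch adjustment
                        new[j] = int(np.clip(new[j] + rng.choice([-1, 1]),
                                             0, len(diameters) - 1))
                else:                              # or improvise a random value
                    new[j] = rng.integers(len(diameters))
            c = cost(new)
            worst = int(np.argmax(scores))
            if c < scores[worst]:                  # replace the worst harmony
                memory[worst], scores[worst] = new, c

        best = memory[int(np.argmin(scores))]
        print("best diameters (mm):", diameters[best], "cost:", round(min(scores), 1))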

  6. A neighbourhood evolving network model

    International Nuclear Information System (INIS)

    Cao, Y.J.; Wang, G.Z.; Jiang, Q.Y.; Han, Z.X.

    2006-01-01

    Many social, technological, biological and economical systems are best described by evolving network models. In this short Letter, we propose and study a new evolving network model. The model is based on the new concept of neighbourhood connectivity, which exists in many physical complex networks. The statistical properties and dynamics of the proposed model are analytically studied and compared with those of the Barabasi-Albert scale-free model. Numerical simulations indicate that this network model yields a transition between power-law and exponential scaling, with the Barabasi-Albert scale-free model being only one of its special (limiting) cases. In particular, this model can be used to describe the evolving mechanisms of complex networks in the real world, such as the development of some social networks.
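
    The exact growth rule is not given in this record, so the sketch below encodes one plausible reading of neighbourhood connectivity: each new node attaches to a randomly chosen anchor node and, with probability q, to each of the anchor's current neighbours. Treat it as a hypothetical illustration of this class of evolving model, not the authors' definition.

        import random
        import networkx as nx

        def neighbourhood_evolving_graph(n, q=0.5, seed=0):
            # Grow a graph: each new node links to a random anchor and, with
            # probability q, to each of the anchor's current neighbours
            random.seed(seed)
            g = nx.complete_graph(3)           # small seed network
            for v in range(3, n):
                anchor = random.choice(list(g.nodes))
                g.add_edge(v, anchor)
                for u in list(g.neighbors(anchor)):
                    if u != v and random.random() < q:
                        g.add_edge(v, u)
            return g

        g = neighbourhood_evolving_graph(2000, q=0.6)
        degrees = sorted((d for _, d in g.degree()), reverse=True)
        print("max degree:", degrees[0], "mean degree:", sum(degrees) / len(degrees))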

  7. Grammar-based Automatic 3D Model Reconstruction from Terrestrial Laser Scanning Data

    Science.gov (United States)

    Yu, Q.; Helmholz, P.; Belton, D.; West, G.

    2014-04-01

    The automatic reconstruction of 3D buildings has been an important research topic in recent years. In this paper, a novel method is proposed to automatically reconstruct 3D building models from segmented data based on pre-defined formal grammars and rules. Such segmented data can be extracted, e.g., from terrestrial or mobile laser scanning devices. Two steps are considered in detail. The first step is to transform the segmented data into 3D shapes, for instance using the DXF (Drawing Exchange Format) format, which is a CAD data file format used for data interchange between AutoCAD and other programs. Second, we develop a formal grammar to describe the building model structure and integrate the pre-defined grammars into the reconstruction process. Depending on the segmented data, the selected grammar and rules are applied to drive the reconstruction process in an automatic manner. Compared with other existing approaches, our proposed method allows model reconstruction directly from 3D shapes and takes the whole building into account.

  8. The benefit of modeled ozone data for the reconstruction of a 99-year UV radiation time series

    Science.gov (United States)

    Junk, J.; Feister, U.; Helbig, A.; GöRgen, K.; Rozanov, E.; KrzyśCin, J. W.; Hoffmann, L.

    2012-08-01

    Solar erythemal UV radiation (UVER) is highly relevant for numerous biological processes that affect plants, animals, and human health. Nevertheless, long-term UVER records are scarce. As significant declines in the column ozone concentration were observed in the past and a recovery of the stratospheric ozone layer is anticipated by the middle of the 21st century, there is a strong interest in the temporal variation of UVER time series. Therefore, we combined ground-based measurements of different meteorological variables with modeled ozone data sets to reconstruct time series of daily totals of UVER at the Meteorological Observatory, Potsdam, Germany. Artificial neural networks were trained with measured UVER, sunshine duration, the day of year, measured and modeled total column ozone, as well as the minimum solar zenith angle. This allows for the reconstruction of daily totals of UVER for the period from 1901 to 1999. Additionally, analyses of the long-term variations from 1901 until 1999 of the reconstructed, new UVER data set are presented. The time series of monthly and annual totals of UVER provide a long-term meteorological basis for epidemiological investigations in human health and occupational medicine for the region of Potsdam and Berlin. A strong benefit of our ANN approach is the fact that it can be easily adapted to different geographical locations, as successfully tested in the framework of COST Action 726.

  9. A novel neural network based image reconstruction model with scale and rotation invariance for target identification and classification for Active millimetre wave imaging

    Science.gov (United States)

    Agarwal, Smriti; Bisht, Amit Singh; Singh, Dharmendra; Pathak, Nagendra Prasad

    2014-12-01

    Millimetre wave (MMW) imaging is gaining tremendous interest among researchers, with potential applications in security checks, standoff personnel screening, automotive collision-avoidance, and more. Current state-of-the-art imaging techniques, viz. microwave and X-ray imaging, suffer from lower resolution and harmful ionizing radiation, respectively. In contrast, MMW imaging operates at lower power and is non-ionizing, hence medically safe. Despite these favourable attributes, MMW imaging encounters various challenges: it is still a relatively unexplored area and lacks a suitable imaging methodology for extracting complete target information. In view of these challenges, an MMW active imaging radar system at 60 GHz was designed for standoff imaging applications. A C-scan (horizontal and vertical scanning) methodology was developed that provides a cross-range resolution of 8.59 mm. The paper further details a suitable target identification and classification methodology. For identification of regular-shape targets, a mean-standard deviation based segmentation technique was formulated and validated using a different target shape. For classification, a probability density function based target material discrimination methodology was proposed and validated on a different dataset. Lastly, a novel artificial neural network based scale- and rotation-invariant image reconstruction methodology is proposed to counter distortions in the image caused by noise, rotation or scale variations. The designed neural network, once trained with sample images, automatically takes care of these deformations and successfully reconstructs the corrected image for the test targets. Techniques developed in this paper are tested and validated using four different regular shapes, viz. rectangle, square, triangle and circle.
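
    For the identification step, a mean-standard deviation segmentation can be sketched as a threshold at mean + k*std of the image intensities. The exact rule and the constant k used by the authors are not given in this record, so both are assumptions here.

        import numpy as np

        def mean_std_segment(img, k=1.0):
            # Threshold at mean + k*std to isolate bright target pixels
            return img > img.mean() + k * img.std()

        # Toy 32x32 'MMW image': a bright rectangular target on a noisy background
        rng = np.random.default_rng(7)
        img = rng.normal(0.2, 0.05, size=(32, 32))
        img[10:20, 8:24] += 0.5                    # target region
        mask = mean_std_segment(img, k=1.0)
        print("segmented pixels:", int(mask.sum()), "of", mask.size)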

  10. A Novel Hybrid Model for Drawing Trace Reconstruction from Multichannel Surface Electromyographic Activity.

    Science.gov (United States)

    Chen, Yumiao; Yang, Zhongliang

    2017-01-01

    Recently, several researchers have considered the problem of reconstructing handwriting and other meaningful arm and hand movements from surface electromyography (sEMG). Although much progress has been made, several practical limitations may still affect the clinical applicability of sEMG-based techniques. In this paper, a novel three-step hybrid model of coordinate state transition, sEMG feature extraction and gene expression programming (GEP) prediction is proposed for reconstructing the drawing traces of 12 basic one-stroke shapes from multichannel surface electromyography. Using a specially designed coordinate data acquisition system, we recorded the coordinates of drawing traces as a time series while 7-channel sEMG signals were simultaneously recorded. As a widely used time-domain feature, the Root Mean Square (RMS) was extracted over sliding analysis windows. Preliminary reconstruction models can be established by GEP, and the original drawing traces can then be approximated by the constructed prediction model. Applying the three-step hybrid model, we were able to convert seven channels of EMG activity recorded from the arm muscles into smooth reconstructions of drawing traces. The hybrid model yields a mean accuracy of 74% in a within-group design (one set of prediction models for all shapes) and 86% in a between-group design (one separate set of prediction models for each shape), averaged over the reconstructed x and y coordinates. It can be concluded that the proposed three-step hybrid model is a feasible way to improve the reconstruction of drawing traces from sEMG.
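
    The windowed RMS feature from step two is simple to reproduce. The sketch below computes it per channel over sliding windows; the window and hop lengths, sampling rate and synthetic signals are illustrative assumptions.

        import numpy as np

        def windowed_rms(emg, win=128, hop=64):
            # Root-mean-square of each channel over sliding analysis windows;
            # emg has shape (n_samples, n_channels)
            starts = range(0, emg.shape[0] - win + 1, hop)
            return np.array([np.sqrt(np.mean(emg[s:s + win] ** 2, axis=0))
                             for s in starts])

        # Toy 7-channel recording: 2 s at 1 kHz with channel-specific gains
        rng = np.random.default_rng(8)
        emg = rng.normal(size=(2000, 7)) * (0.5 + 0.5 * rng.random(7))
        features = windowed_rms(emg)
        print("RMS feature matrix shape:", features.shape)   # (windows, channels)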

  11. 4D-PET reconstruction using a spline-residue model with spatial and temporal roughness penalties

    Science.gov (United States)

    Ralli, George P.; Chappell, Michael A.; McGowan, Daniel R.; Sharma, Ricky A.; Higgins, Geoff S.; Fenwick, John D.

    2018-05-01

    4D reconstruction of dynamic positron emission tomography (dPET) data can improve the signal-to-noise ratio in reconstructed image sequences by fitting smooth temporal functions to the voxel time-activity-curves (TACs) during the reconstruction, though the optimal choice of function remains an open question. We propose a spline-residue model, which describes TACs as weighted sums of convolutions of the arterial input function with cubic B-spline basis functions. Convolution with the input function constrains the spline-residue model at early time-points, potentially enhancing noise suppression in early time-frames, while still allowing a wide range of TAC descriptions over the entire imaged time-course, thus limiting bias. Spline-residue based 4D-reconstruction is compared to that of a conventional (non-4D) maximum a posteriori (MAP) algorithm, and to 4D-reconstructions based on adaptive-knot cubic B-splines, the spectral model and an irreversible two-tissue compartment ('2C3K') model. 4D reconstructions were carried out using a nested-MAP algorithm including spatial and temporal roughness penalties. The algorithms were tested using Monte-Carlo simulated scanner data, generated for a digital thoracic phantom with uptake kinetics based on a dynamic [18F]-Fluoromisonidazole scan of a non-small cell lung cancer patient. For every algorithm, parametric maps were calculated by fitting each voxel TAC within a sub-region of the reconstructed images with the 2C3K model. Compared to conventional MAP reconstruction, spline-residue-based 4D reconstruction achieved >50% improvements for five of the eight combinations of the four kinetics parameters for which parametric maps were created with the bias and noise measures used to analyse them, and produced better results for 5/8 combinations than any of the other reconstruction algorithms studied, while spectral model-based 4D reconstruction produced the best results for 2/8. 2C3K model-based 4D reconstruction generated
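
    The spline-residue signal model itself (as opposed to the nested-MAP reconstruction) can be sketched compactly: convolve an arterial input function with cubic B-spline basis elements and fit the weights by linear least squares. The time grid, knot placement and toy input function below are assumptions for illustration only.

        import numpy as np
        from scipy.interpolate import BSpline

        # Time grid (minutes) and a toy arterial input function (AIF)
        t = np.linspace(0.0, 60.0, 241)
        dt = t[1] - t[0]
        aif = t * np.exp(-t / 2.0)

        # Cubic B-spline basis on a coarse knot grid (clamped ends)
        knots = np.concatenate([[0, 0, 0], np.linspace(0, 60, 8), [60, 60, 60]])
        n_basis = len(knots) - 4
        basis = np.nan_to_num([BSpline.basis_element(knots[k:k + 5],
                                                     extrapolate=False)(t)
                               for k in range(n_basis)])

        # Spline-residue model: a TAC is a weighted sum of AIF (*) basis terms
        conv = np.array([np.convolve(aif, b)[:t.size] * dt for b in basis])

        # Fit the weights of a noisy synthetic voxel TAC by least squares
        rng = np.random.default_rng(9)
        w_true = rng.random(n_basis)
        tac = w_true @ conv + rng.normal(scale=0.5, size=t.size)
        w_hat, *_ = np.linalg.lstsq(conv.T, tac, rcond=None)
        print("weight recovery error:",
              float(np.linalg.norm(w_hat - w_true) / np.linalg.norm(w_true)))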

  12. Sparse dynamical Boltzmann machine for reconstructing complex networks with binary dynamics

    Science.gov (United States)

    Chen, Yu-Zhong; Lai, Ying-Cheng

    2018-03-01

    Revealing the structure and dynamics of complex networked systems from observed data is a problem of current interest. Is it possible to develop a completely data-driven framework to decipher the network structure and different types of dynamical processes on complex networks? We develop a model named sparse dynamical Boltzmann machine (SDBM) as a structural estimator for complex networks that host binary dynamical processes. The SDBM attains its topology according to that of the original system and is capable of simulating the original binary dynamical process. We develop a fully automated method based on compressive sensing and a clustering algorithm to construct the SDBM. We demonstrate, for a variety of representative dynamical processes on model and real world complex networks, that the equivalent SDBM can recover the network structure of the original system and simulates its dynamical behavior with high precision.
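
    This record does not detail the SDBM construction, so the sketch below illustrates the underlying data-driven idea with a standard stand-in: recovering each node's sparse incoming links from observed binary time series via L1-regularized logistic regression. The network size, the Glauber-like dynamics and the detection threshold are toy choices.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(10)
        N, T = 20, 4000

        # Ground-truth sparse network driving a noisy binary dynamics
        A = (rng.random((N, N)) < 0.1).astype(float)
        np.fill_diagonal(A, 0.0)
        s = np.empty((T, N))
        s[0] = rng.integers(0, 2, N)
        for k in range(1, T):
            field = s[k - 1] @ A.T - 0.5 * A.sum(axis=1)
            p = 1.0 / (1.0 + np.exp(-4.0 * field))
            s[k] = (rng.random(N) < p).astype(float)

        # Reconstruct each node's incoming links by sparse logistic regression
        A_hat = np.zeros((N, N))
        for i in range(N):
            clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
            clf.fit(s[:-1], s[1:, i])
            A_hat[i] = clf.coef_[0]

        recovered = np.abs(A_hat) > 0.2
        tp = int(np.logical_and(recovered, A == 1).sum())
        print(f"true links recovered: {tp} / {int(A.sum())}")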

  13. A priori motion models for four-dimensional reconstruction in gated cardiac SPECT

    International Nuclear Information System (INIS)

    Lalush, D.S.; Tsui, B.M.W.; Cui, Lin

    1996-01-01

    We investigate the benefit of incorporating a priori assumptions about cardiac motion in a fully four-dimensional (4D) reconstruction algorithm for gated cardiac SPECT. Previous work has shown that non-motion-specific 4D Gibbs priors enforcing smoothing in time and space can control noise while preserving resolution. In this paper, we evaluate methods for incorporating known heart motion in the Gibbs prior model. The new model is derived by assigning motion vectors to each 4D voxel, defining the movement of that volume of activity into the neighboring time frames. Weights for the Gibbs cliques are computed based on these "most likely" motion vectors. To evaluate, we employ the mathematical cardiac-torso (MCAT) phantom with a new dynamic heart model that simulates the beating and twisting motion of the heart. Sixteen realistically-simulated gated datasets were generated, with noise simulated to emulate a real Tl-201 gated SPECT study. Reconstructions were performed using several different reconstruction algorithms, all modeling nonuniform attenuation and three-dimensional detector response. These include ML-EM with 4D filtering, 4D MAP-EM without prior motion assumption, and 4D MAP-EM with prior motion assumptions. The prior motion assumptions included both the correct motion model and incorrect models. Results show that reconstructions using the 4D prior model can smooth noise and preserve time-domain resolution more effectively than 4D linear filters. We conclude that modeling of motion in 4D reconstruction algorithms can be a powerful tool for smoothing noise and preserving temporal resolution in gated cardiac studies

  14. Assessment of the impact of modeling axial compression on PET image reconstruction.

    Science.gov (United States)

    Belzunce, Martin A; Reader, Andrew J

    2017-10-01

    To comprehensively evaluate both the acceleration and image-quality impacts of axial compression and its degree of modeling in fully 3D PET image reconstruction. Despite being used since the very dawn of 3D PET reconstruction, there are still no extensive studies on the impact of axial compression and its degree of modeling during reconstruction on the end-point reconstructed image quality. In this work, an evaluation of the impact of axial compression on the image quality is performed by extensively simulating data with span values from 1 to 121. In addition, two methods for modeling the axial compression in the reconstruction were evaluated. The first method models the axial compression in the system matrix, while the second method uses an unmatched projector/backprojector, where the axial compression is modeled only in the forward projector. The different system matrices were analyzed by computing their singular values and the point response functions for small subregions of the FOV. The two methods were evaluated with simulated and real data for the Biograph mMR scanner. For the simulated data, the axial compression with span values lower than 7 did not show a decrease in the contrast of the reconstructed images. For span 11, the standard sinogram size of the mMR scanner, losses of contrast in the range of 5-10 percentage points were observed when measured for a hot lesion. For higher span values, the spatial resolution was degraded considerably. However, impressively, for all span values of 21 and lower, modeling the axial compression in the system matrix compensated for the spatial resolution degradation and obtained similar contrast values as the span 1 reconstructions. Such approaches have the same processing times as span 1 reconstructions, but they permit significant reduction in storage requirements for the fully 3D sinograms. For higher span values, the system has a large condition number and it is therefore difficult to recover accurately the higher

  15. IdentiCS – Identification of coding sequence and in silico reconstruction of the metabolic network directly from unannotated low-coverage bacterial genome sequence

    Directory of Open Access Journals (Sweden)

    Zeng An-Ping

    2004-08-01

    Full Text Available Abstract Background A necessary step for a genome-level analysis of cellular metabolism is the in silico reconstruction of the metabolic network from genome sequences. The available methods are mainly based on the annotation of genome sequences, including two successive steps: the prediction of coding sequences (CDS) and their function assignment. The annotation process takes time, and the available methods often encounter difficulties when dealing with unfinished, error-containing genomic sequences. Results In this work a fast method is proposed that uses unannotated genome sequence for predicting CDSs and for an in silico reconstruction of metabolic networks. Instead of using predicted genes or CDSs to query public databases, entries from public DNA or protein databases are used as queries to search a local database of the unannotated genome sequence to predict CDSs. Functions are assigned to the predicted CDSs simultaneously. The well-annotated genome of Salmonella typhimurium LT2 is used as an example to demonstrate the applicability of the method. 97.7% of the CDSs in the original annotation are correctly identified. The use of the SWISS-PROT-TrEMBL databases resulted in an identification of 98.9% of the CDSs that have EC-numbers in the published annotation. Furthermore, two versions of sequences of the bacterium Klebsiella pneumoniae with different genome coverage (3.9- and 7.9-fold, respectively) are examined. The results suggest that a 3.9-fold coverage of the bacterial genome could be sufficient for the in silico reconstruction of the metabolic network. Compared to other gene-finding methods such as CRITICA, our method is more suitable for exploiting sequences of low genome coverage. Based on the new method, a program called IdentiCS (Identification of Coding Sequences from Unfinished Genome Sequences) is delivered that combines the identification of CDSs with the reconstruction, comparison and visualization of metabolic networks (free to download

  16. A Maximum Parsimony Model to Reconstruct Phylogenetic Network in Honey Bee Evolution

    OpenAIRE

    Usha Chouhan; K. R. Pardasani

    2007-01-01

    Phylogenies, the evolutionary histories of groups of species, are among the most widely used tools throughout the life sciences, as well as objects of research within systematics and evolutionary biology. Every phylogenetic analysis produces trees as its reconstruction. These trees represent the evolutionary histories of many groups of organisms, but bacteria, due to horizontal gene transfer, and plants, due to the process of hybridization, are not well described by them. The process of gene transfer in bacteria and hyb...

  17. Telecommunications network modelling, planning and design

    CERN Document Server

    Evans, Sharon

    2003-01-01

    Telecommunication Network Modelling, Planning and Design addresses sophisticated modelling techniques from the perspective of the communications industry and covers some of the major issues facing telecommunications network engineers and managers today. Topics covered include network planning for transmission systems, modelling of SDH transport network structures and telecommunications network design and performance modelling, as well as network costs and ROI modelling and QoS in 3G networks.

  18. Dynamic Error Analysis Method for Vibration Shape Reconstruction of Smart FBG Plate Structure

    Directory of Open Access Journals (Sweden)

    Hesheng Zhang

    2016-01-01

    Full Text Available Shape reconstruction of aerospace plate structures is an important issue for the safe operation of aerospace vehicles. One way to achieve such reconstruction is by constructing a smart fiber Bragg grating (FBG) plate structure with discrete distributed FBG sensor arrays and applying reconstruction algorithms, in which the error analysis of the reconstruction algorithm is a key link. Considering that traditional error analysis methods can only deal with static data, a new dynamic-data error analysis method is proposed based on the LMS algorithm for shape reconstruction of smart FBG plate structures. Firstly, the smart FBG structure and the orthogonal curved network based reconstruction method are introduced. Then, a dynamic error analysis model is proposed for dynamic reconstruction error analysis. Thirdly, parameter identification is carried out for the proposed dynamic error analysis model based on the least mean square (LMS) algorithm. Finally, an experimental verification platform is constructed and an experimental dynamic reconstruction analysis is performed. Experimental results show that the dynamic characteristics of the reconstruction performance for the plate structure can be obtained accurately with the proposed dynamic error analysis method. The proposed method can also be used with other data acquisition and data processing systems as a general error analysis method.
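
    The parameter identification step is based on the classical least mean square recursion; a minimal sketch of LMS identification on synthetic data follows. The signals and the FIR model order are assumptions for illustration and do not reproduce the paper's FBG error model.

      import numpy as np

      def lms_identify(x, d, order=4, mu=0.01):
          """Identify FIR weights w so that w . [x[n], x[n-1], ...] tracks d[n]."""
          w = np.zeros(order)
          err = np.zeros(len(x))
          for n in range(order - 1, len(x)):
              window = x[n - order + 1:n + 1][::-1]   # x[n], x[n-1], ...
              err[n] = d[n] - w @ window              # instantaneous error
              w += 2 * mu * err[n] * window           # stochastic gradient step
          return w, err

      # Hypothetical data: d is a filtered version of x plus sensor noise.
      rng = np.random.default_rng(1)
      x = rng.standard_normal(2000)
      true_w = np.array([0.6, -0.3, 0.2, 0.1])
      d = np.convolve(x, true_w)[:len(x)] + 0.01 * rng.standard_normal(len(x))
      w, err = lms_identify(x, d)
      print(np.round(w, 2))   # converges towards true_w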

  19. Detection and three-dimensional reconstruction of a vascular network from serial sections

    Energy Technology Data Exchange (ETDEWEB)

    Ip, H.H.S.

    1983-07-01

    The process of three-dimensional reconstruction from serial sections includes aligning adjacent sections, segmenting the desired objects and constructing an internal computer model of the reconstructed object. Computational methodologies taking advantage of the parallel processing facilities of CLIP4 are presented for automating these tasks. The author is interested in the detailed structure of the carotid body, which is a highly vascularized organ with the largest blood flow rate of any tissue in the body (Biscoe (1971), Seidl (1975), Lubbers et al. (1977), Clarke and Daly (1982)). It plays an important role in monitoring the chemical composition of arterial blood (pO2, pCO2, pH). The aim of the investigation in the paper is to reconstruct the total vasculature of the organ and to make an analytical study of the geometrical configuration of its vessels. 15 references.
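
    Aligning adjacent sections is the first of the tasks named above. A common way to estimate the translation between two section images is phase correlation; the numpy sketch below demonstrates it on a toy image. This is a generic illustration, not the CLIP4 parallel procedure of the paper.

      import numpy as np

      def phase_correlation_shift(a, b):
          """Estimate the integer translation aligning image b onto image a."""
          F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
          corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
          dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
          # Wrap shifts larger than half the image into negative offsets.
          if dy > a.shape[0] // 2: dy -= a.shape[0]
          if dx > a.shape[1] // 2: dx -= a.shape[1]
          return dy, dx

      a = np.zeros((64, 64)); a[20:30, 12:22] = 1.0    # toy vessel cross-section
      b = np.roll(np.roll(a, 3, axis=0), -5, axis=1)   # adjacent section, shifted
      print(phase_correlation_shift(a, b))   # (-3, 5): roll b by this to match a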

  20. Modeling economic costs of disasters and recovery involving positive effects of reconstruction: analysis using a dynamic CGE model

    Science.gov (United States)

    Xie, W.; Li, N.; Wu, J.-D.; Hao, X.-L.

    2013-11-01

    Disaster damages have negative effects on the economy, whereas reconstruction investments have positive effects. The aim of this study is to model the economic costs of disasters and recovery involving the positive effects of reconstruction activities. A computable general equilibrium (CGE) model is a promising approach because it can incorporate these two kinds of shocks into a unified framework and further avoid the double-counting problem. In order to factor both shocks into the CGE model, direct loss is set as the amount of capital stock reduced on the supply side of the economy; a portion of investments restores the capital stock in the existing period; an investment-driven dynamic model is formulated based on the available reconstruction data, and the rest of a given country's saving is set as an endogenous variable. The 2008 Wenchuan Earthquake is selected as a case study to illustrate the model, and three scenarios are constructed: S0 (no disaster occurs), S1 (disaster occurs with reconstruction investment) and S2 (disaster occurs without reconstruction investment). S0 is taken as business as usual, and the differences between S1 and S0 and between S2 and S0 can be interpreted as economic losses including and excluding reconstruction, respectively. The study showed that output from S1 is closer to real data than that from S2. S2 overestimates economic loss at roughly twice the level estimated under S1. The gap in economic aggregate between S1 and S0 is reduced to 3% in 2011, a level that would take another four years to achieve under S2.

  1. Improving head and neck CTA with hybrid and model-based iterative reconstruction techniques

    NARCIS (Netherlands)

    Niesten, J. M.; van der Schaaf, I. C.; Vos, P. C.; Willemink, MJ; Velthuis, B. K.

    2015-01-01

    AIM: To compare image quality of head and neck computed tomography angiography (CTA) reconstructed with filtered back projection (FBP), hybrid iterative reconstruction (HIR) and model-based iterative reconstruction (MIR) algorithms. MATERIALS AND METHODS: The raw data of 34 studies were

  2. Comprehensive Reconstruction and Visualization of Non-Coding Regulatory Networks in Human

    Science.gov (United States)

    Bonnici, Vincenzo; Russo, Francesco; Bombieri, Nicola; Pulvirenti, Alfredo; Giugno, Rosalba

    2014-01-01

    Research attention has increasingly been directed at understanding the functional roles of non-coding RNAs (ncRNAs). Many studies have demonstrated their deregulation in cancer and other human disorders. ncRNAs are also present in extracellular human body fluids such as serum and plasma, giving them great potential as non-invasive biomarkers. However, non-coding RNAs have been discovered relatively recently and a comprehensive database including all of them is still missing. Reconstructing and visualizing the network of ncRNA interactions are important steps toward understanding their regulatory mechanisms in complex systems. This work presents ncRNA-DB, a NoSQL database that integrates ncRNA data and interactions from a large number of well-established on-line repositories. The interactions involve RNA, DNA, proteins, and diseases. ncRNA-DB is available at http://ncrnadb.scienze.univr.it/ncrnadb/. It is equipped with three interfaces: web based, command-line, and a Cytoscape app called ncINetView. By accessing only one resource, users can search for ncRNAs and their interactions, build a network annotated with all known ncRNAs and associated diseases, and use all visual and mining features available in Cytoscape. PMID:25540777

  4. Modeling the dynamics of the lead bismuth eutectic experimental accelerator driven system by an infinite impulse response locally recurrent neural network

    International Nuclear Information System (INIS)

    Zio, Enrico; Pedroni, Nicola; Broggi, Matteo; Golea, Lucia Roxana

    2009-01-01

    In this paper, an infinite impulse response locally recurrent neural network (IIR-LRNN) is employed for modelling the dynamics of the Lead Bismuth Eutectic eXperimental Accelerator Driven System (LBE-XADS). The network is trained by recursive back-propagation (RBP) and its ability to estimate transients is tested under various conditions. The results demonstrate the robustness of the locally recurrent scheme in the reconstruction of complex nonlinear dynamic relationships.
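
    The defining feature of the IIR-LRNN is that each synapse is an infinite impulse response filter, so every neuron keeps a local memory of its own past pre-activations. A minimal sketch of one such neuron is shown below; the tap values are arbitrary, and the full LBE-XADS network and its RBP training are not reproduced.

      import numpy as np

      class IIRNeuron:
          """Neuron with an IIR synapse: local feedback of past filter outputs."""
          def __init__(self, b, a):
              self.b = np.asarray(b, float)   # moving-average (input) taps
              self.a = np.asarray(a, float)   # autoregressive (feedback) taps
              self.x_hist = np.zeros(len(self.b))
              self.y_hist = np.zeros(len(self.a))

          def step(self, x):
              self.x_hist = np.roll(self.x_hist, 1); self.x_hist[0] = x
              v = self.b @ self.x_hist + self.a @ self.y_hist   # IIR filtering
              self.y_hist = np.roll(self.y_hist, 1); self.y_hist[0] = v
              return np.tanh(v)                                 # activation

      neuron = IIRNeuron(b=[0.5, 0.3], a=[0.2])
      print([round(neuron.step(u), 3) for u in (1.0, 0.0, 0.0, 0.0)])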

  5. Coevolutionary modeling in network formation

    KAUST Repository

    Al-Shyoukh, Ibrahim

    2014-12-03

    Network coevolution, the process of network topology evolution in feedback with dynamical processes over the network nodes, is a common feature of many engineered and natural networks. In such settings, the change in network topology occurs at a comparable time scale to nodal dynamics. Coevolutionary modeling offers the possibility to better understand how and why network structures emerge. For example, social networks can exhibit a variety of structures, ranging from almost uniform to scale-free degree distributions. While current models of network formation can reproduce these structures, coevolutionary modeling can offer a better understanding of the underlying dynamics. This paper presents an overview of recent work on coevolutionary models of network formation, with an emphasis on the following three settings: (i) dynamic flow of benefits and costs, (ii) transient link establishment costs, and (iii) latent preferential attachment.

  7. Modeling online social signed networks

    Science.gov (United States)

    Li, Le; Gu, Ke; Zeng, An; Fan, Ying; Di, Zengru

    2018-04-01

    People's online rating behavior can be modeled directly by user-object bipartite networks. However, few works have been devoted to revealing the hidden relations between users, especially from the perspective of signed networks. We analyze the signed monopartite networks projected from the signed user-object bipartite networks, finding that the networks are highly clustered with obvious community structure. Interestingly, the positive clustering coefficient is remarkably higher than the negative clustering coefficient. Then, a Signed Growing Network model (SGN) based on local preferential attachment is proposed to generate user signed networks that have community structure and a high positive clustering coefficient. Other structural properties of the modeled networks are also found to be similar to those of the empirical networks.

  8. Three Dimensional Dynamic Model Based Wind Field Reconstruction from Lidar Data

    International Nuclear Information System (INIS)

    Raach, Steffen; Schlipf, David; Haizmann, Florian; Cheng, Po Wen

    2014-01-01

    Using the inflowing horizontal and vertical wind shears for an individual pitch controller is a promising method if blade bending measurements are not available. Due to the limited information provided by a lidar system, the reconstruction of shears in real time is a challenging task, especially for the horizontal shear in the presence of changing wind direction. The internal model principle has been shown to be a promising approach to estimate the shears and directions in 10-minute averages with real measurement data. The static model based wind vector field reconstruction is extended in this work by a dynamic reconstruction model based on Taylor's frozen turbulence hypothesis. The presented method provides time series over several seconds of the wind speed, shears and direction, which can be used directly in advanced optimal preview control. Therefore, this work is an important step towards the application of preview individual blade pitch control under realistic wind conditions. The method is tested using a turbulent wind field and a detailed lidar simulator. For the simulation, the turbulent wind field structure flows towards the lidar system and is continuously misaligned with respect to the horizontal axis of the wind turbine. Taylor's frozen turbulence hypothesis is taken into account to model the wind evolution. For the reconstruction, the structure is discretized into several stages, where each stage is reduced to an effective wind speed superposed with a linear horizontal and vertical wind shear. Previous lidar measurements are shifted, again using Taylor's hypothesis. The wind field reconstruction problem is then formulated as a nonlinear optimization problem, which minimizes the residual between the assumed wind model and the lidar measurements to obtain the misalignment angle as well as the effective wind speed and the wind shears for each stage. This method shows good results in reconstructing the wind characteristics of a three
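
    The last step above is an ordinary nonlinear least-squares problem. The sketch below fits one stage (effective speed, two linear shears, misalignment angle) to synthetic line-of-sight measurements with scipy; the beam geometry and all numbers are invented for illustration and are unrelated to the actual lidar configuration.

      import numpy as np
      from scipy.optimize import least_squares

      # Hypothetical geometry: beam unit vectors and (y, z) focus coordinates.
      beams = np.array([[0.95, 0.2, 0.2], [0.95, -0.2, 0.2],
                        [0.95, 0.2, -0.2], [0.95, -0.2, -0.2]])
      y = np.array([10.0, -10.0, 10.0, -10.0])
      z = np.array([10.0, 10.0, -10.0, -10.0])

      def wind_at(u0, dh, dv, misalign):
          """Effective speed plus linear shears, yawed by the misalignment."""
          u = u0 + dh * y + dv * z
          return np.stack([u * np.cos(misalign), u * np.sin(misalign),
                           np.zeros_like(u)], axis=1)

      def residuals(p, v_los):
          # Projected line-of-sight speeds minus the lidar measurements.
          return (beams * wind_at(*p)).sum(axis=1) - v_los

      # Synthetic measurements from a known wind state, then the inverse fit.
      v_los = (beams * wind_at(8.0, 0.02, 0.05, 0.1)).sum(axis=1)
      fit = least_squares(residuals, x0=[6.0, 0.0, 0.0, 0.0], args=(v_los,))
      print(np.round(fit.x, 3))   # approximately (8.0, 0.02, 0.05, 0.1)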

  9. Reconstruction and signal propagation analysis of the Syk signaling network in breast cancer cells.

    Directory of Open Access Journals (Sweden)

    Aurélien Naldi

    2017-03-01

    Full Text Available The ability to build in-depth cell signaling networks from vast experimental data is a key objective of computational biology. The spleen tyrosine kinase (Syk) protein, a well-characterized key player in immune cell signaling, was surprisingly first shown by our group to exhibit an onco-suppressive function in mammary epithelial cells, a finding corroborated by many other studies, but the molecular mechanisms of this function remain largely unsolved. Based on existing proteomic data, we report here the generation of an interaction-based network of signaling pathways controlled by Syk in breast cancer cells. Pathway enrichment of the Syk targets previously identified by quantitative phospho-proteomics indicated that Syk is engaged in cell adhesion, motility, growth and death. Using the components and interactions of these pathways, we bootstrapped the reconstruction of a comprehensive network covering Syk signaling in breast cancer cells. To generate in silico hypotheses on Syk signaling propagation, we developed a method that allows us to rank paths between Syk and its targets. We first annotated the network according to experimental datasets. We then combined shortest-path computation with random walk processes to estimate the importance of individual interactions and selected biologically relevant pathways in the network. Molecular and cell biology experiments allowed us to distinguish candidate mechanisms that underlie the impact of Syk on the regulation of cortactin and ezrin, both involved in actin-mediated cell adhesion and motility. The Syk network was further completed with the results of our biological validation experiments. The resulting Syk signaling sub-networks can be explored via an online visualization platform.
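
    The path-ranking idea (paths scored by a random-walk importance measure) can be illustrated in a few lines with networkx. The toy edges below are invented stand-ins, not the actual Syk network, and PageRank restarted at Syk is used here as one concrete choice of random-walk process.

      import networkx as nx

      # Hypothetical miniature signaling network downstream of Syk.
      G = nx.DiGraph([("Syk", "PI3K"), ("PI3K", "Akt"), ("Syk", "Vav1"),
                      ("Vav1", "Rac1"), ("Rac1", "cortactin"),
                      ("Akt", "ezrin"), ("PI3K", "Rac1")])

      # Random-walk importance of each node (walk restarted at Syk).
      rw = nx.pagerank(G, personalization={"Syk": 1.0})

      def rank_paths(G, source, target, k=3):
          """Score simple paths by the mean random-walk weight of their nodes."""
          paths = nx.all_simple_paths(G, source, target)
          scored = [(sum(rw[n] for n in p) / len(p), p) for p in paths]
          return sorted(scored, key=lambda t: t[0], reverse=True)[:k]

      for score, path in rank_paths(G, "Syk", "cortactin"):
          print(round(score, 3), " -> ".join(path))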

  10. The SENSE-Isomorphism Theoretical Image Voxel Estimation (SENSE-ITIVE) Model for Reconstruction and Observing Statistical Properties of Reconstruction Operators

    Science.gov (United States)

    Bruce, Iain P.; Karaman, M. Muge; Rowe, Daniel B.

    2012-01-01

    The acquisition of sub-sampled data from an array of receiver coils has become a common means of reducing data acquisition time in MRI. Of the various techniques used in parallel MRI, SENSitivity Encoding (SENSE) is one of the most common, making use of a complex-valued weighted least squares estimation to unfold the aliased images. It was recently shown in Bruce et al. [Magn. Reson. Imag. 29(2011):1267-1287] that when the SENSE model is represented in terms of a real-valued isomorphism, it assumes a skew-symmetric covariance between receiver coils, as well as an identity covariance structure between voxels. In this manuscript, we show that not only is the skew-symmetric coil covariance unlike that of real data, but the estimated covariance structure between voxels over a time series of experimental data is not an identity matrix. As such, a new model, entitled SENSE-ITIVE, is described with both revised coil and voxel covariance structures. Both the SENSE and SENSE-ITIVE models are represented in terms of real-valued isomorphisms, allowing for a statistical analysis of the reconstructed voxel means, variances, and correlations that result from the different coil and voxel covariance structures used in the reconstruction process. It is shown through both theoretical and experimental illustrations that the mis-specification of the coil and voxel covariance structures in the SENSE model results in a lower standard deviation in each voxel of the reconstructed images, and thus an artificial increase in SNR, compared to the standard deviation and SNR of the SENSE-ITIVE model, where both the coil and voxel covariances are appropriately accounted for. It is also shown that there are differences in the correlations induced by the reconstruction operations of both models, and consequently there are differences in the correlations estimated throughout the course of reconstructed time series. These differences in correlations could result in meaningful
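
    At the core of SENSE is a complex-valued weighted least-squares unfold of each set of aliased voxels; a minimal numpy sketch is given below with toy sizes. The coil sensitivities are random stand-ins and Psi is the coil noise covariance, here simply the identity; the SENSE-ITIVE voxel covariance refinement is not reproduced.

      import numpy as np

      def sense_unfold(y, S, Psi):
          """Unfold aliased coil measurements y (n_coils,) into R true voxels.
          S: (n_coils, R) coil sensitivities; Psi: coil noise covariance."""
          W = np.linalg.inv(Psi)
          A = S.conj().T @ W @ S
          return np.linalg.solve(A, S.conj().T @ W @ y)

      rng = np.random.default_rng(2)
      n_coils, R = 8, 2                    # 8 coils, acceleration factor 2
      S = (rng.standard_normal((n_coils, R))
           + 1j * rng.standard_normal((n_coils, R)))
      Psi = np.eye(n_coils)                # assumed coil covariance structure
      x_true = np.array([1.0 + 0.5j, 0.3 - 0.2j])
      y = S @ x_true + 0.01 * (rng.standard_normal(n_coils)
                               + 1j * rng.standard_normal(n_coils))
      print(np.round(sense_unfold(y, S, Psi), 3))   # close to x_true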

  11. Network Thermodynamic Curation of Human and Yeast Genome-Scale Metabolic Models

    Science.gov (United States)

    Martínez, Verónica S.; Quek, Lake-Ee; Nielsen, Lars K.

    2014-01-01

    Genome-scale models are used for an ever-widening range of applications. Although there has been much focus on specifying the stoichiometric matrix, the predictive power of genome-scale models depends equally on reaction directions. Two-thirds of the reactions in the two eukaryotic reconstructions Homo sapiens Recon 1 and Yeast 5 are specified as irreversible. However, these specifications are mainly based on biochemical textbooks or on similarity to other organisms and are rarely underpinned by detailed thermodynamic analysis. In this study, a workflow that is, to our knowledge, new, combining network-embedded thermodynamic and flux variability analysis, was used to evaluate existing irreversibility constraints in Recon 1 and Yeast 5 and to identify new ones. A total of 27 and 16 new irreversible reactions were identified in Recon 1 and Yeast 5, respectively, whereas only four reactions were found with directions incorrectly specified against thermodynamics (three in Yeast 5 and one in Recon 1). The workflow further identified for both models several isolated internal loops that require further curation. The framework also highlighted the need for substrate channeling (in human) and ATP hydrolysis (in yeast) for the essential reaction catalyzed by phosphoribosylaminoimidazole carboxylase in purine metabolism. Finally, the framework highlighted differences in proline metabolism between yeast (cytosolic anabolism and mitochondrial catabolism) and humans (exclusively mitochondrial metabolism). We conclude that network-embedded thermodynamics facilitates the specification and validation of irreversibility constraints in compartmentalized metabolic models, at the same time providing further insight into network properties. PMID:25028891
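
    Flux variability analysis itself is a sequence of linear programs: each flux is minimized and maximized subject to the steady-state constraint. The sketch below runs it with scipy on a three-reaction toy network (not Recon 1 or Yeast 5, and without the thermodynamic layer); a provisionally reversible reaction turns out to be effectively irreversible, which is the kind of conclusion the workflow draws.

      import numpy as np
      from scipy.optimize import linprog

      # Toy network: v0: -> A, v1: A -> B, v2: B -> (rows are metabolites A, B).
      S = np.array([[1.0, -1.0, 0.0],
                    [0.0, 1.0, -1.0]])
      bounds = [(0, 10), (-10, 10), (0, 10)]   # v1 provisionally reversible

      def fva(S, bounds):
          """Min/max each flux subject to the steady state S v = 0."""
          ranges = []
          for i in range(S.shape[1]):
              c = np.zeros(S.shape[1]); c[i] = 1.0
              lo = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds)
              hi = linprog(-c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds)
              ranges.append((lo.fun, -hi.fun))
          return ranges

      for i, (vmin, vmax) in enumerate(fva(S, bounds)):
          print(f"v{i}: [{vmin:.1f}, {vmax:.1f}]")   # v1 never runs backwards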

  12. Discussion of Source Reconstruction Models Using 3D MCG Data

    Science.gov (United States)

    Melis, Massimo De; Uchikawa, Yoshinori

    In this study we performed source reconstruction of magnetocardiographic signals generated by human heart activity to localize the site of origin of the heart activation. The localizations were performed in a four-compartment model of the human volume conductor. The analyses were conducted on normal subjects and on a subject affected by the Wolff-Parkinson-White syndrome. Different models of the source activation were used to evaluate whether a general model of the current source can be applied in the study of the cardiac inverse problem. The data analyses were repeated using normal and vector-component data of the MCG. The results show that a distributed source model has the better accuracy in performing the source reconstructions, and that 3D MCG data allow smaller differences between the different source models to be resolved.

  13. Modelling the physics in iterative reconstruction for transmission computed tomography

    Science.gov (United States)

    Nuyts, Johan; De Man, Bruno; Fessler, Jeffrey A.; Zbijewski, Wojciech; Beekman, Freek J.

    2013-01-01

    There is an increasing interest in iterative reconstruction (IR) as a key tool to improve quality and increase applicability of X-ray CT imaging. IR has the ability to significantly reduce patient dose; it provides the flexibility to reconstruct images from arbitrary X-ray system geometries and it allows detailed models of photon transport and detection physics to be included, to accurately correct for a wide variety of image-degrading effects. This paper reviews discretisation issues and the modelling of finite spatial resolution, Compton scatter in the scanned object, data noise and the energy spectrum. Widespread implementation of IR with highly accurate model-based correction, however, still requires significant effort. In addition, new hardware will provide new opportunities and challenges to improve CT with new modelling. PMID:23739261

  14. Distributed 3-D iterative reconstruction for quantitative SPECT

    International Nuclear Information System (INIS)

    Ju, Z.W.; Frey, E.C.; Tsui, B.M.W.

    1995-01-01

    The authors describe a distributed three-dimensional (3-D) iterative reconstruction library for quantitative single-photon emission computed tomography (SPECT). This library includes 3-D projector-backprojector pairs (PBPs) and distributed 3-D iterative reconstruction algorithms. The 3-D PBPs accurately and efficiently model various combinations of the image-degrading factors including attenuation, detector response and scatter response. These PBPs were validated by comparing projection data computed using the projectors with that from direct Monte Carlo (MC) simulations. The distributed 3-D iterative algorithms spread the projection-backprojection operations for all the projection angles over a heterogeneous network of single- or multi-processor computers to reduce the reconstruction time. Based on a master/slave paradigm, these distributed algorithms provide dynamic load balancing and fault tolerance. The distributed algorithms were verified by comparing images reconstructed using both the distributed and non-distributed algorithms. Computation times for distributed 3-D reconstructions running on up to 4 identical processors were reduced by a factor of approximately 80-90% of the number of participating processors, compared to those for non-distributed 3-D reconstructions running on a single processor. When combined with faster affordable computers, this library provides an efficient means for implementing accurate reconstruction and compensation methods to improve quality and quantitative accuracy in SPECT images
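
    The projector-backprojector pair is the computational core of such iterative algorithms; the classical MLEM update uses one forward projection and one backprojection per iteration. A minimal dense-matrix sketch is shown below with a random stand-in projector; it is not the authors' distributed library, and real implementations keep the system matrix implicit.

      import numpy as np

      def mlem(A, y, n_iter=50):
          """Maximum-likelihood EM: A plays the projector, A.T the backprojector."""
          x = np.ones(A.shape[1])              # flat initial image
          sens = A.T @ np.ones(A.shape[0])     # sensitivity: backprojected ones
          for _ in range(n_iter):
              ratio = y / np.maximum(A @ x, 1e-12)
              x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
          return x

      rng = np.random.default_rng(3)
      A = rng.random((60, 16))                     # toy projector matrix
      x_true = rng.random(16)
      y = rng.poisson(50 * (A @ x_true)) / 50.0    # noisy projections
      print(np.round(mlem(A, y)[:4], 2), np.round(x_true[:4], 2))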

  15. Birth-death models and coalescent point processes: the shape and probability of reconstructed phylogenies.

    Science.gov (United States)

    Lambert, Amaury; Stadler, Tanja

    2013-12-01

    Forward-in-time models of diversification (i.e., speciation and extinction) produce phylogenetic trees that grow "vertically" as time goes by. Pruning the extinct lineages out of such trees leads to natural models for reconstructed trees (i.e., phylogenies of extant species). Alternatively, reconstructed trees can be modelled by coalescent point processes (CPPs), where trees grow "horizontally" by the sequential addition of vertical edges. Each new edge starts at some random speciation time and ends at the present time; speciation times are drawn from the same distribution independently. CPPs lead to extremely fast computation of tree likelihoods and simulation of reconstructed trees. Their topology always follows the uniform distribution on ranked tree shapes (URT). We characterize which forward-in-time models lead to URT reconstructed trees and among these, which lead to CPP reconstructed trees. We show that for any "asymmetric" diversification model in which speciation rates only depend on time and extinction rates only depend on time and on a non-heritable trait (e.g., age), the reconstructed tree is CPP, even if extant species are incompletely sampled. If rates additionally depend on the number of species, the reconstructed tree is (only) URT (but not CPP). We characterize the common distribution of speciation times in the CPP description, and discuss incomplete species sampling as well as three special model cases in detail: (1) the extinction rate does not depend on a trait; (2) rates do not depend on time; (3) mass extinctions may happen additionally at certain points in the past. Copyright © 2013 Elsevier Inc. All rights reserved.
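
    The CPP construction makes simulation of a reconstructed tree trivially fast: node depths are drawn i.i.d. until one exceeds the stem age. The sketch below implements exactly that loop; the exponential depth distribution is only a placeholder, since the correct distribution depends on the speciation and extinction rates of the chosen model.

      import numpy as np

      def simulate_cpp(T, sample_depth, rng):
          """Coalescent point process: consecutive tips coalesce at i.i.d.
          depths H_i; the tree is closed by the first H_i >= stem age T."""
          depths = []
          while True:
              h = sample_depth(rng)
              if h >= T:
                  return depths            # coalescence depths between tips
              depths.append(h)

      rng = np.random.default_rng(4)
      depths = simulate_cpp(T=5.0,
                            sample_depth=lambda r: r.exponential(2.0), rng=rng)
      print(len(depths) + 1, "extant species; node depths:", np.round(depths, 2))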

  16. Improved magnetic resonance fingerprinting reconstruction with low-rank and subspace modeling.

    Science.gov (United States)

    Zhao, Bo; Setsompop, Kawin; Adalsteinsson, Elfar; Gagoski, Borjan; Ye, Huihui; Ma, Dan; Jiang, Yun; Ellen Grant, P; Griswold, Mark A; Wald, Lawrence L

    2018-02-01

    This article introduces a constrained imaging method based on low-rank and subspace modeling to improve the accuracy and speed of MR fingerprinting (MRF). A new model-based imaging method is developed for MRF to reconstruct high-quality time-series images and accurate tissue parameter maps (e.g., T1, T2, and spin density maps). Specifically, the proposed method exploits low-rank approximations of MRF time-series images, and further enforces temporal subspace constraints to capture magnetization dynamics. This allows the time-series image reconstruction problem to be formulated as a simple linear least-squares problem, which enables efficient computation. After image reconstruction, tissue parameter maps are estimated via dictionary-based pattern matching, as in the conventional approach. The effectiveness of the proposed method was evaluated with in vivo experiments. Compared with the conventional MRF reconstruction, the proposed method reconstructs time-series images with significantly reduced aliasing artifacts and noise contamination. Although the conventional approach exhibits some robustness to these corruptions, the improved time-series image reconstruction in turn provides more accurate tissue parameter maps. The improvement is pronounced especially when the acquisition time becomes short. The proposed method significantly improves the accuracy of MRF, and also reduces data acquisition time. Magn Reson Med 79:933-942, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
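
    A toy numpy sketch of the subspace idea follows: the temporal subspace comes from an SVD of the dictionary, each voxel time course is fit with a handful of subspace coefficients by linear least squares, and pattern matching runs on the denoised signal. The dictionary here is synthetic and approximately low-rank by construction; no k-space encoding or Bloch simulation is modeled.

      import numpy as np

      rng = np.random.default_rng(5)
      n_t, n_atoms, rank = 200, 500, 10

      # Stand-in MRF dictionary, built approximately low-rank (as real
      # Bloch-simulated dictionaries tend to be); not a physical simulation.
      U_true = np.linalg.qr(rng.standard_normal((n_t, rank)))[0]
      D = U_true @ rng.standard_normal((rank, n_atoms))

      # Temporal subspace: leading left singular vectors of the dictionary.
      U = np.linalg.svd(D, full_matrices=False)[0][:, :rank]

      # A voxel's noisy fingerprint: fit `rank` subspace coefficients
      # (a simple linear least-squares problem) instead of n_t samples.
      signal = D[:, 123] + 0.1 * rng.standard_normal(n_t)
      coeffs = np.linalg.lstsq(U, signal, rcond=None)[0]
      denoised = U @ coeffs

      # Dictionary-based pattern matching on the denoised time course.
      scores = denoised @ D / np.linalg.norm(D, axis=0)
      print("best-matching atom:", np.argmax(scores))   # expected: 123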

  17. Statistical Models for Social Networks

    NARCIS (Netherlands)

    Snijders, Tom A. B.; Cook, KS; Massey, DS

    2011-01-01

    Statistical models for social networks as dependent variables must represent the typical network dependencies between tie variables such as reciprocity, homophily, transitivity, etc. This review first treats models for single (cross-sectionally observed) networks and then for network dynamics. For

  18. Statistical shape model-based reconstruction of a scaled, patient-specific surface model of the pelvis from a single standard AP x-ray radiograph

    Energy Technology Data Exchange (ETDEWEB)

    Zheng Guoyan [Institute for Surgical Technology and Biomechanics, University of Bern, Stauffacherstrasse 78, CH-3014 Bern (Switzerland)

    2010-04-15

    Purpose: The aim of this article is to investigate the feasibility of using a statistical shape model (SSM)-based reconstruction technique to derive a scaled, patient-specific surface model of the pelvis from a single standard anteroposterior (AP) x-ray radiograph and the feasibility of estimating the scale of the reconstructed surface model by performing a surface-based 3D/3D matching. Methods: Data sets of 14 pelvises (one plastic bone, 12 cadavers, and one patient) were used to validate the single-image based reconstruction technique. This reconstruction technique is based on a hybrid 2D/3D deformable registration process combining a landmark-to-ray registration with a SSM-based 2D/3D reconstruction. The landmark-to-ray registration was used to find an initial scale and an initial rigid transformation between the x-ray image and the SSM. The estimated scale and rigid transformation were used to initialize the SSM-based 2D/3D reconstruction. The optimal reconstruction was then achieved in three stages by iteratively matching the projections of the apparent contours extracted from a 3D model derived from the SSM to the image contours extracted from the x-ray radiograph: Iterative affine registration, statistical instantiation, and iterative regularized shape deformation. The image contours are first detected by using a semiautomatic segmentation tool based on the Livewire algorithm and then approximated by a set of sparse dominant points that are adaptively sampled from the detected contours. The unknown scales of the reconstructed models were estimated by performing a surface-based 3D/3D matching between the reconstructed models and the associated ground truth models that were derived from a CT-based reconstruction method. Such a matching also allowed for computing the errors between the reconstructed models and the associated ground truth models. Results: The technique could reconstruct the surface models of all 14 pelvises directly from the landmark

  19. A neural network technique for remeshing of bone microstructure.

    Science.gov (United States)

    Fischer, Anath; Holdstein, Yaron

    2012-01-01

    Today, there is major interest within the biomedical community in developing accurate noninvasive means for the evaluation of bone microstructure and bone quality. Recent improvements in 3D imaging technology, among them the development of micro-CT and micro-MRI scanners, allow in-vivo 3D high-resolution scanning and reconstruction of large specimens or even whole bone models. Thus, the tendency today is to evaluate bone features using 3D assessment techniques rather than traditional 2D methods. For this purpose, high-quality meshing methods are required. However, the 3D meshes produced by current commercial systems usually are of low quality with respect to analysis and rapid prototyping. 3D model reconstruction of bone is difficult due to the complexity of bone microstructure. The small bone features lead to a great deal of neighborhood ambiguity near each vertex. The relatively new neural network method for mesh reconstruction has the potential to create or remesh 3D models accurately and quickly. A neural network (NN), which resembles an artificial intelligence (AI) algorithm, is a set of interconnected neurons, where each neuron is capable of making an autonomous arithmetic calculation. Moreover, each neuron is affected by its surrounding neurons through the structure of the network. This paper proposes an extension of the growing neural gas (GNG) neural network technique for remeshing a triangular manifold mesh that represents bone microstructure. This method has the advantage of reconstructing the surface of a genus-n freeform object without a priori knowledge regarding the original object, its topology, or its shape.

  20. Agent-based modeling and network dynamics

    CERN Document Server

    Namatame, Akira

    2016-01-01

    The book integrates agent-based modeling and network science. It is divided into three parts, namely, foundations, primary dynamics on and of social networks, and applications. The book begins with the network origin of agent-based models, known as cellular automata, and introduces a number of classic models, such as Schelling's segregation model and Axelrod's spatial game. The essence of the foundations part is the network-based agent-based models in which agents follow network-based decision rules. Under the influence of the substantial progress in network science in the late 1990s, these models have been extended from using lattices to using small-world networks, scale-free networks, etc. The book also shows that modern network science, mainly driven by game-theorists and sociophysicists, has inspired agent-based social scientists to develop alternative formation algorithms, known as agent-based social networks. The book reviews a number of pioneering and representative models in this family. Upon the gi...

  1. Modeling Epidemic Network Failures

    DEFF Research Database (Denmark)

    Ruepp, Sarah Renée; Fagertun, Anna Manolova

    2013-01-01

    This paper presents the implementation of a failure propagation model for transport networks when multiple failures occur, resulting in an epidemic. We implement the Susceptible Infected Disabled (SID) epidemic model and validate it by comparing it to analytical solutions. Furthermore, we evaluate the SID model's behavior and impact on the network performance, as well as the severity of the infection spreading. The simulations are carried out in OPNET Modeler. The model provides an important input to epidemic connection recovery mechanisms, and can, due to its flexibility and versatility, be used to evaluate multiple epidemic scenarios in various network types.
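
    At the population level the SID compartments can be written as ordinary differential equations and integrated directly, as sketched below with scipy. The rate structure (contact infection, disabling of infected nodes, repair of disabled nodes back to susceptible) and all rate values are assumptions for illustration; this is not the OPNET simulation model of the paper.

      import numpy as np
      from scipy.integrate import solve_ivp

      beta, delta, mu = 0.4, 0.2, 0.05   # infection / disabling / repair rates

      def sid(t, y):
          s, i, d = y                    # susceptible, infected, disabled
          return [-beta * s * i + mu * d,    # repaired nodes become susceptible
                  beta * s * i - delta * i,
                  delta * i - mu * d]

      sol = solve_ivp(sid, (0, 100), [0.99, 0.01, 0.0], dense_output=True)
      s, i, d = sol.sol(100.0)
      print(f"t=100: S={s:.2f} I={i:.2f} D={d:.2f}")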

  2. Skull Defects in Finite Element Head Models for Source Reconstruction from Magnetoencephalography Signals

    Science.gov (United States)

    Lau, Stephan; Güllmar, Daniel; Flemming, Lars; Grayden, David B.; Cook, Mark J.; Wolters, Carsten H.; Haueisen, Jens

    2016-01-01

    Magnetoencephalography (MEG) signals are influenced by skull defects. However, there is a lack of evidence of this influence during source reconstruction. Our objectives are to characterize errors in source reconstruction from MEG signals due to ignoring skull defects and to assess the ability of an exact finite element head model to eliminate such errors. A detailed finite element model of the head of a rabbit used in a physical experiment was constructed from magnetic resonance and co-registered computer tomography imaging that differentiated nine tissue types. Sources of the MEG measurements above intact skull and above skull defects respectively were reconstructed using a finite element model with the intact skull and one incorporating the skull defects. The forward simulation of the MEG signals reproduced the experimentally observed characteristic magnitude and topography changes due to skull defects. Sources reconstructed from measured MEG signals above intact skull matched the known physical locations and orientations. Ignoring skull defects in the head model during reconstruction displaced sources under a skull defect away from that defect. Sources next to a defect were reoriented. When skull defects, with their physical conductivity, were incorporated in the head model, the location and orientation errors were mostly eliminated. The conductivity of the skull defect material non-uniformly modulated the influence on MEG signals. We propose concrete guidelines for taking into account conducting skull defects during MEG coil placement and modeling. Exact finite element head models can improve localization of brain function, specifically after surgery. PMID:27092044

  3. Use of an object model in three dimensional image reconstruction. Application in medical imaging

    International Nuclear Information System (INIS)

    Delageniere-Guillot, S.

    1993-02-01

    Three-dimensional image reconstruction from projections corresponds to a set of techniques which give information on the inner structure of the studied object. These techniques are mainly used in medical imaging or in non-destructive evaluation. Image reconstruction is an ill-posed problem, so the inversion has to be regularized. This thesis deals with the introduction of a priori information within the reconstruction algorithm. The knowledge is introduced through an object model. The proposed scheme is applied to the medical domain for cone beam geometry. We address two specific problems. First, we study the reconstruction of high-contrast objects. This can be applied to bony morphology (bone/soft tissue) or to angiography (vascular structures opacified by injection of contrast agent). With noisy projections, the filtering steps of standard methods tend to smooth the natural transitions of the investigated object. In order to regularize the reconstruction while preserving contrast, we introduce a class model based on Markov random field theory. We develop a reconstruction scheme: analytic reconstruction-reprojection. Then, we address the case of an object changing during the acquisition. This can be applied to angiography when the contrast agent is moving through the vascular tree. The problem is then stated as a dynamic reconstruction. We define an autoregressive (AR) evolution model and we use an algebraic reconstruction method. We represent the object at a particular moment as an intermediary state between the states of the object at the beginning and at the end of the acquisition. We test both methods on simulated and real data, and we show how the use of an a priori model can improve the results. (author)

  4. Multiscale Pore Throat Network Reconstruction of Tight Porous Media Constrained by Mercury Intrusion Capillary Pressure and Nuclear Magnetic Resonance Measurements

    Science.gov (United States)

    Xu, R.; Prodanovic, M.

    2017-12-01

    Due to the low porosity and permeability of tight porous media, hydrocarbon productivity strongly depends on the pore structure. Effective characterization of pore/throat sizes and reconstruction of their connectivity in tight porous media remain challenging. Having a representative pore throat network, however, is valuable for the calculation of other petrophysical properties such as permeability, which is time-consuming and costly to obtain by experimental measurements. Due to the wide range of length scales encountered, a combination of experimental methods is usually required to obtain a comprehensive picture of the pore-body and pore-throat size distributions. In this work, we combine mercury intrusion capillary pressure (MICP) and nuclear magnetic resonance (NMR) measurements through percolation theory to derive the pore-body size distribution, following the work by Daigle et al. (2015). In their work, however, the actual pore-throat sizes and the distribution of coordination numbers are not well defined. To compensate for that, we build a 3D unstructured two-scale pore throat network model initialized by the measured porosity and the calculated pore-body size distributions, with tunable pore-throat size and coordination number distributions, which we further determine by matching the capillary pressure vs. saturation curve from the MICP measurement, based on the fact that the mercury intrusion process is controlled by both the pore/throat size distributions and the connectivity of the pore system. We validate our model by characterizing several core samples from tight Middle East carbonate, and use the network model to predict the apparent permeability of the samples under single-phase fluid flow conditions. Results show that the permeability we obtain is in reasonable agreement with the Coreval experimental measurements. The pore throat network can be further used to calculate relative permeability curves and simulate multiphase flow behavior, which will provide valuable

  5. Current approaches to gene regulatory network modelling

    Directory of Open Access Journals (Sweden)

    Brazma Alvis

    2007-09-01

    Full Text Available Abstract Many different approaches have been developed to model and simulate gene regulatory networks. We proposed the following categories for gene regulatory network models: network parts lists, network topology models, network control logic models, and dynamic models. Here we will describe some examples for each of these categories. We will study the topology of gene regulatory networks in yeast in more detail, comparing a direct network derived from transcription factor binding data and an indirect network derived from genome-wide expression data in mutants. Regarding the network dynamics we briefly describe discrete and continuous approaches to network modelling, then describe a hybrid model called Finite State Linear Model and demonstrate that some simple network dynamics can be simulated in this model.

  6. Network Bandwidth Utilization Forecast Model on High Bandwidth Network

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Wucherl; Sim, Alex

    2014-07-07

    With the increasing number of geographically distributed scientific collaborations and the growth in data size, it has become more challenging for users to achieve the best possible network performance on a shared network. We have developed a forecast model to predict expected bandwidth utilization for high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and the scheduling of data movements on high-bandwidth networks to accommodate the ever increasing data volume of large-scale scientific data applications. A univariate model is developed with STL and ARIMA on SNMP path utilization data. Compared with a traditional approach such as the Box-Jenkins methodology, our forecast model reduces computation time by 83.2%. It also shows resilience against abrupt changes in network usage. The accuracy of the forecast model is within the standard deviation of the monitored measurements.
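
    The decomposition-plus-ARIMA idea can be sketched with statsmodels: STL removes the seasonal cycle, ARIMA models the seasonally adjusted series, and the last seasonal cycle is added back to the forecast. The hourly synthetic series and the model orders below are illustrative assumptions, not the LBNL pipeline or its SNMP data.

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.seasonal import STL
      from statsmodels.tsa.arima.model import ARIMA

      # Synthetic hourly path-utilization series with a daily cycle.
      idx = pd.date_range("2014-01-01", periods=24 * 28, freq="h")
      rng = np.random.default_rng(6)
      util = (50 + 20 * np.sin(2 * np.pi * np.arange(len(idx)) / 24)
              + rng.normal(0, 3, len(idx)))
      series = pd.Series(util, index=idx)

      # Decompose with STL; model the seasonally adjusted part with ARIMA.
      stl = STL(series, period=24).fit()
      adjusted = series - stl.seasonal
      model = ARIMA(adjusted, order=(1, 0, 1)).fit()

      # Forecast one day ahead and restore the last seasonal cycle.
      forecast = model.forecast(24) + stl.seasonal.iloc[-24:].to_numpy()
      print(forecast.round(1).head())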

  8. RMBNToolbox: random models for biochemical networks

    Directory of Open Access Journals (Sweden)

    Niemi Jari

    2007-05-01

    Full Text Available Abstract Background There is increasing interest in modelling biochemical and cell biological networks, as well as in the computational analysis of these models. The development of analysis methodologies and related software is rapid in the field. However, the number of available models is still relatively small and the model sizes remain limited. The lack of kinetic information is usually the limiting factor for the construction of detailed simulation models. Results We present a computational toolbox for generating random biochemical network models which mimic real biochemical networks. The toolbox is called Random Models for Biochemical Networks. The toolbox works in the Matlab environment, and it makes it possible to generate various network structures, stoichiometries, kinetic laws for reactions, and parameters therein. The generation can be based on statistical rules and distributions, and more detailed information from real biochemical networks can be used in situations where it is known. The toolbox can be easily extended. The resulting network models can be exported in the format of the Systems Biology Markup Language. Conclusion While more information is accumulating on biochemical networks, random networks can be used as an intermediate step towards their better understanding. Random networks make it possible to study the effects of various network characteristics on the overall behavior of the network. Moreover, the construction of artificial network models provides the ground truth data needed in the validation of various computational methods in the fields of parameter estimation and data analysis.

  9. Metabolic network modeling of microbial interactions in natural and engineered environmental systems

    Directory of Open Access Journals (Sweden)

    Octavio ePerez-Garcia

    2016-05-01

    Full Text Available We review approaches to characterize metabolic interactions within microbial communities using Stoichiometric Metabolic Network (SMN) models for applications in environmental and industrial biotechnology. SMN models are computational tools used to evaluate the metabolic engineering potential of various organisms. They have successfully been applied to design and optimize the microbial production of antibiotics, alcohols and amino acids by single strains. To date, however, such models have rarely been applied to analyze and control the metabolism of more complex microbial communities. This is largely attributed to the diversity of microbial community functions, metabolisms and interactions. Here, we first review different types of microbial interaction and describe their relevance for natural and engineered environmental processes. Next, we provide a general description of the essential methods of the SMN modeling workflow, including the steps of network reconstruction, simulation through flux balance analysis (FBA), experimental data gathering, and model calibration. Then we broadly describe and compare four approaches to model microbial interactions using metabolic networks, i.e., (i) lumped networks, (ii) compartment-per-guild networks, (iii) bi-level optimization simulations and (iv) dynamic-SMN methods. These approaches can be used to integrate and analyze diverse microbial physiology, ecology and molecular community data. All of them (except the lumped approach) are suitable for incorporating species abundance data, but so far they have been used only to model simple communities of two to eight different species. Interactions based on substrate exchange and competition can be directly modeled using the above approaches. However, interactions based on metabolic feedbacks, such as product inhibition and syntrophy, require extensions to current models, incorporating gene regulation and compound accumulation mechanisms. SMN models of microbial

  10. Reconstructing Climate Change: The Model-Data Ping-Pong

    Science.gov (United States)

    Stocker, T. F.

    2017-12-01

    When Cesare Emiliani, the father of paleoceanography, made the first attempts at a quantitative reconstruction of Pleistocene climate change in the early 1950s, climate models were not yet conceived. The understanding of paleoceanographic records was therefore limited, and scientists had to resort to plausibility arguments to interpret their data. With the advent of coupled climate models in the early 1970s, for the first time hypotheses about climate processes and climate change could be tested in a dynamically consistent framework. However, only a model hierarchy can cope with the long time scales and the multi-component physical-biogeochemical Earth System. There are many examples how climate models have inspired the interpretation of paleoclimate data on the one hand, and conversely, how data have questioned long-held concepts and models. In this lecture I critically revisit a few examples of this model-data ping-pong, such as the bipolar seesaw, the mid-Holocene greenhouse gas increase, millennial and rapid CO2 changes reconstructed from polar ice cores, and the interpretation of novel paleoceanographic tracers. These examples also highlight many of the still unsolved questions and provide guidance for future research. The combination of high-resolution paleoceanographic data and modeling has never been more relevant than today. It will be the key for an appropriate risk assessment of impacts on the Earth System that are already underway in the Anthropocene.

  11. Dynamic Regulatory Network Reconstruction for Alzheimer’s Disease Based on Matrix Decomposition Techniques

    Directory of Open Access Journals (Sweden)

    Wei Kong

    2014-01-01

    Full Text Available Alzheimer's disease (AD) is the most common form of dementia and leads to irreversible neurodegenerative damage of the brain. Finding the dynamic responses of genes, signaling proteins, transcription factor (TF) activities, and regulatory networks of the progressively deteriorating course of AD would represent a significant advance in discovering the pathogenesis of AD. However, high-throughput technologies for measuring TF activities are not yet available on a genome-wide scale. In this study, based on DNA microarray gene expression data and a priori information on TFs, the network component analysis (NCA) algorithm is applied to determine the TF activities and their regulatory influences on target genes (TGs) in incipient, moderate, and severe AD. Based on these, the dynamic gene regulatory networks of the deteriorating courses of AD were reconstructed. To select significant genes which are differentially expressed in the different courses of AD, independent component analysis (ICA), which is better suited than traditional clustering methods and can successfully group one gene into different meaningful biological processes, was used. The molecular biological analysis showed that the changes in TF activities and the interactions of signaling proteins in mitosis, cell cycle, immune response, and inflammation play an important role in the deterioration of AD.

  12. Forward modeling of tree-ring data: a case study with a global network

    Science.gov (United States)

    Breitenmoser, P. D.; Frank, D.; Brönnimann, S.

    2012-04-01

    Information derived from tree-rings is one of the most powerful tools presently available for studying past climatic variability as well as for identifying fundamental relationships between tree growth and climate. Climate reconstructions are typically performed by extending linear relationships, established during the overlapping period of instrumental and climate proxy archives, into the past. Such analyses, however, are limited by methodological assumptions, including stationarity and linearity of the climate-proxy relationship. We investigate climate and tree-ring data using the Vaganov-Shashkin-Lite (VS-Lite) forward model of tree-ring width formation to examine the relations between actual tree growth and climate (as inferred from the simulated chronologies) and to reconstruct past climate variability. The VS-Lite model has been shown to produce skill comparable to that achieved using classical dendrochronological statistical modeling techniques when applied to simulations of a network of North American tree-ring chronologies. Although detailed mechanistic processes such as photosynthesis, storage, or cell processes are not modeled directly, the net effect of the dominating nonlinear climatic controls on tree growth is implemented in the model through the principle of limiting factors and threshold growth response functions. The VS-Lite model requires as inputs only latitude, monthly mean temperature and monthly accumulated precipitation. Hence, this simple, process-based model enables ring-width simulation at any location where monthly climate records exist. In this study, we analyse the growth response of simulated tree-rings to monthly climate conditions obtained from the 20th century reanalysis project back to 1871. These simulated tree-ring chronologies are compared to the climate-driven variability in worldwide observed tree-ring chronologies from the International Tree Ring Database. Results point toward the suitability of the relationship among actual tree
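
    The core of VS-Lite is the principle of limiting factors with threshold (ramp) growth responses: monthly growth is the minimum of a temperature response and a moisture response, scaled by insolation. The sketch below shows that skeleton; all threshold parameters and input series are illustrative assumptions, not calibrated values.

      import numpy as np

      def ramp(x, lo, hi):
          """Threshold growth response: 0 below lo, 1 above hi, linear between."""
          return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

      def ring_width_index(T, M, E, T1=4.0, T2=17.0, M1=0.02, M2=0.3):
          """Annual ring-width index from 12 monthly temperatures T, soil
          moistures M and insolation weights E (VS-Lite style skeleton)."""
          g = np.minimum(ramp(np.asarray(T), T1, T2),
                         ramp(np.asarray(M), M1, M2))   # most limiting factor
          return float((g * E).sum())                   # integrate over the year

      months = np.arange(12)
      T = 10 + 12 * np.sin((months - 3) * np.pi / 6)    # toy temperature cycle
      M = 0.15 + 0.10 * np.cos(months * np.pi / 6)      # toy moisture cycle
      E = 0.5 + 0.5 * np.sin((months - 2) * np.pi / 6)  # toy insolation weights
      print(round(ring_width_index(T, M, E), 2))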

  13. Biological transportation networks: Modeling and simulation

    KAUST Repository

    Albi, Giacomo

    2015-09-15

    We present a model for biological network formation originally introduced by Cai and Hu [Adaptation and optimization of biological transport networks, Phys. Rev. Lett. 111 (2013) 138701]. The modeling of fluid transportation (e.g., leaf venation and angiogenesis) and ion transportation networks (e.g., neural networks) is explained in detail and basic analytical features like the gradient flow structure of the fluid transportation network model and the impact of the model parameters on the geometry and topology of network formation are analyzed. We also present a numerical finite-element based discretization scheme and discuss sample cases of network formation simulations.

  14. "Growing trees backwards": Description of a stand reconstruction model

    Science.gov (United States)

    Jonathan D. Bakker; Andrew J. Sanchez Meador; Peter Z. Fule; David W. Huffman; Margaret M. Moore

    2008-01-01

    We describe an individual-tree model that uses contemporary measurements to "grow trees backward" and reconstruct past tree diameters and stand structure in ponderosa pine dominated stands of the Southwest. Model inputs are contemporary structural measurements of all snags, logs, stumps, and living trees, and radial growth measurements, if available. Key...

  15. Step patterns on vicinal reconstructed surfaces

    Science.gov (United States)

    Vilfan, Igor

    1996-04-01

    Step patterns on vicinal (2 × 1) reconstructed surfaces of the noble metals Au(110) and Pt(110), miscut towards the (100) orientation, are investigated. The free energy of the reconstructed surface with a network of crossing opposite steps is calculated in the strong chirality regime, when the steps cannot make overhangs. It is explained why the steps are not perpendicular to the direction of the miscut but instead form, in equilibrium, a network of crossing steps which makes the surface look like fish skin. The network formation is the consequence of competition between the predominantly elastic energy loss and the entropy gain. It is in agreement with recent scanning tunnelling microscopy observations on vicinal Au(110) and Pt(110) surfaces.

  16. A program to compute the soft Robinson-Foulds distance between phylogenetic networks.

    Science.gov (United States)

    Lu, Bingxin; Zhang, Louxin; Leong, Hon Wai

    2017-03-14

    Over the past two decades, phylogenetic networks have been studied to model reticulate evolutionary events. The relationships among phylogenetic networks, phylogenetic trees and clusters serve as the basis for reconstruction and comparison of phylogenetic networks. To understand these relationships, two problems are raised: the tree containment problem, which asks whether a phylogenetic tree is displayed in a phylogenetic network, and the cluster containment problem, which asks whether a cluster is represented at a node in a phylogenetic network. Both problems are NP-complete. A fast exponential-time algorithm for the cluster containment problem on arbitrary networks is developed and implemented in C. The resulting program is further extended into a computer program for fast computation of the soft Robinson-Foulds distance between phylogenetic networks. Two computer programs are developed for facilitating reconstruction and validation of phylogenetic network models in evolutionary and comparative genomics. Our simulation tests indicate that they are fast enough for use in practice. Additionally, our simulation data show that the distribution of the soft Robinson-Foulds distance between phylogenetic networks is unlikely to be normal.
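
    For phylogenetic trees, the Robinson-Foulds distance can be computed directly from cluster sets, which makes the connection to the cluster containment problem concrete. The sketch below handles only the tree case, where every cluster is read off a node; for networks, deciding whether a cluster is represented at some node is the NP-complete problem the paper's algorithm addresses.

    ```python
    def clusters(tree):
        """Collect the clusters (leaf sets below internal nodes) of a rooted tree
        given as nested tuples of leaf names, e.g. (("a", "b"), ("c", "d"))."""
        found = set()
        def walk(node):
            if isinstance(node, tuple):
                s = frozenset().union(*(walk(child) for child in node))
                found.add(s)
                return s
            return frozenset([node])
        walk(tree)
        return found

    def rf_distance(t1, t2):
        c1, c2 = clusters(t1), clusters(t2)
        return len(c1 ^ c2) / 2  # half the symmetric difference, by convention

    print(rf_distance((("a", "b"), ("c", "d")), ((("a", "c"), "b"), "d")))  # -> 2.0
    ```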

  17. Indian-ink perfusion based method for reconstructing continuous vascular networks in whole mouse brain.

    Directory of Open Access Journals (Sweden)

    Songchao Xue

    Full Text Available The topology of the cerebral vasculature, which is the energy transport corridor of the brain, can be used to study cerebral circulatory pathways. Limited by the restrictions of the vascular markers and imaging methods, studies on cerebral vascular structure now mainly focus on either observation of the macro vessels in a whole brain or imaging of the micro vessels in a small region. Simultaneous vascular studies of arteries, veins and capillaries have not been achieved in the whole brain of mammals. Here, we have combined the improved gelatin-Indian ink vessel perfusion process with Micro-Optical Sectioning Tomography for imaging the vessel network of an entire mouse brain. With 17 days of work, an integral dataset for the entire cerebral vessels was acquired. The voxel resolution is 0.35 × 0.4 × 2.0 µm³ for the whole brain. Besides the observations of fine and complex vascular networks in the reconstructed slices and entire brain views, a representative continuous vascular tracking has been demonstrated in the deep thalamus. This study provides an effective method for studying the entire macro and micro vascular networks of mouse brain simultaneously.

  18. Conceptualising forensic science and forensic reconstruction. Part I: A conceptual model.

    Science.gov (United States)

    Morgan, R M

    2017-11-01

    There has been a call for forensic science to actively return to the approach of scientific endeavour. The importance of incorporating an awareness of the requirements of the law in its broadest sense, and embedding research into both practice and policy within forensic science, is arguably critical to achieving such an endeavour. This paper presents a conceptual model (FoRTE) that outlines the holistic nature of trace evidence in the 'endeavour' of forensic reconstruction. This model offers insights into the different components intrinsic to transparent, reproducible and robust reconstructions in forensic science. The importance of situating evidence within the whole forensic science process (from crime scene to court), of developing evidence bases to underpin each stage, of frameworks that offer insights into the interaction of different lines of evidence, and of the role of expertise in decision making are presented and their interactions identified. It is argued that such a conceptual model has value in identifying the future steps for harnessing the value of trace evidence in forensic reconstruction. It also highlights the need to develop a nuanced approach to reconstructions that incorporates both empirical evidence bases and expertise. A conceptual understanding has the potential to ensure that the endeavour of forensic reconstruction has its roots in 'problem-solving' science, and can offer transparency and clarity in the conclusions and inferences drawn from trace evidence, thereby enabling the value of trace evidence to be realised in investigations and the courts.

  19. A study of epileptogenic network structures in rat hippocampal cultures using first spike latencies during synchronization events

    International Nuclear Information System (INIS)

    Raghavan, Mohan; Amrutur, Bharadwaj; Srinivas, Kalyan V; Sikdar, Sujit K

    2012-01-01

    Study of hypersynchronous activity is of prime importance for combating epilepsy. Studies on network structure typically reconstruct the network by measuring various aspects of the interaction between neurons and subsequently measure the properties of the reconstructed network. In sub-sampled networks such methods lead to significant errors in reconstruction. Using rat hippocampal neurons cultured on a multi-electrode array dish and a glutamate injury model of epilepsy in vitro, we studied synchronous activity in neuronal networks. Using the first spike latencies in various neurons during a network burst, we extract various recurring spatio-temporal onset patterns in the networks. Comparing the patterns seen in control and injured networks, we observe that injured networks express a wide diversity in their foci (origin) and activation pattern, while control networks show limited diversity. Furthermore, we note that onset patterns in glutamate injured networks show a positive correlation between synchronization delay and physical distance between neurons, while control networks do not. (paper)
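
    The basic measurement behind such onset patterns is simple to state: for each synchronization event, record the latency of each channel's first spike after burst onset. A minimal sketch (channel layout, window length and spike times are invented for illustration):

    ```python
    import numpy as np

    def onset_pattern(spike_times, burst_start, window=0.1):
        """First-spike latency of each channel within one synchronization event.
        spike_times: dict channel -> sorted array of spike times (s)."""
        latencies = {}
        for ch, ts in spike_times.items():
            in_win = ts[(ts >= burst_start) & (ts < burst_start + window)]
            latencies[ch] = (in_win[0] - burst_start) if in_win.size else np.nan
        return latencies

    # Toy example with three channels and one burst starting at t = 1.0 s
    spikes = {0: np.array([1.003, 1.02]), 1: np.array([1.011]), 2: np.array([0.4])}
    print(onset_pattern(spikes, burst_start=1.0))
    ```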

  20. Detection and localization of change points in temporal networks with the aid of stochastic block models

    Science.gov (United States)

    De Ridder, Simon; Vandermarliere, Benjamin; Ryckebusch, Jan

    2016-11-01

    A framework based on generalized hierarchical random graphs (GHRGs) for the detection of change points in the structure of temporal networks has recently been developed by Peel and Clauset (2015 Proc. 29th AAAI Conf. on Artificial Intelligence). We build on this methodology and extend it to also include the versatile stochastic block models (SBMs) as a parametric family for reconstructing the empirical networks. We use five different techniques for change point detection on prototypical temporal networks, both empirical and synthetic. We find that none of the considered methods can consistently outperform the others when it comes to detecting and locating the expected change points in empirical temporal networks. With respect to the precision and recall of the detected change points, we find that the method based on a degree-corrected SBM has better recall properties than the other dedicated methods, especially for sparse networks and smaller sliding time window widths.

  1. Bayesian Multi-Energy Computed Tomography reconstruction approaches based on decomposition models

    International Nuclear Information System (INIS)

    Cai, Caifang

    2013-01-01

    Multi-Energy Computed Tomography (MECT) makes it possible to get multiple fractions of basis materials without segmentation. In medical applications, one is the soft-tissue-equivalent water fraction and the other is the hard-matter-equivalent bone fraction. Practical MECT measurements are usually obtained with polychromatic X-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in Beam-Hardening Artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log pre-processing and the water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on non-linear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Following Bayesian inference, the decomposition fractions and observation variance are estimated using the joint Maximum A Posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. This transforms the joint MAP estimation problem into a minimization problem with a non-quadratic cost function. To solve it, the use of a monotone Conjugate Gradient (CG) algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also

  2. Network thermodynamic curation of human and yeast genome-scale metabolic models.

    Science.gov (United States)

    Martínez, Verónica S; Quek, Lake-Ee; Nielsen, Lars K

    2014-07-15

    Genome-scale models are used for an ever-widening range of applications. Although there has been much focus on specifying the stoichiometric matrix, the predictive power of genome-scale models equally depends on reaction directions. Two-thirds of the reactions in the two eukaryotic reconstructions Homo sapiens Recon 1 and Yeast 5 are specified as irreversible. However, these specifications are mainly based on biochemical textbooks or on similarity to other organisms and are rarely underpinned by detailed thermodynamic analysis. In this study, a workflow combining network-embedded thermodynamic analysis and flux variability analysis, to our knowledge new, was used to evaluate existing irreversibility constraints in Recon 1 and Yeast 5 and to identify new ones. A total of 27 and 16 new irreversible reactions were identified in Recon 1 and Yeast 5, respectively, whereas only four reactions were found with directions incorrectly specified against thermodynamics (three in Yeast 5 and one in Recon 1). The workflow further identified for both models several isolated internal loops that require further curation. The framework also highlighted the need for substrate channeling (in human) and ATP hydrolysis (in yeast) for the essential reaction catalyzed by phosphoribosylaminoimidazole carboxylase in purine metabolism. Finally, the framework highlighted differences in proline metabolism between yeast (cytosolic anabolism and mitochondrial catabolism) and humans (exclusively mitochondrial metabolism). We conclude that network-embedded thermodynamics facilitates the specification and validation of irreversibility constraints in compartmentalized metabolic models, at the same time providing further insight into network properties.
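
    The underlying directionality test is worth spelling out: a reaction may safely be constrained as irreversible only if its Gibbs energy keeps one sign over the entire feasible range of metabolite concentrations (equivalently, of the reaction quotient Q). A minimal sketch, with an assumed standard Gibbs energy and an assumed feasible Q range:

    ```python
    import math

    R = 8.314e-3  # gas constant, kJ mol^-1 K^-1

    def reaction_is_irreversible(dG0, metabolite_Q_range, T=298.15):
        """Network-embedded thermodynamics in miniature: the reaction direction can
        be fixed only if Delta_G = dG0 + R*T*ln(Q) has one sign over the whole
        feasible range of the reaction quotient Q, given as (Q_min, Q_max)."""
        q_min, q_max = metabolite_Q_range
        dG_min = dG0 + R * T * math.log(q_min)
        dG_max = dG0 + R * T * math.log(q_max)
        return dG_max < 0 or dG_min > 0   # sign is fixed either way

    # e.g. a hexokinase-like reaction with an assumed dG0 of -17 kJ/mol
    print(reaction_is_irreversible(-17.0, (1e-5, 1e2)))  # -> True
    ```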

  3. Expediting model-based optoacoustic reconstructions with tomographic symmetries

    International Nuclear Information System (INIS)

    Lutzweiler, Christian; Deán-Ben, Xosé Luís; Razansky, Daniel

    2014-01-01

    Purpose: Image quantification in optoacoustic tomography implies the use of accurate forward models of excitation, propagation, and detection of optoacoustic signals, while inversions with high spatial resolution usually involve very large matrices, leading to unreasonably long computation times. The development of fast and memory-efficient model-based approaches therefore represents an important challenge for advancing the quantitative and dynamic imaging capabilities of tomographic optoacoustic imaging. Methods: Herein, a method for simplification and acceleration of model-based inversions, relying on inherent symmetries present in common tomographic acquisition geometries, is introduced. The method is showcased for the case of cylindrical symmetries by using polar image discretization of the time-domain optoacoustic forward model combined with efficient storage and inversion strategies. Results: The suggested methodology is shown to render fast and accurate model-based inversions in both numerical simulations and post mortem small animal experiments. In the case of a full-view detection scheme, the memory requirements are reduced by one order of magnitude while high-resolution reconstructions are achieved at video rate. Conclusions: By considering the rotational symmetry present in many tomographic optoacoustic imaging systems, the proposed methodology allows exploiting the advantages of model-based algorithms with feasible computational requirements and fast reconstruction times, so that its convenience and general applicability in optoacoustic imaging systems with tomographic symmetries are anticipated

  4. Artificial neural network approach to predict surgical site infection after free-flap reconstruction in patients receiving surgery for head and neck cancer.

    Science.gov (United States)

    Kuo, Pao-Jen; Wu, Shao-Chun; Chien, Peng-Chen; Chang, Shu-Shya; Rau, Cheng-Shyuan; Tai, Hsueh-Ling; Peng, Shu-Hui; Lin, Yi-Chun; Chen, Yi-Chun; Hsieh, Hsiao-Yun; Hsieh, Ching-Hua

    2018-03-02

    The aim of this study was to develop an effective surgical site infection (SSI) prediction model in patients receiving free-flap reconstruction after surgery for head and neck cancer using an artificial neural network (ANN), and to compare its predictive power with that of conventional logistic regression (LR). There were 1,836 patients with 1,854 free-flap reconstructions and 438 postoperative SSIs in the dataset for analysis. They were randomly assigned, in a ratio of 7:3, into a training set and a test set. Based on comprehensive characteristics of patients and diseases in the absence or presence of operative data, prediction of SSI was performed at two time points (pre-operatively and post-operatively) with a feed-forward ANN and an LR model. In addition to the calculated accuracy, sensitivity, and specificity, the predictive performance of the ANN and LR models was assessed based on area under the curve (AUC) measures of receiver operating characteristic curves and the Brier score. The ANN had a significantly higher AUC for post-operative prediction (0.892) and for pre-operative prediction (0.808) than LR (both p<0.0001). In addition, the AUC of post-operative prediction was significantly higher than that of pre-operative prediction by the ANN (p<0.0001). With the highest AUC and the lowest Brier score (0.090), the post-operative prediction by the ANN had the highest overall predictive performance. The post-operative prediction by ANN thus had the highest overall performance in predicting SSI after free-flap reconstruction in patients receiving surgery for head and neck cancer.
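
    The modeling comparison in this study follows a widely used pattern: split the cohort 7:3, fit a logistic regression and a feed-forward network on the training set, and compare AUC and Brier score on the test set. A minimal scikit-learn sketch on synthetic stand-in data (features, sizes and hyperparameters are illustrative, not those of the study):

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import roc_auc_score, brier_score_loss

    # Synthetic stand-in for the patient table (features are hypothetical)
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1854, 12))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1854) > 1.2).astype(int)

    # 7:3 split as in the study
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    for name, model in [("LR", LogisticRegression(max_iter=1000)),
                        ("ANN", MLPClassifier(hidden_layer_sizes=(32, 16),
                                              max_iter=2000, random_state=0))]:
        p = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
        print(name, "AUC=%.3f" % roc_auc_score(y_te, p),
              "Brier=%.3f" % brier_score_loss(y_te, p))
    ```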

  5. Generalized Network Psychometrics : Combining Network and Latent Variable Models

    NARCIS (Netherlands)

    Epskamp, S.; Rhemtulla, M.; Borsboom, D.

    2017-01-01

    We introduce the network model as a formal psychometric model, conceptualizing the covariance between psychometric indicators as resulting from pairwise interactions between observable variables in a network structure. This contrasts with standard psychometric models, in which the covariance between

  6. Modeling genome-wide dynamic regulatory network in mouse lungs with influenza infection using high-dimensional ordinary differential equations.

    Science.gov (United States)

    Wu, Shuang; Liu, Zhi-Ping; Qiu, Xing; Wu, Hulin

    2014-01-01

    The immune response to viral infection is regulated by an intricate network of many genes and their products. The reverse engineering of gene regulatory networks (GRNs) using mathematical models from time course gene expression data collected after influenza infection is key to our understanding of the mechanisms involved in controlling influenza infection within a host. A five-step pipeline (detection of temporally differentially expressed genes, clustering of genes into co-expressed modules, identification of network structure, refinement of parameter estimates, and functional enrichment analysis) is developed for reconstructing high-dimensional dynamic GRNs from genome-wide time course gene expression data. Applying the pipeline to the time course gene expression data from influenza-infected mouse lungs, we have identified 20 distinct temporal expression patterns in the differentially expressed genes and constructed a module-based dynamic network using a linear ODE model. Both intra-module and inter-module annotations and regulatory relationships of our inferred network show some interesting findings and are highly consistent with existing knowledge about the immune response in mice after influenza infection. The proposed method is a computationally efficient, data-driven pipeline bridging experimental data, mathematical modeling, and statistical analysis. The application to the influenza infection data illustrates the potential of our pipeline to provide valuable insights into systematic modeling of complicated biological processes.
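
    At the network-identification step, a linear ODE model dx/dt = A x is posited for the module expression trajectories, and the connectivity matrix A is estimated from the time course. The sketch below is a naive stand-in for that step (finite-difference derivatives plus ridge regression), not the pipeline's refined estimation procedure; the network and all constants are invented.

    ```python
    import numpy as np

    A_true = np.array([[-0.5, 0.8, 0.0],
                       [0.0, -0.3, 0.6],
                       [-0.4, 0.0, -0.2]])   # hypothetical 3-module network

    # Simulate a time course dx/dt = A x with forward Euler
    dt, steps = 0.05, 200
    X = np.zeros((steps, 3))
    X[0] = [1.0, 0.5, -0.2]
    for t in range(steps - 1):
        X[t + 1] = X[t] + dt * (A_true @ X[t])

    # Estimate A: regress finite-difference derivatives on states (ridge-regularized)
    dX = (X[1:] - X[:-1]) / dt
    Z = X[:-1]
    lam = 1e-3
    A_hat = np.linalg.solve(Z.T @ Z + lam * np.eye(3), Z.T @ dX).T
    print(np.round(A_hat, 2))  # close to A_true up to discretization bias
    ```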

  7. Model-based iterative reconstruction for reduction of radiation dose in abdominopelvic CT: comparison to adaptive statistical iterative reconstruction.

    Science.gov (United States)

    Yasaka, Koichiro; Katsura, Masaki; Akahane, Masaaki; Sato, Jiro; Matsuda, Izuru; Ohtomo, Kuni

    2013-12-01

    To evaluate dose reduction and image quality of abdominopelvic computed tomography (CT) reconstructed with model-based iterative reconstruction (MBIR) compared to adaptive statistical iterative reconstruction (ASIR). In this prospective study, 85 patients underwent referential-, low-, and ultralow-dose unenhanced abdominopelvic CT. Images were reconstructed with ASIR for low-dose (L-ASIR) and ultralow-dose CT (UL-ASIR), and with MBIR for ultralow-dose CT (UL-MBIR). Image noise was measured in the abdominal aorta and iliopsoas muscle. Subjective image analyses and a lesion detection study (adrenal nodules) were conducted by two blinded radiologists. A reference standard was established by a consensus panel of two different radiologists using referential-dose CT reconstructed with filtered back projection. Compared to low-dose CT, there was a 63% decrease in dose-length product with ultralow-dose CT. UL-MBIR had significantly lower image noise and fewer streak artifacts than L-ASIR and UL-ASIR, and there were no significant differences between UL-MBIR and L-ASIR in diagnostic acceptability (p>0.65) or diagnostic performance for adrenal nodules (p>0.87). MBIR significantly improves image noise and streak artifacts compared to ASIR, and can achieve radiation dose reduction without severely compromising image quality.

  8. Joint model of motion and anatomy for PET image reconstruction

    International Nuclear Information System (INIS)

    Qiao Feng; Pan Tinsu; Clark, John W. Jr.; Mawlawi, Osama

    2007-01-01

    Anatomy-based positron emission tomography (PET) image enhancement techniques have been shown to have the potential for improving PET image quality. However, these techniques assume an accurate alignment between the anatomical and the functional images, which is not always valid when imaging the chest due to respiratory motion. In this article, we present a joint model of both motion and anatomical information by integrating a motion-incorporated PET imaging system model with an anatomy-based maximum a posteriori image reconstruction algorithm. The mismatched anatomical information due to motion can thus be effectively utilized through this joint model. A computer simulation and a phantom study were conducted to assess the efficacy of the joint model, whereby motion and anatomical information were either modeled separately or combined. The reconstructed images in each case were compared to corresponding reference images obtained using a quadratic image prior based maximum a posteriori reconstruction algorithm for quantitative accuracy. Results of these studies indicated that while modeling anatomical information or motion alone improved the PET image quantitation accuracy, a larger improvement in accuracy was achieved when using the joint model. In the computer simulation study and using similar image noise levels, the improvement in quantitation accuracy compared to the reference images was 5.3% and 19.8% when using anatomical or motion information alone, respectively, and 35.5% when using the joint model. In the phantom study, these results were 5.6%, 5.8%, and 19.8%, respectively. These results suggest that motion compensation is important in order to effectively utilize anatomical information in chest imaging using PET. The joint motion-anatomy model presented in this paper provides a promising solution to this problem

  9. Bayesian Network Webserver: a comprehensive tool for biological network modeling.

    Science.gov (United States)

    Ziebarth, Jesse D; Bhattacharya, Anindya; Cui, Yan

    2013-11-01

    The Bayesian Network Webserver (BNW) is a platform for comprehensive network modeling of systems genetics and other biological datasets. It allows users to quickly and seamlessly upload a dataset, learn the structure of the network model that best explains the data and use the model to understand relationships between network variables. Many datasets, including those used to create genetic network models, contain both discrete (e.g. genotype) and continuous (e.g. gene expression traits) variables, and BNW allows for modeling hybrid datasets. Users of BNW can incorporate prior knowledge during structure learning through an easy-to-use structural constraint interface. After structure learning, users are immediately presented with an interactive network model, which can be used to make testable hypotheses about network relationships. BNW, including a downloadable structure learning package, is available at http://compbio.uthsc.edu/BNW. (The BNW interface for adding structural constraints uses HTML5 features that are not supported by current versions of Internet Explorer. We suggest using other browsers (e.g. Google Chrome or Mozilla Firefox) when accessing BNW.) ycui2@uthsc.edu. Supplementary data are available at Bioinformatics online.

  10. Eight challenges for network epidemic models

    Directory of Open Access Journals (Sweden)

    Lorenzo Pellis

    2015-03-01

    Full Text Available Networks offer a fertile framework for studying the spread of infection in human and animal populations. However, owing to the inherent high-dimensionality of networks themselves, modelling transmission through networks is mathematically and computationally challenging. Even the simplest network epidemic models present unanswered questions. Attempts to improve the practical usefulness of network models by including realistic features of contact networks and of host–pathogen biology (e.g. waning immunity) have made some progress, but robust analytical results remain scarce. A more general theory is needed to understand the impact of network structure on the dynamics and control of infection. Here we identify a set of challenges that provide scope for active research in the field of network epidemic models.

  11. Complex networks-based energy-efficient evolution model for wireless sensor networks

    Energy Technology Data Exchange (ETDEWEB)

    Zhu Hailin [Beijing Key Laboratory of Intelligent Telecommunications Software and Multimedia, Beijing University of Posts and Telecommunications, P.O. Box 106, Beijing 100876 (China)], E-mail: zhuhailin19@gmail.com; Luo Hong [Beijing Key Laboratory of Intelligent Telecommunications Software and Multimedia, Beijing University of Posts and Telecommunications, P.O. Box 106, Beijing 100876 (China); Peng Haipeng; Li Lixiang; Luo Qun [Information Secure Center, State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, P.O. Box 145, Beijing 100876 (China)

    2009-08-30

    Based on complex networks theory, we present two self-organized energy-efficient models for wireless sensor networks in this paper. The first model constructs the wireless sensor network according to the connectivity and remaining energy of each sensor node; it can thus produce scale-free networks, which are tolerant to random errors. In the second model, we not only consider the remaining energy, but also introduce a constraint on the number of links per node. This model can make the energy consumption of the whole network more balanced. Finally, we present numerical experiments for the two models.

  12. Complex networks-based energy-efficient evolution model for wireless sensor networks

    International Nuclear Information System (INIS)

    Zhu Hailin; Luo Hong; Peng Haipeng; Li Lixiang; Luo Qun

    2009-01-01

    Based on complex networks theory, we present two self-organized energy-efficient models for wireless sensor networks in this paper. The first model constructs the wireless sensor network according to the connectivity and remaining energy of each sensor node; it can thus produce scale-free networks, which are tolerant to random errors. In the second model, we not only consider the remaining energy, but also introduce a constraint on the number of links per node. This model can make the energy consumption of the whole network more balanced. Finally, we present numerical experiments for the two models.

  13. Reconstructing building mass models from UAV images

    KAUST Repository

    Li, Minglei

    2015-07-26

    We present an automatic reconstruction pipeline for large scale urban scenes from aerial images captured by a camera mounted on an unmanned aerial vehicle. Using state-of-the-art Structure from Motion and Multi-View Stereo algorithms, we first generate a dense point cloud from the aerial images. Based on the statistical analysis of the footprint grid of the buildings, the point cloud is classified into different categories (i.e., buildings, ground, trees, and others). Roof structures are extracted for each individual building using Markov random field optimization. Then, a contour refinement algorithm based on pivot point detection is utilized to refine the contour of patches. Finally, polygonal mesh models are extracted from the refined contours. Experiments on various scenes as well as comparisons with state-of-the-art reconstruction methods demonstrate the effectiveness and robustness of the proposed method.

  14. Deep Convolutional Networks for Event Reconstruction and Particle Tagging on NOvA and DUNE

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Deep Convolutional Neural Networks (CNNs) have been widely applied in computer vision to solve complex problems in image recognition and analysis. In recent years many efforts have emerged to extend the use of this technology to HEP applications, including the Convolutional Visual Network (CVN), our implementation for identification of neutrino events. In this presentation I will describe the core concepts of CNNs, the details of our particular implementation in the Caffe framework and our application to identify NOvA events. NOvA is a long baseline neutrino experiment whose main goal is the measurement of neutrino oscillations. This relies on the accurate identification and reconstruction of the neutrino flavor in the interactions we observe. In 2016 the NOvA experiment released results for the observation of oscillations in the νμ → νe channel, the first HEP result employing CNNs. I will also discuss our approach at event identification on NOvA as well as recent developments in the application of CNN...

  15. Brand Marketing Model on Social Networks

    Directory of Open Access Journals (Sweden)

    Jolita Jezukevičiūtė

    2014-04-01

    Full Text Available The paper analyzes the brand and its marketing solutions on social networks. This analysis led to the creation of an improved brand marketing model for social networks, which will contribute to rapid and inexpensive brand recognition for an organization, increase competitive advantage and enhance consumer loyalty. The brand and the variety of social networks are therefore becoming a hot research area for brand marketing models on social networks. An exploratory single-case analysis of the world's most successful brand marketing models revealed the social networking tools of brand marketing that affect consumers the most. Based on information analysis and methodological studies, a brand marketing model for social networks is developed.

  16. Photorealistic large-scale urban city model reconstruction.

    Science.gov (United States)

    Poullis, Charalambos; You, Suya

    2009-01-01

    The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games, and films. A realistic representation of the large-scale environments is therefore imperative for the success of such applications since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, the task of creating such large-scale virtual environments still remains time-consuming, manual work. In this work, we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel, extendible, parameterized geometric primitive is presented for the automatic building identification and reconstruction of building structures. In addition, buildings with complex roofs containing complex linear and nonlinear surfaces are reconstructed interactively using a linear polygonal and a nonlinear primitive, respectively. Second, we present a rendering pipeline for the composition of photorealistic textures, which unlike existing techniques, can recover missing or occluded texture information by integrating multiple information captured from different optical sensors (ground, aerial, and satellite).

  17. Image reconstruction by domain-transform manifold learning

    Science.gov (United States)

    Zhu, Bo; Liu, Jeremiah Z.; Cauley, Stephen F.; Rosen, Bruce R.; Rosen, Matthew S.

    2018-03-01

    Image reconstruction is essential for imaging applications across the physical and life sciences, including optical and radar systems, magnetic resonance imaging, X-ray computed tomography, positron emission tomography, ultrasound imaging and radio astronomy. During image acquisition, the sensor encodes an intermediate representation of an object in the sensor domain, which is subsequently reconstructed into an image by an inversion of the encoding function. Image reconstruction is challenging because analytic knowledge of the exact inverse transform may not exist a priori, especially in the presence of sensor non-idealities and noise. Thus, the standard reconstruction approach involves approximating the inverse function with multiple ad hoc stages in a signal processing chain, the composition of which depends on the details of each acquisition strategy, and often requires expert parameter tuning to optimize reconstruction performance. Here we present a unified framework for image reconstruction—automated transform by manifold approximation (AUTOMAP)—which recasts image reconstruction as a data-driven supervised learning task that allows a mapping between the sensor and the image domain to emerge from an appropriate corpus of training data. We implement AUTOMAP with a deep neural network and exhibit its flexibility in learning reconstruction transforms for various magnetic resonance imaging acquisition strategies, using the same network architecture and hyperparameters. We further demonstrate that manifold learning during training results in sparse representations of domain transforms along low-dimensional data manifolds, and observe superior immunity to noise and a reduction in reconstruction artefacts compared with conventional handcrafted reconstruction methods. In addition to improving the reconstruction performance of existing acquisition methodologies, we anticipate that AUTOMAP and other learned reconstruction approaches will accelerate the development
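
    The architecture described here can be sketched compactly: fully connected layers learn an approximate inverse transform from the sensor domain to the image domain, followed by a convolutional stage that acts as a learned refinement. A PyTorch sketch with illustrative layer sizes (not the exact architecture or hyperparameters of AUTOMAP):

    ```python
    import torch
    import torch.nn as nn

    class AutomapLike(nn.Module):
        """Domain-transform network in the spirit of AUTOMAP: dense layers
        approximate the unknown inverse transform, convolutions refine the
        result. All sizes below are illustrative assumptions."""
        def __init__(self, n_sensor, img_side):
            super().__init__()
            n_img = img_side * img_side
            self.img_side = img_side
            self.fc = nn.Sequential(
                nn.Linear(n_sensor, n_img), nn.Tanh(),
                nn.Linear(n_img, n_img), nn.Tanh(),
            )
            self.conv = nn.Sequential(
                nn.Conv2d(1, 64, 5, padding=2), nn.ReLU(),
                nn.Conv2d(64, 64, 5, padding=2), nn.ReLU(),
                nn.ConvTranspose2d(64, 1, 7, padding=3),
            )

        def forward(self, s):                       # s: (batch, n_sensor)
            x = self.fc(s).view(-1, 1, self.img_side, self.img_side)
            return self.conv(x)

    # e.g. complex k-space data flattened to 2 * 64 * 64 real values per sample
    net = AutomapLike(n_sensor=2 * 64 * 64, img_side=64)
    print(net(torch.randn(2, 2 * 64 * 64)).shape)   # -> (2, 1, 64, 64)
    ```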

  18. Introducing Synchronisation in Deterministic Network Models

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Jessen, Jan Jakob; Nielsen, Jens Frederik D.

    2006-01-01

    The paper addresses performance analysis for distributed real time systems through deterministic network modelling. Its main contribution is the introduction and analysis of models for synchronisation between tasks and/or network elements. Typical patterns of synchronisation are presented, leading to the suggestion of suitable network models. An existing model for flow control is presented and an inherent weakness is revealed and remedied. Examples are given and numerically analysed through deterministic network modelling. Results are presented to highlight the properties of the suggested models.

  19. Network Reconstruction From High-Dimensional Ordinary Differential Equations.

    Science.gov (United States)

    Chen, Shizhe; Shojaie, Ali; Witten, Daniela M

    2017-01-01

    We consider the task of learning a dynamical system from high-dimensional time-course data. For instance, we might wish to estimate a gene regulatory network from gene expression data measured at discrete time points. We model the dynamical system nonparametrically as a system of additive ordinary differential equations. Most existing methods for parameter estimation in ordinary differential equations estimate the derivatives from noisy observations. This is known to be challenging and inefficient. We propose a novel approach that does not involve derivative estimation. We show that the proposed method can consistently recover the true network structure even in high dimensions, and we demonstrate empirical improvement over competing approaches. Supplementary materials for this article are available online.

  20. Sensing of complex buildings and reconstruction into photo-realistic 3D models

    NARCIS (Netherlands)

    Heredia Soriano, F.J.

    2012-01-01

    The 3D reconstruction of indoor and outdoor environments has received interest only recently, as companies began to recognize that using reconstructed models is a way to generate revenue through location-based services and advertisements. A great amount of research has been done in the field of

  1. Reconstruction of in-plane strain maps using hybrid dense sensor network composed of sensing skin

    International Nuclear Information System (INIS)

    Downey, Austin; Laflamme, Simon; Ubertini, Filippo

    2016-01-01

    The authors have recently developed a soft-elastomeric capacitive (SEC)-based thin film sensor for monitoring strain on mesosurfaces. Arranged in a network configuration, the sensing system is analogous to a biological skin, where local strain can be monitored over a global area. Under plane stress conditions, the sensor output contains the additive measurement of the two principal strain components over the monitored surface. In applications where the evaluation of strain maps is useful, for instance in structural health monitoring, this signal must be decomposed into linear strain components along orthogonal directions. Previous work has led to an algorithm that enabled such decomposition by leveraging a dense sensor network configuration with the addition of assumed boundary conditions. Here, we significantly improve the algorithm's accuracy by leveraging mature off-the-shelf solutions to create a hybrid dense sensor network (HDSN) with better-founded boundary conditions. The system's boundary conditions are enforced using unidirectional resistive strain gauges (RSGs) and assumed virtual sensors. Results from an extensive experimental investigation demonstrate the good performance of the proposed algorithm and its robustness with respect to the sensors' layout. Overall, the proposed algorithm is seen to effectively leverage the advantages of a hybrid dense network for application of the thin film sensor to reconstruct surface strain fields over large surfaces. (paper)
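
    The decomposition problem can be illustrated with a small least-squares example: each SEC reading constrains the sum of the two in-plane strain components at its location, and a handful of unidirectional gauges pin down one component so the stacked system becomes solvable. In the sketch below both strain fields are represented by quadratic polynomials; the layout, basis and gauge placement are all invented for illustration and are not the paper's algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def basis(p):  # quadratic polynomial basis in (x, y)
        x, y = p[:, 0], p[:, 1]
        return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

    ax_true = np.array([1.0, 0.5, 0.0, 0.2, 0.0, 0.1])   # coefficients of eps_x
    ay_true = np.array([0.3, 0.0, 0.4, 0.0, 0.1, 0.0])   # coefficients of eps_y

    sec_xy = rng.uniform(0, 1, size=(40, 2))             # SEC patch locations
    B = basis(sec_xy)
    sec_reading = B @ ax_true + B @ ay_true              # SECs sense eps_x + eps_y

    rsg_xy = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0], [0, 0.5]])
    R = basis(rsg_xy)                                    # x-direction gauges: eps_x only
    rsg_reading = R @ ax_true

    # Stack both sensor types into one linear system and solve for both fields
    A = np.block([[B, B], [R, np.zeros_like(R)]])
    b = np.concatenate([sec_reading, rsg_reading])
    coef = np.linalg.lstsq(A, b, rcond=None)[0]
    print(np.abs(coef - np.concatenate([ax_true, ay_true])).max())  # ~0: exact recovery
    ```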

  2. Bayesian hierarchical models for regional climate reconstructions of the last glacial maximum

    Science.gov (United States)

    Weitzel, Nils; Hense, Andreas; Ohlwein, Christian

    2017-04-01

    Spatio-temporal reconstructions of past climate are important for the understanding of the long term behavior of the climate system and the sensitivity to forcing changes. Unfortunately, they are subject to large uncertainties, have to deal with a complex proxy-climate structure, and a physically reasonable interpolation between the sparse proxy observations is difficult. Bayesian Hierarchical Models (BHMs) are a class of statistical models that is well suited for spatio-temporal reconstructions of past climate because they permit the inclusion of multiple sources of information (e.g. records from different proxy types, uncertain age information, output from climate simulations) and quantify uncertainties in a statistically rigorous way. BHMs in paleoclimatology typically consist of three stages which are modeled individually and are combined using Bayesian inference techniques. The data stage models the proxy-climate relation (often named transfer function), the process stage models the spatio-temporal distribution of the climate variables of interest, and the prior stage consists of prior distributions of the model parameters. For our BHMs, we translate well-known proxy-climate transfer functions for pollen to a Bayesian framework. In addition, we can include Gaussian distributed local climate information from preprocessed proxy records. The process stage combines physically reasonable spatial structures from prior distributions with proxy records which leads to a multivariate posterior probability distribution for the reconstructed climate variables. The prior distributions that constrain the possible spatial structure of the climate variables are calculated from climate simulation output. We present results from pseudoproxy tests as well as new regional reconstructions of temperatures for the last glacial maximum (LGM, ~21,000 years BP). These reconstructions combine proxy data syntheses with information from climate simulations for the LGM that were

  3. A Hybrid Model Based on Wavelet Decomposition-Reconstruction in Track Irregularity State Forecasting

    Directory of Open Access Journals (Sweden)

    Chaolong Jia

    2015-01-01

    Full Text Available Wavelets adapt automatically to the requirements of time-frequency signal analysis: they can focus on any detail of a signal and decompose a function into a representation by a series of simple basis functions. This is of theoretical and practical significance. This paper therefore subdivides track irregularity time series based on the idea of wavelet decomposition-reconstruction, and seeks the best-fitting forecast models for the detail signals and the approximation signal obtained through wavelet decomposition of the track irregularity time series. On this basis, a piecewise gray-ARMA recursive model based on wavelet decomposition and reconstruction (PG-ARMARWDR) and a piecewise ANN-ARMA recursive model based on wavelet decomposition and reconstruction (PANN-ARMARWDR) are proposed. Comparison and analysis of the two models show that both can achieve higher accuracy.
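
    The decomposition-reconstruction idea can be prototyped with PyWavelets: split the series into approximation and detail components, fit a separate forecaster to each, and sum the component forecasts. The series, wavelet choice and ARMA orders below are placeholders, and plain ARMA stands in for the gray-model and ANN forecasters used in the paper.

    ```python
    import numpy as np
    import pywt
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(1)
    t = np.arange(256)
    series = 0.02 * t + np.sin(2 * np.pi * t / 32) + 0.3 * rng.normal(size=256)

    # Split the series into one approximation and three detail components
    coeffs = pywt.wavedec(series, "db4", level=3)
    components = []
    for i in range(len(coeffs)):
        kept = [c if i == j else np.zeros_like(c) for j, c in enumerate(coeffs)]
        components.append(pywt.waverec(kept, "db4")[: len(series)])

    # Fit a simple ARMA to each component and sum the component forecasts
    forecast = sum(ARIMA(comp, order=(2, 0, 1)).fit().forecast(steps=10)
                   for comp in components)
    print(np.round(forecast, 3))
    ```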

  4. Use of a model for 3D image reconstruction

    International Nuclear Information System (INIS)

    Delageniere, S.; Grangeat, P.

    1991-01-01

    We propose software for 3D image reconstruction in transmission tomography. This software is based on the use of a model and of the RADON algorithm developed at LETI. The introduction of a Markovian model helps us to enhance contrast and sharpen the natural transitions existing in the objects studied, whereas standard transform methods smooth them

  5. Automated reconstruction of 3D models from real environments

    Science.gov (United States)

    Sequeira, V.; Ng, K.; Wolfart, E.; Gonçalves, J. G. M.; Hogg, D.

    This paper describes an integrated approach to the construction of textured 3D scene models of building interiors from laser range data and visual images. This approach has been implemented in a collection of algorithms and sensors within a prototype device for 3D reconstruction, known as the EST (Environmental Sensor for Telepresence). The EST can take the form of a push trolley or of an autonomous mobile platform. The Autonomous EST (AEST) has been designed to provide an integrated solution for automating the creation of complete models. Embedded software performs several functions, including triangulation of the range data, registration of video texture, registration and integration of data acquired from different capture points. Potential applications include facilities management for the construction industry and creating reality models to be used in general areas of virtual reality, for example, virtual studios, virtualised reality for content-related applications (e.g., CD-ROMs), social telepresence, architecture and others. The paper presents the main components of the EST/AEST, and presents some example results obtained from the prototypes. The reconstructed model is encoded in VRML format so that it is possible to access and view the model via the World Wide Web.

  6. UROX 2.0: an interactive tool for fitting atomic models into electron-microscopy reconstructions

    International Nuclear Information System (INIS)

    Siebert, Xavier; Navaza, Jorge

    2009-01-01

    UROX is software designed for the interactive fitting of atomic models into electron-microscopy reconstructions. The main features of the software are presented, along with a few examples. Electron microscopy of a macromolecular structure can lead to three-dimensional reconstructions with resolutions that are typically in the 30–10 Å range and sometimes even beyond 10 Å. Fitting atomic models of the individual components of the macromolecular structure (e.g. those obtained by X-ray crystallography or nuclear magnetic resonance) into an electron-microscopy map allows the interpretation of the latter at near-atomic resolution, providing insight into the interactions between the components. Graphical software is presented that was designed for the interactive fitting and refinement of atomic models into electron-microscopy reconstructions. Several characteristics enable it to be applied over a wide range of cases and resolutions. Firstly, calculations are performed in reciprocal space, which results in fast algorithms. This allows the entire reconstruction (or at least a sizeable portion of it) to be used by taking into account the symmetry of the reconstruction both in the calculations and in the graphical display. Secondly, atomic models can be placed graphically in the map while the correlation between the model-based electron density and the electron-microscopy reconstruction is computed and displayed in real time. The positions and orientations of the models are refined by a least-squares minimization. Thirdly, normal-mode calculations can be used to simulate conformational changes between the atomic model of an individual component and its corresponding density within a macromolecular complex determined by electron microscopy. These features are illustrated using three practical cases with different symmetries and resolutions. The software, together with examples and user instructions, is available free of charge at http://mem.ibs.fr/UROX/

  7. Modeling Network Interdiction Tasks

    Science.gov (United States)

    2015-09-17

    A common approach to modeling network interdiction is to formulate the problem in terms of a two-stage strategic game between two players

  8. A TRACER METHOD FOR COMPUTING TYPE IA SUPERNOVA YIELDS: BURNING MODEL CALIBRATION, RECONSTRUCTION OF THICKENED FLAMES, AND VERIFICATION FOR PLANAR DETONATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Townsley, Dean M.; Miles, Broxton J. [Department of Physics and Astronomy, University of Alabama, Tuscaloosa, AL (United States); Timmes, F. X. [School of Earth and Space Exploration, Arizona State University, Tempe, AZ (United States); Calder, Alan C. [Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY (United States); Brown, Edward F., E-mail: Dean.M.Townsley@ua.edu [The Joint Institute for Nuclear Astrophysics, Michigan State University, East Lansing, MI (United States)

    2016-07-01

    We refine our previously introduced parameterized model for explosive carbon–oxygen fusion during thermonuclear Type Ia supernovae (SNe Ia) by adding corrections to post-processing of recorded Lagrangian fluid-element histories to obtain more accurate isotopic yields. Deflagration and detonation products are verified for propagation in a medium of uniform density. A new method is introduced for reconstructing the temperature–density history within the artificially thick model deflagration front. We obtain better than 5% consistency between the electron capture computed by the burning model and yields from post-processing. For detonations, we compare to a benchmark calculation of the structure of driven steady-state planar detonations performed with a large nuclear reaction network and error-controlled integration. We verify that, for steady-state planar detonations down to a density of 5 × 10⁶ g cm⁻³, our post-processing matches the major abundances in the benchmark solution typically to better than 10% for times greater than 0.01 s after the passage of the shock front. As a test case to demonstrate the method, presented here with post-processing for the first time, we perform a two-dimensional simulation of a SN Ia in the scenario of a Chandrasekhar-mass deflagration–detonation transition (DDT). We find that reconstruction of deflagration tracks leads to slightly more complete silicon burning than without reconstruction. The resulting abundance structure of the ejecta is consistent with inferences from spectroscopic studies of observed SNe Ia. We confirm the absence of a central region of stable Fe-group material for the multi-dimensional DDT scenario. Detailed isotopic yields are tabulated and change only modestly when using deflagration reconstruction.

  9. Mobility Models for Next Generation Wireless Networks Ad Hoc, Vehicular and Mesh Networks

    CERN Document Server

    Santi, Paolo

    2012-01-01

    Mobility Models for Next Generation Wireless Networks: Ad Hoc, Vehicular and Mesh Networks provides the reader with an overview of mobility modelling, encompassing both theoretical and practical aspects related to the challenging mobility modelling task. It also: provides up-to-date coverage of mobility models for next generation wireless networks; offers an in-depth discussion of the most representative mobility models for major next generation wireless network application scenarios, including WLAN/mesh networks, vehicular networks, wireless sensor networks, and

  10. Reconstructing pre-acidification pH for an acidified Scottish loch: A comparison of palaeolimnological and modelling approaches

    International Nuclear Information System (INIS)

    Battarbee, R.W.; Monteith, D.T.; Juggins, S.; Evans, C.D.; Jenkins, A.; Simpson, G.L.

    2005-01-01

    We reconstruct the pre-acidification pH of the Round Loch of Glenhead for 1800 AD using three diatom-pH transfer functions and a diatom-cladocera modern analogue technique (MAT), and compare these palaeo-data with hindcast data for the loch using the dynamic catchment acidification model MAGIC. We assess the accuracy of the transfer functions by comparing pH inferences from contemporary sediment and sediment trap diatom samples from the lake with measured pH from the UK Acid Waters Monitoring Network. The results from the transfer functions estimate the pH in 1800 to have been between 5.5. and 5.7, the MAT approach estimates pH at 5.8 and the MAGIC hindcast (for 1850) is pH 6.1. Whilst we have no independent method of assessing which of these values is most accurate, the disagreement between the two approaches indicates that further work is needed to resolve the discrepancies. - Methods of reconstructing pre-acidification pH for an acidified Scottish loch compared

  11. Complex Networks in Psychological Models

    Science.gov (United States)

    Wedemann, R. S.; Carvalho, L. S. A. V. D.; Donangelo, R.

    We develop schematic, self-organizing, neural-network models to describe mechanisms associated with mental processes, by a neurocomputational substrate. These models are examples of real world complex networks with interesting general topological structures. Considering dopaminergic signal-to-noise neuronal modulation in the central nervous system, we propose neural network models to explain development of cortical map structure and dynamics of memory access, and unify different mental processes into a single neurocomputational substrate. Based on our neural network models, neurotic behavior may be understood as an associative memory process in the brain, and the linguistic, symbolic associative process involved in psychoanalytic working-through can be mapped onto a corresponding process of reconfiguration of the neural network. The models are illustrated through computer simulations, where we varied dopaminergic modulation and observed the self-organizing emergent patterns at the resulting semantic map, interpreting them as different manifestations of mental functioning, from psychotic through to normal and neurotic behavior, and creativity.

  12. Reconstruction of missing daily streamflow data using dynamic regression models

    Science.gov (United States)

    Tencaliec, Patricia; Favre, Anne-Catherine; Prieur, Clémentine; Mathevet, Thibault

    2015-12-01

    River discharge is one of the most important quantities in hydrology. It provides fundamental records for water resources management and climate change monitoring. Even very short data gaps in this information can lead to markedly different analysis outputs. Therefore, reconstructing the missing data of incomplete data sets is an important step for the performance of environmental models, engineering, and research applications, and it presents a great challenge. The objective of this paper is to introduce an effective technique for reconstructing missing daily discharge data when one has access only to daily streamflow data. The proposed procedure uses a combination of regression and autoregressive integrated moving average (ARIMA) models called a dynamic regression model. This model uses the linear relationship with neighboring, correlated stations and then adjusts the residual term by fitting an ARIMA structure. Application of the model to eight daily streamflow series for the Durance river watershed showed that the model yields reliable estimates for the missing data in the time series. Simulation studies were also conducted to evaluate the performance of the procedure.
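
    Statsmodels exposes this dynamic-regression structure directly: supplying neighbor stations as exogenous regressors to an ARIMA model fits the linear relationship and the ARMA error term jointly, and its state-space machinery interpolates across gaps coded as NaN. A sketch on synthetic flows (stations, model orders and the gap location are invented):

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    # Hypothetical daily flows: y has a gap, x1/x2 are correlated neighbor stations
    rng = np.random.default_rng(2)
    n = 730
    x1, x2 = rng.gamma(2.0, 5.0, n), rng.gamma(2.0, 5.0, n)
    y = 0.6 * x1 + 0.3 * x2 + rng.normal(0, 1, n).cumsum() * 0.05

    # Dynamic regression = regression on neighbors with an ARIMA error term;
    # NaN observations are handled by the underlying Kalman filter.
    y_obs = y.copy()
    y_obs[300:330] = np.nan                      # a 30-day gap to reconstruct
    fit = ARIMA(y_obs, exog=np.column_stack([x1, x2]), order=(1, 0, 1)).fit()
    reconstructed = fit.predict(start=300, end=329)
    print(np.round(reconstructed[:5], 2))
    ```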

  13. Performance measurement of PSF modeling reconstruction (True X) on Siemens Biograph TruePoint TrueV PET/CT.

    Science.gov (United States)

    Lee, Young Sub; Kim, Jin Su; Kim, Kyeong Min; Kang, Joo Hyun; Lim, Sang Moo; Kim, Hee-Joung

    2014-05-01

    The Siemens Biograph TruePoint TrueV (B-TPTV) positron emission tomography (PET) scanner performs 3D PET reconstruction using a system matrix with point spread function (PSF) modeling (called the True X reconstruction). PET resolution was dramatically improved with the True X method. In this study, we assessed the spatial resolution and image quality on a B-TPTV PET scanner. In addition, we assessed the feasibility of animal imaging with the B-TPTV PET and compared it with a microPET R4 scanner. Spatial resolution was measured at the center and at 8 cm offset from the center in the transverse plane with warm background activity. True X, ordered subset expectation maximization (OSEM) without PSF modeling, and filtered back-projection (FBP) reconstruction methods were used. Percent contrast (% contrast) and percent background variability (% BV) were assessed according to NEMA NU2-2007. The recovery coefficient (RC), non-uniformity, spill-over ratio (SOR), and PET imaging of the Micro Deluxe Phantom were assessed to compare the image quality of B-TPTV PET with that of the microPET R4. When True X reconstruction was used, spatial resolution was markedly improved. The RC with True X reconstruction was higher than that with the FBP method and the OSEM without PSF modeling method on the microPET R4. The non-uniformity with True X reconstruction was higher than that with FBP and OSEM without PSF modeling on the microPET R4. The SOR with True X reconstruction was better than that with FBP or OSEM without PSF modeling on the microPET R4. This study assessed the performance of the True X reconstruction. Spatial resolution with True X reconstruction was improved by 45 % and its % contrast was significantly improved compared to those with the conventional OSEM without PSF modeling reconstruction algorithm. However, the noise level was higher than that with the other reconstruction algorithms. Therefore, True X reconstruction should be used with caution when quantifying PET data.

  14. Artificial neural network modelling

    CERN Document Server

    Samarasinghe, Sandhya

    2016-01-01

    This book covers theoretical aspects as well as recent innovative applications of Artificial Neural Networks (ANNs) in natural, environmental, biological, social, industrial and automated systems. It presents recent results of ANNs in modelling small, large and complex systems under three categories, namely, 1) Networks, Structure Optimisation, Robustness and Stochasticity, 2) Advances in Modelling Biological and Environmental Systems and 3) Advances in Modelling Social and Economic Systems. The book aims at serving undergraduates, postgraduates and researchers in ANN computational modelling.

  15. APPLICATION OF 3D MODELING IN 3D PRINTING FOR THE LOWER JAW RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    Yu. Yu. Dikov

    2015-01-01

    Full Text Available Aim of study: to improve the functional and aesthetic results of microsurgical reconstruction of the lower jaw through the use of 3D modeling and 3D printing. The application of this methodology is demonstrated on the treatment of 4 patients with locally advanced tumors of the oral cavity, who underwent excision of the tumor with simultaneous reconstruction of the lower jaw using a revascularized fibular graft. One patient had previously undergone segmental resection of the lower jaw with replacement of the defect by an avascular iliac graft and a reconstruction plate; he subsequently developed a relapse of the disease and lysis of the graft. Modeling of the graft according to the shape of the lower jaw was performed by osteotomies of the bone part of the graft, using three-dimensional virtual models created from computed tomography data. These 3D models were then printed in plastic at 1:1 scale on a 3D printer using the fused deposition modeling (FDM) technology and were used during surgery in the course of modeling the graft. The plastic model was sterilized in a formalin chamber. This methodology allowed a more precise reconstruction of the resected fragment of the lower jaw, gave better functional and aesthetic results, and prepared patients for further dental rehabilitation. Advantages of this methodology are the possibility of performing the resection and reconstruction stages simultaneously and a shortening of the operating time.

  16. Gossip spread in social network Models

    Science.gov (United States)

    Johansson, Tobias

    2017-04-01

    Gossip almost inevitably arises in real social networks. In this article we investigate the relationship between the number of friends of a person and limits on how far gossip about that person can spread in the network. How far gossip travels in a network depends on two sets of factors: (a) factors determining gossip transmission from one person to the next and (b) factors determining network topology. For a simple model where gossip is spread among people who know the victim it is known that a standard scale-free network model produces a non-monotonic relationship between number of friends and expected relative spread of gossip, a pattern that is also observed in real networks (Lind et al., 2007). Here, we study gossip spread in two social network models (Toivonen et al., 2006; Vázquez, 2003) by exploring the parameter space of both models and fitting them to a real Facebook data set. Both models can produce the non-monotonic relationship of real networks more accurately than a standard scale-free model while also exhibiting more realistic variability in gossip spread. Of the two models, the one given in Vázquez (2003) best captures both the expected values and variability of gossip spread.

  17. Diffusion archeology for diffusion progression history reconstruction

    OpenAIRE

    Sefer, Emre; Kingsford, Carl

    2015-01-01

    Diffusion through graphs can be used to model many real-world processes, such as the spread of diseases, social network memes, computer viruses, or water contaminants. Often, a real-world diffusion cannot be directly observed while it is occurring — perhaps it is not noticed until some time has passed, continuous monitoring is too costly, or privacy concerns limit data access. This leads to the need to reconstruct how the present state of the diffusion came to be from partial d...

  18. A Pore Scale Flow Simulation of Reconstructed Model Based on the Micro Seepage Experiment

    Directory of Open Access Journals (Sweden)

    Jianjun Liu

    2017-01-01

    Research on microscopic seepage mechanisms and fine description of reservoir pore structure plays an important role in the effective development of low and ultralow permeability reservoirs. A typical micro pore structure model was established in two ways: the conventional model reconstruction method and the built-in graphics function method of Comsol®. A pore scale flow simulation was conducted on the models reconstructed in these two ways, using the creeping flow interface and the Brinkman equation interface, respectively. The results showed that the simulations of the two models agreed well in the distributions of velocity, pressure, Reynolds number, and so on. This verified the feasibility of direct reconstruction from a graphic file to a geometric model, which provides a new way to diversify numerical studies of micro seepage mechanisms.

  19. Internet2-based 3D PET image reconstruction using a PC cluster

    International Nuclear Information System (INIS)

    Shattuck, D.W.; Rapela, J.; Asma, E.; Leahy, R.M.; Chatzioannou, A.; Qi, J.

    2002-01-01

    We describe an approach to fast iterative reconstruction from fully three-dimensional (3D) PET data using a network of Pentium III PCs configured as a Beowulf cluster. To facilitate the use of this system, we have developed a browser-based interface using Java. The system compresses PET data on the user's machine, sends these data over a network, and instructs the PC cluster to reconstruct the image. The cluster implements a parallelized version of our preconditioned conjugate gradient method for fully 3D MAP image reconstruction. We report on the speed-up factors achieved with the Beowulf approach and the impact of communication latencies in the local cluster network and in the network connection between the user's machine and the PC cluster. (author)
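
    As a rough illustration of the mathematical core (not the authors' parallel implementation, and with a toy quadratic prior standing in for their MAP objective), the sketch below solves the regularized normal equations with conjugate gradients:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n_pix, n_bins = 400, 800
# sparse nonnegative stand-in for the PET system matrix
A = rng.random((n_bins, n_pix)) * (rng.random((n_bins, n_pix)) < 0.05)
x_true = rng.random(n_pix)
y = A @ x_true + 0.01 * rng.standard_normal(n_bins)
beta = 0.1

def normal_eq(x):
    # (A^T A + beta I) x : identity prior as a placeholder for the
    # quadratic smoothing prior of a MAP reconstruction
    return A.T @ (A @ x) + beta * x

op = LinearOperator((n_pix, n_pix), matvec=normal_eq)
x_hat, info = cg(op, A.T @ y)
print("converged" if info == 0 else "not converged",
      "relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```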

  20. Role of rheology in reconstructing slab morphology in global mantle models

    Science.gov (United States)

    Bello, Léa; Coltice, Nicolas; Tackley, Paul; Müller, Dietmar

    2015-04-01

    Reconstructing the 3D structure of the Earth's mantle has been a challenge for geodynamicists for about 40 years. Although numerical models and computational capabilities have progressed tremendously, the parameterizations used for modeling convection forced by plate motions are far from being Earth-like. Among the set of parameters, rheology is fundamental because it defines in a non-linear way the dynamics of slabs and plumes, and the organization of the lithosphere. Previous studies have employed diverse viscosity laws, most of them being temperature- and depth-dependent with relatively small viscosity contrasts. In this study, we evaluate the role of the temperature dependence of viscosity (variations up to 6 orders of magnitude) on reconstructing slab evolution in 3D spherical models of convection driven by plate history models. We also investigate the importance of pseudo-plasticity in such models. We show that strong temperature dependence of viscosity combined with pseudo-plasticity produces laterally and vertically continuous slabs, and flat subduction where trench retreat is fast (North, Central and South America). Moreover, pseudo-plasticity allows a consistent coupling between imposed plate motions and global convection, which is not possible with temperature-dependent viscosity only. However, even our most sophisticated model is not able to unambiguously reproduce stagnant slabs, probably because of the simplicity of the material properties used here. The differences between models employing different viscosity laws are very large, larger than the differences between two models with the same rheology but using two different plate reconstructions or initial conditions.
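
    A hedged worked example of the kind of rheology described: a temperature-dependent viscosity spanning about six orders of magnitude, capped by a pseudo-plastic yield stress. The functional form and all parameter values are generic illustrations, not those of the study:

```python
import numpy as np

def effective_viscosity(T, strain_rate, eta0=1e21, dlog=6.0, sigma_y=1e8):
    """Temperature-dependent viscosity (6 orders of magnitude over T in [0,1])
    combined with a pseudo-plastic yield-stress cap."""
    eta_T = eta0 * 10 ** (dlog * (0.5 - T))   # nondimensional temperature
    eta_y = sigma_y / (2.0 * strain_rate)     # pseudo-plastic cap
    return np.minimum(eta_T, eta_y)

for T in (0.0, 0.5, 1.0):                     # cold slab, ambient mantle, hot plume
    print(T, effective_viscosity(T, strain_rate=1e-15))
```

    With these illustrative numbers the yield cap, rather than the Arrhenius-type law, limits the viscosity of the coldest material, which is the mechanism that lets imposed plate motions couple consistently to the flow.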

  1. A Graph-Based Approach for 3D Building Model Reconstruction from Airborne LiDAR Point Clouds

    Directory of Open Access Journals (Sweden)

    Bin Wu

    2017-01-01

    3D building model reconstruction is of great importance for environmental and urban applications. Airborne light detection and ranging (LiDAR) is a very useful data source for acquiring detailed geometric and topological information of building objects. In this study, we employed a graph-based method based on hierarchical structure analysis of building contours derived from LiDAR data to reconstruct urban building models. The proposed approach first uses a graph theory-based localized contour tree method to represent the topological structure of buildings, then separates the buildings into different parts by analyzing their topological relationships, and finally reconstructs the building model by integrating all the individual models established through the bipartite graph matching process. Our approach provides a more complete topological and geometrical description of building contours than existing approaches. We evaluated the proposed method by applying it to the Lujiazui region in Shanghai, China, a complex and large urban scene with various types of buildings. The results revealed that complex buildings could be reconstructed successfully with a mean modeling error of 0.32 m. Our proposed method offers a promising solution for 3D building model reconstruction from airborne LiDAR point clouds.

  2. Brand Marketing Model on Social Networks

    OpenAIRE

    Jolita Jezukevičiūtė; Vida Davidavičienė

    2014-01-01

    The paper analyzes the brand and its marketing solutions on social networks. This analysis led to the creation of an improved brand marketing model on social networks, which will contribute to rapid and cheap organization brand recognition, increase competitive advantage and enhance consumer loyalty. Therefore, the brand and a variety of social networks are becoming a hot research area for brand marketing model on social networks. The world's most successful brand marketing models exploratory analys...

  3. Analysis, reconstruction and manipulation using arterial snakes

    KAUST Repository

    Li, Guo

    2010-01-01

    Man-made objects often consist of detailed and interleaving structures, which are created using cane, coils, metal wires, rods, etc. The delicate structures, although manufactured using simple procedures, are challenging to scan and reconstruct. We observe that such structures are inherently 1D, and hence are naturally represented using an arrangement of generating curves. We refer to the resultant surfaces as arterial surfaces. In this paper we present an approach for analyzing, reconstructing, and manipulating such arterial surfaces. The core of the algorithm is a novel deformable model, called arterial snake, that simultaneously captures the topology and geometry of the arterial objects. The recovered snakes produce a natural decomposition of the raw scans, with the decomposed parts often capturing meaningful object sections. We demonstrate the robustness of our algorithm on a variety of arterial objects corrupted with noise, outliers, and with large parts missing. We present a range of applications including reconstruction, topology repairing, and manipulation of arterial surfaces by directly controlling the underlying curve network and the associated sectional profiles, which are otherwise challenging to perform. © 2010 ACM.

  4. A model of coauthorship networks

    Science.gov (United States)

    Zhou, Guochang; Li, Jianping; Xie, Zonglin

    2017-10-01

    A natural way of representing the coauthorship of authors is to use a generalization of graphs known as hypergraphs. A random geometric hypergraph model is proposed here to model coauthorship networks, which is generated by placing nodes on a region of Euclidean space randomly and uniformly, and connecting some nodes if they satisfy particular geometric conditions. Two kinds of geometric conditions are designed to model the collaboration patterns of academic authorities and basic researchers, respectively. The conditions give geometric expressions of two causes of coauthorship: the authority and similarity of authors. By simulation and calculus, we show that the forepart of the degree distribution of the network generated by the model is a Poisson mixture and the tail is a power law, similar to those of some coauthorship networks. Further, we show more similarities between the generated network and real coauthorship networks: the distribution of cardinalities of hyperedges, high clustering coefficient, assortativity, and the small-world property.
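
    A minimal sketch of a random geometric hypergraph in this spirit: nodes placed uniformly in the unit square, with hyperedges ("papers") formed by two geometric rules, a larger "authority" ball and a smaller "similarity" ball. The radii and edge counts are illustrative guesses, not the paper's conditions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
pos = rng.random((n, 2))                      # nodes uniform in the unit square

def ball(i, r):
    """Indices of all nodes within distance r of node i (including i)."""
    d = np.linalg.norm(pos - pos[i], axis=1)
    return np.flatnonzero(d < r)

# "authority" hyperedges: a hub together with everyone near it
authority_edges = [ball(i, 0.03) for i in rng.choice(n, 200, replace=False)]
# "similarity" hyperedges: small teams of mutually close nodes
similarity_edges = [ball(i, 0.015) for i in rng.choice(n, 800, replace=False)]

edges = [e for e in authority_edges + similarity_edges if len(e) >= 2]
degree = np.zeros(n, int)
for e in edges:
    degree[e] += 1                            # hypergraph degree = papers per author
print("hyperedge cardinalities:", np.bincount([len(e) for e in edges])[:10])
print("max node degree:", degree.max())
```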

  5. REAL-TIME VIDEO SCALING BASED ON CONVOLUTION NEURAL NETWORK ARCHITECTURE

    Directory of Open Access Journals (Sweden)

    S Safinaz

    2017-08-01

    In recent years, video super-resolution techniques have become a mandatory requirement for obtaining high-resolution videos. Many super-resolution techniques have been researched, but video super-resolution or scaling remains a vital challenge. In this paper, we present real-time video scaling based on a convolutional neural network architecture to eliminate blurriness in images and video frames and to provide better reconstruction quality while scaling large datasets from low-resolution to high-resolution frames. We compare our outcomes with multiple existing algorithms. Our extensive results for the proposed technique RemCNN (Reconstruction error minimization Convolutional Neural Network) show that our model outperforms existing technologies such as bicubic, bilinear and MCResNet and provides better reconstructed moving images and video frames. The experimental results show that our average PSNR is 47.80474 for upscale-2, 41.70209 for upscale-3 and 36.24503 for upscale-4 on the Myanmar dataset, which is very high in contrast to other existing techniques. These results prove the high efficiency and better performance of our proposed real-time video scaling based on a convolutional neural network architecture.
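
    Since the abstract does not give RemCNN's architecture, the sketch below is a generic SRCNN-style residual network in PyTorch, shown only to make the reconstruction-error-minimization setup and PSNR evaluation concrete:

```python
import torch
import torch.nn as nn

class TinySR(nn.Module):
    """A small super-resolution CNN (generic, not the paper's RemCNN)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, 9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 5, padding=2),
        )

    def forward(self, x):            # x: bicubically upscaled low-res frame
        return x + self.body(x)      # predict the missing residual detail

def psnr(a, b):
    mse = torch.mean((a - b) ** 2)
    return 10 * torch.log10(1.0 / mse)   # assumes intensities in [0, 1]

model = TinySR()
lr = torch.rand(1, 1, 64, 64)            # stand-in upscaled frame
hr = torch.clamp(lr + 0.01 * torch.randn_like(lr), 0, 1)
loss = nn.MSELoss()(model(lr), hr)       # reconstruction-error minimization
print(float(loss), float(psnr(model(lr).clamp(0, 1), hr)))
```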

  6. A Hierarchical Building Segmentation in Digital Surface Models for 3D Reconstruction

    Directory of Open Access Journals (Sweden)

    Yiming Yan

    2017-01-01

    In this study, a hierarchical method for segmenting buildings in a digital surface model (DSM), used in a novel framework for 3D reconstruction, is proposed. Most 3D reconstructions of buildings are model-based. However, these methods over-rely on the completeness of offline-constructed building models, and completeness is not easily guaranteed since buildings in modern cities can be of a variety of types. Therefore, a model-free framework using a high-precision DSM and texture images of buildings was introduced. There are two key problems with this framework. The first is how to accurately extract the buildings from the DSM. Most segmentation methods are limited either by terrain factors or by the difficult choice of parameter settings. A level-set method is employed to roughly find the building regions in the DSM, and then a recently proposed 'occlusions of random textures model' is used to enhance the local segmentation of the buildings. The second problem is how to generate the facades of buildings. Synergizing with the corresponding texture images, we propose a roof-contour-guided interpolation of building facades. The 3D reconstruction results achieved with airborne-like images and satellite images are compared. Experiments show that the segmentation method performs well, that 3D reconstruction is easily performed by our framework, and that better visualization results can be obtained with airborne-like images, which can further be replaced by UAV images.

  7. EGFR Signal-Network Reconstruction Demonstrates Metabolic Crosstalk in EMT.

    Directory of Open Access Journals (Sweden)

    Kumari Sonal Choudhary

    2016-06-01

    Epithelial to mesenchymal transition (EMT) is an important event during development and cancer metastasis. There is limited understanding of the metabolic alterations that give rise to and take place during EMT. Dysregulation of signalling pathways that impact metabolism, including epidermal growth factor receptor (EGFR), are however a hallmark of EMT and metastasis. In this study, we report the investigation into EGFR signalling and metabolic crosstalk of EMT through constraint-based modelling and analysis of the breast epithelial EMT cell model D492 and its mesenchymal counterpart D492M. We built an EGFR signalling network for EMT based on stoichiometric coefficients and constrained the network with gene expression data to build epithelial (EGFR_E) and mesenchymal (EGFR_M) networks. Metabolic alterations arising from differential expression of EGFR genes were derived from a literature review of AKT-regulated metabolic genes. Signaling flux differences between the EGFR_E and EGFR_M models subsequently allowed metabolism in D492 and D492M cells to be assessed. Higher flux within the AKT pathway in the D492 cells compared to D492M suggested higher glycolytic activity in D492, which we confirmed experimentally through measurements of glucose uptake and lactate secretion rates. The signaling genes from the AKT, RAS/MAPK and CaM pathways were predicted to revert D492M to the D492 phenotype. Follow-up analysis of EGFR signaling-metabolic crosstalk in three additional breast epithelial cell lines highlighted variability in in vitro cell models of EMT. This study shows that the metabolic phenotype may be predicted by in silico analyses of gene expression data of EGFR signaling genes, but this phenomenon is cell-specific and does not follow a simple trend.

  8. EGFR Signal-Network Reconstruction Demonstrates Metabolic Crosstalk in EMT.

    Science.gov (United States)

    Choudhary, Kumari Sonal; Rohatgi, Neha; Halldorsson, Skarphedinn; Briem, Eirikur; Gudjonsson, Thorarinn; Gudmundsson, Steinn; Rolfsson, Ottar

    2016-06-01

    Epithelial to mesenchymal transition (EMT) is an important event during development and cancer metastasis. There is limited understanding of the metabolic alterations that give rise to and take place during EMT. Dysregulation of signalling pathways that impact metabolism, including epidermal growth factor receptor (EGFR), are however a hallmark of EMT and metastasis. In this study, we report the investigation into EGFR signalling and metabolic crosstalk of EMT through constraint-based modelling and analysis of the breast epithelial EMT cell model D492 and its mesenchymal counterpart D492M. We built an EGFR signalling network for EMT based on stoichiometric coefficients and constrained the network with gene expression data to build epithelial (EGFR_E) and mesenchymal (EGFR_M) networks. Metabolic alterations arising from differential expression of EGFR genes were derived from a literature review of AKT-regulated metabolic genes. Signaling flux differences between the EGFR_E and EGFR_M models subsequently allowed metabolism in D492 and D492M cells to be assessed. Higher flux within the AKT pathway in the D492 cells compared to D492M suggested higher glycolytic activity in D492, which we confirmed experimentally through measurements of glucose uptake and lactate secretion rates. The signaling genes from the AKT, RAS/MAPK and CaM pathways were predicted to revert D492M to the D492 phenotype. Follow-up analysis of EGFR signaling-metabolic crosstalk in three additional breast epithelial cell lines highlighted variability in in vitro cell models of EMT. This study shows that the metabolic phenotype may be predicted by in silico analyses of gene expression data of EGFR signaling genes, but this phenomenon is cell-specific and does not follow a simple trend.
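
    A toy sketch of the constraint-based idea: a stoichiometric network whose flux bounds are scaled by gene expression, solved as a linear program. The 3-reaction network, expression values and bounds are invented for illustration and are not the EGFR_E/EGFR_M models:

```python
import numpy as np
from scipy.optimize import linprog

# metabolites x reactions: R1 produces M1, R2 converts M1 -> M2, R3 drains M2
S = np.array([[ 1, -1,  0],
              [ 0,  1, -1]])
expression = np.array([1.0, 0.3, 1.0])      # relative expression per reaction
bounds = [(0, 10 * e) for e in expression]  # expression caps each flux

# maximize flux through R3 at steady state S v = 0 (linprog minimizes,
# hence the sign flip on the objective)
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal fluxes:", res.x)             # R2's low expression limits all flux
```

    The point of the exercise is the one the paper makes at network scale: a differentially expressed step (here R2) bottlenecks the flux that the rest of the pathway can carry.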

  9. Graph-based unsupervised segmentation algorithm for cultured neuronal networks' structure characterization and modeling.

    Science.gov (United States)

    de Santos-Sierra, Daniel; Sendiña-Nadal, Irene; Leyva, Inmaculada; Almendral, Juan A; Ayali, Amir; Anava, Sarit; Sánchez-Ávila, Carmen; Boccaletti, Stefano

    2015-06-01

    Large-scale phase-contrast images taken at high resolution throughout the life of a cultured neuronal network are analyzed by a graph-based unsupervised segmentation algorithm with a very low computational cost, scaling linearly with the image size. The processing automatically retrieves the whole network structure, an object whose mathematical representation is a matrix in which nodes are the identified neurons or clusters of neurons, and links are the reconstructed connections between them. The algorithm is also able to extract any other relevant morphological information characterizing neurons and neurites. More importantly, and at variance with other segmentation methods that require fluorescence imaging from immunocytochemistry techniques, our non-invasive measures enable us to perform a longitudinal analysis during the maturation of a single culture. Such an analysis provides a way of identifying the main physical processes underlying the self-organization of the neurons' ensemble into a complex network, and drives the formulation of a phenomenological model that is nevertheless able to describe qualitatively the overall scenario observed during the culture growth. © 2014 International Society for Advancement of Cytometry.

  10. CT angiography after carotid artery stenting: assessment of the utility of adaptive statistical iterative reconstruction and model-based iterative reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Kuya, Keita; Shinohara, Yuki; Fujii, Shinya; Ogawa, Toshihide [Tottori University, Division of Radiology, Department of Pathophysiological Therapeutic Science, Faculty of Medicine, Yonago (Japan); Sakamoto, Makoto; Watanabe, Takashi [Tottori University, Division of Neurosurgery, Department of Brain and Neurosciences, Faculty of Medicine, Yonago (Japan); Iwata, Naoki; Kishimoto, Junichi [Tottori University, Division of Clinical Radiology Faculty of Medicine, Yonago (Japan); Kaminou, Toshio [Osaka Minami Medical Center, Department of Radiology, Osaka (Japan)

    2014-11-15

    Follow-up CT angiography (CTA) is routinely performed for post-procedure management after carotid artery stenting (CAS). However, the stent lumen tends to be underestimated because of stent artifacts on CTA reconstructed with the filtered back projection (FBP) technique. We assessed the utility of new iterative reconstruction techniques, such as adaptive statistical iterative reconstruction (ASIR) and model-based iterative reconstruction (MBIR), for CTA after CAS in comparison with FBP. In a phantom study, we evaluated the differences among the three reconstruction techniques with regard to the relationship between the stent luminal diameter and the degree of underestimation of stent luminal diameter. In a clinical study, 34 patients who underwent follow-up CTA after CAS were included. We compared the stent luminal diameters among FBP, ASIR, and MBIR, and performed visual assessment of low attenuation area (LAA) in the stent lumen using a three-point scale. In the phantom study, stent luminal diameter was increasingly underestimated as luminal diameter became smaller in all CTA images. Stent luminal diameter was larger with MBIR than with the other reconstruction techniques. Similarly, in the clinical study, stent luminal diameter was larger with MBIR than with the other reconstruction techniques. LAA detectability scores of MBIR were greater than or equal to those of FBP and ASIR in all cases. MBIR improved the accuracy of assessment of stent luminal diameter and LAA detectability in the stent lumen when compared with FBP and ASIR. We conclude that MBIR is a useful reconstruction technique for CTA after CAS. (orig.)

  11. CT angiography after carotid artery stenting: assessment of the utility of adaptive statistical iterative reconstruction and model-based iterative reconstruction

    International Nuclear Information System (INIS)

    Kuya, Keita; Shinohara, Yuki; Fujii, Shinya; Ogawa, Toshihide; Sakamoto, Makoto; Watanabe, Takashi; Iwata, Naoki; Kishimoto, Junichi; Kaminou, Toshio

    2014-01-01

    Follow-up CT angiography (CTA) is routinely performed for post-procedure management after carotid artery stenting (CAS). However, the stent lumen tends to be underestimated because of stent artifacts on CTA reconstructed with the filtered back projection (FBP) technique. We assessed the utility of new iterative reconstruction techniques, such as adaptive statistical iterative reconstruction (ASIR) and model-based iterative reconstruction (MBIR), for CTA after CAS in comparison with FBP. In a phantom study, we evaluated the differences among the three reconstruction techniques with regard to the relationship between the stent luminal diameter and the degree of underestimation of stent luminal diameter. In a clinical study, 34 patients who underwent follow-up CTA after CAS were included. We compared the stent luminal diameters among FBP, ASIR, and MBIR, and performed visual assessment of low attenuation area (LAA) in the stent lumen using a three-point scale. In the phantom study, stent luminal diameter was increasingly underestimated as luminal diameter became smaller in all CTA images. Stent luminal diameter was larger with MBIR than with the other reconstruction techniques. Similarly, in the clinical study, stent luminal diameter was larger with MBIR than with the other reconstruction techniques. LAA detectability scores of MBIR were greater than or equal to those of FBP and ASIR in all cases. MBIR improved the accuracy of assessment of stent luminal diameter and LAA detectability in the stent lumen when compared with FBP and ASIR. We conclude that MBIR is a useful reconstruction technique for CTA after CAS. (orig.)

  12. Stochastic methods of data modeling: application to the reconstruction of non-regular data

    International Nuclear Information System (INIS)

    Buslig, Leticia

    2014-01-01

    This research thesis addresses two issues or applications related to IRSN studies. The first one deals with the mapping of measurement data (the IRSN must regularly control the radioactivity level in France and, for this purpose, uses a network of sensors distributed across the French territory). The objective is then to predict, by means of a reconstruction model which uses observations, maps which will be used to inform the population. The second application deals with taking uncertainties into account in complex computation codes (the IRSN must perform safety studies to assess the risks of loss of integrity of a nuclear reactor in case of hypothetical accidents, and for this purpose uses codes which simulate physical phenomena occurring within an installation). Some input parameters are not precisely known, and the author therefore tries to assess the impact of some uncertainties on simulated values. She notably aims at seeing whether variations of input parameters may push the system towards a behaviour which is very different from that obtained with parameters having a reference value, or even towards a state in which safety conditions are not met. The precise objective of this second part is then to build a reconstruction model which is not costly (in terms of computation time) and to perform simulations in relevant areas (strong gradient areas, threshold overrun areas, and so on). Two issues are then important: the choice of the approximation model and the construction of the experiment plan. The model is based on a kriging-type stochastic approach, and an important part of the work addresses the development of new numerical techniques of experiment planning. The first part proposes a generic criterion of adaptive planning, and reports its analysis and implementation. In the second part, an alternative to error variance addition is developed. Methodological developments are tested on analytic functions, and then applied to the cases of measurement mapping and
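
    A minimal simple-kriging (Gaussian-process) interpolation sketch of the mapping step; the thesis' actual covariance model and adaptive experiment-planning criteria are more elaborate, and the field, kernel and hyperparameters below are toy choices:

```python
import numpy as np

def k(a, b, length=0.2, var=1.0):
    """Squared-exponential covariance between two point sets."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return var * np.exp(-0.5 * (d / length) ** 2)

rng = np.random.default_rng(0)
X = rng.random((30, 2))                         # sensor locations
y = np.sin(6 * X[:, 0]) * np.cos(6 * X[:, 1])   # measured field (toy)

Xs = rng.random((5, 2))                         # locations to map
K = k(X, X) + 1e-8 * np.eye(len(X))             # nugget for numerical stability
w = np.linalg.solve(K, k(X, Xs))                # kriging weights
mean = w.T @ y                                  # predicted field values
var = k(Xs, Xs).diagonal() - np.einsum('ij,ij->j', k(X, Xs), w)
print(mean, var)                                # prediction + uncertainty
```

    The predictive variance is what adaptive planning criteria typically exploit: new measurement or simulation points are placed where the reconstruction is most uncertain or where gradients are strongest.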

  13. Brand marketing model on social networks

    OpenAIRE

    Jezukevičiūtė, Jolita; Davidavičienė, Vida

    2014-01-01

    The paper analyzes the brand and its marketing solutions on social networks. This analysis led to the creation of an improved brand marketing model on social networks, which will contribute to rapid and cheap organization brand recognition, increase competitive advantage and enhance consumer loyalty. Therefore, the brand and a variety of social networks are becoming a hot research area for brand marketing model on social networks. The world's most successful brand marketing models exploratory an...

  14. Hybrid light transport model based bioluminescence tomography reconstruction for early gastric cancer detection

    Science.gov (United States)

    Chen, Xueli; Liang, Jimin; Hu, Hao; Qu, Xiaochao; Yang, Defu; Chen, Duofang; Zhu, Shouping; Tian, Jie

    2012-03-01

    Gastric cancer is the second leading cause of cancer-related death in the world, and it remains difficult to cure because it is usually at a late stage once found. Early gastric cancer detection thus becomes an effective approach to decrease gastric cancer mortality. Bioluminescence tomography (BLT) has been applied to detect early liver cancer and prostate cancer metastasis. However, gastric cancer commonly originates from the gastric mucosa and grows outwards, so the bioluminescent light passes through a non-scattering region formed by the gastric pouch as it propagates through tissue. Thus, current BLT reconstruction algorithms based on approximations to the radiative transfer equation are not optimal for this problem. To address this gastric cancer-specific problem, this paper presents a novel reconstruction algorithm that uses a hybrid light transport model to describe bioluminescent light propagation in tissues. Radiosity theory is integrated with the diffusion equation to form the hybrid light transport model, which describes light propagation in the non-scattering region. After finite element discretization, the hybrid light transport model is converted into a minimization problem which incorporates an l1-norm regularization term to reveal the sparsity of the bioluminescent source distribution. The performance of the reconstruction algorithm is first demonstrated with a digital-mouse-based simulation, with a reconstruction error of less than 1 mm. An experiment on a nude mouse bearing an in situ gastric cancer is then conducted. The preliminary result shows the ability of the novel BLT reconstruction algorithm in early gastric cancer detection.

  15. ISTA-Net: Iterative Shrinkage-Thresholding Algorithm Inspired Deep Network for Image Compressive Sensing

    KAUST Repository

    Zhang, Jian; Ghanem, Bernard

    2017-01-01

    and the performance/speed of network-based ones. We propose a novel structured deep network, dubbed ISTA-Net, which is inspired by the Iterative Shrinkage-Thresholding Algorithm (ISTA) for optimizing a general $l_1$ norm CS reconstruction model. ISTA-Net essentially
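
    For context, this is the classical ISTA iteration for the l1-regularized CS model min_x 0.5*||Ax - y||^2 + lam*||x||_1 that ISTA-Net unrolls into a learned network; the sketch shows only the non-learned baseline, with toy sizes and step sizes:

```python
import numpy as np

def ista(A, y, lam, n_iter=200):
    """Iterative shrinkage-thresholding for min 0.5||Ax-y||^2 + lam||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L      # gradient step ...
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)   # ... then shrinkage
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200)) / np.sqrt(50)   # CS measurement matrix
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = 1.0    # sparse signal
y = A @ x_true
x_hat = ista(A, y, lam=0.01)
print("support recovered:", np.flatnonzero(np.abs(x_hat) > 0.1))
```

    ISTA-Net's contribution, per the abstract, is to replace the fixed shrinkage and transform in this loop with learned network layers, keeping the structure of the iteration.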

  16. A Comparison of Manual Neuronal Reconstruction from Biocytin Histology or 2-Photon Imaging: Morphometry and Computer Modeling

    Directory of Open Access Journals (Sweden)

    Arne Vladimir Blackman

    2014-07-01

    Accurate 3D reconstruction of neurons is vital for applications linking anatomy and physiology. Reconstructions are typically created using Neurolucida after biocytin histology (BH). An alternative inexpensive and fast method is to use freeware such as Neuromantic to reconstruct from fluorescence imaging (FI) stacks acquired using 2-photon laser-scanning microscopy during physiological recording. We compare these two methods with respect to morphometry, cell classification, and multicompartmental modeling in the NEURON simulation environment. Quantitative morphological analysis of the same cells reconstructed using both methods reveals that whilst biocytin reconstructions facilitate tracing of more distal collaterals, both methods are comparable in representing the overall morphology: automated clustering of reconstructions from both methods successfully separates neocortical basket cells from pyramidal cells but not BH from FI reconstructions. BH reconstructions suffer more from tissue shrinkage and compression artifacts than FI reconstructions do. FI reconstructions, on the other hand, consistently have larger process diameters. Consequently, significant differences in NEURON modeling of excitatory post-synaptic potential (EPSP) forward propagation are seen between the two methods, with FI reconstructions exhibiting smaller depolarizations. Simulated action potential backpropagation (bAP), however, is indistinguishable between reconstructions obtained with the two methods. In our hands, BH reconstructions are necessary for NEURON modeling and detailed morphological tracing, and thus remain state of the art, although they are more labor intensive, more expensive, and suffer from a higher failure rate. However, for a subset of anatomical applications such as cell type identification, FI reconstructions are superior, because of indistinguishable classification performance with greater ease of use, essentially 100% success rate, and lower cost.

  17. An Implementation of Parallel and Networked Computing Schemes for the Real-Time Image Reconstruction Based on Electrical Tomography

    International Nuclear Information System (INIS)

    Park, Sook Hee

    2001-02-01

    This thesis implements and analyzes parallel and networked computing libraries, based on multiprocessor computer architectures as well as networked computers, aiming at improving the computation speed of an ET (Electrical Tomography) system, which requires enormous CPU time to reconstruct the unknown internal state of the target object. As an instance of typical tomography technology, ET partitions the cross-section of the target object into tiny elements and calculates their resistivity from signal values measured at the boundary electrodes surrounding the surface of the object after injecting a predetermined current pattern through the object. The number of elements is determined considering the trade-off between the accuracy of the reconstructed image and the computation time. As the elements become finer, their number increases and the system can obtain a better image. However, the reconstruction time increases polynomially with the number of partitioned elements, since the procedure consists of a number of time-consuming matrix operations such as multiplication, inverse, pseudo-inverse, Jacobian and so on. Consequently, the demand for improving computation speed via multiple processors grows indispensably. Moreover, currently released PCs can be equipped with up to 4 CPUs interconnected to shared memory, while some operating systems enable an application process to benefit from such computers by allocating threaded jobs to each CPU, resulting in concurrent processing. In addition, a networked computing or cluster computing environment is commonly available to almost every computer which supports a communication protocol and is connected to a local or global network. After partitioning the given job (numerical operation), each CPU or computer calculates its partial result independently, and the results are merged via common memory to produce the final result. It is desirable to adopt a commonly used library such as Matlab to
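
    A minimal sketch of the job-partitioning idea described here: split the rows of a large matrix product across worker processes and merge the partial results (the real system parallelizes Jacobian and pseudo-inverse computations, not this toy product):

```python
import numpy as np
from multiprocessing import Pool

def partial_product(args):
    rows, B = args           # each worker multiplies its slice of A by B
    return rows @ B

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((2000, 500))
    B = rng.standard_normal((500, 300))
    n_workers = 4
    chunks = np.array_split(A, n_workers)    # partition the job by rows
    with Pool(n_workers) as pool:
        parts = pool.map(partial_product, [(c, B) for c in chunks])
    C = np.vstack(parts)                     # merge the partial results
    assert np.allclose(C, A @ B)
```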

  18. Hierarchical model generation for architecture reconstruction using laser-scanned point clouds

    Science.gov (United States)

    Ning, Xiaojuan; Wang, Yinghui; Zhang, Xiaopeng

    2014-06-01

    Architecture reconstruction using terrestrial laser scanner is a prevalent and challenging research topic. We introduce an automatic, hierarchical architecture generation framework to produce full geometry of architecture based on a novel combination of facade structures detection, detailed windows propagation, and hierarchical model consolidation. Our method highlights the generation of geometric models automatically fitting the design information of the architecture from sparse, incomplete, and noisy point clouds. First, the planar regions detected in raw point clouds are interpreted as three-dimensional clusters. Then, the boundary of each region extracted by projecting the points into its corresponding two-dimensional plane is classified to obtain detailed shape structure elements (e.g., windows and doors). Finally, a polyhedron model is generated by calculating the proposed local structure model, consolidated structure model, and detailed window model. Experiments on modeling the scanned real-life buildings demonstrate the advantages of our method, in which the reconstructed models not only correspond to the information of architectural design accurately, but also satisfy the requirements for visualization and analysis.
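
    The first step named in the abstract, detecting planar regions in raw point clouds, is commonly done with a RANSAC-style fit; the sketch below is that generic step under my own assumptions (synthetic roof points, an invented inlier tolerance), not the authors' detector:

```python
import numpy as np

def ransac_plane(pts, n_iter=200, tol=0.02, rng=np.random.default_rng(0)):
    """Return indices of the largest planar inlier set found by RANSAC."""
    best_inliers = np.array([], dtype=int)
    for _ in range(n_iter):
        p = pts[rng.choice(len(pts), 3, replace=False)]   # candidate plane
        n = np.cross(p[1] - p[0], p[2] - p[0])
        if np.linalg.norm(n) < 1e-12:
            continue                                      # degenerate sample
        n = n / np.linalg.norm(n)
        dist = np.abs((pts - p[0]) @ n)                   # point-plane distance
        inliers = np.flatnonzero(dist < tol)
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

rng = np.random.default_rng(1)
roof = np.c_[rng.random((500, 2)), 0.01 * rng.standard_normal(500) + 5.0]
noise = rng.random((100, 3)) * [1, 1, 10]
pts = np.vstack([roof, noise])
print("inliers found:", len(ransac_plane(pts)))           # roughly the 500 roof points
```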

  19. Simple method of modelling of digital holograms registering and their optical reconstruction

    International Nuclear Information System (INIS)

    Evtikhiev, N N; Cheremkhin, P A; Krasnov, V V; Kurbatova, E A; Molodtsov, D Yu; Porshneva, L A; Rodin, V G

    2016-01-01

    A technique for modeling digital hologram recording and optical image reconstruction from these holograms is described. The method takes into account the characteristics of the object, the digital camera's photosensor and the spatial light modulator used for displaying the digital holograms. Using the technique, equipment can be chosen for experiments so as to obtain good reconstruction quality and/or hologram diffraction efficiency. Numerical experiments were conducted. (paper)
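
    A minimal sketch of numerically recording and reconstructing a hologram with angular-spectrum propagation; the wavelength, pixel pitch and object are generic toy values, and the camera/SLM characteristics the paper accounts for are omitted:

```python
import numpy as np

def propagate(field, dist, wl=532e-9, pitch=8e-6):
    """Angular-spectrum propagation of a sampled optical field."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wl * FX) ** 2 - (wl * FY) ** 2
    H = np.exp(2j * np.pi * dist / wl * np.sqrt(np.maximum(arg, 0)))
    return np.fft.ifft2(np.fft.fft2(field) * H)

n = 256
obj = np.zeros((n, n), complex)
obj[96:160, 96:160] = 1.0                             # toy object aperture
ref = np.ones((n, n))                                 # plane reference wave
hologram = np.abs(propagate(obj, 0.05) + ref) ** 2    # recorded intensity
recon = propagate(hologram, -0.05)                    # back-propagate to object
print("energy in object region:",
      np.abs(recon[96:160, 96:160]).sum() / np.abs(recon).sum())
```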

  20. Target-Centric Network Modeling

    DEFF Research Database (Denmark)

    Mitchell, Dr. William L.; Clark, Dr. Robert M.

    In Target-Centric Network Modeling: Case Studies in Analyzing Complex Intelligence Issues, authors Robert Clark and William Mitchell take an entirely new approach to teaching intelligence analysis. Unlike any other book on the market, it offers case study scenarios using actual intelligence reporting formats, along with a tested process that facilitates the production of a wide range of analytical products for civilian, military, and hybrid intelligence environments. Readers will learn how to perform the specific actions of problem definition modeling, target network modeling, and collaborative sharing in the process of creating a high-quality, actionable intelligence product. The case studies reflect the complexity of twenty-first century intelligence issues by dealing with multi-layered target networks that cut across political, economic, social, technological, and military issues...

  1. Charged particle track reconstruction using artificial neural networks

    International Nuclear Information System (INIS)

    Glover, C.; Fu, P.; Gabriel, T.; Handler, T.

    1992-01-01

    This paper summarizes the current state of our research in developing and applying artificial neural network (ANN) algorithms to charged particle track reconstruction. The ANN algorithm described here is based on a crude model of the retina. It takes as input the coordinates of each charged particle's interaction point (''hit'') in the tracking chamber. The algorithm's output is a set of vectors pointing to other hits that are most likely to form a track.
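
    A minimal retina-style sketch of the general idea (my simplification of the approach, with invented grid sizes and receptive-field width): each "neuron" is a cell in track-parameter space whose excitation is a Gaussian function of how close the hits fall to its template line, and peaks in the response map indicate track candidates:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 12)                      # detector layer positions
hits_y = 0.7 * x + 0.1 + 0.005 * rng.standard_normal(12)   # one noisy track

slopes = np.linspace(0, 1.5, 60)
intercepts = np.linspace(-0.5, 0.5, 60)
S, B = np.meshgrid(slopes, intercepts, indexing="ij")

# response of every parameter cell to every hit (sigma = receptive-field width)
resid = hits_y[None, None, :] - (S[..., None] * x + B[..., None])
response = np.exp(-(resid / 0.02) ** 2).sum(axis=-1)

i, j = np.unravel_index(response.argmax(), response.shape)
print("recovered slope/intercept:", slopes[i], intercepts[j])   # near 0.7, 0.1
```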

  2. Computed tomography depiction of small pediatric vessels with model-based iterative reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Koc, Gonca; Courtier, Jesse L.; Phelps, Andrew; Marcovici, Peter A.; MacKenzie, John D. [UCSF Benioff Children' s Hospital, Department of Radiology and Biomedical Imaging, San Francisco, CA (United States)

    2014-07-15

    Computed tomography (CT) is extremely important in characterizing blood vessel anatomy and vascular lesions in children. Recent advances in CT reconstruction technology hold promise for improved image quality and also reductions in radiation dose. This report evaluates potential improvements in image quality for the depiction of small pediatric vessels with model-based iterative reconstruction (Veo™), a technique developed to improve image quality and reduce noise. To evaluate Veo™ as an improved method when compared to adaptive statistical iterative reconstruction (ASIR™) for the depiction of small vessels on pediatric CT. Seventeen patients (mean age: 3.4 years, range: 2 days to 10.0 years; 6 girls, 11 boys) underwent contrast-enhanced CT examinations of the chest and abdomen in this HIPAA-compliant and institutional review board-approved study. Raw data were reconstructed into separate image datasets using Veo™ and ASIR™ algorithms (GE Medical Systems, Milwaukee, WI). Four blinded radiologists subjectively evaluated image quality. The pulmonary, hepatic, splenic and renal arteries were evaluated for the length and number of branches depicted. Datasets were compared with parametric and non-parametric statistical tests. Readers stated a preference for Veo™ over ASIR™ images when subjectively evaluating image quality criteria for vessel definition, image noise and resolution of small anatomical structures. The mean image noise in the aorta and fat was significantly less for Veo™ vs. ASIR™ reconstructed images. Quantitative measurements of mean vessel lengths and number of branch vessels delineated were significantly different for Veo™ and ASIR™ images. Veo™ consistently showed more of the vessel anatomy: longer vessel length and more branching vessels. When compared to the more established adaptive statistical iterative reconstruction algorithm, model

  3. Computed tomography depiction of small pediatric vessels with model-based iterative reconstruction

    International Nuclear Information System (INIS)

    Koc, Gonca; Courtier, Jesse L.; Phelps, Andrew; Marcovici, Peter A.; MacKenzie, John D.

    2014-01-01

    Computed tomography (CT) is extremely important in characterizing blood vessel anatomy and vascular lesions in children. Recent advances in CT reconstruction technology hold promise for improved image quality and also reductions in radiation dose. This report evaluates potential improvements in image quality for the depiction of small pediatric vessels with model-based iterative reconstruction (Veo™), a technique developed to improve image quality and reduce noise. To evaluate Veo™ as an improved method when compared to adaptive statistical iterative reconstruction (ASIR™) for the depiction of small vessels on pediatric CT. Seventeen patients (mean age: 3.4 years, range: 2 days to 10.0 years; 6 girls, 11 boys) underwent contrast-enhanced CT examinations of the chest and abdomen in this HIPAA-compliant and institutional review board-approved study. Raw data were reconstructed into separate image datasets using Veo™ and ASIR™ algorithms (GE Medical Systems, Milwaukee, WI). Four blinded radiologists subjectively evaluated image quality. The pulmonary, hepatic, splenic and renal arteries were evaluated for the length and number of branches depicted. Datasets were compared with parametric and non-parametric statistical tests. Readers stated a preference for Veo™ over ASIR™ images when subjectively evaluating image quality criteria for vessel definition, image noise and resolution of small anatomical structures. The mean image noise in the aorta and fat was significantly less for Veo™ vs. ASIR™ reconstructed images. Quantitative measurements of mean vessel lengths and number of branch vessels delineated were significantly different for Veo™ and ASIR™ images. Veo™ consistently showed more of the vessel anatomy: longer vessel length and more branching vessels. When compared to the more established adaptive statistical iterative reconstruction algorithm, model

  4. A Complex Network Approach to Distributional Semantic Models.

    Directory of Open Access Journals (Sweden)

    Akira Utsumi

    A number of studies on network analysis have focused on language networks based on free word association, which reflects human lexical knowledge, and have demonstrated the small-world and scale-free properties of the word association network. Nevertheless, there have been very few attempts at applying network analysis to distributional semantic models, despite the fact that these models have been studied extensively as computational or cognitive models of human lexical knowledge. In this paper, we analyze three network properties, namely, small-world, scale-free, and hierarchical properties, of semantic networks created by distributional semantic models. We demonstrate that the created networks generally exhibit the same properties as word association networks. In particular, we show that the distribution of the number of connections in these networks follows the truncated power law, which is also observed in the association network. This indicates that distributional semantic models can provide a plausible model of lexical knowledge. Additionally, the observed differences in the network properties of various implementations of distributional semantic models are consistently explained or predicted by considering the intrinsic semantic features of a word-context matrix and the functions of matrix weighting and smoothing. Furthermore, to simulate a semantic network with the observed network properties, we propose a new growing network model based on the model of Steyvers and Tenenbaum. The idea underlying the proposed model is that both preferential and random attachments are required to reflect different types of semantic relations in the network growth process. We demonstrate that this model provides a better explanation of network behaviors generated by distributional semantic models.
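
    A minimal sketch of the pipeline shape: build a semantic network by linking words whose distributional vectors exceed a cosine-similarity threshold, then inspect the properties the paper analyzes. The random word-context matrix and the threshold are stand-ins for a corpus-derived matrix and a tuned cutoff:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
M = rng.poisson(0.3, size=(300, 1000)).astype(float)   # word-context counts
M /= np.linalg.norm(M, axis=1, keepdims=True) + 1e-12  # unit rows
sim = M @ M.T                                          # cosine similarities
np.fill_diagonal(sim, 0)

G = nx.from_numpy_array((sim > 0.35).astype(int))      # threshold to edges
degrees = [d for _, d in G.degree()]
print("mean degree:", np.mean(degrees))
print("clustering coefficient:", nx.average_clustering(G))
```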

  5. The transformation of trust in China's alternative food networks: disruption, reconstruction, and development

    Directory of Open Access Journals (Sweden)

    Raymond Yu. Wang

    2015-06-01

    Food safety issues in China have received much scholarly attention, yet few studies have systematically examined this matter through the lens of trust. More importantly, little is known about the transformation of different types of trust in the dynamic process of food production, provision, and consumption. We consider trust as an evolving interdependent relationship between different actors. We used the Beijing County Fair, a prominent ecological farmers' market in China, as an example to examine the transformation of trust in China's alternative food networks. We argue that although there has been a disruption of institutional trust among the general public since 2008, when the melamine-tainted milk scandal broke out, reconstruction of individual trust and development of organizational trust have been observed, along with the emergence and increasing popularity of alternative food networks. Based on more than six months of fieldwork on the emerging ecological agriculture sector in 13 provinces across China, as well as monitoring of online discussions and posts, we analyze how various social factors - including but not limited to direct and indirect reciprocity, information, endogenous institutions, and altruism - have simultaneously contributed to the transformation of trust in China's alternative food networks. The findings not only complement current social theories of trust, but also highlight an important yet understudied phenomenon whereby informal social mechanisms have been partially substituting for formal institutions and gradually have been building trust against the backdrop of the food safety crisis in China.

  6. QSAR modelling using combined simple competitive learning networks and RBF neural networks.

    Science.gov (United States)

    Sheikhpour, R; Sarram, M A; Rezaeian, M; Sheikhpour, E

    2018-04-01

    The aim of this study was to propose a QSAR modelling approach based on the combination of simple competitive learning (SCL) networks with radial basis function (RBF) neural networks for predicting the biological activity of chemical compounds. The proposed QSAR method consisted of two phases. In the first phase, an SCL network was applied to determine the centres of an RBF neural network. In the second phase, the RBF neural network was used to predict the biological activity of various phenols and Rho kinase (ROCK) inhibitors. The predictive ability of the proposed QSAR models was evaluated and compared with other QSAR models using external validation. The results of this study showed that the proposed QSAR modelling approach leads to better performances than other models in predicting the biological activity of chemical compounds. This indicated the efficiency of simple competitive learning networks in determining the centres of RBF neural networks.
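
    A minimal sketch of the two-phase idea: simple competitive learning picks the RBF centres, then the RBF output weights are fit by least squares. The learning rate, widths and toy activity data are illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 3))                                      # descriptors
y = np.sin(X @ np.array([3, 2, 1])) + 0.05 * rng.standard_normal(200)

# Phase 1: simple competitive learning (winner-take-all centre updates)
centres = X[rng.choice(len(X), 15, replace=False)].copy()
for epoch in range(20):
    for x in X:
        w = np.argmin(np.linalg.norm(centres - x, axis=1))    # winning unit
        centres[w] += 0.05 * (x - centres[w])                 # move it toward x

# Phase 2: RBF network with those centres, output weights by least squares
def design(X, centres, width=0.3):
    d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=-1)
    return np.exp(-(d / width) ** 2)

Phi = design(X, centres)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
pred = Phi @ w
print("training RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```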

  7. Model simulations and proxy-based reconstructions for the European region in the past millennium (Invited)

    Science.gov (United States)

    Zorita, E.

    2009-12-01

    One of the objectives when comparing simulations of past climates to proxy-based climate reconstructions is to assess the skill of climate models in simulating climate change. This comparison may be accomplished at large spatial scales, for instance the evolution of simulated and reconstructed Northern Hemisphere annual temperature, or at regional or point scales. In both approaches a 'fair' comparison has to take into account different aspects that affect the inevitable uncertainties and biases in the simulations and in the reconstructions. These efforts face a trade-off: climate models are believed to be more skillful at large hemispheric scales, but climate reconstructions at these scales are burdened by the spatial distribution of available proxies and by methodological issues surrounding the statistical method used to translate the proxy information into large-scale spatial averages. Furthermore, the internal climatic noise at large hemispheric scales is low, so that the sampling uncertainty tends to be low as well. On the other hand, the skill of climate models at regional scales is limited by the coarse spatial resolution, which hinders a faithful representation of aspects important for the regional climate. At small spatial scales, the reconstruction of past climate probably faces fewer methodological problems if information from different proxies is available. The internal climatic variability at regional scales is, however, high. In this contribution some examples of the different issues faced when comparing simulations and reconstructions at small spatial scales in the past millennium are discussed. These examples comprise reconstructions from dendrochronological data and from historical documentary data in Europe, and climate simulations with global and regional models. These examples indicate that centennial climate variations can offer a reasonable target to assess the skill of global climate models and of proxy-based reconstructions, even at small spatial scales.

  8. Dynamic sporulation gene co-expression networks for Bacillus subtilis 168 and the food-borne isolate Bacillus amyloliquefaciens: a transcriptomic model.

    Science.gov (United States)

    Omony, Jimmy; de Jong, Anne; Krawczyk, Antonina O; Eijlander, Robyn T; Kuipers, Oscar P

    2018-02-09

    Sporulation is a survival strategy adopted by bacterial cells in response to harsh environmental adversities. The adaptation potential differs between strains, and the variations may arise from differences in gene regulation. Gene networks are a valuable way of studying such regulation processes and establishing associations between genes. We reconstructed and compared sporulation gene co-expression networks (GCNs) of the model laboratory strain Bacillus subtilis 168 and the food-borne industrial isolate Bacillus amyloliquefaciens. Transcriptome data obtained from samples of six stages during the sporulation process were used for network inference. Subsequently, a gene set enrichment analysis was performed to compare the reconstructed GCNs of B. subtilis 168 and B. amyloliquefaciens with respect to biological functions, which showed the enriched modules with coherent functional groups associated with sporulation. On the basis of the GCNs and the time evolution of differentially expressed genes, we could identify novel candidate genes strongly associated with sporulation in B. subtilis 168 and B. amyloliquefaciens. The GCNs offer a framework for exploring transcription factors, their targets, and co-expressed genes during sporulation. Furthermore, the methodology described here can conveniently be applied to other species or biological processes.
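
    A minimal sketch of co-expression network inference of the kind used for GCNs: correlate gene expression profiles across samples and keep strongly correlated gene pairs as edges. The simulated data (and the assumption of replicated stages) are mine, not the paper's:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_genes, n_samples = 100, 18              # e.g. six stages x three replicates
expr = rng.standard_normal((n_genes, n_samples))
base = rng.standard_normal(n_samples)     # shared "sporulation module" profile
expr[:10] = base + 0.2 * rng.standard_normal((10, n_samples))

corr = np.corrcoef(expr)                  # gene-gene co-expression
np.fill_diagonal(corr, 0)
G = nx.from_numpy_array((np.abs(corr) > 0.8).astype(int))
modules = [c for c in nx.connected_components(G) if len(c) > 2]
print("co-expression modules:", modules)  # should recover genes 0..9 together
```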

  9. Empirical Bayes conditional independence graphs for regulatory network recovery

    Science.gov (United States)

    Mahdi, Rami; Madduri, Abishek S.; Wang, Guoqing; Strulovici-Barel, Yael; Salit, Jacqueline; Hackett, Neil R.; Crystal, Ronald G.; Mezey, Jason G.

    2012-01-01

    Motivation: Computational inference methods that make use of graphical models to extract regulatory networks from gene expression data can have difficulty reconstructing dense regions of a network, a consequence of both computational complexity and unreliable parameter estimation when sample size is small. As a result, identification of hub genes is of special difficulty for these methods. Methods: We present a new algorithm, Empirical Light Mutual Min (ELMM), for large network reconstruction that has properties well suited for recovery of graphs with high-degree nodes. ELMM reconstructs the undirected graph of a regulatory network using empirical Bayes conditional independence testing with a heuristic relaxation of independence constraints in dense areas of the graph. This relaxation allows only one gene of a pair with a putative relation to be aware of the network connection, an approach that is aimed at easing multiple testing problems associated with recovering densely connected structures. Results: Using in silico data, we show that ELMM has better performance than commonly used network inference algorithms including GeneNet, ARACNE, FOCI, GENIE3 and GLASSO. We also apply ELMM to reconstruct a network among 5492 genes expressed in human lung airway epithelium of healthy non-smokers, healthy smokers and individuals with chronic obstructive pulmonary disease assayed using microarrays. The analysis identifies dense sub-networks that are consistent with known regulatory relationships in the lung airway and also suggests novel hub regulatory relationships among a number of genes that play roles in oxidative stress and secretion. Availability and implementation: Software for running ELMM is made available at http://mezeylab.cb.bscb.cornell.edu/Software.aspx. Contact: ramimahdi@yahoo.com or jgm45@cornell.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22685074

  10. Fast Tomographic Reconstruction From Limited Data Using Artificial Neural Networks

    NARCIS (Netherlands)

    D.M. Pelt (Daniël); K.J. Batenburg (Joost)

    2013-01-01

    Image reconstruction from a small number of projections is a challenging problem in tomography. Advanced algorithms that incorporate prior knowledge can sometimes produce accurate reconstructions, but they typically require long computation times. Furthermore, the required prior

  11. New Markov-Shannon Entropy models to assess connectivity quality in complex networks: from molecular to cellular pathway, Parasite-Host, Neural, Industry, and Legal-Social networks.

    Science.gov (United States)

    Riera-Fernández, Pablo; Munteanu, Cristian R; Escobar, Manuel; Prado-Prado, Francisco; Martín-Romalde, Raquel; Pereira, David; Villalba, Karen; Duardo-Sánchez, Aliuska; González-Díaz, Humberto

    2012-01-21

    Graph and Complex Network theory is expanding its application to different levels of matter organization such as molecular, biological, technological, and social networks. A network is a set of items, usually called nodes, with connections between them, which are called links or edges. There are many different experimental and/or theoretical methods to assign node-node links depending on the type of network we want to construct. Unfortunately, the use of a method for experimental reevaluation of the entire network is very expensive in terms of time and resources; thus the development of cheaper theoretical methods is of major importance. In addition, different methods to link nodes in the same type of network are not totally accurate, in the sense that they do not always coincide. In this sense, the development of computational methods useful to evaluate connectivity quality in complex networks (a posteriori of network assembly) is a goal of major interest. In this work, we report for the first time a new method to calculate numerical quality scores S(L(ij)) for network links L(ij) (connectivity) based on the Markov-Shannon Entropy indices of k-th order (θ(k)) for network nodes. The algorithm may be summarized as follows: (i) first, the θ(k)(j) values are calculated for all j-th nodes in a complex network already constructed; (ii) a Linear Discriminant Analysis (LDA) is used to seek a linear equation that discriminates connected or linked (L(ij)=1) pairs of nodes experimentally confirmed from non-linked ones (L(ij)=0); (iii) the new model is validated with external series of pairs of nodes; (iv) the equation obtained is used to re-evaluate the connectivity quality of the network, connecting/disconnecting nodes based on the quality scores calculated with the new connectivity function. This method was used to study different types of large networks. The linear models obtained produced the following results in terms of overall accuracy for network reconstruction
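
    A minimal sketch of the pipeline shape under my own simplifications: Shannon entropies of k-step Markov node distributions as features, and an LDA that scores putative links. The exact θ(k) definition in the paper may differ from this reading, and the karate-club graph is only a stand-in network:

```python
import numpy as np
import networkx as nx
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

G = nx.karate_club_graph()                       # stand-in network
A = nx.to_numpy_array(G)
P = A / A.sum(axis=1, keepdims=True)             # Markov transition matrix

def theta(P, k_max=3):
    """Shannon entropy of each node's k-step transition distribution."""
    feats, Pk = [], np.eye(len(P))
    for _ in range(k_max):
        Pk = Pk @ P
        feats.append(-(Pk * np.log(Pk + 1e-12)).sum(axis=1))
    return np.array(feats).T                     # nodes x k

T = theta(P)
pairs = [(i, j) for i in G for j in G if i < j]
X = np.array([np.concatenate([T[i], T[j]]) for i, j in pairs])
y = np.array([int(G.has_edge(i, j)) for i, j in pairs])

lda = LinearDiscriminantAnalysis().fit(X, y)     # link-quality classifier
print("training accuracy:", lda.score(X, y))
```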

  12. Modeling Renewable Penetration Using a Network Economic Model

    Science.gov (United States)

    Lamont, A.

    2001-03-01

    This paper evaluates the accuracy of a network economic modeling approach in designing energy systems having renewable and conventional generators. The network approach models the system as a network of processes such as demands, generators, markets, and resources. The model reaches a solution by exchanging prices and quantity information between the nodes of the system. This formulation is very flexible and takes very little time to build and modify models. This paper reports an experiment designing a system with photovoltaic and base and peak fossil generators. The level of PV penetration as a function of its price and the capacities of the fossil generators were determined using the network approach and using an exact, analytic approach. It is found that the two methods agree very closely in terms of the optimal capacities and are nearly identical in terms of annual system costs.

  13. Assessing Women’s Preferences and Preference Modeling for Breast Reconstruction Decision Making

    Directory of Open Access Journals (Sweden)

    Clement S. Sun, MS

    2014-03-01

    Conclusions: We recommend the risk-averse multiplicative model for modeling the preferences of patients considering different forms of breast reconstruction because it agreed most often with the participants in this study.

  14. Machine learning in sentiment reconstruction of the simulated stock market

    Science.gov (United States)

    Goykhman, Mikhail; Teimouri, Ali

    2018-02-01

    In this paper we continue the study of the simulated stock market framework defined by driving sentiment processes. We focus on a market environment driven by a buy/sell trading sentiment process of the Markov chain type. We apply the methodology of Hidden Markov Models and Recurrent Neural Networks to reconstruct the transition probability matrix of the Markov sentiment process and recover the underlying sentiment states from the observed stock price behavior. We demonstrate that the Hidden Markov Model can successfully recover the transition probability matrix for the hidden sentiment process of the Markov chain type. We also demonstrate that the Recurrent Neural Network can successfully recover the hidden sentiment states from the observed simulated stock price time series.
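
    A minimal Viterbi sketch for recovering hidden buy/sell sentiment states from observations, in the spirit of the HMM part of the study; the Gaussian emission model on price returns and all parameter values are invented for illustration:

```python
import numpy as np

def viterbi(log_em, log_T, log_pi):
    """Most likely hidden state path given log emissions and log transitions."""
    n_t, n_s = log_em.shape
    score = log_pi + log_em[0]
    back = np.zeros((n_t, n_s), int)
    for t in range(1, n_t):
        cand = score[:, None] + log_T        # all prev-state -> state moves
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_em[t]
    path = [int(score.argmax())]
    for t in range(n_t - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1]

rng = np.random.default_rng(0)
T = np.array([[0.95, 0.05], [0.05, 0.95]])   # sticky sentiment states
states = [0]
for _ in range(199):
    states.append(int(rng.choice(2, p=T[states[-1]])))
means = np.array([-0.01, 0.01])              # bearish vs. bullish drift
returns = means[states] + 0.01 * rng.standard_normal(200)

log_em = -0.5 * ((returns[:, None] - means) / 0.01) ** 2
decoded = viterbi(log_em, np.log(T), np.log([0.5, 0.5]))
print("state recovery rate:", np.mean(np.array(decoded) == states))
```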

  15. An evolving network model with community structure

    International Nuclear Information System (INIS)

    Li Chunguang; Maini, Philip K

    2005-01-01

    Many social and biological networks consist of communities: groups of nodes within which connections are dense, but between which connections are sparser. Recently, there has been considerable interest in designing algorithms for detecting community structure in real-world complex networks. In this paper, we propose an evolving network model which exhibits community structure. The network model is based on inner-community and inter-community preferential attachment mechanisms. The degree distributions of this network model are analysed using a mean-field method. Theoretical results and numerical simulations indicate that this network model has community structure and scale-free properties.
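
    A compact sketch of this kind of growth rule, with all parameters (two communities, up to two intra-community links per new node, a small inter-community linking probability) invented for illustration. Preferential attachment is implemented by sampling targets from a stub list in which each node appears once per incident edge.

      import random

      random.seed(42)
      n_comm, m_intra, p_inter = 2, 2, 0.1

      # Seed each community with one edge; 'stubs' lists a node once per
      # incident edge, so random.choice over it is degree-preferential.
      stubs = {c: [(c, 0), (c, 1)] for c in range(n_comm)}
      edges = {((c, 0), (c, 1)) for c in range(n_comm)}

      for t in range(2, 300):
          for c in range(n_comm):
              new = (c, t)
              # inner-community preferential attachment (set dedupes targets)
              for target in {random.choice(stubs[c]) for _ in range(m_intra)}:
                  edges.add((new, target))
                  stubs[c].extend([new, target])
              if random.random() < p_inter:   # inter-community attachment
                  other = random.choice([k for k in range(n_comm) if k != c])
                  target = random.choice(stubs[other])
                  edges.add((new, target))
                  stubs[other].append(target)
                  stubs[c].append(new)
      print(len(edges), "edges")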

  16. Research on the model of home networking

    Science.gov (United States)

    Yun, Xiang; Feng, Xiancheng

    2007-11-01

    Combining voice, data and broadband audio-video services over the IP protocol, so that various real-time and interactive services can be delivered to terminal users at home, is a research hotspot of current broadband networking. Home networking, also called the digital home network, is a new kind of network and application technology which can provide various services. It means that PCs, home entertainment equipment, home appliances, home wiring, security and illumination systems communicate with each other using some constituent network technology, forming a network inside the home that connects to the WAN through a home gateway. It can provide many kinds of services inside the home or between homes. Currently, home networking equipment can be divided into three kinds: information equipment, home appliances and communication equipment. Equipment inside the home network can exchange information with outside networks through the home gateway; this communication is bidirectional, so a user can obtain information and services provided by the public network from equipment inside the home network and, at the same time, obtain information and resources to control the internal equipment of the home network. Based on the general network model of home networking, there are four functional entities inside home networking: (1) HA (Home Access), the home networking access function entity; (2) HB (Home Bridge), the home networking bridge function entity; (3) HC (Home Client), the home networking client function entity; and (4) HD (Home Device), the decoder function entity. There are many physical ways to implement the four functional entities. Based on these four functional entities, there are reference models for the physical layer, the link layer and the IP layer, and an application reference model for the higher layers. In the future home network

  17. Diffusion archeology for diffusion progression history reconstruction.

    Science.gov (United States)

    Sefer, Emre; Kingsford, Carl

    2016-11-01

    Diffusion through graphs can be used to model many real-world processes, such as the spread of diseases, social network memes, computer viruses, or water contaminants. Often, a real-world diffusion cannot be directly observed while it is occurring - perhaps it is not noticed until some time has passed, continuous monitoring is too costly, or privacy concerns limit data access. This leads to the need to reconstruct how the present state of the diffusion came to be from partial diffusion data. Here, we tackle the problem of reconstructing a diffusion history from one or more snapshots of the diffusion state. This ability can be invaluable to learn when certain computer nodes are infected or which people are the initial disease spreaders to control future diffusions. We formulate this problem over discrete-time SEIRS-type diffusion models in terms of maximum likelihood. We design methods that are based on submodularity and a novel prize-collecting dominating-set vertex cover (PCDSVC) relaxation that can identify likely diffusion steps with some provable performance guarantees. Our methods are the first to be able to reconstruct complete diffusion histories accurately in real and simulated situations. As a special case, they can also identify the initial spreaders better than the existing methods for that problem. Our results for both meme and contaminant diffusion show that the partial diffusion data problem can be overcome with proper modeling and methods, and that hidden temporal characteristics of diffusion can be predicted from limited data.
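
    The reconstruction problem inverts a forward model like the one sketched below: a discrete-time susceptible-infected process on a random graph (a simplified special case of the SEIRS-type models treated in the paper), with an invented seed node and infection probability. The recorded history is exactly what the reconstruction would try to recover from the final snapshot.

      import random
      import networkx as nx

      random.seed(0)
      G = nx.erdos_renyi_graph(200, 0.03, seed=0)

      beta, steps = 0.3, 10        # per-contact infection probability, horizon
      infected = {0}               # hypothetical initial spreader
      history = [set(infected)]    # ground truth a reconstructor must infer

      for _ in range(steps):
          new = set()
          for u in infected:
              for v in G.neighbors(u):
                  if v not in infected and random.random() < beta:
                      new.add(v)
          infected |= new
          history.append(set(infected))

      # A "snapshot" observation is just the final set; the task is to
      # recover 'history' (and the seed) from it.
      print(len(infected), "infected after", steps, "steps")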

  18. An automated 3D reconstruction method of UAV images

    Science.gov (United States)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper a novel fully automated 3D reconstruction approach based on low-altitude unmanned aerial vehicle (UAV) images is presented, which does not require previous camera calibration or any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images to be matched. A high-resolution digital surface model of the study area is produced based on the UAV point clouds by constructing a triangulated irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction from low-altitude UAV images, and has great potential for the acquisition of spatial information in large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.
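
    A minimal sketch of the topology-filtering idea: camera positions taken from the flight-control log restrict feature matching to image pairs whose baselines suggest overlap. The image names, coordinates, and baseline threshold below are invented for illustration.

      from itertools import combinations
      from math import dist

      # Hypothetical camera positions (x, y in metres) from the flight log.
      cam_xy = {"IMG_001": (0, 0), "IMG_002": (25, 5),
                "IMG_003": (400, 10), "IMG_004": (410, 30)}

      MAX_BASELINE = 60.0   # only match images likely to overlap

      candidate_pairs = [
          (a, b) for a, b in combinations(sorted(cam_xy), 2)
          if dist(cam_xy[a], cam_xy[b]) < MAX_BASELINE
      ]
      print(candidate_pairs)  # feature matching runs only on these pairs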

  19. An analytical reconstruction model of the spread-out Bragg peak using laser-accelerated proton beams.

    Science.gov (United States)

    Tao, Li; Zhu, Kun; Zhu, Jungao; Xu, Xiaohan; Lin, Chen; Ma, Wenjun; Lu, Haiyang; Zhao, Yanying; Lu, Yuanrong; Chen, Jia-Er; Yan, Xueqing

    2017-07-07

    With the development of laser technology, laser-driven proton acceleration provides a new method for proton tumor therapy. However, it has not been applied in practice because of the wide and decreasing energy spectrum of laser-accelerated proton beams. In this paper, we propose an analytical model to reconstruct the spread-out Bragg peak (SOBP) using laser-accelerated proton beams. Firstly, we present a modified weighting formula for protons of different energies. Secondly, a theoretical model for the reconstruction of SOBPs with laser-accelerated proton beams has been built. It can quickly calculate the number of laser shots needed for each energy interval of the laser-accelerated protons. Finally, we show the 2D reconstruction results of SOBPs for laser-accelerated proton beams and the ideal situation. The final results show that our analytical model can give an SOBP reconstruction scheme that can be used for actual tumor therapy.
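
    The paper's actual weighting formula is not reproduced here; the sketch below shows the generic version of the task under stated assumptions: given depth-dose curves for a set of beam energies (Gaussian stand-ins for real, asymmetric Bragg curves), solve for non-negative per-energy weights that flatten the summed dose across the target region, via SciPy's non-negative least squares.

      import numpy as np
      from scipy.optimize import nnls

      depth = np.linspace(0, 15, 301)        # cm
      peaks = np.arange(8.0, 12.5, 0.5)      # nominal Bragg-peak depths

      # Stand-in depth-dose curves; real Bragg curves are asymmetric, a
      # Gaussian is used purely for illustration.
      D = np.array([np.exp(-(depth - p) ** 2 / (2 * 0.3 ** 2))
                    for p in peaks]).T

      # Target: uniform dose across the spread-out region, zero elsewhere.
      target = ((depth >= 8.0) & (depth <= 12.0)).astype(float)

      w, _ = nnls(D, target)                 # non-negative weight per energy
      sobp = D @ w                           # reconstructed SOBP
      print(w.round(3))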

  20. Bayesian network model for identification of pathways by integrating protein interaction with genetic interaction data.

    Science.gov (United States)

    Fu, Changhe; Deng, Su; Jin, Guangxu; Wang, Xinxin; Yu, Zu-Guo

    2017-09-21

    Molecular interaction data at the proteomic and genetic levels provide physical and functional insights into a molecular biosystem and are complementarily helpful for the construction of pathway structures. Despite advances in inferring biological pathways using genetic interaction data, there are still weaknesses in existing models, such as activity pathway networks (APN), when integrating data from the proteomic and genetic levels. It is necessary to develop new methods to infer pathway structure from both types of interaction data. We utilized a probabilistic graphical model to develop a new method that integrates genetic interaction and protein interaction data and infers exquisitely detailed pathway structure. We modeled the pathway network as a Bayesian network and applied this model to infer pathways for the coherent subsets of the global genetic interaction profiles, and for the available data set of endoplasmic reticulum genes. The protein interaction data were derived from the BioGRID database. Our method can accurately reconstruct known cellular pathway structures, including the SWR complex, the ER-Associated Degradation (ERAD) pathway, the N-Glycan biosynthesis pathway, the Elongator complex, the Retromer complex, and the Urmylation pathway. By comparing the N-Glycan biosynthesis pathway and the Urmylation pathway identified by our approach with those from APN, we found that our method is able to overcome APN's weakness (certain edges are inexplicable). Based on the underlying protein interaction network, we defined a simple scoring function that adopts only genetic interaction information, to avoid the balance difficulty in APN. Using an effective stochastic simulation algorithm, the performance of our proposed method is significantly high. We developed a new method based on Bayesian networks to infer detailed pathway structures from interaction data at the proteomic and genetic levels. The results indicate that the developed method performs better in predicting signaling pathways than previously

  1. Network models in economics and finance

    CERN Document Server

    Pardalos, Panos; Rassias, Themistocles

    2014-01-01

    Using network models to investigate the interconnectivity in modern economic systems allows researchers to better understand and explain some economic phenomena. This volume presents contributions by known experts and active researchers in economic and financial network modeling. Readers are provided with an understanding of the latest advances in network analysis as applied to economics, finance, corporate governance, and investments. Moreover, recent advances in market network analysis that focus on influential techniques for market graph analysis are also examined. Young researchers will find this volume particularly useful in facilitating their introduction to this new and fascinating field. Professionals in economics, financial management, various technologies, and network analysis will find the network models presented in this book beneficial in analyzing the interconnectivity in modern economic systems.

  2. Honey bee-inspired algorithms for SNP haplotype reconstruction problem

    Science.gov (United States)

    PourkamaliAnaraki, Maryam; Sadeghi, Mehdi

    2016-03-01

    Reconstructing haplotypes from SNP fragments is an important problem in computational biology. There has been a lot of interest in this field because haplotypes have been shown to contain promising data for disease association research. It has been proved that haplotype reconstruction under the Minimum Error Correction model is an NP-hard problem. Therefore, several methods, such as clustering techniques, evolutionary algorithms, neural networks and swarm intelligence approaches, have been proposed in order to solve this problem in reasonable time. In this paper, we have focused on various evolutionary clustering techniques and tried to find an efficient technique for solving the haplotype reconstruction problem. Our experiments indicate that clustering methods relying on the behaviour of the honey bee colony in nature, specifically the bees algorithm and artificial bee colony methods, are expected to result in more efficient solutions. An application program of the methods is available at the following link: http://www.bioinf.cs.ipm.ir/software/haprs/

  3. Reconstruction of an engine combustion process with a neural network

    Energy Technology Data Exchange (ETDEWEB)

    Jacob, P J; Gu, F; Ball, A D [School of Engineering, University of Manchester, Manchester (United Kingdom)

    1998-12-31

    The cylinder pressure waveform in an internal combustion engine is one of the most important parameters describing the engine combustion process. It is used for a range of diagnostic tasks, such as identification of ignition faults or mechanical wear in the cylinders. However, it is very difficult to measure this parameter directly. Nevertheless, the cylinder pressure may be inferred from other, more readily obtainable parameters. In this presentation it is shown how a Radial Basis Function network, which may be regarded as a form of neural network, may be used to model the cylinder pressure as a function of the instantaneous crankshaft velocity, recorded with a simple magnetic sensor. The application of the model is demonstrated on a four-cylinder DI diesel engine with data from a wide range of speed and load settings. The prediction capabilities of the model, once trained, are validated against measured data. (orig.) 4 refs.
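
    A minimal regression sketch in the same spirit, on synthetic stand-in data: scikit-learn's kernel ridge regression with an RBF kernel substitutes for the paper's Radial Basis Function network, mapping instantaneous crankshaft speed to cylinder pressure and validating on held-out samples. All numbers are invented.

      import numpy as np
      from sklearn.kernel_ridge import KernelRidge

      rng = np.random.default_rng(7)

      # Synthetic stand-in data: crankshaft speed samples (input) and
      # cylinder pressure (output); real data would come from the sensors.
      speed = rng.uniform(100.0, 105.0, size=(300, 1))             # rad/s
      pressure = 40 + 25 * np.sin(3 * speed[:, 0]) + rng.normal(0, 1, 300)

      model = KernelRidge(kernel="rbf", alpha=1e-2, gamma=2.0)
      model.fit(speed[:200], pressure[:200])

      # Validate predictions against held-out "measurements".
      err = np.abs(model.predict(speed[200:]) - pressure[200:]).mean()
      print(f"mean abs error: {err:.2f} bar")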

  5. Anisotropic modeling and joint-MAP stitching for improved ultrasound model-based iterative reconstruction of large and thick specimens

    Energy Technology Data Exchange (ETDEWEB)

    Almansouri, Hani [Purdue University]; Venkatakrishnan, Singanallur V. [ORNL]; Clayton, Dwight A. [ORNL]; Polsky, Yarom [ORNL]; Bouman, Charles [Purdue University]; Santos-Villalobos, Hector J. [ORNL]

    2018-04-01

    One-sided non-destructive evaluation (NDE) is widely used to inspect materials, such as concrete structures in nuclear power plants (NPP). A widely used method for one-sided NDE is the synthetic aperture focusing technique (SAFT). The SAFT algorithm produces reasonable results when inspecting simple structures. However, for complex structures, such as heavily reinforced thick concrete structures, SAFT results in artifacts, and hence there is a need for a more sophisticated inversion technique. Model-based iterative reconstruction (MBIR) algorithms, which are typically equivalent to regularized inversion techniques, offer a powerful framework to incorporate complex models for the physics, detector miscalibrations and the materials being imaged to obtain high quality reconstructions. Previously, we proposed an ultrasonic MBIR method that significantly improves reconstruction quality compared to SAFT. However, the method made some simplifying assumptions in the propagation model and did not discuss ways to handle data obtained by raster scanning a system over a surface to inspect large regions. In this paper, we propose a novel MBIR algorithm that incorporates an anisotropic forward model and allows for the joint processing of data obtained from a system that raster scans a large surface. We demonstrate that the new MBIR method can produce dramatic improvements in reconstruction quality compared to SAFT and suppresses artifacts compared to the previously presented MBIR approach.
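
    MBIR is characterized above as regularized inversion; the toy sketch below shows that structure in its simplest form, with a random linear operator standing in for the ultrasound physics model and a Tikhonov prior in place of the paper's anisotropic model. Everything here is illustrative.

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy linear forward model y = A x + noise; MBIR-style recovery is
      # then regularized least squares solved iteratively.
      n, m = 64, 48
      A = rng.normal(size=(m, n))
      x_true = np.zeros(n); x_true[20:30] = 1.0
      y = A @ x_true + rng.normal(scale=0.05, size=m)

      lam = 0.1                                # prior strength
      step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe gradient step
      x = np.zeros(n)
      for _ in range(500):                     # descend the MAP cost
          grad = A.T @ (A @ x - y) + lam * x   # data fit + Tikhonov prior
          x -= step * grad
      print(np.round(x[18:32], 2))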

  6. Neural network and its application to CT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Nikravesh, M.; Kovscek, A.R.; Patzek, T.W. [Lawrence Berkeley National Lab., CA (United States)] [and others]

    1997-02-01

    We present an integrated approach to imaging the progress of air displacement by spontaneous imbibition of oil into sandstone. We combine Computerized Tomography (CT) scanning and neural network image processing. The main aspects of our approach are (I) visualization of the distribution of oil and air saturation by CT, (II) interpretation of CT scans using neural networks, and (III) reconstruction of 3-D images of oil saturation from the CT scans with a neural network model. Excellent agreement between the actual images and the neural network predictions is found.

  7. Short-Term Solar Irradiance Forecasting Model Based on Artificial Neural Network Using Statistical Feature Parameters

    Directory of Open Access Journals (Sweden)

    Hongshan Zhao

    2012-05-01

    Full Text Available Short-term solar irradiance forecasting (STSIF) is of great significance for the optimal operation and power prediction of grid-connected photovoltaic (PV) plants. However, STSIF is very complex to handle due to the random and nonlinear characteristics of solar irradiance under changeable weather conditions. An Artificial Neural Network (ANN) is suitable for STSIF modeling, and many research works on this topic have been presented, but the conciseness and robustness of the existing models still need to be improved. After discussing the relation between weather variations and irradiance, the characteristics of the statistical feature parameters of irradiance under different weather conditions are figured out. A novel ANN model using statistical feature parameters (ANN-SFP) for STSIF is proposed in this paper. The input vector is reconstructed with several statistical feature parameters of irradiance and the ambient temperature. Thus, sufficient information can be effectively extracted from relatively few inputs, and the model complexity is reduced. The model structure is determined by cross-validation (CV), and the Levenberg-Marquardt algorithm (LMA) is used for the network training. Simulations are carried out to validate and compare the proposed model with the conventional ANN model using historical data series (ANN-HDS), and the results indicate that the forecast accuracy is clearly improved under variable weather conditions.
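
    A sketch of the general ANN-SFP idea under stated assumptions: each day's irradiance series is compressed into a few statistical feature parameters and fed to a small neural network that forecasts the next day's mean level. The synthetic data, the feature list, and scikit-learn's MLPRegressor (in place of the paper's Levenberg-Marquardt-trained network) are all stand-ins.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(3)

      # Synthetic daily mean irradiance with day-to-day persistence (AR(1)),
      # plus intraday noise; purely stand-in data.
      n_days = 300
      level = np.empty(n_days)
      level[0] = 500.0
      for i in range(1, n_days):
          level[i] = 0.8 * level[i - 1] + 0.2 * 500.0 + rng.normal(0, 40)
      days = [lvl + rng.normal(0, 30, 48) for lvl in level]

      def features(series):
          # statistical feature parameters: few inputs, not the raw series
          return [series.mean(), series.std(), series.max(), series.min()]

      X = np.array([features(d) for d in days[:-1]])
      y = level[1:]                      # forecast the next day's mean level

      model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000,
                           random_state=0)
      model.fit(X[:250], y[:250])
      print(f"R^2 on held-out days: {model.score(X[250:], y[250:]):.2f}")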

  8. 3D Surface Reconstruction for Lower Limb Prosthetic Model using Radon Transform

    Science.gov (United States)

    Sobani, S. S. Mohd; Mahmood, N. H.; Zakaria, N. A.; Razak, M. A. Abdul

    2018-03-01

    This paper describes an approach to realizing the three-dimensional surfaces of objects with cylinder-based shapes, presenting the techniques adopted and the strategy developed for non-rigid three-dimensional surface reconstruction of an object from uncalibrated two-dimensional image sequences using a multiple-view digital camera and turntable setup. The surface of an object is reconstructed based on the concept of tomography, with the aid of several digital image processing algorithms performed on the two-dimensional images captured by a digital camera in thirty-six different projections, and the three-dimensional structure of the surface is analysed. Four different objects are used as experimental models in the reconstructions, and each object is placed on a manually rotated turntable. The results show that the proposed method successfully reconstructs the three-dimensional surfaces of the objects and is practicable. The shape and size of the reconstructed three-dimensional objects are recognizable and distinguishable. The reconstructions of the objects involved in the test are strengthened by the analysis, where the maximum percent error obtained from the computation is approximately 1.4% for the height, whilst 4.0%, 4.79% and 4.7% for the diameters at three specific heights of the objects.
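
    The tomography concept used here can be illustrated with scikit-image's Radon transform: thirty-six projections of a standard phantom cross-section are simulated (matching the paper's thirty-six views) and the slice is recovered by filtered back-projection. The phantom and projection geometry are stand-ins for the camera/turntable setup.

      import numpy as np
      from skimage.data import shepp_logan_phantom
      from skimage.transform import radon, iradon, rescale

      image = rescale(shepp_logan_phantom(), 0.5)    # stand-in cross-section
      theta = np.linspace(0.0, 180.0, 36, endpoint=False)  # 36 projections

      sinogram = radon(image, theta=theta)           # simulated projections
      recon = iradon(sinogram, theta=theta, filter_name="ramp")

      print(f"mean reconstruction error: {np.abs(recon - image).mean():.4f}")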

  9. Model-based image reconstruction for four-dimensional PET

    International Nuclear Information System (INIS)

    Li Tianfang; Thorndyke, Brian; Schreibmann, Eduard; Yang Yong; Xing Lei

    2006-01-01

    Positron emission tomography (PET) is useful in diagnosis and radiation treatment planning for a variety of cancers. For patients with cancers in the thoracic or upper abdominal region, respiratory motion produces large distortions in the tumor shape and size, affecting the accuracy of both diagnosis and treatment. Four-dimensional (4D) (gated) PET aims to reduce the motion artifacts and to provide accurate measurement of the tumor volume and the tracer concentration. A major issue in 4D PET is the lack of statistics. Since the collected photons are divided into several frames in the 4D PET scan, the quality of each reconstructed frame degrades as the number of frames increases. The increased noise in each frame heavily degrades the quantitative accuracy of the PET imaging. In this work, we propose a method to enhance the performance of 4D PET by developing a new technique of 4D PET reconstruction with incorporation of an organ motion model derived from 4D-CT images. The method is based on the well-known maximum-likelihood expectation-maximization (ML-EM) algorithm. During the forward- and backward-projection processes in the ML-EM iterations, all projection data acquired at different phases are combined together to update the emission map with the aid of the deformable model; the statistics are therefore greatly improved. The proposed algorithm was first evaluated with computer simulations using a mathematical dynamic phantom. An experiment with a moving physical phantom was then carried out to demonstrate the accuracy of the proposed method and the increase in signal-to-noise ratio over three-dimensional PET. Finally, the 4D PET reconstruction was applied to a patient case
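
    The reconstruction builds on the standard ML-EM update, whose single-frame form is sketched below on a toy emission problem (random system matrix, Poisson counts). The paper's 4D extension, which couples all gated frames through the deformable motion model inside the forward and backward projections, is not reproduced.

      import numpy as np

      rng = np.random.default_rng(5)

      # Toy emission problem: 30 voxel activities, 20 detector bins.
      A = rng.random((20, 30))           # system (projection) matrix
      x_true = rng.random(30) * 10
      y = rng.poisson(A @ x_true)        # Poisson measurement

      x = np.ones(30)                    # ML-EM: multiplicative updates
      sens = A.sum(axis=0)               # sensitivity image, A^T 1
      for _ in range(100):
          ratio = y / np.maximum(A @ x, 1e-12)
          x *= (A.T @ ratio) / sens
      print(np.round(x[:5], 2), np.round(x_true[:5], 2))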

  10. Quality Analysis on 3D Building Models Reconstructed from UAV Imagery

    Science.gov (United States)

    Jarzabek-Rychard, M.; Karpina, M.

    2016-06-01

    Recent developments in UAV technology and structure from motion techniques have led to UAVs becoming standard platforms for 3D data collection. Because of their flexibility and their ability to reach inaccessible urban areas, drones appear to be an optimal solution for urban applications. Building reconstruction from data collected with UAVs has important potential to reduce the labour cost of rapidly updating already reconstructed 3D cities. However, especially for updating existing scenes derived from different sensors (e.g. airborne laser scanning), a proper quality assessment is necessary. The objective of this paper is thus to evaluate the potential of UAV imagery as an information source for automatic 3D building modeling at LOD2. The investigation process is conducted threefold: (1) comparing the generated SfM point cloud to ALS data; (2) computing internal consistency measures of the reconstruction process; (3) analysing the deviation of check points identified on building roofs and measured with a tacheometer. In order to gain deep insight into the modeling performance, various quality indicators are computed and analysed. The assessment performed against the ground truth shows that the building models acquired with UAV photogrammetry have an accuracy of less than 18 cm for the planimetric position and about 15 cm for the height component.

  12. Complex networks under dynamic repair model

    Science.gov (United States)

    Chaoqi, Fu; Ying, Wang; Kun, Zhao; Yangjun, Gao

    2018-01-01

    Invulnerability is not the only factor of importance when considering the security of complex networks. It is also critical to have an effective and reasonable repair strategy. Existing research on network repair is confined to the static model. The dynamic model makes better use of the redundant capacity of repaired nodes and repairs the damaged network more efficiently than the static model; however, the dynamic repair model is complex and highly variable. In this paper, we construct a dynamic repair model and systematically describe the energy-transfer relationships between nodes in the repair process of the failed network. Nodes are divided into three types, corresponding to three structures. We find that the strong coupling structure is responsible for secondary failure of the repaired nodes and propose an algorithm that can select the most suitable targets (nodes or links) to repair the failed network with minimal cost. Two types of repair strategies are identified, with different effects under the two energy-transfer rules. The research results enable a more flexible approach to network repair.

  13. Experiments in Reconstructing Twentieth-Century Sea Levels

    Science.gov (United States)

    Ray, Richard D.; Douglas, Bruce C.

    2011-01-01

    One approach to reconstructing historical sea level from the relatively sparse tide-gauge network is to employ Empirical Orthogonal Functions (EOFs) as interpolatory spatial basis functions. The EOFs are determined from independent global data, generally sea-surface heights from either satellite altimetry or a numerical ocean model. The problem is revisited here for sea level since 1900. A new approach to handling the tide-gauge datum problem by direct solution offers possible advantages over the method of integrating sea-level differences, with the potential of eventually adjusting datums into the global terrestrial reference frame. The resulting time series of global mean sea levels appears fairly insensitive to the adopted set of EOFs. In contrast, charts of regional sea level anomalies and trends are very sensitive to the adopted set of EOFs, especially for the sparser network of gauges in the early 20th century. The reconstructions appear especially suspect before 1950 in the tropical Pacific. While this limits some applications of the sea-level reconstructions, the sensitivity does appear adequately captured by formal uncertainties. All our solutions show regional trends over the past five decades to be fairly uniform throughout the global ocean, in contrast to trends observed over the shorter altimeter era. Consistent with several previous estimates, the global sea-level rise since 1900 is 1.70 +/- 0.26 mm/yr. The global trend since 1995 exceeds 3 mm/yr which is consistent with altimeter measurements, but this large trend was possibly also reached between 1935 and 1950.
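
    The core numerical step described here, fitting the amplitudes of a few spatial basis functions to a sparse gauge network and evaluating them everywhere, reduces to a small least-squares problem. The sketch below uses random stand-in EOFs, an invented gauge layout, and synthetic observations, purely for illustration.

      import numpy as np

      rng = np.random.default_rng(11)

      n_grid, n_eof, n_gauges = 500, 4, 30
      E = rng.normal(size=(n_grid, n_eof))          # stand-in EOFs
      alpha_true = np.array([3.0, -1.0, 0.5, 0.2])  # true amplitudes
      field = E @ alpha_true                        # full sea-level field

      gauges = rng.choice(n_grid, n_gauges, replace=False)
      obs = field[gauges] + rng.normal(scale=0.1, size=n_gauges)

      # Fit EOF amplitudes to the sparse gauge observations, then
      # reconstruct the field at every grid point.
      alpha, *_ = np.linalg.lstsq(E[gauges], obs, rcond=None)
      recon = E @ alpha
      print(f"mean field error: {np.abs(recon - field).mean():.4f}")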

  14. Adaptive-network models of collective dynamics

    Science.gov (United States)

    Zschaler, G.

    2012-09-01

    Complex systems can often be modelled as networks, in which their basic units are represented by abstract nodes and the interactions among them by abstract links. This network of interactions is the key to understanding emergent collective phenomena in such systems. In most cases, it is an adaptive network, which is defined by a feedback loop between the local dynamics of the individual units and the dynamical changes of the network structure itself. This feedback loop gives rise to many novel phenomena. Adaptive networks are a promising concept for the investigation of collective phenomena in different systems. However, they also present a challenge to existing modelling approaches and analytical descriptions due to the tight coupling between local and topological degrees of freedom. In this work, which is essentially my PhD thesis, I present a simple rule-based framework for the investigation of adaptive networks, using which a wide range of collective phenomena can be modelled and analysed from a common perspective. In this framework, a microscopic model is defined by the local interaction rules of small network motifs, which can be implemented in stochastic simulations straightforwardly. Moreover, an approximate emergent-level description in terms of macroscopic variables can be derived from the microscopic rules, which we use to analyse the system's collective and long-term behaviour by applying tools from dynamical systems theory. We discuss three adaptive-network models for different collective phenomena within our common framework. First, we propose a novel approach to collective motion in insect swarms, in which we consider the insects' adaptive interaction network instead of explicitly tracking their positions and velocities. We capture the experimentally observed onset of collective motion qualitatively in terms of a bifurcation in this non-spatial model. We find that three-body interactions are an essential ingredient for collective motion to emerge

  15. Linear approximation model network and its formation via ...

    Indian Academy of Sciences (India)

    To overcome the deficiency of `local model network' (LMN) techniques, an alternative `linear approximation model' (LAM) network approach is proposed. Such a network models a nonlinear or practical system with multiple linear models fitted along operating trajectories, where individual models are simply networked ...

  16. Reconstructing an economic space from a market metric

    OpenAIRE

    Mendes, R. Vilela; Araújo, Tanya; Louçã, Francisco

    2002-01-01

    Using a metric related to the returns correlation, a method is proposed to reconstruct an economic space from market data. A reduced subspace, associated with the systematic structure of the market, is identified and its dimension related to the number of terms in factor models. Examples were worked out involving sets of companies from the DJIA and S&P500 indexes. Having a metric defined in the space of companies, network topology coefficients may be used to extract further information from ...
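
    The abstract does not give the metric's exact form; a standard choice for a returns-correlation metric is d(i,j) = sqrt(2(1 - rho(i,j))), and the sketch below computes it on synthetic returns and extracts a minimum spanning tree as one common topology summary. Both the metric choice and the data are assumptions of this illustration.

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(2)

      # Synthetic daily returns for 6 hypothetical companies (rows = days).
      returns = rng.normal(size=(250, 6))
      rho = np.corrcoef(returns, rowvar=False)
      d = np.sqrt(np.maximum(2.0 * (1.0 - rho), 0.0))  # correlation metric

      G = nx.Graph()
      n = d.shape[0]
      for i in range(n):
          for j in range(i + 1, n):
              G.add_edge(i, j, weight=d[i, j])

      mst = nx.minimum_spanning_tree(G)    # backbone of the "economic space"
      print(sorted(mst.edges()))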

  17. Modelling the structure of complex networks

    DEFF Research Database (Denmark)

    Herlau, Tue

    networks have been independently studied as mathematical objects in their own right. As such, there has been both an increased demand for statistical methods for complex networks and a quickly growing mathematical literature on the subject. In this dissertation we explore aspects of modelling complex ... The next chapters will treat some of the various symmetries, representer theorems and probabilistic structures often deployed in the modelling of complex networks, the construction of sampling methods and various network models. The introductory chapters will serve to provide context for the included written ...

  18. Spatial Epidemic Modelling in Social Networks

    Science.gov (United States)

    Simoes, Joana Margarida

    2005-06-01

    The spread of infectious diseases is highly influenced by the structure of the underlying social network. The target of this study is not the network of acquaintances, but the social mobility network: the daily movement of people between locations within regions. It has already been shown that this kind of network exhibits small-world characteristics. The model developed is agent-based (ABM) and comprises a movement model and an infection model. In the movement model, some assumptions are made about its structure, and the daily movement is decomposed into four types: neighborhood, intra-region, inter-region and random. The model is Geographical Information Systems (GIS) based, and uses real data to define its geometry. Because it is a vector model, some optimization techniques were used to increase its efficiency.

  19. Modeling of fluctuating reaction networks

    International Nuclear Information System (INIS)

    Lipshtat, A.; Biham, O.

    2004-01-01

    Full Text: Various dynamical systems are organized as reaction networks, where the population size of one component affects the populations of all its neighbors. Such networks can be found in interstellar surface chemistry, cell biology, thin film growth and other systems. In cases where the populations of reactive species are large, the network can be modeled by rate equations, which provide all reaction rates within the mean field approximation. However, in small systems that are partitioned into sub-micron size, these populations strongly fluctuate. Under these conditions rate equations fail and the master equation is needed for modeling these reactions. However, the number of equations in the master equation grows exponentially with the number of reactive species, severely limiting its feasibility for complex networks. Here we present a method which dramatically reduces the number of equations, thus enabling the incorporation of the master equation in complex reaction networks. The method is exemplified in the context of reaction networks on dust grains. Its applicability for genetic networks will be discussed. 1. Efficient simulations of gas-grain chemistry in interstellar clouds. Azi Lipshtat and Ofer Biham, Phys. Rev. Lett. 93 (2004), 170601. 2. Modeling of negative autoregulated genetic networks in single cells. Azi Lipshtat, Hagai B. Perets, Nathalie Q. Balaban and Ofer Biham, Gene: evolutionary genomics (2004), In press.

  20. NASAL-Geom, a free upper respiratory tract 3D model reconstruction software

    Science.gov (United States)

    Cercos-Pita, J. L.; Cal, I. R.; Duque, D.; de Moreta, G. Sanjuán

    2018-02-01

    The tool NASAL-Geom, a free upper respiratory tract 3D model reconstruction software package, is described here. As free software, researchers and professionals are welcome to obtain, analyze, improve and redistribute it, potentially increasing the rate of development while reducing ethical conflicts regarding medical applications that cannot be analyzed. Additionally, the tool has been optimized for the specific task of reading upper respiratory tract Computerized Tomography scans and producing 3D geometries. The reconstruction process is divided into three stages: preprocessing (including metal artifact reduction, noise removal, and feature enhancement), segmentation (where the nasal cavity is identified), and 3D geometry reconstruction. The tool has been automated (i.e., no human intervention is required), a critical feature for avoiding bias in the reconstructed geometries. The applied methodology is discussed, as well as the program's robustness and precision.

  1. Deterministic ripple-spreading model for complex networks.

    Science.gov (United States)

    Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S; Hines, Evor L; Di Paolo, Ezequiel

    2011-04-01

    This paper proposes a deterministic complex network model, which is inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, where it is often observed that the influence of a few local events spreads out through nodes and then largely determines the final network topology. Obviously, this dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic models, i.e., with a given input, they cannot output a unique topology. In contrast, the proposed ripple-spreading model can uniquely determine the final network topology, while the stochastic feature of complex networks is captured by randomly initializing the ripple-spreading related parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading related parameters to precisely describe a network topology, which is more memory efficient when compared with a traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has very good potential for both extensions and applications.
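
    An intentionally simplified reading of the idea, not the paper's actual procedure: node locations and all ripple-related parameters come from a seeded random generator, after which link formation (here, each node linking to the first few nodes its expanding ripple reaches) is fully deterministic, so the same input always yields the same topology.

      import numpy as np

      rng = np.random.default_rng(123)   # random initialization of parameters
      n, k = 30, 3
      xy = rng.random((n, 2))            # node locations

      # A ripple expanding from node i reaches other nodes in order of
      # distance; link i to the first k nodes its ripple hits.
      edges = set()
      for i in range(n):
          d = np.linalg.norm(xy - xy[i], axis=1)
          for j in np.argsort(d)[1:k + 1]:
              j = int(j)
              edges.add((min(i, j), max(i, j)))
      print(len(edges), "edges")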

  2. Network model of security system

    Directory of Open Access Journals (Sweden)

    Adamczyk Piotr

    2016-01-01

    Full Text Available The article presents the concept of building a network security model and its application in the process of risk analysis. It indicates the possibility of a new definition of the role of network models in safety analysis. Special attention was paid to the development of an algorithm describing the process of identifying the assets, vulnerabilities and threats in a given context. The aim of the article is to present how this algorithm reduces the complexity of the problem by eliminating from the base model those components that have no links with other components; as a result, it was possible to build a real network model corresponding to reality.

  3. Network-based H.264/AVC whole frame loss visibility model and frame dropping methods.

    Science.gov (United States)

    Chang, Yueh-Lun; Lin, Ting-Lan; Cosman, Pamela C

    2012-08-01

    We examine the visual effect of whole frame loss by different decoders. Whole frame losses are introduced in H.264/AVC compressed videos which are then decoded by two different decoders with different common concealment effects: frame copy and frame interpolation. The videos are seen by human observers who respond to each glitch they spot. We found that about 39% of whole frame losses of B frames are not observed by any of the subjects, and over 58% of the B frame losses are observed by 20% or fewer of the subjects. Using simple predictive features which can be calculated inside a network node with no access to the original video and no pixel level reconstruction of the frame, we developed models which can predict the visibility of whole B frame losses. The models are then used in a router to predict the visual impact of a frame loss and perform intelligent frame dropping to relieve network congestion. Dropping frames based on their visual scores proves superior to random dropping of B frames.

  4. Neural network modeling of emotion

    Science.gov (United States)

    Levine, Daniel S.

    2007-03-01

    This article reviews the history and development of computational neural network modeling of cognitive and behavioral processes that involve emotion. The exposition starts with models of classical conditioning dating from the early 1970s. Then it proceeds toward models of interactions between emotion and attention. Then models of emotional influences on decision making are reviewed, including some speculative (not yet simulated) models of the evolution of decision rules. Through the late 1980s, the neural networks developed to model emotional processes were mainly embodiments of significant functional principles motivated by psychological data. In the last two decades, network models of these processes have become much more detailed in their incorporation of known physiological properties of specific brain regions, while preserving many of the psychological principles from the earlier models. Most network models of emotional processes so far have dealt with positive and negative emotion in general, rather than specific emotions such as fear, joy, sadness, and anger. But a later section of this article reviews a few models relevant to specific emotions: one family of models of auditory fear conditioning in rats, and one model of induced pleasure enhancing creativity in humans. Then models of emotional disorders are reviewed. The article concludes with philosophical statements about the essential contributions of emotion to intelligent behavior and the importance of quantitative theories and models to the interdisciplinary enterprise of understanding the interactions of emotion, cognition, and behavior.

  5. Scapular flap for maxillectomy defect reconstruction and preliminary results using three-dimensional modeling.

    Science.gov (United States)

    Modest, Mara C; Moore, Eric J; Abel, Kathryn M Van; Janus, Jeffrey R; Sims, John R; Price, Daniel L; Olsen, Kerry D

    2017-01-01

    Discuss current techniques utilizing the scapular tip and subscapular system for free tissue reconstruction of maxillary defects and highlight the impact of medical modeling on these techniques with a case series. Case review series at an academic hospital of patients undergoing maxillectomy + thoracodorsal scapula composite free flap (TSCF) reconstruction. Three-dimensional (3D) models were used in the last five cases. 3D modeling, surgical, functional, and aesthetic outcomes were reviewed. Nine patients underwent TSCF reconstruction for maxillectomy defects (median age = 43 years; range, 19-66 years). Five patients (55%) had a total maxillectomy (TM) ± orbital exenteration, whereas four patients (44%) underwent subtotal palatal maxillectomy. For TM, the contralateral scapula tip was positioned with its natural concavity recreating facial contour. The laterally based vascular pedicle was ideally positioned for facial vessel anastomosis. For subtotal-palatal defect, an ipsilateral flap was harvested, but inset with the convex surface facing superiorly. Once 3D models were available from our anatomic modeling lab, they were used for intraoperative planning of the last five patients. Use of the model intraoperatively improved efficiency and allowed for better contouring/plating of the TSCF. At last follow-up, all patients had good functional outcomes. Aesthetic outcomes were more successful in patients where 3D-modeling was used (100% vs. 50%). There were no flap failures. Median follow-up >1 month was 5.2 months (range, 1-32.7 months). Reconstruction of maxillectomy defects is complex. Successful aesthetic and functional outcomes are critical to patient satisfaction. The TSCF is a versatile flap. Based on defect type, choosing laterality is crucial for proper vessel orientation and outcomes. The use of internally produced 3D models has helped refine intraoperative contouring and flap inset, leading to more successful outcomes. 4. Laryngoscope, 127:E8-E14

  6. Fast implementations of reconstruction-based scatter compensation in fully 3D SPECT image reconstruction

    International Nuclear Information System (INIS)

    Kadrmas, Dan J.; Karimi, Seemeen S.; Frey, Eric C.; Tsui, Benjamin M.W.

    1998-01-01

    Accurate scatter compensation in SPECT can be performed by modelling the scatter response function during the reconstruction process. This method is called reconstruction-based scatter compensation (RBSC). It has been shown that RBSC has a number of advantages over other methods of compensating for scatter, but using RBSC for fully 3D compensation has resulted in prohibitively long reconstruction times. In this work we propose two new methods that can be used in conjunction with existing methods to achieve marked reductions in RBSC reconstruction times. The first method, coarse-grid scatter modelling, significantly accelerates the scatter model by exploiting the fact that scatter is dominated by low-frequency information. The second method, intermittent RBSC, further accelerates the reconstruction process by limiting the number of iterations during which scatter is modelled. The fast implementations were evaluated using a Monte Carlo simulated experiment of the 3D MCAT phantom with 99mTc tracer, and also using experimentally acquired data with 201Tl tracer. Results indicated that these fast methods can reconstruct, with fully 3D compensation, images very similar to those obtained using standard RBSC methods, and in reconstruction times that are an order of magnitude shorter. Using these methods, fully 3D iterative reconstruction with RBSC can be performed well within the realm of clinically realistic times (under 10 minutes for 64x64x24 image reconstruction). (author)

  7. AUTOMATED RECONSTRUCTION OF WALLS FROM AIRBORNE LIDAR DATA FOR COMPLETE 3D BUILDING MODELLING

    Directory of Open Access Journals (Sweden)

    Y. He

    2012-07-01

    Full Text Available Automated 3D building model generation continues to attract research interest in photogrammetry and computer vision. Airborne Light Detection and Ranging (LIDAR) data with increasing point density and accuracy has been recognized as a valuable source for automated 3D building reconstruction. While considerable achievements have been made in roof extraction, limited research has been carried out in modelling and reconstruction of walls, which constitute important components of a full building model. Low point density and irregular point distribution of LIDAR observations on vertical walls render this task complex. This paper develops a novel approach for wall reconstruction from airborne LIDAR data. The developed method commences with point cloud segmentation using a region growing approach. Seed points for planar segments are selected through principal component analysis, and points in the neighbourhood are collected and examined to form planar segments. Afterwards, segment-based classification is performed to identify roofs, walls and planar ground surfaces. For walls with sparse LIDAR observations, a search is conducted in the neighbourhood of each individual roof segment to collect wall points, and the walls are then reconstructed using geometrical and topological constraints. Finally, walls which were not illuminated by the LIDAR sensor are determined via both reconstructed roof data and neighbouring walls. This leads to the generation of topologically consistent, geometrically accurate and complete 3D building models. Experiments have been conducted in two test sites in the Netherlands and Australia to evaluate the performance of the proposed method. Results show that planar segments can be reliably extracted in the two reported test sites, which have different point densities, and the building walls can be correctly reconstructed if the walls are illuminated by the LIDAR sensor.

  8. Image quality in children with low-radiation chest CT using adaptive statistical iterative reconstruction and model-based iterative reconstruction.

    Directory of Open Access Journals (Sweden)

    Jihang Sun

    Full Text Available OBJECTIVE: To evaluate noise reduction and image quality improvement in low-radiation-dose chest CT images in children using adaptive statistical iterative reconstruction (ASIR) and a full model-based iterative reconstruction (MBIR) algorithm. METHODS: Forty-five children (age ranging from 28 days to 6 years, median of 1.8 years) who received low-dose chest CT scans were included. An age-dependent noise index (NI) was used for acquisition. Images were retrospectively reconstructed using three methods: MBIR, a blend of 60% ASIR and 40% conventional filtered back-projection (FBP), and FBP. The subjective quality of the images was independently evaluated by two radiologists. Objective noise in the left ventricle (LV), muscle, fat, descending aorta and lung field was measured at the slice with the largest cross-sectional area of the LV, with the region of interest about one fourth to half of the area of the descending aorta. The optimized signal-to-noise ratio (SNR) was calculated. RESULTS: In terms of subjective quality, MBIR images were significantly better than ASIR and FBP in image noise and visibility of tiny structures, but blurred edges were observed. In terms of objective noise, MBIR and ASIR reconstruction decreased the image noise by 55.2% and 31.8%, respectively, for the LV compared with FBP. Similarly, MBIR and ASIR reconstruction increased the SNR by 124.0% and 46.2%, respectively, compared with FBP. CONCLUSION: Compared with FBP and ASIR, overall image quality and noise reduction were significantly improved by MBIR. MBIR could reconstruct eligible chest CT images in children at a lower radiation dose.

  9. How to model wireless mesh networks topology

    International Nuclear Information System (INIS)

    Sanni, M L; Hashim, A A; Anwar, F; Ali, S; Ahmed, G S M

    2013-01-01

    The specification of a network connectivity model or topology is the beginning of design and analysis in computer network research. A Wireless Mesh Network is an autonomic network that is dynamically self-organised and self-configured, with the mesh nodes establishing automatic connectivity with adjacent nodes in the relay network of wireless backbone routers. Research in Wireless Mesh Networks ranges from node deployment to internetworking issues with sensor, Internet and cellular networks. This research requires modelling of relationships and interactions among nodes, including technical characteristics of the links, while satisfying the architectural requirements of the physical network. However, the existing topology generators model geographic topologies which constitute different architectures, and thus may not be suitable in Wireless Mesh Network scenarios. The existing methods of topology generation are explored and analysed, and parameters for their characterisation are identified. Furthermore, an algorithm for the design of Wireless Mesh Network topologies based on a square grid model is proposed in this paper. The performance of the generated topology is also evaluated. This research is particularly important for generating close-to-real topologies, ensuring the relevance of designs to the intended network and the validity of results obtained in Wireless Mesh Network research.

  10. Muon reconstruction with a geometrical model in JUNO

    Science.gov (United States)

    Genster, C.; Schever, M.; Ludhova, L.; Soiron, M.; Stahl, A.; Wiebusch, C.

    2018-03-01

    The Jiangmen Underground Neutrino Observatory (JUNO) is a 20 kton liquid scintillator detector currently under construction near Kaiping in China. The physics program focuses on the determination of the neutrino mass hierarchy with reactor anti-neutrinos. For this purpose, JUNO is located 650 m underground with a distance of 53 km to two nuclear power plants. As a result, it is exposed to a muon flux that requires a precise muon reconstruction to make a veto of cosmogenic backgrounds viable. Established muon tracking algorithms use time residuals to a track hypothesis. We developed an alternative muon tracking algorithm that utilizes the geometrical shape of the fastest light. It models the full shape of the first, direct light produced along the muon track. From the intersection with the spherical PMT array, the track parameters are extracted with a likelihood fit. The algorithm finds a selection of PMTs based on their first hit times and charges. Subsequently, it fits on timing information only. On a sample of through-going muons with a full simulation of the readout electronics, we report a spatial resolution of 20 cm in distance from the detector's center and an angular resolution of 1.6° over the whole detector. Additionally, a dead time estimation is performed to measure the impact of the muon veto. Including the step of waveform reconstruction on top of the track reconstruction, a loss in exposure of only 4% can be achieved compared to the case of a perfect tracking algorithm. When including only the PMT time resolution, but no further electronics simulation and waveform reconstruction, the exposure loss is only 1%.

  11. Mid- and long-term runoff predictions by an improved phase-space reconstruction model

    International Nuclear Information System (INIS)

    Hong, Mei; Wang, Dong; Wang, Yuankun; Zeng, Xiankui; Ge, Shanshan; Yan, Hengqian; Singh, Vijay P.

    2016-01-01

    In recent years, the phase-space reconstruction method has often been used for mid- and long-term runoff predictions. However, the traditional phase-space reconstruction method still needs to be improved. Using a genetic algorithm to improve the phase-space reconstruction method, a new nonlinear model of monthly runoff is constructed. The new model does not rely heavily on embedding dimensions. Recognizing that the rainfall–runoff process is complex and affected by a number of factors, more variables (e.g. temperature and rainfall) are incorporated in the model. In order to detect the possible presence of chaos in the runoff dynamics, the chaotic characteristics of the model are also analyzed, which shows the model can represent the nonlinear and chaotic characteristics of the runoff. The model is tested for its forecasting performance in four types of experiments using data from six hydrological stations on the Yellow River and the Yangtze River. Results show that the medium- and long-term runoff is satisfactorily forecasted at the hydrological stations. Not only is the forecast trend accurate, but the mean absolute percentage error is also no more than 15%. Moreover, the forecast results for wet years and dry years are both good, which means that the improved model can overcome the traditional "wet years and dry years predictability barrier," to some extent. The model forecasts for different regions are all good, showing the universality of the approach. Compared with selected conceptual and empirical methods, the model exhibits greater reliability and stability in long-term runoff prediction. Our study provides a new way of thinking for research on the association between monthly runoff and other hydrological factors, and also provides a new method for the prediction of monthly runoff. - Highlights: • The improved phase-space reconstruction model of monthly runoff is established. • Two variables (temperature and rainfall) are incorporated
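
    The classical building block behind phase-space reconstruction is time-delay embedding, sketched below on a toy seasonal series; the paper's genetic-algorithm tuning and the extra temperature/rainfall variables are not reproduced.

      import numpy as np

      def delay_embed(x, dim, tau):
          # rows are points (x[t], x[t+tau], ..., x[t+(dim-1)*tau])
          n = len(x) - (dim - 1) * tau
          return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

      # Toy monthly-runoff-like series: seasonal cycle plus noise.
      rng = np.random.default_rng(0)
      t = np.arange(360)
      runoff = 100 + 40 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 5, 360)

      X = delay_embed(runoff, dim=3, tau=2)
      print(X.shape)   # each row is a point in the reconstructed phase space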

  13. Reconstruction of the El Nino attractor with neural networks

    International Nuclear Information System (INIS)

    Grieger, B.; Latif, M.

    1993-01-01

    Based on a combined data set of sea surface temperature, zonal surface wind stress and upper ocean heat content, the dynamics of the El Nino phenomenon are investigated. In a reduced phase space spanned by the first four EOFs, two different stochastic models are estimated from the data. A nonlinear model represented by a simulated neural network is compared with a linear model obtained with the Principal Oscillation Pattern (POP) analysis. While the linear model is limited to damped oscillations onto a fixed-point attractor, the nonlinear model recovers a limit cycle attractor. This indicates that the real system is located above the bifurcation point in parameter space, supporting self-sustained oscillations. The results are discussed with respect to consistency with current theory. (orig.)

  14. 3-D Reservoir and Stochastic Fracture Network Modeling for Enhanced Oil Recovery, Circle Ridge Phosphoria/Tensleep Reservoir, Wind River Reservation, Arapaho and Shoshone Tribes, Wyoming

    Energy Technology Data Exchange (ETDEWEB)

    La Pointe, Paul R.; Hermanson, Jan

    2002-09-09

    The goal of this project is to improve the recovery of oil from the Circle Ridge Oilfield, located on the Wind River Reservation in Wyoming, through an innovative integration of matrix characterization, structural reconstruction, and the characterization of the fracturing in the reservoir through the use of discrete fracture network models.

  15. 3D Volumetric Modeling and Microvascular Reconstruction of Irradiated Lumbosacral Defects After Oncologic Resection

    Directory of Open Access Journals (Sweden)

    Emilio Garcia-Tutor

    2016-12-01

    Full Text Available Background: Locoregional flaps are sufficient in most sacral reconstructions. However, large sacral defects due to malignancy necessitate a different reconstructive approach, with local flaps compromised by radiation and regional flaps inadequate for broad surface areas or substantial volume obliteration. In this report, we present our experience using free muscle transfer for volumetric reconstruction in such cases, and demonstrate 3D haptic models of the sacral defect to aid preoperative planning. Methods: Five consecutive patients with irradiated sacral defects secondary to oncologic resections were included, with surface areas ranging from 143 to 600 cm2. Latissimus dorsi-based free flap sacral reconstruction was performed in each case between 2005 and 2011. Where the superior gluteal artery was compromised, the subcostal artery was used as a recipient vessel. Microvascular technique, complications and outcomes are reported. The use of volumetric analysis and 3D printing is also demonstrated, with imaging data converted to 3D images suitable for 3D printing with Osirix software (Pixmeo, Geneva, Switzerland). An office-based, desktop 3D printer was used to print 3D models of sacral defects, used to demonstrate surface area and contour and to produce a volumetric print of the dead space needed for flap obliteration. Results: The clinical series of latissimus dorsi free flap reconstructions is presented, with successful transfer in all cases, and adequate soft-tissue cover and volume obliteration achieved. The original use of the subcostal artery as a recipient vessel was successfully achieved. All wounds healed uneventfully. 3D printing is also demonstrated as a useful tool for 3D evaluation of volume and dead space. Conclusion: Free flaps offer unique benefits in sacral reconstruction where local tissue is compromised by irradiation and tumor recurrence, and dead space requires accurate volumetric reconstruction. We describe for the first time the use of

  16. RECONSTRUCTION OF HUMAN LUNG MORPHOLOGY MODELS FROM MAGNETIC RESONANCE IMAGES

    Science.gov (United States)

    Reconstruction of Human Lung Morphology Models from Magnetic Resonance Images. T. B. Martonen (Experimental Toxicology Division, U.S. EPA, Research Triangle Park, NC 27709) and K. K. Isaacs (School of Public Health, University of North Carolina, Chapel Hill, NC 27514)

  17. 3D Fractal reconstruction of terrain profile data based on digital elevation model

    International Nuclear Information System (INIS)

    Huang, Y.M.; Chen, C.-J.

    2009-01-01

    Digital Elevation Models (DEMs) often make terrain reconstruction and data storage difficult because details cannot be acquired at higher resolution. If the original terrain of a DEM can be simulated such that geographical details are represented precisely while the data size is reduced, an effective reconstruction scheme becomes essential. This paper adopts two sets of real-world 3D terrain profile data and reduces them by random sampling, then reconstructs them through 3D fractal reconstruction. Meanwhile, the quantitative and qualitative differences generated by different reduction rates were evaluated statistically. The results show that, if the 3D fractal interpolation method is applied to DEM reconstruction, a higher reduction rate can be obtained for DEMs of larger data size than for those of smaller data size, under the assumption that the entire terrain structure is still maintained.
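    The 3D fractal interpolation scheme itself is not given in the abstract; as a hedged 1-D analogue, classic midpoint displacement shows how a sparsely sampled profile can be refined with self-similar detail (the function name and all parameters here are illustrative):

```python
import numpy as np

def midpoint_displace(profile, levels=4, roughness=0.5, seed=0):
    """1-D analogue of fractal terrain reconstruction: repeatedly insert
    midpoints perturbed by noise whose amplitude shrinks each level."""
    rng = np.random.default_rng(seed)
    z = np.asarray(profile, dtype=float)
    amp = np.ptp(z) * 0.25
    for _ in range(levels):
        mid = (z[:-1] + z[1:]) / 2 + rng.normal(0, amp, len(z) - 1)
        out = np.empty(2 * len(z) - 1)
        out[0::2], out[1::2] = z, mid      # interleave old points and midpoints
        z, amp = out, amp * roughness      # refine grid, shrink noise amplitude
    return z

sampled = [0.0, 5.0, 3.0, 8.0, 2.0]        # sparsely sampled elevations
print(len(midpoint_displace(sampled)))     # 65 points reconstructed from 5
```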

  18. A singular K-space model for fast reconstruction of magnetic resonance images from undersampled data.

    Science.gov (United States)

    Luo, Jianhua; Mou, Zhiying; Qin, Binjie; Li, Wanqing; Ogunbona, Philip; Robini, Marc C; Zhu, Yuemin

    2017-12-09

    Reconstructing magnetic resonance images from undersampled k-space data is a challenging problem. This paper introduces a novel method of image reconstruction from undersampled k-space data based on the concept of singularizing operators and a novel singular k-space model. Exploiting the sparsity of an image in k-space, the singular k-space model (SKM) is formulated in terms of the k-space functions of a singularizing operator. The singularizing operator is constructed by combining basic difference operators. An algorithm is developed to reliably estimate the model parameters from undersampled k-space data. The estimated parameters are then used to recover the missing k-space data through the model, subsequently achieving high-quality reconstruction of the image using the inverse Fourier transform. Experiments on physical phantom and real brain MR images have shown that the proposed SKM method consistently outperforms the popular total variation (TV) and the classical zero-filling (ZF) methods regardless of the undersampling rates, the noise levels, and the image structures. For the same objective quality of the reconstructed images, the proposed method requires much less k-space data than the TV method. The SKM method is thus an effective method for fast MRI reconstruction from undersampled k-space data. Graphical abstract: two real images and their counterparts sparsified by the singularizing operator.
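    For context, the zero-filling (ZF) baseline the SKM method is compared against amounts to inverse-transforming the retained k-space samples with the missing ones set to zero. A toy NumPy sketch, assuming a random undersampling mask (real acquisitions use structured sampling patterns):

```python
import numpy as np

def zero_filled_recon(image, keep_fraction=0.3, seed=0):
    """Undersample the k-space of an image at random and reconstruct by
    zero-filling, the classical ZF baseline."""
    k = np.fft.fftshift(np.fft.fft2(image))
    rng = np.random.default_rng(seed)
    mask = rng.random(k.shape) < keep_fraction      # random undersampling mask
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k * mask)))

phantom = np.zeros((64, 64))
phantom[16:48, 16:48] = 1.0                         # toy square phantom
recon = zero_filled_recon(phantom, keep_fraction=0.25)
print(float(np.mean((recon - phantom) ** 2)))       # reconstruction MSE
```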

  19. A Systems Biology Approach to Understanding Alcoholic Liver Disease Molecular Mechanism: The Development of Static and Dynamic Models.

    Science.gov (United States)

    Shafaghati, Leila; Razaghi-Moghadam, Zahra; Mohammadnejad, Javad

    2017-11-01

    Alcoholic liver disease (ALD) is a complex disease characterized by damage to the liver and is the consequence of excessive alcohol consumption over years. Since this disease is associated with several pathway failures, pathway reconstruction and network analysis are likely to make the molecular basis of the disease explicit. To this aim, in this paper, a network medicine approach was employed to integrate interactome (protein-protein interaction and signaling pathways) and transcriptome data to reconstruct both a static network of ALD and a dynamic model of it. Several data sources were exploited to assemble a set of ALD-associated genes, which was then used for network reconstruction. Moreover, comprehensive literature mining reveals that there are four signaling pathways with crosstalk (TLR4, NF-κB, MAPK and apoptosis) which play a major role in ALD. These four pathways were exploited to reconstruct a dynamic model of ALD. The results assure that these two models are consistent with a number of experimental observations. The static network of ALD and its dynamic model are the first models provided for ALD and offer potentially valuable information for researchers in this field.

  20. Modeling the interdependent network based on two-mode networks

    Science.gov (United States)

    An, Feng; Gao, Xiangyun; Guan, Jianhe; Huang, Shupei; Liu, Qian

    2017-10-01

    Among heterogeneous networks, there exist obvious and close interdependent linkages. Unlike existing research, which focuses primarily on theoretical models of physical interdependent networks, we propose a two-layer interdependent network model based on two-mode networks to explore interdependent features in reality. Specifically, we construct a two-layer interdependent loan network and develop several dependence feature indices. The model is verified to enable us to capture the loan dependence features of listed companies based on loan behaviors and shared shareholders. Taking the Chinese debit and credit market as a case study, the main conclusions are: (1) only a few listed companies shoulder the main capital transmission (20% of listed companies occupy almost 70% of the dependence degree). (2) Control of these key listed companies will be more effective in avoiding the spread of financial risks. (3) Identifying the companies with high betweenness centrality and controlling them could help monitor financial risk spreading. (4) The capital transmission channels between Chinese financial listed companies and Chinese non-financial listed companies are relatively strong. However, under greater pressure on the demand for capital transmission (70% of edges failed), the transmission channel constructed by debit and credit behavior will eventually collapse.

  1. Driver Injury Risk Variability in Finite Element Reconstructions of Crash Injury Research and Engineering Network (CIREN) Frontal Motor Vehicle Crashes.

    Science.gov (United States)

    Gaewsky, James P; Weaver, Ashley A; Koya, Bharath; Stitzel, Joel D

    2015-01-01

    A 3-phase real-world motor vehicle crash (MVC) reconstruction method was developed to analyze injury variability as a function of precrash occupant position for 2 full-frontal Crash Injury Research and Engineering Network (CIREN) cases. Phase I: A finite element (FE) simplified vehicle model (SVM) was developed and tuned to mimic the frontal crash characteristics of the CIREN case vehicle (Camry or Cobalt) using frontal New Car Assessment Program (NCAP) crash test data. Phase II: The Toyota HUman Model for Safety (THUMS) v4.01 was positioned in 120 precrash configurations per case within the SVM. Five occupant positioning variables were varied using a Latin hypercube design of experiments: seat track position, seat back angle, D-ring height, steering column angle, and steering column telescoping position. An additional baseline simulation was performed that aimed to match the precrash occupant position documented in CIREN for each case. Phase III: FE simulations were then performed using kinematic boundary conditions from each vehicle's event data recorder (EDR). HIC15, combined thoracic index (CTI), femur forces, and strain-based injury metrics in the lung and lumbar vertebrae were evaluated to predict injury. Tuning the SVM to specific vehicle models resulted in close matches between simulated and test injury metric data, allowing the tuned SVM to be used in each case reconstruction with EDR-derived boundary conditions. Simulations with the most rearward seats and reclined seat backs had the greatest HIC15, head injury risk, CTI, and chest injury risk. Calculated injury risks for the head, chest, and femur closely correlated to the CIREN occupant injury patterns. CTI in the Camry case yielded a 54% probability of Abbreviated Injury Scale (AIS) 2+ chest injury in the baseline case simulation and ranged from 34 to 88% (mean = 61%) risk in the least and most dangerous occupant positions. The greater than 50% probability was consistent with the case occupant's AIS 2
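    Phase II's five-variable Latin hypercube design can be reproduced in outline with SciPy's quasi-Monte Carlo module; the variable bounds below are placeholders rather than the study's actual seat and steering-column ranges:

```python
import numpy as np
from scipy.stats import qmc

# Five occupant-positioning variables; bounds are illustrative placeholders.
names = ["seat_track", "seat_back_angle", "d_ring_height",
         "column_angle", "column_telescope"]
lower = np.array([0.0, 20.0, 0.0, 15.0, 0.0])
upper = np.array([200.0, 35.0, 80.0, 35.0, 50.0])

sampler = qmc.LatinHypercube(d=5, seed=42)
unit = sampler.random(n=120)               # 120 configurations per case
designs = qmc.scale(unit, lower, upper)    # map unit cube to physical ranges
print(dict(zip(names, np.round(designs[0], 1))))
```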

  2. Model-based respiratory motion compensation for emission tomography image reconstruction

    International Nuclear Information System (INIS)

    Reyes, M; Malandain, G; Koulibaly, P M; Gonzalez-Ballester, M A; Darcourt, J

    2007-01-01

    In emission tomography imaging, respiratory motion causes artifacts in reconstructed images of the lungs and heart, which lead to misinterpretations, imprecise diagnosis, impaired fusion with other modalities, etc. Solutions such as respiratory gating, correlated dynamic PET techniques, list-mode data based techniques and others have been tested; these improve the spatial activity distribution in lung lesions but have the disadvantages of requiring additional instrumentation or discarding part of the projection data used for reconstruction. The objective of this study is to incorporate respiratory motion compensation directly into the image reconstruction process, without any additional acquisition protocol considerations. To this end, we propose an extension to the maximum likelihood expectation maximization (MLEM) algorithm that includes a respiratory motion model, which takes into account the displacements and volume deformations produced by respiratory motion during the data acquisition process. We present results from synthetic simulations incorporating real respiratory motion as well as from phantom and patient data
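    For orientation, the unmodified MLEM iteration that the authors extend has a compact matrix form; a minimal NumPy sketch on a toy system follows (the respiratory motion model, which is the paper's contribution, would enter through a time-dependent system matrix and is omitted here):

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Plain MLEM: A maps voxel activities to detector-bin means, y holds
    the measured counts; returns the maximum-likelihood activity estimate."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                       # sensitivity image
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)        # forward projection, guarded
        x *= (A.T @ (y / proj)) / sens         # multiplicative EM update
    return x

rng = np.random.default_rng(1)
A = rng.random((40, 16))                       # toy system matrix
x_true = rng.random(16)
y = rng.poisson(100.0 * (A @ x_true)) / 100.0  # scaled Poisson projections
print(np.round(mlem(A, y)[:4], 3))
```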

  3. Reconstruction of t anti tH (H → bb) events using deep neural networks with the CMS detector

    Energy Technology Data Exchange (ETDEWEB)

    Rieger, Marcel; Erdmann, Martin; Fischer, Benjamin; Fischer, Robert; Heidemann, Fabian; Quast, Thorben; Rath, Yannik [III. Physikalisches Institut A, RWTH Aachen University (Germany)

    2016-07-01

    The measurement of Higgs boson production in association with top-quark pairs (t anti tH) is an important goal of Run 2 of the LHC, as it allows for a direct measurement of the underlying Yukawa coupling. Due to the complex final state, however, the analysis of semi-leptonic t anti tH events with the Higgs boson decaying into a pair of bottom quarks is challenging. A promising method for tackling jet-parton associations is Deep Neural Networks (DNNs). While already a widespread machine learning algorithm in modern industry, DNNs are on the way to becoming established in high energy physics. We present a study on the reconstruction of the final state using DNNs, comparing to Boosted Decision Trees (BDTs) as a benchmark scenario. This is accomplished by generating permutations of simulated events and comparing them with truth information to extract reconstruction efficiencies.

  4. Entropy Characterization of Random Network Models

    Directory of Open Access Journals (Sweden)

    Pedro J. Zufiria

    2017-06-01

    Full Text Available This paper elaborates on the Random Network Model (RNM) as a mathematical framework for modelling and analyzing the generation of complex networks. Such a framework allows the analysis of the relationship between several network characterizing features (link density, clustering coefficient, degree distribution, connectivity, etc.) and entropy-based complexity measures, providing new insight into the generation and characterization of random networks. Some theoretical and computational results illustrate the utility of the proposed framework.

  5. The model of social crypto-network

    Directory of Open Access Journals (Sweden)

    Марк Миколайович Орел

    2015-06-01

    Full Text Available The article presents a theoretical model of a social network with an enhanced privacy policy mechanism. It covers the problems arising in the process of implementing this type of network and presents methods for solving the problems that arise when building a social network with a privacy policy. A theoretical model of a social network with enhanced information protection methods, based on information and communication blocks, was built.

  6. Towards reproducible descriptions of neuronal network models.

    Directory of Open Access Journals (Sweden)

    Eilen Nordlie

    2009-08-01

    Full Text Available Progress in science depends on the effective exchange of ideas among scientists. New ideas can be assessed and criticized in a meaningful manner only if they are formulated precisely. This applies to simulation studies as well as to experiments and theories. But after more than 50 years of neuronal network simulations, we still lack a clear and common understanding of the role of computational models in neuroscience as well as established practices for describing network models in publications. This hinders the critical evaluation of network models as well as their re-use. We analyze here 14 research papers proposing neuronal network models of different complexity and find widely varying approaches to model descriptions, with regard to both the means of description and the ordering and placement of material. We further observe great variation in the graphical representation of networks and the notation used in equations. Based on our observations, we propose a good model description practice, composed of guidelines for the organization of publications, a checklist for model descriptions, templates for tables presenting model structure, and guidelines for diagrams of networks. The main purpose of this good practice is to trigger a debate about the communication of neuronal network models in a manner comprehensible to humans, as opposed to machine-readable model description languages. We believe that the good model description practice proposed here, together with a number of other recent initiatives on data-, model-, and software-sharing, may lead to a deeper and more fruitful exchange of ideas among computational neuroscientists in years to come. We further hope that work on standardized ways of describing--and thinking about--complex neuronal networks will lead the scientific community to a clearer understanding of high-level concepts in network dynamics, and will thus lead to deeper insights into the function of the brain.

  7. Designing Network-based Business Model Ontology

    DEFF Research Database (Denmark)

    Hashemi Nekoo, Ali Reza; Ashourizadeh, Shayegheh; Zarei, Behrouz

    2015-01-01

    Survival in a dynamic environment is not achieved without a map. Scanning and monitoring of the market show business models to be a fruitful tool. But scholars believe that old-fashioned business models are dead, as they do not include the effects of the internet and networks. This paper proposes an e-business model ontology from the network point of view and its application in the real world. The suggested ontology for network-based businesses is composed of individuals' characteristics and the kinds of resources they own, as well as their connections and pre-conceptions of connections, such as shared mental models and trust. However, it mostly covers previous business model elements. To confirm the applicability of this ontology, it has been implemented in a business angel network to show how it works.

  8. An Improved Car-Following Model in Vehicle Networking Based on Network Control

    Directory of Open Access Journals (Sweden)

    D. Y. Kong

    2014-01-01

    Full Text Available Vehicle networking is a system to realize information interoperability between vehicles and people, vehicles and roads, vehicles and vehicles, and vehicles and transport facilities through network information exchange, in order to achieve effective monitoring of vehicles and traffic flow. Realizing information interoperability between vehicles, which can affect the traffic flow, is an important application of network control systems (NCS). In this paper, a car-following model using vehicle networking theory is established, based on the network control principle. The car-following model, which is an improvement of the traditional traffic model, describes traffic under vehicle networking conditions. The impact that vehicle networking has on the traffic flow is quantitatively assessed in a particular scenario: a one-way highway with no lane changing. The examples show that the capacity of the road is effectively enhanced by using vehicle networking.
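    The abstract does not give the model's equations, so the sketch below uses a generic optimal-velocity car-following loop on a one-way ring as a stand-in, with a crude delay parameter mimicking a networked-control update lag; all constants are illustrative:

```python
import numpy as np

def simulate_ring(n=20, road=400.0, a=0.5, dt=0.1, steps=2000, delay=0):
    """Generic optimal-velocity car-following on a one-way ring road;
    `delay` (in steps) crudely mimics a networked-control update lag."""
    x = np.linspace(0.0, road, n, endpoint=False)    # initial positions
    v = np.zeros(n)
    hist = [x.copy()]
    V = lambda h: 15.0 * (np.tanh(h / 10.0 - 2.0) + np.tanh(2.0)) / 2.0
    for _ in range(steps):
        x_ref = hist[max(0, len(hist) - 1 - delay)]  # possibly stale positions
        gap = (np.roll(x_ref, -1) - x_ref) % road    # headway to the car ahead
        v += a * (V(gap) - v) * dt                   # relax toward optimal speed
        x = (x + v * dt) % road
        hist.append(x.copy())
    return v.mean()

# Compare mean speeds with and without an update lag.
print(round(simulate_ring(delay=0), 2), round(simulate_ring(delay=20), 2))
```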

  9. Progress towards the use of publicly available data networks to conduct cross-scale historical reconstructions of carbon dynamics in US Drylands

    Science.gov (United States)

    Washington-Allen, R. A.; Landolt, K.; Emanuel, R. E.; Therrell, M. D.; Nagle, N.; Grissino-Mayer, H. D.; Poulter, B.

    2016-12-01

    Emergent scale properties of water-limited (Dryland) ecosystems' carbon flux are unknown at spatial scales from local to global and time scales of 10-1000 years or greater. The width of a tree ring is a metric of production that has been correlated with the amount of precipitation. This relationship has been used to reconstruct rainfall and fire histories in the Drylands of the southwestern US. The normalized difference vegetation index (NDVI) is globally measured by selected satellite sensors and is highly correlated with the fraction of solar radiation absorbed for photosynthesis by plants (FPAR), as well as with vegetation biomass, net primary productivity (NPP), and tree ring width. Publicly available web-based archives of free NDVI and tree ring data exist and have allowed historical temporal reconstructions of carbon dynamics for the past 300 to 500 years. Climate and tree ring databases have been used to spatially reconstruct drought dynamics over the last 500 years in the western US. In 2007, we hypothesized that NDVI and tree ring width could be used to spatially reconstruct carbon dynamics in US Drylands. In 2015, we succeeded with a 300-year historical spatial reconstruction of NPP in California using a Blue Oak tree ring chronology. Online eddy covariance flux tower measures of NPP are well correlated with satellite measures of NPP. This suggests that net ecosystem exchange (NEE = NPP - soil respiration) could be historically reconstructed across Drylands. Ongoing research includes 1) scaling the historical spatial reconstruction to US Drylands, 2) comparing the use of single versus multiple tree ring species (r2 = 0.68), and 3) using the eddy flux tower network, remote sensing, and tree ring data to historically and spatially reconstruct Dryland NEE.

  10. Multiplicative Attribute Graph Model of Real-World Networks

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Myunghwan [Stanford Univ., CA (United States); Leskovec, Jure [Stanford Univ., CA (United States)

    2010-10-20

    Large scale real-world network data, such as social networks, Internet and Web graphs, are ubiquitous in a variety of scientific domains. The study of such social and information networks commonly finds patterns and explains their emergence through tractable models. In most networks, especially in social networks, nodes also have a rich set of attributes (e.g., age, gender) associated with them. However, most of the existing network models focus only on modeling the network structure while ignoring the features of nodes in the network. Here we present a class of network models that we refer to as the Multiplicative Attribute Graphs (MAG), which naturally captures the interactions between the network structure and node attributes. We consider a model where each node has a vector of categorical features associated with it. The probability of an edge between a pair of nodes then depends on the product of individual attribute-attribute similarities. The model lends itself to mathematical analysis as well as to fitting real data. We derive thresholds for connectivity and the emergence of the giant connected component, and show that the model gives rise to graphs with a constant diameter. Moreover, we analyze the degree distribution to show that the model can produce networks with either lognormal or power-law degree distributions depending on certain conditions.
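    The MAG edge probability is simply a product of per-attribute affinities. A minimal sketch with binary attributes and a single hypothetical 2x2 affinity matrix shared by all attributes:

```python
import numpy as np

rng = np.random.default_rng(7)
n, k = 100, 4                                   # nodes, binary attributes each
attrs = rng.integers(0, 2, size=(n, k))
theta = np.array([[0.8, 0.4],                   # hypothetical affinity matrix,
                  [0.4, 0.2]])                  # reused for every attribute

def edge_prob(u, v):
    """MAG: edge probability is the product of attribute-attribute affinities."""
    return np.prod([theta[attrs[u, i], attrs[v, i]] for i in range(k)])

adj = np.zeros((n, n), dtype=bool)
for u in range(n):
    for v in range(u + 1, n):
        adj[u, v] = adj[v, u] = rng.random() < edge_prob(u, v)
print(int(adj.sum() // 2), "edges sampled")
```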

  11. Error Concealment using Neural Networks for Block-Based Image Coding

    Directory of Open Access Journals (Sweden)

    M. Mokos

    2006-06-01

    Full Text Available In this paper, a novel adaptive error concealment (EC) algorithm, which lowers the requirements for channel coding, is proposed. It conceals errors in block-based image coding systems using a neural network. In the proposed algorithm, only the intra-frame information is used for reconstruction of an image with separated damaged blocks. The information of pixels surrounding a damaged block is used to recover the errors using the neural network models. Computer simulation results show that the visual quality and the MSE evaluation of a reconstructed image are significantly improved using the proposed EC algorithm. We also propose a simple non-neural approach for comparison.

  12. Application of a Laplace transform pair model for high-energy x-ray spectral reconstruction.

    Science.gov (United States)

    Archer, B R; Almond, P R; Wagner, L K

    1985-01-01

    A Laplace transform pair model, previously shown to accurately reconstruct x-ray spectra at diagnostic energies, has been applied to megavoltage energy beams. The inverse Laplace transforms of 2-, 6-, and 25-MV attenuation curves were evaluated to determine the energy spectra of these beams. The 2-MV data indicate that the model can reliably reconstruct spectra in the low megavoltage range. Experimental limitations in acquiring the 6-MV transmission data demonstrate the sensitivity of the model to systematic experimental error. The 25-MV data result in a physically realistic approximation of the present spectrum.

  13. Cellular neural networks, the Navier-Stokes equation, and microarray image reconstruction.

    Science.gov (United States)

    Zineddin, Bachar; Wang, Zidong; Liu, Xiaohui

    2011-11-01

    Although the last decade has witnessed a great deal of improvement in microarray technology, many major developments in all the main stages of this technology, including image processing, are still needed. Some hardware implementations of microarray image processing have been proposed in the literature and proved to be promising alternatives to the currently available software systems. However, the main drawback of those proposed approaches is that they do not suitably address quantification of the gene spot in a realistic way, i.e., without any assumption about the image surface. Our aim in this paper is to present a new image-reconstruction algorithm using a cellular neural network that solves the Navier-Stokes equation. This algorithm offers a robust method for estimating the background signal within the gene-spot region. The MATCNN toolbox for Matlab is used to test the proposed method. Quantitative comparisons are carried out, in terms of objective criteria, between our approach and some other available methods. It is shown that the proposed algorithm gives highly accurate and realistic measurements in a fully automated manner within a remarkably efficient time.

  14. Inferring network topology from complex dynamics

    International Nuclear Information System (INIS)

    Shandilya, Srinivas Gorur; Timme, Marc

    2011-01-01

    Inferring the network topology from dynamical observations is a fundamental problem pervading research on complex systems. Here, we present a simple, direct method for inferring the structural connection topology of a network, given an observation of one collective dynamical trajectory. The general theoretical framework is applicable to arbitrary network dynamical systems described by ordinary differential equations. No interference (external driving) is required and the type of dynamics is hardly restricted in any way. In particular, the observed dynamics may be arbitrarily complex; stationary, invariant or transient; synchronous or asynchronous; and chaotic or periodic. Presupposing knowledge of the functional form of the dynamical units and of the coupling functions between them, we present an analytical solution to the inverse problem of finding the network topology from observing a time series of state variables only. Robust reconstruction is achieved in any sufficiently long generic observation of the system. We extend our method to simultaneously reconstructing both the entire network topology and all parameters appearing linearly in the system's equations of motion. Reconstruction of network topology and system parameters is viable even in the presence of external noise that distorts the original dynamics substantially. The method provides a conceptually new step towards reconstructing a variety of real-world networks, including gene and protein interaction networks and neuronal circuits.
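    For the special case of linearly coupled units, the inverse problem described here reduces to least squares on finite-difference derivative estimates; the paper's framework is far more general, but a toy linear sketch conveys the idea:

```python
import numpy as np

rng = np.random.default_rng(3)
n, T, dt = 8, 2000, 0.01
J_true = (rng.random((n, n)) < 0.25) * rng.normal(0.0, 1.0, (n, n))
np.fill_diagonal(J_true, -3.0)                    # stabilizing self-coupling

# Simulate dx/dt = J x plus small noise and record the trajectory.
X = np.zeros((T, n))
X[0] = rng.normal(0.0, 1.0, n)
for t in range(T - 1):
    X[t + 1] = X[t] + dt * (J_true @ X[t]) + np.sqrt(dt) * 0.05 * rng.normal(size=n)

# Reconstruct the coupling matrix by least squares on derivative estimates.
dXdt = (X[1:] - X[:-1]) / dt
J_est, *_ = np.linalg.lstsq(X[:-1], dXdt, rcond=None)  # solves X J^T = dX/dt
print(float(np.abs(J_est.T - J_true).max()))           # elementwise error
```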

  15. Integrated Main Propulsion System Performance Reconstruction Process/Models

    Science.gov (United States)

    Lopez, Eduardo; Elliott, Katie; Snell, Steven; Evans, Michael

    2013-01-01

    The Integrated Main Propulsion System (MPS) Performance Reconstruction process provides the MPS post-flight data files needed for postflight reporting to the project integration management and key customers to verify flight performance. This process/model was used as the baseline for the currently ongoing Space Launch System (SLS) work. The process utilizes several methodologies, including multiple software programs, to model integrated propulsion system performance through space shuttle ascent. It is used to evaluate integrated propulsion systems, including propellant tanks, feed systems, rocket engine, and pressurization systems performance throughout ascent based on flight pressure and temperature data. The latest revision incorporates new methods based on main engine power balance model updates to model higher mixture ratio operation at lower engine power levels.

  16. Developing Personal Network Business Models

    DEFF Research Database (Denmark)

    Saugstrup, Dan; Henten, Anders

    2006-01-01

    The aim of the paper is to examine the issue of business modeling in relation to personal networks, PNs. The paper builds on research performed on business models in the EU IST MAGNET project (My personal Adaptive Global NET). The paper presents the Personal Network concept and briefly reports

  17. Bias in iterative reconstruction of low-statistics PET data: benefits of a resolution model

    Energy Technology Data Exchange (ETDEWEB)

    Walker, M D; Asselin, M-C; Julyan, P J; Feldmann, M; Matthews, J C [School of Cancer and Enabling Sciences, Wolfson Molecular Imaging Centre, MAHSC, University of Manchester, Manchester M20 3LJ (United Kingdom); Talbot, P S [Mental Health and Neurodegeneration Research Group, Wolfson Molecular Imaging Centre, MAHSC, University of Manchester, Manchester M20 3LJ (United Kingdom); Jones, T, E-mail: matthew.walker@manchester.ac.uk [Academic Department of Radiation Oncology, Christie Hospital, University of Manchester, Manchester M20 4BX (United Kingdom)

    2011-02-21

    Iterative image reconstruction methods such as ordered-subset expectation maximization (OSEM) are widely used in PET. Reconstructions via OSEM are however reported to be biased for low-count data. We investigated this and considered the impact for dynamic PET. Patient listmode data were acquired in [11C]DASB and [15O]H2O scans on the HRRT brain PET scanner. These data were subsampled to create many independent, low-count replicates. The data were reconstructed and the images from low-count data were compared to the high-count originals (from the same reconstruction method). This comparison enabled low-statistics bias to be calculated for the given reconstruction, as a function of the noise-equivalent counts (NEC). Two iterative reconstruction methods were tested, one with and one without an image-based resolution model (RM). Significant bias was observed when reconstructing data of low statistical quality, for both subsampled human and simulated data. For human data, this bias was substantially reduced by including a RM. For [11C]DASB the low-statistics bias in the caudate head at 1.7 M NEC (approx. 30 s) was -5.5% and -13% with and without RM, respectively. We predicted biases in the binding potential of -4% and -10%. For quantification of cerebral blood flow for the whole-brain grey or white matter, using [15O]H2O and the PET autoradiographic method, a low-statistics bias of <2.5% and <4% was predicted for reconstruction with and without the RM. The use of a resolution model reduces low-statistics bias and can hence be beneficial for quantitative dynamic PET.

  18. A novel Direct Small World network model

    Directory of Open Access Journals (Sweden)

    LIN Tao

    2016-10-01

    Full Text Available Existing computer networks exhibit a certain degree of redundancy and low efficiency. This paper presents a novel Direct Small World network model in order to optimize networks. In this model, several nodes first construct a regular network; some nodes are then randomly chosen and replotted to generate the Direct Small World network iteratively. The average distance and clustering coefficient do not change, but network performance, such as the number of hops, is improved. The experiments show that, compared to the traditional small world network, the degree, the average degree centrality and the average closeness centrality are lower in the Direct Small World network. This illustrates that the nodes in Direct Small World networks are closer than in the Watts-Strogatz small world network model. The Direct Small World model can be used not only in the communication of community information, but also in research on epidemics.
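    The Direct Small World construction is not fully specified in the abstract, but the Watts-Strogatz baseline it is compared against, and the metrics quoted, are easy to reproduce with networkx:

```python
import networkx as nx

# Watts-Strogatz baseline the Direct Small World model is measured against.
G = nx.connected_watts_strogatz_graph(n=200, k=6, p=0.1, seed=42)

print("average shortest path:", round(nx.average_shortest_path_length(G), 3))
print("average clustering:   ", round(nx.average_clustering(G), 3))
deg = nx.degree_centrality(G)
clo = nx.closeness_centrality(G)
print("mean degree centrality:   ", round(sum(deg.values()) / len(deg), 4))
print("mean closeness centrality:", round(sum(clo.values()) / len(clo), 4))
```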

  19. Quantitative utilization of prior biological knowledge in the Bayesian network modeling of gene expression data

    Directory of Open Access Journals (Sweden)

    Gao Shouguo

    2011-08-01

    Full Text Available Abstract Background: Bayesian Networks (BN) are a powerful approach to reconstructing genetic regulatory networks from gene expression data. However, expression data by itself suffers from high noise and lack of power. Incorporating prior biological knowledge can improve the performance. As each type of prior knowledge on its own may be incomplete or limited by quality issues, integrating multiple sources of prior knowledge to utilize their consensus is desirable. Results: We introduce a new method to incorporate the quantitative information from multiple sources of prior knowledge. It first uses the Naïve Bayesian classifier to assess the likelihood of functional linkage between gene pairs based on prior knowledge. In this study we included co-citation in PubMed and semantic similarity in Gene Ontology annotation. A candidate network edge reservoir is then created in which the copy number of each edge is proportional to the estimated likelihood of linkage between the two corresponding genes. In the network simulation the Markov Chain Monte Carlo sampling algorithm is adopted, sampling from this reservoir at each iteration to generate new candidate networks. We evaluated the new algorithm using both simulated and real gene expression data, including that from a yeast cell cycle and a mouse pancreas development/growth study. Incorporating prior knowledge led to a ~2-fold increase in the number of known transcription regulations recovered, without significant change in the false positive rate. In contrast, without the prior knowledge BN modeling is not always better than a random selection, demonstrating the necessity in network modeling of supplementing the gene expression data with additional information. Conclusion: Our new development provides a statistical means to utilize the quantitative information in prior biological knowledge in the BN modeling of gene expression data, which significantly improves the performance.
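    The Naïve Bayes step amounts to multiplying the prior odds of linkage by one likelihood ratio per knowledge source. A toy sketch with invented likelihood ratios for three candidate gene pairs:

```python
import numpy as np

# Hypothetical likelihood ratios P(evidence | linked) / P(evidence | not linked)
# for two prior-knowledge sources: PubMed co-citation and GO semantic similarity.
lr_cocitation = np.array([3.0, 0.8, 1.5])     # one value per candidate gene pair
lr_go_similarity = np.array([2.0, 0.5, 4.0])
prior_odds = 0.05 / 0.95                      # assumed 5% prior linkage probability

posterior_odds = prior_odds * lr_cocitation * lr_go_similarity  # naive Bayes
p_link = posterior_odds / (1 + posterior_odds)
print(np.round(p_link, 3))   # would drive each edge's copy number in the reservoir
```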

  20. "Growing trees backwards": Description of a stand reconstruction model (P-53)

    Science.gov (United States)

    Jonathan D. Bakker; Andrew J. Sanchez Meador; Peter Z. Fule; David W. Huffman; Margaret M. Moore

    2008-01-01

    We describe an individual-tree model that uses contemporary measurements to "grow trees backward" and reconstruct past tree diameters and stand structure in ponderosa pine dominated stands of the Southwest. Model inputs are contemporary structural measurements of all snags, logs, stumps, and living trees, and radial growth measurements, if available. Key...

  1. Non-consensus Opinion Models on Complex Networks

    Science.gov (United States)

    Li, Qian; Braunstein, Lidia A.; Wang, Huijuan; Shao, Jia; Stanley, H. Eugene; Havlin, Shlomo

    2013-04-01

    Social dynamic opinion models have been widely studied to understand how interactions among individuals cause opinions to evolve. Most opinion models that utilize spin interaction models usually produce a consensus steady state in which only one opinion exists. Because in reality different opinions usually coexist, we focus on non-consensus opinion models in which above a certain threshold two opinions coexist in a stable relationship. We revisit and extend the non-consensus opinion (NCO) model introduced by Shao et al. (Phys. Rev. Lett. 103:018701, 2009). The NCO model in random networks displays a second-order phase transition that belongs to regular mean-field percolation and is characterized by the appearance (above a certain threshold) of a large spanning cluster of the minority opinion. We generalize the NCO model by adding a weight factor W to each individual's original opinion when determining their future opinion (NCO W model). We find that as W increases the minority opinion holders tend to form stable clusters with a smaller initial minority fraction than in the NCO model. We also revisit another non-consensus opinion model based on the NCO model, the inflexible contrarian opinion (ICO) model (Li et al. in Phys. Rev. E 84:066101, 2011), which introduces inflexible contrarians to model the competition between two opinions in a steady state. Inflexible contrarians are individuals that never change their original opinion but may influence the opinions of others. To place the inflexible contrarians in the ICO model we use two different strategies, random placement and one in which high-degree nodes are targeted. The inflexible contrarians effectively decrease the size of the largest rival-opinion cluster in both strategies, but the effect is more pronounced under the targeted method. All of the above models have previously been explored in terms of a single network, but human communities are usually interconnected, not isolated. Because opinions propagate not
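    The NCO update rule is simple enough to simulate directly. The sketch below assumes the commonly stated formulation (synchronous updates over the closed neighborhood, with ties keeping the current opinion) on an Erdős-Rényi graph:

```python
import numpy as np
import networkx as nx

def nco_step(G, opinion):
    """One synchronous NCO update: each node adopts the majority opinion of
    its closed neighborhood (itself plus neighbors); ties keep the opinion."""
    new = opinion.copy()
    for v in G:
        ones = opinion[v] + sum(opinion[u] for u in G[v])
        size = 1 + G.degree(v)
        if 2 * ones > size:
            new[v] = 1
        elif 2 * ones < size:
            new[v] = 0
    return new

rng = np.random.default_rng(0)
G = nx.erdos_renyi_graph(1000, 0.01, seed=0)
opinion = {v: int(rng.random() < 0.3) for v in G}   # 30% initial minority
for _ in range(100):
    nxt = nco_step(G, opinion)
    if nxt == opinion:                              # steady state reached
        break
    opinion = nxt
print(sum(opinion.values()) / len(opinion))         # surviving minority share
```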

  2. Bayesian reconstruction of gravitational wave bursts using chirplets

    Science.gov (United States)

    Millhouse, Margaret; Cornish, Neil J.; Littenberg, Tyson

    2018-05-01

    The LIGO-Virgo Collaboration uses a variety of techniques to detect and characterize gravitational waves. One approach is to use templates—models for the signals derived from Einstein's equations. Another approach is to extract the signals directly from the coherent response of the detectors in the LIGO-Virgo network. Both approaches played an important role in the first gravitational wave detections. Here we extend the BayesWave analysis algorithm, which reconstructs gravitational wave signals using a collection of continuous wavelets, to use a generalized wavelet family, known as chirplets, that have time-evolving frequency content. Since generic gravitational wave signals have frequency content that evolves in time, a collection of chirplets provides a more compact representation of the signal, resulting in more accurate waveform reconstructions, especially for low signal-to-noise events, and events that occupy a large time-frequency volume.
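    A chirplet is a Gaussian-windowed sinusoid whose frequency drifts linearly in time; with zero drift it reduces to a sine-Gaussian wavelet of the kind BayesWave originally used. A minimal sketch with illustrative parameters:

```python
import numpy as np

def chirplet(t, t0=0.5, f0=100.0, fdot=200.0, tau=0.08, amp=1.0, phi=0.0):
    """Gaussian-enveloped sinusoid whose instantaneous frequency drifts
    linearly as f0 + fdot*(t - t0); fdot = 0 gives a sine-Gaussian wavelet."""
    dt = t - t0
    return amp * np.exp(-dt**2 / tau**2) * np.cos(
        2 * np.pi * (f0 * dt + 0.5 * fdot * dt**2) + phi)

t = np.linspace(0.0, 1.0, 4096)   # 1 s of samples
h = chirplet(t)
print(float(np.max(np.abs(h))))
```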

  3. The effect of 18F-FDG-PET image reconstruction algorithms on the expression of characteristic metabolic brain network in Parkinson's disease.

    Science.gov (United States)

    Tomše, Petra; Jensterle, Luka; Rep, Sebastijan; Grmek, Marko; Zaletel, Katja; Eidelberg, David; Dhawan, Vijay; Ma, Yilong; Trošt, Maja

    2017-09-01

    To evaluate the reproducibility of the expression of the Parkinson's Disease Related Pattern (PDRP) across multiple sets of 18F-FDG-PET brain images reconstructed with different reconstruction algorithms. 18F-FDG-PET brain imaging was performed in two independent cohorts of Parkinson's disease (PD) patients and normal controls (NC). The Slovenian cohort (20 PD patients, 20 NC) was scanned with a Siemens Biograph mCT camera and reconstructed using FBP, FBP+TOF, OSEM, OSEM+TOF, OSEM+PSF and OSEM+PSF+TOF. The American cohort (20 PD patients, 7 NC) was scanned with a GE Advance camera and reconstructed using 3DRP, FORE-FBP and FORE-Iterative. Expressions of two previously-validated PDRP patterns (PDRP-Slovenia and PDRP-USA) were calculated. We compared the ability of the PDRP to discriminate PD patients from NC, the differences and correlation between the corresponding subject scores, and ROC analysis results across the different reconstruction algorithms. The expression of the PDRP-Slovenia and PDRP-USA networks was significantly elevated in PD patients compared to NC for all studied algorithms. PDRP expression strongly correlated between all studied algorithms and the reference algorithm (r ⩾ 0.993); subject scores across algorithms varied within 0.73 and 0.08 of the reference value for PDRP-Slovenia and PDRP-USA, respectively. ROC analysis confirmed high similarity in sensitivity, specificity and AUC among all studied reconstruction algorithms. These results show that the expression of the PDRP is reproducible across a variety of reconstruction algorithms of 18F-FDG-PET brain images. The PDRP is capable of providing a robust metabolic biomarker of PD for multicenter 18F-FDG-PET images acquired in the context of differential diagnosis or clinical trials. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  4. Compartmentalized metabolic network reconstruction of microbial communities to determine the effect of agricultural intervention on soils

    Science.gov (United States)

    Álvarez-Yela, Astrid Catalina; Gómez-Cano, Fabio; Zambrano, María Mercedes; Husserl, Johana; Danies, Giovanna; Restrepo, Silvia; González-Barrios, Andrés Fernando

    2017-01-01

    Soil microbial communities are responsible for a wide range of ecological processes and have an important economic impact in agriculture. Determining the metabolic processes performed by microbial communities is crucial for understanding and managing ecosystem properties. Metagenomic approaches allow the elucidation of the main metabolic processes that determine the performance of microbial communities under different environmental conditions and perturbations. Here we present the first compartmentalized metabolic reconstruction at a metagenomics scale of a microbial ecosystem. This systematic approach conceives a meta-organism without boundaries between individual organisms and allows the in silico evaluation of the effect of agricultural intervention on soils at a metagenomics level. To characterize the microbial ecosystems, topological properties, taxonomic and metabolic profiles, as well as a Flux Balance Analysis (FBA) were considered. Furthermore, topological and optimization algorithms were implemented to carry out the curation of the models, to ensure the continuity of the fluxes between the metabolic pathways, and to confirm the metabolite exchange between subcellular compartments. The proposed models provide specific information about ecosystems that are generally overlooked in non-compartmentalized or non-curated networks, like the influence of transport reactions in the metabolic processes, especially the important effect on mitochondrial processes, as well as provide more accurate results of the fluxes used to optimize the metabolic processes within the microbial community. PMID:28767679

  5. Compartmentalized metabolic network reconstruction of microbial communities to determine the effect of agricultural intervention on soils.

    Directory of Open Access Journals (Sweden)

    María Camila Alvarez-Silva

    Full Text Available Soil microbial communities are responsible for a wide range of ecological processes and have an important economic impact in agriculture. Determining the metabolic processes performed by microbial communities is crucial for understanding and managing ecosystem properties. Metagenomic approaches allow the elucidation of the main metabolic processes that determine the performance of microbial communities under different environmental conditions and perturbations. Here we present the first compartmentalized metabolic reconstruction at a metagenomics scale of a microbial ecosystem. This systematic approach conceives a meta-organism without boundaries between individual organisms and allows the in silico evaluation of the effect of agricultural intervention on soils at a metagenomics level. To characterize the microbial ecosystems, topological properties, taxonomic and metabolic profiles, as well as a Flux Balance Analysis (FBA) were considered. Furthermore, topological and optimization algorithms were implemented to carry out the curation of the models, to ensure the continuity of the fluxes between the metabolic pathways, and to confirm the metabolite exchange between subcellular compartments. The proposed models provide specific information about ecosystems that are generally overlooked in non-compartmentalized or non-curated networks, like the influence of transport reactions in the metabolic processes, especially the important effect on mitochondrial processes, as well as provide more accurate results of the fluxes used to optimize the metabolic processes within the microbial community.
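    The Flux Balance Analysis used to interrogate such reconstructions is a linear program: maximize an objective flux subject to the steady-state constraint S v = 0 and flux bounds. A toy three-reaction chain in SciPy (the stoichiometry and bounds are invented for illustration):

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix S (metabolites x reactions): uptake -> A -> B -> out
S = np.array([[ 1, -1,  0],    # metabolite A balance
              [ 0,  1, -1]])   # metabolite B balance
bounds = [(0, 10), (0, 8), (0, None)]   # illustrative flux bounds
c = np.array([0, 0, -1.0])              # maximize last flux (linprog minimizes)

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)   # optimal flux distribution, capped by the 2nd reaction's bound
```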

  6. Analysis of PWR control rod ejection accident with the coupled code system SKETCH-INS/TRACE by incorporating pin power reconstruction model

    International Nuclear Information System (INIS)

    Nakajima, T.; Sakai, T.

    2010-01-01

    The pin power reconstruction model was incorporated in the 3-D nodal kinetics code SKETCH-INS in order to produce accurate calculations of three-dimensional pin power distributions throughout the reactor core. In order to verify the employed pin power reconstruction model, the PWR MOX/UO2 core transient benchmark problem was analyzed with the coupled code system SKETCH-INS/TRACE incorporating the model, and the influence of the pin power reconstruction model was studied. SKETCH-INS pin power distributions for 3 benchmark problems were compared with the PARCS solutions which were provided by the host organisation of the benchmark. SKETCH-INS results were in good agreement with the PARCS results. The capability of the employed pin power reconstruction model was confirmed through the analysis of benchmark problems. A PWR control rod ejection benchmark problem was then analyzed with the coupled code system SKETCH-INS/TRACE incorporating the pin power reconstruction model. The influence of the pin power reconstruction model was studied by comparison with the results of the conventional node-averaged flux model. The results indicate that the pin power reconstruction model has a significant effect on the pin powers during the transient and hence on the fuel enthalpy

  7. Improved quantitative 90 Y bremsstrahlung SPECT/CT reconstruction with Monte Carlo scatter modeling.

    Science.gov (United States)

    Dewaraja, Yuni K; Chun, Se Young; Srinivasa, Ravi N; Kaza, Ravi K; Cuneo, Kyle C; Majdalany, Bill S; Novelli, Paula M; Ljungberg, Michael; Fessler, Jeffrey A

    2017-12-01

    In 90Y microsphere radioembolization (RE), accurate post-therapy imaging-based dosimetry is important for establishing absorbed dose versus outcome relationships for developing future treatment planning strategies. Additionally, accurately assessing microsphere distributions is important because of concerns about unexpected activity deposition outside the liver. Quantitative 90Y imaging by either SPECT or PET is challenging. In 90Y SPECT, model-based methods are necessary for scatter correction because energy window-based methods are not feasible with the continuous bremsstrahlung energy spectrum. The objective of this work was to implement and evaluate a scatter estimation method for accurate 90Y bremsstrahlung SPECT/CT imaging. Since a fully Monte Carlo (MC) approach to 90Y SPECT reconstruction is computationally very demanding, in the present study the scatter estimate generated by a MC simulator was combined with an analytical projector in the 3D OS-EM reconstruction model. A single window (105-195 keV) was used for both the acquisition and the projector modeling. A liver/lung torso phantom with intrahepatic lesions and low-uptake extrahepatic objects was imaged to evaluate SPECT/CT reconstruction without and with scatter correction. Clinical application was demonstrated by applying the reconstruction approach to five patients treated with RE to determine lesion and normal liver activity concentrations using a (liver) relative calibration. There was convergence of the scatter estimate after just two updates, greatly reducing computational requirements. In the phantom study, compared with reconstruction without scatter correction, with MC scatter modeling there was substantial improvement in activity recovery in intrahepatic lesions (from > 55% to > 86%), normal liver (from 113% to 104%), and lungs (from 227% to 104%) with only a small degradation in noise (13% vs. 17%). Similarly, with scatter modeling, contrast improved substantially both visually and in

  8. ISTA-Net: Iterative Shrinkage-Thresholding Algorithm Inspired Deep Network for Image Compressive Sensing

    KAUST Repository

    Zhang, Jian

    2017-06-24

    Traditional methods for image compressive sensing (CS) reconstruction solve a well-defined inverse problem that is based on a predefined CS model, which defines the underlying structure of the problem and is generally solved by employing convergent iterative solvers. These optimization-based CS methods face the challenge of choosing optimal transforms and tuning parameters in their solvers, while also suffering from high computational complexity in most cases. Recently, some deep network based CS algorithms have been proposed to improve CS reconstruction performance, while dramatically reducing time complexity as compared to optimization-based methods. Despite their impressive results, the proposed networks (either with fully-connected or repetitive convolutional layers) lack any structural diversity and they are trained as a black box, void of any insights from the CS domain. In this paper, we combine the merits of both types of CS methods: the structure insights of optimization-based method and the performance/speed of network-based ones. We propose a novel structured deep network, dubbed ISTA-Net, which is inspired by the Iterative Shrinkage-Thresholding Algorithm (ISTA) for optimizing a general $l_1$ norm CS reconstruction model. ISTA-Net essentially implements a truncated form of ISTA, where all ISTA-Net parameters are learned end-to-end to minimize a reconstruction error in training. Borrowing more insights from the optimization realm, we propose an accelerated version of ISTA-Net, dubbed FISTA-Net, which is inspired by the fast iterative shrinkage-thresholding algorithm (FISTA). Interestingly, this acceleration naturally leads to skip connections in the underlying network design. Extensive CS experiments demonstrate that the proposed ISTA-Net and FISTA-Net outperform existing optimization-based and network-based CS methods by large margins, while maintaining a fast runtime.
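    For context, classical ISTA alternates a gradient step on the data term with soft-thresholding; ISTA-Net unrolls a truncated version of this loop and learns its transforms and thresholds end-to-end. A compact NumPy sketch of the classical iteration on a toy CS problem:

```python
import numpy as np

def soft(x, thr):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)

def ista(A, y, lam=0.01, n_iter=200):
    """Classical ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1; ISTA-Net
    unrolls a truncated form of this loop with learned parameters."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x - (A.T @ (A @ x - y)) / L, lam / L)
    return x

rng = np.random.default_rng(5)
A = rng.normal(size=(50, 128)) / np.sqrt(50)          # CS sensing matrix
x_true = np.zeros(128)
x_true[rng.choice(128, 8, replace=False)] = rng.normal(size=8)  # sparse signal
y = A @ x_true                                        # noiseless measurements
print(float(np.linalg.norm(ista(A, y) - x_true)))     # recovery error
```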

  9. Novel Plasmodium falciparum metabolic network reconstruction identifies shifts associated with clinical antimalarial resistance.

    Science.gov (United States)

    Carey, Maureen A; Papin, Jason A; Guler, Jennifer L

    2017-07-19

    Malaria remains a major public health burden and resistance has emerged to every antimalarial on the market, including the frontline drug, artemisinin. Our limited understanding of Plasmodium biology hinders the elucidation of resistance mechanisms. In this regard, systems biology approaches can facilitate the integration of existing experimental knowledge and further understanding of these mechanisms. Here, we developed a novel genome-scale metabolic network reconstruction, iPfal17, of the asexual blood-stage P. falciparum parasite to expand our understanding of metabolic changes that support resistance. We identified 11 metabolic tasks to evaluate iPfal17 performance. Flux balance analysis and simulation of gene knockouts and enzyme inhibition predict candidate drug targets unique to resistant parasites. Moreover, integration of clinical parasite transcriptomes into the iPfal17 reconstruction reveals patterns associated with antimalarial resistance. These results predict that artemisinin sensitive and resistant parasites differentially utilize scavenging and biosynthetic pathways for multiple essential metabolites, including folate and polyamines. Our findings are consistent with experimental literature, while generating novel hypotheses about artemisinin resistance and parasite biology. We detect evidence that resistant parasites maintain greater metabolic flexibility, perhaps representing an incomplete transition to the metabolic state most appropriate for nutrient-rich blood. Using this systems biology approach, we identify metabolic shifts that arise with or in support of the resistant phenotype. This perspective allows us to more productively analyze and interpret clinical expression data for the identification of candidate drug targets for the treatment of resistant parasites.

  10. A comprehensive probabilistic analysis model of oil pipelines network based on Bayesian network

    Science.gov (United States)

    Zhang, C.; Qin, T. X.; Jiang, B.; Huang, C.

    2018-02-01

    Oil pipeline networks are among the most important facilities for energy transportation, but oil pipeline network accidents may result in serious disasters. Analysis models for these accidents have been established mainly based on three methods: event trees, accident simulation and Bayesian networks. Among these methods, the Bayesian network is suitable for probabilistic analysis. However, not all the important influencing factors have been considered, and a deployment rule for the factors has not been established. This paper proposes a probabilistic analysis model of oil pipeline networks based on a Bayesian network. Most of the important influencing factors, including the key environmental conditions and emergency response, are considered in this model. Moreover, the paper also introduces a deployment rule for these factors. The model can be used in probabilistic analysis and sensitivity analysis of oil pipeline network accidents.

  11. Network structure exploration via Bayesian nonparametric models

    International Nuclear Information System (INIS)

    Chen, Y; Wang, X L; Xiang, X; Tang, B Z; Bu, J Z

    2015-01-01

    Complex networks provide a powerful mathematical representation of complex systems in nature and society. To understand complex networks, it is crucial to explore their internal structures, also called structural regularities. The task of network structure exploration is to determine how many groups there are in a complex network and how to group the nodes of the network. Most existing structure exploration methods need to specify either a group number or a certain type of structure when they are applied to a network. In the real world, however, the group number and also the certain type of structure that a network has are usually unknown in advance. To explore structural regularities in complex networks automatically, without any prior knowledge of the group number or the certain type of structure, we extend a probabilistic mixture model that can handle networks with any type of structure but needs to specify a group number using Bayesian nonparametric theory. We also propose a novel Bayesian nonparametric model, called the Bayesian nonparametric mixture (BNPM) model. Experiments conducted on a large number of networks with different structures show that the BNPM model is able to explore structural regularities in networks automatically with a stable, state-of-the-art performance. (paper)

  12. Reconstruction of binary geological images using analytical edge and object models

    Science.gov (United States)

    Abdollahifard, Mohammad J.; Ahmadi, Sadegh

    2016-04-01

    Reconstruction of fields from partial measurements is of vital importance in different applications in geosciences. Solving such an ill-posed problem requires a well-chosen model. In recent years, training images (TI) have been widely employed as strong prior models for solving these problems. However, in the absence of enough evidence it is difficult to find an adequate TI capable of describing the field behavior properly. In this paper a very simple and general model is introduced which is applicable to a fairly wide range of binary images without any modification. The model is motivated by the fact that nearly all binary images are composed of simple linear edges at the micro-scale. The analytic essence of this model allows us to formulate the template matching problem as a convex optimization problem with efficient and fast solutions. The model also has the potential to incorporate the qualitative and quantitative information provided by geologists. The image reconstruction problem is likewise formulated as an optimization problem and solved using an iterative greedy approach. The proposed method is capable of recovering the unknown image values with accuracies of about 90%, given samples representing as few as 2% of the original image.

  13. FIRST PRISMATIC BUILDING MODEL RECONSTRUCTION FROM TOMOSAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    Y. Sun

    2016-06-01

    Full Text Available This paper demonstrates for the first time the potential of explicitly modelling individual roof surfaces to reconstruct 3-D prismatic building models using spaceborne tomographic synthetic aperture radar (TomoSAR) point clouds. The proposed approach is modular and works as follows: it first extracts the buildings by generating a DSM and cutting off the ground terrain. The DSM is smoothed using the BM3D denoising method proposed in (Dabov et al., 2007), and a gradient map of the smoothed DSM is generated based on height jumps. Watershed segmentation is then adopted to over-segment the DSM into different regions. Subsequently, height- and polygon-complexity-constrained merging is employed to refine (i.e., reduce) the retrieved number of roof segments. The coarse outline of each roof segment is then reconstructed and later refined using a quadtree-based regularization plus zig-zag line simplification scheme. Finally, a height is associated with each refined roof segment to obtain the 3-D prismatic model of the building. The proposed approach is illustrated and validated on a large building (convention center) in the city of Las Vegas, using TomoSAR point clouds generated from a stack of 25 images with the Tomo-GENESIS software developed at DLR.
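
    The over-segmentation stage of such a pipeline can be sketched with scipy/scikit-image on a synthetic DSM; here Gaussian smoothing stands in for BM3D, the geometry is invented, and, as in the paper, the watershed typically over-segments, which is why constrained merging follows.

      import numpy as np
      from scipy import ndimage as ndi
      from skimage.feature import peak_local_max
      from skimage.segmentation import watershed

      # Synthetic DSM: two adjoining flat "roofs" of different heights on flat terrain.
      dsm = np.zeros((100, 100))
      dsm[20:50, 20:80] = 10.0
      dsm[50:80, 40:70] = 16.0
      dsm += 0.1 * np.random.default_rng(0).standard_normal(dsm.shape)

      smoothed = ndi.gaussian_filter(dsm, sigma=2)      # stand-in for BM3D denoising
      gy, gx = np.gradient(smoothed)
      gradient = np.hypot(gx, gy)                       # height-jump (gradient) map

      building_mask = smoothed > 5.0                    # cut off the ground terrain
      coords = peak_local_max(smoothed, min_distance=10, labels=building_mask)
      markers = np.zeros(dsm.shape, dtype=int)
      markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)

      labels = watershed(gradient, markers, mask=building_mask)
      print("roof segments before merging:", labels.max())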

  14. Bayesian image reconstruction in SPECT using higher order mechanical models as priors

    International Nuclear Information System (INIS)

    Lee, S.J.; Gindi, G.; Rangarajan, A.

    1995-01-01

    While the ML-EM (maximum-likelihood expectation-maximization) algorithm for reconstruction in emission tomography is unstable due to the ill-posed nature of the problem, Bayesian reconstruction methods overcome this instability by introducing prior information, often in the form of a spatial smoothness regularizer. More elaborate forms of smoothness constraints may be used to extend the role of the prior beyond that of a stabilizer in order to capture actual spatial information about the object. Previously proposed forms of such prior distributions were based on the assumption of a piecewise constant source distribution. Here, the authors propose an extension to a piecewise linear model, the weak plate, which is more expressive than the piecewise constant model. The weak plate prior not only preserves edges but also allows for piecewise ramplike regions in the reconstruction. Indeed, for the application in SPECT, such ramplike regions are observed in ground-truth source distributions in the form of primate autoradiographs of rCBF radionuclides. To incorporate the weak plate prior in a MAP approach, the authors model the prior as a Gibbs distribution and use a GEM formulation for the optimization. They compare the quantitative performance of the ML-EM algorithm, a GEM algorithm with a prior favoring piecewise constant regions, and a GEM algorithm with the weak plate prior. Pointwise and regional bias and variance of ensemble image reconstructions are used as indications of image quality. The results show that the weak plate and membrane priors exhibit improved bias and variance relative to ML-EM techniques.
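
    The difference between the priors compared above comes down to the order of the finite differences they penalize: a membrane (piecewise-constant) prior penalizes first differences, while the weak plate penalizes second differences and therefore leaves linear ramps unpenalized. A small numpy illustration in 1-D, with the line-process/edge handling of the full Gibbs prior omitted for brevity:

      import numpy as np

      def membrane_energy(x):
          """Sum of squared first differences: flat regions are free, ramps are not."""
          return np.sum(np.diff(x) ** 2)

      def weak_plate_energy(x):
          """Sum of squared second differences: ramps are free, curvature is penalized."""
          return np.sum(np.diff(x, n=2) ** 2)

      flat = np.full(50, 3.0)
      ramp = np.linspace(0.0, 5.0, 50)

      for name, sig in (("flat", flat), ("ramp", ramp)):
          print(f"{name}: membrane = {membrane_energy(sig):.3f}, "
                f"weak plate = {weak_plate_energy(sig):.3f}")
      # The ramp costs the membrane prior dearly but the weak plate nothing,
      # which is why the latter preserves the ramplike rCBF regions.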

  15. A method for climate and vegetation reconstruction through the inversion of a dynamic vegetation model

    Energy Technology Data Exchange (ETDEWEB)

    Garreta, Vincent; Guiot, Joel; Hely, Christelle [CEREGE, UMR 6635, CNRS, Universite Aix-Marseille, Europole de l' Arbois, Aix-en-Provence (France); Miller, Paul A.; Sykes, Martin T. [Lund University, Department of Physical Geography and Ecosystems Analysis, Geobiosphere Science Centre, Lund (Sweden); Brewer, Simon [Universite de Liege, Institut d' Astrophysique et de Geophysique, Liege (Belgium); Litt, Thomas [University of Bonn, Paleontological Institute, Bonn (Germany)

    2010-08-15

    Climate reconstructions from data sensitive to past climates provide estimates of what those climates were like, and comparing the reconstructions with simulations from climate models allows validation of the models used for future climate prediction. It has been shown that, for fossil pollen data, obtaining estimates by inverting a vegetation model allows past changes in carbon dioxide values to be included. As a new generation of dynamic vegetation models has become available, we have developed an inversion method for one such model, LPJ-GUESS. When used with high-resolution sediment records, this novel method allows us to bypass the classic assumptions of (1) independence of climate and pollen between samples and (2) equilibrium between climate and the vegetation, represented as pollen. Our dynamic inversion method is based on a statistical model describing the links among climate, simulated vegetation and pollen samples, and the inversion is realised with a particle filter algorithm. We perform a validation on 30 modern European sites and then apply the method to the sediment core of Meerfelder Maar (Germany), which covers the Holocene at a temporal resolution of approximately one sample per 30 years. We demonstrate that the reconstructed temperatures are well constrained. The reconstructed precipitation is less well constrained, owing to the dimension considered (one precipitation value per season) and the low sensitivity of LPJ-GUESS to precipitation changes. (orig.)
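
    The particle filter at the heart of such an inversion keeps a weighted ensemble of candidate climate states, propagates each through the vegetation model, reweights by the likelihood of the observed pollen, and resamples. The sketch below uses a scalar toy state and a linear stand-in for LPJ-GUESS; every model-specific number is invented for illustration.

      import numpy as np

      rng = np.random.default_rng(1)
      T, n_particles = 50, 500

      # Synthetic truth: a slowly varying "climate" observed through a toy
      # vegetation response veg(c) = 2c + 1 plus observation noise.
      truth = np.cumsum(rng.normal(0, 0.1, T))
      obs = 2 * truth + 1 + rng.normal(0, 0.5, T)

      particles = rng.normal(0, 1, n_particles)
      estimates = []
      for t in range(T):
          particles += rng.normal(0, 0.1, n_particles)  # propagate: random-walk prior
          pred = 2 * particles + 1                      # toy "vegetation model"
          w = np.exp(-0.5 * ((obs[t] - pred) / 0.5) ** 2)
          w /= w.sum()                                  # likelihood weights
          estimates.append(np.sum(w * particles))       # posterior-mean reconstruction
          particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample

      err = np.array(estimates) - truth
      print("reconstruction RMSE:", np.sqrt(np.mean(err ** 2)).round(3))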

  16. Fast parallel algorithm for three-dimensional distance-driven model in iterative computed tomography reconstruction

    International Nuclear Information System (INIS)

    Chen Jian-Lin; Li Lei; Wang Lin-Yuan; Cai Ai-Long; Xi Xiao-Qi; Zhang Han-Ming; Li Jian-Xin; Yan Bin

    2015-01-01

    The projection matrix model is used to describe the physical relationship between the reconstructed object and the projections. Such a model has a strong influence on projection and backprojection, the two vital operations in iterative computed tomographic reconstruction. The distance-driven model (DDM) is a state-of-the-art technique for simulating forward and back projections; it has low computational complexity and relatively high spatial resolution, but few methods exist for running it as a parallel operation with a matched projector/backprojector scheme. This study introduces a fast and parallelizable algorithm that improves the traditional DDM for computing the projection and backprojection operations in parallel. Our proposed model has been implemented on a GPU (graphics processing unit) platform and achieves satisfactory computational efficiency with no approximation. The runtimes for the projection and backprojection operations with our model are approximately 4.5 s and 10.5 s per loop, respectively, for an image size of 256×256×256 and 360 projections of size 512×512. We compare several general algorithms that have been proposed for maximizing GPU efficiency by using unmatched projection/backprojection models in parallel computation. The imaging resolution is not sacrificed and remains accurate during computed tomographic reconstruction. (paper)

  17. 2-D Fused Image Reconstruction approach for Microwave Tomography: a theoretical assessment using FDTD Model.

    Science.gov (United States)

    Bindu, G; Semenov, S

    2013-01-01

    This paper describes an efficient two-dimensional fused image reconstruction approach for Microwave Tomography (MWT). Finite Difference Time Domain (FDTD) models were created for a viable MWT experimental system, with the transceivers modelled using a thin-wire approximation with resistive voltage sources. Born Iterative and Distorted Born Iterative methods were employed for image reconstruction, with the extremity imaging done using a differential imaging technique. The forward solver in the imaging algorithm employs the FDTD method to solve the time-domain Maxwell's equations, with the regularisation parameter computed using a stochastic approach. The algorithm was tested with 10% added noise, and successful image reconstruction was demonstrated, implying its robustness.

  18. Study on Reverse Reconstruction Method of Vehicle Group Situation in Urban Road Network Based on Driver-Vehicle Feature Evolution

    Directory of Open Access Journals (Sweden)

    Xiaoyuan Wang

    2017-01-01

    Full Text Available Vehicle group situation is the status and situation of the dynamic permutation composed of a target vehicle and its neighboring traffic entities. It is a concept frequently involved in research on traffic flow theory, especially active vehicle safety. Studying the vehicle group situation in depth is of great significance for traffic safety. Taking the three-lane condition as an example, the characteristics of the target vehicle and its neighboring vehicles were jointly considered to reconstruct the vehicle group situation in this paper. Gamma distribution theory was used to identify the vehicle group situation when the target vehicle arrived at the end of the study area. From the perspective of driver-vehicle feature evolution, a reverse reconstruction method for the vehicle group situation in the urban road network is proposed. Results of actual driving, virtual driving, and simulation experiments showed that the model established in this paper is reasonable and feasible.

  19. Homophyly/Kinship Model: Naturally Evolving Networks

    Science.gov (United States)

    Li, Angsheng; Li, Jiankou; Pan, Yicheng; Yin, Xianchen; Yong, Xi

    2015-10-01

    It has been a challenge to understand the formation and roles of social groups or natural communities in the evolution of species, societies and real world networks. Here, we propose the hypothesis that homophyly/kinship is the intrinsic mechanism of natural communities, introduce the notion of the affinity exponent and propose the homophyly/kinship model of networks. We demonstrate that the networks of our model satisfy a number of topological, probabilistic and combinatorial properties and, in particular, that the robustness and stability of natural communities increase as the affinity exponent increases and that the reciprocity of the networks in our model decreases as the affinity exponent increases. We show that both homophyly/kinship and reciprocity are essential to the emergence of cooperation in evolutionary games and that the homophyly/kinship and reciprocity determined by the appropriate affinity exponent guarantee the emergence of cooperation in evolutionary games, verifying Darwin’s proposal that kinship and reciprocity are the means of individual fitness. We propose the new principle of structure entropy minimisation for detecting natural communities of networks and verify the functional module property and characteristic properties by a healthy tissue cell network, a citation network, some metabolic networks and a protein interaction network.

  20. Modeling, robust and distributed model predictive control for freeway networks

    NARCIS (Netherlands)

    Liu, S.

    2016-01-01

    In Model Predictive Control (MPC) for traffic networks, traffic models are crucial since they are used as prediction models for determining the optimal control actions. In order to reduce the computational complexity of MPC for traffic networks, macroscopic traffic models are often used instead of

  1. Atmospheric inverse modeling via sparse reconstruction

    Science.gov (United States)

    Hase, Nils; Miller, Scot M.; Maaß, Peter; Notholt, Justus; Palm, Mathias; Warneke, Thorsten

    2017-10-01

    Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is ill-equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with a sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds details on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example of source estimation of synthetic methane emissions from the Barnett shale formation.
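
    Tikhonov regularization with a sparsity constraint leads to an objective of the form min_x ||Ax − y||² + λ||x||₁, commonly solved by iterative shrinkage-thresholding (ISTA), with bounds handled by projection. The sketch below recovers a synthetic emission vector with two point sources through a random stand-in for the atmospheric transport operator; sizes, λ, and the operator are all invented for illustration.

      import numpy as np

      rng = np.random.default_rng(0)
      m, n = 80, 200
      x_true = np.zeros(n)
      x_true[[40, 150]] = [5.0, 3.0]            # two localized "point sources"
      A = rng.normal(size=(m, n)) / np.sqrt(m)  # stand-in transport operator
      y = A @ x_true + rng.normal(0, 0.01, m)

      lam = 0.05
      L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
      x = np.zeros(n)
      for _ in range(500):                      # ISTA iterations
          z = x - A.T @ (A @ x - y) / L         # gradient step on the quadratic term
          x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
          x = np.maximum(x, 0.0)                # bound constraint: emissions >= 0

      print("recovered support:", np.nonzero(x > 0.1)[0])  # expect indices 40 and 150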

  2. Weighted regularized statistical shape space projection for breast 3D model reconstruction.

    Science.gov (United States)

    Ruiz, Guillermo; Ramon, Eduard; García, Jaime; Sukno, Federico M; Ballester, Miguel A González

    2018-05-02

    The use of 3D imaging has increased as a practical and useful tool for plastic and aesthetic surgery planning. Specifically, the possibility of representing the patient's breast anatomy as a 3D shape and simulating aesthetic or plastic procedures is a great tool for communication between surgeon and patient during surgery planning. To obtain the specific 3D model of a patient's breast, model-based reconstruction methods can be used. In particular, 3D morphable models (3DMM) are a robust and widely used method for 3D reconstruction. However, if additional prior information (i.e., known landmarks) is combined with the 3DMM statistical model, shape constraints can be imposed to improve the 3DMM fitting accuracy. In this paper, we present a framework to fit a 3DMM of the breast to two possible inputs: 2D photos and 3D point clouds (scans). Our method consists of a Weighted Regularized (WR) projection into the shape space. The contribution of each point to the 3DMM shape is weighted, allowing more relevance to be assigned to those points that we want to impose as constraints. Our method is applied at multiple stages of the 3D reconstruction process. First, it can be used to obtain a 3DMM initialization from a sparse set of 3D points. Additionally, we embed our method in the 3DMM fitting process, in which more reliable or already known 3D points, or regions of points, can be weighted in order to preserve their shape information. The proposed method has been tested in two different input settings, scans and 2D pictures, assessing both reconstruction frameworks with very positive results.
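
    With per-point weights and regularization, projecting observations into a statistical shape space amounts to weighted ridge regression over the shape coefficients: minimize ||W^(1/2)(Φb + μ − p)||² + λ||b||². This is a hedged reading of the approach (the paper's exact formulation may differ), and the basis, weights, and data below are synthetic.

      import numpy as np

      rng = np.random.default_rng(3)
      n_coords, n_modes = 300, 10                # e.g., 100 3-D points, 10 shape modes
      mean_shape = rng.normal(size=n_coords)
      basis = np.linalg.qr(rng.normal(size=(n_coords, n_modes)))[0]  # orthonormal modes

      b_true = rng.normal(size=n_modes)
      obs = mean_shape + basis @ b_true + rng.normal(0, 0.05, n_coords)

      # Per-point weights: give, say, landmark coordinates 100x the influence.
      w = np.ones(n_coords)
      w[:30] = 100.0
      lam = 0.1

      # Normal equations of the weighted, ridge-regularized least squares.
      WPhi = basis * w[:, None]
      lhs = basis.T @ WPhi + lam * np.eye(n_modes)
      rhs = WPhi.T @ (obs - mean_shape)
      b = np.linalg.solve(lhs, rhs)
      print("shape-coefficient error:", np.linalg.norm(b - b_true).round(4))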

  3. Cyber threat model for tactical radio networks

    Science.gov (United States)

    Kurdziel, Michael T.

    2014-05-01

    The shift to a full information-centric paradigm on the battlefield has allowed ConOps to be developed that are only possible using modern network communications systems. Securing these Tactical Networks without impacting their capabilities has been a challenge. Tactical networks with fixed infrastructure have similar vulnerabilities to their commercial counterparts (although they need to be secure against adversaries with greater capabilities, resources and motivation). However, networks with mobile infrastructure components and Mobile Ad hoc Networks (MANETs) have additional unique vulnerabilities that must be considered. It is useful to examine Tactical Network-based ConOps and use them to construct a threat model and baseline cyber security requirements for Tactical Networks with fixed infrastructure, mobile infrastructure and/or ad hoc modes of operation. This paper presents an introduction to threat model assessment. A definition and detailed discussion of a Tactical Network threat model is also presented. Finally, the model is used to derive baseline requirements that can be used to design or evaluate a cyber security solution that can be scaled and adapted to the needs of specific deployments.

  4. Tool wear modeling using abductive networks

    Science.gov (United States)

    Masory, Oren

    1992-09-01

    A tool wear model based on Abductive Networks, which consists of a network of `polynomial' nodes, is described. The model relates the cutting parameters, components of the cutting force, and machining time to flank wear. Thus, real-time measurements of the cutting force can be used to monitor the machining process. The model is obtained by a training process in which the connectivity between the network's nodes and the polynomial coefficients of each node are determined by optimizing a performance criterion. Actual wear measurements of coated and uncoated carbide inserts were used for training and evaluating the established model.

  5. iCN718, an Updated and Improved Genome-Scale Metabolic Network Reconstruction of Acinetobacter baumannii AYE.

    Science.gov (United States)

    Norsigian, Charles J; Kavvas, Erol; Seif, Yara; Palsson, Bernhard O; Monk, Jonathan M

    2018-01-01

    Acinetobacter baumannii has become an urgent clinical threat due to the recent emergence of multi-drug resistant strains. There is thus a significant need to discover new therapeutic targets in this organism. One means for doing so is through the use of high-quality genome-scale reconstructions. Well-curated and accurate genome-scale models (GEMs) of A. baumannii would be useful for improving treatment options. We present an updated and improved genome-scale reconstruction of A. baumannii AYE, named iCN718, that improves and standardizes previous A. baumannii AYE reconstructions. iCN718 has 80% accuracy for predicting gene essentiality data and additionally can predict large-scale phenotypic data with as much as 89% accuracy, a new capability for an A. baumannii reconstruction. We further demonstrate that iCN718 can be used to analyze conserved metabolic functions in the A. baumannii core genome and to build strain-specific GEMs of 74 other A. baumannii strains from genome sequence alone. iCN718 will serve as a resource to integrate and synthesize new experimental data being generated for this urgent threat pathogen.

  6. Reconstructing a Network of Stress-Response Regulators via Dynamic System Modeling of Gene Regulation

    Directory of Open Access Journals (Sweden)

    Wei-Sheng Wu

    2008-01-01

    Full Text Available Unicellular organisms such as yeasts have evolved mechanisms to respond to environmental stresses by rapidly reorganizing the gene expression program. Although many stress-response genes in yeast have been discovered by DNA microarrays, the stress-response transcription factors (TFs) that regulate these stress-response genes remain to be investigated. In this study, we use a dynamic system model of gene regulation to describe the mechanism of how TFs may control a gene’s expression. Then, based on the dynamic system model, we develop the Stress Regulator Identification Algorithm (SRIA) to identify stress-response TFs for six kinds of stresses. We identified some general stress-response TFs that respond to various stresses and some specific stress-response TFs that respond to one specific stress. The biological significance of our findings is validated by the literature. We found that a small number of TFs is probably sufficient to control a wide variety of expression patterns in yeast under different stresses. Two implications can be inferred from this observation. First, the response mechanisms to different stresses may have a bow-tie structure. Second, there may be regulatory cross-talk among different stress responses. In conclusion, this study proposes a network of stress-response regulators and the details of their actions.

  7. Multi-scale computational model of three-dimensional hemodynamics within a deformable full-body arterial network

    Energy Technology Data Exchange (ETDEWEB)

    Xiao, Nan [Department of Bioengineering, Stanford University, Stanford, CA 94305 (United States); Department of Biomedical Engineering, King’s College London, London SE1 7EH (United Kingdom); Humphrey, Jay D. [Department of Biomedical Engineering, Yale University, New Haven, CT 06520 (United States); Figueroa, C. Alberto, E-mail: alberto.figueroa@kcl.ac.uk [Department of Biomedical Engineering, King’s College London, London SE1 7EH (United Kingdom)

    2013-07-01

    In this article, we present a computational multi-scale model of fully three-dimensional and unsteady hemodynamics within the primary large arteries in the human. Computed tomography image data from two different patients were used to reconstruct a nearly complete network of the major arteries from head to foot. A linearized coupled-momentum method for fluid–structure interaction was used to describe vessel wall deformability, and a multi-domain method for outflow boundary condition specification was used to account for the distal circulation. We demonstrated that physiologically realistic results can be obtained from the model by comparing simulated quantities such as regional blood flow, pressure and flow waveforms, and pulse wave velocities to known values in the literature. We also simulated the impact of age-related arterial stiffening on wave propagation phenomena by progressively increasing the stiffness of the central arteries and found that the predicted effects on pressure amplification and pulse wave velocity are in agreement with findings in the clinical literature. This work demonstrates the feasibility of three-dimensional techniques for simulating hemodynamics in a full-body compliant arterial network.

  8. Pseudoproxy Experiments Using the BARCAST Reconstruction Technique: Effects on Spatiotemporal Persistence Properties

    Science.gov (United States)

    Nilsen, T.; Divine, D.; Rypdal, M.; Werner, J.; Rypdal, K.

    2016-12-01

    A modified two-dimensional stochastic-diffusive energy balance model (EBM) defined on a sphere was used for generating pseudoproxy/instrumental data and target data for surface temperature. The EBM is described in Rypdal et al. (2015). The target field has prescribed long-range memory (LRM) properties in time and a frequency-dependent autocorrelation function in space. The Bayesian hierarchical model BARCAST was used to generate surface temperature field reconstructions of an area corresponding to the European landmass for the past millennium. BARCAST has a built-in multivariate AR(1) model for the evolution of the temperature field, with an exponential spatial covariance function (Tingley & Huybers, 2010). The AR(1) process has short-range memory, and we seek to find out how the competing spatiotemporal models influence the persistence of the reconstruction. A number of pseudoproxy experiments were performed with a fixed proxy network, using different signal-to-noise ratios (SNR) and colors of noise (white/red). To study the persistence properties, the power-law relation of the power spectral density for LRM processes was used: S(f) ∝ f^(-β). The spectral exponent β was estimated both for local data and for the spatial mean of the full region. The local β for the target varies between 0.1 and 0.4, and for the spatial mean β ≈ 0.6. Results for the reconstructions show that the local and global memory is influenced by the noise color and level. Low noise levels or the absence of noise result in reconstructions that exhibit properties similar to the target, while for higher noise levels (SNR = 0.3 by standard deviation) the reconstructions have memory properties of a white/red character. Since an SNR of 0.5-0.25 is considered realistic for real proxy records, this implies that estimates of temporal persistence from proxy-based reconstructions reflect the proxy noise to a high degree, and not the signal as desired. Rypdal et al., 2015: Spatiotemporal Long-Range Persistence
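
    Estimating the spectral exponent β in S(f) ∝ f^(-β) is typically a linear fit of log S against log f. A minimal version, using spectral synthesis to fake an LRM-like series with a known exponent (parameters illustrative):

      import numpy as np

      rng = np.random.default_rng(7)
      n, beta_true = 4096, 0.6

      # Spectral synthesis: give white noise a f^(-beta/2) amplitude profile.
      freqs = np.fft.rfftfreq(n, d=1.0)
      amp = np.zeros_like(freqs)
      amp[1:] = freqs[1:] ** (-beta_true / 2)
      phases = rng.uniform(0, 2 * np.pi, freqs.size)
      signal = np.fft.irfft(amp * np.exp(1j * phases), n)

      # Periodogram and log-log regression give the spectral exponent.
      power = np.abs(np.fft.rfft(signal)) ** 2
      mask = freqs > 0
      slope, _ = np.polyfit(np.log(freqs[mask]), np.log(power[mask]), 1)
      print(f"estimated beta = {-slope:.2f} (target {beta_true})")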

  9. Large-scale urban point cloud labeling and reconstruction

    Science.gov (United States)

    Zhang, Liqiang; Li, Zhuqiang; Li, Anjian; Liu, Fangyu

    2018-04-01

    The large number of object categories and the many overlapping or closely neighboring objects in large-scale urban scenes pose great challenges for point cloud classification. In this paper, a novel framework is proposed for the classification and reconstruction of airborne laser scanning point cloud data. To label point clouds, we present a rectified linear units neural network, named ReLu-NN, in which rectified linear units (ReLu) are taken as the activation function instead of the traditional sigmoid in order to speed up convergence. Since the features of the point cloud are sparse, we reduce the number of neurons by dropout to avoid over-fitting during training. The set of feature descriptors for each 3D point is encoded through self-taught learning and forms a discriminative feature representation, which is taken as the input of the ReLu-NN. The segmented building points are consolidated through an edge-aware point set resampling algorithm, and then reconstructed into 3D lightweight models using the 2.5D contouring method (Zhou and Neumann, 2010). Compared with deep learning approaches, the ReLu-NN can easily classify unorganized point clouds without rasterizing the data, and it does not need a large number of training samples. Most of the parameters in the network are learned, so the cost of intensive parameter tuning is significantly reduced. Experimental results on various datasets demonstrate that the proposed framework achieves better performance than other related algorithms in terms of classification accuracy and reconstruction quality.

  10. Signal process and profile reconstruction of stress corrosion crack by eddy current test

    International Nuclear Information System (INIS)

    Zhang Siquan; Chen Tiequn; Liu Guixiong

    2008-01-01

    The reconstruction of crack profiles is very important in the NDE (nondestructive evaluation) of critical structures, such as pressure vessels and tubes in heat exchangers. First, a wavelet-transform signal-processing technique is used to remove noise and other non-defect components from the crack signals; then, based on an artificial neural network method, the crack profiles are reconstructed. Although the results reveal that this method has many advantages, such as short CPU time and good reconstruction precision, it does have some drawbacks: for example, database generation and network training are time-consuming. Moreover, this approach does not explicitly reconstruct the distribution of conductivity inside a crack, so the reliability of a reconstructed crack shape is unknown. But in practical applications, if multiple cracks are not considered, this method can be used to reconstruct natural cracks. (authors)
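
    The wavelet denoising stage can be sketched with PyWavelets: decompose the signal, soft-threshold the detail coefficients, and reconstruct. The signal shape, wavelet, and threshold rule below are illustrative choices, not those of the paper.

      import numpy as np
      import pywt

      rng = np.random.default_rng(5)
      t = np.linspace(0, 1, 1024)
      clean = np.exp(-((t - 0.5) / 0.02) ** 2)          # stand-in "crack" response
      noisy = clean + 0.1 * rng.standard_normal(t.size)

      coeffs = pywt.wavedec(noisy, "db4", level=5)
      sigma = np.median(np.abs(coeffs[-1])) / 0.6745    # noise estimate, finest scale
      thresh = sigma * np.sqrt(2 * np.log(noisy.size))  # universal threshold
      coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
      denoised = pywt.waverec(coeffs, "db4")[: noisy.size]

      print("error std before/after:",
            np.std(noisy - clean).round(3), np.std(denoised - clean).round(3))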

  11. Integration of Plant Metabolomics Data with Metabolic Networks: Progresses and Challenges.

    Science.gov (United States)

    Töpfer, Nadine; Seaver, Samuel M D; Aharoni, Asaph

    2018-01-01

    In the last decade, plant genome-scale modeling has developed rapidly and modeling efforts have advanced from representing metabolic behavior of plant heterotrophic cell suspensions to studying the complex interplay of cell types, tissues, and organs. A crucial driving force for such developments is the availability and integration of "omics" data (e.g., transcriptomics, proteomics, and metabolomics) which enable the reconstruction, extraction, and application of context-specific metabolic networks. In this chapter, we demonstrate a workflow to integrate gas chromatography coupled to mass spectrometry (GC-MS)-based metabolomics data of tomato fruit pericarp (flesh) tissue, at five developmental stages, with a genome-scale reconstruction of tomato metabolism. This method allows for the extraction of context-specific networks reflecting changing activities of metabolic pathways throughout fruit development and maturation.

  12. Biological transportation networks: Modeling and simulation

    KAUST Repository

    Albi, Giacomo; Artina, Marco; Foransier, Massimo; Markowich, Peter A.

    2015-01-01

    We present a model for biological network formation originally introduced by Cai and Hu [Adaptation and optimization of biological transport networks, Phys. Rev. Lett. 111 (2013) 138701]. The modeling of fluid transportation (e.g., leaf venation

  13. Synergistic effects in threshold models on networks

    Science.gov (United States)

    Juul, Jonas S.; Porter, Mason A.

    2018-01-01

    Network structure can have a significant impact on the propagation of diseases, memes, and information on social networks. Different types of spreading processes (and other dynamical processes) are affected by network architecture in different ways, and it is important to develop tractable models of spreading processes on networks to explore such issues. In this paper, we incorporate the idea of synergy into a two-state ("active" or "passive") threshold model of social influence on networks. Our model's update rule is deterministic, and the influence of each meme-carrying (i.e., active) neighbor can—depending on a parameter—either be enhanced or inhibited by an amount that depends on the number of active neighbors of a node. Such a synergistic system models social behavior in which the willingness to adopt either accelerates or saturates in a way that depends on the number of neighbors who have adopted that behavior. We illustrate that our model's synergy parameter has a crucial effect on system dynamics, as it determines whether degree-k nodes are possible or impossible to activate. We simulate synergistic meme spreading on both random-graph models and networks constructed from empirical data. Using a heterogeneous mean-field approximation, which we derive under the assumption that a network is locally tree-like, we are able to determine which synergy-parameter values allow degree-k nodes to be activated for many networks and for a broad family of synergistic models.
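
    A deterministic synergistic threshold update of this general kind fits in a few lines; the specific rule below, in which each active neighbor's influence is scaled by 1 + γ(k_a − 1) for k_a active neighbors, is an invented illustration rather than the paper's exact update, but it shows how the synergy parameter γ shifts which nodes can ever activate.

      import random

      def synergy_spread(adj, seeds, gamma, threshold=0.3, steps=50):
          """Deterministic threshold dynamics with a multiplicative synergy term."""
          active = set(seeds)
          for _ in range(steps):
              new = set(active)
              for node, nbrs in adj.items():
                  if node in active:
                      continue
                  k_a = sum(n in active for n in nbrs)
                  if k_a == 0:
                      continue  # also guards against isolated nodes below
                  # gamma > 0 enhances joint influence, gamma < 0 inhibits it.
                  if k_a * (1 + gamma * (k_a - 1)) / len(nbrs) >= threshold:
                      new.add(node)
              if new == active:
                  break
              active = new
          return active

      # Small Erdos-Renyi-style random graph, built without external libraries.
      rng = random.Random(42)
      n = 200
      adj = {i: set() for i in range(n)}
      for i in range(n):
          for j in range(i + 1, n):
              if rng.random() < 0.03:
                  adj[i].add(j)
                  adj[j].add(i)

      seeds = range(5)
      for gamma in (-0.2, 0.0, 0.3):
          print(f"gamma = {gamma:+.1f}: {len(synergy_spread(adj, seeds, gamma))} active")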

  14. Cross disease analysis of co-functional microRNA pairs on a reconstructed network of disease-gene-microRNA tripartite.

    Science.gov (United States)

    Peng, Hui; Lan, Chaowang; Zheng, Yi; Hutvagner, Gyorgy; Tao, Dacheng; Li, Jinyan

    2017-03-24

    MicroRNAs always function cooperatively in their regulation of gene expression. Dysfunctions of these co-functional microRNAs can play significant roles in disease development. We are interested in those multi-disease-associated co-functional microRNAs that regulate their common dysfunctional target genes cooperatively in the development of multiple diseases. The research is potentially useful for human disease studies at the transcriptional level and for the study of multi-purpose microRNA therapeutics. We designed a computational method to detect multi-disease-associated co-functional microRNA pairs and conducted a cross-disease analysis on a reconstructed disease-gene-microRNA (DGR) tripartite network. The DGR tripartite network is constructed by integrating newly predicted disease-microRNA associations with the relationships of diseases, microRNAs and genes maintained by existing databases. The prediction method uses a set of reliable negative samples of disease-microRNA association and a pre-computed kernel matrix instead of kernel functions. From this reconstructed DGR tripartite network, multi-disease-associated co-functional microRNA pairs are detected together with their common dysfunctional target genes and ranked by a novel scoring method. We also conducted proof-of-concept case studies on cancer-related co-functional microRNA pairs as well as on non-cancer disease-related microRNA pairs. With the prioritization of the co-functional microRNAs related to a series of diseases, we found that the co-function phenomenon is not unusual. We also confirmed that the regulation by microRNAs of the development of cancers is more complex, and has more unique properties, than that of non-cancer diseases.

  15. Modeling geomagnetic induced currents in Australian power networks

    Science.gov (United States)

    Marshall, R. A.; Kelly, A.; Van Der Walt, T.; Honecker, A.; Ong, C.; Mikkelsen, D.; Spierings, A.; Ivanovich, G.; Yoshikawa, A.

    2017-07-01

    Geomagnetic induced currents (GICs) have been considered an issue for high-latitude power networks for some decades. More recently, GICs have been observed and studied in power networks located in lower latitude regions. This paper presents the results of a model aimed at predicting and understanding the impact of geomagnetic storms on power networks in Australia, with particular focus on the Queensland and Tasmanian networks. The model incorporates a "geoelectric field" determined using a plane wave magnetic field incident on a uniform conducting Earth, and the network model developed by Lehtinen and Pirjola (1985). Model results for two intense geomagnetic storms of solar cycle 24 are compared with transformer neutral monitors at three locations within the Queensland network and one location within the Tasmanian network. The model is then used to assess the impacts of the superintense geomagnetic storm of 29-31 October 2003 on the flow of GICs within these networks. The model results show good correlation with the observations with coefficients ranging from 0.73 to 0.96 across the observing sites. For Queensland, modeled GIC magnitudes during the superstorm of 29-31 October 2003 exceed 40 A with the larger GICs occurring in the south-east section of the network. Modeled GICs in Tasmania for the same storm do not exceed 30 A. The larger distance spans and general east-west alignment of the southern section of the Queensland network, in conjunction with some relatively low branch resistance values, result in larger modeled GICs despite Queensland being a lower latitude network than Tasmania.

  16. Development of acoustic model-based iterative reconstruction technique for thick-concrete imaging

    Science.gov (United States)

    Almansouri, Hani; Clayton, Dwight; Kisner, Roger; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector

    2016-02-01

    Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogeneous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogeneous thick objects. One example application space is the imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rock. Ultrasound systems with greater penetration range and image quality will allow better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, in which the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. This paper documents the first implementation of MBIR for ultrasonic signals and shows reconstruction results for synthetically generated data.

  19. Modeling Distillation Column Using ARX Model Structure and Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Reza Pirmoradi

    2012-04-01

    Full Text Available Distillation is a complex and highly nonlinear industrial process. In general it is not always possible to obtain accurate first-principles models for high-purity distillation columns, and the development of first-principles models is usually time consuming and expensive. To overcome these problems, empirical models such as neural networks can be used. One major drawback of empirical models is that the prediction is valid only inside the data domain that is sufficiently covered by measurement data. Modeling distillation columns by means of neural networks has been reported in the literature using recursive networks. Recursive networks are suitable for modeling purposes, but such models suffer from high complexity and high computational cost. The objective of this paper is to propose a simple and reliable model for a distillation column. The proposed model uses feedforward neural networks, which results in a simple model with fewer parameters and faster training time. Simulation results demonstrate that the predictions of the proposed model in all regions are close to the outputs of the dynamic model and the error is negligible. This implies that the model is reliable in all regions.
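
    An ARX structure predicts the output from lagged outputs and inputs, y[k] = a₁y[k−1] + a₂y[k−2] + b₁u[k−1] + e[k], and fitting it is ordinary least squares; the feedforward network in the paper essentially replaces this linear map with a nonlinear one. A linear-ARX sketch on synthetic input/output data (orders and coefficients invented):

      import numpy as np

      rng = np.random.default_rng(2)
      N = 500
      u = rng.uniform(-1, 1, N)                 # input, e.g., reboiler duty
      y = np.zeros(N)
      for k in range(2, N):                     # synthetic "true" plant
          y[k] = (0.7 * y[k - 1] - 0.1 * y[k - 2] + 0.5 * u[k - 1]
                  + 0.01 * rng.standard_normal())

      # Regressor matrix for orders na = 2, nb = 1, solved by least squares.
      Phi = np.array([[y[k - 1], y[k - 2], u[k - 1]] for k in range(2, N)])
      theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
      print("estimated [a1, a2, b1]:", theta.round(3))  # expect ~[0.7, -0.1, 0.5]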

  20. TIGER: Toolbox for integrating genome-scale metabolic models, expression data, and transcriptional regulatory networks

    Directory of Open Access Journals (Sweden)

    Jensen Paul A

    2011-09-01

    Full Text Available Abstract Background Several methods have been developed for analyzing genome-scale models of metabolism and transcriptional regulation. Many of these methods, such as Flux Balance Analysis, use constrained optimization to predict relationships between metabolic flux and the genes that encode and regulate enzyme activity. Recently, mixed integer programming has been used to encode these gene-protein-reaction (GPR) relationships into a single optimization problem, but these techniques are often of limited generality and lack a tool for automating the conversion of rules to a coupled regulatory/metabolic model. Results We present TIGER, a Toolbox for Integrating Genome-scale Metabolism, Expression, and Regulation. TIGER converts a series of generalized, Boolean or multilevel rules into a set of mixed integer inequalities. The package also includes implementations of existing algorithms to integrate high-throughput expression data with genome-scale models of metabolism and transcriptional regulation. We demonstrate how TIGER automates the coupling of a genome-scale metabolic model with GPR logic and models of transcriptional regulation, thereby serving as a platform for algorithm development and large-scale metabolic analysis. Additionally, we demonstrate how TIGER's algorithms can be used to identify inconsistencies and improve existing models of transcriptional regulation, with examples from the reconstructed transcriptional regulatory network of Saccharomyces cerevisiae. Conclusion The TIGER package provides a consistent platform for algorithm development and for extending existing genome-scale metabolic models with regulatory networks and high-throughput data.

  1. Feature network models for proximity data : statistical inference, model selection, network representations and links with related models

    NARCIS (Netherlands)

    Frank, Laurence Emmanuelle

    2006-01-01

    Feature Network Models (FNM) are graphical structures that represent proximity data in a discrete space with the use of features. A statistical inference theory is introduced, based on the additivity properties of networks and the linear regression framework. Considering features as predictor

  2. Improved convergence of gradient-based reconstruction using multi-scale models

    International Nuclear Information System (INIS)

    Cunningham, G.S.; Hanson, K.M.; Koyfman, I.

    1996-01-01

    Geometric models have received increasing attention in medical imaging for tasks such as segmentation, reconstruction, restoration, and registration. In order to determine the best configuration of the geometric model in the context of any of these tasks, one needs to perform a difficult global optimization of an energy function that may have many local minima. Explicit models of geometry, also called deformable models, snakes, or active contours, have been used extensively to solve image segmentation problems in a non-Bayesian framework. Researchers have seen empirically that multi-scale analysis is useful for convergence to a configuration that is near the global minimum. In this type of analysis, the image data are convolved with blur functions of increasing resolution, and an optimal configuration of the snake is found for each blurred image. The configuration obtained using the highest resolution blur is used as the solution to the global optimization problem. In this article, the authors use explicit models of geometry for a variety of Bayesian estimation problems, including image segmentation, reconstruction and restoration. The authors introduce a multi-scale approach that blurs the geometric model, rather than the image data, and show that this approach turns a global, highly nonquadratic optimization into a sequence of local, approximately quadratic problems that converge to the global minimum. The result is a deterministic, robust, and efficient optimization strategy applicable to a wide variety of Bayesian estimation problems in which geometric models of images are an important component

  3. Nonparametric Bayesian Modeling of Complex Networks

    DEFF Research Database (Denmark)

    Schmidt, Mikkel Nørgaard; Mørup, Morten

    2013-01-01

    Modeling structure in complex networks using Bayesian nonparametrics makes it possible to specify flexible model structures and infer the adequate model complexity from the observed data. This article provides a gentle introduction to nonparametric Bayesian modeling of complex networks: using an infinite mixture model as a running example, we go through the steps of deriving the model as an infinite limit of a finite parametric model, inferring the model parameters by Markov chain Monte Carlo, and checking the model's fit and predictive performance. We explain how advanced nonparametric models...

  4. Modelling and designing electric energy networks

    International Nuclear Information System (INIS)

    Retiere, N.

    2003-11-01

    The author gives an overview of his research works in the field of electric network modelling. After a brief overview of technological evolutions from the telegraph to the all-electric fly-by-wire aircraft, he reports and describes various works dealing with simplified modelling of electric systems and with fractal simulation. He then outlines the challenges for the design of electric networks, proposes a design process, gives an overview of various design models, methods and tools, and reports an application to the design of electric networks for future jumbo jets.

  5. Determining Regulatory Networks Governing the Differentiation of Embryonic Stem Cells to Pancreatic Lineage

    Science.gov (United States)

    Banerjee, Ipsita

    2009-03-01

    Knowledge of the pathways governing cellular differentiation to a specific phenotype will enable the generation of desired cell fates through careful alteration of the governing network by adequate manipulation of the cellular environment. With this aim, we have developed a novel method to reconstruct the underlying regulatory architecture of a differentiating cell population from discrete temporal gene expression data. We utilize an inherent feature of biological networks, that of sparsity, in formulating the network reconstruction problem as a bi-level mixed-integer programming problem. The formulation optimizes the network topology at the upper level and the network connectivity strength at the lower level. The method is first validated on in-silico data before being applied to the complex system of embryonic stem (ES) cell differentiation. This formulation enables efficient identification of the underlying network topology, which could accurately predict the steps necessary for directing differentiation to subsequent stages. Concurrent experimental verification demonstrated excellent agreement with model predictions.

  6. Modelling traffic congestion using queuing networks

    Indian Academy of Sciences (India)

    Flow-density curves; uninterrupted traffic; Jackson networks. ... ness - also suffer from a big handicap vis-a-vis the Indian scenario: most of these models do .... more well-known queuing network models and onsite data, a more exact Road Cell ...

  7. From GCode to STL: Reconstruct Models from 3D Printing as a Service

    Science.gov (United States)

    Baumann, Felix W.; Schuermann, Martin; Odefey, Ulrich; Pfeil, Markus

    2017-12-01

    The authors present a method to reverse engineer 3D-printer-specific machine instructions (GCode) into a point cloud representation and then into the STL (Stereolithography) file format. GCode is a machine code used for 3D printing, among other applications such as CNC routers. Such code files contain instructions for the 3D printer to move and control its actuator; in the case of Fused Deposition Modeling (FDM), this is the printhead that extrudes semi-molten plastic. The reverse engineering method presented here is based on a digital simulation of the extrusion process of FDM-type 3D printing. The reconstructed models and point clouds do not account for hollow structures, such as holes or cavities. The implementation is written in Python and relies on open source software and libraries, such as Matplotlib and OpenCV. The reconstruction is performed on the model’s extrusion boundary and accounts for mechanical imprecision. The complete reconstruction mechanism is available as a RESTful (Representational State Transfer) web service.
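
    The first step of such a pipeline, turning GCode motion commands into a point cloud, can be sketched by tracking the toolhead through G0/G1 moves and recording a point whenever the extrusion axis advances. This generic parser (absolute positioning assumed) is written for illustration and is not the authors' implementation.

      def gcode_to_points(lines):
          """Collect (x, y, z) points visited by extruding moves (G1 with E advance)."""
          pos = {"X": 0.0, "Y": 0.0, "Z": 0.0, "E": 0.0}
          points = []
          for line in lines:
              line = line.split(";", 1)[0].strip()   # drop comments
              if not line:
                  continue
              words = line.split()
              if words[0] not in ("G0", "G1"):
                  continue
              prev_e = pos["E"]
              for w in words[1:]:
                  if w[0] in pos:
                      pos[w[0]] = float(w[1:])
              if pos["E"] > prev_e:                  # material was extruded
                  points.append((pos["X"], pos["Y"], pos["Z"]))
          return points

      sample = [
          "G0 X0 Y0 Z0.2",
          "G1 X10 Y0 E1.0 ; first edge",
          "G1 X10 Y10 E2.0",
          "G0 X0 Y0",
      ]
      print(gcode_to_points(sample))  # [(10.0, 0.0, 0.2), (10.0, 10.0, 0.2)]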

  8. Modeling, Optimization & Control of Hydraulic Networks

    DEFF Research Database (Denmark)

    Tahavori, Maryamsadat

    2014-01-01

    Water supply systems consist of a number of pumping stations, which deliver water to the customers via pipeline networks and elevated reservoirs. A huge amount of drinking water is lost before it reaches end-users due to leakage in pipe networks. A cost-effective solution to reduce leakage in water networks is pressure management: by reducing the pressure in the water network, the leakage can be reduced significantly, and the energy consumption of the network is also reduced. The primary purpose of this work is to develop control algorithms for pressure control in water supply systems. The nonlinear network model is derived based on circuit theory, and a suitable projection is used to reduce the state vector and to express the model in standard state-space form. Then, the controllability of nonlinear, nonaffine hydraulic networks is studied; the Lie algebra-based controllability matrix is used...

  9. Mirror-Imaged Rapid Prototype Skull Model and Pre-Molded Synthetic Scaffold to Achieve Optimal Orbital Cavity Reconstruction.

    Science.gov (United States)

    Park, Sung Woo; Choi, Jong Woo; Koh, Kyung S; Oh, Tae Suk

    2015-08-01

    Reconstruction of traumatic orbital wall defects has evolved to restore the original complex anatomy with the rapidly growing use of computer-aided design and prototyping. This study evaluated a mirror-imaged rapid prototype skull model and a pre-molded synthetic scaffold for traumatic orbital wall reconstruction. A single-center retrospective review was performed of patients who underwent orbital wall reconstruction after trauma from 2012 to 2014. Patients were included by admission through the emergency department after facial trauma or by a tertiary referral for post-traumatic orbital deformity. Three-dimensional (3D) computed tomogram-based mirror-imaged reconstruction images of the orbit and an individually manufactured rapid prototype skull model by a 3D printing technique were obtained for each case. Synthetic scaffolds were anatomically pre-molded using the skull model as guide and inserted at the individual orbital defect. Postoperative complications were assessed and 3D volumetric measurements of the orbital cavity were performed. Paired samples t test was used for statistical analysis. One hundred four patients with immediate orbital defect reconstructions and 23 post-traumatic orbital deformity reconstructions were included in this study. All reconstructions were successful without immediate postoperative complications, although there were 10 cases with mild enophthalmos and 2 cases with persistent diplopia. Reoperations were performed for 2 cases of persistent diplopia and secondary touchup procedures were performed to contour soft tissue in 4 cases. Postoperative volumetric measurement of the orbital cavity showed nonsignificant volume differences between the damaged orbit and the reconstructed orbit (21.35 ± 1.93 vs 20.93 ± 2.07 cm³; P = .98). This protocol was extended to severe cases in which more than 40% of the orbital frame was lost and combined with extensive soft tissue defects. Traumatic orbital reconstruction can be optimized and

  10. A random spatial network model based on elementary postulates

    Science.gov (United States)

    Karlinger, Michael R.; Troutman, Brent M.

    1989-01-01

    A model for generating random spatial networks that is based on elementary postulates comparable to those of the random topology model is proposed. In contrast to the random topology model, this model ascribes a unique spatial specification to generated drainage networks, a distinguishing property of some network growth models. The simplicity of the postulates creates an opportunity for potential analytic investigations of the probabilistic structure of the drainage networks, while the spatial specification enables analyses of spatially dependent network properties. In the random topology model all drainage networks, conditioned on magnitude (number of first-order streams), are equally likely, whereas in this model all spanning trees of a grid, conditioned on area and drainage density, are equally likely. As a result, link lengths in the generated networks are not independent, as usually assumed in the random topology model. For a preliminary model evaluation, scale-dependent network characteristics, such as geometric diameter and link length properties, and topologic characteristics, such as bifurcation ratio, are computed for sets of drainage networks generated on square and rectangular grids. Statistics of the bifurcation and length ratios fall within the range of values reported for natural drainage networks, but geometric diameters tend to be relatively longer than those for natural networks.
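
    The property that all spanning trees of a grid are equally likely connects this model to uniform spanning trees, which can be sampled exactly by Wilson's algorithm via loop-erased random walks. The generic sampler below illustrates "equally likely spanning trees" on a small grid; it is not claimed to be the paper's generation procedure.

      import random

      def grid_neighbors(x, y, w, h):
          for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
              if 0 <= x + dx < w and 0 <= y + dy < h:
                  yield (x + dx, y + dy)

      def wilson_spanning_tree(w, h, seed=0):
          """Uniformly random spanning tree of a w-by-h grid (Wilson's algorithm)."""
          rng = random.Random(seed)
          in_tree = {(0, 0)}
          parent = {}
          for start in [(x, y) for x in range(w) for y in range(h)]:
              node, path = start, {}
              while node not in in_tree:            # random walk to the tree,
                  nxt = rng.choice(list(grid_neighbors(*node, w, h)))
                  path[node] = nxt                  # remembering only the last exit
                  node = nxt                        # (this erases loops implicitly)
              node = start
              while node not in in_tree:            # retrace the loop-erased path
                  parent[node] = path[node]
                  in_tree.add(node)
                  node = path[node]
          return parent                             # child -> parent edges

      tree = wilson_spanning_tree(6, 6)
      print("edges in spanning tree:", len(tree))   # 35 = 6*6 - 1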

  11. Comparison of adaptive statistical iterative reconstruction (ASiR™) and model-based iterative reconstruction (Veo™) for paediatric abdominal CT examinations: an observer performance study of diagnostic image quality

    International Nuclear Information System (INIS)

    Hultenmo, Maria; Caisander, Haakan; Mack, Karsten; Thilander-Klang, Anne

    2016-01-01

    The diagnostic image quality of 75 paediatric abdominal computed tomography (CT) examinations reconstructed with two different iterative reconstruction (IR) algorithms, adaptive statistical IR (ASiR™) and model-based IR (Veo™), was compared. Axial and coronal images were reconstructed with 70% ASiR with the Soft™ convolution kernel and with the Veo algorithm. The thickness of the reconstructed images was 2.5 or 5 mm, depending on the scanning protocol used. Four radiologists graded the delineation of six abdominal structures and the diagnostic usefulness of the image quality. The Veo reconstruction significantly improved the visibility of most of the structures compared with ASiR in all subgroups of images. For coronal images, the Veo reconstruction resulted in significantly improved ratings of the diagnostic use of the image quality compared with the ASiR reconstruction; this was not seen for the axial images. The greatest improvement using Veo reconstruction was observed for the 2.5 mm coronal slices. (authors)

  12. Modeling Network Traffic in Wavelet Domain

    Directory of Open Access Journals (Sweden)

    Sheng Ma

    2004-12-01

    Full Text Available This work discovers that although network traffic has complicated short- and long-range temporal dependence, the corresponding wavelet coefficients are no longer long-range dependent. Therefore, a "short-range" dependent process can be used to model network traffic in the wavelet domain. Both independent and Markov models are investigated. Theoretical analysis shows that the independent wavelet model is sufficiently accurate in terms of the buffer overflow probability for Fractional Gaussian Noise traffic. Any model which captures additional correlations in the wavelet domain only improves the performance marginally. The independent wavelet model is then used as a unified approach to model network traffic, including VBR MPEG video and Ethernet data. The computational complexity is O(N) for developing such wavelet models and generating synthesized traffic of length N, which is among the lowest attained.
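
    As a rough illustration of the independent wavelet model (our sketch, not the paper's code), the snippet below estimates per-scale coefficient statistics from a measured trace with PyWavelets, redraws the coefficients independently, and inverts the transform. The Haar wavelet and Gaussian marginals are assumptions made here for brevity.

        # Sketch: fit per-scale wavelet coefficient statistics, then sample
        # coefficients independently at each scale and invert the transform.
        # Both analysis and synthesis are O(N) in the trace length.
        import numpy as np
        import pywt

        def synthesize_like(trace, wavelet="haar", seed=0):
            rng = np.random.default_rng(seed)
            coeffs = pywt.wavedec(np.asarray(trace, dtype=float), wavelet)
            synth = [rng.normal(c.mean(), c.std() + 1e-12, size=c.shape)
                     for c in coeffs]          # independent wavelet model
            return pywt.waverec(synth, wavelet)

        measured = np.abs(np.random.default_rng(1).standard_normal(1024)).cumsum()
        print(synthesize_like(measured)[:5])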

  13. Implementing network constraints in the EMPS model

    Energy Technology Data Exchange (ETDEWEB)

    Helseth, Arild; Warland, Geir; Mo, Birger; Fosso, Olav B.

    2010-02-15

    This report concerns the coupling of detailed market and network models for long-term hydro-thermal scheduling. Currently, the EPF model (Samlast) is the only tool available for this task for actors in the Nordic market. A new prototype for solving the coupled market and network problem has been developed. The prototype is based on the EMPS model (Samkjoeringsmodellen). Results from the market model are distributed to a detailed network model, where a DC load flow detects if there are overloads on monitored lines or intersections. In case of overloads, network constraints are generated and added to the market problem. Theoretical and implementation details for the new prototype are elaborated in this report. The performance of the prototype is tested against the EPF model on a 20-area Nordic dataset. (Author)
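
    To make the coupling concrete, here is a minimal DC load flow overload check in the spirit described above; the 4-bus system, reactances and limits are invented for illustration and have nothing to do with the Nordic dataset or the EMPS/EPF implementations.

        # Sketch: given net injections from a market solution, run a DC load
        # flow and flag overloaded monitored lines (invented 4-bus data).
        import numpy as np

        lines = [(0, 1, 0.1, 1.0), (1, 2, 0.2, 0.8), (0, 2, 0.2, 0.5), (2, 3, 0.1, 1.0)]
        P = np.array([1.2, -0.2, -0.4, -0.6])   # net injection per bus, sums to 0

        n = len(P)
        B = np.zeros((n, n))
        for i, j, x, _ in lines:                # susceptance matrix from reactances
            B[i, i] += 1 / x; B[j, j] += 1 / x
            B[i, j] -= 1 / x; B[j, i] -= 1 / x

        theta = np.zeros(n)
        theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])   # bus 0 is the slack bus

        for i, j, x, limit in lines:
            flow = (theta[i] - theta[j]) / x
            if abs(flow) > limit:               # would trigger a new constraint
                print(f"overload on line {i}-{j}: {flow:+.2f} pu > {limit} pu")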

  14. Malware Propagation and Prevention Model for Time-Varying Community Networks within Software Defined Networks

    Directory of Open Access Journals (Sweden)

    Lan Liu

    2017-01-01

    Full Text Available As the adoption of Software Defined Networks (SDNs) grows, the security of SDN still has several unaddressed limitations. A key network security research area is the study of malware propagation across SDN-enabled networks. To analyze the spreading processes of network malware (e.g., viruses) in SDN, we propose a dynamic model with a time-varying community network, inspired by research models on the spread of epidemics in complex networks across communities. We treat subnets of the network as communities, with links that are dense within subnets but sparse between them. Using numerical simulation and theoretical analysis, we find that the efficiency of network malware propagation in this model depends on the mobility rate q of the nodes between subnets. We also find that there exists a mobility rate threshold qc: the network malware will spread and survive in the SDN when q > qc and perish when q < qc. The simulation results show that the model is effective, and the results may help to decide the SDN control strategy to defend against network malware and provide a theoretical basis to reduce and prevent network security incidents.
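
    The mobility-threshold behaviour can be illustrated with a toy SIR-style simulation (ours, with invented parameters, not the paper's model): contacts occur only inside a node's current subnet, and infected nodes hop to a random subnet with probability q per step. With q = 0 the outbreak stays confined to the seeded subnet; larger q lets it reach the rest of the network.

        # Toy illustration of mobility-dependent spreading between subnets.
        import random

        def final_attack_rate(q, subnets=20, size=25, beta=0.3, gamma=0.1, seed=0):
            rng = random.Random(seed)
            n = subnets * size
            loc = [i // size for i in range(n)]    # current subnet of each node
            state = ["S"] * n
            for v in range(3):                     # seed outbreak in subnet 0
                state[v] = "I"
            while "I" in state:
                for v in range(n):
                    if state[v] != "I":
                        continue
                    peers = [u for u in range(n) if loc[u] == loc[v] and u != v]
                    u = rng.choice(peers)          # one in-subnet contact per step
                    if state[u] == "S" and rng.random() < beta:
                        state[u] = "I"
                    if rng.random() < gamma:
                        state[v] = "R"
                    if rng.random() < q:
                        loc[v] = rng.randrange(subnets)   # mobility between subnets
            return state.count("R") / n

        for q in (0.0, 0.02, 0.2):
            print(q, final_attack_rate(q))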

  15. Creating, generating and comparing random network models with NetworkRandomizer.

    Science.gov (United States)

    Tosadori, Gabriele; Bestvina, Ivan; Spoto, Fausto; Laudanna, Carlo; Scardoni, Giovanni

    2016-01-01

    Biological networks are becoming a fundamental tool for the investigation of high-throughput data in several fields of biology and biotechnology. With the increasing amount of information, network-based models are gaining more and more interest and new techniques are required in order to mine the information and to validate the results. To fill the validation gap we present an app, for the Cytoscape platform, which aims at creating randomised networks and randomising existing, real networks. Since there is a lack of tools that allow performing such operations, our app aims at enabling researchers to exploit different, well known random network models that could be used as a benchmark for validating real, biological datasets. We also propose a novel methodology for creating random weighted networks, i.e. the multiplication algorithm, starting from real, quantitative data. Finally, the app provides a statistical tool that compares real versus randomly computed attributes, in order to validate the numerical findings. In summary, our app aims at creating a standardised methodology for the validation of the results in the context of the Cytoscape platform.
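
    The validation pattern the app supports can be sketched as follows (hypothetical usage with networkx, not the NetworkRandomizer API): compare an observed metric against its distribution over degree-preserving randomisations of the real network.

        # Sketch: empirical p-value of a network statistic against a null
        # distribution built from degree-preserving edge swaps.
        import networkx as nx

        real = nx.karate_club_graph()
        obs = nx.average_clustering(real)

        null = []
        for s in range(100):
            G = real.copy()
            nx.double_edge_swap(G, nswap=10 * G.number_of_edges(),
                                max_tries=10**5, seed=s)
            null.append(nx.average_clustering(G))

        p = sum(x >= obs for x in null) / len(null)   # one-sided p-value
        print(f"clustering {obs:.3f}, null mean {sum(null)/len(null):.3f}, p = {p:.2f}")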

  16. Model-based iterative reconstruction and adaptive statistical iterative reconstruction: dose-reduced CT for detecting pancreatic calcification

    International Nuclear Information System (INIS)

    Yasaka, Koichiro; Katsura, Masaki; Akahane, Masaaki; Sato, Jiro; Matsuda, Izuru; Ohtomo, Kuni

    2016-01-01

    Iterative reconstruction methods have attracted attention for reducing radiation doses in computed tomography (CT). To investigate the detectability of pancreatic calcification using dose-reduced CT reconstructed with model-based iterative reconstruction (MBIR) and adaptive statistical iterative reconstruction (ASIR). This prospective study, approved by the Institutional Review Board, included 85 patients (57 men, 28 women; mean age, 69.9 years; mean body weight, 61.2 kg). Unenhanced CT was performed three times with different radiation doses (reference-dose CT [RDCT], low-dose CT [LDCT], ultralow-dose CT [ULDCT]). From RDCT, LDCT, and ULDCT, images were reconstructed with filtered-back projection (R-FBP, used for establishing the reference standard), ASIR (L-ASIR), and MBIR and ASIR (UL-MBIR and UL-ASIR), respectively. A lesion (pancreatic calcification) detection test was performed by two blinded radiologists with a five-point certainty level scale. Dose-length products of RDCT, LDCT, and ULDCT were 410, 97, and 36 mGy-cm, respectively. Nine patients had pancreatic calcification. The sensitivity for detecting pancreatic calcification was high with UL-MBIR (0.67–0.89) compared to L-ASIR or UL-ASIR (0.11–0.44), and a significant difference was seen between UL-MBIR and UL-ASIR for one reader (P = 0.014). The area under the receiver-operating characteristic curve for UL-MBIR (0.818–0.860) was comparable to that for L-ASIR (0.696–0.844). The specificity was lower with UL-MBIR (0.79–0.92) than with L-ASIR or UL-ASIR (0.96–0.99), and a significant difference was seen for one reader (P < 0.01). In UL-MBIR, pancreatic calcification can be detected with high sensitivity; however, attention should be paid to the slightly lower specificity

  17. Model-based iterative reconstruction and adaptive statistical iterative reconstruction: dose-reduced CT for detecting pancreatic calcification.

    Science.gov (United States)

    Yasaka, Koichiro; Katsura, Masaki; Akahane, Masaaki; Sato, Jiro; Matsuda, Izuru; Ohtomo, Kuni

    2016-01-01

    Iterative reconstruction methods have attracted attention for reducing radiation doses in computed tomography (CT). To investigate the detectability of pancreatic calcification using dose-reduced CT reconstructed with model-based iterative reconstruction (MBIR) and adaptive statistical iterative reconstruction (ASIR). This prospective study, approved by the Institutional Review Board, included 85 patients (57 men, 28 women; mean age, 69.9 years; mean body weight, 61.2 kg). Unenhanced CT was performed three times with different radiation doses (reference-dose CT [RDCT], low-dose CT [LDCT], ultralow-dose CT [ULDCT]). From RDCT, LDCT, and ULDCT, images were reconstructed with filtered-back projection (R-FBP, used for establishing the reference standard), ASIR (L-ASIR), and MBIR and ASIR (UL-MBIR and UL-ASIR), respectively. A lesion (pancreatic calcification) detection test was performed by two blinded radiologists with a five-point certainty level scale. Dose-length products of RDCT, LDCT, and ULDCT were 410, 97, and 36 mGy-cm, respectively. Nine patients had pancreatic calcification. The sensitivity for detecting pancreatic calcification was high with UL-MBIR (0.67-0.89) compared to L-ASIR or UL-ASIR (0.11-0.44), and a significant difference was seen between UL-MBIR and UL-ASIR for one reader (P = 0.014). The area under the receiver-operating characteristic curve for UL-MBIR (0.818-0.860) was comparable to that for L-ASIR (0.696-0.844). The specificity was lower with UL-MBIR (0.79-0.92) than with L-ASIR or UL-ASIR (0.96-0.99), and a significant difference was seen for one reader (P < 0.01). In UL-MBIR, pancreatic calcification can be detected with high sensitivity; however, attention should be paid to the slightly lower specificity.

  18. Stochastic Boolean networks: An efficient approach to modeling gene regulatory networks

    Directory of Open Access Journals (Sweden)

    Liang Jinghang

    2012-08-01

    Full Text Available Background: Various computational models have been of interest due to their use in the modelling of gene regulatory networks (GRNs). As a logical model, probabilistic Boolean networks (PBNs) consider molecular and genetic noise, so the study of PBNs provides significant insights into the understanding of the dynamics of GRNs. This will ultimately lead to advances in developing therapeutic methods that intervene in the process of disease development and progression. The applications of PBNs, however, are hindered by the complexities involved in the computation of the state transition matrix and the steady-state distribution of a PBN. For a PBN with n genes and N Boolean networks, the complexity to compute the state transition matrix is O(nN·2^(2n)), or O(nN·2^n) for a sparse matrix. Results: This paper presents a novel implementation of PBNs based on the notions of stochastic logic and stochastic computation. This stochastic implementation of a PBN is referred to as a stochastic Boolean network (SBN). An SBN provides an accurate and efficient simulation of a PBN without and with random gene perturbation. The state transition matrix is computed in an SBN with a complexity of O(nL·2^n), where L is a factor related to the stochastic sequence length. Since the minimum sequence length required for a given evaluation accuracy increases approximately polynomially with the number of genes, n, while the number of Boolean networks, N, usually increases exponentially with n, L is typically smaller than N, especially in a network with a large number of genes. Hence, the computational efficiency of an SBN is primarily limited by the number of genes, rather than directly by the total possible number of Boolean networks. Furthermore, a time-frame expanded SBN enables an efficient analysis of the steady-state distribution of a PBN. These findings are supported by the simulation results of a simplified p53 network, several randomly generated networks and a
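
    The underlying stochastic-computation idea is easy to demonstrate (our minimal illustration, not the paper's SBN implementation): a probability is encoded as a random bitstream, a logic gate then operates directly on probabilities, and accuracy grows with the sequence length L.

        # Sketch: stochastic logic. An AND gate applied to two Bernoulli
        # bitstreams estimates the product of the encoded probabilities.
        import numpy as np

        rng = np.random.default_rng(0)
        L = 10_000                          # stochastic sequence length
        p_a, p_b = 0.8, 0.3

        a = rng.random(L) < p_a             # bitstream encoding p_a
        b = rng.random(L) < p_b             # bitstream encoding p_b
        est = np.mean(a & b)                # AND of streams ~ p_a * p_b
        print(est, p_a * p_b)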

  19. CT image reconstruction system based on hardware implementation

    International Nuclear Information System (INIS)

    Silva, Hamilton P. da; Evseev, Ivan; Schelin, Hugo R.; Paschuk, Sergei A.; Milhoretto, Edney; Setti, Joao A.P.; Zibetti, Marcelo; Hormaza, Joel M.; Lopes, Ricardo T.

    2009-01-01

    Full text: The timing factor is very important for medical imaging systems, which can nowadays be synchronized by vital human signals, like heartbeats or breath. The use of hardware-implemented devices in such a system has advantages, considering the high speed of information treatment combined with arbitrarily low cost on the market. This article refers to a hardware system based on electronic programmable logic called FPGA, model Cyclone II from ALTERA Corporation. The hardware was implemented on the UP3 ALTERA Kit. A partially connected neural network with unitary weights was programmed. The system was tested with 60 tomographic projections, 100 points in each, of the Shepp and Logan phantom created by MATLAB. The main restriction was found to be the memory size available on the device: the dynamic range of the reconstructed image was limited to 0–65535. Also, the normalization factor must be observed in order not to saturate the image during the reconstruction and filtering process. The test shows a principal possibility to build CT image reconstruction systems for any reasonable amount of input data by arranging the parallel work of the hardware units as we have tested. However, further studies are necessary for better understanding of the error propagation from tomographic projections to the reconstructed image within the implemented method. (author)
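
    For readers without the hardware, the same reconstruction task can be reproduced in software; the sketch below uses scikit-image as a stand-in (the original work used MATLAB-generated projections and an FPGA), with 60 projections of a resized Shepp-Logan phantom.

        # Software sketch of the reconstruction task: 60 projections of the
        # Shepp-Logan phantom, filtered back projection via scikit-image.
        import numpy as np
        from skimage.data import shepp_logan_phantom
        from skimage.transform import radon, iradon, resize

        image = resize(shepp_logan_phantom(), (100, 100))    # 100-point projections
        angles = np.linspace(0.0, 180.0, 60, endpoint=False) # 60 projection angles
        sinogram = radon(image, theta=angles, circle=True)
        recon = iradon(sinogram, theta=angles, filter_name="ramp", circle=True)
        print(recon.shape, float(np.abs(recon - image).mean()))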

  20. Network interconnections: an architectural reference model

    NARCIS (Netherlands)

    Butscher, B.; Lenzini, L.; Morling, R.; Vissers, C.A.; Popescu-Zeletin, R.; van Sinderen, Marten J.; Heger, D.; Krueger, G.; Spaniol, O.; Zorn, W.

    1985-01-01

    One of the major problems in understanding the different approaches in interconnecting networks of different technologies is the lack of reference to a general model. The paper develops the rationales for a reference model of network interconnection and focuses on the architectural implications for

  1. A general evolving model for growing bipartite networks

    International Nuclear Information System (INIS)

    Tian, Lixin; He, Yinghuan; Liu, Haijun; Du, Ruijin

    2012-01-01

    In this Letter, we propose and study an inner evolving bipartite network model. Significantly, we prove that the degree distributions of the two different kinds of nodes both obey a power-law form with adjustable exponents. Furthermore, the joint degree distribution of any two nodes for the bipartite network model is calculated analytically by the mean-field method. The result shows that such bipartite networks are nearly uncorrelated networks, which is different from one-mode networks. Numerical simulations and empirical results are given to verify the theoretical results. -- Highlights: ► We propose a general evolving bipartite network model based on priority connection, reconnection and breaking edges. ► We prove that the degree distributions of the two different kinds of nodes both obey a power-law form with adjustable exponents. ► The joint degree distribution of any two nodes for the bipartite network model is calculated analytically by the mean-field method. ► The result shows that such bipartite networks are nearly uncorrelated networks, which is different from one-mode networks.

  2. Model of community emergence in weighted social networks

    Science.gov (United States)

    Kumpula, J. M.; Onnela, J.-P.; Saramäki, J.; Kertész, J.; Kaski, K.

    2009-04-01

    Over the years network theory has proven to be a rapidly expanding methodology for investigating various complex systems, and it has turned out to give quite unparalleled insight into their structure, function, and response through data analysis, modeling, and simulation. For social systems in particular, the network approach has empirically revealed a modular structure due to the interplay between the network topology and the link weights between network nodes or individuals. This inspired us to develop a simple network model that could capture some salient features of mesoscopic community and macroscopic topology formation during network evolution. Our model is based on two fundamental mechanisms of network sociology by which individuals find new friends, namely cyclic closure and focal closure, which are mimicked by local search-link-reinforcement and random global attachment mechanisms, respectively. In addition, we included in the model a node deletion mechanism that removes all of a node's links simultaneously, which corresponds to an individual departing from the network. Here we describe in detail the implementation of our model algorithm, which was found to be computationally efficient and to produce many empirically observed features of large-scale social networks. Thus this model opens a new perspective for studying such collective social phenomena as spreading, structure formation, and evolutionary processes.
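
    A compressed sketch of these mechanisms might look as follows (parameter values are placeholders, and the cyclic-closure step is simplified relative to the paper's weighted local search): cyclic closure links or reinforces pairs of neighbours, focal closure adds random global links, and node deletion removes all of a node's links at once.

        # Toy sketch of cyclic closure, focal closure and node deletion.
        import random
        import networkx as nx

        def ib_step(G, p_global=0.02, p_delete=0.005, w0=1.0, delta=0.5, rng=random):
            for v in list(G):
                nbrs = list(G[v])
                if len(nbrs) >= 2:                      # cyclic closure
                    a, b = rng.sample(nbrs, 2)
                    if G.has_edge(a, b):
                        G[a][b]["w"] += delta           # reinforce the triangle
                    else:
                        G.add_edge(a, b, w=w0)
                if rng.random() < p_global:             # focal closure
                    u = rng.choice(list(G))
                    if u != v and not G.has_edge(v, u):
                        G.add_edge(v, u, w=w0)
                if rng.random() < p_delete:             # node departure
                    G.remove_edges_from(list(G.edges(v)))

        G = nx.empty_graph(500)
        for _ in range(200):
            ib_step(G)
        print(G.number_of_edges(), nx.average_clustering(G))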

  3. Automatic Fabric Defect Detection with a Multi-Scale Convolutional Denoising Autoencoder Network Model.

    Science.gov (United States)

    Mei, Shuang; Wang, Yudan; Wen, Guojun

    2018-04-02

    Fabric defect detection is a necessary and essential step of quality control in the textile manufacturing industry. Traditional fabric inspections are usually performed by manual visual methods, which are low in efficiency and poor in precision for long-term industrial applications. In this paper, we propose an unsupervised learning-based automated approach to detect and localize fabric defects without any manual intervention. This approach is used to reconstruct image patches with a convolutional denoising autoencoder network at multiple Gaussian pyramid levels and to synthesize detection results from the corresponding resolution channels. The reconstruction residual of each image patch is used as the indicator for direct pixel-wise prediction. By segmenting and synthesizing the reconstruction residual map at each resolution level, the final inspection result can be generated. This newly developed method has several prominent advantages for fabric defect detection. First, it can be trained with only a small amount of defect-free samples. This is especially important for situations in which collecting large amounts of defective samples is difficult and impracticable. Second, owing to the multi-modal integration strategy, it is relatively more robust and accurate compared to general inspection methods (the results at each resolution level can be viewed as a modality). Third, according to our results, it can address multiple types of textile fabrics, from simple to more complex. Experimental results demonstrate that the proposed model is robust and yields good overall performance with high precision and acceptable recall rates.
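
    The residual-based detection step can be sketched schematically (an untrained stand-in model, not the paper's trained network or multi-scale pyramid pipeline): a small convolutional denoising autoencoder reconstructs patches, and the per-pixel reconstruction residual is thresholded into a defect mask.

        # Schematic of residual-based defect detection with a tiny
        # convolutional denoising autoencoder (untrained stand-in).
        import torch
        import torch.nn as nn

        class DenoisingAE(nn.Module):
            def __init__(self):
                super().__init__()
                self.enc = nn.Sequential(
                    nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU())
                self.dec = nn.Sequential(
                    nn.ConvTranspose2d(16, 8, 2, stride=2), nn.ReLU(),
                    nn.ConvTranspose2d(8, 1, 2, stride=2), nn.Sigmoid())

            def forward(self, x):
                return self.dec(self.enc(x))

        model = DenoisingAE()                   # training loop omitted here
        patches = torch.rand(4, 1, 32, 32)      # stand-in fabric patches
        noisy = (patches + 0.1 * torch.randn_like(patches)).clamp(0, 1)
        residual = (model(noisy) - patches).abs()   # reconstruction residual
        defect_mask = residual > 0.3            # pixel-wise prediction
        print(residual.shape, defect_mask.float().mean().item())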

  4. Automatic Fabric Defect Detection with a Multi-Scale Convolutional Denoising Autoencoder Network Model

    Directory of Open Access Journals (Sweden)

    Shuang Mei

    2018-04-01

    Full Text Available Fabric defect detection is a necessary and essential step of quality control in the textile manufacturing industry. Traditional fabric inspections are usually performed by manual visual methods, which are low in efficiency and poor in precision for long-term industrial applications. In this paper, we propose an unsupervised learning-based automated approach to detect and localize fabric defects without any manual intervention. This approach is used to reconstruct image patches with a convolutional denoising autoencoder network at multiple Gaussian pyramid levels and to synthesize detection results from the corresponding resolution channels. The reconstruction residual of each image patch is used as the indicator for direct pixel-wise prediction. By segmenting and synthesizing the reconstruction residual map at each resolution level, the final inspection result can be generated. This newly developed method has several prominent advantages for fabric defect detection. First, it can be trained with only a small amount of defect-free samples. This is especially important for situations in which collecting large amounts of defective samples is difficult and impracticable. Second, owing to the multi-modal integration strategy, it is relatively more robust and accurate compared to general inspection methods (the results at each resolution level can be viewed as a modality). Third, according to our results, it can address multiple types of textile fabrics, from simple to more complex. Experimental results demonstrate that the proposed model is robust and yields good overall performance with high precision and acceptable recall rates.

  5. Dynamic concision for three-dimensional reconstruction of human organ built with virtual reality modelling language (VRML)*

    Science.gov (United States)

    Yu, Zheng-yang; Zheng, Shu-sen; Chen, Lei-ting; He, Xiao-qian; Wang, Jian-jun

    2005-01-01

    This research studies the process of 3D reconstruction and dynamic concision based on 2D medical digital images using virtual reality modelling language (VRML) and JavaScript, with a focus on how to realize the dynamic concision of a 3D medical model with the script node and sensor node in VRML. The 3D reconstruction and concision of internal body organs can be built with such high quality that they are better than those obtained from traditional methods. With the function of dynamic concision, the VRML browser can offer better windows for man-computer interaction in a real-time environment than ever before. 3D reconstruction and dynamic concision with VRML can be used to meet the requirements of medical observation of 3D reconstruction and have a promising prospect in the field of medical imaging. PMID:15973760

  6. A quantum-implementable neural network model

    Science.gov (United States)

    Chen, Jialin; Wang, Lingli; Charbon, Edoardo

    2017-10-01

    A quantum-implementable neural network, namely the quantum probability neural network (QPNN) model, is proposed in this paper. QPNN can use quantum parallelism to trace all possible network states to improve the result. Due to its unique quantum nature, this model is robust to several types of quantum noise under certain conditions, and it can be efficiently implemented by the qubus quantum computer. Another advantage is that QPNN can be used as memory to retrieve the most relevant data and even to generate new data. The MATLAB experimental results on Iris data classification and MNIST handwriting recognition show that QPNN requires far fewer neuron resources than a classical feedforward neural network to obtain a good result. The proposed QPNN model indicates that quantum effects are useful for real-life classification tasks.

  7. Reconstructing consensus Bayesian network structures with application to learning molecular interaction networks

    NARCIS (Netherlands)

    Fröhlich, H.; Klau, G.W.

    2013-01-01

    Bayesian networks are an established computational approach for data-driven network inference. However, experimental data are limited in their availability and corrupted by noise. This leads to an unavoidable uncertainty about the correct network structure. Thus, sampling- or bootstrap-based strategies

  8. A deep learning-based reconstruction of cosmic ray-induced air showers

    Science.gov (United States)

    Erdmann, M.; Glombitza, J.; Walz, D.

    2018-01-01

    We describe a method of reconstructing air showers induced by cosmic rays using deep learning techniques. We simulate an observatory consisting of ground-based particle detectors with fixed locations on a regular grid. The detectors' responses to traversing shower particles are signal amplitudes as a function of time, which provide information on transverse and longitudinal shower properties. In order to take advantage of convolutional network techniques specialized in local pattern recognition, we convert all information to the image-like grid of the detectors. In this way, multiple features, such as arrival times of the first particles and optimized characterizations of time traces, are processed by the network. The reconstruction quality of the cosmic ray arrival direction turns out to be competitive with that of an analytic reconstruction algorithm. The reconstructed shower direction, energy and shower depth show the expected improvement in resolution for higher cosmic ray energy.
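
    The data-layout step described above, mapping per-detector quantities onto image channels, can be sketched as follows; the shapes and the two example features are assumptions for illustration, not the paper's exact choices.

        # Sketch: arrange per-detector features on the fixed detector grid
        # as image channels, ready for a convolutional network.
        import numpy as np

        n_events, grid, n_time = 16, 9, 80
        traces = np.random.rand(n_events, grid, grid, n_time)   # signal vs time
        arrival = traces.argmax(axis=-1) / n_time               # first-peak proxy
        total = traces.sum(axis=-1)                             # integrated signal

        images = np.stack([arrival, np.log1p(total)], axis=-1)  # channels-last
        print(images.shape)   # (16, 9, 9, 2): event, y, x, feature channels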

  9. Modeling acquaintance networks based on balance theory

    Directory of Open Access Journals (Sweden)

    Vukašinović Vida

    2014-09-01

    Full Text Available An acquaintance network is a social structure made up of a set of actors and the ties between them. These ties change dynamically as a consequence of incessant interactions between the actors. In this paper we introduce a social network model called the Interaction-Based (IB) model that involves well-known sociological principles. The connections between the actors and the strength of the connections are influenced by the continuous positive and negative interactions between the actors and, vice versa, future interactions are more likely to happen between actors that are connected with stronger ties. The model is also inspired by the social behavior of animal species, particularly that of ants in their colony. A model evaluation showed that the IB model turned out to be sparse. The model has a small diameter and an average path length that grows in proportion to the logarithm of the number of vertices. The clustering coefficient is relatively high, and its value stabilizes in larger networks. The degree distributions are slightly right-skewed. In the mature phase of the IB model, i.e., when the number of edges does not change significantly, most of the network properties do not change significantly either. The IB model was found to be the best of all the compared models in simulating the e-mail URV (University Rovira i Virgili of Tarragona) network, because the properties of the IB model more closely matched those of the e-mail URV network than those of the other models.

  10. Mathematical Modelling Plant Signalling Networks

    KAUST Repository

    Muraro, D.

    2013-01-01

    During the last two decades, molecular genetic studies and the completion of the sequencing of the Arabidopsis thaliana genome have increased knowledge of hormonal regulation in plants. These signal transduction pathways act in concert through gene regulatory and signalling networks whose main components have begun to be elucidated. Our understanding of the resulting cellular processes is hindered by the complex, and sometimes counter-intuitive, dynamics of the networks, which may be interconnected through feedback controls and cross-regulation. Mathematical modelling provides a valuable tool to investigate such dynamics and to perform in silico experiments that may not be easily carried out in a laboratory. In this article, we firstly review general methods for modelling gene and signalling networks and their application in plants. We then describe specific models of hormonal perception and cross-talk in plants. This mathematical analysis of sub-cellular molecular mechanisms paves the way for more comprehensive modelling studies of hormonal transport and signalling in a multi-scale setting. © EDP Sciences, 2013.

  11. Cool-season precipitation in the southwestern USA since AD 1000: comparison of linear and nonlinear techniques for reconstruction

    Science.gov (United States)

    Ni, Fenbiao; Cavazos, Tereza; Hughes, Malcolm K.; Comrie, Andrew C.; Funkhouser, Gary

    2002-11-01

    A 1000 year reconstruction of cool-season (November-April) precipitation was developed for each climate division in Arizona and New Mexico from a network of 19 tree-ring chronologies in the southwestern USA. Linear regression (LR) and artificial neural network (NN) models were used to identify the cool-season precipitation signal in tree rings. Using 1931-88 records, the stepwise LR model was cross-validated with a leave-one-out procedure and the NN was validated with a bootstrap technique. The final models were also independently validated using the 1896-1930 precipitation data. In most of the climate divisions, both techniques can successfully reconstruct dry and normal years, and the NN seems to capture large precipitation events and more variability better than the LR. In the 1000 year reconstructions the NN also produces more distinctive wet events and more variability, whereas the LR produces more distinctive dry events. The 1000 year reconstructed precipitation from the two models shows several sustained dry and wet periods comparable to the 1950s drought (e.g. 16th century mega drought) and to the post-1976 wet period (e.g. 1330s, 1610s). The impact of extreme periods on the environment may be stronger during sudden reversals from dry to wet, which were not uncommon throughout the millennium, such as the 1610s wet interval that followed the 16th century mega drought. The instrumental records suggest that strong dry to wet precipitation reversals in the past 1000 years might be linked to strong shifts from cold to warm El Niño-southern oscillation events and from a negative to positive Pacific decadal oscillation.
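
    The validation protocol can be sketched with scikit-learn on synthetic stand-in data (the original used a stepwise LR and a custom NN on tree-ring chronologies): leave-one-out predictions are generated for both model families and scored against the observed series.

        # Sketch: leave-one-out comparison of a linear model and a small
        # neural network on synthetic stand-in data.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import LeaveOneOut, cross_val_predict

        rng = np.random.default_rng(0)
        X = rng.standard_normal((58, 19))     # 58 years x 19 tree-ring series
        y = X[:, :4].sum(axis=1) + 0.5 * rng.standard_normal(58)

        for model in (LinearRegression(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                                   random_state=0)):
            pred = cross_val_predict(model, X, y, cv=LeaveOneOut())
            r = np.corrcoef(y, pred)[0, 1]
            print(type(model).__name__, f"LOO correlation = {r:.2f}")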

  12. Ripple-Spreading Network Model Optimization by Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Xiao-Bing Hu

    2013-01-01

    Full Text Available Small-world and scale-free properties are widely acknowledged in many real-world complex network systems, and many network models have been developed to capture these network properties. The ripple-spreading network model (RSNM) is a newly reported complex network model, which is inspired by the natural ripple-spreading phenomenon on a calm water surface. The RSNM exhibits good potential for describing both spatial and temporal features in the development of many real-world networks, where the influence of a few local events spreads out through nodes and then largely determines the final network topology. However, the relationships between the ripple-spreading related parameters (RSRPs) of the RSNM and small-world and scale-free topologies are not as obvious or straightforward as in many other network models. This paper attempts to apply a genetic algorithm (GA) to tune the values of the RSRPs, so that the RSNM may generate these two most important network topologies. The study demonstrates that, once the RSRPs are properly tuned by the GA, the RSNM is capable of generating both network topologies and therefore has great flexibility for studying many real-world complex network systems.
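
    Since the RSNM generator itself is not reproduced here, the GA-tuning loop below uses a Watts-Strogatz generator as a stand-in: the GA evolves model parameters so that the generated network matches target small-world statistics. All numbers are illustrative.

        # Generic sketch of GA parameter tuning against network targets.
        import random
        import networkx as nx

        def fitness(params, target_c=0.4, target_l=4.0):
            k = max(1, int(round(params[0])))
            p = min(max(params[1], 0.0), 1.0)
            G = nx.connected_watts_strogatz_graph(200, 2 * k, p, tries=100, seed=0)
            c = nx.average_clustering(G)
            l = nx.average_shortest_path_length(G)
            return -abs(c - target_c) - abs(l - target_l)

        rng = random.Random(0)
        pop = [[rng.randint(1, 6), rng.random()] for _ in range(20)]
        for _ in range(15):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:10]                    # truncation selection
            children = []
            for _ in range(10):                   # uniform crossover + mutation
                a, b = rng.sample(parents, 2)
                children.append([rng.choice([a[0], b[0]]) + rng.choice([-1, 0, 1]),
                                 rng.choice([a[1], b[1]]) + rng.gauss(0.0, 0.05)])
            pop = parents + children
        best = max(pop, key=fitness)
        print("best (k, p):", best, "fitness:", round(fitness(best), 3))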

  13. Constitutive modelling of composite biopolymer networks.

    Science.gov (United States)

    Fallqvist, B; Kroon, M

    2016-04-21

    The mechanical behaviour of biopolymer networks is to a large extent determined at a microstructural level, where the characteristics of individual filaments and the interactions between them determine the response at a macroscopic level. Phenomena such as viscoelasticity and strain-hardening followed by strain-softening are observed experimentally in these networks, often due to microstructural changes (such as filament sliding, rupture and cross-link debonding). Further, composite structures can also be formed, with vastly different mechanical properties compared to those of the individual networks. In this paper, we present a constitutive model, formulated in a continuum framework, aimed at capturing these effects. Special care is taken to formulate thermodynamically consistent evolution laws for dissipative effects. This model, incorporating possible anisotropic network properties, is based on a strain energy function split into an isochoric and a volumetric part. Generalisation to three dimensions is performed by numerical integration over the unit sphere. Model predictions indicate that the constitutive model is well able to predict the elastic and viscoelastic response of biological networks, and to an extent also composite structures. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. CT of the chest with model-based, fully iterative reconstruction: comparison with adaptive statistical iterative reconstruction.

    Science.gov (United States)

    Ichikawa, Yasutaka; Kitagawa, Kakuya; Nagasawa, Naoki; Murashima, Shuichi; Sakuma, Hajime

    2013-08-09

    The recently developed model-based iterative reconstruction (MBIR) enables significant reduction of image noise and artifacts, compared with adaptive statistical iterative reconstruction (ASIR) and filtered back projection (FBP). The purpose of this study was to evaluate lesion detectability of low-dose chest computed tomography (CT) with MBIR in comparison with ASIR and FBP. Chest CT was acquired with 64-slice CT (Discovery CT750HD) under standard-dose (5.7 ± 2.3 mSv) and low-dose (1.6 ± 0.8 mSv) conditions in 55 patients (aged 72 ± 7 years) who were suspected of lung disease on chest radiograms. Low-dose CT images were reconstructed with MBIR, ASIR 50% and FBP, and standard-dose CT images were reconstructed with FBP, using a reconstructed slice thickness of 0.625 mm. Two observers evaluated the image quality of abnormal lung and mediastinal structures on a 5-point scale (score 5 = excellent and score 1 = non-diagnostic). The objective image noise was also measured as the standard deviation of CT intensity in the descending aorta. The image quality score of enlarged mediastinal lymph nodes on low-dose MBIR CT (4.7 ± 0.5) was significantly improved in comparison with low-dose FBP and ASIR CT (3.0 ± 0.5, p = 0.004; 4.0 ± 0.5, p = 0.02, respectively), and was nearly identical to the score of the standard-dose FBP image (4.8 ± 0.4, p = 0.66). Concerning decreased lung attenuation (bulla, emphysema, or cyst), the image quality score on low-dose MBIR CT (4.9 ± 0.2) was slightly better compared to low-dose FBP and ASIR CT (4.5 ± 0.6, p = 0.01; 4.6 ± 0.5, p = 0.01, respectively). There were no significant differences in image quality scores for visualization of consolidation or mass, ground-glass attenuation, or reticular opacity among the low- and standard-dose CT series. Image noise with low-dose MBIR CT (11.6 ± 1.0 Hounsfield units (HU)) was significantly lower than with low-dose ASIR CT (21.1 ± 2.6 HU) and standard-dose FBP CT (16.6 ± 2.3 HU). In conclusion, with a radiation dose reduction of more than 70%, MBIR can provide

  15. Segmentation-Driven Tomographic Reconstruction

    DEFF Research Database (Denmark)

    Kongskov, Rasmus Dalgas

    The tomographic reconstruction problem is concerned with creating a model of the interior of an object from some measured data, typically projections of the object. After reconstructing an object it is often desired to segment it, either automatically or manually. For computed tomography (CT ... such that the segmentation subsequently can be carried out by use of a simple segmentation method, for instance just a thresholding method. We tested the advantages of going from a two-stage reconstruction method to a one-stage segmentation-driven reconstruction method for the phase contrast tomography reconstruction ...

  16. Bayesian latent feature modeling for modeling bipartite networks with overlapping groups

    DEFF Research Database (Denmark)

    Jørgensen, Philip H.; Mørup, Morten; Schmidt, Mikkel Nørgaard

    2016-01-01

    Bipartite networks are commonly modelled using latent class or latent feature models. Whereas the existing latent class models admit marginalization of parameters specifying the strength of interaction between groups, existing latent feature models do not admit analytical marginalization ... by the notion of community structure such that the edge density within groups is higher than between groups. Our model further assumes that entities can have different propensities of generating links in one of the modes. The proposed framework is contrasted on both synthetic and real bipartite networks ... feature representations in bipartite networks provides a new framework for accounting for structure in bipartite networks using binary latent feature representations, providing interpretable representations that well characterize structure as quantified by link prediction.

  17. Fusion of intraoperative force sensoring, surface reconstruction and biomechanical modeling

    Science.gov (United States)

    Röhl, S.; Bodenstedt, S.; Küderle, C.; Suwelack, S.; Kenngott, H.; Müller-Stich, B. P.; Dillmann, R.; Speidel, S.

    2012-02-01

    Minimally invasive surgery is medically complex and can benefit heavily from computer assistance. One way to help the surgeon is to integrate preoperative planning data into the surgical workflow. This information can be represented as a customized preoperative model of the surgical site. To use it intraoperatively, it has to be updated during the intervention due to the constantly changing environment. Hence, intraoperative sensor data has to be acquired and registered with the preoperative model. Haptic information, which could complement the visual sensor data, is still not established. In addition, biomechanical modeling of the surgical site can help in reflecting changes which cannot be captured by intraoperative sensors. We present a setting in which a force sensor is integrated into a laparoscopic instrument. In a test scenario using a silicone liver phantom, we register the measured forces with a reconstructed surface model from stereo endoscopic images and a finite element model. The endoscope, the instrument and the liver phantom are tracked with a Polaris optical tracking system. By fusing this information, we can transfer the deformation onto the finite element model. The purpose of this setting is to demonstrate the principles needed and the methods developed for intraoperative sensor data fusion. One emphasis lies on the calibration of the force sensor with the instrument and first experiments with soft tissue. We also present our solution and first results concerning the integration of the force sensor as well as the accuracy of the fusion of force measurements, surface reconstruction and biomechanical modeling.

  18. A Search Model with a Quasi-Network

    DEFF Research Database (Denmark)

    Ejarque, Joao Miguel

    This paper adds a quasi-network to a search model of the labor market. Fitting the model to an average unemployment rate and to other moments in the data implies that the presence of the network is not noticeable in the basic properties of the unemployment and job finding rates. However, the network

  19. From Ecology to Finance (and Back?): A Review on Entropy-Based Null Models for the Analysis of Bipartite Networks

    Science.gov (United States)

    Straka, Mika J.; Caldarelli, Guido; Squartini, Tiziano; Saracco, Fabio

    2018-04-01

    Bipartite networks provide an insightful representation of many systems, ranging from mutualistic networks of species interactions to investment networks in finance. The analyses of their topological structures have revealed the ubiquitous presence of properties which seem to characterize many—apparently different—systems. Nestedness, for example, has been observed in biological plant-pollinator as well as in country-product exportation networks. Due to the interdisciplinary character of complex networks, tools developed in one field, for example ecology, can greatly enrich other areas of research, such as economy and finance, and vice versa. With this in mind, we briefly review several entropy-based bipartite null models that have been recently proposed and discuss their application to real-world systems. The focus on these models is motivated by the fact that they show three very desirable features: analytical character, general applicability, and versatility. In this respect, entropy-based methods have been proven to perform satisfactorily both in providing benchmarks for testing evidence-based null hypotheses and in reconstructing unknown network configurations from partial information. Furthermore, entropy-based models have been successfully employed to analyze ecological as well as economic systems. As an example, the application of entropy-based null models has detected early-warning signals, both in economic and financial systems, of the 2007-2008 world crisis. Moreover, they have revealed a statistically significant export specialization phenomenon of country export baskets in international trade, a result that seems to reconcile Ricardo's hypothesis in classical economics with recent findings on the (empirical) diversification of industrial production at the national level. Finally, these null models have shown that the information contained in the nestedness is already accounted for by the degree sequence of the corresponding graphs.
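
    As a concrete example of the analytical character of these models, the snippet below fits a bipartite configuration model (BiCM)-style null model by matching expected row and column degrees with one Lagrange multiplier each; the naive fixed-point solver and the random test matrix are our own illustration, not an official package.

        # Sketch: entropy-based bipartite null model. Fit multipliers x, y so
        # that expected degrees match observed ones; link probabilities are
        # p_ia = x_i*y_a / (1 + x_i*y_a), usable for benchmarks or
        # reconstruction from partial information.
        import numpy as np

        rng = np.random.default_rng(0)
        M = (rng.random((8, 12)) < 0.3).astype(float)   # observed biadjacency
        k_row, k_col = M.sum(axis=1), M.sum(axis=0)     # degrees to reproduce

        x, y = np.ones(8), np.ones(12)
        for _ in range(2000):                           # fixed-point iteration
            P = np.outer(x, y)
            x = k_row / (y[None, :] / (1.0 + P)).sum(axis=1)
            y = k_col / (x[:, None] / (1.0 + np.outer(x, y))).sum(axis=0)

        P = np.outer(x, y) / (1.0 + np.outer(x, y))     # link probabilities
        print(np.abs(P.sum(axis=1) - k_row).max())      # ~0: degrees matched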

  20. Genome-Scale Reconstruction of the Human Astrocyte Metabolic Network

    OpenAIRE

    Martín-Jiménez, Cynthia A.; Salazar-Barreto, Diego; Barreto, George E.; González, Janneth

    2017-01-01

    Astrocytes are the most abundant cells of the central nervous system; they have a predominant role in maintaining brain metabolism. In this sense, abnormal metabolic states have been found in different neuropathological diseases. The metabolic states of astrocytes are difficult to model using current experimental approaches, given the high number of reactions and metabolites present. Thus, genome-scale metabolic networks derived from transcriptomic data can be used as a framework t...